Curated Resource

For the good of humanity, AI needs to know when it's incompetent | WIRED UK

my notes

AI doesn't inherently understand its own competency. If a human worker needs help, they can ask for it — but how do you build an understanding of personal limitations into code? ... forecast when it's going to reach a situation where it has no experience... it gently taps the human on the shoulder... to take control... while we can delegate decision making to machines, we can't delegate responsibility... Unless we have a strong notion of competency awareness in AI, it won't become mainstream....
What happens if incompetent AI is paired with an equally incompetent human? ... humans working with or accountable for AI need to fully understand the systems... Explainable AI is not enough, you have to have trusted AI ... have human decision making in the loop... systems and humans will work in tandem, helping each other with their blind spots...
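The idea of an AI that can "forecast when it's going to reach a situation where it has no experience" and then "gently tap the human on the shoulder" can be illustrated as a confidence-gated decision loop. This is a minimal sketch, not anything from the article itself: all names (`competency_aware_decide`, `stub_model`, the `threshold` value) are hypothetical, and real systems would use a calibrated confidence or out-of-distribution estimate rather than a raw score.

```python
from dataclasses import dataclass

# Illustrative sketch only: a system that delegates decisions to a model,
# but keeps a human in the loop whenever the model's self-reported
# confidence falls below a threshold. All names here are hypothetical.

@dataclass
class Decision:
    action: str
    made_by: str  # "model" or "human"

def competency_aware_decide(features, model_predict, human_decide, threshold=0.8):
    """Let the model decide only when its confidence clears the threshold;
    otherwise hand control to the human, passing along the model's suggestion."""
    action, confidence = model_predict(features)
    if confidence >= threshold:
        return Decision(action=action, made_by="model")
    # Low confidence: the model "taps the human on the shoulder".
    return Decision(action=human_decide(features, suggested=action), made_by="human")

# Toy usage: a stub model that is unsure about unfamiliar inputs.
def stub_model(features):
    seen_before = features.get("familiar", False)
    return ("proceed", 0.95 if seen_before else 0.4)

def stub_human(features, suggested):
    return "review-manually"

print(competency_aware_decide({"familiar": True}, stub_model, stub_human).made_by)   # model
print(competency_aware_decide({"familiar": False}, stub_model, stub_human).made_by)  # human
```

Note that the threshold only shifts *who decides*, not *who is accountable* — which is the article's point that we can delegate decision-making to machines but not responsibility.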

Read the Full Post

The above notes were curated from the full post at www.wired.co.uk/article/ai-decision-making.

Tagged: ai, inscrutable, liability
