My notes
artificial neural network... takes in a given type of data... and finds patterns in it... Nobody quite knows how they work. And that means no one can predict when they might fail... If there hadn’t been an interpretable model, Malioutov cautions, “you could accidentally kill people.”...
the European Union recently proposed to establish a “right to explanation,” which allows citizens to demand transparency for algorithmic decisions... but the proposal didn’t specify exactly what “transparency” means... some believe that such a definition might be impossible...
modern machine learning offers a choice: do we want to know what will happen with high accuracy, or why something will happen, at the expense of accuracy? The “why” helps us strategize, adapt, and know when our model is about to break. The “what” helps us act appropriately in the immediate future...
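A minimal sketch of that tradeoff (my own illustration, not from the post), assuming scikit-learn and its built-in breast-cancer dataset: a depth-2 decision tree whose entire logic is printable (the “why”) versus a random forest that usually scores higher but offers no such readout (the “what”).

```python
# Illustrative accuracy-vs-interpretability comparison; the dataset and
# model choices are my assumptions, not the article's.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Why": a depth-2 tree is fully human-readable but may give up accuracy.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print(export_text(tree))                     # the model's entire decision logic
print("tree accuracy:", tree.score(X_test, y_test))

# "What": a 500-tree forest is usually more accurate, with no such readout.
forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)
print("forest accuracy:", forest.score(X_test, y_test))
```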
computer scientists are importing techniques from biological research to peer inside networks, after the fashion of neuroscientists peering into brains...
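One concrete version of that “peering” (sketched here in PyTorch; the post names no specific tool) is a forward hook that records a hidden layer’s activations for a given input, roughly the software analogue of an electrode recording from a neuron.

```python
# Record a hidden layer's activations with a forward hook; the tiny network
# and random "stimulus" are stand-ins, since the article names no model.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
recorded = {}

def probe(module, inputs, output):
    # Save what the hidden layer "fires" for this stimulus.
    recorded["hidden"] = output.detach()

net[1].register_forward_hook(probe)          # attach the probe to the ReLU

stimulus = torch.randn(1, 10)                # an arbitrary input "stimulus"
net(stimulus)
print(recorded["hidden"].shape)              # which units fired, and how much
```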
What machines are picking up on are not facts about the world... They’re facts about the dataset... if you don’t know how it works, you don’t know how it will fail. And ... they fail spectacularly, disgracefully... They are as compact and simplified as they can be, exquisitely well suited to their environment, and ill-adapted to any other... If “understanding” in this field does come, he says, it could be of the sort found not in physics, but in evolutionary biology.
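A toy demonstration of “facts about the dataset” (my own synthetic construction, not from the post): a spurious feature tracks the label almost perfectly in the training data, the model leans on it, and accuracy collapses once that shortcut breaks at test time.

```python
# Synthetic spurious-correlation demo: the "shortcut" feature predicts the
# label in the training data only, so the fitted model fails under shift.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)
signal = y + rng.normal(0, 1.0, n)           # weak but genuine feature
shortcut = y + rng.normal(0, 0.05, n)        # near-perfect, dataset-specific

model = LogisticRegression().fit(np.column_stack([signal, shortcut]), y)
print("in-dataset accuracy:",
      model.score(np.column_stack([signal, shortcut]), y))

# At test time the shortcut no longer tracks the label (pure noise at 0.5).
y_new = rng.integers(0, 2, n)
signal_new = y_new + rng.normal(0, 1.0, n)
shortcut_new = rng.normal(0.5, 0.05, n)      # correlation broken
print("shifted accuracy:",
      model.score(np.column_stack([signal_new, shortcut_new]), y_new))
```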
The above notes were curated from the full post
nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable