On the one hand, we hear AI is the solution to all of our problems (NYT), robots are becoming better than us at practically everything (Fortune), and good things come to those who use smarter robots (The Verge) – so give us more of your data (TechCrunch), because that will help cure/solve/understand/explain (WIRED) whatever it is we’re talking about.
On the other hand, AI is destroying our jobs (MIT Tech Review), robots are taking our place (The Guardian), the algorithmic overlords are here (Salon), and their incarnation is GAFAM (a European acronym for Google, Amazon, Facebook, Apple and Microsoft, because in Europe it’s easiest to blame the Americans) (The Guardian).
We need to actually understand what’s going on under the AI hood. How do algorithms learn (Nature)? Can they be made more transparent and accountable (Rue 89, in French)? How do we avoid bias and discrimination (The Guardian) in AI-mediated decisions (The Guardian)? All these questions lead to the fundamental one: who is responsible when an AI makes a bad decision? Answering it will be essential going forward.
My prediction for 2017 is that we’ll find creative ways to hold people accountable for algorithmic decisions even when the algorithms themselves are not: some of those solutions will look curiously like data protection measures, forbidding the collection of certain kinds of personal data and requiring the subject’s consent for all processing of personal data.