Thursday, December 13, 2018

Royal Society's You and AI panel (11/Nov) - part 1

The Royal Society is hosting a series of discussion panels on You and AI (this link also has the recordings), intended as a reflection on different aspects of AI/Machine Learning and their impacts on society.

The most recent debate was hosted by the well-known science communicator Professor Brian Cox, with Dr. Vivienne Ming, Prof Peter Donnelly and Prof Suchi Saria as panelists. What follows are my comments and personal highlights of the night, very much aligned with the views of Dr. Ming, who totally "stole the show" and whose more cautious approach to AI is close to my own.

What is AI?

This was the first topic of the night, and it's a question that has been debated to exhaustion. The two main definitions presented were the general "autonomous systems that can make decisions under uncertainty" and the Machine Learning-specific "form of AI that learns from examples/by identifying patterns in data". The "under uncertainty" detail is curious -- my reading is that when there is certainty, a simpler deterministic rules-based system could be used instead.
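
To make that distinction concrete, here is a minimal sketch (my own toy example, not from the panel; all numbers are made up) contrasting a deterministic rule with a decision that weighs a probability estimate against the cost of being wrong:

def rule_based(temperature_c: float) -> str:
    # Certainty: a fixed threshold fully determines the outcome.
    return "alarm" if temperature_c > 80 else "ok"

def decide_under_uncertainty(p_failure: float,
                             cost_false_alarm: float = 1.0,
                             cost_missed_failure: float = 50.0) -> str:
    # Uncertainty: we only have a probability estimate (e.g. from a
    # learned model), so we act on expected cost, not on a hard fact.
    expected_cost_alarm = (1 - p_failure) * cost_false_alarm
    expected_cost_ignore = p_failure * cost_missed_failure
    return "alarm" if expected_cost_alarm < expected_cost_ignore else "ok"

print(rule_based(85.0))                # alarm
print(decide_under_uncertainty(0.05))  # alarm: a 5% risk times a high cost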

Something else I found curious was the historical note that "proving theorems automatically (one of the first uses of 'AI', in the 1960s) is simpler than determining if the people sitting here in the first row are smiling" -- the latter has only become feasible for computers in the last 5-6 years. For most of us, the second task is trivial and the first very complex, and this does seem to imply that current Artificial Intelligence is very different from Human Intelligence.[1]
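
As a rough illustration of how accessible the "smiling faces" task has become, here is a minimal sketch using OpenCV's bundled Haar cascades (a classical, pre-deep-learning detector -- modern convolutional networks are what made the task robust, but the off-the-shelf flavor is the point). The image file is hypothetical:

import cv2

# Pre-trained face and smile detectors, shipped with the opencv-python package.
face_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

img = cv2.imread("front_row.jpg")  # hypothetical photo of the audience
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_det.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
    face = gray[y:y + h, x:x + w]
    # Smile detection is noisy; a high minNeighbors cuts false positives.
    smiles = smile_det.detectMultiScale(face, scaleFactor=1.7, minNeighbors=20)
    print("smiling" if len(smiles) > 0 else "not smiling")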

Who benefits from AI?

Things started to get more interesting from here onwards. Dr. Ming said that new technology typically starts by "benefiting those who need it the least", and put that initial impact into perspective: "If you put an app [with AI] in an AppStore, you've solved NOTHING in the world". It's easy to contradict this (just think of Facebook), but what she meant was that you won't solve world hunger, or poverty, or get to Mars, with an app using AI. And in that, she's right.

The discussion then went briefly to the obvious possibilities in healthcare and education, where the potential benefits are huge, but quickly steered into more sensitive topics, namely the impact on jobs. There are several books and frequent studies about this, usually with consulting companies predicting that more jobs will be created (Accenture, McKinsey), and scholars on the opposite side predicting the need for deep societal adaptations to cope with the upcoming changes (such as a universal living wage). One thing is true: "Every CFO tries to reduce costs with wages [and if the opportunity is there to do it, s/he'll take it]".

(By this point in the discussion, it was clear there were two sides on stage: Dr. Ming arguing for moderation, and Prof Saria on the side of absolute optimism.)

Another interesting point was again made by Dr. Ming: "It's not impossible to create a robot to pick berries, or to drive a car, but it's much simpler to replace a financial services analyst" (or a doctor?). The key message here was that AI will probably have more impact on qualified middle-class jobs than on lower-skilled ones, simply because they are easier to replace -- and in doing so, it will obviously increase social inequality. The argument is simple and obvious. It's not just the menial/mindless tasks that will be automated, but also many jobs for which people today spend years studying in universities. And this does include software developers, by the way -- how much time is spent writing boilerplate code?

This section ended with something more speculative: "which jobs will be the last to be automated?" The suggested answer was those requiring creativity/creative problem solving (so not only artists, but also engineers, etc.). But this may be anthropocentric optimism: we see creativity as something uniquely human, so naturally we see it as our last bastion "against the machines" -- even though animals also have it, just to a lesser degree. Today we have AIs winning games like Go or Chess using unique strategies we had never considered, or creating works of art or music. So we shouldn't bet too much on this answer -- maybe jobs dealing with the unexpected would be a better one.

How can Society benefit from AI?

This seemed to be a simpler part of the panel, but it went straight into the topic of explainability, a complicated if not impossible task for the more complex approaches to AI, such as Deep Neural Networks. Prof Saria said she thought the need to explain should simply be replaced by trust. Prof Donnelly then raised an interesting dilemma: if you suspected you had a health problem, would you rather a) be seen by a doctor who gave you a diagnosis with 85% accuracy and explained it properly; or b) be diagnosed by an AI with 95% accuracy but no explanation? Most of the audience picked the second, but a better option would be c) have an AI augment the human diagnosis, increasing its accuracy while keeping the explainability.
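
Option c) is not just wishful thinking: if the doctor's and the AI's errors are at least partly independent, combining the two opinions can beat either alone. Here's my own back-of-the-envelope simulation (assuming independent, unit-variance Gaussian evidence -- nothing presented at the panel):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000
y = rng.integers(0, 2, n) * 2 - 1  # true condition: -1 or +1

# Signal strengths calibrated so each source alone hits the stated accuracies.
mu_doc = norm.ppf(0.85)  # doctor alone: ~85%
mu_ai = norm.ppf(0.95)   # AI alone: ~95%
s_doc = y * mu_doc + rng.standard_normal(n)
s_ai = y * mu_ai + rng.standard_normal(n)

# Optimal fusion of independent unit-variance evidence: weight by strength.
combined = mu_doc * s_doc + mu_ai * s_ai

for name, s in [("doctor", s_doc), ("AI", s_ai), ("combined", combined)]:
    print(f"{name}: {np.mean(np.sign(s) == y):.3f}")
# combined comes out around 97%, above either source alone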

It seemed clear that in many cases we'll need some form of explainability (such as when being considered for a job, getting a loan, or in healthcare -- and GDPR actually mandates it), while in others it's less relevant (like in face recognition or flying an airplane). My view is that if it's something that seriously impacts people's lives, it should be explainable. But there is a contradiction in this position: as the books "Strangers to Ourselves" by Timothy D. Wilson and "Thinking, Fast and Slow" by Daniel Kahneman explore, our brains actually make up explanations on the fly; we're less rational than we think. So there's a double standard at stake when demanding explanations of machines. It may all come down to familiarity with humans vs AI, or simply to knowing a bit about how it works under the hood and being uncomfortable with the risk of blindly delegating medical diagnoses, trial decisions or credit ratings to a complex number-crunching/statistical process.
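
For a flavor of what "explainable" can mean in practice, here is a minimal sketch with scikit-learn: a small decision tree over made-up loan data, whose full decision logic can be printed out and read. (The features, data and labels are entirely hypothetical; real credit scoring is far more involved.)

from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical loan applications: [income_k, debt_ratio, years_employed]
X = [[20, 0.9, 1], [80, 0.2, 10], [45, 0.5, 3], [60, 0.3, 7],
     [25, 0.8, 2], [90, 0.1, 15], [35, 0.7, 1], [70, 0.4, 8]]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # 0 = reject, 1 = approve

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Unlike a deep network, the whole decision procedure is human-readable:
print(export_text(tree, feature_names=["income_k", "debt_ratio", "years_employed"]))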

This post is already long, so I'll continue in a part 2. In the meantime, the video of the debate is available here.

[1] Gödel's incompleteness theorems, which prove that there are true statements that are impossible to prove, were not mentioned, but that doesn't change the argument.

