Monday, December 17, 2018

Royal Society's You and AI panel (11/Nov) - part 2

(part 1 of this post can be read here, and the video of the panel discussion is here)

How can Society Benefit from AI? (cont.)
After the discussion on explainability, the panel moved to another hot topic: the concentration of data in a very small number of companies. Some of the points discussed were:
  • "I'm worried about that, but I prefer to have centralization and improved health care, than not having anything"
  • "how do I exert the rights over my data, or on AIs targeting me? Do I as an individual have to go for a lawsuit under GDPR? "
  • "These companies have the Data, but they also have the Talent and the Infrastructure, and these three imply concentration, not many companies are in the same position."
  • "They operate under their own laws, even if they are present in our homes [reference to digital assistants like Alexa]"
All of these are worrying. Data breaches are constantly in the news (including at the big players, Google and Facebook), and there is a feeling that these companies are above the law. Having the data is one issue; having that data used to make decisions which are not transparent is even more worrying. See this case, for example. Access to our data, plus an unexplainable report, plus people's lives affected, and we have a dystopian sci-fi universe with us. I'm sure there are better uses of Data/AI than this.

How do we get there?
This last section was varied; some of the points were:
  • "How do we regulate maths?" -- a rethorical question obviously asked by Prof Saria when talking about reasons for AI/Machine Learning not to be regulated, quickly countered by Prof Donnelly who said that it was all about of the context for its use. [1]
  • Reference to the recent news about Amazon's AI-driven hiring failure, as an example of how software developed by a big company can still fail in ways that reveal obvious biases (a toy sketch of the mechanism follows this list). "You have an army of smart, young, male people building a hammer, but they have never seen a house" [2]
  • Addressing the ongoing conversation about Neural Networks/current Machine Learning approaches not being enough (see here, here and also here), "You can put a Deep Neural Network listening to the BBC 24 hours a day; it will never wake up or form an opinion"
  • "You don't need Artificial General Intelligence (AGI) to build a killer robot or to keep an autocracy in power"
You could say that the topic of this last section was addressed indirectly, via the discussion of the current concerns around the use of AI (really, Machine "Learning", or, more correctly, Data Science). The conversation finished back on AGI, and wrapped up with the Turing test (it actually made me go and read Turing's original article).

All in all, an interesting night and conversation, if somewhat more focused on the risks and failures of AI than on the possibilities.

My Take
It's probably obvious from my summary that my opinions are very much on the side of Dr. Vivienne Ming.

In my view, we do have to worry about bias, explainability, fairness, regulation, impact on jobs, etc. Working for one of the companies developing some of the technology mentioned in the panel, I really appreciate that it has a focus on Ethical AI, in some ways that are public and in others that I can't share.

I also doubt the current "Artificial Intelligence" techniques will bring us Artificial General Intelligence. Following daily what comes out on Twitter and in blog aggregators, reading about fantastic advances in things like generating realistic faces (impressive and useless at the same time) and about many articles describing tiny new advances, it does make me feel, quoting Judea Pearl, that "All the impressive achievements of deep learning amount to just curve fitting". It's probably a mistake to call it "Artificial Intelligence", or even "Machine Learning", when all it is is "Data Science": crunching abstract numbers until we have models that represent the data in an adequate way. [3]
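
To make Pearl's quip concrete, here is a minimal sketch (my own toy example in Python/NumPy, not anything shown at the panel): the entire "learning" step is choosing parameters that make a curve pass close to the data.

    # Minimal "curve fitting" sketch: "learning" = adjusting parameters
    # until the model represents the data adequately.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.shape)  # noisy data

    coeffs = np.polyfit(x, y, deg=5)  # fit a degree-5 polynomial to the points
    model = np.poly1d(coeffs)

    print("mean squared error:", np.mean((model(x) - y) ** 2))
    # The fit is good, but the model has no notion of "sine wave" --
    # let alone the ability to wake up or form an opinion.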

Admittedly, the panel's discussion was skewed to the side of caution and the problems of AI. There are fantastic possibilities in fields like Health care, Farming, Medical Research, Science, Energy, etc. Again with impacts that we have to worry about, but with clear potential benefits. I try not to blog or post about how "AI will unleash/unlock" whatever it is, but it seems obvious to me that several of these areas will see gains.

A final note to address the "big corporations concentration" part of the discussion -- in October I attended an Instant Expert: Artificial Intelligence panel organized by the New Scientist, where I heard a talk by David Runciman ("professor of politics, Leverhulme Centre for the Future of Intelligence, University of Cambridge. Author of How Democracy Ends"). One of his main points was that the powers these private companies don't have are a) the power of the law and b) the military. When Zuckerberg repeatedly refuses to come to the UK's House of Commons, the House could legislate to make that refusal illegal and, as happened in Brazil with WhatsApp, shut down the social network in the country. Whether it would or will, and what Facebook would do to fight it, is a different story. But the State has that power, and it could use it.

I know I'll return to these topics, but for now I close with Turing's words: "We can only see a short distance ahead, but we can see plenty there that needs to be done."


[1] This made me think of the Manhattan Project and the atomic bomb in WW2.
[2] But the attempts will continue. And one of these days you may have to give access to your social network profile in addition to your CV when applying for a job...
[3] But hey, what do I know?
