Thursday, May 24, 2018

Should algorithms be in the driver's seat?

A few days ago I wrote about going to the AI Talks meetup and disagreed with the view that humans should always have the last word. I just read “Should algorithms be in the driver's seat?”, an article on Daniel Kahneman’s opinion on the subject, and we share the same views regarding the biases affecting humans (in truth, my own view was probably heavily influenced by his book):

“The big advantage that algorithms have over humans is that they don’t have noise,” Kahneman said. “You present them the same problem twice, and you get the same output. That’s just not true of people.”
A human can still provide valuable inputs that a machine can’t — impressions, judgments — but they’re not particularly good at integrating information in a reliable and robust way, Kahneman said. The primary function of a person in a human-machine relationship should be to provide a fail-safe in an emergency situation. […]

“You can combine humans and machines, provided that machines have the last word,” Kahneman said. He added: “In general, if you allow people to override algorithms you lose validity, because they override too often and they override on the basis of their impressions, which are biased and inaccurate and noisy.”

I do think Kahneman here is talking about Narrow AI (*) and not the General AI about which concerns have been raised, but it’s good to read someone actually remind us: wait a minute, we’re not perfect, maybe let’s turn the priorities around.

The article above includes the full video of a conversation at the 2018 Digital Economy Conference, held this April in NYC, where he made these comments.

(*) Saying Artificial Narrow Intelligence, although correct, just sounds awkward to me.

Sunday, May 20, 2018

Truly intelligent machines

This interview with Judea Pearl in Quanta Magazine, apropos of his book “The Book of Why: The New Science of Cause and Effect”, has been making the rounds on social networks. Here are some thoughts:
“[…] as Pearl sees it, the field of AI got mired in probabilistic associations. These days, headlines tout the latest breakthroughs in machine learning and neural networks. We read about computers that can master ancient games and drive cars. Pearl is underwhelmed. As he sees it, the state of the art in artificial intelligence today is merely a souped-up version of what machines could already do a generation ago: find hidden regularities in a large set of data. “All the impressive achievements of deep learning amount to just curve fitting,” he said recently.”
It is true that the hype around ML/DS is at a peak. The stream of new posts on Twitter and LinkedIn about algorithms like Gradient Descent, or their applications and pitfalls, gives me new things to read every day. I downloaded a few free Machine Learning books a couple of weeks ago, and what is striking is how quickly they jump into maths and statistics, from linear algebra to calculus and derivatives. A few days ago my partner was doing something with Deep Neural Networks, and I suddenly saw her get paper and pencil and start solving derivatives. “What are you doing?” “Oh, I have to solve this to implement backpropagation”…
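For the curious, here is roughly what those pencil-and-paper derivatives turn into in code: a minimal sketch of gradient descent on a single sigmoid neuron, with the chain rule applied by hand. The training example, initial weights and learning rate are made up for illustration.

```python
import numpy as np

# Toy single-neuron model: y_hat = sigmoid(w*x + b), squared-error loss.
# The derivatives below are exactly the kind you end up solving on paper.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y = 2.0, 1.0   # one made-up training example
w, b = 0.5, 0.0   # made-up initial parameters
lr = 0.1          # learning rate

for step in range(100):
    # Forward pass
    z = w * x + b
    y_hat = sigmoid(z)
    loss = 0.5 * (y_hat - y) ** 2

    # Backward pass (chain rule):
    # dL/dy_hat = (y_hat - y); dy_hat/dz = sigmoid(z) * (1 - sigmoid(z))
    dz = (y_hat - y) * y_hat * (1.0 - y_hat)
    dw = dz * x   # dz/dw = x
    db = dz       # dz/db = 1

    # Gradient-descent update
    w -= lr * dw
    b -= lr * db

print(f"final loss: {loss:.4f}, w={w:.3f}, b={b:.3f}")
```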
I took one Artificial Intelligence class when I did my Computer Science degree, way back when. We studied things like A* (the search algorithm used to explore and rank alternatives in games like checkers – see the sketch below), Expert Systems to help diagnose health problems, SNePS for knowledge representation, and we talked briefly about Neural Networks. There was no complex maths or statistics. Now, I’m not saying “those were the good old days” – the area was going through one of its “AI Winters” and finding limited success – but today’s conversation is dominated by a relatively narrow set of Narrow AI techniques, heavily based on statistics and focused on training models with very high volumes of data. These are proving successful and having a major impact in many areas, with more to come. But is this it? Is this THE critical approach that will crack [General] AI, or will we bump into a “local maximum”?
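As a refresher, here is a minimal sketch of the A* idea: best-first search that ranks alternatives by the cost of the path so far plus a heuristic estimate of the remaining distance. The grid, heuristic and endpoints here are just illustrative, not from any of the course material.

```python
import heapq

# A* on a 2D grid (0 = free, 1 = wall), Manhattan distance as heuristic.
def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]   # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                    # cost of the shortest path found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None                         # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # -> 6
```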
“I can give you an example. All the machine-learning work that we see today is conducted in diagnostic mode — say, labelling objects as “cat” or “tiger.” They don’t care about intervention; they just want to recognize an object and to predict how it’s going to evolve in time.
I felt an apostate when I developed powerful tools for prediction and diagnosis knowing already that this is merely the tip of human intelligence. If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough […]”
This seems to be a powerful argument. But where Judea Pearl gets really challenging is with the following:
“As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial. […]
I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting. It turns out they can. But I’m asking about the future — what next? Can you have a robot scientist that would plan an experiment and find new answers to pending scientific questions?”
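To make the “curve fitting” jibe concrete, here it is in miniature, a sketch with made-up data: least-squares fitting finds the curve that best matches the associations in the data, and nothing in it says anything about intervention or cause.

```python
import numpy as np

# "Learning" here is just finding the curve that best fits (x, y) pairs.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, size=x.shape)  # noisy data

# Fit a degree-5 polynomial by least squares: pure association,
# no notion of "what happens if we intervene on x".
coeffs = np.polyfit(x, y, deg=5)
y_hat = np.polyval(coeffs, x)
print(f"mean squared error: {np.mean((y - y_hat) ** 2):.4f}")
```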
The first part of this will no doubt create aversion in most people – myself included – who are passionate about where tech is going, but it’s hard to shake off the feeling – especially with the second part – that he probably has a point. It may not matter much to the people and companies involved: the achievements are indeed impressive, and across areas like healthcare, retail, autonomous driving and finance – or anywhere with data – there is a wealth of data to process and new automation/personalization to do. The impact on our lives will continue. But what we’re doing at the moment is likely not enough.
And just one more quote from the interview:
“AI is currently split. First, there are those who are intoxicated by the success of machine learning and deep learning and neural nets. They don’t understand what I’m talking about. They want to continue to fit curves. But when you talk to people who have done any work in AI outside statistical learning, they get it immediately. I have read several papers written in the past two months about the limitations of machine learning.
[…] a serious soul-searching effort [is developing] that involves asking: Where are we going? What’s the next step?”
Maybe the momentum, the exponential speed of development in the field, will make this question moot. Something will come up – or many somethings – that will allow the tech companies to go on. Or maybe we have simply set the bar so high that, even though these data-based approaches can perform better than humans in more and more areas, we deem them insufficient because “self-driving cars using similar technology run into pedestrians and posts and we wonder whether they can ever be trustworthy.” (WSJ, paywall, article on the same topic)

Thursday, May 17, 2018

AITalks: How can our digital self benefit from AI?

Yesterday I attended the first edition of the AITalks meetup (also here), a panel focused on AI and its impact on our digital self. The panel included the CEO of Echobox, a company that specializes in using AI to manage the social media profiles of publications like the Financial Times, the Guardian or the New Scientist (impressive stuff); the CEO of Lobster, “the Uber of Independent Stock Photography” (they use image processing, for example, to auto-tag photos or target ads); and the Director of Social Media at Motaword (I couldn’t understand what the company does, but the representative was very strongly affiliated with Google).

It was an interesting night and conversation, but I have to admit I was disappointed with what felt at times like simplistic ways of addressing some of the topics.

On GDPR, Privacy and Personalization

One of the topics discussed was GDPR, and the feeling towards the regulation seemed to be one of cautious resistance – to be expected from businesses that rely on personal data for personalization or content targeting. The CEO of Lobster said they lost 500k photos from their catalogue (if I’m not mistaken) that had been made available by their authors on Instagram, because Facebook recently cut access suddenly and without warning; he also mentioned that an estimated 60% of companies – especially smaller ones – are not ready for the regulation, a number which doesn’t surprise me.

Later in the conversation, I heard things like “I don’t see a threat in [people] sharing any data. We have to trust legal mechanisms to go after bad guys if there’s abuse” and “I think that’s a price we have to accept [the loss of privacy] in return for better and personalized services”. This worldview makes me feel extremely uncomfortable, but the CEO of Echobox did add words of caution: “If you share something on a social network, you should assume it’s now public. And you can be impersonated, someone can go and create a profile with your name, your photo, and start adding your friends. You have to be aware of that.” More a topic about privacy than AI, but without data to process there is no personalization, right? And considering the recent scandals with Facebook and Cambridge Analytica, I have to admit I didn’t like this way of seeing things.

On jobs/societal impact

Another topic I felt was addressed lightly/dismissively was the impact of AI on jobs and society, especially in the first part of the conversation before Q&A was opened.

I’ve been reading a lot on the topic, and I’m still unable to understand how increased automation can lead to MORE jobs being created by the wider adoption of AI, as Gartner says will happen by 2020. If 1.2 million people in the transport industry in the US alone go unemployed because of Autonomous Driving, what “meaningful and creative” jobs are they going to find? Especially if AI is also starting to show up in the creative industries? I don’t see this nirvana of optimism coming to pass, and I do anticipate the need for profound Societal change, especially in the “developed world”. Not tomorrow, but in time. How can a world based on Consumption be sustainable if the ones who Consume don’t have the financial means to do so? Maybe we’ll just cope and find creative ways to create meaning for ourselves, but I’m not sure how we’ll manage the transition.
I do like the idea of “Human Augmentation” (i.e., Human + AI assisting), which was also mentioned, and which is easier to implement and coming sooner.

Anyway, on this topic, I think change will happen (almost) inevitably; I’m hoping we’ll find ways to make it work for people and to discuss it properly.

On healthcare or hiring

This was discussed more briefly, but the gist of it was that a lot of value was placed on “human intuition”, “gut feeling” or “Humans having the last word”, and I personally think that Healthcare and Hiring are precisely the areas with more space for objectivity, especially considering the sheer quantity of cognitive biases that affect us.

Dr. House was a terrific TV series, but he worked by trial and error – and I’d prefer not to be experimented on until suddenly (if I’m lucky) my doctor has an epiphany and finds the root cause of a problem. Or until s/he has a rough night and doesn’t remember an important side effect. As for hiring, it is known that the best predictor of future performance is past performance, and there are specific techniques that try to remove bias from interviews, such as always asking the same questions and rating candidates comparatively on them – but here I see AI having a harder time helping (apart from ranking CVs and LinkedIn profiles, maybe).

Daniel Kahneman’s “Thinking, Fast and Slow” is mandatory reading for those interested in these human cognition topics.

PS

One of the replies to a question on Airplane Autopilot Systems (which reportedly have control for 97-99% of the flight time in modern commercial airplanes) was that “they aren’t AI”, which made me think of Tesler’s Theorem: “AI is whatever hasn't been done yet.” I understood what he meant – I don’t think airplanes run neural networks – but it *IS* Artificial Intelligence. Unless, of course, flying an airplane doesn’t require some form of intelligence. ;)

Two years at Microsoft and a new challenge – a short reflection

Hard to believe it’s been two years since I moved to the UK and started an adventure at Microsoft!
I joined as a Cloud Solution Architect in May/2016, working with customers in Financial Services, then transitioned to Cloud Applications Solution Architect, a similar role but more focused on AppDev/PaaS services. I did the role well and got good results, and in January/2018 I moved [back] to a managerial role, starting as a Cloud Solution Architect Manager. I also changed industry vertical to focus on the more dynamic Retail, Travel & Transport – where I have done the most work across my career – and on the new technology area of Data & AI, which covers a broad spectrum: Big Data, Data Lakes, Data Warehousing, Machine Learning/Data Science, Cognitive Services and even IoT. It’s not an area where I historically have a lot of experience, but I’m learning fast, and I’m lucky to be working with an absolutely awesome team doing amazing work with some of the most important Azure customers.

One of the threads I’ve picked up recently in the Customer Success Unit is that of Artificial Intelligence, where I’m leading conversations and coordinating local investments. It’s a fascinating area where not only Microsoft but all the big players are investing massively – Facebook and Google are names that tend to show up a lot (I’m still trying to understand what IBM Watson actually *does*, and AWS seems slightly less visible).

With Microsoft being a platform company, not focused on delivering services to consumers like some of the others, we perhaps tend not to be as visible or to have the same “coolness” factor, but that also frees us to focus our own investments on ethical AI and doing the right thing for our customers, under the leadership of Satya Nadella. And in the products we do deliver to consumers, like Office or Windows, you can see some of the investments in AI reaching very large numbers of people all at once – the accessibility and translation features in PowerPoint or Skype being good examples. Together with Quantum Computing, it’s what interests me most in what the company is doing.

Anyway, since picking up the AI flag I’ve been posting interesting things I read on both Twitter and LinkedIn (not just Microsoft content…), so follow me if you have an interest in knowing what I’m chasing: Twitter / Linkedin.