Thursday, May 17, 2018

AITalks: How can our digital self benefit from AI?

Yesterday I attended the first edition of the AITalks meetup (also here), a panel focused on AI and its impact on our digital self. The panel included the CEO of Echobox, a company that specializes in using AI to manage the social media profiles of publications like the Financial Times, the Guardian, and the New Scientist (impressive stuff); the CEO of Lobster, "the Uber of Independent Stock Photography" (which uses, for example, image processing to auto-tag photos and target ads); and the Director of Social Media at Motaword (I couldn't work out what the company does, but the representative was strongly affiliated with Google).

It was an interesting night and conversation, but I have to admit I was disappointed by what felt at times like simplistic ways of addressing some of the topics.

On GDPR, Privacy and Personalization

One of the topics discussed was GDPR, and the feeling towards the regulation seemed to be one of cautious resistance – to be expected from businesses that rely on personal data for personalization or content targeting. The CEO of Lobster said they lost 500k photos from their catalogue (if I'm not mistaken) that had been made available by their authors on Instagram, because Facebook recently cut API access suddenly and without warning. He also cited an estimate that 60% of companies – especially smaller ones – are not ready for the regulation, a number which doesn't surprise me.

Later in the conversation, I heard things like “I don’t see a threat in [people] sharing any data. We have to trust legal mechanisms to go after bad guys if there’s abuse” and “I think that’s a price we have to accept [the loss of privacy] in return for better and personalized services”. This worldview makes me feel extremely uncomfortable, but the CEO of Echobox did add words of caution: “If you share something on a social network, you should assume it’s now public. And you can be impersonated, someone can go and create a profile with your name, your photo, and start adding your friends. You have to be aware of that.” More a topic about privacy than AI, but without data to process there is no personalization, right? And considering the recent scandals with Facebook and Cambridge Analytica, I have to admit I didn’t like this way of seeing things.

On jobs/societal impact

Another topic I felt was addressed lightly/dismissively was the impact of AI on jobs and society, especially in the first part of the conversation before Q&A was opened.

I've been reading a lot on the topic, and I'm still unable to understand how increased automation can lead to MORE jobs being created by the wider adoption of AI, as Gartner says will happen by 2020. If 1.2 million people in the transport industry in the US alone lose their jobs to Autonomous Driving, what "meaningful and creative" jobs are they going to find – especially if AI is also starting to show up in the creative industries? I don't see this nirvana of optimism coming to pass, and I do anticipate the need for profound societal change, especially in the "developed world". Not tomorrow, but in time. How can a world based on Consumption be sustainable if the ones who Consume don't have the financial means to do so? Maybe we'll just cope and find creative ways to create meaning for ourselves, but I'm not sure how we'll manage the transition.
I do like the idea of "Human Augmentation" (i.e., humans assisted by AI), which was also mentioned, and which is easier to implement and coming sooner.

Anyway, on this topic, I think change is coming and will happen (almost) inevitably; I'm hoping we'll find ways to make it work for people and to discuss it properly.

On healthcare or hiring

This was discussed more briefly, but the gist of it was that a lot of value was placed on "human intuition", "gut feeling", or "Humans having the last word". I personally think that both Healthcare and Hiring are precisely areas where there is more space for objectivity, especially considering the sheer quantity of cognitive biases that affect us.

Dr. House was a terrific TV series, but he worked by trial & error – and I'd prefer not to be experimented on until suddenly (if I'm lucky) my doctor has an epiphany and finds the root cause of a problem, or until s/he has a rough night and forgets an important side effect. And on hiring, it is known that the best predictor of future performance is past performance, and there are specific techniques that try to remove bias from interviews, such as always asking every candidate the same questions and rating them comparatively on each one – though here I see AI having a harder time helping (apart from ranking CVs and LinkedIn profiles, maybe).
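To make the structured-interview idea concrete, here is a minimal sketch in Python (the questions, names, and scores are all hypothetical, purely for illustration): every candidate answers the same fixed set of questions, gets a score per question on the same scale, and candidates are then ranked comparatively on identical criteria rather than on overall impressions.

```python
# Minimal sketch of a structured interview: every candidate is asked
# the same fixed questions, scored per question on a 1-5 scale, and
# ranked by average score across all questions.

QUESTIONS = [
    "Describe a project you led end to end.",
    "How do you handle disagreement with a teammate?",
    "Walk me through debugging a production incident.",
]

def average_score(scores):
    """Average a candidate's per-question scores (one score per question)."""
    if len(scores) != len(QUESTIONS):
        raise ValueError("every candidate must be scored on every question")
    return sum(scores) / len(scores)

def rank_candidates(scorecards):
    """Rank candidates (name -> list of scores), best average first."""
    return sorted(scorecards, key=lambda name: average_score(scorecards[name]),
                  reverse=True)

# Hypothetical scorecards: same questions, same scale, for everyone.
scorecards = {
    "Alice": [4, 5, 3],
    "Bob":   [3, 4, 4],
    "Carol": [5, 4, 5],
}

print(rank_candidates(scorecards))  # -> ['Carol', 'Alice', 'Bob']
```

The point is not the arithmetic but the constraint: by forcing identical questions and a shared scale, the comparison happens on the same axes for every candidate, which is exactly where the "gut feeling" approach tends to drift.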

Daniel Kahneman's "Thinking, Fast and Slow" is mandatory reading for those interested in these kinds of human cognitive topics.

PS

One of the replies to a question on Airplane Autopilot Systems (which reportedly are in control for 97–99% of flight time in modern commercial airplanes) was that "they aren't AI", which made me think of Tesler's Theorem: "AI is whatever hasn't been done yet." I understood what he meant – I don't think airplanes run neural networks – but it *IS* Artificial Intelligence. Unless, of course, flying an airplane doesn't require some form of Intelligence. Winking smile.
