My highlight for the day is a publication that came out a few weeks ago, the 2021 State of AI Report ( https://www.stateof.ai/ ), an attempt at compiling the most interesting developments in AI/ML of the year, mostly without too much technical detail. It still runs to 188 slides, so there's a lot to see. A couple of the topics that especially interest me are the uses in biology, the impact of transformer models not only in NLP but also in Computer Vision, and the discussion of the trend toward larger and larger language models. AI chips also make an appearance, though with less prominence than usual, possibly because of the global chip shortage.
On a somewhat related topic, one thing I found missing was any mention of either Causal Reasoning (think Judea Pearl's "Book of Why") or hybrid approaches to AI - e.g. the "Symbolic + Connectionist" approach (think Gary Marcus and Ernest Davis' "Rebooting AI", as a popular recent reference). Maybe we're just still too amazed by the achievements of modern large language models.
On that topic: on December 23rd, if the time and timezone work for you, it could be a good idea to register for the free "AI Debate #3" ( https://www.eventbrite.ca/e/ai-debate-3-live-streaming-tickets-133817911977 ), co-organized by MONTREAL.AI and Gary Marcus, with a great list of speakers.
As a very final note: when I was in Uni and briefly studied the topic of AI, Neural Networks were just a theoretical concept, and I had to hand-draw a SNePS network to represent a domain of knowledge, and code A* in LISP for a checkers game. How far things have come.