Monday, December 17, 2018

Royal Society's You and AI panel (11/Nov) - part 2

(part 1 of this post can be read here, and the video of the panel discussion is here)

How can Society benefit from AI? (cont.)
After the discussion on explainability, the panel moved on to another hot topic: the concentration of data in a very small number of companies. Some of the points raised were:
  • "I'm worried about that, but I prefer to have centralization and improved health care, than not having anything"
  • "how do I exert the rights over my data, or on AIs targeting me? Do I as an individual have to go for a lawsuit under GDPR? "
  • "These companies have the Data, but they also have the Talent and the Infrastructure, and these three imply concentration, not many companies are in the same position."
  • "They operate under their own laws, even if they are present in our homes [reference to digital assistants like Alexa]"
All of these are worrying. There are constant reports of data breaches (including at the big players, Google and Facebook), and there is the feeling that these companies are above the law. Having the data is just one issue; having it used to make decisions that are not transparent is even more worrying. See this case, for example. Access to our data, plus an unexplainable report, plus people's lives affected, and we have a dystopian sci-fi universe on our hands. I'm sure there are better uses of Data/AI than this.

How do we get there?
This last section was varied, with some points being:
  • "How do we regulate maths?" -- a rethorical question obviously asked by Prof Saria when talking about reasons for AI/Machine Learning not to be regulated, quickly countered by Prof Donnelly who said that it was all about of the context for its use. [1]
  • Reference to the recent news about Amazon's AI-driven hiring failure, as an example of how software developed by a big company can still fail in a way that reveals obvious biases. "You have an army of smart, young, male people building a hammer, but they have never seen a house" [2]
  • Addressing the ongoing conversation about Neural Networks/current Machine Learning approaches not being enough (see here, here and also here), "You can put a Deep Neural Network listening to the BBC 24 hours a day, it will never wake up, or form an opinion"
  • "You don't need Artificial General Intelligence (AGI) to build a killer robot or to keep an autocracy in power"
You could say that the topic for this last section was addressed indirectly, via the discussion of the current concerns around the use of AI (really, Machine "Learning", or, more correctly, Data Science). The conversation finished off back on AGI, and wrapped up with the Turing test (it actually made me go and read his original article).

All in all, an interesting night and conversation, if somewhat more focused on the risks and failures of AI than on the possibilities.

My Take
It's probably obvious from my summary that my opinions are very much on the side of Dr. Vivienne Ming.

In my view, we do have to worry about bias, explainability, fairness, regulation, impact on jobs, etc. Working for one of the companies developing some of the technology mentioned in the panel, I really appreciate that it has a focus on Ethical AI, in some ways that are public and others which I can't share.

I also doubt the current "Artificial Intelligence" techniques will bring us Artificial General Intelligence. Following daily what comes out on Twitter or in blog aggregators, reading about fantastic advances in things like generating realistic faces (impressive and useless at the same time) and many articles describing tiny new advances, it does make me feel, quoting Judea Pearl, that "All the impressive achievements of deep learning amount to just curve fitting". It's probably a mistake to call it "Artificial Intelligence", or even "Machine Learning", when all it is is "Data Science" and crunching abstract numbers until we have models that represent the data in an adequate way. [3]

Admittedly, the panel's discussion was skewed to the side of caution and the problems of AI. There are fantastic possibilities in fields like health care, farming, medical research, science and energy -- again with impacts that we have to worry about, but with clear potential benefits. I try not to blog or post about how "AI will unleash/unlock" whatever it is, but it seems obvious to me that several of these areas will see gains.

A final note to address the "big corporations concentration" part of the discussion -- in October I attended an Instant Expert: Artificial Intelligence panel organized by the New Scientist, where I heard a talk by David Runciman ("professor of politics, Leverhulme Centre for the Future of Intelligence, University of Cambridge. Author of How Democracy Ends"). One of his main points was that the powers these private companies don't have are a) the power of the law and b) the military. When Zuckerberg repeatedly refuses to appear before the UK's House of Commons, the House could legislate to make that refusal illegal and, as happened in Brazil with WhatsApp, shut down the social network in the country. Whether it would or will, and what Facebook would do to fight it, is a different story. But the State has that power, and it could use it.

I know I'll return to these topics, but for now I close with Turing's words: "We can only see a short distance ahead, but we can see plenty there that needs to be done."


[1] This made me think of the Manhattan Project and the atom bomb in WW2.
[2] But the attempts will continue. And one of these days you may have to give access to your social network profile in addition to your CV when applying for a job...
[3] But hey, what do I know?

Thursday, December 13, 2018

Royal Society's You and AI panel (11/Nov) - part 1

The Royal Society is hosting a series of discussion panels on You and AI (this link also has the recordings), intended as a reflection on different aspects of AI/Machine Learning and its impacts on society.

The most recent debate was hosted by the well-known science communicator Professor Brian Cox, with Dr. Vivienne Ming, Prof Peter Donnelly and Prof Suchi Saria as panelists. What follows are my comments and personal highlights of the night, very much aligned with the views of Dr. Ming, who totally "stole the show" and whose more careful approach towards AI is close to my own.

What is AI?

This was the first topic of the night, and it's a question that has been debated to exhaustion. The two main definitions presented were the general "autonomous systems that can make decisions under uncertainty" and the Machine Learning-specific "form of AI that learns using examples/by identifying patterns in data". The "under uncertainty" detail is curious -- my reading is that when there is certainty, a simpler deterministic rules-based system could be used instead.

Something else I found curious was the historical note that "proving theorems automatically (one of the first uses of "AI" in the 1960s) is simpler than determining if the people sitting here in the first row are smiling", something that has only become feasible for computers in the last 5-6 years. For most of us, the second task is trivial and the first very complex -- and this does seem to imply that current Artificial Intelligence is very different from Human Intelligence.[1]

Who benefits from AI?

Things started to get more interesting from here onwards. Dr. Ming said that typically new technology starts by "benefiting those who need it the least", playing down that initial impact: "If you put an app [with AI] in an AppStore, you've solved NOTHING in the world". It's easy to contradict this (just think of Facebook), but what she meant was that you won't solve world hunger, or poverty, or go to Mars, with an app using AI. And in that, she's right.

The discussion then went briefly to the obvious possibilities in healthcare and education, where the potential benefits are huge, but quickly steered into more sensitive topics, namely the impact on jobs. There are several books and frequent studies about this, usually with consulting companies predicting that more jobs will be created (Accenture, McKinsey), and scholars on the opposite side predicting the need for deep societal adaptations (such as a universal living wage) to cope with the upcoming changes. One thing is true: "Every CFO tries to reduce costs with wages [and if the opportunity is there to do it, s/he'll take it]".

(By this point in the discussion, it was clear there were two sides on stage: Dr. Ming on the side of moderation, and Prof Saria on the side of absolute optimism.)

Another interesting point was again made by Dr. Ming: "It's not impossible to create a robot to pick up berries, or to drive a car, but it's much simpler to replace a financial services analyst" (or a doctor?). The key message here was: AI will probably have more impact on middle-class qualified jobs than on lower-skilled jobs, simply because they are easier to replace. And in doing that, it will obviously increase social inequality. The argument is obvious and simple. It's not just the menial/mindless tasks that will be automated, but also many jobs for which people today spend years studying in universities. And this does include software developers, by the way -- how much time is spent writing boilerplate code?

This section ended with something more speculative: "which jobs will be the last to be automated?" The suggested answer was those requiring creativity/creative problem solving (so not only artists, but engineers, etc.). But this may be anthropocentric optimism: we see creativity as something uniquely human, so naturally we see it as our last bastion "against the machines" -- even if animals also have it, just to a lesser degree. Today we have AIs winning games like Go or Chess using unique strategies we had never considered, or creating works of art or music. So we shouldn't bet too much on this answer -- maybe jobs dealing with the unexpected would be a better answer.

How can Society benefit from AI?

This seemed to be a simpler part of the panel, but it went straight into the topic of explainability, a complicated if not impossible task for the more complex approaches to AI such as Deep Neural Networks. Prof Saria said she thought the need to explain should simply be replaced by trust. Prof Donnelly then raised an interesting dilemma: if you suspected you had a health problem, would you rather a) be seen by a doctor who gave you a diagnosis with 85% accuracy and explained all of it properly, or b) be diagnosed by an AI with 95% accuracy but with no explanation? Most of the audience picked the second, but a better option would be c) have an AI augment the human diagnosis, increasing its accuracy while keeping the explainability.

It seemed clear that in many cases we'll need some form of explainability (such as when being considered for a job, getting a loan, or in healthcare -- and GDPR actually mandates it), and in others it's less relevant (like face recognition or flying an airplane). My view is that if it's something that seriously impacts people's lives, it should be explainable. But there is a contradiction in this position: as the books "Strangers to Ourselves" by Timothy D. Wilson and "Thinking, Fast and Slow" by Daniel Kahneman explore, our brains actually make up explanations on the fly; we're less rational than we think. So there's a double standard at stake when demanding it of machines. It may all come down to familiarity with humans vs AI, or simply to knowing a bit of how it works under the hood and being uncomfortable with the risk of blindly delegating medical diagnoses, trial decisions or credit ratings to a complex number-crunching/statistical process.

This post is already long, so I'll continue in a part 2. In the meantime, the video of the debate is available here.

[1] Gödel's incompleteness theorems, showing that there are statements that are true but impossible to prove, were not mentioned, but that doesn't change the argument.


Thursday, December 6, 2018

Microsoft QuantumML tutorial

[this post was written for the 2018 Q# advent calendar]

Two colleagues recently went to Microsoft's internal Machine Learning and Data Science conference, and recommended a tutorial they did on-site, on Quantum Machine Learning. The materials for this lab have just been published on GitHub, and the following are my learnings while doing it.

The lab is implemented with Microsoft's Quantum SDK, using Q# for the Quantum code and C# for the driver code. The goal is to implement a Classifier, a discriminator able to classify a value into one of two classes -- just like a Logistic Regression classifier. Or in other words, a Quantum Perceptron. It is simple to implement if you know the core concepts of Quantum Computing, and most of it is very guided -- you just have to fill in the blanks with the right Quantum primitives, following the instructions in the comments.

Simplifying, what the algorithm does is as follows: imagine you have a pizza cut into two halves at a given angle 𝜃, where angles higher than that are of class One, and Zero otherwise:

[figure: the pizza cut at angle 𝜃, with class One above the cut and class Zero below]

You also have a labeled training data set, specifying for a large number of angle values what the class/label is:
[figure: sample of the labeled training data (angle in radians, class label)]

Note that the angles are represented in radians (i.e., the full circle goes from 0 to 2*PI) instead of 0-360º, but that is a detail -- it's equivalent to having normalized data between 0 and 1.

The goal of the lab is, first, to implement an algorithm that finds the separation angle 𝜃, and second, to classify new angle values as either class Zero or One. Finding the separation angle (which is equivalent to training a logistic regressor and finding its parameters) is achieved with a mix of C# and Q# code, while the classification itself is purely Q# quantum code.
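
To make the setup concrete, here is a rough classical Python sketch of how such a labeled training set could look (the function and variable names are mine, not the lab's, and the labeling follows the simplified description above):

import math
import random

def generate_training_data(separation_angle, n_points):
    # label One (1) for angles above the cut, Zero (0) otherwise
    data = []
    for _ in range(n_points):
        angle = random.uniform(0.0, 2.0 * math.pi)  # angles in radians
        label = 1 if angle > separation_angle else 0
        data.append((angle, label))
    return data

# example: pizza cut at 2.0 radians (~114.6 degrees), 200 labeled points
training_data = generate_training_data(2.0, 200)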

Some of the issues that confused me while doing the lab were the following:

Finding the angle - the underlying logic of Quantum Computing features a lot of linear algebra (matrices) and trigonometry (angles). One important thing to keep in mind is that what the algorithm needs to find is not the separation angle at which the pizza was cut, but an angle perpendicular (90º) to it. In the following code snippet, the separation angle is 2.0 (equivalent to 114.6º), but the angle that the algorithm needs to find is "correctAngle". By adding or subtracting PI/2 we get the perpendicular angle:

double separationAngle = 2.0; 
double correctAngle = separationAngle - Math.PI / 2;

The reason for this is related to the quantum transformations available to us. The slide deck in the GitHub repo talks about this, but it wasn't immediately clear to me when I read it.
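
My way of making sense of it: the classification circuit effectively rotates the data angle back by correctAngle and then measures, and the probability of getting Zero works out to cos²((x - correctAngle)/2), which is above 50% exactly when x is within 90º of correctAngle. With correctAngle perpendicular to the cut, that half-circle is precisely the Zero side of the pizza. A small classical Python check of that reasoning (my own analysis of the circuit, not code from the lab):

import math

def prob_zero(x, correct_angle):
    # probability of measuring Zero when classifying angle x (my reading of the circuit)
    return math.cos((x - correct_angle) / 2.0) ** 2

separation_angle = 2.0
correct_angle = separation_angle - math.pi / 2

for x in [0.5, 1.9, 2.1, 3.5]:
    predicted = 0 if prob_zero(x, correct_angle) > 0.5 else 1
    actual = 1 if x > separation_angle else 0
    print(x, predicted, actual)  # predicted matches actual for these test angles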

Success Rate - the Main() in the provided C# driver relies heavily on a Q# operation called QuantumClassifier_SuccessRate. What this does is measure how well the quantum algorithm classifies the data points in the training data for the angle it is called with, and it returns this as a percentage.
The C# code then calls it multiple times with different angles using ternary search (think binary search, but splitting the interval into three parts instead of two), until the error rate is low enough. This is the bulk of the training process, and when it ends it has found a good approximation of the "correctAngle" mentioned above (note that it's not looking for the separation angle 𝜃).
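
As an illustration of the driver's search strategy, here is a rough classical sketch of a ternary search over the angle in Python (success_rate stands in for the call into the Q# QuantumClassifier_SuccessRate operation; the search bounds and iteration count are placeholders of my own):

import math

def ternary_search_angle(success_rate, lo=-math.pi, hi=math.pi, iterations=60):
    # assumes success_rate(angle) has a single peak inside [lo, hi]
    for _ in range(iterations):
        third = (hi - lo) / 3.0
        m1, m2 = lo + third, hi - third
        if success_rate(m1) < success_rate(m2):
            lo = m1  # the peak cannot be in [lo, m1]
        else:
            hi = m2  # the peak cannot be in [m2, hi]
    return (lo + hi) / 2.0

Each probe means running the quantum circuit (in the simulator) over the whole training set, which is presumably why a search that converges in few steps is used instead of a brute-force sweep.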

The QuantumClassifier_SuccessRate calls two other operations:
  • EncodeDataInQubits - as an analogy to "classical" machine learning, this can be seen as a data preparation step, where you initialize the qubits and generate a sort of "quantum feature", dataQubit. The output label is also encoded in a qubit.
  • Validate - again as an analogy, this can be seen as applying the hypothesis and checking whether we're making the right prediction. It can be useful to think of the CNOT "truth table" to understand this code:

    CNOT(dataQubit, labelQubit);
    

     Remembering that the CNOT flips the second qubit if the first one is 1 ( |1> really), we have the following "truth table":

    CNOT(0,0) -> 0
    CNOT(1,0) -> 1
    CNOT(0,1) -> 1
    CNOT(1,1) -> 0


    Or, in other words: we get 0 as the output when dataQubit == labelQubit, i.e., when we have made the right prediction.
Finally, the logic of QuantumClassifier_SuccessRate itself includes two loops: one to iterate over all the values in the training dataset (0..N-1), and a second that repeats each Validate operation several times (1..nSamples, where nSamples = 201 by default) to account for the probabilistic nature of Quantum Computing when you do a measurement. Note that nSamples is possibly a misleading name -- it doesn't refer to data samples, but to repetitions of the measurement. You can reduce this number to 100, for example, and you'll see the quality of the predictions go down.
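
Putting the two loops together, this is roughly the structure of the success-rate estimation, sketched classically in Python (simulate_measurement uses the cos² probability from the earlier sketch instead of a real qubit; all names here are mine, not the lab's):

import math
import random

def simulate_measurement(x, correct_angle):
    # one simulated run of the classification circuit: returns 0 (Zero) or 1 (One)
    p_zero = math.cos((x - correct_angle) / 2.0) ** 2
    return 0 if random.random() < p_zero else 1

def success_rate(correct_angle, training_data, n_samples=201):
    correct = 0
    for angle, label in training_data:  # outer loop: all training points (0..N-1)
        # inner loop: repeat the measurement n_samples times and take a majority vote
        zero_count = sum(1 for _ in range(n_samples)
                         if simulate_measurement(angle, correct_angle) == 0)
        prediction = 0 if zero_count > n_samples // 2 else 1
        if prediction == label:
            correct += 1
    return 100.0 * correct / len(training_data)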
Doing predictions - as I mentioned in the beginning, a big part of the exercise is working on the training code, implementing the 3 operations mentioned above. For this second part, you have to implement:

a) C# code to generate a new dataset, the test dataset which you will ask your Quantum code to classify;

b) Q# code to actually do the classification. For both parts you can reuse/adapt code you have written before. This is what I ended up with:


operation QuantumClassifier (
    alpha : Double, 
    dataPoints : Double[]) : Int[] {
        
    let N = Length(dataPoints);
    mutable predictions = new Int[N];

    let nSamples = 201;

    // Allocate two qubits to be used in the classification
    using ((dataQubit, predictionQubit) = (Qubit(), Qubit())) {
            
        // Iterate over all points of the dataset
        for (i in 0 .. N - 1) {
                
            mutable zeroLabelCount = 0;
                
            // Classify i-th data point by running classification circuit nSamples times
            for (j in 1 .. nSamples) {

                // encode: reset both qubits, then rotate dataQubit by the data point's angle
                Reset(dataQubit);
                Reset(predictionQubit);

                Ry(dataPoints[i], dataQubit);

                // classify: rotate back by the trained angle alpha
                Ry(-alpha, dataQubit); 

                CNOT(dataQubit, predictionQubit);

                let result = M(predictionQubit) == Zero;
                if(result == true)
                { // count the number of zeros
                    set zeroLabelCount = zeroLabelCount + 1;
                }
            }

            if(zeroLabelCount > nSamples/2) {
                // if the majority of classifications are zero, we say it's a Zero
                set predictions[i] = 0;
            }
            else {
                set predictions[i] = 1;
            }
        }

        // Clean up both qubits before deallocating them using library operation Reset.
        Reset(dataQubit);
        Reset(predictionQubit);
    }
        
    return predictions;
}

This code then makes the following correct predictions on a new data set:
[screenshot: predicted classes for the new test data set]

And that's it, your Quantum Perceptron is finished :). Now we only need hardware to run it!

While working on the lab I looked around on the web and found two articles that seem related to the approach followed here: "Quantum Perceptron Network" [paywall] and "Simulating a perceptron on a quantum computer", both possibly worth a look.

If you already have some basic knowledge of both Machine Learning and Q#, you should expect to spend maybe 2 hours on it.

Monday, December 3, 2018

Databricks/Spark Hands-on lab

This past week I organized a 4-day technical training for internal teams called LearnAI. I had some time to spare in the agenda on one of the days, so I hacked together a simple challenge using Azure Databricks/Spark and Python.

Being a fan of Astronomy, I based this on a personal pet project of mine: exploring ESA's Gaia satellite data using Spark. A few months back I completed Coursera's Data-driven Astronomy, and felt this was an amazing way of exploring big-data challenges while also learning some Astronomy along the way.

Anyway, I have put the resulting notebooks and a Word document with setup instructions on GitHub. The exercises take the form of a notebook where you fill in the missing Python code, and I've also included a solutions notebook. The exercises are mostly introductory and should take at most 3 hours to complete. You'll also need access to an Azure subscription, as I'm using Azure Blob Storage to store the data.
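
For anyone setting it up, the storage access boils down to something like the PySpark lines below. This is a minimal sketch: the storage account, container, key and file path are placeholders rather than the ones used in the lab, I'm assuming the data is stored as CSV, and spark is the session a Databricks notebook provides:

# placeholders -- replace with your own storage account, container, key and path
storage_account = "mystorageaccount"
container = "gaia"
access_key = "<storage-account-key>"

spark.conf.set(
    "fs.azure.account.key.{0}.blob.core.windows.net".format(storage_account),
    access_key)

path = "wasbs://{0}@{1}.blob.core.windows.net/data/*.csv".format(container, storage_account)
gaia_df = spark.read.option("header", "true").option("inferSchema", "true").csv(path)
gaia_df.show(5)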

PS: it feels good to code once in a while ;-)

Saturday, December 1, 2018

Spark 2.4 - Avro vs Parquet

A few days ago Databricks posted this article announcing "Apache Avro as a Built-in Data Source in Apache Spark 2.4", and comparing the performance against the previous version of the Avro format support.

I was curious about the performance of this new support against regular Parquet, so I adapted the notebook Databricks provided to include a test against this format and spun up my Azure Databricks cluster (two Standard_DS3_v2 VMs with 14.0 GB of memory, 4 cores and 0.75 DBUs each) running Databricks Runtime 5.0.

The notebook with the Scala code is available here, and the results I got were:

Test               Avro    Parquet   Parquet / Avro
Read time (ms)     28061   18131     65%
Write time (ms)    41342   33904     82%
Disk space (MB)    2138    2037      95%
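
For reference, the comparison boils down to timing writes and reads of the same DataFrame in both formats, roughly like the PySpark sketch below. The actual notebook is in Scala and uses a more realistic dataset; this sketch uses a placeholder DataFrame and assumes the Avro data source is available on the cluster (it should be with Spark 2.4 / Databricks Runtime 5.0):

import time

def timed_ms(action):
    # run an action and return the elapsed wall-clock time in milliseconds
    start = time.time()
    action()
    return int((time.time() - start) * 1000)

df = spark.range(0, 10 * 1000 * 1000)  # placeholder DataFrame, not the notebook's dataset

avro_write    = timed_ms(lambda: df.write.mode("overwrite").format("avro").save("/tmp/bench_avro"))
parquet_write = timed_ms(lambda: df.write.mode("overwrite").format("parquet").save("/tmp/bench_parquet"))

avro_read     = timed_ms(lambda: spark.read.format("avro").load("/tmp/bench_avro").count())
parquet_read  = timed_ms(lambda: spark.read.format("parquet").load("/tmp/bench_parquet").count())

print(avro_write, parquet_write, avro_read, parquet_read)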

Parquet is the superior format in all three tests, although, considering that Avro is row-based and Parquet is columnar, I did expect Avro to be the more performant of the two given the nature of the tests. Anyway, my goal was just to satisfy my curiosity about the performance differences at a high level, not to compare the formats in general. For that, this deck is a couple of years old but still has interesting information.