On 21st November, along with Sharmila Parmanand, Sylvie Delacroix and Harish Natarajan, I debated with IBM’s Project Debater, a machine debating system built by a team at IBM led by Noam Slonim. The motion was “This House Believes AI Will Bring More Harm Than Good”; I was speaking for the motion. Note that in this style of debate you don’t necessarily believe in the motion you speak for: it’s an exercise in building arguments and making points.
Project Debater opened the debate for both sides (for and against the motion). In the past Project Debater has also constructed a response and a closing, but in this debate the idea was to show a capability for crowd-sourcing arguments. Project Debater’s opening was distilled, in each case, from over 500 arguments submitted by members of the public.
You can watch my speech from the debate in the video above. Below I include a transcript of my speech and some answers to questions put to me before the debate by Katia Moskvitch. The debate was featured on BBC Click, CNN, Fortune Magazine and Newsweek.
You might also be interested in this blog post on System Zero.
Transcript
So there are three types of lies. They are lies, damned lies and big data.
Now that’s a misquote from Mark Twain, or a quote popularised by Mark Twain, but what Mark Twain said that I think is almost more interesting is “Figures often beguile me, particularly when I have the arranging of them to myself”.
Why is that important? Because there’s actually a problem with the term “Artificial Intelligence”. It implies that we are creating something that is like us … intelligent like us. And that is not true. All we’re doing is “computers and statistics”. And those computers are feeding on our data.
So what does that mean for us? Well the first thing is to understand how they’re doing it, and the analogy I like to give is to talk about a tragic event that happened to a gentleman by the name of Jean Dominique Bauby. He was the Editor in Chief of the French Elle magazine when, in 1993, he suffered a brain stem stroke that almost completely debilitated him. He was left with only the ability to move his left eye. The remarkable thing about Bauby is we know his story because he wrote a book. And it took him, I think, 7 months of four hours a day to write this book. I think when we think about that we all think about what it would be like to be in that state, and the first important point is [that], relative to our friend Project Debater, we are all in that state. A locked-in state.
So you can estimate Bauby’s communication rate by seeing how quickly he wrote his book. And we can estimate that Bauby could communicate … in Shannon information theory terms, where one bit of information is equivalent to telling you the outcome of a coin toss … at a rate of six bits of information per minute.
Now speaking to you now … Shannon also estimated the entropy of the English language … and I can tell you that I’m roughly communicating to you at a rate of 2000 bits per minute. Our friend Project Debater is communicating, when it desires to do so, at a rate of around 60 billion bits per minute.
So to put that in context, 2000 bits per minute, 2000 is a reasonable monthly salary. 60 billion is the wealth of the richest person on the planet, being paid to you every month. So that’s enough to pay for the whole of the NHS, buy a few American Football teams and Manchester City and whatever you so desire across the year.
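[The figures above are back-of-envelope estimates. A minimal sketch of the arithmetic, under assumed figures for the book length, Shannon’s entropy estimate and a machine network link (none of these exact values appear in the speech):]

```python
# Rough communication-rate estimates in bits per minute.
# All inputs are assumptions for illustration, not measured values.

# Bauby: dictated his book one blink at a time.
# Assume ~130 pages, ~2000 characters/page, ~1 bit/character entropy,
# written over ~7 months at 4 hours/day.
book_bits = 130 * 2000 * 1.0
minutes_writing = 7 * 30 * 4 * 60
bauby_rate = book_bits / minutes_writing  # on the order of 5-6 bits/minute

# Human speech: assume ~160 words/minute, ~6 characters/word,
# ~2 bits/character of entropy.
speech_rate = 160 * 6 * 2  # roughly 2000 bits/minute

# Machine-to-machine: a 1 Gbit/s network link sustained for one minute.
machine_rate = 1e9 * 60  # 60 billion bits/minute

print(f"Bauby:   {bauby_rate:.0f} bits/min")
print(f"Speech:  {speech_rate} bits/min")
print(f"Machine: {machine_rate:.0e} bits/min")
```

The exact inputs matter less than the orders of magnitude: each step up is a factor of hundreds to millions, which is the point of the salary analogy.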
It’s a very, very different type of cognition that is reliant on things that are sort of beyond our understanding. So Sylvie gave Project Debater a name, she called her … it … Debbie. I’m going to try the name “Cybertronia the All-Knowing”, because in some sense that’s more representative of what we’re dealing with. And it’s a very powerful technology that is, in a very nice way, complementary to our own ability. But the challenge with it is that our method of computation [misspoke, should have said communication] … because we’re so [bandwidth] limited … is to use the powerful computation in our heads to think about the motivations of all around us and to anthropomorphise the things we communicate [with]. We do that to these machines, that’s why we like to give them names, but in reality they don’t have names. Now there’s a danger to this, because that’s the point in the quote “lies, damned lies and big data”. They are a new route to manipulating statistics as presented to us … facts as presented to us.
This danger was perceived in the past, in the 1890s, and the field of mathematical statistics was invented to deal with it, by people like Galton, Pearson and Fisher. They looked at the misrepresentation of statistics and they said: this is how you represent it so you can draw correct conclusions. Unfortunately, they also decided that an appropriate use of this new technology was eugenics, because they thought they had some single access to an underlying truth about how humanity should move forward. That is an enormous mistake that we continue to make where AI is present: that there is some objective, some truth that we can optimise ourselves towards; that we are anything more than a collective of information-processing individuals who are massively handicapped in our ability to communicate with each other, and who perform this extraordinary cognitive dance in order to do so.
We have created entities that are undermining that dance. So when it comes to our interaction with these entities we place them in roles where they can see who we are. They peer deeply into our soul because of the amount of data we trail on an everyday basis. So these machines know us better than we can know ourselves. How can that be? Because within you there is a model of who you are that is incorrect. You all think you are nicer people than you genuinely are.
The machine knows who we are. That is limiting our freedoms, it’s limiting our aspirations, because through knowing who we truly are the machine can undermine us. And it does that in an emergent relationship where … one way of thinking about it is dual process cognition theory, which says that there are two systems. One is the higher reasoning system, the slow system, which Kahneman would call System 2. And System 1 is the low-level, intuitive, fast thinking. Machines, when they provide us information on social media, are plugging into System 1, not System 2. They’re creating something addictive. Something I think of as the “high fructose corn syrup” of our cognitive diet. They are making us cognitively obese as we consume this material. As they do that they are creating an ecosystem that sits under us, something I call System Zero. An ecosystem of shared behaviour that draws us into a certain direction, that divides our society, that feeds our worst instincts, that separates us from our higher cognition, our ability to think above ourselves and work together as a community. That’s the challenge we’re facing with the next decade of AI. It is anti-diversity; it is bringing about a form of latent eugenics.
The rewards of this technology are many, and have been elucidated beautifully by Project Debater. And I’m sure they will be even more so by Harish.
But bear this in mind, over the next 10 years we are going to be on a perilous journey. A journey where we are going to the heart of who we are and undermining our very selves.
Now my main argument is akin to Pascal’s wager: we should believe that AI will do us harm, because that is the best way to prevent us from falling into those harms. If we state here that AI is some universal good that will take us on a journey of freedom and health, we’ll be in for a very sorry ending as a civilization.
Thank you.
Katia’s Interview Questions
What’s your view on the future of AI - especially in the context of this debate - do you think it’s important that AI may effectively augment humans in the near future?
The debate context is around 10 years. AI is part of a broader spectrum of technologies that is already augmenting humans. The important question is: on whose terms is that happening? On the computers’ or the humans’? At the moment the computer dominates, which impinges on humans’ ability to self-determine.
What do you think about the prospect of people working with robots? Do you find it sort of like an uncanny valley-like scenario? Or it doesn’t bother you?
People are already working with automated decision making systems, whether it’s their cars or their computers. So in reality it’s already happened.
Do you think we are still far away from real applications of a machine like Project Debater? Where do you think it’ll be used first?
As I say, it’s already happened, and to think not is a misunderstanding of the type of technology we have, a confusion between artificial intelligence and anthropomorphic intelligence.
What’s the purpose of this experiment on Thursday, in your view?
To encourage people to think about how they’re interacting with machines and how we can ensure our interactions with machines enhance us rather than control us.
The AI is, effectively, just software - but there’s this monolithic rectangular block on stage for people to look at - do you think it’s important to have it? Would it be too weird without it, just with a voice in the room?
It’s a two-edged sword. Embodying AI helps us to reason about it; we are not used to disembodied intelligences. But in reality it is a disembodied intelligence, and we should think about it differently from other humans. It’s a trade-off between increasing our comfort levels as humans and correctly representing the technology. I can see arguments both ways, and it is often a matter for judgment.
How will the robot pick the arguments from those submitted by the crowd? What’s the actual approach? (very briefly)
I think that’s a question for Noam really, but I can describe the challenge as I see it. The challenge is for the machine to make a consistent narrative from a number of points, but it should also try and represent the diversity of those points as best as it can.
It has three options: it can selectively choose arguments that conform to a consistent narrative (with the potential to introduce bias in the arguments); it can attempt to build a new, consistent narrative that sits above the arguments; or it can forgo a narrative and simply make separate points. The first and third approaches are easier, but the second is more interesting, because it is trying to assimilate knowledge and present it in a way that a human can more easily comprehend.