On 15th September 1830, the Liverpool and Manchester Railway opened. It was the first inter-city passenger railway. It is said that when rail travel was first proposed, fears were voiced that travel at high speeds would cause asphyxia or permanently damage passengers’ eyes. Heady speeds of 50 km/h were unheard of at the time.

The first death on the railways actually occurred the day they opened. The MP for Liverpool, William Huskisson, disembarked from a stationary train and, in an attempt to greet the Duke of Wellington, was run over by George Stephenson’s Rocket. His eyes and breathing had survived the initial journey just fine, but his legs were parted from his body under the Rocket’s wheels and he died later in a local hospital. Nowadays, we know not to disembark unless the train’s in a station.

The challenge of predicting the consequences of new technology is non-trivial, but the debate over what our AI future holds sometimes has the feeling that we are predicting death by asphyxia or blindness, each induced by rapid motion.

A recent resurgence in popular fears about AI was triggered by Elon Musk, an investor in DeepMind, who warned that humanity was ‘summoning the demon’.1 The previous post in this series parodied events in the field as a Homeric Greek tragi-comedy; in this post I’ll try to give some perspective on the evolving futures debate.

The ancient Greeks (featured in the last post in this series) weren’t just famous for Homeric legend. They were also famous for politics and philosophy. Our idea of philosophy comes from Plato’s description of Socrates, the man who talked all thoughts and ideas through. He didn’t believe in the written word; he believed in discussion and persuasion.

Understanding and validating the predictions of others requires that we first actually listen to what they are saying; the next challenge is unpicking what their words actually mean. The recent debate about AI is occurring across academic boundaries because its effects are expected to be so widespread.

Understanding people across different academic boundaries is particularly difficult. Firstly because, as academics, we are often more fond of talking than listening, but secondly because in each of our subfields there is a Babel-like propagation of terminology: each word means a different thing to each of us. Ethics, models, morals, noise, generalisation, mechanism, even probability. There are many barriers to communication, and Google Translate will not do the job for us. Overcoming them requires patience and understanding.

I do believe we are participating in some form of revolution, but it has been ongoing for 300 years, going back to Thomas Newcomen’s steam engine. As for the emergence of machine learning as a widespread technology, and its effect on AI, we won’t know its particular significance until the narratives are written, well into our future.

However, many of the ideas being debated are extremely important, both for researchers and for the public. But the reaction of the machine learning community, including myself, sometimes varies between incredulity and ridicule. To move forward we need to change that reaction to one of acceptance, and our response to one of education: of both ourselves and others. We need to take very seriously the way we are perceived and the impact we have on the society around us.

In response to the increased interest, NIPS 2015 hosted a Symposium on Machine Learning in Society. A few weeks later, in January, the NYU Centre for Data Science convened a meeting to bring experts together: not just the worriers, but developers of AI systems, economists, futurologists, philosophers, psychologists and roboticists.

Future of AI meeting at NYU

The remit of the “Future of AI” meeting was to be cross-disciplinary, covering the potential and pitfalls of our AI solutions. It was split into two parts: the first day was public and focussed more on the potential of AI. We heard leading academics talk, and speakers from companies such as Google, Facebook, Microsoft, Mobileye and Nvidia each told of how core AI is to their strategy. The roster of corporate speakers was also impressive; each company was typically represented by a CEO or a CTO.

Some speakers talked grandly of the ability of AI to solve challenges such as climate change; it was left to Thomas Dietterich, in one of the discussions, to point out that we aren’t going to solve climate change by ‘deep learning’. That’s a key point for me, and one that needs wider understanding. There’s nothing in the very recent developments in machine learning that significantly affects our ability to model and make predictions in complex systems where data is scarce. For climate change, the fact that we have access to only one Earth means that data is particularly scarce.

We heard Sebastian Seung speaking on the connectome, the mapping of the wiring of the human brain, and he also touched upon ‘uploading’: the idea that we could live virtually after our deaths by emulating ourselves on computers. I didn’t speak directly to Sebastian, but I think he has the right amount of skepticism about the viability of these ideas. However, I do worry that the credence he gives them is taken somewhat as a green light for rather fantastic (as in fantasy, not as in brilliant) ideas about where our immediate futures lie. I would agree that we are further down the technological road to ‘uploading’ than Tutankhamun’s high priests were when they liquefied his brain and sucked it out through his nose,2 but not much further.

Although the idea of uploading is a wonderful literary meme, such sensationalist perspectives can have the effect of overwhelming the more reasoned ideas we were hearing from attendees such as Bernhard Schoelkopf, who was emphasizing the importance of causal reasoning in intelligence.

Chatham House Rules

In the second part of the meeting (days 2 and 3) the sessions were closed. They were at the same time more interdisciplinary but also more focussed, and included economists, philosophers, psychologists, cognitive scientists and futurologists. The aim was to stimulate a wide-ranging debate about our AI future and its effect on society.

Chatham House Rules mean that we can summarise the essence of what was said but not attribute it, unless the speaker gave explicit permission. This rule is probably necessary: for an open debate on the pitfalls, people need to be confident that their words won’t be sensationalized, particularly where they may have been playing devil’s advocate.

Speakers included very well known figures such as Erik Brynjolfsson (co-author, with Andrew McAfee, of “The Second Machine Age”), Nick Bostrom (author of “Superintelligence”) and Max Tegmark (co-founder of the “Future of Life Institute”).

Hearing the economists’ discussion on productivity caused me to think a little more about the issue of jobs. I’m yet to be fully persuaded that anything dramatic is about to happen: my Dad’s job is different from mine, and his was different from my grandfather’s. My grandmother worked full time at a time when it was unusual for women to do so; my wife does so now that it is more common, but she still experiences a working environment whose structure was designed by men and which remains dominated by them.

In the 1970s, when they were predicting the future, Xerox PARC postulated the idea of the paper-free office. They developed the mouse and the graphical user interface. Nearly 50 years later, there’s a pad of paper by my side as I write, and I just signed a paper check for my breakfast. Not only that, but paper consumption drastically increased in the 1980s and 1990s as a result of the computer.3 Thinking about the future did help, though: after all, regardless of the paper on either side of me, the mouse and the GUI did come into being.

However, by analogy, it might be that in the near term artificial intelligence won’t eliminate the need for an intellectual labor force, but will actually (initially) increase it. Future prediction is fraught with uncertainty, and we should be very wary of precise narratives that make strong claims.

There was a wide range of talks; other speakers covered areas such as the law, value systems and cognitive robotics (in particular our desire to embody intelligence).

Perspective

Events like the “Future of AI” meeting are vital for extending the debate and achieving a shared set of objectives. However, it can be problematic when particular aspects aren’t covered.

One particular facet of debates on AI is that they assume some kind of mystical quality. In particular, we seem to think of ourselves and our intelligence as something very special, holy even: the ghost in the machine. But with our self-idolization comes an Icarian fear of what it might mean to emulate those characteristics that we perceive as uniquely human.

The recent advances in AI are all underpinned by machine learning. Machine learning is data driven and connects closely to statistics, so there is a smooth continuum from statistics at one end, through machine learning, to “artificial intelligence” at the far end. The recent developments in AI are entirely underpinned by data. But data was almost never mentioned at the meeting.

This seems to me part of a dangerous (and fairly pervasive) tendency to separate our discussion of AI from our discussion of data. Machine learning is not just the principal technology underpinning the recent success stories in AI; it is, along with statistics, the principal technology driving our agenda in data science. This is particularly interesting because the NYU meeting was hosted by the NYU Centre for Data Science, so it is not as if attendees were unaware of this (Yann Le Cun, one of the main conveners of the meeting, is most certainly extremely aware of it).

Perhaps another explanation for the apparent absence of data4 is that there was a necessary interest in framing the discussion through the wider public debate. Two particular ideas seem to capture the public imagination. In the near term people are worried about losing their jobs to “robots”; in the longer term they are worried about losing “humanity’s earthly inheritance” to the robots. They are worried about killer terminator robots.

With this in mind, the last session in New York was on “AI Safety”, quite an emotive term in itself (we don’t normally worry about safety for things that aren’t dangerous). There was a range of interesting talks, including ones from Stuart Russell and Michael Littman. We also heard from Nick Bostrom, whose book “Superintelligence” conceives of potential future responses to AI; more of that in the next episode in this series.

On returning to the UK, I went straight to the Royal Society in London to participate in evidence gathering for the Royal Society’s working group on Machine Learning. Our remit is focussed on the next five to ten years, and data is featuring very prominently in our discussions. The sessions themselves could not have been more contrasting: small-group evidence gathering, with particular questions targeted at invited experts who had responded to an earlier call for written evidence.

There will be a report arising from the Royal Society working group that I should not prejudge. However, it did feel rather extraordinary to go (in a single 24-hour period) from chatting (briefly) to Daniel Kahneman to interviewing Baroness O’Neill. I was also relieved at the extent to which the Royal Society working group acknowledges the importance of data in the debate.

Having said that, it’s important to emphasize that both approaches, the small focused meeting of the Royal Society and the larger, interdisciplinary debate of the Future of AI meeting, are a vital part of informing and understanding how our new technologies may affect us.

I’m looking forward to more in this space.

This is post 4 in a series. Previous post here and next post here.

  1. Elon Musk Summoning the Demon 

  2. A process known as excerebration, which is probably closer to us in the scale of required technological developments than the rather more delicate process of extracting the state of $10^{14}$ synapses via the nasal or any other passage. 

  3. Today that trend may have reversed due to the widespread use of tablets, but as recently as 2012 the Economist was reporting on the widespread use of paper.

  4. I mean the absence of data as a subject. Of course there was an absence of data to validate arguments too, but that’s somewhat inevitable when there’s an amount of future-prediction going on.