AI and ML Futures 1: Background
With the purchase of DeepMind by Google for a rumoured 400 million pounds, a chain of events was set off that began a debate in the glare of the media: just how far away was superintelligence, the AI singularity?
Elon Musk, an investor in DeepMind and a reader of Nick Bostrom’s book “Superintelligence”, became convinced that artificial intelligence was a threat to humanity. “We are summoning the demon,” he said.
To the researchers behind the most recent developments in AI, the idea that our faltering steps towards artificial perceptual systems (speech recognition, object recognition) were anywhere close to a demon seemed ridiculous. But the public perception remained, and others with little knowledge of the technologies underpinning the advances added their voices to the fray.
At the post-conference banquet for NIPS 2014, a few of us were talking about the potential effect of these discussions on our research. There were various ideas about what the issues were and what our priorities should be,1 but the one aspect we all agreed on was that embracing the debate was the right thing to do. If the progenitors of the approaches that instigated these fears did not engage, someone else would on their behalf.
As it happened, in planning the 2015 edition of the conference one discussion was how to handle larger issues of the day (such as deep learning) that were in danger of dominating the conference. To satisfy the appetite for such issues to be comprehensively covered, the 2015 committee resurrected the idea of “symposia”. Last run in Vancouver,2 these were longer, focussed events to develop topics of wider interest to NIPS attendees. When a symposium titled “Algorithms Among Us” was proposed by Adrian Weller, Michael Osborne and Murray Shanahan, it was selected as one of three to be presented in 2015. It became apparent that, with regard to the idea of engagement, the view of those around the table in 2014 was shared by many, and the meeting, which covered many societal issues of AI, from short-term effects to long-term fears, was warmly received.
However, at this point it was also apparent that more needed to be done to bridge the arguments of the different fields and exchange points of view, laying the foundations for a more constructive debate. Fortunately, in the meantime other initiatives had been planned. At the 2015 DALI meeting, Yann LeCun proposed a meeting on the “Future of AI”, to be held in January in New York. Before reviewing that meeting (held under the Chatham House Rule), though, I want to pause and give some perspectives on the future of the field.
To stop any one post getting too long, I’m splitting these thoughts into several posts. This one has given some of the background to how we came to be speaking about the societal impact of machine learning. Next I’ll try to give a short personal historical perspective on societal, which is mainly industrial, involvement in machine learning.
This is post 1 in a series. Next Post here.
Footnotes
1. My own thoughts focussed on the near and present issues of data and privacy, and have led to a series of articles in the Guardian and on this blog trying to raise awareness of this domain.
2. As coincidence would have it, I was the last ‘chair’ of the symposia, because I happened to be Workshops and Symposia chair in 2011. The symposia had a history, ironically centred around a workshop on deep learning held to celebrate Geoff’s 60th birthday. This informal event was formalised by the committee, but dropped when the conference moved to Lake Tahoe, I think due to pressure for more posters. There are slightly different pressures on the conference now, so it made a lot of sense to revisit symposia.