Future of AI 5: The Singularians
Update: through a tweet I found this paper from last year by Luciano Floridi on broadly the same idea. Clearly Luciano has been thinking about this for longer than I have. Nice writing, and an additional concept of an AItheist. He uses the term Singularitarian.
Religious belief offers the promising and delightful idea of an eternal life in paradise. But as a non-believer, one of the things you need to come to terms with is the finality of death.1 There is a happy side effect to this realisation: since death is final, you appreciate the importance of extracting value from your limited life. Technically this is known as a ‘finite horizon effect’.2
Greek gods have beards and live on a mountain
However, I’m beginning to suspect that not everyone of irreligious persuasion is quite as comfortable with this realisation. Recent breakthroughs in machine learning are triggering debates that, at first, I found difficult to interpret. But through some reading, some thought and some conversation with colleagues, I believe there’s a plausible latent cause for these discussions. Everything becomes much clearer if you posit that in many influential minds there’s an emerging strand of belief, of faith, that I’ve come to think of as “Singularism”.3
The Day of Judgment: Technological Singularity
The prophets of Singularism foretold that a time would come when the rate of technological progress accelerated so much that it outpaced humanity’s ability to keep up. To a Singularian, that time is nigh.
Singularism is to religion what Scientology is to science. Scientology is religion expressing itself as science, and Singularism is science expressing itself as religion.
Religion has historically had a seat at the table of power, either as host or as guest of honour; there has always been close interaction between power and priesthood. Those who promise the technological singularity also hold high office. They are embedded in leading companies and even in some of our top universities.
We should each be free to hold our own beliefs, and faith in the singularity should be seen as just such a belief. However, one way religion can impinge on society is when people become so enamoured of its teachings that they fail to fulfil their practical everyday duties, the functions of life or leadership. It is a problem when life becomes merely a prelude to the afterlife. Obsession with the technological singularity may be beginning to have that effect.
The Consequence
There are very real challenges facing the machine learning community, and very real opportunities, but much of the popular literature fails to address these challenges and instead presents narratives that appeal to each of the religious facets of Singularism.
Some of the appeal of this literature might originate in a kind of technopop philosophy: framing popular ideas in technical language so as to lend them a flavour of plausibility. While technopop sometimes merely feels inaccurate, some books take a more arcane turn, one more in keeping with religious texts. To the uninitiated the concepts may appear powerful and intimidating, but an alternative interpretation is rather more mundane: misunderstandings followed by overly convoluted consequences.
I’m a great believer in a society that sustains different individual beliefs and philosophies. A robust variety of approaches to life seems, perhaps, the only safety net we have against the uncertainties we experience each day and across our lives. But when a set of faith-based arguments begins to persuasively influence the wider debate in a (potentially) damaging manner, then I think this needs highlighting.
The damage caused by the Singularians arises from their apparent authority combined with their failure to address significant challenges. Much of the debate around the long-term future of AI omits many of the real challenges we face right now, and concerns itself instead with hand-wringing about the singularity as the day of judgment and the process by which the chosen will be anointed.
As a recent micro-exemplar of this, we can look at the UK artificial intelligence company DeepMind. Their launch into the public consciousness after their purchase by Google was accompanied by a wave of propheteering about the imminent doom of AI. But in practice, their first (minor) misstep came from their failure to be more open about agreements with the Royal Free Hospital concerning 1.6 million patients’ worth of data.4
Data sharing is among the great many challenges associated with the rising influence of algorithmic decision making on our lives, but there is little to no mention of data in much of the debate around AI, nor is there a realisation that individual privacy is the keystone of our protection against the rise of the algorithm. The present danger is not that we create an artificial general intelligence that dominates our lives, but that the rather more mundane algorithms we distribute today will increasingly restrict our existing freedoms.
But why worry about the surly bonds of earth when heaven’s saintly kiss is apparently within reach? The focus of much of the AI debate seems to be entirely on how to avoid the pitfalls of the perceived paradise. Singularism has its day of judgment, “the singularity”. For intelligence this is the day when we create intelligent machines which themselves have the ability to create new intelligent machines. The resulting cascade effect brings about runaway intelligence.
Like many religions, Singularism acknowledges two possible outcomes from the day of judgment, the singularity. Superintelligence focuses on ensuring that our heaven, our galactic inheritance (e.g. immortal life via ‘brain uploading’), is achieved rather than hell (death by killer terminator robot).
Intelligent Overlords
Also inherent in this new religion is the idea of artificially intelligent overlords, either beneficent or malevolent (the final nature of the new religion is not yet clear).
Although stimulating for societal debate, the challenges posed by the high church of Singularism are deflecting us from the humdrum of the actual decisions we need to make. It is through data that our intelligent systems are built, but it seems we are not communicating the actualities of today’s technology, and we are encouraging images of an intelligent future that are little removed from ideas proposed half a century ago.
This is post 5 in a series. Previous post here and next post here.
1. There may be many reasons why people choose to be non-religious. I once joined the UK’s British Humanist Association, but I found the preachy nature of the literature they sent in my welcome pack somewhat disturbing, so despite being broadly aligned with their perspective, I felt alienated by their pious narrative. This may make me a contrarian, and perhaps this post should be read in that light. ↩
2. The average American spends over an hour a day watching TV adverts. Of course, this is only an average; imagine what a really good American could manage. ↩
3. Some post-writing research (always the best kind) showed this movement is also called Transhumanism or Singularitarianism. But the first sounds a little like people who believe they are animals born in human bodies, and the second is difficult to say (imagine when someone wants to sever the links between Singularitarians and the state: they will have to say they are a singularitariodisestablishmentarian). ↩
4. DeepMind’s purchase by Google was the trigger for Elon Musk to warn that we were ‘summoning the demon’ of artificial intelligence. As an investor in DeepMind, Musk was persuaded by their technologies that human-level intelligence through artificial systems was near. However, as a non-expert in AI he was unable to appreciate the myriad ways in which we are far from that goal. Even the extremely impressive achievement of AlphaGo is a long way from human intelligence: it is domain-focussed and data-inefficient. In the meantime DeepMind have, perhaps at the behest of clinicians from the Royal Free Hospital, set up a health division. In a domain in which DeepMind had no track record, a group of clinicians thought it appropriate to engage them on 1.6 million patients’ worth of data, without consulting those patients. The scale of the deal only emerged through a piece in the New Scientist. Knowing colleagues at DeepMind, I would declare them not guilty on both counts: I do not believe they are close to resolving the mechanisms by which human intelligence operates (although they are a world-leading lab in this area, with some magnificent achievements to their name), and they are entering the health arena with the best of intentions. But in the real world, the best of intentions are often not enough. There are major issues with an individual’s control of their personal private data. And the fact that a company and a group of clinicians, through naivety, can be blind to these issues despite their regular presence in the media does not bode well. ↩