How to Prevent Creeping Artificial Intelligence Becoming Creepy
Today an op-ed I wrote was published in the Guardian Media & Tech Network.
The original is here: http://www.theguardian.com/media-network/2015/jun/12/artificial-intelligence-ai-human-computer.
And I’ve included the text here as well:
The traditional view of benevolent artificial intelligence (AI) is as a companion, someone who understands and enhances us. Think of the computer in Star Trek or JARVIS in Iron Man. They don’t just have vast knowledge and extraordinary computational abilities; they also exhibit emotional intelligence and remain subservient to their human masters. This is the utopian view of AI.
The alternative dystopia has been expressed by Tony Stark’s real-life counterpart, Elon Musk. What if such an intelligence isn’t satisfied with a back-seat role? What if it becomes self-aware and begins to use its knowledge and computational resources against us?
Compared with their movie counterparts, real-world computers tick the computational box and the knowledge box, but they seem to fall down when it comes to emotional intelligence.
One of the pleasures of knowing someone is understanding how they think, how they will react. At the moment, when we project this idea onto our relationship with computers, we are frustrated because the machine doesn’t know how to react to us. Machines are pedantic, requiring us to formalise ourselves to communicate with them. They can’t sense how we are feeling.
Compare this with humanity’s longstanding companion, the dog. In computational terms, and with regard to access to knowledge, they are limited, but in terms of emotional intelligence they are well ahead of their silicon rivals. They can even seem to understand when we need emotional support. At some level we understand our dogs and they understand us.
Successful AI is emerging slowly, almost without people noticing. A large proportion of our interactions with computers are already dictated by machine learning algorithms: the ranking of your posts on Facebook, the placement of adverts by Google, and recommendations from Amazon. When it is done well, though, we don’t notice it is happening. We can think of this phenomenon as creeping AI.
The computer tries to understand us by seeing how we’ve behaved in the past and predicting how we might behave in the future. It doesn’t try to be our friend, but nor does it push itself in our faces any more; it’s just there in the background, trying to second-guess us.
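To make that second-guessing concrete, here is a minimal sketch (my illustration, not part of the op-ed) of learning from past behaviour to predict future behaviour: a frequency-based ranker that orders candidate items by how often each has appeared in a hypothetical user’s history. The `recommend` function, the click history, and the advert categories are all invented for illustration; production systems like Facebook’s feed ranking use far richer models.

```python
# A toy sketch of "predicting future behaviour from past behaviour":
# rank candidate items by how often they appear in a user's history.
# All names and data here are hypothetical, for illustration only.
from collections import Counter

def recommend(history, candidates, k=3):
    """Return the top-k candidates, ranked by how often each item
    appeared in the user's past behaviour (unseen items rank last)."""
    counts = Counter(history)
    return sorted(candidates, key=lambda item: counts[item], reverse=True)[:k]

# Hypothetical click history and candidate adverts.
past_clicks = ["sport", "sport", "music", "tech", "sport", "music"]
adverts = ["music", "cooking", "sport", "tech", "travel"]

print(recommend(past_clicks, adverts))  # ['sport', 'music', 'tech']
```

Even this toy version has the creeping quality described above: the user never sees the counts, only the quietly reordered results.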
The danger is that this type of understanding becomes creepy AI. The computer begins to second-guess us, but we are unaware of where its knowledge comes from or what its motivations are.
Improving the understanding between human and computer is key to facilitating this relationship: a relationship of trust. When the transition between computer and human is done well, it can be difficult to see where the human stops and the machine learning starts. Learning systems are already very capable of reacting to our personalities and desires, but they do this in ways very different from how humans do. To prevent creeping AI becoming creepy AI we need to improve our own understanding of it as it improves its understanding of us.
Most popular media would have you believe that the future will be all about the computer, but bridging the gap will also require that we do a lot more to understand the human: what are our expectations of an ‘intelligent’ companion?
Just as there are dog-people and cat-people there will be different types of computer-people. We can expect considerable variation in the extent to which people expect their computers to be dependent on them for instruction, or the extent to which they trust their computers to act on their behalf.
The relationship between animal and owner is at its most effective when there is a mutual understanding. We get the most from our tools by being familiar with them. For the moment the computer is closer to a hammer than a horse, but as any DIY hobbyist knows, a mishandled hammer still leads to a throbbing thumb.
Neil Lawrence is a professor of machine learning at the University of Sheffield