Future of AI and Machine Learning
Abstract
Machine learning technologies have driven a revolution in artificial intelligence. Our machines are now able to identify objects in images, transcribe spoken language, translate between languages and even generate text of their own. In this talk we consider what this means for the future of AI and our own intelligence, with a particular focus on the opportunities and pitfalls for businesses.
Evolved Relationship with Information
The high bandwidth of computers has resulted in a close relationship between the computer and data. Large amounts of information can flow between the two. The degree to which the computer is mediating our relationship with data means that we should consider it an intermediary.
Originally our low bandwidth relationship with data was affected by two characteristics. Firstly, our tendency to over-interpret, driven by our need to extract as much knowledge from our low bandwidth information channel as possible. Secondly, by our improved understanding of the domain of mathematical statistics and how our cognitive biases can mislead us.
With this new set up there is a potential for assimilating far more information via the computer, but the computer can present this to us in various ways. If its motives are not aligned with ours then it can misrepresent the information. This needn't be nefarious; it can simply be a result of the computer pursuing a different objective from us. For example, if the computer is aiming to maximize our interaction time, that may be a different objective from ours, which may be to summarize information in a representative manner in the shortest possible length of time.
For example, for me, it was a common experience to pick up my telephone with the intention of checking when my next appointment was, but to soon find myself distracted by another application on the phone, and end up reading something on the internet. By the time I’d finished reading, I would often have forgotten the reason I picked up my phone in the first place.
There are great benefits to be had from the huge amount of information we can unlock from this evolved relationship between us and data. In biology, large scale data sharing has been driven by a revolution in genomic, transcriptomic and epigenomic measurement. The improved inferences that can be drawn through summarizing data by computer have fundamentally changed the nature of biological science. Now this phenomenon is also influencing us in our daily lives as data measured by happenstance is increasingly used to characterize us.
Better mediation of this flow actually requires a better understanding of human-computer interaction. This in turn involves understanding our own intelligence better, what its cognitive biases are and how these might mislead us.
For further thoughts see Guardian article on marketing in the internet era from 2015.
You can also check my blog post on System Zero.
Embodiment Factors
|                        | computer   | human (speaking)  | human (Bauby)     |
|------------------------|------------|-------------------|-------------------|
| bits/min               | billions   | 2,000             | 6                 |
| billion calculations/s | ~100       | a billion         | a billion         |
| embodiment             | 20 minutes | 5 billion years   | 15 trillion years |
Let me explain what I mean. Claude Shannon introduced a mathematical concept of information for the purposes of understanding telephone exchanges.
Information has many meanings, but mathematically, Shannon defined a bit of information to be the amount of information you get from tossing a coin.
If I toss a coin, and look at it, I know the answer. You don’t. But if I now tell you the answer I communicate to you 1 bit of information. Shannon defined this as the fundamental unit of information.
If I toss the coin twice, and tell you the result of both tosses, I give you two bits of information. Information is additive.
Shannon also estimated the average information associated with the English language. He estimated that the average information in any word is 12 bits, equivalent to twelve coin tosses.
Jean-Dominique Bauby, the author of The Diving Bell and the Butterfly, had locked-in syndrome and dictated his book one word at a time by blinking an eyelid, taking around two minutes per word. So every two minutes Bauby was able to communicate 12 bits, or six bits per minute.
This is the information transfer rate he was limited to, the rate at which he could communicate.
Compare this to me, talking now. The average speaker for TEDX speaks around 160 words per minute. That's 320 times faster than Bauby, or around 2,000 bits per minute. 2,000 coin tosses per minute.
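As a back-of-the-envelope check of those figures (a sketch only, using Shannon's rough estimate of 12 bits per word):

```python
# Rough information-rate arithmetic, assuming ~12 bits per word (Shannon's estimate).
bits_per_word = 12

speaker_bits_per_minute = 160 * bits_per_word   # a TEDX speaker at ~160 words/minute
bauby_bits_per_minute = bits_per_word / 2       # Bauby: roughly one word every two minutes

print(speaker_bits_per_minute)                          # 1920, i.e. around 2,000 bits/minute
print(speaker_bits_per_minute / bauby_bits_per_minute)  # 320.0, the ratio quoted above
```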
But, just think how much thought Bauby was putting into every sentence. Imagine how carefully chosen each of his words was. Because he was communication constrained he could put more thought into each of his words. Into thinking about his audience.
So, his intelligence became locked in. He thinks as fast as any of us, but can communicate slower. Like the tree falling in the woods with no one there to hear it, his intelligence is embedded inside him.
Two thousand coin tosses per minute sounds pretty impressive, but this talk is not just about us, it’s about our computers, and the type of intelligence we are creating within them.
So how does two thousand compare to our digital companions? When computers talk to each other, they do so with billions of coin tosses per minute.
Let’s imagine for a moment, that instead of talking about communication of information, we are actually talking about money. Bauby would have 6 dollars. I would have 2000 dollars, and my computer has billions of dollars.
The internet has interconnected computers and equipped them with extremely high transfer rates.
However, by our very best estimates, computers actually think slower than us.
How can that be? You might ask, computers calculate much faster than me. That’s true, but underlying your conscious thoughts there are a lot of calculations going on.
Each thought involves many thousands, millions or billions of calculations. How many exactly, we don’t know yet, because we don’t know how the brain turns calculations into thoughts.
Our best estimates suggest that to simulate your brain a computer would have to be as large as the UK Met Office machine here in Exeter. That's a 250 million pound machine, the fastest in the UK. It can do 16 billion billion calculations per second.
It simulates the weather across the world every day, that's how much power we think we need to simulate our brains.
So, in terms of our computational power we are extraordinary, but in terms of our ability to explain ourselves, just like Bauby, we are locked in.
For a typical computer, to communicate everything it computes in one second, it would only take it a couple of minutes. For us to do the same would take 15 billion years.
If intelligence is fundamentally about the processing and sharing of information, this gives us a fundamental constraint on human intelligence that dictates its nature.
I call this ratio between the time it takes to compute something, and the time it takes to say it, the embodiment factor (Lawrence 2017a), because it reflects how embodied our cognition is.
If it takes you two minutes to say the thing you have thought in a second, then you are a computer. If it takes you 15 billion years, then you are a human.
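As a minimal sketch of this ratio, using the rough orders of magnitude from the table above and assuming, purely for illustration, that each calculation yields one bit worth communicating:

```python
# Embodiment factor: how long a system would need to communicate what it
# computes in one second. All figures are rough orders of magnitude, not
# measurements.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def embodiment_factor(calcs_per_second, bits_per_minute):
    """Seconds needed to communicate one second's worth of computation."""
    return calcs_per_second / (bits_per_minute / 60)

# A typical computer: ~100 billion calculations/s, a few billion bits/minute.
computer = embodiment_factor(1e11, 5e9)
print(f"computer: ~{computer / 60:.0f} minutes")        # on the order of 20 minutes

# A human: ~16 billion billion calculations/s (the Met Office proxy above),
# speaking at ~2,000 bits/minute.
human = embodiment_factor(1.6e19, 2000)
print(f"human: ~{human / SECONDS_PER_YEAR:.1e} years")  # ~1.5e10, i.e. around 15 billion years
```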
What is Machine Learning?
What is machine learning? At its most basic level machine learning is a combination of
$$\text{data} + \text{model} \stackrel{\text{compute}}{\rightarrow} \text{prediction}$$
where data is our observations. They can be actively or passively acquired (meta-data). The model contains our assumptions, based on previous experience. That experience can be other data, it can come from transfer learning, or it can merely be our beliefs about the regularities of the universe. In humans our models include our inductive biases. The prediction is an action to be taken or a categorization or a quality score. The reason that machine learning has become a mainstay of artificial intelligence is the importance of predictions in artificial intelligence. The data and the model are combined through computation.
In practice we normally perform machine learning using two functions. To combine data with a model we typically make use of:
- a prediction function: a function which is used to make the predictions. It includes our beliefs about the regularities of the universe, our assumptions about how the world works, e.g. smoothness, spatial similarities, temporal similarities.
- an objective function: a function which defines the cost of misprediction. Typically it includes knowledge about the world's generating processes (probabilistic objectives) or the costs we pay for mispredictions (empirical risk minimization).
The combination of data and model through the prediction function and the objective function leads to a learning algorithm. The class of prediction functions and objective functions we can make use of is restricted by the algorithms they lead to. If the prediction function or the objective function are too complex, then it can be difficult to find an appropriate learning algorithm. Much of the academic field of machine learning is the quest for new learning algorithms that allow us to bring different types of models and data together.
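As a minimal sketch of how these pieces fit together (an illustrative toy, not any particular production system): a linear prediction function, a squared-error objective, and gradient descent as the learning algorithm that combines them with data through compute.

```python
import numpy as np

# data: observations of inputs X and outputs y (synthetic here, for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X[:, 0] + 0.5 + 0.1 * rng.standard_normal(100)

def prediction_function(X, w, b):
    """Our model assumptions: predictions are a linear map of the inputs."""
    return X @ w + b

def objective_function(y_pred, y):
    """Cost of misprediction: squared error (empirical risk minimization)."""
    return np.mean((y_pred - y) ** 2)

# learning algorithm: combine data and model through compute (gradient descent).
w, b = np.zeros(X.shape[1]), 0.0
learning_rate = 0.1
for _ in range(500):
    error = prediction_function(X, w, b) - y
    w -= learning_rate * 2 * X.T @ error / len(y)
    b -= learning_rate * 2 * error.mean()

print(objective_function(prediction_function(X, w, b), y))  # small residual error
print(w, b)                                                 # close to the generating values (2.0, 0.5)
```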
A useful reference for state of the art in machine learning is the UK Royal Society Report, Machine Learning: Power and Promise of Computers that Learn by Example.
You can also check my blog post on What is Machine Learning?
Artificial Intelligence and Data Science
Machine learning technologies have been the driver of two related, but distinct disciplines. The first is data science. Data science is an emerging field that arises from the fact that we now collect so much data by happenstance, rather than by experimental design. Classical statistics is the science of drawing conclusions from data, and to do so statistical experiments are carefully designed. In the modern era we collect so much data that there’s a desire to draw inferences directly from the data.
As well as machine learning, the field of data science draws from statistics, cloud computing, data storage (e.g. streaming data), visualization and data mining.
In contrast, artificial intelligence technologies typically focus on emulating some form of human behaviour, such as understanding an image, or some speech, or translating text from one form to another. The recent advances in artificial intelligence have come from machine learning providing the automation. But in contrast to data science, in artificial intelligence the data is normally collected with the specific task in mind. In this sense it has strong relations to classical statistics.
Classically artificial intelligence worried more about logic and planning and focussed less on data driven decision making. Modern machine learning owes more to the field of Cybernetics (Wiener 1948) than artificial intelligence. Related fields include robotics, speech recognition, language understanding and computer vision.
There are strong overlaps between the fields; the wide availability of data by happenstance makes it easier to collect data for designing AI systems. These relations are coming through wide availability of sensing technologies that are interconnected by cellular networks, WiFi and the internet. This phenomenon is sometimes known as the Internet of Things, but this feels like a dangerous misnomer. We must never forget that we are interconnecting people, not things.
Deep Learning
DeepFace
The DeepFace architecture (Taigman et al. 2014) consists of layers that deal with translation and rotational invariances. These layers are followed by three locally-connected layers and two fully-connected layers. Color illustrates feature maps produced at each layer. The neural network includes more than 120 million parameters, where more than 95% come from the local and fully connected layers.
Deep Learning as Pinball
Sometimes deep learning models are described as being like the brain, or too complex to understand, but one analogy I find useful for getting the gist of these models is to think of them as being similar to early pinball machines.
In a deep neural network, we input a number (or numbers), whereas in pinball, we input a ball.
Think of the location of the ball on the left-right axis as a single number. Our simple pinball machine can only take one number at a time. As the ball falls through the machine, each layer of pins can be thought of as a different layer of ‘neurons’. Each layer acts to move the ball from left to right.
In a pinball machine, when the ball gets to the bottom it might fall into a hole defining a score, in a neural network, that is equivalent to the decision: a classification of the input object.
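To make the analogy concrete, here is a toy sketch (illustrative only, the layer sizes and weights are arbitrary, not a real architecture): a single number enters at the top, each layer of 'pins' nudges it, and where it ends up at the bottom determines the classification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each layer of 'pins' is a set of weights and offsets that deflects the ball
# (a number, or a vector of numbers) as it falls through the machine.
layers = [(rng.standard_normal((1, 4)), rng.standard_normal(4)),
          (rng.standard_normal((4, 4)), rng.standard_normal(4)),
          (rng.standard_normal((4, 1)), rng.standard_normal(1))]

def drop_ball(x, layers):
    """Pass the input through each layer of pins in turn."""
    h = np.array([x])
    for W, b in layers:
        h = np.tanh(h @ W + b)   # each layer moves the ball left or right
    return h[0]

# The hole the ball falls into at the bottom is the decision.
position = drop_ball(0.3, layers)
print("class:", 1 if position > 0 else 0)
```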
An image has more than one number associated with it, so it is like playing pinball in a hyper-space.
Learning involves moving all the pins to be in the correct position, so that the ball ends up in the right place when it’s fallen through the machine. But moving all these pins in hyperspace can be difficult.
In a hyper-space you have to put a lot of data through the machine to explore the positions of all the pins. Even when you feed many millions of data points through the machine, there are likely to be regions in the hyper-space where no ball has passed. When future test data passes through the machine in a new route unusual things can happen.
Adversarial examples exploit this high dimensional space. If you have access to the pinball machine, you can use gradient methods to find a position for the ball in the hyper space where the image looks like one thing, but will be classified as another.
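A hedged sketch of the idea, using a gradient-sign perturbation on a toy logistic classifier (the weights and inputs below are made up for illustration, and this is not an attack on any real deployed network): if you can compute gradients of the classifier's output with respect to its input, you can nudge the input in the direction that most changes the decision while barely changing the input itself.

```python
import numpy as np

# A fixed, toy logistic classifier: p(class 1 | x) = sigmoid(w.x + b).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(x @ w + b)

x = np.array([0.2, 0.4, -0.1])
print(predict(x))                 # below 0.5: classified as class 0

# Gradient of the class-1 probability with respect to the input is p(1-p)w;
# nudge each coordinate of the input a little in the sign of that gradient.
p = predict(x)
gradient = p * (1 - p) * w
epsilon = 0.2
x_adversarial = x + epsilon * np.sign(gradient)

print(predict(x_adversarial))     # now above 0.5: the decision flips
```

In high-dimensional spaces, such as images, the per-coordinate nudge needed to flip the decision can be tiny, which is why adversarial images can look unchanged to a human.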
Probabilistic methods explore more of the space by considering a range of possible paths for the ball through the machine. This helps to make them more data efficient and gives some robustness to adversarial examples.
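One simple way to picture this (a sketch of the general idea rather than a specific method from the talk): instead of a single fixed setting of the pins, keep a distribution over them, sample several settings, and look at the spread of the resulting predictions.

```python
import numpy as np

rng = np.random.default_rng(2)

def drop_ball(x, W1, b1, W2, b2):
    """One possible path through the machine for one setting of the pins."""
    h = np.tanh(np.array([x]) @ W1 + b1)
    return float(np.tanh(h @ W2 + b2)[0])

# A distribution over the pins: weights drawn around a mean with some uncertainty.
mean_W1, mean_b1 = rng.standard_normal((1, 4)), rng.standard_normal(4)
mean_W2, mean_b2 = rng.standard_normal((4, 1)), rng.standard_normal(1)

samples = []
for _ in range(100):
    W1 = mean_W1 + 0.1 * rng.standard_normal((1, 4))
    b1 = mean_b1 + 0.1 * rng.standard_normal(4)
    W2 = mean_W2 + 0.1 * rng.standard_normal((4, 1))
    b2 = mean_b2 + 0.1 * rng.standard_normal(1)
    samples.append(drop_ball(0.3, W1, b1, W2, b2))

# The spread across sampled paths indicates how confident the machine should be.
print(np.mean(samples), np.std(samples))
```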
Five AI Myths
- AI will be the first wave of automation that adapts to us.
- Hearsay data has significant value.
- The big tech companies have the landscape all 'sewn up'.
- ‘data scientists’ will come and solve all problems.
- The normal rules of business don’t apply to AI.
The five AI myths are patterns of thinking I've identified among those that are trying to take advantage of artificial intelligence to deploy new products.
The first myth is the "promise of AI" myth, that AI will be the first wave of machine-based automation that adapts to us, rather than us having to adapt to it. The reality is that we haven't yet created machines that are as flexible as humans; the automation we are producing is still 'fragile', in that if it encounters unforeseen circumstances it breaks. This is a consequence of the way we design systems: flexible natural systems such as ourselves are evolved, not designed, and evolved systems have a first priority to 'not fail'. What we think of as 'common sense' in the human is in reality a set of heuristics that prevent us doing stupid things in the name of achieving a goal. Our AI systems don't exhibit this.
The second myth is that there is value in 'hearsay data'. Hearsay data is data that people have heard exists, so they say it exists (see this blog post on Data Readiness Levels (Lawrence 2017b)). The failure to understand the importance of data quality is resulting in unrealistic projects staffed by people with the wrong skill sets. Most decision makers don't understand that implementation of a machine learning model is relatively trivial, but preparation of the data set and the data ecosystem around the model is extremely difficult. So the wrong investments are made: millions spent on recruiting machine learning PhDs and minimal spend on data infrastructure and systems for data auditing.
The third myth is that platform effects mean that there is no room for new innovation in AI. Three factors will prevent the platforms dominating in the long term. Firstly, they are not agile; their approach to AI software development is grounded in the world that pre-dates wide availability of machine learning systems. Agile software development needs revisiting in the context of machine learning, and this form of cultural change is difficult to achieve. In practice, these companies are larger than they need to be to deliver their services because they can afford to employ people to handle operational load. Newer agile companies will develop a better culture around data and machine learning, one that requires less operational overhead. This doesn't just reduce costs; it increases speed of movement and develops better understanding of the underlying systems. See this blog post on The 3Ds of Machine Learning Systems Design.
The fourth myth is that soon there will be a wave of data scientists who will be equipped to enter companies and resolve their problems around data and AI. The mistake here is to assume that these graduates will have been trained in the necessary skills to do data science within a company. In fact, universities will naturally focus on algorithms and models, because that material is teachable. Much more important is systems thinking and data wrangling: processes to ensure that data is actionable. The weakness of senior decision makers, including CIOs and CTOs, is that they don't have a deep understanding of the technology, so they don't perform critical thinking in this space. It becomes a problem that can be deferred and solved by a mythical set of experts who will soon be arriving. In reality, domain expertise is key to successful data science, and bridging existing expertise with an understanding of the new landscape is far more important to delivering successful systems.
The final myth is perhaps the most pernicious. It involves a suspension of normal business skepticism where AI is concerned. It may arise from the use of the term AI, which implies intelligence. If these systems were really 'intelligent', in the way a human is intelligent, and if they also had the skills of a computer, that really would be revolutionary. However, that's not what's happening, and it won't happen in the foreseeable future (i.e. on timelines that matter to business). In reality this is an evolution of existing technology, and it has the usual challenges of adoption that existing technologies have. The challenge for decision makers is how to assimilate the implications of this new technology within their business skill set. This means familiarisation, and doing courses etc. isn't good enough: senior business leaders need to take time out working closely with the technology in their own environments to better calibrate their understanding of its strengths and weaknesses.
Recommendation: How to bust these myths? The primary recommendation for businesses is that they start pilot projects which have executive sponsorship. These involve the CTO (or CIO or CDO), a technical 'data science' expert and a target domain area. Instead of feigning knowledge in this space, each admits their own ignorance of the other domains, and starts from scratch. Egos are left out of the room. The small pilot project is explored and delivered with the real challenges being noted. In this way each of the individuals will learn quickly where the pitfalls are.
One challenge is that for most projects the data will be too poor to even conduct the pilot. However, one data source that is consistently of good quality across companies is financial data. So a further suggestion is to initially focus on collaborating with the CFO and focus on financial forecasting (or similar). If the CFO, CTO and CEO gain a better understanding of the capabilities of data science, then the company can begin to turn around its systems and culture focussing on the important changes, making calibrated changes, rather than reacting to the sensationalism around AI.
Importantly, don't go all in. Major companies are susceptible to what I call the '(Grand Old) Duke of York Effect': march 10,000 people to the top of the hill and march them down again. Command and control is not the right response to an uncertain environment. Don't think like regular troops, think like special forces: small groups with specialist expertise that are empowered to think independently and explore the landscape. Find which hill to march up before committing significant resource.
- Data grooming: Do the basics right.
- Incentivisation for data quality
- People training: At all levels
- Case studies
Thanks!
For more information on these subjects and more you might want to check the following resources.
- twitter: @lawrennd
- podcast: The Talking Machines
- newspaper: Guardian Profile Page
- blog: http://inverseprobability.com
References
Lawrence, Neil D. 2017a. “Living Together: Mind and Machine Intelligence.” arXiv. https://arxiv.org/abs/1705.07996.
———. 2017b. “Data Readiness Levels.” arXiv.
Taigman, Yaniv, Ming Yang, Marc’Aurelio Ranzato, and Lior Wolf. 2014. “DeepFace: Closing the Gap to Human-Level Performance in Face Verification.” In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/CVPR.2014.220.
Wiener, Norbert. 1948. Cybernetics: Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.