The AI Fallacy
There is a lot of variation in the use of the term artificial intelligence. I’m sometimes asked to define it, but depending on whether you’re speaking to a member of the public, a fellow machine learning researcher, or someone from the business community, the sense of the term differs.
Underlying its use, I’ve detected a trend I think of as “The Great AI Fallacy.”
The fallacy stems from an implicit promise embedded in many statements about artificial intelligence. Artificial intelligence, as it currently exists, is merely a form of automated decision making. The implicit promise is that it will be the first wave of automation where the machine adapts to the human, rather than the human adapting to the machine.
Most of what we talk about today as AI is a technology called machine learning. At its most basic level, machine learning combines data, a model and compute to produce a prediction.
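That combination can be made concrete in a few lines. The sketch below is an illustrative toy, not any particular system: the data is a synthetic noisy linear relationship, the model is a straight line, and the compute is a least-squares solve.

```python
import numpy as np

# Data: observed inputs and outputs (here, a noisy linear relationship).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=50)

# Model: a straight line y = w*x + b, fitted by least squares (the compute).
X = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(X, y, rcond=None)[0]

# Prediction: apply the fitted model to a new input.
prediction = w * 5.0 + b  # should land near 2*5 + 1 = 11
```

Everything else in a deployed machine learning system, from data pipelines to monitoring, is scaffolding around this basic loop.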
The reality is that we haven’t yet created machines that are as flexible as humans. The automation we are producing is still ‘fragile’: if it encounters unforeseen circumstances, it breaks. This is a consequence of the way we design systems. Flexible natural systems such as ourselves are evolved, not designed, and evolved systems have a first priority of ‘not failing.’ What we think of as ‘common sense’ in the human is in reality a set of heuristics that prevent us from doing stupid things in the name of achieving a goal. Our AI systems don’t exhibit this.
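That fragility can be demonstrated with a toy example (the numbers here are purely illustrative): a model fitted on a narrow range of data looks fine on inputs like those it has seen, but breaks badly on an unforeseen input outside that range.

```python
import numpy as np

rng = np.random.default_rng(1)

# Train a straight-line model on data from a curved (quadratic) process,
# but only over the narrow range the system happened to observe: x in [0, 1].
x_train = rng.uniform(0, 1, size=100)
y_train = x_train ** 2

X = np.column_stack([x_train, np.ones_like(x_train)])
w, b = np.linalg.lstsq(X, y_train, rcond=None)[0]

# Inside the seen range the model looks fine...
in_range_error = abs((w * 0.5 + b) - 0.5 ** 2)

# ...but on an unforeseen input far outside it, the prediction is wildly wrong.
out_of_range_error = abs((w * 10.0 + b) - 10.0 ** 2)
```

The model has no heuristic that says “I’ve never seen anything like this input, so I shouldn’t answer.” A human exercising common sense would flag the situation; the model confidently extrapolates and fails.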
Despite this, the AI fallacy has very real effects on the way we think about creating and deploying artificial intelligence solutions. There are serious benefits to society in deploying this new wave of data-driven automated decision making, but the AI Fallacy is causing us to suspend the calibrated skepticism needed to deploy these systems safely and efficiently.
Techno-solutionism and techno-skepticism
In public discussions about AI, we’ve seen lots of promises that AI will save lives, address global challenges such as climate change, add trillions of dollars to the global economy, and make our daily lives easier and more productive. There are also warnings about killer robots, job losses and the machines rising up. These portrayals play into a long history of hopes and fears around intelligent machines and our place in the world. Neither outcome is inevitable.
> While much of the public and policy debate about AI and work has tended to oscillate between fears of the ‘end of work’ and reassurances that little will change in terms of overall employment, evidence suggests that neither of these extremes is likely. However, there is consensus that AI will have a disruptive effect on work, with some jobs being lost, others being created, and others changing.

From the Royal Society blog.
We can look to history for insights into how AI technologies might affect our work in the decades to come. Studies from the industrial revolution to the introduction of the computer to the modern workplace show us that there is often a lag from technology invention to widespread economic and social benefit, as individuals and organisations adopt and adapt to the new ways of working that technologies allow, and there is often disruption for some groups, sectors or places, with these disruptions tending to have a bigger impact on already vulnerable communities.
While technological advances have ignited recent debates about the future of work, what AI is technically able to automate is only one factor that contributes to the impact of AI on businesses and organisations. Political, economic, cultural and organisational influences all play a role – and decision-makers across organisations have the ability to influence how AI will affect them, through identifying how they can adopt AI, introducing technologies in ways that work well for human users, and supporting their workforce.
Organisational data readiness
A recent report by the DELVE Initiative (The DELVE Initiative, 2020) considered what action organisations need to take to ensure they have the absorptive capacity to begin to make use of data-enabled technologies like machine learning. A first step is considering organisational data maturity.
In that report, we noted that
> Many organisations aspire to be data driven in their decision making, but are held back from achieving this goal by issues arising from the accessibility and availability of data between teams or collaborators. Data maturity frameworks can provide practical guidance for organisations seeking to improve their data management practices and create value from the data they hold.
Our proposed framework for data maturity encourages leaders to interrogate the extent to which businesses are able to deploy data in their work and to consider what action needs to be taken to support data science projects.
| Maturity Level | Data Sharing |
|---|---|
| 1 Reactive | Data sharing is not possible, or ad-hoc at best. |
| 2 Repeatable | Some limited data service provision is possible and expected, in particular between neighboring teams. Some limited data provision to distinct teams may also be possible. |
| 3 Managed and Integrated | Data is available through published APIs; corrections to requested data are monitored and API service quality is discussed within the team. Data security protocols are partially automated, ensuring electronic access to the data is possible. |
| 4 Optimized | Teams provide reliable data services to other teams. The security and privacy implications of data sharing are automatically handled through privacy- and security-aware ecosystems. |
| 5 Transparent | Internal organisational data is available to external organisations with appropriate privacy and security policies. Decision making across the organisation is data-enabled, with transparent metrics that could be audited through organisational data logs. If appropriate governance frameworks are agreed, data-dependent services (including AI systems) could be rapidly and securely redeployed on company data in the service of national emergencies. |
Failure to understand the importance of data quality leads to unrealistic projects staffed by people with the wrong skill sets. Implementation of a machine learning model can be relatively trivial. But preparation of the data set and the data ecosystem around the model is extremely difficult. Without understanding data quality, there’s a risk that the wrong investments are made – millions spent on recruiting machine learning PhDs and minimal spend on data infrastructure and systems for data auditing – and the resulting systems poorly serve organisations or employees.
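To make the idea of investing in data auditing concrete, here is a minimal sketch of the kind of automated check such infrastructure might start with. All field names and validity rules below are hypothetical examples, not a recommendation of any particular tool.

```python
def audit(records, required_fields, valid_ranges):
    """Count basic quality problems in a list of record dicts."""
    issues = {"missing_field": 0, "out_of_range": 0}
    for record in records:
        # Flag fields that are absent or null.
        for field in required_fields:
            if record.get(field) is None:
                issues["missing_field"] += 1
        # Flag values outside their plausible range.
        for field, (lo, hi) in valid_ranges.items():
            value = record.get(field)
            if value is not None and not (lo <= value <= hi):
                issues["out_of_range"] += 1
    return issues

# Hypothetical records with typical problems.
records = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 48000},  # missing value
    {"age": 212, "income": 61000},   # implausible age
]
report = audit(records, ["age", "income"], {"age": (0, 120)})
# report == {'missing_field': 1, 'out_of_range': 1}
```

Checks like these are trivial individually, but running them systematically across an organisation’s data estate is exactly the unglamorous infrastructure work the machine learning models depend on.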
We can track this back to the AI Fallacy – that AI business problems will be solved by just adding some algorithms created by a wave of magician-data-scientists.
Leaders need to avoid suspending normal business skepticism where AI is concerned. If these systems were really ‘intelligent,’ in the way a human is intelligent, and also had the skills of a computer, that really would be revolutionary. However, that’s not what is happening, and it won’t happen in the foreseeable future (i.e. on timelines that matter to business). In reality this is an evolution of existing technology, and it faces the usual challenges of adoption that existing technologies face. The challenge for decision makers is how to assimilate the implications of this new technology within their business skill set. Senior business leaders need to take time to work closely with the technology in their own environments to better calibrate their understanding of its strengths and weaknesses.
For more information on these subjects, you might want to check the following resources.
- twitter: @lawrennd
- podcast: The Talking Machines
- newspaper: Guardian Profile Page
- blog: http://inverseprobability.com