Business and the Atomic Human
Abstract
: As AI technologies reshape the business landscape, leaders face questions about balancing automation with individual judgment, information flows, and organisational decision-making. This talk builds on the ideas in The Atomic Human to explore the practical implications of AI for businesses through the lens of information topography, decision-making structures, and human-AI collaboration. Drawing from real-world examples and insights from the book, we’ll explore how businesses can strategically implement AI while maintaining human agency, intelligent accountability, and organisational effectiveness.
Introduction: AI’s Impact on Business Information Flows
New Flow of Information
Classically the field of statistics focused on mediating the relationship between the machine and the human. Our limited bandwidth of communication means we tend to over-interpret the limited information that we are given; in the extreme, we assign motives and desires to inanimate objects (a process known as anthropomorphizing). Much of mathematical statistics was developed to help temper this tendency and to understand when we are justified in drawing conclusions from data.
Figure: The trinity of human, data, and computer, highlighting the modern phenomenon. The communication channel between computer and data now has an extremely high bandwidth, while the channel between human and computer and the channel between data and human remain narrow. This creates a new direction of information flow: information reaches us mediated by the computer. The focus of classical statistics reflected the importance of direct communication between human and data. The modern challenges of data science emerge when that relationship is mediated by the machine.
Data science brings new challenges. In particular, there is now a very large bandwidth connection between the machine and data. This means that our relationship with data is commonly mediated by the machine, whether in the acquisition of new data, which now happens by happenstance rather than with purpose, or in the interpretation of that data, where we increasingly rely on machines to summarize what the data contains. This is leading to the emerging field of data science, which must not only deal with the same challenges that mathematical statistics faced in tempering our tendency to over-interpret data, but must also deal with the possibility that the machine has either inadvertently or maliciously misrepresented the underlying data.
The Atomic Human
Figure: The Atomic Eye. By slicing away aspects of the human that we used to believe were unique to us, but are now the preserve of the machine, we learn something about what it means to be human.
The development of what some are calling intelligence in machines raises questions about what machine intelligence means for our own intelligence. The idea of the atomic human is derived from Democritus’s atomism.
In the fifth century BCE the Greek philosopher Democritus posed a question about our physical universe. He imagined cutting physical matter into pieces in a repeated process: cutting a piece, then taking one of the cut pieces and cutting it again so that each time it becomes smaller and smaller. Democritus believed this process had to stop somewhere, that we would be left with an indivisible piece. The Greek word for indivisible is atom, and so this theory was called atomism.
The Atomic Human considers the same question, but in a different domain, asking: As the machine slices away portions of human capabilities, are we left with a kernel of humanity, an indivisible piece that can no longer be divided into parts? Or does the human disappear altogether? If we are left with something, then that uncuttable piece, a form of atomic human, would tell us something about our human spirit.
See Lawrence (2024) the atomic human p. 13.
Networked Interactions
Our modern society intertwines the machine with human interactions. The key question is who has control over these interfaces between humans and machines.
Figure: Humans and computers interacting should be a major focus of our research and engineering efforts.
So the real challenge that we face for society is understanding which systemic interventions will encourage the right interactions between the humans and the machine at all of these interfaces.
The API Mandate
The API Mandate was a memo issued by Jeff Bezos in 2002. Internet folklore has the memo making five statements:
- All teams will henceforth expose their data and functionality through service interfaces.
- Teams must communicate with each other through these interfaces.
- There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
- It doesn’t matter what technology they use.
- All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
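The discipline the mandate describes can be sketched in code. The following is an illustrative toy (the service and team names are invented, not Amazon’s): one team’s data store is private to its service, and other teams reach it only through the published interface.

```python
# Hypothetical sketch of the service-interface discipline: one team's
# data is reachable only through a published interface, never by
# direct reads of its data store.

class InventoryService:
    """Owned by one team; its data store is private to the service."""

    def __init__(self):
        # No other team may read this directly.
        self._store = {"widget": 12, "gadget": 3}

    def get_stock(self, item: str) -> int:
        """The published service interface: the only way in."""
        return self._store.get(item, 0)


class OrderingTeam:
    """A second team: communicates only via the service interface."""

    def __init__(self, inventory: InventoryService):
        self.inventory = inventory

    def can_fulfil(self, item: str, quantity: int) -> bool:
        # Only interface calls, no shared memory, no back-doors.
        return self.inventory.get_stock(item) >= quantity


inventory = InventoryService()
ordering = OrderingTeam(inventory)
print(ordering.can_fulfil("widget", 5))   # True
print(ordering.can_fulfil("gadget", 10))  # False
```

In a real deployment the interface call would cross the network (the mandate’s “service interface calls over the network”); the function boundary here stands in for that service boundary.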
The mandate marked a shift in the way Amazon viewed software, moving to a model that dominates the way software is built today, so-called “Software-as-a-Service”.
Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.
Conway (n.d.)
The law is cited in the classic software engineering text, The Mythical Man Month (Brooks, n.d.).
As a result, and in awareness of Conway’s law, the implementation of this mandate also had a dramatic effect on Amazon’s organizational structure.
Because the design that occurs first is almost never the best possible, the prevailing system concept may need to change. Therefore, flexibility of organization is important to effective design.
Conway (n.d.)
Amazon is set up around the notion of the “two pizza team”: teams of 6-10 people that can theoretically be fed by two (American) pizzas. This structure is tightly interconnected with the software. Each of these teams owns one of these “services”, and Amazon is strict that the team that develops a service also owns it in production. This approach is the secret to their scale as a company, and it has been adopted by many other large tech companies. The software-as-a-service approach changed the information infrastructure of the company, the routes through which information is shared. This had a knock-on effect on the corporate culture.
Amazon works through an approach I think of as “devolved autonomy”. The culture of the company is widely taught (e.g. Customer Obsession, Ownership, Frugality), and a team’s inputs and outputs are strictly defined, but within those parameters teams have a great deal of autonomy in how they operate. The information infrastructure was devolved, so the autonomy was devolved. The different parts of Amazon are then bound together through shared corporate culture.
Information Topography: How AI Reshapes Organizational Decision Making
An Attention Economy
I don’t know what the future holds, but there are three things that (in the longer term) I think we can expect to be true.
- Human attention will always be a “scarce resource” (See Simon, 1971)
- Humans will never stop being interested in other humans.
- Organisations will keep trying to “capture” the attention economy.
Over the next few years our social structures will be significantly disrupted, and during periods of volatility it’s difficult to predict what will be financially successful. But in the longer term the scarce resource in the economy will be the “capital” of human attention. Even if all traditionally “productive jobs” such as manufacturing were automated, and sustainable energy problems were resolved, human attention would still be the bottleneck in the economy. See Simon (1971).
Beyond that, humans will not stop being interested in other humans. Sport is a nice example of this: we are as interested in the human stories of athletes as in their achievements (as a series of Netflix productions evidences: Quarterback, Receiver, Drive to Survive, The Last Dance). The “creator economy” on YouTube is another. While we might prefer a future where the labour in such an economy is distributed, such that we all individually can participate in the creation as well as the consumption, my final thought is that there are significant forces to centralise this so that the many consume from the few, and companies will be financially incentivised to capture this emerging attention economy. For more on the attention economy see Tim O’Reilly’s talk here: https://www.mctd.ac.uk/watch-ai-and-the-attention-economy-tim-oreilly/.

Figure: This is the drawing Dan was inspired to create for Chapter 3.
See blog post on Dan Andrews image from Chapter 3.
Balancing Centralized Control with Devolved Authority
Question Mark Emails

Figure: Jeff Bezos sends employees at Amazon question mark emails. They require an explanation. The explanation required is different at different levels of the management hierarchy. See this article.
One challenge at Amazon was what I call the “L4 to Q4 problem”: the issue that arises when a graduate engineer (Level 4 in Amazon terminology) makes a change to the code base that has a detrimental effect, but we only discover it when the fourth-quarter results are released (Q4).
The challenge in explaining what went wrong is a challenge in intellectual debt.
Executive Sponsorship
Another lever that can be deployed is that of executive sponsorship. My feeling is that organisational change is most likely if the executive is seen to be behind it. This feeds the corporate culture. While it may be a necessary condition, or at least it is helpful, it is not a sufficient condition. It does not solve the challenge of the institutional antibodies that will obstruct long term change. Here by executive sponsorship I mean that of the CEO of the organisation. That might be equivalent to the Prime Minister or the Cabinet Secretary.
A key part of this executive sponsorship is to develop understanding in the executive of how data driven decision making can help, while also helping senior leadership understand what the pitfalls of this decision making are.
Pathfinder Projects
I do exec education courses for the Judge Business School. One of my main recommendations there is that a project is developed that directly involves the CEO, the CFO and the CIO (or CDO, CTO … whichever the appropriate role is) and operates on some aspect of critical importance for the business.
The inclusion of the CFO is critical for two purposes. Firstly, financial data is one of the few sources of data that tends to be of high quality and availability in any organisation. This is because it is one of the few forms of data that is regularly audited. This means that such a project will have a good chance of success. Secondly, if the CFO is bought in to these technologies, and capable of understanding their strengths and weaknesses, then that will facilitate the funding of future projects.
In the DELVE data report (The DELVE Initiative, 2020), we translated this recommendation into that of “pathfinder projects”: projects that cut across departments and involve the Treasury. I appreciate that the nuances of the relationship between the Treasury and No 10 do not map precisely onto those of the CEO and CFO in a normal business. However, the importance of cross-cutting exemplar projects that have the close attention of the executive remains.
2016 US Elections
In the US the 2016 elections saw manipulation through social media and the Russian troll farm, the Internet Research Agency.
See Lawrence (2024) Cambridge Analytica p. 371.
Techonomy 16
. . . the idea that fake news on Facebook . . . influenced the election in any way I think is a pretty crazy idea
Mark Zuckerberg Techonomy 16, 10th November 2016
See Lawrence (2024) Zuckerberg, Mark; Techonomy 16 p. 79-80.
Facebook estimates that as many as 126 million Americans on the social media platform came into contact with content manufactured and disseminated by the IRA
Facebook evidence, 30th October 2017
Eleven months later, on Monday 30th October 2017, Facebook’s evidence to the Senate Intelligence Committee suggested 126 million Americans came into contact with misinformation sown by the Internet Research Agency.
See Lawrence (2024) Facebook; US Senate Intelligence Committee p. 80.
Human-Analogue Machines (HAMs) as Business Tools
Human Analogue Machine
Recent breakthroughs in generative models, particularly large language models, have enabled machines that, for the first time, can converse plausibly with other humans.
The Apollo guidance computer provided Armstrong with an analogy when he landed on the Moon. He controlled the lander through a stick, and the stick provided him with an analogy to the aircraft he already knew: planes like the one Amelia Earhart flew had control columns that were directly connected to their control surfaces. Armstrong’s control exploited his experience as a test pilot flying such planes.
Figure: The human analogue machine is the new interface that large language models have enabled the human to present. It has the capabilities of the computer in terms of communication, but it appears to present a “human face” to the user in terms of its ability to communicate on our terms. (Image quite obviously not drawn by generative AI!)
The generative systems we have produced do not provide us with the “AI” of science fiction, because their intelligence is based on emulating human knowledge. Through being forced to reproduce our literature and our art, they have developed aspects which are analogous to the cultural proxy truths we use to describe our world.
These machines are to humans what the MONIAC was to the British economy. Not a replacement, but an analogue computer that captures some aspects of humanity while providing the high-bandwidth advantages of the machine.
HAM
The Human-Analogue Machine or HAM therefore provides a route through which we could better understand our world through improving the way we interact with machines.
Figure: The trinity of human, data, and computer, highlighting the modern phenomenon. The communication channel between computer and data now has an extremely high bandwidth, while the channel between human and computer and the channel between data and human remain narrow. This creates a new direction of information flow: information reaches us mediated by the computer. The focus of classical statistics reflected the importance of direct communication between human and data. The modern challenges of data science emerge when that relationship is mediated by the machine.
The HAM can provide an interface between the digital computer and the human, allowing humans to work closely with computers regardless of their understanding of the more technical parts of software engineering.
Figure: The HAM now sits between us and the traditional digital computer.
Of course this route provides new routes for manipulation, new ways in which the machine can undermine our autonomy or exploit our cognitive foibles. The major challenge we face is steering between these worlds where we gain the advantage of the computer’s bandwidth without undermining our culture and individual autonomy.
See Lawrence (2024) human-analogue machine (HAMs) p. 343-347, 359, 365-368.
Bandwidth vs Complexity
The computer communicates in gigabits per second. One way of imagining just how much slower we are than the machine is to look for something that communicates in nanobits per second.
|          | evolution              | human     | machine             |
|----------|------------------------|-----------|---------------------|
| bits/min | \(100 \times 10^{-9}\) | \(2,000\) | \(600 \times 10^9\) |
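The ratios between the rates quoted above can be checked with a quick calculation (values in bits per minute; these are rough order-of-magnitude figures, not precise measurements):

```python
# Orders of magnitude implied by the quoted communication rates.
evolution = 100e-9   # germline genetic transmission, bits/min
human = 2_000        # human communication, bits/min
machine = 600e9      # machine communication, bits/min

machine_vs_human = machine / human        # ~3e8: the machine's head start on us
human_vs_evolution = human / evolution    # ~2e10: our head start on evolution

print(f"machine/human:   {machine_vs_human:.0e}")
print(f"human/evolution: {human_vs_evolution:.0e}")
```

Both ratios are many orders of magnitude, which is the sense in which computers watching us communicate is roughly comparable to us watching organisms evolve.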
Figure: When we look at communication rates based on the information passing from one human to another across generations through their genetics, we see that computers watching us communicate is roughly equivalent to us watching organisms evolve. Estimates of germline mutation rates taken from Scally (2016).
Figure: Bandwidth vs Complexity.
The challenge we face is that while speed is on the side of the machine, complexity is on the side of our ecology. Many of the risks we face are associated with the way our speed undermines our ecology and the machine’s speed undermines our human culture.
See Lawrence (2024) Human evolution rates p. 98-99. See Lawrence (2024) Psychological representation of Ecologies p. 323-327.
Understanding the Limitations
AlphaGo
In January 2016, the UK company DeepMind’s machine learning system AlphaGo won a challenge match in which it beat the world champion Go player, Lee Sedol.

Figure: AlphaGo’s win made the front cover of the journal Nature.
Go is a board game that is known to be over 2,500 years old. It is considered challenging for computer systems because of its branching factor: the number of possible moves that can be made at a given board position. The branching factor of chess is around 35. The branching factor of Go is around 250. This makes Go less susceptible to the exhaustive search techniques which were a foundation of Deep Blue, the chess machine that was able to win against Garry Kasparov in 1997. As a result, many commentators predicted that Go was out of the reach of contemporary AI systems, with some predicting that beating the world champion wouldn’t occur until 2025.
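The effect of the branching factor is easy to illustrate: the number of positions an exhaustive search must consider grows as \(b^d\) for branching factor \(b\) and search depth \(d\), so Go’s tree dwarfs chess’s after only a few moves (using the approximate branching factors quoted above):

```python
# Game-tree growth for the approximate branching factors of chess and Go.
chess_branching = 35
go_branching = 250

for depth in (2, 4, 6):
    chess_positions = chess_branching ** depth
    go_positions = go_branching ** depth
    print(f"depth {depth}: chess ~{chess_positions:.1e}, go ~{go_positions:.1e}")
```

At depth 6 the chess tree already holds around a billion positions, and the Go tree is some five orders of magnitude larger again, which is why exhaustive search alone could not crack Go.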
Figure: The AlphaGo documentary tells the story of the tournament between Lee Se-dol and AlphaGo.
While exhaustive search was beyond the reach of computer systems, DeepMind combined stochastic search of the game tree with neural networks. Training those neural networks required vast quantities of data and game play. I wrote more about this at the time in a Guardian article on Google AI versus the Go grandmaster.
However, despite the many millions of matches that AlphaGo had played, Lee Sedol managed to find a board position that was distinct from anything AlphaGo had seen before. Within the high-dimensional pinball machine that made up AlphaGo’s decision-making systems, Lee Sedol found a niche, an Achillean chink in AlphaGo’s armour. He found a path through the neural network where no data had ever been before. He found a location in feature space “where there be dragons”: a space where the model had not seen data before and one where it became confused.
This is a remarkable achievement: a human, with far less experience of the game than the machine, was able to outplay it by placing the machine in an unfamiliar situation. In honour of this achievement, I like to call these voids in the machine’s understanding “Sedolian voids”.
Uber ATG
Unfortunately, such Sedolian voids are not constrained to game-playing machines. On March 18th 2018, just two years after AlphaGo’s victory, an Uber ATG self-driving vehicle killed a pedestrian in Tempe, Arizona. The neural networks that were trained on pedestrian detection did not detect Elaine Herzberg because she was pushing a bicycle, laden with her bags, across the highway. This situation represented a Sedolian void for the neural network, and it failed to stop the car.
Figure: A vehicle operated by Uber ATG was involved in a fatal crash when it killed pedestrian Elaine Herzberg, 49.
Characterising the regions where this is happening for these models remains an outstanding challenge.

Figure: This is the drawing Dan was inspired to create for Chapter 4. It highlights how even if these machines can generate creative works, the lack of origin in humans means it is not the same as works of art that come to us through history.
See blog post on Art is Human.
For the Working Group for the Royal Society report on Machine Learning, back in 2016, the group worked with Ipsos MORI to engage in public dialogue around the technology. Ever since, I’ve been struck by how much more sensible the conversations that emerge from public dialogue are than the breathless drivel that seems to emerge from supposedly more informed individuals.
There were a number of messages that emerged from those dialogues, and many of those messages were reinforced in two further public dialogues we organised in September.
However, there was one area we asked about in 2017 but didn’t ask about in 2024. That was an area where the public was unequivocal that they didn’t want the research community to pursue progress. Quoting from the report (my emphasis).
Art: Participants failed to see the purpose of machine learning-written poetry. For all the other case studies, participants recognised that a machine might be able to do a better job than a human. However, they did not think this would be the case when creating art, as doing so was considered to be a fundamentally human activity that machines could only mimic at best.
Public Views of Machine Learning, April, 2017
How right they were.
System Zero: The Risk of Data-Driven Manipulation
System Zero
The Elephant and its Rider

Figure: The elephant and its rider is an analogy used by Haidt to describe the difference between System One and System Two.
Elephant as System One

You can also check my blog post on System Zero. This was also written in 2015.
See Lawrence (2024) System Zero p. 242–247, 306, 309, 329, 350, 355, 359, 361, 363, 364.
The Mechanical Elephant


Thinking Fast and Slow

The Hindoo Earth



Figure: This is the drawing Dan was inspired to create for Chapter 5. It celebrates the stochastic parrots paper (Bender et al., 2021) but also captures how the feedback of this parrotry is damaging the quality of the international debate.
See blog post on Two Types of Stochastic Parrots.
The stochastic parrots paper (Bender et al., 2021) was the moment that the research community, through a group of brave researchers, some of whom paid with their jobs, raised the first warnings about these technologies. Despite their bravery, at least in the UK, their voices and those of many other female researchers were erased from the public debate around AI.
Their voices were replaced by a different type of stochastic parrot, a group of “fleshy GPTs” that speak confidently and eloquently but have little experience of real life and make arguments that, for those with deeper knowledge, are flawed in naive and obvious ways.
The story is a depressing reflection of a similar pattern that undermined the UK computer industry (Hicks, 2018).
We all have a tendency to fall into the trap of becoming fleshy GPTs, and the best way to prevent that happening is to gather diverse voices around ourselves and take their perspectives seriously even when we might instinctively disagree.
See the Sunday Times article “Our lives may be enhanced by AI, but Big Tech just sees dollar signs” and the Times article “Don’t expect AI to just fix everything, professor warns”.
Two Types of Stochastic Parrot

Figure: This is the drawing Dan was inspired to create for Chapter 5. An AI parrot repeats information about AI doom panicking humans.
Bender et al. (2021) was a landmark paper where researchers first raised significant warnings about large language models, characterizing them as stochastic parrots. Some of these researchers paid for their bravery with their jobs, and particularly in the UK, their voices and those of other female researchers were largely erased from public debate.
Today we see a second type of stochastic parrot emerging: “fleshy GPTs” who speak confidently and eloquently but lack real-world experience. Like the language models they champion, they make arguments that appear superficially convincing but reveal naive flaws to those with deeper domain knowledge. Ironically, some of these voices even claim the research community failed to warn about the implications of these technologies, despite papers like Bender et al. (2021) doing exactly that.
See this reflection on Two Types of Stochastic Parrots.
Superficial Automation
The rise of AI has enabled automation of many surface-level tasks - what we might call “superficial automation.” These are tasks that appear complex but primarily involve reformatting or restructuring existing information, such as converting bullet points into prose, summarizing documents, or generating routine emails.
While such automation can increase immediate productivity, it risks missing the deeper value of these seemingly mundane tasks. For example, the process of composing an email isn’t just about converting thoughts into text - it’s about:
- Reflection time to properly consider the message
- Building relationships through personal communication
- Developing and refining ideas through the act of writing
- Creating institutional memory through thoughtful documentation
- Projecting corporate culture
When we automate these superficial aspects, we can create what appears to be a more efficient process, but one that gradually loses meaning without human involvement. It’s like having a meeting transcription without anyone actually attending the meeting - the words are there, but the value isn’t.
Consider email composition: An AI can convert bullet points into a polished email instantly, but this bypasses the valuable thinking time that comes with composition. The human “pause” in communication isn’t inefficiency - it’s often where the real value lies.
This points to a broader challenge with AI automation: the need to distinguish between tasks that are merely complex (and can be automated) versus those that are genuinely complicated (requiring human judgment and involvement). Effective deployment of AI requires understanding this distinction and preserving the human elements that give business processes their true value.
The risk is creating what appears to be a more efficient system but is actually a hollow process - one that moves faster but creates less real value. True digital transformation isn’t about removing humans from the loop, but about augmenting human capabilities while preserving the essential human elements that give work its meaning and effectiveness.

Figure: Public dialogue held in Liverpool alongside the 2024 Labour Party Conference. The process of discussion is as important as the material discussed. In line with previous dialogues attendees urged us to develop technology where AI operates as a tool for human augmentation, not replacement.
In our public dialogues we saw the same theme: good process can drive purpose. Discussion is as important as the conclusions reached. Attendees urged us to develop technology where AI operates as a tool for human augmentation, not replacement.
Building Trust and Accountability in AI Systems
Institutional Character
Before we start, I’d like to highlight one idea that will be key for contextualisation of everything else. There is a strong interaction between the structure of an organisation and the structure of its software.
This is known as Conway’s law:
Organizations, who design systems, are constrained to produce designs which are copies of the communication structures of these organizations.
Melvin Conway Conway (n.d.)
Amazon prides itself on agility. I spent three years there, and I can confirm things move very quickly. I used to joke that just as a dog year is seven normal years, an Amazon year is four normal years in any other company.
Not all institutions move quickly. My current role is at the University of Cambridge. There are similarities between the way a University operates and the way Amazon operates. In particular Universities exploit devolved autonomy to empower their research leads.
Naturally there are differences too, for example, Universities do less work on developing culture. Corporate culture is a critical element in ensuring that despite the devolved autonomy of Amazon, there is a common vision.
Cambridge University is over 800 years old. Agility is not a word that is commonly used to describe its institutional framework. I don’t want to make a claim for whether an agile institution is better or worse; it’s circumstantial. Institutions have characters, like people. The institutional character of the University is that of a steady and reliable friend. The institutional character of Amazon is more mercurial.
Why do I emphasise this? Because when it comes to organisational data science, when it comes to a data-driven culture around our decision making, that culture interplays with the institutional character. If decision making is truly data-driven, then we should expect co-evolution between the information infrastructure and the institutional structures.
A common mistake I’ve seen is to transplant a data culture from one (ostensibly) successful institution to another. Such transplants commonly lead to organisational rejection. The institutional character of the new host will have cultural antibodies that operate against the transplant even if, at some (typically executive) level the institution is aware of the vital need for integrating the data driven culture.
A major part of my job at Amazon was dealing with these tensions. As a scientist initially working across the company, I found that working with my team introduced dependencies and practices that impinged on the devolved autonomy. I face a similar challenge at Cambridge. Our aim is to integrate data-driven methodologies with the classical sciences, the humanities, and disciplines across the academic spectrum. The devolved autonomy inherent in University research provides a similar set of challenges to those I faced at Amazon.
My role before Amazon was at the University of Sheffield. Those were quieter times in terms of societal interest in machine learning and data science, but the Royal Society was already convening a working group on Machine Learning. This was my first interaction with policy advice; I’ve continued that interaction today by working with the AI Council, convening the DELVE group to give pandemic advice, serving on the Advisory Council for the Centre for Science and Policy, and serving on the Advisory Board for the Centre for Data Ethics and Innovation. I’m not an expert on the civil service and government, but I believe many of the themes I’ve outlined above also apply within government. The ideas I’ll talk about today build on the experiences I’ve had at Sheffield, Amazon, and Cambridge, alongside the policy work I’ve been involved in, to make suggestions about the barriers to enabling a culture of data-driven policy making.

Figure: Black Box Thinking by Matthew Syed. Matthew compares the open approach to errors taken in the airline industry with the way errors are dealt with in the health system, drawing on Martin Bromiley’s experiences.
Propagation of Best Practice
We must also be careful to maintain openness in this new generation of digital solutions across health and society. Matthew Syed’s book, Black Box Thinking (Syed, 2015), emphasizes the importance of surfacing errors as a route to learning and improved process. Taking aviation as an example, and contrasting it with the culture in medicine, Matthew relates the story of Martin Bromiley, an airline pilot whose wife died during a routine hospital procedure, and his efforts to improve the culture of safety in medicine. The motivation for the book is the difference in culture between aviation and medicine in how errors are acknowledged and dealt with. We must ensure that these high standards of oversight apply to the era of data-driven automated decision making.
Practical Solutions for Business Implementation
- Data grooming: Do the basics right.
- Incentivisation for data quality
- People training: At all levels
- Case studies
Attention Reinvestment Cycle
Figure: The attention flywheel focuses on reinvesting human capital.
While the traditional productivity flywheel focuses on reinvesting financial capital, the attention flywheel focuses on reinvesting human capital - our most precious resource in an AI-augmented world. This requires deliberately creating systems that capture the value of freed attention and channel it toward human-centered activities that machines cannot replicate.
Intellectual Debt

Figure: Jonathan Zittrain’s term to describe the challenges of explanation that come with AI is Intellectual Debt.
In the context of machine learning and complex systems, Jonathan Zittrain has coined the term "Intellectual Debt" to describe the challenge of understanding what you've created. In the ML@CL group we've been focusing on developing the notion of a data-oriented architecture to deal with intellectual debt (Cabrera et al., 2023).
Zittrain points out the lack of interpretability of individual ML models as the origin of intellectual debt. In machine learning I refer to work in this area as fairness, interpretability and transparency, or FIT models. To an extent I agree with Zittrain, but if we understand the context and purpose of the decision making, I believe this is readily put right by the correct monitoring and retraining regime around the model, a concept I refer to as "progression testing". Indeed, the best teams do this already, and their failure to do so feels more a matter of technical debt than intellectual debt, because arguably it is a maintenance task rather than an explanation task. After all, we have good statistical tools for interpreting individual models and decisions when we have the context. We can linearise around the operating point, we can perform counterfactual tests on the model, and we can build empirical validation sets that explore the fairness or accuracy of the model.
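As a minimal sketch of what such a monitoring regime might look like in practice (the batch structure, tolerance threshold, and function name are illustrative assumptions, not a prescribed method):

```python
# A toy "progression test": track a deployed model's accuracy over
# successive batches of live data and flag batches that fall more than
# a tolerance below the accuracy measured at deployment, signalling
# that retraining may be needed.

def progression_test(baseline_accuracy, batch_results, tolerance=0.05):
    """Return (batch_index, accuracy) pairs for degraded batches.

    batch_results is a sequence of (predictions, labels) pairs.
    """
    alerts = []
    for i, (predictions, labels) in enumerate(batch_results):
        correct = sum(p == y for p, y in zip(predictions, labels))
        accuracy = correct / len(labels)
        if baseline_accuracy - accuracy > tolerance:
            alerts.append((i, accuracy))  # this batch has drifted
    return alerts

# Example: the second batch degrades sharply and triggers an alert.
batches = [
    ([1, 0, 1, 1], [1, 0, 1, 1]),  # all predictions correct
    ([1, 0, 0, 0], [1, 1, 1, 1]),  # only one prediction correct
]
print(progression_test(baseline_accuracy=0.95, batch_results=batches))
```

The point is that this is routine maintenance, not explanation: the check needs no insight into the model's internals, only a record of its predictions against outcomes over time.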
See Lawrence (2024) intellectual debt p. 84, 85, 349, 365.
Technical Debt
In computer systems the concept of technical debt has been surfaced by authors including Sculley et al. (2015). It is an important concept that I think is somewhat hidden from the academic community, because it is a phenomenon that occurs when a computer software system is deployed.
Separation of Concerns
To construct such complex systems, an approach known as "separation of concerns" has been developed. The idea is that you architect your system, which performs a large-scale complex task, as a set of simpler tasks, each of which is separately implemented. This is known as the decomposition of the task.
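As a toy sketch of what decomposition looks like in code (the order-processing domain and function names are purely illustrative assumptions):

```python
# Separation of concerns: each concern is implemented and testable on
# its own. No single component owns the behaviour of the composed
# system, which is where the "who is concerned with the whole?"
# question arises.

def validate(order):
    """Concern 1: input checking."""
    return order["quantity"] > 0

def price(order):
    """Concern 2: pricing logic."""
    return order["quantity"] * order["unit_price"]

def apply_discount(total):
    """Concern 3: promotions."""
    return total * 0.9 if total > 100 else total

def process(order):
    """Composition of the separately implemented concerns."""
    if not validate(order):
        raise ValueError("invalid order")
    return apply_discount(price(order))

print(process({"quantity": 20, "unit_price": 10}))  # 180.0
```

Each function can be owned, tested, and replaced independently; the system-level behaviour only exists in the composition.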
This is where Jonathan Zittrain’s beautifully named term “intellectual debt” rises to the fore. Separation of concerns enables the construction of a complex system. But who is concerned with the overall system?
Technical debt is the inability to maintain your complex software system.
Intellectual debt is the inability to explain your software system.
It is right there in our approach to software engineering. “Separation of concerns” means no one is concerned about the overall system itself.
See Lawrence (2024) separation of concerns p. 84-85, 103, 109, 199, 284, 371.
See Lawrence (2024) intellectual debt p. 84-85, 349, 365, 376.
Dealing with Intellectual Debt
What we Did at Amazon
Corporate culture turns out to be an important component of how you can react to digital transformation. Amazon is a company that likes to take a data driven approach. It has a corporate culture that is set up around data-driven decision making. In particular customer obsession and other leadership principles help build a cohesive approach to how data is assimilated.
Amazon has 14 leadership principles in total, but two that I found particularly useful are called "right a lot" and "dive deep".
Are Right a Lot
Leaders are right a lot. They have strong judgment and good instincts. They seek diverse perspectives and work to disconfirm their beliefs.
I chose "right a lot" because of the final sentence. Many people find this leadership principle odd: how can you be 'right a lot'? Well, I think it's less about being right and more about how you interact with those around you: seeking diverse perspectives and working to disconfirm your beliefs. One of my favourite aspects of Amazon was how new ideas are presented. They are presented in the form of a document that is discussed. There is a particular writing style where claims in the document need to be backed up with evidence, often in the form of data. Importantly, when these documents are produced, they are read in silence at the beginning of the meeting. When everyone has finished reading, the most junior person speaks first. Senior leaders speak last. This is one approach to ensuring that a diverse range of perspectives are heard. It is this sort of respect that needs to be brought to complex decisions around data.
Dive Deep
Leaders operate at all levels, stay connected to the details, audit frequently, and are skeptical when metrics and anecdote differ. No task is beneath them.
I chose "dive deep" because of the last phrase of the second sentence. Amazon is suggesting that leaders "are skeptical when metrics and anecdote differ". This phrase is vitally important: data inattention bias means that there's a tendency to 'miss the gorilla'. The gorilla is often your own business instinct and/or domain expertise, or that of others. If the data you are seeing contradicts the anecdotes you are hearing, this is a clue that something may be wrong. Your data skepticism should be on high alert. This leadership principle is teaching us how to mediate between 'seeing the forest' and 'seeing the tree'. It warns us to look for inconsistencies between what we're hearing about the individual tree and the wider forest.
Understanding your own corporate culture, and what levers you have at your disposal, is a key component of bringing the right approach to data driven decision making.
These challenges can be particularly difficult if your organisation is dominated by operational concerns. If rapid decision making is required, the gorilla may be missed. And this may be mostly OK; for example, in Amazon's supply chain there are weekly business reviews that look at the international state of the supply chain. If there are problems, they often need quick actions to rectify them. When quick actions are required, 'command and control' tends to predominate over the more collaborative decision making that we hope allows us to see the gorilla. Unfortunately, it can be hard, even as senior leaders, to switch between this type of operational decision making and the more inclusive decision making we need around complex data scenarios. One possibility is to reserve a day for meetings that deal with the more complex decision making. At Amazon, later in the week was more appropriate for this type of meeting. So by making, e.g., Thursday into a more thoughtful day (Thoughtsday, if you will) you can potentially switch modes of thinking and take a longer-term view on a given day in the week.
What we Did in DELVE
DELVE Timeline
- First contact 3rd April
- First meeting 7th April
- First working group 16th April
The DELVE initiative is a group that was convened by the Royal Society to help provide data-driven insights about the pandemic, with an initial focus on exiting the first lockdown and particular interest in using the variation of strategies across different international governments to inform policy.
Right from the start, data was at the heart of what DELVE did, but the reality is that little can be done without domain expertise, and often the data we required wasn't available.
However, even when it is not present, the notion of what data might be needed can also have a convening effect, bringing together multiple disciplines around the policy questions at hand. The DELVE Data Readiness report (The DELVE Initiative, 2020) makes recommendations for how we can improve our processes around data, but this talk also focuses on how data brings different disciplines together.
Any policy question can be framed in a number of different ways (what are the health outcomes; what is the impact on NHS capacity; how are different groups affected; what is the economic impact) and each has different types of evidence associated with it. Complex and uncertain challenges require efforts to draw insights together from across disciplines.
Data as a Convener
To improve communication, we need to 'externalise cognition': have objects that are outside our brains, are persistent in the real world, and that we can combine with our individual knowledge. Doing otherwise leaves us imagining the world as our personal domain-utopias, ignoring the ugly realities of the way things actually progress.
Data can provide an excellent convener, because even if it doesn’t exist it allows conversations to occur about what data should or could exist and how it might allow us to address the questions of importance.
Models, while also of great potential value in externalising cognition, can be too complex to have conversations about, and they can entrench beliefs, triggering model-induced blindness (a variation on Kahneman's theory-induced blindness (Kahneman, 2011)).
Figure: Models can also be used to externalise cognition, but if the model is highly complex it's difficult for two individuals to understand each other's models. This shuts down conversation; often "mathematical intimidation" is used to shut down a line of questioning. This is highly destructive of the necessary cognitive diversity.
Bandwidth constraints on individuals mean that they tend to focus on their own specialism. This can be particularly problematic for those on the more theoretical side, because mathematical models are complex, and require a lot of deep thought. However, when communicating with others, unless they have the same in depth experience of mathematical modelling as the theoreticians, the models do not bring about good information coherence. Indeed, many computational models themselves are so complex now that no individual can understand the model whole.
Figure: Data can be queried, but the simplest query, 'what data do we need?', doesn't even require the data to exist. It seems data can be highly effective for convening a multidisciplinary conversation.
Fritz Heider referred to happenings that are “psychologically represented in each of the participants” (Heider, 1958) as a prerequisite for conversation. Data is a route to that psychological representation.
Note: my introduction to Fritz Heider was through a talk by Nick Chater in 2010, you can read Nick’s thoughts on these issues in his book, The Mind is Flat (Chater, 2019).
For more on the experience of giving advice to government during a pandemic see this talk.
Supply Chain of Ideas
The model here is a "supply chain of ideas" framework, particularly in the context of information technology and AI solutions like machine learning and large language models. The suggestion is that ideas flow from creation to application in a way that is similar to how physical goods move through economic supply chains.
In the realm of IT solutions, there’s been an overemphasis on macro-economic “supply-side” stimulation - focusing on creating new technologies and ideas - without enough attention to the micro-economic “demand-side” - understanding and addressing real-world needs and challenges.
Imagining the supply chain rather than just the notion of the Innovation Economy allows the conceptualisation of the gaps between macro and micro economic issues, enabling a different way of thinking about process innovation.
Phrasing things in terms of a supply chain of ideas suggests that innovation requires both characterisation of the demand and the supply of ideas. This leads to four key elements:
- Multiple sources of ideas (diversity)
- Efficient delivery mechanisms
- Quick deployment capabilities
- Customer-driven prioritization
The next priority is mapping the demand for ideas to the supply of ideas. This is where much of our innovation system is failing. In supply chain optimisation a large effort is spent on understanding current stock and managing resources to bring the supply to map to the demand. This includes shaping the supply as well as managing it.
The objective is to create a system that can generate, evaluate, and deploy ideas efficiently and effectively, while ensuring that people’s needs and preferences are met. The customer here depends on the context - it could be the public, it could be a business, it could be a government department but very often it’s individual citizens. The loss of their voice in the innovation economy is a trigger for the gap between the innovation supply (at a macro level) and the innovation demand (at a micro level).

Figure: This is the drawing Dan was inspired to create for Chapter 6. It highlights how uncertainty means that a diversity of approaches brings resilience.
See blog post on Balancing Reflective and Reflexive.
From motor intelligence to mathematical instinct, it feels like there’s a full spectrum of decision-making approaches that can be deployed and that best performance is when they are judiciously deployed according to the circumstances. The Atomic Human tries to explore this in different contexts and I think Dan Andrews did a great job of capturing some of those explorations in his image for Chapter 7.
I think the reason why they relate is because in both cases there is time pressure, it’s from the outside world that pressures come and require us to deliver a conclusion on a particular timeframe. What I find remarkable in human intelligence is how we sustain both these fast and slow answers together, so that we’re ready to go with some form of answer at any given moment. That means that as individuals we are filled with contradictions, differences between the versions of our selves we imagine versus how we behave in practice.
Conclusion: The Business Imperative
AI cannot replace the atomic human

Figure: Opinion piece in the FT that describes the idea of a social flywheel to drive the targeted growth we need in AI innovation.

Figure: This is the drawing Dan was inspired to create for Chapter 12. It captures the challenge of the analogy between the speed of information assimilation associated with machines and the speed of assimilation associated with humans.
See blog post on the launch of Facebook's AI lab.
Thanks!
For more information on these subjects and more you might want to check the following resources.
- book: The Atomic Human
- twitter: @lawrennd
- podcast: The Talking Machines
- newspaper: Guardian Profile Page
- blog: http://inverseprobability.com