Update: readers of the post have also pointed out this critique by Ernest Davis and this response to Davis by Rob Bensinger.

Update 2: Both Rob Bensinger and Michael Tetelman rightly pointed out that my definition of intelligence was sloppily stated. I’ve added a clarification that the definition is ‘for a given task’.

Update 3: The Future of Humanity Institute kindly invited me to give a seminar and have a discussion about these issues. You can find a recording of the talk here.

Update 4: In October Maciej Ceglowski gave a talk on “Superintelligence: The Idea that Eats Smart People” and provided the transcript here.

Cover of Superintelligence

This post is a discussion of Nick Bostrom’s book “Superintelligence”. The book has had an effect on the thinking of many of the world’s thought leaders, not just in artificial intelligence but in a range of different domains (politicians, physicists, business leaders). In that light, and given this series of blog posts is about the “Future of AI”, it seemed important to read the book and discuss his ideas.

In an ideal world, this post would certainly have contained more summaries of the book’s arguments, and perhaps a later update will improve on that aspect. For the moment the review focuses on counter-arguments and perceived omissions (the post already grew too long just covering those).

Bostrom considers various routes we might take to forming intelligent machines and what the possible outcomes of developing such technologies might be. He is a professor of philosophy, but has an impressive array of background degrees in areas such as mathematics, logic, philosophy and computational neuroscience.

So let’s start at the beginning and put the book in context by trying to understand what is meant by the term “superintelligence”.

Defining Intelligence

In common with many contributions to the debate on artificial intelligence, Bostrom never defines what he means by intelligence. Obviously, this can be problematic. On the other hand, superintelligence is defined as outperforming humans in every intelligent capability that they express.

Personally, I’ve developed the following definition of intelligence: “Use of information to take decisions which save energy in pursuit of a given task”.1 Here by information I might mean data or facts or rules, and by saving energy I mean saving ‘free’ energy.2

However, accepting Bostrom’s lack of definition of intelligence (and perhaps taking note of my own), we can still consider the routes to superintelligence Bostrom proposes. It is important to bear in mind that Bostrom is worried about the effect of intelligence on 30-year (and greater) timescales. These are timescales that are difficult to predict over. I think it is admirable that Nick is trying to address this, but I’m also keen to ensure that particular ideas which are at best implausible, and at worst a misrepresentation of current research, don’t become memes in the very important debate on the future of machine intelligence.

Technological Singularities

A technological singularity is when a technology becomes ‘transhuman’ in its possibilities, moving beyond our own capabilities through self improvement. It’s a simple idea, and often there’s nothing to be afraid of. For example, in mechanical engineering, we long ago began to make tools that could manufacture other tools. And indeed, the precision of the manufactured tools outperformed those that we could make by hand. This led to a ‘technological singularity’ of precision made tools. We developed ‘transhuman’ milling machines and lathes. We developed ‘superprecision’, precision that is beyond the capabilities of any human. Of course there are physical limits on how far this particular technological singularity has taken us. We cannot achieve infinitely precise machining tolerances.

In machining, the concept of ‘precision’3 can be defined in terms of the ‘tolerance’ to which the resulting parts are made. Unfortunately, the lack of a definition of intelligence in Bostrom’s book makes it harder to ground the argument. In practice this means that the book often exploits different facets of intelligence and combines them in worst-case scenarios while simultaneously conflating conflicting principles.

Embodied Intelligence

The book gives little thought to the differing natures of machine and human intelligence. For example, there is no acknowledgment of the embodied nature of our intelligence.4 There are physical constraints on communication rates, and for humans these constraints are much stronger than for machines. Machine intelligences communicate with one another in gigabits per second; humans in bits per second. As for our relative computational abilities, the best estimates suggest that, in terms of the underlying computation in the brain, we compute much faster than machines do. This means humans have a very high compute/communicate ratio. We might think of that as an embodiment factor: we can compute far more than we can communicate, leading to a backlog of conclusions within our own minds.5 Much of our human intelligence seems doomed to remain within ourselves. This dominates the nature of human intelligence. In contrast, this phenomenon is only weakly observed in computers, if at all. Computers can distribute the results of their intelligence at approximately the same rate that they compute them.6
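To make the compute/communicate ratio concrete, here is a minimal sketch in Python using the rough figures from footnote 6; the petaflop estimate for the brain in particular is an assumption rather than a measurement.

```python
def embodiment_factor(compute_flops, communicate_bits_per_sec):
    """Compute rate divided by communication rate."""
    return compute_flops / communicate_bits_per_sec

# A machine: ~10 gigaflops of compute with a ~1 gigabit/s connection.
machine = embodiment_factor(10e9, 1e9)

# A human: ~1 petaflop (a common but fraught estimate for the brain)
# against a speaking rate of ~100 bits per second.
human = embodiment_factor(1e15, 100)

print(f"machine: {machine:.0f}")  # ~10
print(f"human: {human:.0e}")      # ~1e13
```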

Bostrom’s idea of superintelligence is an intelligence that outperforms us in all its facets. But if our emotional intelligence is a result of our limited communication ability, then it might be impossible to emulate it without also implementing the limited communication.7 Since communication also affects other facets of our intelligence we can see how it may, therefore, be impossible to dominate human abilities in the manner which the concept of superintelligence envisages. A better definition of intelligence would have helped resolve these arguments.

My own belief is that we became individually intelligent through a need to model each other (and ourselves) to perform better planning.8 So we evolved to undertake collaborative planning and developed complex social interactions. As a result our species, our collective intelligence, became increasingly complex (on evolutionary timescales) as we evolved greater intelligence within each of the individuals that made up our social group.9 Because of this process I find it difficult to fully separate our collective intelligence from our individual intelligences. I don’t think Bostrom suffers from this dichotomy, because my impression is that his book views human intelligence only as an individual characteristic. My feeling is that this is limiting, because any algorithms we create to emulate our intelligence will actually operate on societal scales, and the interaction of the artificial intelligence with our own should be considered in that context.10

Prediction, Uncertainty and Intelligence Saturation

As humans, we are a complex society of interacting intelligences. Any predictions we make within that society would seem particularly fraught. Intelligent decision making relies on such predictions to quantify the value of a particular decision (in terms of the energy it might save). But when we want to consider future plausible scenarios we are faced with exponential growth of complexity in an already extremely complex system.

In practice we can make progress with our predictions by compressing the complex world into abstractions: simplifications of the world around us that are sufficiently predictive for our purposes but retain tractability. However, using such abstractions introduces model uncertainty. Model uncertainty reflects the unknown ways in which the actual world will differ from our simplifications.

Practitioners who have performed sensitivity analysis on time series prediction will know how quickly uncertainty accumulates as you try to look forward in time. There is normally a time frame ahead of which things become too misty to compute any more. Further computational power doesn’t help you in this instance, because uncertainty dominates. Reducing model uncertainty requires exponentially greater computation.11 We might try to handle this uncertainty by quantifying it, but even this can prove intractable.
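To see how fast this happens, here is a minimal illustration using a random walk, the simplest possible time series model; the numbers are purely illustrative.

```python
import numpy as np

# For a random walk x_{t+1} = x_t + eps, with eps ~ N(0, sigma^2), the
# k-step-ahead predictive standard deviation is sigma * sqrt(k):
# uncertainty compounds with the forecast horizon.
sigma = 1.0
for k in [1, 10, 100, 1000]:
    print(f"horizon {k:4d}: predictive std = {sigma * np.sqrt(k):6.1f}")

# Monte Carlo check: simulate many futures and measure the spread.
rng = np.random.default_rng(0)
paths = rng.normal(0.0, sigma, size=(10_000, 1000)).cumsum(axis=1)
print("empirical std at horizon 1000:", paths[:, -1].std())  # ~ sqrt(1000)
```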

So just like the elusive concept of infinite precision in mechanical machining, there is likely a limit on the degree to which an entity can be intelligent. We cannot predict with infinite precision and this will render our predictions useless on some particular time horizon.

The limit on predictive precision is imposed by the exponential growth in complexity of exact simulation, coupled with the accumulation of error associated with the necessary abstraction of our predictive models. As we predict forward, these uncertainties can saturate, dominating our predictions. As a result we often have only a very vague notion of what is to come. This limit on our predictive ability places a fundamental limit on our ability to make intelligent decisions.

There was a time when people believed in perpetual motion machines (and quite a lot of effort was put into building them). The physical limitations of such machines were only understood in the late 19th century (for example, the limit on the efficiency of heat engines was theoretically formulated by Carnot). We don’t yet know the theoretical limits of intelligence, but the intellectual gymnastics of some of the entities described in Superintelligence will likely be curtailed by the underlying mathematics.12 In practice the singularity will saturate; it’s just a question of where that saturation will occur relative to our current intelligence. Bostrom thinks it will be a long way ahead. I tend to agree, but I don’t think that the results will be as unimaginable as is made out. Machines are already a long way ahead of us in many areas (weather prediction, for example), but I don’t find that unimaginable either.

Unfortunately, in his own analysis, Bostrom makes hardly any use of uncertainty when envisaging future intelligences. In practice, correct handling of uncertainty is critical in intelligent systems. By ignoring it, Bostrom can give the impression that a superintelligence would act with unnerving confidence. Indeed, the only point where I recollect uncertainty being mentioned is when it is used to unnerve us further. Bostrom refers to how he thinks a sensible Bayesian agent would respond to being given a particular goal. He suggests that, due to uncertainty, the agent would believe it might not have achieved its goal and would continue to consume the world’s resources in an effort to do so.13 In this respect the agent takes the inverse of the action suggested by the Greek skeptic Aenesidemus,14 who advocated suspension of judgment, or epoché, in the presence of uncertainty. Suspension of judgment (delaying the decision) means specifically refraining from action. That is indeed the intelligent reaction to uncertainty: don’t needlessly expend energy when the outcome is uncertain (to do so would contradict my definition of intelligent behaviour). This idea emerges as optimal behaviour from a mathematical treatment of such systems when uncertainty is incorporated.
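To make the contrast concrete, here is a deliberately crude sketch (not a treatment of stochastic optimal control): a Bayesian agent whose posterior belief that the goal is unmet never reaches exactly zero, as Cromwell’s rule demands, but which nevertheless refrains from acting once the expected saving no longer justifies the energy cost. All probabilities, costs and benefits are invented for illustration.

```python
# Cromwell's rule: with a nonzero prior, evidence can shrink the belief
# that the goal is unmet, but never to exactly zero.
def posterior(prior, lik_if_unmet, lik_if_met):
    evidence = prior * lik_if_unmet + (1 - prior) * lik_if_met
    return prior * lik_if_unmet / evidence

# The intelligent response to residual doubt: act only when the expected
# energy saving exceeds the energy cost of acting; otherwise refrain.
def decide(p_unmet, benefit_if_unmet, cost_of_acting):
    return "act" if p_unmet * benefit_if_unmet > cost_of_acting else "epoche"

p = posterior(prior=0.5, lik_if_unmet=0.01, lik_if_met=0.99)
print(p)                     # small, but never exactly zero
print(decide(p, 10.0, 1.0))  # 'epoche': tiny residual doubt does not
                             # justify consuming the world's resources
```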

This meme occurs throughout the book: the “savant idiot”,15 a gifted intelligence that does a particular thing really stupidly. As such it contradicts the concept of superintelligence. The superintelligence is better than us in all ways, but then somehow must also be taught values and morals. Values and morals are part of our complex emergent human behaviour, part of both our innate and our developed intelligence, both individually and collectively as a species. They are part of our natural conservatism that constrains extreme behaviour. Constraints on extreme behaviour are necessary because of the general futility of absolute prediction. Just as in machining, we cannot achieve infinitely precise prediction.

Another way the savant idiot expresses itself in the book is through extreme confidence in its predictions of the future. The premise is that it will aggressively follow a strategy (potentially to the severe detriment of humankind) in an effort to fulfill a defined ‘final goal’. We’ll address the mistaken idea of a simplistic final goal below.

With a shallow reading Bostrom’s ideas seem to provide an interesting narrative. In the manner of an Ian Fleming novel, the narrative is littered with technical detail to increase the plausibility for the reader.16 However, in the same way that so many of Blofeld’s17 schemes are quite fragile when exposed to deeper analysis, many of Bostrom’s ideas are as well.

In reality, the challenges associated with abstracting the world render the future inherently unpredictable, both to humans and to our computers. Even when many aspects of a system are broadly understood (such as our weather), prediction far into the future is untenable due to the propagation of uncertainty through the system. Uncertainty tends to inflate as time passes, rendering only near-term prediction plausible. Inherent to any intelligent behaviour is an understanding of the limits of prediction. Intelligent behaviour withdraws, when appropriate, to the “suspension of judgement”, inactivity, the epoché. This simple idea finesses many of the challenges of artificial intelligence that Bostrom identifies.

Whole Brain Emulation

Large sections of the book are dedicated to whole brain emulation, under the premise that this might be achievable before we have understood intelligence (superintelligence could then be achieved by hitting the turbo button and running those brains faster). Simultaneously, hybrid brain-machine systems are rejected as a route forward due to the perceived difficulty of developing such interfaces.

Such uneven-handed treatment of future possible paths to AI makes the book a very frustrating read. If we had the level of understanding we need to fully emulate the brain, then we would know what is important to emulate in the brain to recreate intelligence. The path to that achievement would also involve improvements in our ability to directly interface with the brain. Given that there are immediate applications for patients, e.g. those with spinal injuries or suffering from ALS, I think we will have developed hybrid systems that interface directly with the brain long before we have managed a full emulation of the human brain. Indeed, such applications may prove to be critical to developing our understanding of how the brain implements intelligence.

Perhaps Bostrom’s naive premise about the ease of brain emulation comes from a lack of understanding of what it would involve. It could not involve an exact simulation of each neuron in the brain down to the quantum level (and if it did, it would be many orders of magnitude more computationally demanding than is suggested in the text). Instead it would involve some level of abstraction: abstraction as to those aspects of the biochemistry and physics of the brain that are important in generating our intelligence. Modelling and simulating the brain would require our simulations to replace the actual mechanisms with those salient parts of the mechanisms that the brain makes use of for intelligence.

As we’ve mentioned in the context of uncertainty, an understanding of this sort of abstraction is missing from Superintelligence, but it is vital in modelling, and, I believe, it is vital in intelligence. Such abstractions require a deep understanding of how the brain is working, and such understandings are exactly what Bostrom says are impossible to determine for developing hybrid systems.

Over the 30-year time horizons that Bostrom is interested in, hybrid human-machine systems could become very important. They are highly likely to arise before a full understanding of the brain is developed, and if they do they will change the way society evolves. That’s not to say that we won’t experience societal challenges, but they are likely to be very different from the threats that Bostrom perceives. Importantly, when considering humans and computers, the line of separation between the two may not be as distinctly drawn as Bostrom suggests. It wouldn’t be human vs computer, but augmented human vs computer.18

Dedication of Resources and Control of Progress

One aspect that, it seems, must be hard to understand if you’re not an active researcher is the nature of technological advance at the cutting edge. The impression Bostrom gives is that research in AI is a set of journeys with predefined goals.19 It’s therefore merely a matter of assigning resources, planning, and navigating your way there. In his strategies for reacting to the potential dangers of AI, Bostrom suggests different areas in which we should focus our advances (which of these expeditions should we fund, and which should we impede). In reality, we cannot switch research directions on and off in such a simplistic manner. Most research in AI is less an organized journey and more an exploration of uncharted terrain. You set sail from Spain with government backing and a vague notion of a shortcut to the spice trade of Asia, but instead you stumble on an unknown continent of gold-ridden cities. Even then you don’t realize the truth of what you discovered within your own lifetime.20

Even for the technologies that are within our reach, when we look to the past, we see that people were normally overly optimistic about how rapidly new advances could be deployed and assimilated by society. In the 1970s Xerox PARC focused on the idea that the ‘office of the future’ would be paperless. It was a sensible projection, but before it came about (indeed it’s not quite here yet) there was an enormous proliferation of the use of paper, so the demand for paper increased.

Rather than the sudden arrival of the singleton, I suspect we’ll experience something very similar to our ‘journey to the paperless office’ with artificial intelligence technologies. As we develop AI further, we will likely require more sophistication from humans. For example, we won’t be able to replace doctors immediately, first we will need doctors who have a more sophisticated understanding of data. They’ll need to interpret the results of, e.g., high resolution genetic testing. They’ll need to assimilate that understanding with their other knowledge. The hybrid human-machine nature of the emergence of artificial intelligence is given only sparse treatment by Bostrom. Perhaps because the narrative of such co-evolution is much more difficult to describe than an independent evolution.

The explorative nature of research adds to the uncertainties about where we’ll be at any given time. Bostrom talks about how to control and guide our research in AI, but the inherent uncertainties require much more sophisticated thinking about control than Bostrom offers. In a stochastic system, a controller needs to be more intelligent and more reactive. The right action depends crucially on the time horizon, and these horizons are unknown. Of course, that does not mean the research should be totally unregulated, but it means that those who suggest regulation need to be much closer to the nature of research and its capabilities. They need to work in collaboration with the community.

Arguments for large amounts of preparatory work on regulation are also undermined by the imprecision with which we can predict the nature of what will arrive and when it will come. In 1865 Jules Verne21 correctly envisaged that one day humans would reach the moon. However, the manner in which they reached the moon in his book proved very different from how we arrived in reality. Verne’s idea was that we’d do it using a very big gun. A good idea, but not correct. Verne was, however, correct that the Americans would get there first. One hundred and four years after he wrote, the goal was achieved through rocket power (and without any chickens inside the capsule).

This is not to say that we shouldn’t be concerned about the paths we are taking. There are many issues that the increasing use of algorithmic decision making raises, and they need to be addressed. It is to say that the concerns Bostrom raises are implausible because of the imprecision of our predictions over such time frames.

Final Goals and AI

Some of Bostrom’s perspectives may also come from a lack of experience in deploying systems in practice. The book focuses a great deal on the programmed ‘final goal’ of our artificial intelligences. It is true that most machine learning systems have objective functions, but an objective function doesn’t really map very nicely to the idea of a ‘final goal’ for an intelligent system. The objective functions we normally develop are really only effective for simplistic tasks, such as classification or regression. Perhaps the more complex notion of a reward in reinforcement learning is closer, but even then the reward tends to be task specific.
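For readers unfamiliar with the term, here is a minimal sketch of what an objective function typically looks like in practice: a task-specific training signal for a toy regression problem, a long way from anything resembling a ‘final goal’. The data here is synthetic.

```python
import numpy as np

def objective(weights, X, y):
    """Mean squared error of a linear predictor: purely task-specific."""
    predictions = X @ weights
    return np.mean((predictions - y) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

print(objective(np.zeros(3), X, y))  # high error before learning
print(objective(true_w, X, y))       # low error: the 'goal' is just a fit
```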

Arguably, if the system does have a simplistic ‘final goal’, then it is already failing its test of superintelligence: even the simplest human is a robust combination of sometimes conflicting goals that reflect the uncertainties around us. So if we are goal driven in our intelligence, then it is by sophisticated goals (akin to multi-objective optimisation), and each of us weights those goals according to sets of values that we each evolve, both across generations and within generations. We are sophisticated about our goals, rather than simplistic, because our environment itself is evolving, implying that our ways of behaviour need to evolve as well. Any AI with a simplistic final goal would fail the test of being a ‘dominant intelligence’. It would not be a superintelligence because it would under-perform humans in one or more critical aspects.
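The multi-objective point can be made precise with the notion of Pareto dominance (see also footnote 7). The objectives and scores below are invented purely for illustration.

```python
# One candidate dominates another if it is at least as good on every
# objective and strictly better on at least one.
def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical scores on (social reasoning, raw calculation, energy
# efficiency); higher is better.
human = (0.9, 0.2, 0.8)
machine = (0.3, 1.0, 0.4)

print(dominates(machine, human))  # False
print(dominates(human, machine))  # False: neither dominates the other,
                                  # so both sit on the Pareto front
```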

Data and the Reality of Current Intelligence

One of the routes explored by Bostrom to superintelligence involves speeding up implementations of our own intelligence. Such speed would not necessarily bring about significant advances in all domains of intelligence, due to fundamental limits on predictability. Linear improvements22 in speed cannot overcome exponential growth in computational complexity. But Bostrom also seems to assume that speeding up intelligences will necessarily take them beyond our comprehension or control. Of course, in practice there are many examples where this is not the case. IBM’s Watson won Jeopardy!, but it did it by storing far more knowledge than we ever could and then using some simplistic techniques from language processing to recover those facts: it was a fancy search engine. Such systems outperform us, but they are by no means beyond our comprehension. Still, that does not mean we shouldn’t fear this phenomenon.

Given the quantity of data we are making available about our own behaviors and the rapid ability of computers to assimilate and intercommunicate, it is already conceivable that machines can predict our behavior better than we can. Not by superintelligence but by scaling up of simple systems. They’ve finessed the uncertainty by access to large quantities of data. These are the advances we should be wary of, yet they are not beyond our understanding. Such speeding up of compute and acquisition of large data is exactly what has led to the recent revolution in convolutional neural networks and recurrent neural networks. All our recent successes are just more compute and more data.

This brings me to another major omission of the book, and this one is ironic, because it is the fuel for the current breakthroughs in artificial intelligence. Those breakthroughs are driven by machine learning. And machine learning is driven by data. Very often our personal data. Machines do not need to exceed our capabilities in intelligence to have a highly significant social effect. They outperform us so greatly in their ability to process large volumes of data that they are able to second guess us without expressing any form of higher intelligence. This is not the future of AI, this is here today.

Deep neural networks of today are not performant because someone did something new and clever. Those methods did not work23 with the amount of data we had available in the 1990s. They work with the quantity of data we have now. They require a lot more data than any human uses to perform similar tasks. So already, the nature of the intelligence around us is data dominated. Any future advances will capitalise further on this phenomenon.

The data we have comes about because of rapid interconnectivity and large-scale storage (this is connected to the low embodiment factor of the computer). It is the consequence of the successes of the past and it will feed the successes of the future. Because current AI breakthroughs are based on the accumulation of personal data, there is an opportunity to control their development by reforming our rules on data.

Unfortunately, this most obvious route to our AI futures is not addressed at all in the book.

Summary

Debates about the future of AI and machine learning are very important for society. People need to be well informed so that they continue to retain their individual agency when making decisions about their lives.

I welcome the entry of philosophers to this debate, but I don’t think Superintelligence is contributing as positively as it could have done to the challenges we face. In its current form many of its arguments are distractingly irrelevant.

I am not an apologist for machine learning, or a promoter of an unthinking march to algorithmic dominance. I have my own fears about how these methods will affect our society, and those fears are immediate. Bostrom’s book has the feel of an argument for doomsday prepping. But a challenge for all doomsday preppers is the quandary of exactly which doomsday they are preparing for. Problematically, if we become distracted by those images of Armageddon, we are in danger of ignoring existent challenges that urgently need to be addressed.

This is post 6 in a series. Previous post here

  1. Think of this as a pocketknife definition of intelligence. It’s designed to be portable and fulfill a variety of roles (it even has a bit for intelligent removal of stones from horses’ hooves). 

  2. Heat engines are systems for converting heat into useful work. By my definition, intelligence is conducted through an inference engine: a system for taking information and using it to conserve work (by avoiding things we didn’t need to do). 

  3. Not machine precision, but machining precision. 

  4. Current machine intelligence is very different from human intelligence because it is disembodied. In particular, the rate at which computers can talk to each other far exceeds that with which humans can interact. That is what makes our intelligence special. I’ve referred to it as ‘locked in’ intelligence in a previous blog. Bostrom doesn’t seem to acknowledge this. He talks about collective emulation of brains as one potential future and segues between embodied and disembodied intelligences opportunistically as the argument requires. A lot of what we do in our own brains in terms of communication (which includes second guessing those around us and what they might expect us to say) is not necessary for current machine intelligences. They can simply broadcast their state to one another. They also have access to a far greater amount of data than we do. Inter-machine communication is an area where machines already completely outperform us, and arguably have done so since newswire services introduced multiplex devices in the 1920s. 

  5. If this weren’t true then we could perhaps more easily upload our brains, making the Singularians very happy. 

  6. This embodiment factor could be measured as computation rate divided by communication rate. Computation rates are sometimes measured in flops (floating point operations per second) and communication rates in baud (or bits per second). On a modern computer a floating point number is often represented with 64 bits, so a compute-to-communicate ratio of 1/64 would give an entity the ability to continuously share the result of everything it computes. Any ratio higher than this implies that there is some form of computational backlog. A computer with 10 gigaflops of computation and a 1 gigabit connection to the internet has a ratio of about 10. Estimating flops for the human brain is a fraught process, although you often see estimates in the petaflop range. Assuming a brain has 1 petaflop of compute capacity and we have the ability to communicate at 100 bits per second, that gives us an embodiment factor of around 10^13, vastly higher than our computers. In practice this means that, relative to computers, we spend a lot of time thinking before speaking. 

  7. To give a better technical sense of this argument: it seems that Bostrom accepts that intelligence is multifaceted, so his idea of superintelligence is a ‘dominating’ intelligence (one that outperforms us in all respects). But there is interconnection between the different facets of our intelligence. This means it might be impossible to dominate human intelligence: improvement in one characteristic may lead to a deterioration in another. In multiobjective optimisation this phenomenon is known as a Pareto front. In practice, whether we can actually be dominated or not seems to require more precise definitions of intelligence and a better characterization of our own abilities. 

  8. Modelling in the sense of predictive modelling. It may be to understand intent and, depending on context, to determine our individual collaborative or competitive response. Planning into the future also requires a model of ourselves and how we are likely to respond to given circumstances. This model might be a candidate for our perceived sense of self. 

  9. I’ve a lot of sympathy with the notion that our own intelligence is the result of some form of evolutionary singularity, brought about by sexual selection being dependent on enhanced intelligence, in particular the ability to plan and model social context. Since this was a competitive endeavor, and since at each stage, by evolving our own intelligence, we increased the complexity of our social environment, it follows that the selective pressures increased. I think this concept is more involved than the concept of Fisherian runaways, because the demands of planning within the environment would have increased as the intelligence of the individuals increased. In a classical Fisherian runaway (such as the peacock’s tail) the interaction between the characteristic developed and the environmental complexity is simpler. 

  10. As a result the debate transcends technology, philosophy, psychology and social science. 

  11. In practice this means that our choice of model abstraction depends on the timescales over which we wish to compute, for example we use different models for our climate predictions versus our weather predictions despite the underlying physical system being the same. 

  12. Singularities are normally unsustainable in practice because the mechanisms they exploit at launch become exhausted and saturation occurs. There may be an exponential explosion, but if it encounters problems of exponential complexity the irresistible force meets the immovable object and a stalemate (or saturation) results. 

  13. On page 123 Bostrom describes how an uncertain “sensible Bayesian agent” that had actually achieved its goals might continue to use resources to achieve them, thereby destroying the universe: “On the contrary: if the AI is a sensible Bayesian agent, it would never assign exactly zero probability to the hypothesis that it has not yet achieved its goal—this, after all, being an empirical hypothesis against which the AI can have only uncertain perceptual evidence. The AI should therefore continue to make paperclips in order to reduce the (perhaps astronomically small) probability that it has somehow still failed to make a million of them, all appearances notwithstanding”. The notion Bostrom refers to comes from subjective probability and is known as “Cromwell’s rule”. It suggests that Bayesian statisticians should never set the prior probability of a particular outcome to zero (“I beseech you, in the bowels of Christ, think it possible you may be mistaken.” — Oliver Cromwell in a letter to the synod of the Church of Scotland, 1650). This is a maxim that we could all do well to bear in mind. However, its originator, Dennis Lindley (Section 6.8 of “Understanding Uncertainty”, also pg 104 of “Making Decisions”), had it in mind for prior probabilities; the idea is mis-deployed here. 

  14. Aenesidemus, who took skeptic philosophy to extremes, realized that the right response to uncertainty was to exist in a state of “suspended judgment” where no action is taken. This is known as epoché, and it is absolutely correct. The same notion emerges mathematically in stochastic optimal control. In practice, as you wait and acquire more data your doubt reduces, and because of finite time horizons you are eventually forced into a decision. The point at which your judgment is reanimated depends on the time horizon over which you are operating (if it’s long term you suspend decisions for longer) and on the cost of both action and a mistaken decision. These ideas are explored in the game of Kappenball that you can read about here and download for the iPhone. 

  15. An “idiot savant” is a person with limited intelligence who has a particular gift (insight or memory etc). It therefore seems appropriate to coin “savant idiot” for the idea of a “superintelligence” that does particularly stupid things. I’m not sure of the correctness of the French here; Google Translate offers “génie fou” as an alternative. 

  16. Fleming was famous for this technique: we join Bond as he’s issued with his Walther PPK, replacing his Beretta:
    ‘How does the Armourer suggest I carry it?’
    ‘Berns Martin Triple-draw holster,’ said Major Boothroyd succinctly. ‘Best worn inside the trouser band to the left. But it’s all right below the shoulder.’
    Very persuasive, isn’t it? Unfortunately it turns out that the ‘Berns Martin Triple-draw holster’ is a revolver holster, and the Walther PPK is an automatic. Of course you have to know the technical detail to see that the narrative is false. The quote is from page 27 of ‘Dr No’. 

  17. Bond’s nemesis and head of the organization SPECTRE. 

  18. For example, can a human working in collaboration with a computer beat the best pure computer players at Go or Chess? 

  19. Perhaps this impression comes from very large scale projects like the Large Hadron Collider, or the manned mission to the moon. It’s true that they came about through a large injection of resource, but only when the goal was tangibly within reach. El Dorado had been sighted; all that remained was the final assault on its walls. In artificial intelligence El Dorado remains as elusive as Atlantis. It is possible that we’ll stumble onto it accidentally, but we don’t yet have a map of its location or even an understanding of whether we’re on the right continent. 

  20. Columbus went to his grave not realizing that the land he’d discovered was previously unknown to Europeans. He still thought he’d reached Asia. If you think it’s odd that we should credit the discovery of a continent to someone who wasn’t the first person there, wasn’t even the first European there, and didn’t even know he had discovered it, then you should read about Stigler’s law of eponymy and this list of examples of the law (not invented by Stigler), showing that researchers share this unusual characteristic with explorers. 

  21. From the Earth to the Moon by Jules Verne. A classic book. I particularly enjoyed the fact that they took some chickens with them, which, given the understanding at the time, seems very sensible and practical. 

  22. Of course Moore’s law has meant that our improvements in speed have been super-linear (exponential) so far, but eventually the limits of computation will be reached. Even then it still took us around 20 years to go from beating humans at Chess to beating humans at Go, largely because, while both games are exponentially complex, Go has a much larger branching factor (roughly ten times as big), and since the game-tree complexity grows as the branching factor raised to the power of the game depth, that difference increases the complexity dramatically, as the rough sketch below illustrates. 
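    A back-of-the-envelope illustration, using commonly quoted ballpark figures for branching factor and game depth (rough estimates, not exact values):

    ```python
    import math

    # Game-tree size grows roughly as branching_factor ** depth, so a
    # larger branching factor is amplified by the depth of the game.
    def log10_tree_size(branching_factor, depth):
        return depth * math.log10(branching_factor)

    print(f"chess (b~35, d~80):   ~10^{log10_tree_size(35, 80):.0f}")    # ~10^124
    print(f"go    (b~250, d~150): ~10^{log10_tree_size(250, 150):.0f}")  # ~10^360
    ```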

  23. By work here I mean outperform humans (or perform similarly to humans). They often worked in the other sense, but were displaced by methods that were as performant and easier to understand from a modelling perspective.