Imagine in 1925 a world where the automobile is already transforming society, but big promises are being made for things to come. The stock market is soaring and the 1918 pandemic is forgotten. And every major automobile manufacturer is investing heavily in the promise that it will be the first to produce a car that needs no fuel. A perpetual motion machine.

An advert for a fuelless car, as rendered by ChatGPT after being fed this blog post

Well, of course that didn’t happen. But I sometimes wonder if what we’re seeing today, 100 years later, is the modern equivalent of that. This week it was announced that Yann LeCun is leaving the FAIR lab he founded. And one can’t help but wonder if the billions that Zuckerberg is now investing in superintelligence are the modern equivalent of arguing for perpetual motion.

Why might this be the case? Well, that’s where entropy comes in. The second law of thermodynamics tells us that entropy always increases, so we can’t have motion without entropy production. How might we make an equivalent statement about the bizarre claims around superintelligence? Some inspiration comes from Maxwell’s demon, an “intelligent” entity from a thought experiment that appears to operate against the laws of thermodynamics. The inspiration comes because the demon suggests that, for the second law to hold, there must be a relationship between the demon’s decisions and thermodynamic entropy. One of the resolutions comes from Landauer’s principle: the notion that erasing information necessarily dissipates heat.
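Landauer’s principle puts a concrete number on this: erasing one bit of information dissipates at least k·T·ln 2 of heat, where k is Boltzmann’s constant and T the temperature. A quick back-of-envelope check (this is standard physics, not anything specific to the paper):

```python
import math

# Landauer bound: minimum heat dissipated to erase one bit of information.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0            # roughly room temperature, K

E_bit = k_B * T * math.log(2)   # joules per erased bit
print(f"{E_bit:.3e} J per bit")
```

At room temperature this comes out at roughly 3e-21 joules per bit: tiny, but strictly non-zero, which is the whole point.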

I’ve been scratching my head about this for a few years now, but on Monday I shared an arXiv paper1 that captures some of the directions I’ve been taking. By considering an axiomatic game where entropy is maximised, I’ve been exploring how the dynamics emerge. I admit, it seems a long way from my original motivation, but it’s just been one of those threads where, when you start pulling on it, you end up having to go further back. I’m relieved to have the paper out, not because I know it’s all correct; I’m pretty sure I’ve misunderstood lots of things and made some clumsy mathematical errors. But I’m hoping the foundation is still solid enough to build on.

The idea is for an “inaccessible game” which is information-isolated from observers. The assumption is that inside this game the dynamics take the form of “information relaxation”, which turns out to be equivalent to constrained entropy production. The nice thing is that the resulting dynamics turn out to have a structure that resembles GENERIC, a formalism that is at the heart of modern non-equilibrium thermodynamics.
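For readers who haven’t met GENERIC before, its textbook form (this is the standard formalism, not necessarily the exact structure derived in the paper) splits the dynamics into a reversible, energy-driven part and an irreversible, entropy-driven part:

$$\frac{\mathrm{d}x}{\mathrm{d}t} = L(x)\,\nabla E(x) + M(x)\,\nabla S(x), \qquad L = -L^{\top}, \quad M = M^{\top} \succeq 0,$$

with the degeneracy conditions $L\,\nabla S = 0$ and $M\,\nabla E = 0$. Together these give $\dot{E} = 0$ (the flow conserves energy) and $\dot{S} = \nabla S^{\top} M\,\nabla S \ge 0$ (entropy is only ever produced), which is exactly the second-law structure discussed above.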

By building on the similarities to GENERIC, one can show an energy–information equivalence that leads to a principle equivalent to Landauer’s.
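As a toy illustration of constrained entropy production (my own sketch, not the model from the paper): relax a discrete distribution toward maximum Shannon entropy while holding its expected energy fixed. Entropy only ever increases while energy is exactly conserved, mirroring the GENERIC degeneracy conditions, and the fixed point is a Gibbs distribution.

```python
import numpy as np

def relax(p, e, steps=20000, dt=1e-3):
    """Gradient ascent on Shannon entropy, projected so that the
    normalisation sum(p) and the mean energy sum(p*e) are conserved."""
    # Orthonormal basis for the span of the constraint gradients {1, e}.
    ones = np.ones_like(p)
    b1 = ones / np.linalg.norm(ones)
    b2 = e - (e @ b1) * b1
    b2 = b2 / np.linalg.norm(b2)
    for _ in range(steps):
        g = -(np.log(p) + 1.0)              # gradient of -sum(p log p)
        g -= (g @ b1) * b1 + (g @ b2) * b2  # project onto constraint surface
        p = p + dt * g
    return p

e = np.array([0.0, 1.0, 2.0, 3.0])   # "energy" of each state
p0 = np.array([0.7, 0.1, 0.1, 0.1])  # initial distribution, mean energy 0.6
p = relax(p0, e)
# p is now close to the Gibbs distribution with mean energy 0.6:
# log p is an affine function of e.
```

The projection step is what plays the role of the degeneracy conditions here: the entropy gradient is stripped of any component that would change the conserved quantities before the state is updated.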

All this gives me the feeling that the work is on the right track, but it’s clearly still a long way from its destination. My hope is that some friendly folks will be interested enough to take a look and help me tidy it up a bit!

I’ve no doubt that AI technologies will transform our world just as much as the automobile has. But I also have no doubt that the promise of superintelligence is just as silly as the promise of perpetual motion. Maybe the insights from the inaccessible game could provide one way of understanding that.

  1. The software to recreate the simulations from the paper is here: https://github.com/lawrennd/tig-code/