If quantum biology is a thing, reality “computes” more efficiently than any computer.
Introduction
To some of us it is obvious that consciousness is not a product of computation, something we know in our hearts beyond doubt. There is nothing more real to me than my own consciousness. It is an example of what Immanuel Kant called a synthetic a priori. We may not know what our individual consciousness is, but we know unshakably that it is not something generated by computation.
Others might cite the results of experiments in parapsychology. If we have premonitions of the future, or if we can know what our distant friends are thinking, then clearly our minds are doing things that computers cannot do. This argument, however, cuts both ways. If our universe is a computer simulation it doesn’t have to be programmed in a way that conforms consistently to causal laws. If the programmer knows what is happening elsewhere (or elsewhen), he might leak that information into some of the simulated brains.
It is common among scientists to think in terms of physical realism, i.e., the hypothesis that physical reality is the only reality. It follows that consciousness is an epiphenomenon generated by the physical brain. If the brain is a computing engine, then it is reasonable to conclude that our multiple consciousnesses may be generated by a much larger computing engine.
To other scientists it is not at all obvious that physical realism is the right way to go. A competing world-view was articulated prominently by William James, rooted in the dualism of Descartes. It is that consciousness is a separate phenomenon, independent of physical reality, and the brain is not the generator but the transducer of consciousness. The function of the brain is to connect the non-physical world of thought to the physical realm, downloading intent and uploading sensation. From this perspective, the idea that we are living in a simulation is nonsensical.
Even for those who believe in physical realism, and whose subjective experience doesn’t support the synthetic a priori, there is a cogent argument from quantum physics that our universe is not a simulation.
Summary
We know that a simulation of the gross aspects of a macroscopic system can run much faster than the system itself. But this is not true of quantum systems. The complexity of computations in classical physics grows only polynomially with the number of particles, but the complexity of quantum mechanical computations grows exponentially. Solving the quantum wave equation for a system as simple as one molecule of H2O is far beyond the capacity of any conceivable digital computer. To the extent that our world depends on details at the quantum level—for example, if you believe at all that quantum biology is a real science—it cannot be simulated with any reasonably sized computer.
We are so used to the idea that “theory is faster and more convenient than experiment” that we may not even recognize this as an assumption. Sometimes, it’s true.
If we can predict an orbit with a formula, it’s much, much faster and more convenient than watching a moon or a planet and tabulating its trajectory. For the sun and one planet, Newton’s gravitational equations can be solved to give the formula for an ellipse, and the planet’s position can be predicted with moderate accuracy using just pencil and paper.
For more complicated problems, with three or more bodies, there is no formula. But the orbit can still be computed second by second. We use the current positions to compute the gravitational fields, use those fields to predict each body's motion over the next second, then update all the positions and recompute the fields from the new positions. Rinse and repeat, second by second. This technique, called numerical integration, was known to Newton 300 years ago. It was tedious to do this calculation by hand, but not nearly as tedious as watching the planets and waiting for them to move. Today, a computer program can do numerical integration in a flash. Big supercomputer programs run for weeks at a time to calculate models of the entire universe with moderate precision.
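As a concrete illustration, here is a minimal sketch of that rinse-and-repeat loop in Python (my own toy example, with made-up masses and units, not code from any real simulation): compute the pull on each body from the current positions, nudge the velocities and positions forward by one small time step, and repeat.

```python
# Toy numerical integration of Newtonian gravity (hypothetical units and values).
import numpy as np

G = 1.0                                    # gravitational constant, toy units
mass = np.array([1000.0, 1.0, 1.0])        # a "sun" and two "planets"
pos = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 15.0]])
vel = np.array([[0.0, 0.0], [0.0, 10.0], [-8.0, 0.0]])

def accelerations(pos):
    """Each body is pulled by every other body: a = G * m * r / |r|^3."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

dt = 0.001                                 # the "one second" time step
for step in range(100_000):
    vel += accelerations(pos) * dt         # current fields -> updated motions
    pos += vel * dt                        # updated motions -> new positions

print(pos)                                 # where the bodies end up
```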
It’s our expectation that — even if computer programs are complex and a bit slow — they are much, much faster than watching the galaxies or even the planets move. Of course, theory is always faster and more convenient than experiment. Except when it’s not.
Quantum calculations are complex on a scale that leaves classical systems in the dust
The equations of quantum mechanics are notoriously complex, but theoreticians routinely solve them for interactions of two particles. In classical mechanics, the three-body problem is, in general, chaotic, meaning that the orbits never settle into repetitive motion. The quantum three-body problem is free of chaos in this sense; there is a stationary ground-state solution to the Schrödinger equation. But a different difficulty arises: the three-body problem requires enormously more computation than the two-body problem. The equation for the wave function of a helium atom (two electrons and a nucleus) is just barely within the range of modern computing power.
And with each additional particle, solving the wave equation requires exponentially more computing power. In classical mechanics, calculating 4 planets takes only a few times longer than calculating 2 planets. But in quantum mechanics, it takes a billion billion times as long to calculate 4 electrons compared to 2. Why is this? Quantum equations aren't about particles; they are about configurations. The configuration for one particle lives in a 3-dimensional space, but the configuration for two particles lives in a 6-dimensional space, 3 dimensions for each particle. When we get to four electrons, we need a 12-dimensional space, and each dimension must be divided into roughly 1,000 pixels to get good resolution. So each additional particle multiplies the difficulty of the problem by 1,000 × 1,000 × 1,000 = 1 billion.
A small molecule might have a few dozen electrons, and it would take a computer much larger than the universe to calculate it.
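The arithmetic is easy to check. Here is a back-of-the-envelope sketch in Python (my own illustration of the scaling argument above; the 1,000-point resolution and the particle counts are the assumptions just stated, not measured facts):

```python
# Grid points needed to store a many-particle wave function, assuming
# 1,000 sample points along each of the 3 dimensions per particle.
resolution = 1_000

for n_particles in (1, 2, 4, 30):          # 30 ~ "a few dozen electrons"
    dims = 3 * n_particles                 # configuration space is 3n-dimensional
    grid_points = resolution ** dims       # 1,000^(3n) sample points
    exponent = len(str(grid_points)) - 1   # exact power of ten, since 1,000 = 10^3
    print(f"{n_particles:>2} particles: {dims:>2}-D grid, 10^{exponent} grid points")
```

Going from 2 electrons to 4 takes the grid from 10^18 points to 10^36, the factor of a billion billion claimed above; and a few dozen electrons already needs far more grid points than there are atoms in the observable universe (roughly 10^80).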
You have heard that quantum mechanics makes very precise predictions
Quantum physics has a reputation for making exact predictions that have been verified by experiment. All the exact predictions of QM are based on two-particle systems. For example, all the computations of high energy physics that are used to predict what happens inside particle accelerators are based on two particles colliding. Larger systems are solved only with a gross approximation, usually the assumption that individual electrons are not entangled with one another. We can calculate the hydrogen atom exactly (one electron, one proton), and the helium atom (three particles) with a stretch of our exact methods, but every larger atom is based on approximations and assumptions.
The theory of bonding that underlies our understanding of molecules is rooted in calculations that are based on the (quantum) hydrogen atom. We pretend that electrons don’t interact with one another, and then we can solve the equations. Supplemented by detailed empirical measurement, this becomes a reasonably useful approximation.
Solid state physics routinely calculates wave functions for clouds of huge numbers of particles, but the calculation is based on the assumption that each electron sees only the lattice of positive nuclei and not the sea of other electrons. It works pretty well to the extent that the other electrons are spread out and thus push equally from all directions. We can do better by taking the electron cloud calculated in this way and using it to calculate a corrected wave function. This is called perturbation theory, and it can be iterated indefinitely, with each approximation used to produce the next, better one. When this is done, the process converges on a solution, but it is not the exact solution that would be obtained if the interaction of every electron with every other were fully accounted for.
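To make the iterative idea concrete, here is a minimal sketch in Python of a self-consistent mean-field calculation of the kind described above (my own toy example: two electrons in a 1-dimensional harmonic well with a softened repulsion, not drawn from any real materials calculation). Each electron sees only the averaged charge cloud of the other, so a one-electron problem is solved over and over until the cloud stops changing.

```python
# Toy self-consistent ("Hartree") iteration: each electron feels only the
# averaged cloud of the other, so we repeatedly solve a one-electron problem
# instead of the exact two-electron problem. All parameters are made up.
import numpy as np

n, half_width = 400, 10.0
x = np.linspace(-half_width, half_width, n)
dx = x[1] - x[0]

# One-electron pieces: finite-difference kinetic energy + harmonic trap
kinetic = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * dx**2)
v_trap = 0.5 * x**2

def ground_state(v_total):
    """Lowest eigenstate of kinetic + potential, normalized on the grid."""
    _, vecs = np.linalg.eigh(kinetic + np.diag(v_total))
    psi = vecs[:, 0]
    return psi / np.sqrt(np.sum(psi**2) * dx)

def mean_field(density):
    """Softened Coulomb repulsion from the other electron's charge cloud."""
    return np.array([np.sum(density / np.sqrt((xi - x)**2 + 1.0)) * dx for xi in x])

psi = ground_state(v_trap)                 # zeroth approximation: ignore repulsion
for iteration in range(200):
    psi_new = ground_state(v_trap + mean_field(psi**2))
    change = np.max(np.abs(psi_new**2 - psi**2))
    psi = psi_new
    if change < 1e-8:                      # the electron cloud has stopped changing
        print(f"converged after {iteration + 1} iterations")
        break
```

The shortcut here is exactly the independent-electron assumption described above: what the converged answer leaves out is the entanglement between the two electrons, which a brute-force solution would have to capture with a grid over the full two-particle configuration space.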
A quantum system “calculates” its own state far more efficiently than a computer can
We start to see that nature is one huge quantum computer, performing parallel calculations far faster and more efficiently than we can solve the same system using our equations. This is an extraordinary fact. We may have exactly the right equation, but we can't solve it exactly, so we look to nature's matter computer to give us exact answers. This is a case where experiment is more efficient than theory. The experiments are faster, more convenient, and easier. In fact, solving the equations exactly is "impossible" with any of the computing tools that we know how to build.
Every quantum system of more than three particles “simulates itself” far more efficiently than we can devise a computer program to simulate it.
For those of us seeking to understand the physics of the universe, this is the ultimate cosmic joke. We have equations that we believe (with good reason) to be the basis of all physical interactions at "low energy", where by "low" we mean up to and including energies reached at the center of the sun. And yet, we cannot know for sure, because we don't know what predictions these equations make; that is, we cannot solve them.
As far as we know, the universe is working out the consequences of these equations in the most efficient possible way, far more efficient than any supercomputer we can program. In fact, the universe is working out answers far more efficiently than a hypothetical digital computer the size of the universe.
Simulating a universe: Is detail at the quantum level really necessary?
Maybe the approximations (independent electrons) on which chemistry is based are good enough to compute bulk behaviors of materials. Maybe we don’t have to compute the detailed behavior of every particle in the sun to know its average brightness. Maybe the solar flares and sunspots can be approximated well enough with random number generators.
Maybe.
But living systems seem to depend on quantum mechanics at the level of single molecules. There are molecular machines — single molecules that operate with extraordinary intelligence that we don’t understand. It seems a sure bet that these have evolved to exploit the full potential of quantum physics.
- DNA polymerase is a single molecule that crawls along a single strand of DNA and pulls the right nucleotide base out of the surrounding fluid to pair with the nucleotide that it finds to be next along the chain.
- Homology-directed DNA repair is the process of filling in a gap in a section of a chromosome that is severely damaged. There are molecules that seek out the undamaged sister chromatid (or homologous chromosome) for reference, using the cell's paired copies of its DNA as a backup system. The whole process of finding the corresponding region, reading its information, and using it to repair the break is accomplished by evolved molecular machines.
- The Golgi apparatus, present in every eukaryotic cell, tags individual molecules with a destination address and guides each molecule to the place it is needed.
- Photosynthesis is the most efficient conversion of light energy to chemical energy known to man. It is accomplished by the molecule called chlorophyll, using quantum tricks.
- The mysterious, intelligent behavior of single biomolecules is likely mediated by the quantum properties of water, which can support ordered structures, induced by molecular and surface interactions, that extend roughly 100 micrometers from their source. This is a scale commensurate with the size of a typical eukaryotic cell.
Brains are not static computation engines. The brain adapts to the way in which it is used, growing new capacity in areas where past experience has taught it to anticipate future demands. How this works is not understood, but it is easy to believe this ability invokes mechanisms at least as sophisticated as DNA repair.
A brain is not a neural network. Artificial neural networks model each neuron's firing with a simple numerical function, supplemented here and there by pseudo-random noise. The behavior of real neurons may look partially random to us, simply because we don't understand them; but we should not be so arrogant as to assume that the detailed molecular interactions within each neuron are adequately modeled by simple functions plus randomness. Neural networks are generally structured with a pyramid shape, summarizing a large amount of input data in a single output. Neural networks capable of artificial general intelligence have not, to my knowledge, been realized.
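For comparison, here is a minimal sketch in Python of the pyramid-shaped artificial network described above (my own toy, with made-up layer sizes; nothing here models any real brain): a thousand inputs funnel down to a single output through fixed arithmetic, with pseudo-randomness used only to set the weights.

```python
# A toy "pyramid" network (hypothetical sizes): 1,000 inputs funnel down to
# one output through fixed matrix arithmetic.
import numpy as np

rng = np.random.default_rng(0)        # pseudo-randomness only to set the weights
layer_sizes = [1000, 100, 10, 1]      # the pyramid: wide input, single output
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Deterministic forward pass: weighted sums and simple nonlinearities."""
    for w in weights:
        x = np.tanh(x @ w)            # each "neuron" is just tanh of a weighted sum
    return x

sensory_input = rng.standard_normal(1000)
print(forward(sensory_input))         # a single number summarizing 1,000 inputs
```

Once the weights are set, the same input always produces the same output; whatever a real neuron is doing with its internal molecular machinery, it is surely richer than this.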
Stuart Kauffman has inspired and interpreted research suggesting that quantum criticality in biomolecules is an evolved adaptation. His idea is that, while human-designed machines are large enough to make quantum fluctuations irrelevant, evolution-designed molecular machines seem to be consistently poised on a quantum knife edge, as if nature wanted “quantum randomness” to determine macroscopic outcomes. There is a hint here that what we regard as “randomness” is not really random in biological systems. This is not physically unreasonable. Heisenberg’s “randomness” depends on the absence of long-range correlations, and Kauffman argues that biological systems are evolved to use large-scale quantum entanglement that we don’t understand, so the behavior appears random to us.
QED
To simulate in detail the quantum behavior of a single small molecule would require a computer larger than the universe.
The behavior of the universe can be simulated (with a computer smaller than the universe) only if detailed quantum behavior is irrelevant and can be replaced with pseudo-random algorithms. It is possible, but by no means certain, that non-living systems can be adequately simulated in a way that ignores quantum details. Life, however, seems to depend on large-scale quantum entanglement in ways that we are only beginning to understand. To the extent that living systems have evolved to exploit quantum entanglement, they cannot be simulated by any computer smaller than, or faster than, the living systems themselves.
Fascinating! But rather than simulate the entire universe in all the dimensions you describe, couldn't such software just provide simulated observations to those characters who go looking for quantum behavior? That seems a much smaller computing task.
This is a fascinating idea! Maybe someone didn't bother to simulate the whole universe — all he had to do was simulate my sensory input, and give me the illusion that there is a vast and fascinating universe that obeys physical laws, sort of, and that there are 7.7 billion beings like me on this planet, beings that don't exist even in simulation. Is that you, Ben? Are you one of the beings that only exists inside my private simulation? Ben? Are you there?
🙂 You got me!
Thanks for a genuinely interesting post that answered some questions I’ve been thinking about for a long time!
I have an issue with this argument as "proof" that we aren't living in a simulation, though. It seems to me that it only disproves that we are living in a simulation that runs within our own universe. You've made it clear that the physics of our universe is too complicated to simulate with our computers, even hypothetically with much more powerful computers than we currently have. However, it is generally impossible to reproduce a simulation within itself.
Let’s use Minecraft as an example, because it’s well-known that you can build a functional computer from scratch in the Minecraft world. But you can’t (while confined to the physical rules that are programmed into Minecraft) build a computer that runs Minecraft itself more efficiently than the real-world computer you’re already using. It seems to me that the argument of this post would conclude that Minecraft is therefore not simulated.
Much like the real world is much more complex than Minecraft, there could exist a world external to our universe with similarly greater complexity. The physical laws that can potentially be programmed into Minecraft are limited by the rules of our complex universe, not by the laws that have previously been programmed into Minecraft. Similarly, it doesn't make sense (in my view) to claim that a hypothetical world in which our universe is simulated would be bound by the physical laws within our universe.
I genuinely think this was a good post and I’m making this comment because it’s a topic that interests and excites me, so I hope you won’t take my critique as personally insulting your intelligence (I hope it doesn’t read as insulting, but this is the internet and people can be mean so I thought it best to clarify). I’d be happy to continue discussing this if you’re up for it!
Crystal – My fundamental argument is that you can’t simulate our universe with a Turing machine. (All present computers in use on earth are Turing machines.) Of course, you can simulate our universe with a quantum computer that is the size of our universe. In fact, you could say that the simulation, in this case, would be indistinguishable from a physical realization.
I think that when people from David Chalmers and Elon Musk to Rizwan Virk and Nick Bostrom talk about a simulation, they are talking about Turing machines. Once you hypothesize a much larger universe operating under different rules, such that simulating our universe is practical, of course, all bets are off.
WOW! What a marvelous piece!! Potential counterargument: I'm not sure why you imply that DNA polymerase uses quantum effects? On the other hand, I'm not sure how it could be proved that it's not using them. Nevertheless, thanks a lot for the marvelous ideas and excellent thought structure!!