Ah, no. At best they prove that we can't simulate our own universe. They don't prove ours isn't simulated or that other, higher fidelity simulations can't simulate similar ones.
The idea that the universe is a simulation proceeds as follows:
(1) Person notices that computer simulations are getting increasingly powerful. Maybe we will be able to simulate something like the universe one day which will have life in it.
(2) If simulating the universe is so easy and inevitable, what are the odds that we are at the top level?
The idea in the article would refute the inductive step.
> The idea in the article would refute the inductive step.
No it doesn't. The article describes a proof that it is impossible for a computer to simulate this physical universe with perfect accuracy; but, that's not actually a problem for Nick Bostrom's simulation argument. For the simulation argument to work, you don't need to simulate the universe with perfect accuracy – just with sufficient accuracy that your simulated people can't distinguish it from a real one. And this proof isn't about "ability to simulate a universe to the point the simulated people can't tell that it is a simulation", it is about "ability to simulate a universe with perfect accuracy". So the proof isn't actually relevant to that argument at all.
Please explain how to simulate a universe which is indistinguishable from a real (non-simulated) one but which is not accurate according to the rules of the article.
Does the article propose anything empirically testable?
I mean, suppose we are actually in a computer simulation: what observations could we perform which, according to the rules of this article, would show that we were in one and not the “real” world?
Addendum: from what I understand, the article’s proof relies on computational quantum gravity having a Gödel sentence. Now, quantum gravity is in practice, as far as we know, experimentally untestable (the distinctive phenomena it predicts only occur at scales far beyond our present technological ability to explore), and who can say if that will ever change. So, is it possible for a computer to simulate humanity as it currently exists, such that the simulated humans couldn’t detect they were simulated? I don’t know; but what I can confidently say is that this research has nothing useful to say about that question, because this is theoretical quantum gravity research, and I’m not aware of any good reason to believe quantum gravity has any relevance to answering that specific question. This research claims to show computers are incapable of simulating aspects of reality which are empirically unavailable to us; even if the research is right, it makes zero difference to the question of whether the actual empirical experiences we do have are simulated or not.
One simulator would just "run the laws of our universe" (I don't care to make that precise, since presently it's a trivial statement, but hopefully it's clear that I mean to distinguish it from running a computer), but then that sort of trivializes the idea of "simulation".
Maybe it's possible for us to create a sub-universe of ours with a quantum computer such that we view the entities as a part of our universe, but the entities cannot be aware of us. (This is the insects at the surface of a pond idea, unaware of the dimension above or below.)
At that point the whole idea becomes quite removed from what most people would think of when asked to consider if the universe is a simulation.
To clarify: without being able to simulate the universe from within the universe itself (i.e. needing to resort to some "outside" higher-fidelity universe), then the word "simulation" becomes meaningless.
We could just as easily refer to the whole thing (the inner "simulation" and the outer "simulation") as just being different "layers of abstraction" of the same universe, and drop the word "simulation" altogether. It would have the same ontology with less baggage.
According to the current mathematical model we use to define the universe, built from Einstein’s field equations, we’re not in a simulation.
That model is significantly misaligned with human perception regarding the start and edges of spacetime, so it’s completely valid to point out that it’s just a model (and that we might be in a simulation).
"We have demonstrated that it is impossible to describe all aspects of physical reality using a computational theory of quantum gravity," says Dr. Faizal. "Therefore, no physically complete and consistent theory of everything can be derived from computation alone. Rather, it requires a non-algorithmic understanding, which is more fundamental than the computational laws of quantum gravity and therefore more fundamental than spacetime itself."
Seems like quantum gravity theory might be missing something, no?
It's such a silly idea that whatever is simulating us would be in any way similar or care about what's possible in our universe. It's like a Game of Life glider thinking it can't be simulated because someone would have to know what's beyond the neighbouring cells, and that's impossible! But the host universe just keeps chugging along, unimpressed by our proofs.
It is still just a collection of inanimate parts. At no point does it suddenly come to possess any properties that cannot be explained as such.
Now, apply the same logic to a computer and explain how AGI will suddenly just "emerge" from a box of inanimate binary switches --- aka a "computer" as we know it.
Regardless of the number of binary switches, how fast they operate or how much power is consumed in its operation, this inanimate box we call a "computer" will never be anything more than what it was designed to be --- a binary logic playback device.
Thinking otherwise is not based on logic or physics.
I think we might be using "emergence" differently, possibly due to different philosophical traditions muddying the waters.
I'm going to stick purely to a workable definition of emergence for now.
Also, let me try a purely empirical approach:
You said the car "never possesses any properties that can't be explained as a collection of parts." But consider: can that pile of parts on the workshop floor transport me over the Autobahn to Munich at 200 km/h?
We can try sitting on top of the pile while holding the loose steering wheel up in the air, making "vroom vroom, beep beep" noises, but I don't think we'll get very far.
On the other hand, once it's put (back) together, the assembled car most certainly can!
That's a measurable, testable difference.
That (the ability of the assembled car to move and go places) is what I call an emergent property. Not because it's inexplicable or magical, but simply because it exists at one level of organization and not another. The capability is fully reducible to physics, yet it's not present in the pile.
parts × organization → new properties
That's all I mean by emergence. No magic, no strong metaphysical claims. Just the observation that organization matters and creates new causal powers.
Or, here's another way to see it: Compare Differentiation and Integration. When you differentiate a formula, you lose terms on the right hand side. Integration brings them back in the form of integration constants. No one considers integration constants to be magical. It was merely information that was lost when we differentiated.
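To make that concrete with a trivial worked example (my own illustration, written in LaTeX):

    \frac{d}{dx}\,(x^2 + 5) = 2x, \qquad \int 2x\,dx = x^2 + C

The specific value 5 is gone after differentiating; integrating only recovers it up to the constant C, which is exactly the "lost information" in the analogy.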
86 billion neurons, 100 trillion connections, and each connection modulated by dozens of different neurotransmitters and action potential levels and uncounted timing sequences (and that's just what I remember off the top of my head from undergrad neuroscience courses decades ago).
It hasn't even been done for a single pair of neurons because all the variables are not yet understood. All the neural nets use only the most oversimplified version of what a neuron does — merely a binary fire/don't fire algo with training-adjusted weights.
Even assuming all the neurotransmitters, action potentials, timing sequences, and internal biochemistry of each neuron type (and all the neuron-supporting cells) were understood and simulatable, if every one of the roughly 250 million GPUs shipped in 2024 [0] simulated one neuron and all its connections, neurotransmitters and timings, it would still take 344 years of shipments at that rate to accumulate the 86 billion GPUs needed to simulate one brain.
Even if the average connection between neurons is one foot long, to simulate 100 trillion connections is 18 billion miles of wire. Even if the average connection is 0.3mm, that's 18 million miles of wire.
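For anyone who wants to check the arithmetic, here is a minimal sketch in Python using the same figures as above (the neuron/connection counts and the ~250 million GPUs per year are this comment's assumptions, not measurements):

    # Back-of-the-envelope check of the numbers above; all constants are the
    # comment's assumed figures, not measured values.
    NEURONS = 86e9           # neurons in a human brain
    CONNECTIONS = 100e12     # synaptic connections
    GPUS_PER_YEAR = 250e6    # rough GPUs shipped in 2024 [0]

    # Years of GPU shipments to get one GPU per neuron
    print(NEURONS / GPUS_PER_YEAR)                    # ~344 years

    # Wire length if every connection were a physical wire
    FEET_PER_MILE = 5280
    print(CONNECTIONS * 1 / FEET_PER_MILE)            # ~18.9 billion miles at 1 ft each

    METERS_PER_MILE = 1609.344
    print(CONNECTIONS * 0.3e-3 / METERS_PER_MILE)     # ~18.6 million miles at 0.3 mm each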
I'm not even going to bother back-of-the-envelope calculating the power to run all that.
The point is it is not even close to happening until we achieve many orders of magnitude greater computation density.
Will many useful things be achieved before that level of integration? Absolutely, just these oversimplified neural nets are producing useful things.
But just as we can conceptually imagine faster-than-light travel, imagining full-fidelity human brain simulation (which is not the same as good-enough-to-be-useful or good-enough-to-fool-many-people) is only maybe a bit closer to reality.
Well, the amount of compute is certainly finite in this era. 250 million GPUs in a year is a big number, but clearly insufficient even for current demand from LLM companies, who are also buying up all available memory chips and rapidly driving up prices in general, so the current situation is definitely finite and even limited in very practical ways.
And, considering the visible universe is also finite, with finite amounts of matter and energy, it would follow ultimate compute quantity is also finite, unless there is an argument for compute without energy or matter, and/or unlimited compute being made available from outside the visible universe or our light cone. I don't know of any such valid arguments, but perhaps you can point to some?
There's also no proof that intelligence cannot be produced by an algorithm. Given the evidence so far (LLMs already seem able to beat average humans at most tests and exams), it seems quite likely that it can.
As the simplest theory, my default position is that the universe is computable and that everything in the universe is computable. Note that those are not the same thing.
Some intuition:
1. If the universe contains an uncomputable thing, then you could use it to build a super-Turing computer. This would only make CS more interesting.
2. If the universe extends beyond the observable universe, is infinite, exists on some level, and in some sense all moves forward together (not necessarily in time, since time is uneven), then that's an infinite amount of information, which can never be stepped forward at once (so it's not computable). The paper itself touches on this, requiring that time not break down. Though that may be the case, the universe does not "step" infinitely much information.
One quick aside: this paper uses a proof with model theory. I stumbled upon this subfield of mathematics a few weeks ago, and I deeply regret not learning about it during my time studying formal systems/type theory. If you're interested in CS or math, make sure you know the compactness theorem.
Do you mean like ghosts or like quantum randomness and Heisenberg's uncertainty principle?
We cannot compute exactly what happens because we don't know what it is, and there's randomness. Superdeterminism is a common cop-out to this. However, when I am talking about whether something is computable, I mean whether that interaction produces a result that is more complicated than a Turing-complete computer can produce. If it's random, it can't be predicted. So perhaps a more precise statement would be: my default assumption is that "similar" enough realities or sequences of events can be computed, given access to randomness, where "similar" is defined by the inability to distinguish the simulation from reality by any means.
The last digit of pi doesn't exist since it's irrational. Chaitin's constant, later busy beaver numbers, or any number of functions may be uncomputable, but since they are uncomputable, I'd be assuming that their realizations don't exist. Sure, we can talk about the concept, and they have a meaning in the formal system, but that's precisely what I'm saying: they don't exist in this world. They only exist as an idea.
Say for instance that you could arrange quarks in some way, and out pops, from the fabric of the universe, a way to find the next busy beaver numbers. Well, we'd be really feeling sorry then, not least because "computable" would turn out to be a misnomer in the formalism, and we'd have to call this clever party trick "mega"-computable. We'd have discovered something that exists beyond turing machines, we'd have discovered, say, a "Turing Oracle". Then, we'd be able to "mega"-compute these constants. Another reason we'd really feel sorry is because it would break all our crypto.
However, that's different than the "idea of Chaitin's constant" existing. That is, the idea exists, but we can't compute the actual constant itself; we only have a metaphor for it.
What's the difference between a simulation and a non-simulation? Nothing, except where the simulation can be broken.
Can we accurately simulate a smaller universe in this universe? If I understand correctly, according to this paper the answer is "no". But how would we determine that the simulation is inaccurate? Either we'd have to know what is accurate (and thus already have a correct simulation), or we'd be unable to distinguish the inaccuracy from randomness (the simulation already won't perfectly predict a small part of the real universe due to such randomness, so you can't point to a discrepancy). What does it mean for a simulation to be “inaccurate”?
Also, you don't need to simulate the entire universe to effectively simulate it for one person, e.g. put them in a VR world. From that person's perspective, both scenarios are the same.
At best, this proves that the imperative paradigm of computation, with any set of instructions close to what we currently use, cannot adequately simulate a universe. I like this, lest we forget there are, and there can be, many more ways to compute, fundamentally different from what we currently consider the typical one.
Ok, but the simulation could easily have been written to include an adjunct professor at UBC’s much-less-well-known Okanagan campus who isn’t actually that great at Gödeling.
Yup, if we're in a simulation pretty much all bets are off. The Mandela Effect could merely be update patches. The simulation could even patch in a proof that our world is not a simulation.
Let’s assume it’s simulations all the way down and we exist in plane P=n. The question is: are we at n=0?
Looks like this result says we can’t simulate our plane in a computer. But the stuff in that simulation exists in P=n+1. So maybe the conclusion is “you can’t simulate n from within n+1”, which means we can’t simulate our own plane, let alone our potential parent, and it doesn’t mean we don’t have one.
The simulation hypothesis runs into the Exponential Resource Problem:
To simulate a system with N states/particles with full fidelity, the simulator needs resources that scale with N (or worse, exponentially with N for quantum systems).
This creates a hierarchy problem:
- Level 0 (base reality): has X computational resources
- Level 1 (first sim): needs ~X resources to simulate something as rich as Level 0, but exists within Level 0, so it can only access some fraction of X
- Level 2: would need even more resources than Level 1 has available.
Every simulation layer must have fewer resources than the layer above it (since it is contained within it), yet would need at least as many resources as that layer to simulate it with full fidelity. This is mathematically impossible for high-fidelity simulations.
This means either:
a) we're in base reality - there's no way to create a full-fidelity simulation without having more computational power than the universe you're simulating contains
b) simulations must be extremely "lossy" - using shortcuts, approximations, rendering only what's observed (like a video game), etc. But then you must answer: why do unobserved quantum experiments still produce consistent results? Why does the universe render distant galaxies we will never visit?
c) the simulation uses physics we don't understand - perhaps the base reality operates on completely different principles that are vastly more computationally efficient. But this is unfalsifiable speculation.
This is also known as the "substrate problem": you can't create something more complex than yourself using only your own resources.
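As a toy numerical sketch of that hierarchy/substrate problem (the fraction f and the "resources scale linearly" assumption are made up for illustration, not taken from any paper):

    # Toy model: each level can devote at most a fraction f of its own resources
    # to hosting the next level down, but a full-fidelity simulation of a level
    # needs at least that level's total resources. Illustrative numbers only.
    def available(level, base=1.0, f=0.5):
        # Resources usable at a given nesting level (level 0 = base reality)
        return base * (f ** level)

    for level in range(1, 5):
        needed = available(level - 1)   # must match the host level's full resources
        has = available(level)          # but only a fraction is actually available
        print(f"level {level}: needs >= {needed:.3f}, has {has:.3f}")

The gap only widens with depth, which is the point of the hierarchy argument above.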
Even more devastating is the CASCADING COMPUTATION PROBLEM.
Issue: it is not just that you need resources proportional to the simulated system's complexity, you need resources to compute every state transition.
The cascade:
a) simulated universe at Time T: has N particles / states
b) to compute time T+1: the simulator must process all N states according to physics laws
c) that computation itself has states: the simulator's computation involves memory states, processor states, energy flows. Let's call that M computational states
d) but M > N: the simulator needs additional machinery beyond just representing the simulated states. It needs the computational apparatus to calculate state transitions, store intermediate values, handle the simulation logic itself.
The TIME PROBLEM
There's also a temporal dimension:
- one "tick" of simulated time requires many ticks of simulator time (to compute all the physics)
- if the simulator is itself simulated, its ticks require even more meta-simulator ticks
- time dilates exponentially down the simulation stack
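A quick sketch of that blow-up (the per-level overhead k is an assumed, purely illustrative number):

    # If one simulated tick costs k host ticks, a stack d levels deep costs
    # k**d base-reality ticks per innermost simulated tick. k is illustrative.
    k = 1000
    for depth in range(5):
        print(f"depth {depth}: {k ** depth:,} base ticks per simulated tick")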
So either:
a) we're in base reality, or
b) we're in a very shallow simulation (maybe 1-2 levels deep max), or
c) the sim uses radical shortcuts that should be observable
Another thing to think about: if we are here, and assuming we experience things (that is, we are not biological robots), and if the universe is indeed a simulation, then how are our consciousnesses any different from the ones that exist in the parent universe?
Anyone that has had a really vivid dream will tell you the brain needs no help in creating a simulation.
The idea that no computer or system could possibly be powerful enough for the complexities of a simulation is a very trivial way of looking at things, and it overlooks something that is readily available.
I have long held the theory that the brain is very capable of filling in all the complex details required for a simulation.
Isn't there a deeper philosophical question on what it means to be a simulation?
Is the constraint of the "simulation" definition that the thing "simulating" the universe would be a computer less complex than the universe itself?
Consider a game world in a computer: we call it a simulation, and it is, but is it any less real than our reality, when thinking in terms of realities? In other words, we feel like our reality is more real because the game is less complex; we understand it fully, and it is run using mechanisms we know and understand. So what would make us think some reality is more real than our own? If we didn't understand how it works? If its workings and rules were more complex than ours?
Taking a step back, are we as humans even capable of understanding a reality that isn't ours, even as a concept? Things like time, space, and fundamental logic are properties of a reality. I can't imagine a reality without them (at least time and space). We keep thinking in terms of "another place with time and space"; how about a place with just one, or neither? Imagine a computer program trying to understand a reality that isn't memory and clock rate. Memory isn't space as in the space we know, it is capacity. Clock rate isn't time as in the time we know, but it is very similar. In an SMP system you have "clock rate" spread across cores and processors, so it is a concept different from our concept of linear time. If our reality is in an SMP, there would be multiple separate parallel but converging timelines; but then again, is déjà vu speculative/preemptive execution?
I know I'm all over the place with this post, but my goal is to question the entire concept of a "simulation". Is it simply a relativistic and human-centric way of expressing our perception of reality relative to other realities? When we dream, is that dream world any less real than ours? Certainly, for us it is no different than any unreality, but that's only for us.
I'm thinking the whole concept of "simulation" stops making sense if there are multiple realities (which I'm only talking about hypothetically, I don't actually believe that). In terms of a single reality within the same time-space and rules of physics and all that, what does it mean for the universe to be a simulation?
With multiple realities, you have to stop presuming things like time and space as we understand them, just the same as time and space in a dream, or in a video game (or any program). Is the world of bits, bytes, processor instructions and memory addresses any less real, or more of a simulation, than ours would be in a multiple-reality scenario?
Consider the very basic assumption of causality, that things originate from somewhere and sometime. If the space-time assumption isn't a given, then the very concept of causality might not apply in some realities, and thus in the relationship between realities; and the whole concept of our reality being a simulation depends on causality being a thing, because we're saying our reality is caused by another reality. For there to be a causal relationship, not only does space-time need to exist, but the two realities need to be in the same space-time reference frame for one to cause the other. But again, we can't assume the rules of causality are the same, or that there isn't some other fundamental element of reality that makes it all work, when talking about inter-reality relationships.
I think we are too tethered to things like mass, energy, time, space, 1+1 resulting in two. What I would like to see explored more (by people smarter than me) is the fundamental element of reality that is information. Before all of those things (time, space, mass, energy, rules, etc.) there is information. It's similar to the realities we create in our computers: they need information to exist first and foremost, and then things can be done with that information, and our own little primitive proto-reality is created. All those other things may look different from the perspective of a computer program, but information, even if transformed in how it is represented and processed, carries through; at least in our simulations (or proto-realities), the information from our reality is the whole point of that sub-reality's existence.
You can infer things about our reality as a computer program if you focus on the information. No matter how well it is described to a computer program, it can look at a picture and think "hmm, an apple", but it simply cannot perceive things as we do. It does not experience time, space, color, or taste like we do.
So, if we consider reality relative to the experience of the observer as defined by the properties of the world they're in, then the concept of a simulation is entirely relative to our experience in our world and its properties. But if our reality is "caused" by and "executed on" (presumptive concepts) elsewhere, then we would need to understand that elsewhere's properties and perceive that super-reality, and only then could we experientially claim that our reality is simulated?
It's a bit like motion and relativity, isn't it? If you can't define the frame of reference, you can't measure the motion in any meaningful way. You can't tell how fast a car is going if you can't define what perspective you're measuring from. That sounds silly at first, until you consider that the entire planet is in motion around the sun, and the solar system is hurtling through the galaxy. Not to mention another car traveling at the same speed would not observe any motion in relation to itself. We're trying to measure simulation, but from our perspective (we're the thing that's "moving", if it were motion); we could potentially measure it from the other reality's perspective, but not without knowing what that reality is.
Can a computer program tell that it is in a computer reality?
Can you write code that can do that? Certainly it can print output that claims as much; we can even simulate the entire computer system within a program running in that system. But it still can't figure out what "space" means or "time" means as we experience it; it can learn about energy, the rules of physics, etc., but it can't experience them. So when it determines that it is in a simulation, its definition of things is still relative to its own experience, so it isn't really determining that it is in a simulation, it is just describing things we told it via information transfer about our reality. When you tell that program "We created your reality", its concept of "created" or "originated" is vastly different from ours, so unless it can test for things it can't even conceptualize, how can it truly tell that it is in a simulation?
Sorry for the really long post! I just wanted to dump my philosophical thoughts on this (and I was bored). I think theoretical physics and philosophy need to work very closely together; questioning philosophical assumptions is important before talking about theoretical physics. The title says "Physicists prove", and that's what I keyed on: you can't prove something whose definition we (at least I?) don't entirely agree on, or haven't resolved. If we can't write a computer program that can prove it is in a computer on its own, how can we prove that we're in a simulation?