Today: a science-heavy post! Of course, when we leave hypothetical questions and dive into actual science, the possibility of being objectively wrong is a lot higher, so please don’t hesitate to correct anything I didn’t quite get right. I’m trying to understand this stuff and share it because I find it interesting. I’m definitely not a pro when it comes to quantum mechanics and physics (real or theoretical).
A pretty long time ago, I wrote a post describing The Simulation Hypothesis.
As some of you have noticed in my writing and choice of topics, the idea itself is something I am very interested in. And unlike most of my other what-ifs, I have actually done a fair amount of research on the classic “we might be living in a simulation” argument.
Lots of people don’t think it is possible. Simulations in general are obviously possible, but the objection is that no one can ever perfectly simulate the universe holding the simulation, because at first glance that would require using every single bit of matter that exists to hold the tremendous amount of data and do the huge calculations. But that is just what it looks like at first glance. Just like the weather looks random, and it feels like there should be fewer even numbers than whole numbers, our first guess can sometimes be very far from the truth.
There are two important principles (as of today) that address this problem. The first one is “real”, while the second one is theoretical: it seems to follow from our models, but we do not have the technology (and it is dubious whether we ever will) to actually do it.
The first one is patterns. Let’s take this picture, which is exactly 256 x 256 pixels in size, and all of them are black.
When it comes to data, there are many ways to store images. Let’s use a simple method: store each pixel as a 0-255 RGB value, with no transparency and no compression whatsoever. We just store each pixel by itself. Since every pixel has a location (x and y), and both of those values fit in a byte (a bit is a 0 or a 1, and a byte is 8 bits, therefore 256 possible values), we need two bytes for the location, plus 3 bytes for the color (0-255 each for the red, green and blue values). Every single pixel requires 5 bytes of data. We have 256 x 256 pixels, which is 65536 pixels total. Multiply that by 5 bytes and you get 327680 bytes, which is 320 kilobytes. That might not tell you a lot if you don’t know how data storage works, but that’s a lot for a small picture.
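If you want to see the arithmetic laid out, here is a tiny sketch of that naive scheme. It only does the counting, it isn’t a real file format:

```python
# Naive storage: every pixel written out on its own,
# 2 bytes for its (x, y) position + 3 bytes for its RGB color.
WIDTH, HEIGHT = 256, 256
BYTES_PER_PIXEL = 2 + 3  # position + color

total_pixels = WIDTH * HEIGHT                  # 65,536 pixels
total_bytes = total_pixels * BYTES_PER_PIXEL   # 327,680 bytes

print(total_pixels)               # 65536
print(total_bytes)                # 327680
print(total_bytes / 1024, "KB")   # 320.0 KB
```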
When I created this picture in Paint and saved it, it ended up taking only 320 bytes. Not even a single kilobyte. So obviously, it wasn’t stored using the technique described above. That technique was literally the worst possible way of storing data: writing out every single value of every single object on the disk. It would take the exact same amount of space no matter what the picture contains, since we store every pixel individually anyway.
That would be the technique to store every single attribute (location, nature, direction, speed, magnetic field, etc) of every single particle and wave in the universe for a simulation. The worst possible way to do it.
Let’s think about that picture again. 256 x 256, all black. Let’s say that we store it by simply writing down that: Length: 256, Width: 256, All pixels: Black (RGB = 0, 0, 0).
That would be enough for a human to recreate the picture. Although that particular human might get annoyed if you destroy the picture every time and tell him to draw it again, computers don’t get annoyed (yet), so we can use rules instead of raw data, and tell the computer to redraw the picture every single time you want to look at it. Those rules would only take about 5 bytes. Obviously, a real computer has to write down a bunch of other stuff too: that the file is a picture, where the file sits on your drive, what type of colors you use, the rules for drawing it again when you want to look at it, etc. So it takes up 320 bytes total. That’s still a crazy improvement: 320 kilobytes down to 320 bytes is a 1024-fold reduction for the same picture.
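Here is a rough sketch of that “rules instead of raw data” idea in code. The rule format and the redraw function are made up for illustration, they are not how Paint or any real image format actually works:

```python
# A "rules" description of the all-black picture: just its dimensions
# and the single color shared by every pixel. A handful of values.
rules = {"width": 256, "height": 256, "fill": (0, 0, 0)}

def redraw(rules):
    """Rebuild the full pixel grid from the rules, on demand."""
    w, h, color = rules["width"], rules["height"], rules["fill"]
    return [[color for _ in range(w)] for _ in range(h)]

picture = redraw(rules)                 # 65,536 pixels, rebuilt every time
print(len(picture) * len(picture[0]))   # 65536
```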
Now what would happen if we tried to save the following picture:

This is the same square, but half is white. The background is white too so it looks kinda bad, sorry.
Precisely half of the pixels are black, and the other half are white.
Using the first technique, it would take exactly the same size. But using the second technique, it would be slightly bigger. Now you need a slightly more complicated set of rules: the size is 256 x 256, every pixel with an “x” value under 128 is white, and the rest are black. To a computer, reading those rules and recreating the picture is just as easy, but you need slightly more space to store the rules. On my disk, it takes 815 bytes now.
So, to save space, you have to divide your data into smaller pieces that obey patterns. The simpler the pattern, the smaller the data. Logically, the biggest possible picture on disk is one where every pixel has a completely unpredictable color, one that literally follows no pattern. You’d then have to save every pixel individually, and it would take the most disk space. (Try it yourself! Even using the same image format, the more complicated the picture is, the more space it takes up, even though it has the same number of pixels!)

Size: 2.74 KB

Size: 3.85 KB

Size: 4.22 KB
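If you don’t feel like opening an image editor, here is a rough stand-in experiment using a general-purpose compressor (zlib, not a real image format, so the exact numbers will differ) that shows the same trend:

```python
import os
import zlib

SIZE = 256 * 256  # one byte per pixel, grayscale, to keep things simple

all_black  = bytes(SIZE)                                     # every pixel 0
half_white = bytes([255] * (SIZE // 2) + [0] * (SIZE // 2))  # simple pattern
random_pix = os.urandom(SIZE)                                # no pattern at all

for name, data in [("all black", all_black),
                   ("half and half", half_white),
                   ("random", random_pix)]:
    print(f"{name}: {len(zlib.compress(data))} bytes after compression")

# The all-black and half-and-half data shrink to a few dozen bytes each,
# while the random data barely shrinks (it can even grow slightly):
# less pattern, more space.
```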
Same goes for the universe. It might not be completely black or white, but there have to be patterns that could be used to reduce its size in storage. Even compressed, the data would still be terrifyingly huge compared to any storage space we have at the moment, which is completely insignificant compared to our universe. But patterns would allow us to store all the necessary data while using less than a universe’s worth of matter in hard drives, which is good.
The next part is computing power. Not only would it probably take all the energy in the world to simulate all the energy in the world, but now if we use patterns for storage, then we need even more calculations to actually recreate the universe from the patterns every time we want to look at it. How can we possibly optimize this one?
Well, the answer to this one is basically time travel. Specifically, faster than light communication.
Pretty much all of our physics models agree on two things: Anything going faster than light would probably go back in time, and we have no idea how to go faster than light.
So basically, if we ever figure out how to transmit matter or data faster than light, we unlock time travel. It is infinitely more likely that we do it with data first, since matter is heavy, and heavier things are harder to push. But what’s the point of sending data back in time?
Most computers nowadays work by transmitting energy (electrical current) through different nodes. That’s how the different pieces communicate with each other and do calculations. Obviously that’s an extremely simplified explanation, but it will do for now. Electricity moves almost as fast as light in cables, and nowadays we aren’t very good at making the transmission itself faster. Our method of improving circuits relies more on making components smaller, and therefore reducing the distance that signals have to travel. This is way more efficient at improving response time than actually accelerating the signal.
Accelerating the signal from 99% of the speed of light to 99.9% of the speed of light would just make the computer slightly faster. If it takes the signal a second to travel somewhere, it would now only need roughly 0.99 seconds. That’s not even taking relativity into account, which says that when you get closer to the speed of light, time slows down for you. This means a lot of weird stuff happens, and the signal would “feel” like its trip is much shorter than the time we measure. But since the signal doesn’t care how long the trip takes for it, the only people who care in this situation are the ones who are basically unaffected: us. As soon as we manage to go past the threshold of 100%, though, even by the smallest margin, the trip would start taking a negative time, because of the time travel. In order to avoid violating causality and causing paradoxes (we have no idea what would happen, and that might be dangerous), we would simply “slow down” the signal just enough to force it to reach its destination in precisely 0 seconds. And once you actually have instant communication and calculation, you can do infinitely many calculations in a finite amount of time.
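For what it’s worth, here is the back-of-the-envelope version of that comparison, using the ordinary special-relativity formulas. The one-light-second trip is just an arbitrary example distance, not anything about real circuits:

```python
import math

C = 299_792_458      # speed of light, in m/s
distance = C * 1.0   # an example trip of one light-second

for fraction in (0.99, 0.999):
    v = fraction * C
    our_time = distance / v                   # the travel time we measure
    gamma = 1 / math.sqrt(1 - fraction ** 2)  # Lorentz factor
    signal_time = our_time / gamma            # the time the signal "feels"
    print(f"{fraction}c: we measure {our_time:.4f} s, "
          f"the signal feels {signal_time:.4f} s")

# 0.99c  -> we measure about 1.0101 s, the signal feels about 0.1425 s
# 0.999c -> we measure about 1.0010 s, the signal feels about 0.0448 s
```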
So there you have it. Infinite processing power. Of course, that relies on the near-impossible premise that faster than light communication is ever achieved, and later made easy enough to build into computer chips, circuit boards, and many other components. That might look impossible, but the internet and even videos would probably have looked impossible to 2nd-century people, so I’m still optimistic.
It is also very important to note that faster than light communication by itself is not truly enough to make this work. You would need a form of faster than light communication where something (matter, energy, a wave, etc.) actually moves faster than light. There are other proposed forms of faster than light communication, like quantum entanglement, that are kinda cheating: nothing actually travels between the two points (as far as we know), the two ends just react instantly, and as far as we can tell you can’t even use that to send information. No actual travel –> no going back in time.
Also, we expect going faster than light to bring time travel in some form, and if it does, then there probably is a way to control it. We can’t really test that yet, that’s just theoretical (relativity and stuff). If we happen to be completely wrong, and it either is completely uncontrollable or simply doesn’t work that way, the point is moot.
So we can already compress data easily, and we just need to figure out faster than light communication (with real movement) to (probably) literally obtain infinite computing power. Oh and we also need to avoid creating any universe-ending (or, slightly better, universe-resetting) paradoxes during our tests.
One last thing: the simulation hypothesis doesn’t literally say that we absolutely live in a simulation. It argues that at least one of the following has to be true:
1: We will never acquire the ability to simulate a universe perfectly identical to ours.
2: Nobody will ever try to simulate a universe identical to ours.
3: We live in a simulated universe.
So us never inventing faster than light communication, and never figuring out another way to obtain near-infinite computing power, would basically mean that the first statement is true. So if you believe that creating such a simulation is impossible, you agree with the simulation hypothesis; you are just agreeing by being pessimistic about our future or our expected technological progress. This could happen if we somehow go extinct before figuring it out, or if it really is impossible.
The only way to truly disagree with the simulation hypothesis is by saying that creating identical simulations will one day be possible, and we will do it, but we just happen to be the top-level universe, and the only one (out of an infinity of levels) to be actually real. To me, that’s just wishful thinking and ignoring math. But you’re free to disagree, that’s what makes discussions interesting!