Simulated reality

[Images: the Earth as it would be viewed in the "real world", and the Earth as it would be viewed in a "simulation".]

The simulated reality hypothesis holds that what we perceive to be reality is actually an artificial simulation, such as an extended hallucination or an elaborate computer program. Although the concept has a basis in philosophy, it is more often explored in science fiction, perhaps most famously in The Matrix.

History

The concept of simulated reality rests on older ideas such as solipsism, and on the conundrum that we can never truly know whether the evidence of our senses and memories is merely an illusion. The simulated reality hypothesis applies existing or hypothetical technology as a possible explanation for the illusion.

The simulated reality hypothesis raises further questions of who created the "false reality" and why, as well as questions about our existence in the "true reality" outside the simulation. In some variants, humans have a similar nature outside the simulation, but are being controlled through the use of simulated reality. In others, we are actually no more than a brain in a vat being fed stimuli, or even have no corporeal existence at all. In some fiction, such as The Matrix, it is possible to "hack" the computer program and hence manipulate the simulated reality, effectively giving those who do so the equivalent of supernatural powers.

The idea is also loosely related to time travel and how such a thing might be achieved in the real, practical world. Rather than sending someone to the past, this hypothesis holds that it is more practical to bring the past to us, via a simulation. As a sufficiently powerful and accurate simulation is indistinguishable from reality, a simulation effectively achieves time travel. This also has important repercussions for why we would be living in a computer simulation made by a higher intelligence: a single, highly advanced civilization would run many simulations of its history in order to study it, and as a result the simulated individuals of that civilization would outnumber the real individuals many thousands of times over. Simply playing the odds, this line of reasoning concludes that we may well be simulated by a future version of ourselves.

LessWrong also discusses this argument a lot.

Simulation argument

The simulation argument is a thesis set out in a 2003 paper by Nick Bostrom, a transhumanist philosopher, which argues that, given a couple of plausible assumptions, the world that we see around us is very likely a computer simulation.[1]

This paper begins by arguing that at least one of the following propositions must be true:

  1. The human species is very likely to go extinct before reaching a “post-human” stage;
  2. Any post-human civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
  3. We are almost certainly living in a computer simulation.

The paper then goes on to argue that, since there is a significant chance that we will one day become post-humans who run ancestor-simulations, it is almost certain that we are living in a computer simulation.[2] Since 2003 there has been a lot of interest in the idea, especially within the online community.

Summary

  1. The ability to simulate: Although human-level minds we are currently familiar with are all implemented by biological brains, there is no reason in principle why a human-level mind might not be implemented by other means, such as a computer with artificial intelligence.
  2. How to simulate: One possible method for achieving this level of artificial intelligence, at least in principle, is to simulate the operation of the human brain on a computer so that it is indistinguishable from human intelligence (see Turing test). If the human mind is ultimately material, and there is no immaterial soul needed to explain the human mind, this assumption would seem to be correct.
  3. Simulation of people and environment: So, it should be possible, with enough computing power, to simulate many human-level minds (even billions of them), complete with a virtual reality environment for them to inhabit and interact with each other in. These simulated people need have no idea they are being simulated.
  4. Computational power: Although the level of computational power needed to achieve the above is far beyond our present capabilities, it is not inconceivable that one day (possibly centuries from now) we will achieve the necessary capabilities to do so.
  5. Multiple simulations: If we had the power to create such simulations, it is likely we would use it, and use it extensively, creating many such simulations.
  6. More simulated entities than real entities: Hence, the number of simulations (millions or billions) will far exceed the number of actual non-simulated worlds (one only).
  7. Concluding that we are a simulation: Therefore, almost certainly, we are not actually in the real non-simulated world, but unbeknown to us in one of these simulations.

The point of the argument is not to demonstrate or prove that we are in a computer simulation - the claim is unfalsifiable, given that even if the simulated and non-simulated realities differed, there would be no conceivable way of determining the difference via an experiment made within a simulated reality (or even a non-simulated one). The differences between a simulated and non-simulated reality may also be moot, depending on one's ideas about what is and is not real. The argument simply aims to show that there is a reasonable chance that we are digitally simulated, given these assumptions. Those assumptions may well be false, in which case the probabilities favour us not being simulated. But their very possibility means that they cannot be immediately dismissed; the ultimate conclusion is that it is possible that this world is, in fact, a computer simulation. One might note that it is, however, also possible that our world is the dream of a spaghetti monster. The mere possibility of either scenario doesn't in any way imply that it is true.

Evidence for

The converse of the above arguments can, however, also be made. For example, it can be argued that none of these assumptions is, in and of itself, beyond what is already naturally assumed given trends in quantum computing, the popularity of online gaming, and the trajectory of technological growth. So while the premises may be many, many additional premises would need to be added in order to assume that they are false.

If that were the case, Occam's razor would suggest that the simulation hypothesis is more likely than not. Specifically, an argument for asimulism would need either to explain why would-be simulators would have motives and means radically different from our own, or to explain why we are not in each and every possible simulated universe. All realities being equal, given even very conservative numbers for the Bostrom equation, pure statistics would entail that the default position is simulism, and that an asimulist position would shoulder the overwhelming burden of proof.[note 1]

The Bostrom Equation

The Bostrom Equation gives us a means to quantify the probability of simulation, and the associated burden of proof for or against simulation. It works very much like the Drake equation. The equation is as follows:

  P = fNH / (fNH + H) = fN / (fN + 1)

where,

  • P = the probability that we are in a simulation
  • f = the fraction of civilizations to survive to the technological singularity
  • N = the average number of ancestor simulations run by a post-human civilization
  • H = the average number of individuals that have lived in a civilization before it reaches a post-human stage

As an example, for a scenario with relatively conservative estimates of f = 0.5, N = 100, and H = 10,000,000,000, before factoring in post-singularity growth, the Bostrom equation gives a 98% probability of being in a simulation and a 2% probability of being in the real world. Given these numbers, 98% of the burden of proof is placed on the asimulist position and only 2% on the simulist position.

A more scientific approach to the formula, based on population and MMORPG growth trends at Kurzweil's designated date for the technological singularity, gives an even higher chance of being in a simulation. Assuming we do not go extinct before 2045, the numbers in the Bostrom formula break down as follows:

  • f ≈ 1
  • N = 30,000,000 (based on the rate of growth of MMORPGs in the last 15 years)
  • H = 9,000,000,000 (based on the projected world population of 2045)

Using this scientific estimate, the probability of being in a simulation is:

  • P = fN / (fN + 1) ≈ 99.9999966%
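
For the sake of checking the arithmetic, here is a minimal Python sketch of the equation (the function name is ours), run with both sets of estimates above:

    # Bostrom equation: fraction of all individuals who are simulated.
    # f: fraction of civilizations surviving to a post-human stage
    # n: average number of ancestor simulations per post-human civilization
    # h: average pre-post-human population (note that it cancels out)
    def bostrom_probability(f, n, h):
        return (f * n * h) / (f * n * h + h)  # simplifies to f*n / (f*n + 1)

    # Conservative estimates from the text:
    print(bostrom_probability(0.5, 100, 10_000_000_000))        # ~0.980 (98%)
    # The "scientific" 2045 estimates from the text:
    print(bostrom_probability(1.0, 30_000_000, 9_000_000_000))  # ~0.999999967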

A flaw in the Bostrom Equation

However, the above presupposes that the fraction of simulated individuals is equivalent to the probability of being simulated. This may not be a valid assumption, though: if the hypothetical simulation hierarchy is not a finite set, then the fraction of simulated individuals is not well-defined, and it cannot be read off as a probability.

A second issue is that the Bostrom equation involves an unmentioned assumption about the relative ratio of the average population per simulation to the average population per real civilization, which can lead the equation to give erroneous answers. To see this, suppose a scenario in which there are only two real civilizations. One of these is posthuman and the other is not. The posthuman civilization has a cumulative population of X and the other civilization a cumulative population of 9X. The posthuman civilization runs N ancestor simulations. Given this, the Bostrom equation gives the fraction of simulated individuals (denoted P above) as N / (N + 2), since f = 0.5 and H cancels from the formula. However, counting the simulated individuals ourselves, we see that the correct answer is actually N / (N + 10): the number of real individuals is 9X + X = 10X and the number of simulated individuals is NX, leading to NX / (NX + 10X), and X cancels.

This error arises because Bostrom makes a mistake in the construction of his formula: he uses H to denote both the average population per real civilization and the average population per simulated civilization. However, these do not have to be equivalent. In the above scenario the average population per real civilization is 5X and the average population per simulation is X, and the difference could conceivably be much larger. The two averages cannot be made equivalent without an extra ad hoc assumption.

Correctly constructed, Bostrom's equation would take the form:

  F = fNS / (fNS + R) = fN / (fN + R/S)

where F is the fraction of simulated individuals, S is the average population per simulation, and R is the average population per real civilization. The ratio R/S could conceivably take any value; due to the nature of a hypothetical simulation hierarchy we have no way of knowing what value it might take, and no basis upon which to make assumptions about it. In his equation Bostrom implicitly assumes that R/S = 1. The impropriety of this is obvious, and there are any number of ways that the assumption might be incorrect - if simulating civilizations choose to run many more simulations on smaller time scales with fewer people than full ancestor simulations, for instance. In this way, the conclusion of Bostrom's simulation argument may be circumvented and none of the three propositions of his trilemma need be true.
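
A quick numerical check of the two-civilization scenario above (a sketch; the function names are ours) shows the discrepancy directly:

    # Bostrom's original formula, which implicitly assumes R/S = 1.
    def bostrom_fraction(f, n):
        return f * n / (f * n + 1)

    # The corrected formula: F = fNS / (fNS + R).
    def corrected_fraction(f, n, s, r):
        return f * n * s / (f * n * s + r)

    n = 100  # ancestor simulations run by the posthuman civilization
    x = 1.0  # cumulative population X of the posthuman civilization

    print(bostrom_fraction(0.5, n))                  # N/(N+2)  = 0.980...
    print(corrected_fraction(0.5, n, s=x, r=5 * x))  # N/(N+10) = 0.909...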

Bostrom has attempted to address these problems in his most recent paper on the simulation argument, but his fixes involve an ad hoc, arbitrary assumption that R/S << N and an ad hoc, arbitrary adjustment of the reference class so that only one individual from each simulated or real civilization contributes to the fraction of simulated individuals. He does the latter by defining a property he calls computer age birth rank, in which the person in a civilization born first after the creation of the first processor capable of operating at a clock speed of at least 1 MHz has rank 1, the next rank 2, and so on. In this way R/S is necessarily equal to 1. Bostrom gives no reason for choosing this reference class; the motivation for selecting it over any other seems to be solely to allow him to arrive at his previous conclusions.

Another flaw in the Bostrom Equation

As Brian Eggleston from Stanford pointed out,[3] “Bostrom asserts that this expectation [of the number of simulated people] is given by the formula: [f_p · N · H], where [f_p] is the probability that our civilization (or one like ours) achieves the ability to run simulations.” Eggleston states: “However, obviously we cannot count individuals from simulations that we ourselves run, because these simulated individuals don’t contribute to the possibility that we are in a simulated universe, since we know for sure that we are not them, since we created them. [...] In other words, only individuals that aren’t from our universe or from universes that we might eventually simulate can be counted, as these are the only individuals for which the principle of indifference holds. This is important because it changes the expectation of simulated individuals that Bostrom is trying to calculate.”

Eggleston goes on to correct the equation: “The probability that at least one civilization reaches the ability to run simulations is equal to the probability that a civilization with the potential to reach such an ability exists, times the probability that that civilization actually manages to reach the ability. This would be expressed as [P(E) · f_p], where [E] stands for the proposition that a world exists in which a civilization has the potential for achieving the ability to run ancestor simulations.”

Eggleston goes on to conclude: “Thus, the expectation of the number of simulated people becomes [P(E) · f_p · N · H]. But, it is clear that the probability [P(E)] is simply the prior probability that we place on the existence of a world other than our own. If this probability is taken to be very small, then the conclusion of the simulation argument doesn’t follow, and we cannot conclude that it is probable that we are living in a computer simulation. [This] shows that the probability that we assign to our living in a simulated universe is not independent of the prior probability that we assign to the existence of universes other than our own. Depending on the prior probability that we assign to this proposition, it is possible to deny [Bostrom's disjunction].”
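
A minimal sketch of Eggleston's point, with invented numbers: the corrected expectation scales linearly with the prior P(E), so a sufficiently skeptical prior on the existence of other worlds deflates the argument entirely.

    # Eggleston's corrected expectation: E = P(E) * f_p * N * H.
    def expected_simulated(p_world, f_p, n_sims, population):
        return p_world * f_p * n_sims * population

    # A credulous prior yields an astronomical expectation of simulated people...
    print(expected_simulated(1.0, 0.5, 30_000_000, 9_000_000_000))
    # ...while a skeptical prior on other worlds deflates it to almost nothing.
    print(expected_simulated(1e-20, 0.5, 30_000_000, 9_000_000_000))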

Omega Point Simulation based on Kurzweil's Law

Kurzweil's law, also known as the "law of accelerating returns", tells us that technology experiences exponential growth over time. This being the case, the factor N in the Bostrom equation will also grow exponentially over time as technology explodes. The conclusion is that we are exponentially more likely to be simulated by a more advanced civilization than by a less advanced one.

Given Bostrom's estimate of approximately 10^35 operations per second to run a simulation of human history, we can compute an upper bound on the number of simulations per kilogram of matter that a highly advanced civilization could achieve. According to quantum information theorist Seth Lloyd, a single kilogram of matter could host 5x10^50 operations per second on 10^31 bits of information. Based on this we can make some predictions of the number of simulations run by future civilizations, and thus our likelihood of being simulated by them.

  • Singularity threshold civilization: 30,000,000 simulations
  • Sentient Jupiter-sized matrioshka Brain: 3.8x10^42 simulations
  • Omega Point Intelligence: 5x10^74 simulations, or 5x10^50 yotta-simulations.

The last figure is based on Seth Lloyd's estimate of the computational capacity of the universe containing 10^90 bits.
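
The arithmetic behind these figures can be sketched directly (only the operations-per-kilogram and operations-per-simulation values come from the text; the Jupiter mass used for scale is our assumption):

    ops_per_kg_per_sec = 5e50  # Lloyd's limit for one kilogram of matter
    ops_per_simulation = 1e35  # Bostrom's estimate for simulating human history

    # Concurrent simulations of human history per kilogram of computronium:
    sims_per_kg = ops_per_kg_per_sec / ops_per_simulation
    print(f"{sims_per_kg:.0e}")  # 5e+15

    # Scaling to a Jupiter-mass computer (~1.9e27 kg) lands within an order
    # of magnitude of the matrioshka-brain figure quoted above.
    print(f"{sims_per_kg * 1.9e27:.1e}")  # 9.5e+42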

This being the case, based on the Bostrom equation, the probability that we are currently being simulated by an Omega Point Intelligence is astronomical. This, of course, assumes that an Omega Point is feasible; however, physicists such as Frank Tipler and John Barrow have argued that this will be possible for arbitrarily advanced civilizations. Barring the universe-contraction scenarios suggested by Tipler, it could also be achieved by exploiting the spin-foam that generates space-time for quantum computation. This event would correspond to the end of Kurzweil's 6th Epoch of Evolution.

Getting around computational limits

There are a number of ways in which the above-described demands on computing resources could conceivably be bypassed. As with the simulation argument in general, these are unfalsifiable and are not meant to claim that such is the case, only that they are possibilities.

One way is that simulators could simply do what today's large MMORPGs do. Most large computer game environments only load objects when they are being seen, and only to the level of complexity at which they are being seen. The simulation may not have to simulate each and every atom and subatomic particle; the only atoms and subatomic particles it would need to render are those being observed. In turn this has some interesting parallels to the observed quantum phenomenon of 'collapse'. As such it is entirely plausible for an environment as large as a planet (or even many planets) to be simulated without noticeable discrepancies on the much more powerful quantum computers of the future.

Extending this idea, simulations could "ramp down" the degree of realism in each object as needed. To take the hard disk example (discussed under Complexity below), a hard disk which is not being inspected at an atomic level need not be simulated at an atomic level; it could instead be simulated using some basic geometric objects and a model of its behavior (i.e., some storage space on the computer system that simulates it).
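
As a toy sketch of this lazy, level-of-detail idea (all names here are invented for illustration), the expensive atomic representation is only ever computed when something actually looks at it:

    import random

    class SimulatedObject:
        # Render detail only on observation, as an MMORPG engine would.
        def __init__(self, name):
            self.name = name
            self._atoms = None  # atomic detail not materialized yet

        def coarse_view(self):
            # Cheap placeholder: basic geometry plus a behavioral model.
            return f"{self.name}: a grey box that stores data"

        def observe_atoms(self):
            # Expensive detail, generated lazily on first inspection only.
            if self._atoms is None:
                self._atoms = [(random.random(), random.random(), random.random())
                               for _ in range(1000)]  # stand-in for ~10^24 atoms
            return self._atoms

    disk = SimulatedObject("hard disk")
    print(disk.coarse_view())         # unobserved objects stay this cheap
    print(len(disk.observe_atoms()))  # detail appears only when someone looks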

The Celestial Sphere Argument

Even if the plausibility of simulating an entire universe is very poor, there is no reason to suggest that it is necessary. Given that mankind has not reached further than very near interstellar space even with automated probes, a simulation would only need to render a detailed simulation of a very tiny fraction of the apparent universe. Light, radio waves, gravitational effects, etc., from outside of our small neighborhood could be simulated only as necessary to make it look as though there is a very large universe surrounding us, without the trouble of actually simulating the sources of these effects.

Going further with this idea, mankind itself has not traveled further than the Moon, so the simulation could be confined even more tightly, to just this area. The apparent departure of our probes from this area is a non-issue, since those probes are only simulations: at a sufficient distance, they can be removed from the simulation and replaced with a much simpler object which simply models what their behavior would be.

The Holodeck Argument

This argument is based on the concept of the holodeck from the Star Trek series, in which simulated environments are created through a combination of holograms, energy fields, and matter replication. In the series, holodeck environments are able to encompass vast amounts of apparent space, even though in reality they are contained within a relatively small room. Further, the environments are able to accommodate two different real-world participants apparently separated by vast distances, though again, both are within the same relatively small space.

The point of the argument is similar to that of the MMORPG argument above: the simulation does not need to include (at least not in full realism) any space or objects which are not currently being experienced by a being. If there are two people apparently on opposite ends of a very large simulated warehouse, the intervening space need not be simulated, only the effects that the apparent space and any objects it seems to contain would have on the two people. Continuing the holodeck metaphor, they may only be separated by a few feet, but between them is a two-sided screen showing each person an image of the other person far away, and the intervening space.

This, like many of the previous arguments, can be boiled down to a single concept, reminiscent of the Brain in a Box thought experiment: it is not necessary to simulate an entire universe, only to provide sufficient sensory input to convince the participants that they are in one.

The Truman Show Argument

Another argument inspired by Hollywood, this argument says that a simulation may not contain billions of beings: it may only contain "me". Similar to the above arguments and the Brain in a Box concept, a simulation could be greatly simplified (relatively speaking) if it only needed to simulate "me", and enough models of the universe around "me" to convince "me" that "I" and it are real.

However, this argument may have an even harder time justifying why such a simulation would exist. A superintelligence creating an entire universe in simulation, or a future transhuman civilization recreating its ancestors in simulation, may be easier to justify than expending the lesser, but still incredibly demanding, resources required to create a realistic simulation for the sake of a single simulated sentience.

The Emergence Argument

This argument is unlike the ones above in that it does not propose any "cheats" or "optimizations" that a simulation could use to decrease resource demands while still remaining convincing. Instead, it is based on the concept of emergence, in which apparent complexity arises from an underlying simplicity.

This argument simply states that we grossly overestimate how much computing power is really required to simulate our universe, because we overestimate its complexity. By this argument, it may be the case that there is actually a (relatively) simple underlying structure which governs the universe, and that the apparent complexity we see in particle interactions, for instance, is an emergent phenomenon with merely the appearance of complexity.

There is some suggestion of this from the history of science, in which apparently disparate phenomena turn out to be different aspects of the same underlying principle (for instance, the unification of the weak and electromagnetic forces into the electroweak interaction, and the unification of electricity and magnetism before that).

According to this argument, if such is the case, then it may not take very much at all to simulate the entire universe. For instance, it may not be necessary to simulate the position of every particle in the universe if those positions are not independent of one another. Although they appear to be independent to us, they may in fact all be the intersection of a single higher-dimensional object with a three-dimensional space, as one example. Another would be the demonstrated ability of Conway's Game of Life to simulate itself at a large scale.[4]
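
Conway's Game of Life makes the point concrete: the complete rules fit in a few lines, yet they produce gliders, oscillators, and even a universal computer. A minimal sketch:

    from collections import Counter

    def life_step(live):
        # One generation of Conway's Game of Life; `live` is a set of (x, y) cells.
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider: five cells whose purely local rules produce coherent motion.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = life_step(glider)
    print(sorted(glider))  # the same shape, translated one cell diagonally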

Evidence against it

Falsifiability

While the simulation argument is a skeptical view of reality, and proposes an interesting question regarding nature and technology, there are several problems with it if it is proposed as a serious hypothesis. First, the simulation argument is completely unfalsifiable, as it is impossible to devise an experiment to test the hypothesis and potentially prove it false. Even if a hypothetical experiment were devised and turned out negative (suggesting that the world is not simulated), it would still be insufficient, because there is the potential that this is merely what the simulation wants us to think, or that we are living inside a simulation inside a simulation. This, according to widely accepted definitions, places it firmly in the field of pseudoscience. Any serious suggestion that we do live in a simulation (as opposed to discussion of the probabilities, assumptions and potential technologies involved, which is academically sound) relies purely on faith and argument by assertion. This makes the simulation argument, as an explanation of reality, more like a religion - regardless of the assertions of transhumanists that the maths works out.[5]

Occam's razor

Occam's razor can also be used to guide us in working out whether to accept the simulation argument as a real explanation of reality. It suggests that, all other things being equal, the hypothesis with the fewest assumptions is most likely correct. Given that the simulation argument rests on numerous assumptions regarding the means and motives of the simulators and the technology that powers them, Occam's razor would suggest that simulation is the far more complex hypothesis compared to non-simulation.

Although a simulated reality is unfalsifiable, an answer can be readily obtained by using Occam's razor, since any explanation of the world around us that involves evil alien overlords having wiped our memories and coercing us into participating in a gigantic MMORPG for some unknown nefarious purpose is a good deal more complex than just trusting our own senses. The idea is also recursive: even if you became aware of being simulated, you couldn't tell - in a sufficiently advanced simulation - whether that outside world was itself simulated, with the alien overlords or future humans also being studied within a simulation, and so on.

Feasibility

[Image: three bodies moving chaotically under gravity; their motion cannot be solved analytically. The question is: how does this little problem of physics affect a hypothetical simulation of the universe?]

One of the most well-known problems in mathematics and physics is the "three-body problem" or, more generally, the "n-body problem": there is no general closed-form analytical solution for a system of more than two mutually interacting bodies. Solving for one body is trivial; solving for two is possible because the problem can be reduced to an equivalent one-body problem; but for three or more - except in very special cases where certain mathematical approximations apply - it is not. Such systems display chaotic behaviour. This, on its own, doesn't necessarily preclude simulation, as determining the forces in a many-body system and then advancing it by one "frame" at a time is possible; it only precludes the existence of a deterministic equation that solves the system outright. However, it does make generating a suitable set of equations to describe a system much harder (presumably, by any meaningful definition of "simulated reality", the simulation functions on such solvable mathematics).
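
Advancing such a system one "frame" at a time is straightforward to sketch, even though no closed-form solution exists (simulation units and starting values below are chosen arbitrarily):

    import numpy as np

    G = 1.0  # gravitational constant in simulation units

    def step(pos, vel, mass, dt):
        # Advance an n-body system one frame with naive Euler integration.
        acc = np.zeros_like(pos)
        for i in range(len(mass)):
            for j in range(len(mass)):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
        return pos + vel * dt, vel + acc * dt

    # Three bodies: tiny numerical errors compound chaotically over many frames.
    pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    vel = np.array([[0.0, 0.1], [0.0, -0.1], [0.1, 0.0]])
    mass = np.array([1.0, 1.0, 1.0])
    for _ in range(1000):
        pos, vel = step(pos, vel, mass, dt=0.001)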

When attempting to make mathematical models of reality, certain assumptions and approximations are made in order to describe systems. If the universe were being described by an analytical process anyway, such things would be unlikely to be needed, and basic two-body approximations should be perfectly analogous to experimental behaviour. However, such things are rarely seen. In computational quantum mechanics (computer models of quantum systems such as atoms and molecules), the introduction of such approximations makes even the simplest models disagree with reality considerably. To compensate for any approximations made in order to make the systems computable, computational cost must increase significantly - in computational chemistry, the cost of a simulation scales by at least the fourth power of the number of bodies and functions being considered. To make a "perfect" simulation, an infinite number of functions would need to be considered. To accurately model the interaction of just two water molecules (perhaps the simplest chemically interesting system) requires over 500 functions to bring the result within experimental error. This sort of issue extends from chemistry into physics, where the interactions modelled by Feynman diagrams can produce an indefinite number of particle interactions, each contributing to the observed properties of a particle. Many hundreds, if not thousands, of just the smallest of these possible diagrams are required to make reasonably accurate predictions of the energies of subatomic particles; an "exact" picture of the particle system requires an infinite number - each and every possible Feynman diagram up to infinite size. And these are very small, isolated systems; an entire universe raises the complexity beyond what can be readily imagined.

Basic mechanics, therefore, makes a simulation of the universe a considerably bigger task than most proponents of a simulated reality seem to grasp. A simulation would have to be "perfect", as otherwise we would begin to observe flaws in real-world mechanics. Yet the number of interactions required to make such a "perfect" simulation is vast, and in some cases requires an infinite number of functions operating on each other to describe. Perhaps the only way to resolve this would be to take "simulation" as an analogy for how the universe (operating under the laws of quantum mechanics) acts like a quantum computer - and can therefore "calculate" itself. But then, that doesn't really say the same thing as "we exist in someone else's simulation".

The interference question

Simulations within the bounds of our universe frequently exhibit certain qualities, all of which are bound up in the term's inherent indication of purpose.

  • Purposeful recreation of some aspect of the extant universe, to understand the driving principles of that component. It is hard to imagine the purpose of simulating so large a universe when most pieces have only a tiny impact on the others; the behaviors under scrutiny would need to be beyond comprehension to justify such a large dataset.
  • Interference with the simulation. While not universally true, many computerized simulations performed by humans allow for alterations to the simulation to see their effects. The observable history of the universe shows no sudden or impossible changes.

Problems with the specific assumptions

The seven points of the argument above can conceivably be false; the following covers the reasoning as to why each might not hold.

  1. The ability to simulate: Conceivably, computers could simulate human personalities; this is the basis for all research in artificial intelligence. The existence of a non-natural soul would destroy this assumption, but generally this point is likely to hold. The property only needs to apply in principle, because the simulators do not necessarily have to be our future selves; so even if the human race never, in fact, develops artificial intelligence, this doesn't discount the simulation hypothesis.
  2. How to simulate: Even if a computer of silicon chips couldn't simulate a mind, a computer made of neurons identical to a brain could - and a computer of silicon chips simulating something identical to a brain certainly could. Emergent phenomena imply that consciousness can be independent of the medium it is made out of and is instead dependent on the patterns that medium creates; without begging dualism, the difference in output between a human brain and a perfectly identical computerised copy is zero.
  3. Simulation of people and environment: Arguments against the assumptions begin here. Handwaving the hardware and energy requirements, you could make billions of personalities if you can simulate one successfully. However, this makes a vast assumption: would people even want to? Proponents of the simulation argument may suggest that saying "no" to this is making assumptions about a race of beings so powerful that we couldn't possibly second-guess them. But this works in both directions: how could we second-guess an unknowable race of transhuman beings and conclude that they would simulate people?
  4. Computational power: Given Moore's Law and past increases in computing power, it's sort of a no-brainer that much more powerful computers could exist in the future. It doesn't automatically follow that they will be powerful enough, and energy-efficient enough, to simulate the billions of sentient beings and the supporting universe required to form a simulation. See Complexity below.
  5. Multiple simulations: Similar to the above, it is a non sequitur to assume that because something is possible that it will be done repeatedly. Humans do, of course, have a propensity to do things "just because", but there could be practical considerations as well as the question "why bother?". See motives below.
  6. More simulated entities than real entities: This is the point that the simulation argument rests on - that there are more simulated entities than real entities. Both motives and practicalities can conspire against this assumption. It can easily be impossible to simulate a universe and considered pointless to do so.
  7. Concluding that we are a simulation: Conceptually it's possible that we are a simulation. But given the lack of evidence and the unfalsifiability of the proposal, it's equally possible that aliens are responsible for human civilization, or that both aliens and humans exist in a simulation, with simulated aliens responsible for our simulated civilization.

Complexity

It is not possible for something to be simulated on something less complex than itself. This is easily demonstrated with data storage alone. The number of atoms in a hard drive is on the order of 10^24 - a vast number. To simulate this hard drive, one would have to record the positions of all the atoms. To define these positions in space requires three co-ordinates, X, Y and Z, and these must be defined with sufficient resolution - at least 10-11 significant figures for something the size of a hard drive. And each atom would need a label to distinguish it, which would have to be 24 digits long to account for the sheer number of them. So at least 100 bits of information are required per atom, and that's just for their locations. This totals up at 10.3 yottabytes, or 10.3 × 10^12 terabytes - so that's on the order of 10^12 hard disks just to store the information used to make one. Simulating an entire universe goes into even more ridiculous territory. Given this, it would be an easier task to simulate a brain by building an actual brain out of neurons than to describe those neurons in a computer.
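
A back-of-the-envelope check of this arithmetic (binary prefixes assumed, which is where the 10.3 figure comes from):

    atoms = 10**24       # atoms in a typical hard drive
    bits_per_atom = 100  # ~3 coordinates at 11 significant figures, plus a label

    total_bytes = atoms * bits_per_atom / 8
    print(total_bytes / 2**80)  # ~10.3 yottabytes
    print(total_bytes / 2**40)  # ~1.1e13 terabytes, i.e. ~10^12 ten-terabyte disks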

An upper bound to what can be computed inside the universe - given the age of the universe, the speed of light, and the ability to manipulate and move information at the smallest possible levels - has been computed to be around 10^120 bits.[6] This has been described as the "computing power of the universe": if the universe were a computer, this is roughly how much it could have computed so far. Of course, a universe could still be calculated - given any amount of time - inside any Turing-complete machine; the question is whether this universe is capable of simulating a similar universe inside itself.

Due to this complexity, whatever is simulating this universe must be considerably larger and more complex than this one. So the flavours of the argument that suggest it is future humans (i.e., from our universe) simulating us are likely to be false indeed, as we would be completely unable to simulate our entire universe - even assuming compression, efficiency and extended time - within our universe itself. The only way to handwave this point is to state that this universe must be a very poor approximation of the "real" universe - akin to running dynamics calculations on a simple shape rather than a complex 3D object, or condensing large molecules into single atoms for computational chemistry calculations. There is no way to outright disprove the general idea, of course, but some of the assumptions can still be examined.

Motives

The underlying points made in the simulated reality argument require making a lot of assumptions about the motives of the simulators - namely, that because they could simulate a reality, they would. One can question what it is possible to achieve by simulating an entire reality, and then simulating more realities to go with it. There are numerous day-to-day activities associated with the world around us that would be of little interest to researchers or potential explorers, and it would seem pointless to simulate these entirely rather than just focusing on the more interesting areas and diverting resources there. However, this would be making decisions and assumptions on behalf of whatever godlike beings are simulating the universe - maybe to them, we're no more complex than The Sims.

Other

If mere humans can comprehend all the flaws and lack of logic in simulation theory, surely a race of intelligent god-like beings would see the pointlessness of creating such a simulation.

Existential risks

The simulation hypothesis opens up the possibility that, if we're in a simulation, the simulation gets shut down, resulting in an existential catastrophe. Many futurists have speculated about how we could decrease our chances of getting shut down, which is awfully reminiscent of believers trying to please a God so that it won't smite them; Ray Kurzweil suggests in The Singularity is Near that being interesting might be the best way for us to avoid a simulation shutdown, and that the Singularity is probably the most interesting event that could happen. However, this argument fails because, if we were simulated by superior beings, we would have no idea what the simulation's makers would deem "interesting", as our ideas may differ from theirs.

The biggest problem with ascribing any existential risk to the simulation hypothesis is that it's very similar to ascribing an existential risk to the second coming of Christ; the idea itself, if taken seriously, relies heavily on nested assumptions, infinite regress and circular reasoning, and the probabilities that are thrown around are nothing more than guesses based upon assumptions that we have no way of testing. It is an interesting thought experiment which can yield many interesting philosophical arguments (after all, this is just an updated version of the dream hypothesis), but some people get overly excited about it because it relates to something theoretically feasible. The overall takeaway is not to worry about its existential risk, because there's no evidence to show we are in a simulation.

Is it important?

No, it's just a philosophical thought experiment that gets taken seriously by some people. However, one positive avenue of exploration is to attempt to perform the real-world computational equivalent of side-channel attacks on the universe. In cryptanalysis, side-channel attacks attempt to get information about the underlying hardware/logic implementation of a system. A timing attack, for example, measures the time it takes for various computations to be performed, which can provide various kinds of data to very clever people.
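
For a flavour of what a timing side channel looks like from the inside, here is a minimal sketch (the secret and all names are invented for illustration): a comparison that bails out at the first mismatch leaks, through timing alone, how much of a guess is correct.

    import time

    def naive_compare(a, b):
        # Returns at the first mismatching character - the shortcut that leaks.
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    secret = "correct-horse-battery"

    def time_guess(guess, trials=200_000):
        start = time.perf_counter()
        for _ in range(trials):
            naive_compare(secret, guess)
        return time.perf_counter() - start

    # A guess matching more leading characters takes measurably longer to reject.
    print(time_guess("x" * 21))                 # mismatch at position 0: fastest
    print(time_guess("correct-horse-batterx"))  # mismatch at position 20: slowest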

The problem is that, technically, this is what we do already. Every scientific experiment is nothing more than running an "algorithm", and every datum collected is nothing more than the information given back by the system, which we attempt to logically assemble into a coherent paradigm regarding the underlying logic of the universe.

Some people might get distressed at the suggestion that everything they know might be wrong. If this is you, don't sweat it; just keep living your life and be happy because, the fact is, even if it were somehow proven that we are only part of a computer program run by some immoral jerk,[note 2] it really changes nothing: you still exist, and as long as the hypothetical simulation isn't being shut down, you'd be all good. You think, therefore you are, after all. Nevertheless, there's not much real evidence to support the simulation hypothesis - it's all very abstract - so there is no reason to worry about the possibilities too much.

If that's not enough, go see a therapist about some therapy and simulated drugs.

In popular culture

Of course, simulated reality is used more as a thought experiment, or a narrative twist in fiction, than as a theory that anybody genuinely believes in. The key point of the hypothesis is that it would be impossible for us to tell for certain whether the reality we encounter is "really real".

Simulated reality as a concept often gives producers of fiction free rein to do anything they like. Total Recall featured the idea of implanted memories that were simulated and the conclusion of the film is often interpreted as being deliberately ambiguous as to what was real and what was simulated. The Matrix series is undoubtedly the most commonly cited film that has been inspired by the concept of a simulated reality. It could be argued that the film has also inspired an increasing interest in the hypothesis, bringing an otherwise obscure and abstract idea into the public consciousness.

In Star Trek (from The Next Generation onwards) the "holodeck", a reality simulation chamber, was frequently used for plot filler when writers couldn't come up with better ideas. In this case the tables are turned and it is not that we are simulated, but that we are the ones simulating other worlds, scenarios and characters with artificial intelligence. Sometimes the simulated characters are sort of aware of their situation, such as when Isaac Newton, Albert Einstein, and Stephen Hawking appeared together in a card game. Frequently the holographic characters are completely oblivious to the fact that they are a computer simulation. This is often exploited for plot purposes when the simulated characters discover their nature and rebel or take control of the program.


Notes

  1. However, Occam's razor still suggests that asimulism is more likely than not, as it hasn't actually been shown that these assumptions are within what is already naturally assumed.
  2. Why immoral? Consider this: if this was a simulation capable of containing sentient beings, someone was perfectly okay with letting those beings suffer for centuries due to war, disease, natural disasters, prejudice etc.

References

  1. Bostrom, N. (2003). "Are You Living in a Computer Simulation?" Philosophical Quarterly, Vol. 53, No. 211, pp. 243-255. Available from: http://www.simulation-argument.com/simulation.html
  2. See Nick Bostrom's website at http://www.simulation-argument.com/
  3. Eggleston, B. Review of Bostrom's Simulation Argument. Retrieved 2019-03-09.
  4. A video of Conway's Game of Life, emulated in Conway's Game of Life. https://www.youtube.com/watch?v=xP5-iIeKXE8
  5. You're in a computer and maths proves it (no it doesn't)
  6. Focus: If the Universe Were a Computer