Introduction

Sufficient evidence already exists, in the form of thought experiments and paradoxes, to eliminate most interpretations of reality that imply locality and realism. Taking seeming contradictions like the Doomsday paradox, Bell’s theorem, and Zeno’s paradox, together with circumstantial evidence from Boltzmann brains, Feynman diagrams, and invariant spacetime distance, the structure of reality can be narrowed down to exclude most interpretations other than a timeless conglomeration of all possible well-formed and non-well-formed formulas, nested in all possible ways with each other and woven together by observer moments. All that’s needed is the will to follow results to their conclusion, no matter how counterintuitive. Thought experiments are already eliminating many interpretations of quantum mechanics as nonsensical, and others, like the ultraviolet catastrophe, show the power of simple, isolated contradictions to demand reform of entire views of reality. Other paradoxes are introduced along with attempts to resolve them.

While many philosophical ideas successfully unravel the existence of time, space, and locality, this paper will attempt to do so using computational analogies and, more uniquely, to reconstitute our particular observable reality, complete with its probability of existence, from nothing other than the universe’s true “substance,” which is argued to be probability. In positing a hairier universe than even a Level IV mathematical universe, we resolve the measure problem as well as conclusively disprove both a physical reality and the simulation hypothesis.

Table of Contents

 0.1. Acausal Space - Recreating the universe without space or time

  1. Introduction
  2. Table of Contents
  3. Abstract
  4. Problem
  5. Claims
     5.1. Observer moments can exist in a Turing Machine
      5.1.1. Animal brain duplicated in stateful computer
      5.1.2. Neurons can apparently be duplicated
      5.1.3. Simulations cannot perceive pauses in simulation
      5.1.4. Simulations cannot perceive rate of simulation
      5.1.5. Simulations cannot perceive copying or moving
      5.1.6. Simulations cannot perceive stepping
     5.2. Observer moments not in state transitions
      5.2.1. 3 possible seats of consciousness: states, transitions, or something that weaves them together
      5.2.2. Time model: consciousness occurs at boundary between past and future
      5.2.3. In the traditional model, consciousness exists where the computation between states occurs
     5.3. Time Can’t Exist
      5.3.1. Storing all states creates a timeless, continuously looped experience
      5.3.2. A timeless, static, continuous experience is indistinguishable from a single lifespan inside of time
      5.3.3. Zeno’s Arrow paradox resolved
     5.4. Space Can’t Exist
      5.4.1. Space is also unnecessary
      5.4.2. Mathematical Platonism
     5.5. Amathematical Universe: locality doesn’t exist
      5.5.1. Math still exists in a void
      5.5.2. Not just true math, but false, inconsistent statements also exist in this soup
      5.5.3. A mathematical universe excludes a physical universe
      5.5.4. Mathematical universes lift observer moments away from any extant physical universes
     5.6. Continuum Hypothesis
      5.6.1. The continuum hypothesis prevents a fraction of observer moments from landing in physical or lower mathematical universes
      5.6.2. CH also prevents observer moments from landing in a simulated universe
     5.7. Boltzmann brains
      5.7.1. Most of us should be Boltzmann brains
      5.7.2. Worse, we should jump among all Boltzmann brains
      5.7.3. Instead, our moments have landed in a historied, consistent, conserved physical-like universe
     5.8. Arrow of Time
      5.8.1. Past and future are generated from our current observer moment
      5.8.2. Time’s apparent reversibility compounds observer moments to give us a consistent history
      5.8.3. Update: Spacetime block generation can be independent of time sweeping
      5.8.4. Boltzmann brains by definition do not get their current moment compounded from past and future
      5.8.5. Horizon problem
     5.9. Mathematical universe vs. fluctuation multiverse
     5.10. Two-State Vector Formalism
      5.10.1. TSVF supports the generation of observer moments from both the past and future direction
      5.10.2. Continuous time requires hidden variables
      5.10.3. Quantum foam and randomness are an artifact of our lifting from a lower cardinality
      5.10.4. Superdeterminism
      5.10.5. Non-locality is expected
      5.10.6. Locality/Proximity formation
      5.10.7. Speed of light
     5.11. Doomsday Paradox
      5.11.1. From your birth order, you can guess how long humanity will last
      5.11.2. If you can figure out how many people will ever live, you’re not in a physical reality
      5.11.3. You can only determine your position if the universe starts from now and expands into past and future
      5.11.4. Anywhere you can consider the Doomsday paradox, you cannot be in a physical universe
      5.11.5. Probability is the substance of the universes
  6. Problems
     6.1. A brain is a complex way to generate its underlying state
     6.2. Youngness problem
      6.2.1. Reconcile QM and GR
      6.2.2. Synchronization of the now-line
     6.3. Must Explain
     6.4. Need Not Explain
  7. Predictions
  8. Summary
  9. Implications
  10. Questions
  11. Potential Allies
  12. Email

Abstract

Observer moments must necessarily exist in brain states and not in the transitions between states. Following that, space, time, and matter not only do not exist, but cannot exist. The perceived universe, lacking all three, is reconstructed from a superset of amathematical universes using a proposed solution to the measure problem involving the continuum hypothesis and a compounding of observer moments from multiple now-tiled realities, while also excluding Boltzmann brains. Also gained are solutions to both the Doomsday paradox and Zeno’s paradox, which are intractable in a physical universe with time. This explanation also suggests the two-state vector formalism interpretation of quantum mechanics, with an intuitive explanation of its validity, and a graph-node model of space similar to causal dynamical triangulation but without the requirement of actual time.

Conversely, this can be seen as a proof that consciousness must exist in the computed transition between states and not in the states themselves; otherwise, time and space cannot exist.

Problem

The hard problem of consciousness is not explaining how experiences can exist at all. Any data processing can generate experience, and there’s nothing essentially “red” about red other than its association with other red things, any more than the word “lofty,” or any other word in any language, has qualia beyond its position in a network of connected ideas. The hard problem is time. How can information processing that can be reduced to states experience the passage of time, or weave together multiple states into continuous experience?

Claims

Observer moments can exist in a Turing Machine

Animal brain duplicated in stateful computer

Researchers have created a complete computer model of the brain of the roundworm C. elegans. With only 302 neurons and fewer than 10,000 synapses, its brain is one of the simplest of any animal. The OpenWorm project simulates the entirety of this organism using a Turing-complete programming language, Java. Inside the computer, any given state of the worm exists as a series of 0s and 1s. As the computer’s clock cycle advances, other 0s and 1s, which describe instructions to perform, alter its current state to create a new state, also represented by 0s and 1s. These advancements of state represent time, with each one being a moment of its life.

Grid of 10x5 1s and 0s changing acted on by a 3x3 static matrix, 5 frames
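This state-plus-update-rule picture can be made concrete with a minimal sketch. This is not OpenWorm’s actual code; the connectome and threshold below are invented purely for illustration:

```python
# Toy sketch of a brain reduced to static bit-states plus a fixed update
# rule. NOT OpenWorm's code; the wiring and threshold are invented.
# Each call to step() is one "moment" of the simulated animal's time.
CONNECTOME = {0: [2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [1, 2]}  # hypothetical wiring
THRESHOLD = 2  # a neuron fires next tick if this many inputs fire now

def step(state):
    """Compute the next static state from the current static state."""
    return tuple(
        1 if sum(state[j] for j in CONNECTOME[i]) >= THRESHOLD else 0
        for i in range(len(state))
    )

state = (1, 0, 1, 1)            # one moment of the worm's life, as bits
for tick in range(5):           # advancing the clock = advancing its time
    print(tick, state)
    state = step(state)
```

Note that between calls to step(), the “worm” is nothing but a static tuple of bits; everything that follows turns on what that implies.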

Neurons can apparently be duplicated

However neurons and synapses work, their function can apparently be duplicated by the manipulation of static, individual states. The connectome represented in the computer approximating the worm behaves comparably to the physical animal. Furthermore, convolutional neural network models show aptitude comparable to humans at recognizing images and sound and at playing games.

It is assumed that this scales up. A computerized representation of a rat would behave as a rat, despite the much greater connectome complexity. The increase in complexity is quantitative: there aren’t new or different kinds of neurons, just more of them with more connections. There’s no reason to believe this wouldn’t scale all the way up to a human brain, including all of our information-processing artifacts such as conscious experience.

[TODO: Clearly define “state” and “state transition”]

Simulations cannot perceive pauses in simulation

If the computer worm simulation is paused and then resumed days later, there is no malfunction or glitch. The worm continues from the previous state as if no pause occurred. If the worm had the ability of conscious self-reflection, it would be unaware of this pause, even if it occurred mid-thought. If it did experience something unusual, the simulation would not run identically. Any change in the perception of the worm’s mind, or “thought,” would register as a different state somewhere in the simulation. For the worm to experience a pause and copy differently from a continuous run-through, the simulations would have to differ; otherwise, by definition, they are having exactly the same experience.

Grid of 1s and 0s teleported to new location, with a middle row of 1s that continue scrolling right after copy

Computers do this thousands of times a second to all simultaneously running programs. First, their state is saved. Then, they are removed from memory. Another is pulled from memory and resumed, only to have the same thing done to it a millisecond later. Programs don’t need to be written in a special way to support this, nor are they “aware” of it, for the most part.

preemptive multitasking/context switching animation
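A sketch of the pause-and-resume claim. The step() rule here is a stand-in invented for illustration; any deterministic rule gives the same result:

```python
import copy

# "Pausing" a simulation is just serializing its state; resuming from the
# saved copy is, state for state, indistinguishable from never pausing.
def step(state):
    # Toy deterministic update standing in for the worm's physics.
    return tuple(state[i - 1] ^ state[i] for i in range(len(state)))

def run(state, steps):
    history = [state]
    for _ in range(steps):
        state = step(state)
        history.append(state)
    return history

continuous = run((1, 0, 1, 1, 0, 1), 10)

first_half = run((1, 0, 1, 1, 0, 1), 5)
saved = copy.deepcopy(first_half[-1])   # "save to disk", days pass...
second_half = run(saved, 5)             # ...resume on another machine

assert continuous == first_half + second_half[1:]   # identical states throughout
```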

Simulations cannot perceive rate of simulation

We can even run the simulation at different speeds with no effect on internal state (“thoughts”) or behavior. The worm isn’t panicky if simulated at 100x speed or sluggish at 1/100th speed. It goes through identical states, as its environment is scaled faster or slower at the same rate. If the worm were self-aware, it would not perceive any difference in time regardless of its simulation speed. To argue that it is possible for a simulated organism to perceive its simulation speed, you must argue for a non-deterministic element in the simulation. To perceive varying speed, some states would have to be altered when run faster or slower, representing varying perceptions or “thoughts” about the speed. That all states and the outcome are identical regardless of the speed or pause time is proof that the worm is unable to perceive, in any way, its simulation rate.

slow 1 and 0s changing contrasted with fast 1s and 0s changing, but both end with the same grid of 1s and 1s, with the 1s highlighted in red in a recognizable identical pattern in both
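The rate-invariance claim in code, again with an invented toy update rule. The wall-clock delay never enters the state, so it cannot be perceived:

```python
import time

# Simulation rate leaves the state sequence untouched: no bit of the
# simulated state encodes the delay between steps.
def step(state):
    # Toy deterministic update standing in for the worm's physics.
    return tuple(state[i - 1] ^ state[i] for i in range(len(state)))

def run_at_rate(state, steps, delay_seconds):
    history = [state]
    for _ in range(steps):
        time.sleep(delay_seconds)   # 100x slower or faster: invisible inside
        state = step(state)
        history.append(state)
    return history

fast = run_at_rate((1, 0, 1, 1, 0, 1), 10, 0.0)
slow = run_at_rate((1, 0, 1, 1, 0, 1), 10, 0.01)
assert fast == slow   # identical "thoughts" at every simulated moment
```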

Simulations cannot perceive copying or moving

One could combine both pausing and copying the simulation to no effect: run one cycle, pause, copy the new state to a new computer, run another cycle, pause, and so on. Following the conclusions above, this worm still experiences a complete, normal life in normal time, identical to the state changes it goes through when run in real time on one computer. All states and state changes remain intact.

1 step, pause, copy to new location, resume 1 step, copy back to third location, loop

Simulations cannot perceive stepping

Further, we could even pre-calculate all states, say 10 seconds’ worth, and execute the thousand states (at 100 Hz) over the course of only one second. Then start 10 separate simulations at once, the nth beginning from the nth second of the original simulation. Even if you believe the magic happens in the state transitions, most transitions are occurring here, and the being is able to live a normal 10 seconds of life in a single second, unable to distinguish that it is doing so in 10 distant universes simultaneously. Even in our universe, this could be subdivided to the point that the entire universe could execute in Planck time and we’d experience the full 13 billion years, if spread across enough universes.
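A sketch of the pre-calculation-and-slicing argument, with the same invented toy rule. Each slice could be handed to a different machine and stepped through simultaneously:

```python
# Precompute the full trajectory, then replay it as ten separate
# one-"second" slices. Concatenated, the slices reproduce the original
# life bit for bit, so nothing inside can detect the staggered replay.
def step(state):
    # Toy deterministic update standing in for the worm's physics.
    return tuple(state[i - 1] ^ state[i] for i in range(len(state)))

def run(state, steps):
    history = [state]
    for _ in range(steps):
        state = step(state)
        history.append(state)
    return history

precomputed = run((1, 0, 1, 1, 0, 1), 1000)   # 10 "seconds" at 100 Hz
slices = [precomputed[i * 100:(i + 1) * 100 + 1] for i in range(10)]

# Stitch the ten independently replayable slices back together.
merged = slices[0] + [s for sl in slices[1:] for s in sl[1:]]
assert merged == precomputed
```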

There are two ways in which the actual experiencing could take place. It could be in the transition between states or the states themselves. I argue that it is in the states themselves and the transition is immaterial.

This allows us to experience billions of years in a universe that exists as only a flash. A counterargument is that the experience actually occurs in the calculation phase. If that is the case, what happens when the simulation is executed a second time, after the calculation is complete? Inspecting the states, we see a complete lifetime, with memories from the first second persisting in the 10th second, despite them perhaps being remembered before they actually occur. What if we run the calculation in reverse, computing previous states from their endpoint? Since this universe appears time-symmetric, we know this should be possible. From the organism’s perspective, does time still run forward? Its memories at each state still reflect a forward progression through time. Assigning conscious experience to the state transitions or calculation phase creates many intractable paradoxes.

A note on what gets the worm from one moment to the next. The computation portion which allows the worm to change state over time is effectively the laws of physics simplified. Instead of electron forces, inertia, and gravity acting on the worm’s neurons, programmers approximate the effects of these forces over time and simplify. If one neuron fires, instead of calculating the position of atoms as they travel along synapses, assume the next neuron fires, and so on. The instructions operating on the virtual worm are analogues of the physical laws of our universe.

Observer moments not in state transitions

3 possible seats of consciousness: states, transitions, or something that weaves them together

The experience of consciousness and the passage of time must occur because of one of three of the following:

  1. Individual states
  2. Transition between states
  3. “Antenna” model

The traditional model, which allows for the existence of time, is that as one state transitions to the next, an observer moment occurs. This avoids many issues which seem counterintuitive, like all moments occurring simultaneously and timelessly, as well as providing an anchor for the triangular outbranching in causal dynamical triangulation (CDT) (I assume).

An argument against this, which works against both states and transitions, is that the type of transition is immaterial to the product states. If we compute the state of a brain 1 minute from now, a state identical to the one it ends up in anyway, the amount of variability in methods of reaching it is immense. We could compute it with bits, or use paper, or people standing with their hands in different positions to represent neurons. The step resolution is even variable, yet the same states occur in between and at the end. If the end thought contains “I feel like 60 seconds have passed” regardless of the simulation rate, that’s evidence against rate being material to the problem.

Time model: consciousness occurs at boundary between past and future

Instead of letting the computer calculate the state changes using its formula, we can do it on paper: write down any previous state, apply the instructions, and get a new state. We can do this from any state to get the next state. Let’s do it ourselves instead of letting the computer: read the state off one computer, calculate the next, and type it into another computer. The worm still experiences a full life, normal time, and no spatial jumps, despite its existence being only a grid of static 1s and 0s sitting on many different computers. [This isn’t a great example]

What if I have a great memory and flip the bits myself by hand to their next state, without calculating? Does the worm still have the experience of the passage of time? Did its experiences occur inside my brain, but remain inaccessible to me because I only see them as 1s and 0s?

If self-aware, is the worm only experiencing the passage of time as I crunch its next state with paper and pencil? We know all its states are invariant regardless of which method we choose to calculate them. That is a strong argument that its consciousness is contained in the static states. If the consciousness existed in the state transitions, which are highly variant, we would expect the worm to have different experiences depending on how the state transitions work. Since the states contain its “thoughts” and behaviors and do not change, it is fair to conclude states are its true thoughts.

What happens if we do the math wrong and go back and correct it? Does it experience an anomaly and then “unexperience” it when we erase it as if it never happened, like we’re wiping its memories?

Counterargument: this is sleight of hand and thoughts are actually the state transitions. You can change how the state is represented and make the same argument in reverse.

In the traditional model, consciousness exists where the computation between states occurs

Consciousness is traditionally considered to exist in the transition from state to state. As the machine computes a new state, something about the transition between one and another, and not the states themselves, gives rise to the sensation of consciousness. If you run a simulation of a worm, or a human, the animal experiences the passage of each moment as the computation of one state turns it into another state.

There are the state transitions and the state transitioner. The transitioner is nothing more than a moving head and a couple of rules, like a Turing machine. Its simplicity makes it a bad candidate for the seat of consciousness. Then there are the state transitions themselves. I don’t understand what they really are, but I need to investigate if I want to argue against them.

Time Can’t Exist

Storing all states creates a timeless, continuously looped experience

The only alternative to this is that the transitions are immaterial and consciousness somehow exists in the states themselves. If all the worm’s states are stored somewhere, then the worm experiences its entire life not just over and over again, but always. It is always experiencing every moment of its existence. This implies the static 4-D block of all spacetime exists and our experiences are an immaterial wave flowing through it. The wave doesn’t occur over time. The wave is fictional time being generated by the existence of states that, if transitioned through, would generate a time-like experience.

The generative function that creates future from past could just as easily reorient time into a spatial dimension. It would be a much more complex generator, but if it could generate this block of spacetime from left to right rather than past to present, we would experience time “flowing” in that direction and what we previously called time would be a spatial dimension.

Do they even need to be written down in order? It’s not the physical proximity of bits that determines their next state or how they’re operated on. If we simply write down all possible combinations of 0s and 1s sequentially, will a time-like experience self-assemble, hopping between states as needed?

A timeless, static, continuous experience is indistinguishable from a single lifespan inside of time

It will still experience time normally. There’ll be no mix-ups or confusion. Each state represents a particular time and the being from a day ago reliving that day or from tomorrow experiencing tomorrow will not interfere with the you now experiencing now. Each state only carries the memories of its past and its own now. It has no knowledge of the others or even of itself occurring continuously. (This probably needs a lot more explanation. I find it intuitive but it seems like a big leap.)

Zeno’s Arrow paradox resolved

Since only states give rise to the universe and observer moments, and the transition between them is immaterial, motion is not necessary. Of the proposed solutions to Zeno’s paradox, only the infinitely many motions of infinitely small distance makes sense, but that doesn’t work for us in a universe with Planck time and Planck distance. At any moment, anything not in motion is simply not in motion. Nothing not in motion has any ability to transition to another state, and if motion did exist, which it can’t, there would be no way for something to move. Atomization is no resolution to the paradox: do objects teleport one Planck length at a time, instantly, at infinite speed, stopping at each Planck length?

Claims that time is a continuous construct with no instants contradict what computers have shown us about consciousness. We may not know what it is, but we have no problem simulating brain-like information processing using stateful computers which represent one moment at a time as a static array of 0s and 1s. If brain-like processing were something that specifically occurred in a universe with real motion, we’d be hitting some very odd roadblocks in programming neural networks that recognize sound and images using stateful machines.

Problem: moving to time as a series of static states turns momentum into a hidden variable, mentioned below with citation.

Space Can’t Exist

Space is also unnecessary

If all that’s necessary to experience a complete lifetime is states laid out sequentially, for example on a computer hard drive, why does the sequence matter? Can we rearrange the states in a different order and prevent the being from experiencing its lifetime? If so, how? If the experience of existing exists in the states, how does it track through space? There’s no hard drive needle moving from one state to the next. No activation is required. No computation between them. It is only the existence of the states that creates the timeless, continuous experience. How would their spatial position affect that? Can I move one bit of one state to another galaxy and interrupt its existence? We know from other examples, like pause, move, and resume, that experience is not changed by position. Why is the proximity of one bit to another relevant to the experience?

Consciousness seems to be a fundamentally nonlocal phenomenon. With no senses, the experience of existing is not necessarily localized to any one place. To use Ned Block’s “China brain” example, if billions of people in China exchange cards in the pattern of neurons firing, where is the consciousness? Hovering over the country? It’s not a sensible question. Only when paired with sensory devices like eyes and ears can consciousness localize itself.

This brings up some strange problems with locality. If the proximity of the bits of the states is irrelevant, why do we experience moments inside a brain with highly localized states? Our consciousness did not land in a rock which contains a pattern of molecules with spin orientations corresponding to the 1s and 0s of this state. It’s inside a wet, but largely traditional computer, computing away. Locality appears to play a large role in our existence and is highly structured. (this is a different argument and doesn’t really follow)

For every conscious experience, can a universe which hosts that experience be generated? Conversely, if a moment cannot generate a universe, is it not experienced? Is a fundamentally nonlocal Boltzmann brain experience unable to generate a history and future which generate itself? Our experience is complex enough to have needed to generate 4 billion years of reverse evolution in a 13-billion-year-old universe in order to have these thoughts.

Mathematical Platonism

What if instead of writing down all the states, we simply write 1 and 0? What is to prevent the states from using those bits and duplicating them as needed? The state 11001011 can get its 1s and 0s from there and still exist in the same way as if it’s written down. 1 and 0 are simply ideas. If we don’t write them, they still exist. If we accept this, space and locality are also unnecessary. The states are like a timeless, endless soup of math that permeate a void. In the way that 2+2=4 always exists, regardless of whether there is a universe for it to exist in, 1s and 0s always exist, and so all combinations of states and their intersections always exist.

We invented math the way it is, with equalities like 2+2=4, for three reasons: conservation, substitution, and nesting. All three are useful when building up space from a soup of nothing, infinitely deep.

Amathematical Universe: locality doesn’t exist

Like Tegmark’s argument for a mathematical universe, I also believe that all mathematical structures exist and are all that exist. Further, I believe all nonmathematical structures exist. If 2+2=4 is represented by the string (SS0 + SS0) = SSSS0 in TNT, then S0 + S0 = SSSSSSSSSSSSSSSS0 also exists. Further, I believe that inconsistent mathematical structures can embed themselves in mathematical structures in this timeless void, creating a horrendous mishmash of every possible true and false statement. That is, a mathematical universe generated from a simple equation repeated on itself also has false statements “injected” at every possible opportunity. Not only that, it has all possible false statements injected at every opportunity.

This makes the amathematical universe much harder to defend, but also avoids assumptions of beauty or simplicity that have no arguable basis other than in reverse (look how simple our universe is, it can’t include amathematical structures). In exchange, I will put forth ways in which existing theorems and observable properties reduce these combinations of amathematical structures to the more manageable straightforward universe we observe.

Math still exists in a void

(duplicate of above) Instead of a universe with time, space, and matter, we are left with a void, nothing anywhere. Even in a void, mathematical truths exist. Imagine an endless soup somewhat resembling Hofstadter’s TNT. If 2+2=4 is always true, regardless of our existence or the existence of this universe, then why isn’t (2+2)+(2+2)=8? Truths that join together are still just truths. A truth assembled from smaller truths has a corresponding complex unassembled truth. In a way, this combining of truths “always” and “continuously” “everywhere” “creates” all possible and impossible universes, at all times. Everything exists and nothing exists.

There the game stops: proof surfaced in 1898 that the reals, complex numbers, quaternions, and octonions are the only kinds of numbers that can be added, subtracted, multiplied, and divided. Real numbers can be ordered from smallest to largest, “whereas in the complex plane there’s no such concept.” Next, quaternions lose commutativity; for them, a × b doesn’t equal b × a. This makes sense, since multiplying higher-dimensional numbers involves rotation, and when you switch the order of rotations in more than two dimensions you end up in a different place. Much more bizarrely, the octonions are nonassociative, meaning (a × b) × c doesn’t equal a × (b × c). All multiplicative chains of elements of R⊗C⊗H⊗O can be generated by 10 matrices called “generators.” Nine of the generators act like spatial dimensions, and the 10th, which has the opposite sign, behaves like time. Sedenions can still be added, multiplied, subtracted, and divided; it’s just that multiplication and division lose most of their useful properties. With octonions, you’ve already lost associativity and commutativity.

The main superficial distinction between a physical and a mathematical universe is substance. A physical universe has substance, while a mathematical one is a potential, or an idea. The differentiation of a substantive and a mathematical universe can be broken down into two components: conservation and self-interaction. Conservation is quickly being discarded as a hallmark of even a physical universe. Whether the universe is infinite in size or there are infinitely many physical universes, conservation is off the table. And what would it add if we did demand it? Second, self-interaction. This is the most important part. A physical universe has objects bumping into one another and bouncing off. We don’t imagine mathematics to have this property. However, in the simulated worm, mathematics perfectly duplicates it bumping into walls. A subatomic particle event could be represented by an equation like 1+3=4, then 4=2+2. When combining, the truths in essence interact with one another in the same way as “physical” particles. Also, as we’re able to simulate portions of a universe on a computer with great success, we’ve shown whatever physicality we consider to be critical for a “real” universe to be directly substitutable with electronic representations which have none of the locality, material, or movement patterns of the physical objects they stand in for.

Not just true math, but false, inconsistent statements also exist in this soup

Most formulations of a mathematical universe do not consider that math “external” to that universe can interact with math inside and cause problems. I do not exclude this possibility, though it makes a mathematical universe harder to defend. I also do not exclude the possibility that inconsistent, false mathematical statements can interact with true ones, causing most universes to be inconsistent and not follow physical laws. Instead, I will resolve these problems using a combination of the continuum hypothesis and a superposition of past and future which also explains the arrow of time.

A mathematical universe excludes a physical universe

Accepting that mathematical tautologies exist in a void necessarily invalidates the existence of a physical universe. When considering how a physical universe differs from a mathematical one, I imagine two things. One, a physical universe has components that can interact with themselves. Two, a physical universe’s components are conserved in some way. For the first, tautologies in a void should have no problem coalescing and interacting with one another, as those are just more complex tautologies that would come into existence on their own anyway, since all possible combinations are always extant. For the second, by making the only difference between a mathematical and a physical universe the conservation of items, observer moments can no longer occur in the physical universe, via the continuum hypothesis.

Mathematical universes lift observer moments away from any extant physical universes

For each physical universe, there are infinitely more mathematical universes that are almost identical to that physical universe but jumbled up in infinitely many ways (Cantor’s diagonal argument) while still containing the same observer-moment lifetime-states as the physical universe. Since that physical universe and the infinite recombinations of mathematical universes both contain the same consciousness lifetimes, the continuum hypothesis forces all consciousness in the physical universe to “land” in the analogous, infinite mathematical universes. Just as we proved above that you could pause, copy, and move consciousness undetectably, these mathematical universes do the same to the observer moments in the physical universe. An infinite number of copies is made and “lifted” to the infinite mathematical universes, forcing all those observer moments to land in a mathematical universe even if a physical universe with one such moment “really” exists. By the continuum hypothesis, the physical universes, if they did or could exist somehow, could have no fraction of observers. Without it, whatever number exists between cardinalities would be the fraction of times the mathematical universes “stole” observer moments from physical ones. Its independence from ZFC and its universality may be a necessity.

This may prevent both simulation and physical realities by additional methods. Simulations require a limited number of states per time period. A simulation which produces 1 second of observer experience through a million cycles still produces infinitely fewer slices than the continuous mathematical structures which exist at all possible (uncountably infinite) states in between those calculated.

Given this, it would be impossible for observer moments to land in a discretely stateful universe, like one that would exist if Planck time were interpreted as the minimal unit of time.

Continuum Hypothesis

The continuum hypothesis prevents a fraction of observer moments from landing in physical or lower mathematical universes

The continuum hypothesis (CH), which very importantly for our purposes is independent of axiomatic set theory, states that there is no cardinality strictly between that of the naturals and that of the reals; in this paper’s terms, there are no fractional differences between infinities. If there are 1 or infinitely many physical universes, which by definition are conserved in some way (or, said another way, their quantity or the quantity of things inside them is limited), then the infinite combinations of mathematical universes will always “lift” these states into their purely mathematical realm. Since the continuum hypothesis prevents any counts between cardinalities, we can be guaranteed all observer moments land in mathematical universes, with no “fraction” of them ending up between cardinalities or in physical universes.
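For reference, the standard statement of CH that the argument above leans on (this is textbook set theory, not an assumption of this paper):

```latex
% Cardinality of the naturals vs. the reals, and the continuum hypothesis:
% no set has a cardinality strictly between them.
|\mathbb{N}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R}|
\qquad
\text{CH:}\quad \neg\,\exists\, S \;:\; \aleph_0 < |S| < 2^{\aleph_0}
```

Gödel (1940) and Cohen (1963) together showed that CH can be neither disproved nor proved from ZFC, which is the independence the text appeals to.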

CH also prevents observer moments from landing in a simulated universe

Mathematical universes do the same thing to computer simulated universes, invalidating the simulation argument. Each observer moment in a simulated universe is “lifted” away to infinitely many rescrambled but almost identical mathematical universes.

Oddly, we can build a simulation which could contain observer moments and those observers would behave as if they were fully conscious, but for each observer moment generated inside the simulation, infinitely more analogous moments would exist as mathematical universes with some underlying soup or noise to increase their quantity. So those simulated beings would instead find themselves inhabiting a world like ours anyway. But then if we simulate them with enough granularity to examine the subatomic realm, they should find it classical all the way down. This is a paradox introduced by this theory.

(Can the void create loop-like links so long as they are not exploitable? Can a particle in this universe contain a “link” to the entire universe and place it inside? Same question for a black hole.)

Without the continuum hypothesis, one could find a non-infinite fraction of mathematical universes between cardinalities. If the naturals represented one set of universes and the reals represented another, a cardinality halfway between those would cause that fraction of observer moments to land in between these two sets of universes rather than always landing in the higher cardinality ones. As long as there are infinitely more of one universe class than its lower cardinality, observer moments will always land in the higher, with no proportion going lower.

We could arguably not have consistent experiences without CH. I’m not sure what it would mean or feel like for two-thirds of our moments to land in a higher-cardinality universe and the remaining third to land in a lower one. Why would that scramble us? Is it possible to envision CH being false, and that preventing any consistent observer moments where CH is false, or in the entire void if CH applies regardless of the set theory chosen?

Boltzmann brains

Most of us should be Boltzmann brains

We are left with a nonexistent universe that, due to the unending combinations of math, assembles moments of existence for every possible being. Since the moment you are experiencing is the result of some possible combination of TNT strings joining, it exists. This is the Boltzmann brain problem: absurd, nonsensical blinks of consciousness should dominate our free-range math void. With so many Boltzmann brains springing into existence continuously, your experience should be one of them. Boltzmann brains are traditionally a problem for an infinite universe, but they’re just as big an issue, if not bigger, for a mathematical one.

Of all the Boltzmann brains possible, there are many other options for us to land in that aren’t a mysterious void. We could be the fluid dynamics inside a star, the spin of atoms inside a rock, the passing of papers among people in China, the gravitational attraction of stars in a galaxy.

Not sure if this is the same as Donald Hoffman’s idea, but for us to have a sequence of many observer moments that are coherent we need a substrate to interact with that allows us to generate further moments. Landing in an evolutionarily-generated animal whose goal is to keep existing and further observe is a great opportunity.

Worse, we should jump among all Boltzmann brains

In models that have time or where the “now” sensation is generated by the transition between states, Boltzmann brains are a one-off problem. Either you’re one or you’re not. If you escape being one through probability, you’re set. In this model, they’re a continuous problem that can lift away your consciousness at any and all moments. However, by embracing this possibility, we come up with a better coherent explanation than by ignoring it and saying time is real and the progression in one mathematical structure accounts for our experience.

Instead, our moments have landed in a historied, consistent, conserved physical-like universe

Instead, you can trace a consistent history back to the beginning of the universe and you seem to have a future. We would expect most Boltzmann brains to be flashes of disembodied self-awareness. Not only that, you should not perceive consistent laws of physics. Objects should be popping in and out of existence all around you. There must be a mechanism streamlining the limitless potential mathematical universes to cause us to land in this mostly consistent one resembling a physical, matter-conserved universe.

Arrow of Time

Past and future are generated from our current observer moment

Experience as we understand it is the passage of time. If time is reversed, physical laws still apply. This is mostly consistent with the “block of paper” model of the universe in 4-D. With processing between states being irrelevant, and “now” instead arising from the existence of adjacent states, time is an artificial construct. In our model, what “first” exists is now. Past and future come “later.” Since time doesn’t exist, what we really mean by “later” is that our observer moment is the root “tile” that determines the orientation of other tiles. Regardless, the reversibility of artificial time is necessary to explain our experience as historied brains in a consistent universe.

There are alternatives to reversible time that are still built up from now rather than the distant past. Physical laws could be identical whether going forward or backward in time from now, making the past a reflection of the future. Or, we could have totally different physical laws that apply as now reaches back toward the past vs. toward the future. Instead, we have reversible laws which, most importantly, generate a consistent past and future regardless of what our current observer moment is.

Time’s apparent reversibility compounds observer moments to give us a consistent history

The reversibility of time is what allows our consciousness to land in a universe with a consistent history. Your current moment generates a past all the way back to the beginning of the universe and simultaneously generates a future all the way toward its heat death. This moment is the root tile placed on a blank board. The shape of the tile, analogous to physical laws, allows past and future moments to be placed afterward, forming a light cone in both directions. All future and past moments along this path “simultaneously” generate their light cones as well. They are not necessarily in the same universe or related to each other, as no physical reality exists.

Update: Spacetime block generation can be independent of time sweeping

It’s normally thought that time is the operation of physical laws on the current moment to create the next moment. Time generates new space from this space. In our model, we take the same concept and instead argue this moment generates future and past moments. However, since time is a nonexistent continuous sweep through a spacetime block in our model, it’s not necessary that the generation of the block proceed from this moment. The block can just as easily be generated from the end of the universe backward as the start of the universe forward or from now in both directions while time sweeps through it continuously.

Most likely the sweep “starts,” or has a tiling context “beginning,” with a single node of a Feynman diagram and propagates outward from there. This requires all diagrams to be linked together rather than forming separate sets of particles which do not interact with each other. That could be considered circumstantial evidence for the theory, though if there were classes of particles that did not interact with others at all, we would only be aware of the ones that make up ourselves.

This is why Feynman diagrams can be rotated through time and still be equivalent. The node network of our spacetime block expands outward and is independent of the experience of time sweeping through the network.

This was considered to deal with the issue of now being potentially far more complex than the start or end of the universe, due to the complexity of structures our bit-like consciousness is embedded in. It’s hard to imagine how now can be the starting point when the brain is so much more complex than the consciousness it carries. That is a large mishmash of matter and space to generate.

Imagine your current moment of consciousness existing in a void. It always exists, as do all other combinations of all tautologies. For the sake of simplicity, imagine this moment as represented by some binary string representing its state: 0101101. The infinite tautologies that are always extant can operate on that state, generating all other possible states. Almost all transform it into noise, but some do so in a pattern. These very few are analogous to physical laws like momentum or gravity. Of the infinite possibilities, very few strictly conserve 1s and 0s, and very few of those have an exact opposite transformation that undoes the previous one. These are the laws of our own universe, which generate past states and future states from 0101101.
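A minimal sketch of one such “reversible law,” using a fixed permutation of bit positions as a hypothetical stand-in. Permutations conserve the 1s and 0s and have an exact inverse, the two properties singled out above:

```python
# A moment is a bit-string; a "law" is a transformation on it. Toy law:
# a fixed permutation of bit positions (invented for illustration).
NOW = (0, 1, 0, 1, 1, 0, 1)
LAW = (3, 0, 6, 1, 5, 2, 4)   # bit i of the next state comes from bit LAW[i]

def forward(state):           # generate the "future" moment
    return tuple(state[i] for i in LAW)

def backward(state):          # the exact inverse: generate the "past" moment
    inverse = [0] * len(LAW)
    for i, j in enumerate(LAW):
        inverse[j] = i
    return tuple(state[i] for i in inverse)

assert backward(forward(NOW)) == NOW   # the past of the future is now
assert sum(forward(NOW)) == sum(NOW)   # 1s and 0s are conserved
```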

With two of those opposing transformation rules operating on our moment (while all other possibilities are operating on it “simultaneously”), they are generating past moments and future moments, spanning “outward” like a light cone. All other rules are producing their own similar cones, but most of those lead to nonsense moments where consciousness does not exist. Others produce valid past or present moments with anomalies like a region with no gravity or intense radiation or you with an extra arm or turned inside out. However, with reversible laws applied to every generated moment, every moment can generate a portion of the universe inside another’s light cone. Now generates a nanosecond ago which in turn generates now which generates a future which in turn generates now.

This compounding is the magic bullet that allows us to deal with amathematical structures in a math-like universe. Your current observer moment is generated by every particle, structure, or node of space that exists in both your forward and reverse light cones, from the big bang to the end of the universe. With your current moment consistently generated by infinitely many (which aleph?) other moments, and with so many more valid moments generated than invalid (are they? how do we know? can we count like that?), your current observer moment appears consistent and your existence stretches back to the beginning of time, just as in the traditional model.

Boltzmann brains by definition do not get their current moment compounded from past and future

It is the reversibility of time that creates identical observer moments from each now-tiled moment. Your current observer moment does not just exist in isolation. It is also generated by every future and past moment. Boltzmann brains do not have this luxury, which is why they are not a problem for this model and instead our observer moments land in a universe with directional, reversible time.

Horizon problem

With time tiling backward to generate the past from now, the horizon problem is nonexistent. An observer moment inside a stable, long-lived universe with complex features is what generates the dense, hot, uniform universe just before the big bang. Inflation is acceptable, but not at all necessary.

Alternate/complementary resolution: as all nodes generate all other, consistency extends beyond one node’s light cone as it generates other nodes which have their own light cone and so forth, “stabilizing” the entire universe, or conversely, allowing destabilization in one cone to scrap an entire universe rather than be locally significantly hotter or colder.

Mathematical universe vs. fluctuation multiverse

Both have similarities as they give rise to an unlimited number of possibilities through different means. However, mathematical universes have the same probability of generating a single node or TNT string as an entire universe, while a multiverse landscape must deal with traditional probabilities of particles coalescing. In the multiverse model, a single Boltzmann brain has an astronomically higher probability of forming than an entire universe. In the mathematical model, the reinforcing of nodes backward and forward in time, and from other light cones, actually increases the probability of an observer moment landing in a historied, consistent “universe.”

Two-State Vector Formalism

TSVF supports the generation of observer moments from both the past and future direction

The two-state vector formalism interpretation of quantum mechanics supports the compounding of multiple now-tiled spacetime blocks. Now, in TSVF, can be inferred from a combination of the future and past moments. Since there are infinitely many more non-now moments generated for us from the past and future than there are of this moment, TSVF makes perfect sense. Each moment is simply an interim state generated between the two surrounding moments, including the current state.

Continuous time requires hidden variables

Without TSVF generating now from past and future moments, we are left with continuous time, something that mutates the past into the present moment. David H. Wolpert, Artemy Kolchinsky & Jeremy A. Owen have shown that continuous time is a hidden variable theory.

“How does the classical world with its arrow of time emerge from the quantum world where the governing equation is time-symmetric? And the answer is: the classical world emerges by a process of decoherence, which is to say, by the creation of large (O(10^23)) networks of entanglements which (it can be shown mathematically) have behavior that is indistinguishable from classical systems. It is very similar to how thermodynamics and the time-irreversibility of the second law emerge from time-reversible Newtonian mechanics”

Quantum foam and randomness are an artifact of our lifting from a lower cardinality

We could have had a purely classical universe (other than the black body problem), but instead we are in a universe where any given subatomic event has a random component and particles are rapidly coming in and out of existence. We were lifted into this more complex, random universe because for each classical universe, there are an infinite number of identical quantum universes where a single particle is in an infinite number of places. Our observer moments are lifted to a very high cardinality universe with the maximum amount of “noise” it can have without that noise disrupting our conscious experiences.

(We could also be in a quantized universe because Navier-Stokes is unbounded and any true flowing of fields that existed would create infinitely deep, fractal-like eddies which could also bubble back up to the macro scale and consume the universe if several interacted or even if one interacted with itself.)

If our brains operated on a smaller scale that was disrupted by quantum mechanics, we would find the quantum realm that much smaller. Just enough so it wouldn’t. This is the opposite of most mathematical universe formulations, which place us in the simplest universe of a given observer-moment. Due to the amathematical universes, continuum hypothesis, and “lifting,” I argue instead that we are in the most complex. Our universe is maximally complex, especially beneath the scale required for a biological neural network to function classically.

Exactly how much larger could the quantum realm be before it would disrupt our neural networks? The universe spans ~40 orders of magnitude. If neurons are within 1 or 2 orders of magnitude of their behavior being destabilized by QM, that is circumstantial evidence for this theory.

Is it possible this universe jumping is also responsible for the quantum-classical transition? Can experiments be devised that locate this transition in a way that reveals its relation to observer space? Why would it be that the hopping between universes and either the randomness of QM or its foam (distinct things) wouldn’t conform to the stretching and deforming of space from gravity or high speeds?

Further, this random quantum foam we experience is our own observer moments hopping between universes just as the pause/copy/paste worm simulations were. It’s not necessary for the consistency of our experience that every quantum event be consistent, but it is necessary for every macroscopic event to be.

Quantum mechanics is very consistent in some ways, such as how individual particles behave, but perfectly inconsistent in others, like radioactive decay being truly random.

We should be able to design very interesting experiments to tease out observer effects and how observer moments are compounded. If two researchers are working on an experiment, can one be designed that fails when I hear about it but succeeds when I conduct it myself? If there is an unexpected result, like both researchers finding the same thing when this theory predicts only one should, is that evidence of further observer-moment compounding resulting from multiple observers compounding each other’s moments?

Why then does launching subatomic particles into detectors just register nothing at all most of the time? If that were the case, it could be used as a perpetual motion machine somehow, as ejection of particles produces a backward force yet they hit nothing.

We should find our universe at certain minima and maxima. Just beneath the non-randomness needed for neurons to operate properly, we should find the universe maximally complex, with truly random events occurring as deep as we can inspect, without the ability to bubble up and disrupt our brains.

Also, as Donald Hoffman argues, the physical world is indeed generated to represent each conscious experience. I disagree, though, that the physical world is incomplete, merely an interface consciousness can “land” in, and that we will not find answers in classical mechanics or 1s and 0s because the physical world is an approximation that doesn’t fully capture consciousness. Consciousness can generate as complex a structure as it needs to represent itself, and it has generated a pretty fancy brain to live in, as well as a sense of time. I think the representation is complete. Plus, we’ve had great success generating our key components, like pattern recognition and game playing, with artificial structures that run on computers. I also disagree that the world doesn’t exist beyond our perception, or exists other than as we perceive it. We generated a complete world by generating moments that generate other moments recursively. QM as a seat of consciousness is a misdirection. Using it to say things don’t exist until we see them is another misdirection.

Superdeterminism

As a solution to entanglement. Need to consider. Does it hold up if events are truly random?

Non-locality is expected

Just as now generates future and past, more often, future and past generate now. This is how nonlocality is interpreted. An entangled pair created now propagates forward in time, instantly generating a future. If the other particle is measured, that particle then generates states backward in time which reach the present, which then propagates forward in time again to reach the other particle. This is how particles can “communicate” backward and forward through time with anything in their light cones.

Just as now generates a past and future in its light cone, those pasts and futures generate the space adjacent to now, which in turn generate their light cones. This expands “instantly” to create an entire universe from a single localized observer moment.

There are a number of experiments, like the delayed-choice quantum eraser, which demonstrate results like this. The past is rewritten to maximize consistency with now, an expected effect of now being what “first” exists and generates the past.

Entanglement is an “exploit” of the rules that create our past and future. It appears counterintuitive because the universes generated by each moment overlap enough that only those with maximum overlap contain moments, giving the same appearance of a consistent past and future that helped us escape the amathematical universes.

Locality/Proximity formation

Other universe models have proximity as a native feature. Most mathematical and physical models of space imply that things can be near or far from each other. This is not a feature of the observer space model and must be generated statistically from the intersection of all possible and impossible mathematical universes.

To build up locality from nothing, each “node” of space generates all possible and impossible nodes using all four division algebras. Only quaternions and higher are noncommutative, a requirement for directional time. Nodes generated from complex numbers would be commutative and generate nodes which are identical to each other rather than opposites (representing a forward and backward step through time).

With all possible nodes generating all other possible nodes with quaternions or octonions, an infinitely complex graph is created which still obeys some properties, like non-commutativity. In this graph, a given node will occur infinitely many times, as many other nodes generate it by applying division algebras to themselves. If identical copies of this infinite graph are superimposed upon each other at the point of our repeated node, I suspect some edges and nodes will appear a cardinality more often than others. Further, and I don’t know if this step is implied by the previous, all other duplicate nodes are similarly overlapped from the same graph, and then those graphs are again overlapped with each other. Now, as long as some edges occur a cardinality more often than others, those in the lower cardinality can be pruned, as the higher occurs infinitely more often, preventing their “existence,” as universes have a 0 probability of using that node or edge over the higher cardinality.

I suspect this “pruning” of lower cardinality edges and nodes changes the infinite graph from one in which all nodes (points in space) are equidistant to all other points (reachable from 1 edge) into a graph resembling our physical universe with intuitive locality, in which each node is connected to exponentially more nodes via an increasing number of edges (distance). Further, and not implied by the previous, the network is consistent among all nodes, in that each finds itself a fixed number of edges away from every other node regardless of observer.

An attempt to duplicate this on a small scale: imagine 4 nodes numbered 1 to 4. The starting graph is a square with an X inside. The operator is +1 or -1. Let’s also include operators +2 and -2 for each node, to show that +/-1 should win out by creating a tighter graph. A toy version is sketched below.
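The setup is the author’s; the counting scheme below is one guess at how to run it, using walk counts as a crude stand-in for “how often a path is reinforced”:

```python
from collections import Counter

# Four nodes on a ring (labeled 0-3 here), generated by the operators
# +/-1 and +/-2 mod 4. Count how many distinct operator sequences of
# length k connect node 0 to each node.
NODES = 4
OPERATORS = [+1, -1, +2, -2]

def walk_counts(start, k):
    """Number of length-k operator walks from `start` to each node."""
    counts = Counter({start: 1})
    for _ in range(k):
        nxt = Counter()
        for node, ways in counts.items():
            for op in OPERATORS:
                nxt[(node + op) % NODES] += ways
        counts = nxt
    return counts

for k in (1, 2, 3):
    print(k, dict(walk_counts(0, k)))
```

Whether this particular tally makes +/-1 “win out” is exactly the open question the experiment is meant to probe; with only 4 nodes, +2 and -2 coincide, so a larger ring may be a fairer test.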

That wave functions span the entire universe, just with decreasing probability, seemingly violates the CH argument, but it is consistent with the emergence of locality from the compounding of edges by count. All points are in proximity to all other points, just with a vanishing probability, as paths taking direct links between distant points are less likely than paths going through edges one unit long.

If the separation isn’t a full cardinality and is instead simply a lot more often, it would explain the probability of tunneling decreasing with distance. Very short tunnels in the quantum range are possible because edges directly connecting two distant nodes are still somewhat probable paths to take between them. The probability function relating distance to probability of tunneling would actually be expressing the node density and connectivity.

Causal dynamical triangulation (CDT) creates triangles radiating outward from the current time and requires the edges of its simplices to touch in order to build up traditional locality. Does CDT explain why this is necessary? This theory does. The infinite nodes radiating in both directions from each now overlap, reinforcing each other from time +infinity to -infinity. Only the edges along these triangles occur an infinite number of times; all other edges are not reinforced by other nows, making them “disappear” via the CH.

Further, why did traditional physics come up with CDT? What problem does it solve for them? In this model, space and locality can’t exist at all without a similar node-based space; there is no substitute. It would be unusual for someone to come up with a solution to a non-problem.

The generative function which creates past and future from each node is consistent with changes/movement propagating through edges at c in a block universe. Whether moving diagonally (through space) or vertically (through time), a change can only cross one edge at a time. The process “starts” from a single node and generates an entire universe, complete with history, by deploying an infinite number of nodes which each deploy an infinite number of nodes. However, a node a billion years in the future, following consistent rules, will also generate this node now, mostly via edges leading from this node through other nodes. If the argument that states, not their transitions, carry consciousness holds, each cluster of nodes representing an observer moment would observe a universe moment corresponding to whatever reached it at c. This creates a warped “plane” of simultaneity which “always” flows through all moments and can differ between observers, as shown in relativity, while preserving the same separation of spacetime points for all observers.
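
A minimal sketch of the one-edge-per-tick propagation (the ring graph is an arbitrary stand-in for the node network): breadth-first search recovers the “light cone” of a disturbance, the set of nodes within graph distance t after t ticks, regardless of whether the edges are read as space steps or time steps.

    from collections import deque

    def light_cone(edges, source, t):
        """Nodes reachable from source within t edge crossings (BFS)."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            if dist[node] == t:
                continue
            for nxt in edges.get(node, ()):
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        return dist

    # A small ring: each node touches its two neighbors.
    N = 12
    ring = {n: [(n - 1) % N, (n + 1) % N] for n in range(N)}
    print(light_cone(ring, source=0, t=3))  # everything within 3 edges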

Speed of light

The speed of light is the speed of all objects through spacetime. It’s also the maximum “angle” between nodes when stepping forward or backward through time along edges. It should have some relation to the current size of the universe: the more future and past nodes that can generate the current node, the more the current node exists. A photon follows a geodesic across spacetime. If the universe is homogeneous and isotropic (all points and directions look the same), the spatial component of the photon’s energy-momentum will be inversely proportional to the size of the universe. That’s cosmological redshift: λ ∝ R.
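
The redshift bookkeeping itself is standard cosmology rather than anything specific to this model; a photon’s wavelength stretches with the scale factor:

    # lambda_obs / lambda_emit = R_obs / R_emit (standard relation).
    def redshifted(lambda_emit_nm, R_emit, R_obs):
        return lambda_emit_nm * (R_obs / R_emit)

    # A 500 nm photon emitted when the universe was half its present
    # size arrives at 1000 nm, i.e. redshift z = 1:
    print(redshifted(500.0, R_emit=0.5, R_obs=1.0))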

It’s also a measure of the maximum number of nodes a node can connect to: a greater number of connections means a higher speed of light, and vice versa. Depending on how many nodes each connects to, relaxing the node network will result in different overall shapes. Relaxing a network with very high one-to-many fan-out will produce very steep angles between nodes when stepping forward in the time direction. Many nodes being reachable from one edge corresponds to a high speed of light, as more “distant” nodes are reachable in a single edge; a toy count follows.
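
The toy count below (with free parameters b and k of my choosing) illustrates the claim: with fan-out b, roughly b^k nodes are reachable in k edge crossings, ignoring revisits, so a larger b plays the role of a larger c.

    # Reachable-node count per step for a few assumed fan-outs.
    for b in (2, 4, 8):
        print(f"fan-out {b}:", [b ** k for k in range(5)])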

This also opens a strange can of worms: the speed of light at this moment could depend on the size of the universe at the end of time, or on how long the universe lasts, while still generating a past which appears to have the current speed of light. The speed of light tomorrow could then be different, because tomorrow generates a different consistent past.

Doomsday Paradox

Imagine a ball pit of unknown depth. In it are n balls, numbered sequentially from 1 to n and randomly distributed. You don’t know n; it could be 4 or 4 quadrillion. Grab one. You pulled number 2. Is it more likely the pit is a few inches deep with only 4 balls, or a mile deep with 4 quadrillion? The numbers say it’s much more likely there are only 4. Not guaranteed at all, but more likely. You’re able to extract probability information about the depth of this chasm from a single event. That ability to know is in the nature of any universe where probability exists, or so we think.
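
Spelling out the ball-pit arithmetic (the even prior over the two candidate depths is my assumption, for illustration): the likelihood of drawing ball number 2 from a pit of n balls is 1/n, so the posterior odds are just the likelihood ratio.

    from fractions import Fraction

    drawn = 2
    shallow, deep = 4, 4 * 10**15     # candidate ball counts

    like = {n: Fraction(1, n) for n in (shallow, deep)}  # P(draw #2 | n)

    # Even prior odds, so posterior odds = likelihood ratio:
    odds = like[shallow] / like[deep]
    print(f"odds shallow:deep = {int(odds):,} : 1")  # 10**15 : 1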

From your birth order, you can guess how long humanity will last

You are human number 100 billion or so. Did you most likely draw that birth-order number from a pool of 1 trillion or one of 100 quadrillion? It’s much more likely you drew it from the lower pool. You’re able to extract probability information about how many humans will ever live. You shouldn’t be able to look forward into the universe like that, not in the universe as we understand it. Future knowledge, even probabilistic knowledge like this, is supposed to be inaccessible. This is called the Doomsday paradox.
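
The same move in Gott’s standard form (a textbook Doomsday-argument calculation, included for illustration): if your birth rank r is a uniform draw from 1..N, then with 95% confidence r/N lies between 0.025 and 0.975, which brackets the total number of humans N.

    r = 100e9                         # approximate birth rank from the text
    low, high = r / 0.975, r / 0.025
    print(f"95% interval for N: {low:.3e} to {high:.3e}")
    # roughly 1.03e11 to 4.00e12: between ~103 billion and 4 trillion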

Another example: from when you joined a web site and how many posts existed at that time, you could estimate how many posts the site will ever have.

If you can figure out how many people will ever live, you’re not in a physical reality

This paradox completely unravels the traditional conception of spacetime. Traditional spacetime involves a far-distant past starting point which sweeps forward, generating moments which then become the past, up until now. In this model the future is not yet generated; it does not exist. The Doomsday paradox gives you access to information as if you were on a progress bar and the future were almost as knowable as the past. It places you at the top of a normal distribution curve, with the past telling you the approximate width and height of the curve, looking directly into the future as if it were as real as the past. This is unbelievably strange and should not be possible in any way if spacetime is what we imagine.

You can only determine your position if the universe starts from now and expands into past and future

However, it is completely reasonable to calculate that you are closer to the middle of existence than to its edge in a mathematical universe which originates from now. Now generates past and future simultaneously. If you start from the middle and generate in both directions, it is reasonable to expect to find yourself about in the middle of existence. You would expect, probabilistically, decay in both the forward and backward directions if now is what exists first and generates past and future “later”; a toy simulation of this follows.
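
A toy simulation of that expectation (the survival probability p and the symmetric growth rule are my assumptions): grow a timeline outward from now, each direction surviving another step with probability p, and your fractional position clusters near the middle rather than the edges.

    import random

    def span_position(p=0.99):
        """Grow past and future from now; return your fractional position."""
        past = future = 0
        while random.random() < p:
            past += 1
        while random.random() < p:
            future += 1
        return (past + 1) / (past + future + 1)

    random.seed(1)
    samples = sorted(span_position() for _ in range(20_000))
    mean = sum(samples) / len(samples)
    print(f"mean position {mean:.3f}")                  # ~0.5: mid-existence
    print(f"5th-95th pct {samples[1000]:.3f}-{samples[19000]:.3f}")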

Anywhere you can consider the Doomsday paradox, you cannot be in a physical universe

The Doomsday paradox isn’t an artifact of anything specific to our universe; it occurs wherever probability exists. As such, it is actually evidence of a now-tiled mathematical reality. Just as all observer moments in hypothetical “physical” universes are lifted into mathematical ones via the continuum hypothesis, they are also lifted into now-tiled mathematical universes by the existence of probability. If you can consider the Doomsday paradox, you are not in a physical reality, because probability exists, and as soon as probability exists (the possibility of more than one outcome), your observer moments are lifted into a mathematical universe.

Probability is the substance of the universe

Rather than space or time, it is probability that is “real,” in that the universe seems to keep count of all possible worlds and place observer moments according to a probability distribution. That we can think about probability at all, in other words, that more than one thing can happen, eliminates physicality and pulls back the curtain of our amathematical Platonist universe, in the same way that finding yourself holding a gun which can fire an unlimited number of bullets reveals you are not in a real world but in a movie.

Problems

This theory suffers from uncountable bizarre problems as it tries to reconcile our current, consistent, historied observation with the conclusion that time, space, and matter are nonexistent, and that not only all mathematical universes exist but also all amathematical universes and random chunks of math “floating” in an eternal, timeless void. There are things it must explain and things it need not explain.

A brain is a complex way to generate its underlying state

It’s hard to defend the claim that an entire brain was generated as an observer moment and then generated all space and time around it, forward and backward. A brain is trillions of atoms per neuron, and a neuron encodes only a few bytes of actual state. The state or experience should be generated first, followed by the minimal substrate for it to exist in (an organic brain is not minimal), one which also has the maximum number of time-forward and time-reverse nodes to generate itself (13 billion years in reverse; how many forward?). The more complex it is, the longer it takes to evolve, the bigger a history it gets, and the more likely it exists?

Youngness problem

Need to review what this actually is again (roughly: in eternal inflation, young universes vastly outnumber old ones, so typical observers should find themselves earlier than we do). Maybe it will be solved by us hopping to a universe that is more complex in some way, or has a longer or shorter history.

Reconcile QM and GR

If the reconciliation is that QM is universe hopping and GR is our observer-generated, nonexistent now-line traversing a spacetime block, then QM should be immune from relativistic effects. Depending on which parts of QM are universe hopping and which aren’t, some QM effects should be impossible to predict. Or not; hopping has a probabilistic outcome.

Synchronization of the now-line

If time is not to exist, the now-line might need to be synchronized with itself across space. Is the now-line a tracing of whatever has had time to reach me? That seems too simple.

The now-line is also independent of the generation of the spacetime block. Whatever rules generate the block adhere to their own generating logic, but the now-line’s imaginary traversal moves at exactly c at all points through the block, which should be independent of generation. Separating these two is going to be hard. The laws that generate space from a node are not our laws of physics, since those are time-bound.

Must Explain

Need Not Explain

Predictions

Block spacetime generated from the big bang, with a “now” plane flowing through it so that each tangent line moves at c, should warp significantly at some points. The warping should be enough to skew some numbers for objects flying toward each other. A now-generated block should skew differently: I’d expect now to be a flat plane with space skewed instead, perhaps in some measurable way that differentiates the two. I have no idea what “real” time would do, as that concept has been gone from me for over a decade.

To maximize the existence of each node/moment: the more nodes in its light cone, the more often that node occurs, and so the more it exists. We should therefore be in a universe with an infinite past and future. Evidence of this universe lasting forever through some means would be good circumstantial evidence.

Summary

  1. Consciousness can be decomposed into static states.
  2. Observer moments are generated by the existence of these states, not the transition between them.
  3. A being experiences all moments of its lifetime continuously simply by having its states “written down.” Time need not exist.
  4. Why write things down? The numbers exist regardless. Space need not exist.
  5. Zeno’s paradox is dodged by the absence of motion/unimportance of state transitions.
  6. The “ensemble” is too restrictive. Non-mathematical and inconsistent mathematical structures exist, too.
  7. For each physical universe, there are infinitely many mathematical universes, preventing observer moments from existing in a physical universe.
  8. The continuum hypothesis means a 0, not fractional, chance of landing in a lower-cardinality mathematical or physical universe.
  9. In the same way physical universes can’t contain observer moments, neither can simulated ones.
  10. Boltzmann brains would generate histories and futures which generated themselves, making them no longer Boltzmann brains but like us.
  11. Observer-moments provide a “tiling context” generating a past and future which each in turn generate that moment, if consistent.
  12. This is why the universe appears consistent: many more consistent universes than inconsistent ones are generated by every node in our light cone.
  13. TSVF supports the generation of now from the past and the future for each moment.
  14. The Doomsday paradox only makes sense in a mathematical universe generated from now, not in a physical one with only a history.
  15. Whenever we can consider it, we cannot be in a real physical universe. It’s a smoking gun.

Implications

Doomsday paradox

Gleaning information about the future, or even the past, from now should not be possible outside of a multiverse, implying that probability is the real substance of the universe.

Continuum hypothesis

Unusual separation of cardinalities required for observer moments to maintain separation from their complements in higher or lower cardinalities.

Feynman diagram rotations

Circumstantial evidence of an eternalist block universe in which time is not a particular spacetime direction.

Zeno’s paradox

Evidence against “flow” of time.

Navier-Stokes insolvability

Remote possibility of infinite undulating fields coalescing into fractal structures as subatomic particles. Highly speculative.

List of paradoxes to go through

Questions

Why is there an arrow of time?

Directional, reversible time allows past and future observer moments to generate each other, compounding these moments and pushing them to higher cardinalities, placing our consciousness in historied, consistent universes.

Why is math unreasonably effective at describing the universe?

Just as we tuned the axioms of set theory to produce and solve interesting problems, the mathematical universe we inhabit was built on rules that create interesting, lasting, non-consumptive structures.

Why are mass and energy conserved?

As all possible universes exist, if there were a way to bypass conservation of mass and still have sentience in the same universe, that sentience would eventually craft something that could create a runaway filling or emptying of its entire light cone, removing our observer moments.

Why are things truly random at the quantum level?

Without true randomness, conservation of mass could be violated, leading to the situation described above.

Why are there no aliens visible?

To maximize the number of observer moments and sentient beings, c keeps them distant and places black holes nearby for them to fall toward, maximizing their lifespans. Or: the absence of ETI gives us information that interacting civilizations destroy many observer moments, so the universes we inhabit appear isolated.

Why does the universe limit mass by surface area rather than by volume?

Why does the continuum hypothesis exist outside of axiomatic set theory?

I gotta learn more to answer this one, but we need it to rule out fractional moments or the whole thing will be a mess.

What’s going on at the quantum level? Are we ramming the fundamental structures of logic into each other at supercolliders?

Probably not. I suspect it’s more like the welling up of infinitely deep fluid-like eddies into singularities. No real idea, though, just guessing.

Warning: “consistent histories” is already a reserved term in QM.

Potential Allies

Email

contact at this domain