This blog explains why the multiverse (the ‘many worlds’ picture of quantum theory) is real, based on my reading of David Deutsch’s masterpieces, The Beginning of Infinity and The Fabric of Reality.
As I’ve pointed out in my previous posts, I highly recommend reading the books themselves. All errors of interpretation in this blog are mine only. Also, Brett Hall’s podcast series, TokCast, is a valuable supplement.
A future post will explore the implications of quantum theory regarding universality and computing.
Ok here we go. Strap yourself in. Quantum theory, and specifically the idea of multiple universes, warps the mind, and for me at least, sits uncomfortably. But that is the nature of reality. We have to think beyond our immediate intuitions and what we can physically observe.
First, Deutsch illustrates the reality of the multiverse through the use of a fictional story of a ‘transporter’, a starship’s teleportation device, to demonstrate how different universes are affected by each other.
So, let’s explore the fictional world Deutsch describes.
There are a few conditions on the flow of information:
One cannot send a message to the other universe;
One cannot change anything in one’s own universe sooner than light could reach that thing;
One cannot bring new information – even random information – into the world: everything that happens is determined by the laws of physics from what has gone before;
One can bring new knowledge into the world. Knowledge consists of explanations, and none of those conditions prevents the creation of new explanations.
And there are two parallel universes. You can think of them as two instances of the same object.
In Quantum Theory (QT), two or more initially fungible instances of an observer become different, making their outcomes strictly unpredictable to that observer.
The multiverse is real because there are circumstances in which histories affect each other in ways that do not amount to communication, and the need to explain those effects is what gives multiverse theory its credence.
Though the universes are initially fungible, once the voltage surge occurs in only one of them, a wave of differentiation between the universes spreads in all directions through space.
Since its leading edge travels at or near the speed of light, the head start that some directions have over others becomes an ever-smaller proportion of the total distance travelled, and so the further the wave travels the more nearly spherical it becomes. Deutsch calls this the sphere of differentiation.
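As a back-of-the-envelope illustration (my own sketch, not from the book): a wave of differentiation with a small head start in one direction deviates from a sphere by an ever-shrinking fraction of the distance travelled.

```python
# Toy arithmetic sketch: a 1-metre head start (an assumed figure) in one
# direction becomes a vanishing fraction of the total distance travelled,
# so the wavefront approaches a perfect sphere.

C = 299_792_458.0        # speed of light, m/s
HEAD_START = 1.0         # assumed head start in one direction, metres

for seconds in (1e-6, 1.0, 3600.0):
    distance = C * seconds
    anisotropy = HEAD_START / distance   # fractional deviation from a sphere
    print(f"after {seconds} s: anisotropy = {anisotropy:.3e}")
```

After an hour the deviation is below one part in a trillion, which is why the sphere description is a good one at everyday scales.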
With regard to the sphere of differentiation: there are two common-sense intuitions that seem to prove, respectively, that (a) it can change very little; and (b) it changes everything.
How (b) plays out, in a strict physical sense:
When news of an event reaches a planet – say, in the form of a pulse of photons from a communications laser – the beam will impart momentum to every atom exposed to it, which is every atom in roughly the half of the planet’s surface facing the beam.
These atoms would then vibrate a little differently, affecting the atoms below them through interatomic forces. As each atom affected others, the effect would spread rapidly through the planet. Soon, every atom in the planet would have been affected – though most of them by unimaginably tiny amounts – breaking the fungibility between each atom and its other-universe counterpart. Hence, it would seem that nothing would be left fungible after the wave of differentiation had passed.
These two opposing intuitions reflect the ancient dichotomy between the discrete and the continuous.
The argument that everything in the sphere of differentiation must become different depends on the reality of extremely small physical changes – changes many orders of magnitude too small to be measurable.
The existence of such changes stems from classical physics, in which fundamental quantities are continuously variable. In sharp contrast, QT puts forward the worldview of discrete variables.
In this respect, Quantum Theory supersedes classical physics, including General Relativity.
Thus, for a typical physical quantity, there is a smallest possible change that it can undergo in a given situation.
e.g. There is a smallest possible amount of energy that can be transferred to any particular atom. The atom cannot absorb any less than that amount, which is called a ‘quantum’ of energy.
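A minimal numeric sketch of this (standard physics, not specific to Deutsch’s story; `quantum_of_energy` is my own hypothetical helper): the smallest amount of light energy an atom can absorb at frequency f is one quantum, E = h × f.

```python
# Energy of one quantum at a given frequency. PLANCK_H is the exact SI
# value of Planck's constant.

PLANCK_H = 6.62607015e-34          # J*s

def quantum_of_energy(frequency_hz: float) -> float:
    """Energy of one quantum (photon) at the given frequency, in joules."""
    return PLANCK_H * frequency_hz

# Green light at roughly 5.6e14 Hz carries quanta of a few times 1e-19 J:
print(quantum_of_energy(5.6e14))
```

An atom exposed to such light either absorbs a whole quantum of that size or none at all – there is no smaller transfer.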
The practical effect of this is that – in the fictional multi-world story that Deutsch presents – not all atoms are changed by the arrival of the [inter-universe] message.
The typical response of a large physical object to very small influences is that most of its atoms remain strictly unchanged, while, to obey the conservation laws, a few exhibit a discrete, relatively large change of one quantum.
In short, a trigger of a certain amount of energy is required for one quantum to change. At this level the world is binary: either an atom is affected or it is not. Yet, as we will see, even these discrete changes do not happen instantaneously.
Now, let’s explore what the world is like halfway through. (It is the fungibility of the atoms that makes it possible to work out which atoms are liable to be affected by an influence.)
The outcomes of experiments, in QT, are subjectively random (from the perspective of any one observer) even though everything that is happening is completely determined objectively.
This is the origin of quantum-mechanical randomness and probability; it is due to the measure that the theory provides for the multiverse, which is in turn due to what kinds of physical processes the theory allows and forbids. This random outcome is a ‘situation of diversity within fungibility’: the diversity is in the variable ‘what outcome are they going to see’. One can test the predicted value of the probability, but not the sequence of outcomes.
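As a toy illustration of this (entirely my own sketch, not from the book): the theory deterministically fixes a measure over branches – say 50/50 for the two transporter outcomes – and any one observer sees a random-looking sequence, yet only the long-run frequency, not the sequence itself, is predicted and testable.

```python
import random

# Fixed seed so the sketch is repeatable. MEASURE_X is the deterministic
# branch measure for outcome X (an assumed 50/50 split).
random.seed(0)
MEASURE_X = 0.5

outcomes = ["X" if random.random() < MEASURE_X else "Y" for _ in range(10_000)]
frequency_x = outcomes.count("X") / len(outcomes)
print(frequency_x)   # close to 0.5, though each individual outcome looks random
```

The observed frequency converges on the measure; no prediction about the order of X’s and Y’s is possible.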
So, as you can see, QT poses a number of tough questions that are difficult to grapple with: what exactly is the difference between the instances of you that I can interact with and the ones that are imperceptible to me? The latter are in ‘other universes’. But universes consist only of the objects in them, so that in effect means saying I can see the ones I can see. The upshot is that our laws of physics must say that every object carries within it information about which instances of it could interact with which instances of other objects. QT describes this information; it is known as entanglement information.
In quantum physics, under certain circumstances, the laws of motion allow histories to rejoin (become fungible again).
Let’s represent the splitting as follows:

Where X is the normal voltage and Y is the anomalous one introduced by the transporter, then the rejoining of histories can be represented as:

In an interference phenomenon, differentiated histories rejoin.
The presence of the Y-history interferes with what the transporter usually does to an X-history. Instead, the X and Y histories merge.
Note: as with the conservation of mass or any other conservation law, the total measure of all the histories remains constant.
Interference is the phenomenon that can provide the inhabitants of the multiverse with evidence of the existence of multiple histories in their world without allowing the histories to communicate.
e.g.

In short, multiple histories not only exist but affect each other strongly (that is, they behave differently according to whether the other is present or absent). Note: there is no inter-history communication. And splitting cannot occur faster than the speed of light.
This means the rejoining can only occur if no wave of differentiation has occurred. When a wave of differentiation, set off by two different values X and Y of some variable, has left an object, the object is entangled with all the differentially affected objects.

So, the rule is: interference can happen only in objects that are unentangled with the rest of the world.

If the object is entangled, then it cannot be made to undergo interference. Instead, what occurs is a splitting of history(ies).

When two or more values of a physical variable have differently affected something in the rest of the world, knock-on effects typically continue indefinitely, with a wave of differentiation entangling more and more objects.
If the differential effects can all be undone, then interference between these original values becomes possible again; but the laws of QM dictate that undoing them requires fine control of all the affected objects, and that rapidly becomes infeasible. This is called decoherence.
In most situations, decoherence is very rapid, which is why splitting typically predominates over interference, and why interference – though ubiquitous on microscopic scales – is quite hard to demonstrate unambiguously in the laboratory.
Nevertheless, it can be done, and quantum interference constitutes our main evidence of the existence of the multiverse and of what its laws are.
A real-life analogue of the above experiment is standard in quantum optics laboratories.
The experiment uses individual photons, and the variable being acted upon is not voltage (as in the fictional story Deutsch uses to show the effects of the multiverse) but which of two possible paths the photon is on.
One uses a semi-silvered mirror. When a photon strikes such a mirror, it bounces off in half the universes and passes straight through in the other half.

To show a re-merging of histories, one can set up an interferometer; namely the Mach-Zehnder interferometer, which performs these two transformations in quick succession:

The two ordinary mirrors (the black sloping bars) steer the photon from the first to the second semi-silvered mirror.
Note:
If a photon is introduced after the first mirror – travelling either rightwards (X) or downwards (Y) – it appears to emerge randomly, rightwards or downwards.
But a photon introduced travelling as per the diagram invariably emerges rightwards, never downwards. By doing the experiment repeatedly with and without detectors on the paths, one can verify that only one photon is ever present per history, because only one of those detectors is ever observed to fire during such an experiment.
Then, the fact that the intermediate histories X and Y contribute to the deterministic final outcome X makes it inescapable that both are happening at the intermediate time.
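The two-path arithmetic above can be sketched with amplitudes. This is a standard quantum-optics calculation, not code from the book, and the beam-splitter phase convention below is my own assumption, chosen so that the photon re-emerges rightwards as described.

```python
# State = (amplitude on rightward path X, amplitude on downward path Y).
# A 50/50 semi-silvered mirror mixes the two path amplitudes; the
# Hadamard-like phase convention here is an assumed choice.

def semi_silvered(state):
    """One 50/50 beam splitter acting on the two path amplitudes."""
    x, y = state
    s = 2 ** -0.5
    return (s * (x + y), s * (x - y))

# Photon introduced after the first mirror on path X: random outcome.
one_mirror = semi_silvered((1.0, 0.0))
p_right = abs(one_mirror[0]) ** 2        # 0.5
p_down = abs(one_mirror[1]) ** 2         # 0.5

# Full interferometer (two mirrors): the X and Y histories interfere and
# the photon invariably emerges rightwards, never downwards.
two_mirrors = semi_silvered(one_mirror)
print(round(abs(two_mirrors[0]) ** 2, 6), round(abs(two_mirrors[1]) ** 2, 6))
```

One mirror alone gives a 50/50 subjectively random outcome; two in succession give a deterministic one, which only makes sense if both intermediate histories are real.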
So, under the laws of quantum physics, elementary particles are undergoing such processes of their own accord all the time.
Moreover, histories may split into more than two – often into many trillions – each characterised by a slightly different direction of motion or difference in other physical variables of the elementary particle concerned.
The rate of growth in the number of distinct histories is quite mind-boggling – even though, thanks to interference, there is now a certain amount of spontaneous rejoining as well.
Because of this rejoining, the flow of information in the real multiverse is not divided into strictly autonomous subflows – branching, autonomous histories. Although there is still no communication between histories (in the sense of message-sending), they are intimately affecting each other, because the effect of interference on a history depends on what other histories are present. Further, not only is the multiverse no longer perfectly partitioned into histories, individual particles are not perfectly partitioned into instances.
e.g.
where X and Y represent different values of the position of a single particle:

Because these two groups of instances of the particle, initially at different positions, have gone through a moment of being fungible, there is no such thing as which of them has ended up at which final position.
This sort of interference is going on all the time, even for a single particle in a region of otherwise empty space.
There is no such thing as the ‘same’ instance of a particle at different times [after interference occurs; in different histories].
Even within the same history, particles in general do not retain their identities over time. For example, during a collision between two atoms, the histories of the event split:

So, for each particle individually, the event is rather like a collision with a semi-silvered mirror. Each atom plays the role of the mirror for the other atom.
But the multiversal view of both particles looks like this:

Where at the end of the collision some of the instances of each atom have become fungible with what was originally a different atom.
For the same reason, there is no such thing as the speed of one instance of the particle at a given location. Speed is defined as distance travelled divided by time taken, but that does not apply here, because there is no such thing as a particular instance of the particle over time. Instead, a collection of fungible instances of a particle in general has several speeds – meaning that in general they will do different things an instant later (another instance of ‘diversity within fungibility’).
Not only can a fungible collection with the same position have different speeds, a fungible group with the same speed can have different positions. Furthermore, it follows from the laws of quantum physics that, for any fungible collection of instances of a physical object, some of their attributes must be diverse – a fact referred to as the ‘Heisenberg uncertainty principle’. Hence, for instance, an individual electron always has a range of different locations and a range of different speeds and directions of motion.
As a result, its typical behaviour is to spread out gradually in space. Its quantum mechanical law of motion resembles the law governing the spread of an ink blot – so if it is generally located in a very small region, it spreads out rapidly, and the larger it gets the more slowly it spreads.
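A hedged numeric sketch of that ink-blot behaviour (a textbook uncertainty-principle estimate, my own illustration rather than anything in the book): a particle confined to a region of size sigma has a minimum spread of velocities of roughly hbar / (2 × m × sigma), so a tightly confined electron blot spreads fast and a large one spreads slowly.

```python
# Rough spread-out speed of an electron "ink blot" of a given size,
# using the minimum velocity spread implied by the uncertainty principle.

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M_ELECTRON = 9.1093837e-31  # electron mass, kg

def spread_speed(sigma_m: float) -> float:
    """Approximate speed (m/s) at which a blot of size sigma spreads."""
    return HBAR / (2 * M_ELECTRON * sigma_m)

print(spread_speed(1e-10))  # atom-sized blot: hundreds of km/s
print(spread_speed(1e-3))   # millimetre blot: well under 1 m/s
```

This matches the ink-blot picture: the smaller the region, the faster the spreading, and the larger the blot grows, the more sluggish its growth becomes.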
The entanglement information that it carries ensures that no two instances of it can ever contribute to the same history (or more precisely, at times and places where there are histories, it exists in instances which can never collide).
On motion in quantum physics:
If a particle’s range of speeds is centred not on zero but on some other value, then the whole of the ‘ink blot’ moves, with its centre obeying approximately the laws of motion of classical physics. This also explains how particles in the same history can be fungible, as in something like an atom laser.
Two ink-blot particles, each of which is a multiversal object, can coincide perfectly in space, and their entanglement information can be such that no two of their instances are ever at the same point in the same history.
Now, put a proton into the middle of that gradually spreading cloud of instances of a single electron.
The proton has a positive charge, which attracts the negatively charged electron. As a result, the cloud stops spreading when its size is such that its tendency to spread outwards due to its uncertainty-principle diversity is exactly balanced by its attraction to the proton. The resulting structure is called an atom of hydrogen.
Historically, this explanation of what atoms are was one of the first triumphs of Quantum Theory.
An atom consists of a positively charged nucleus surrounded by negatively charged electrons. But positive and negative charges attract each other and, if unrestrained, accelerate towards each other, emitting energy in the form of electromagnetic radiation as they go. So, it used to be a mystery why the electrons do not ‘fall’ onto the nucleus in a flash of radiation. Neither the nucleus nor the electrons individually has more than one ten-thousandth of the diameter of the atom, so what keeps them so far apart? And what makes atoms stable at that size?
In non-technical accounts, the structure of atoms is sometimes explained by analogy with the solar system: one imagines electrons in orbit around the nucleus like planets around the sun.
But the solar system analogy does not match reality. For one thing, gravitationally bound objects do slowly spiral in, emitting gravitational radiation (the process has been observed for binary neutron stars), and the electromagnetic counterpart of that process in an atom would be over in a fraction of a second. For another, the existence of solid matter, which consists of atoms packed closely together, is evidence that atoms cannot easily penetrate each other, yet solar systems certainly could.
Furthermore, it turns out that, in the hydrogen atom, the electron in its lowest-energy state is not orbiting at all, but is just sitting there like an ink blot – its uncertainty principle tendency to spread exactly balanced by the electrostatic force.
In this way, the phenomenon of interference and diversity within fungibility are integral to the structure and stability of all static objects, including all solid bodies, just as they are integral to all motion.
Thanks to the strong internal interference that is continuously underway, a typical electron is an irreducibly multiversal object, not a collection of parallel-universe or parallel-histories objects. That is to say, it has multiple positions and multiple speeds without being divisible into autonomous sub-entities, each of which has one speed and one position. Even different electrons do not have completely separate identities. So, the reality is an electron field throughout the whole of space, and disturbances spread through this field as waves, at the speed of light or below.
So, it’s clear that there is a field (or ‘waves’) in the multiverse for every individual particle that we observe in a particular universe.
A history is distinguished from the others by the values of physical variables. In short, it is a channel of information flow. Histories preserve information because, although their contents change over time, they are approximately autonomous – that is to say, the changes in a particular history depend almost entirely on conditions inside it and not elsewhere. That is why, within a history, one can use classical physics to successfully predict some aspects of that history’s future from its past.
There are regions of the multiverse that contain short-lived histories, and others that do not even approximately contain histories. So, as is evident, every atom in an everyday object is a multiversal object, not partitioned into nearly autonomous instances and nearly autonomous histories.
The larger and more complex an object or process is, the less its gross behaviour is impacted by interference. In short, interference is suppressed by entanglement. (Interference almost always occurs either very soon after splitting or not at all.)
Remember that whilst technically the effects of entanglement, arising from a wave of differentiation, can be undone, the reality is that undoing them requires fine control of all the affected objects, which rapidly becomes infeasible – known as decoherence.
At that coarse-grained level of emergence, events in the multiverse consist of autonomous histories, with each coarse-grained history consisting of a swathe of many histories differing only in microscopic details but affecting each other through interference.
Spheres of differentiation tend to grow at nearly the speed of light, so, on the scale of everyday life and above, those coarse-grained histories can justly be called ‘universes’ in the ordinary sense of the word. They can usefully be called ‘parallel’ because they are nearly autonomous. To the inhabitants, each looks very like a single-universe world. Microscopic events which are accidentally amplified to that coarse-grained level are rare in any one coarse-grained history, but common in the multiverse as a whole. For example, consider a single cosmic-ray particle travelling in the direction of Earth from deep space. That particle must be travelling in a range of slightly different directions, because the uncertainty principle implies that in the multiverse it must spread sideways like an ink blot as it travels. By the time it arrives, this ink blot may well be wider than the whole Earth – so most of it misses, and the rest strikes everywhere on the exposed surface. Remember, this is still a single particle, consisting of fungible instances. The next thing that happens is that they cease to be fungible, splitting through their interaction with atoms at their points of arrival into a finite but huge number of instances, each of which is the origin of a separate history. In each such history, there is an autonomous instance of the cosmic-ray particle, which will dissipate its energy in creating a ‘cosmic-ray shower’ of electrically charged particles. Thus, in different histories, such a shower will occur at different locations. In some, that shower will provide a conducting path down which a lightning bolt will travel.
Every atom on the surface of the Earth will be struck by lightning in some history. In other histories, one of those cosmic-ray particles will strike a human cell, damaging some already damaged DNA in such a way as to make the cell cancerous. There exist other histories in which the course of a battle, or a war, is changed by such an event, or by a lightning bolt at exactly the right place, and time, or by any of countless other unlikely ‘random’ events. This makes it highly plausible that there exist histories in which events have played out differently, for better or worse.
Note:
There are no histories in which the fundamental constants of nature such as the speed of light or the charge on an electron are different.
There is a sense in which different laws of physics appear to be true for a period in some histories, because of a sequence of ‘unlikely accidents’. Quantum parallelism enables multiple independent computations to occur. This is exploited by quantum computers: computers in which the information-carrying variables have been protected, by a variety of means, from becoming entangled with their surroundings. This allows a new mode of computation in which the flow of information is not confined to a single history. In a typical quantum computation, individual bits of information are represented in physical objects known as ‘qubits’ – quantum bits – of which there is a large variety of physical implementations, but always with two essential features:
Each qubit has a variable that can take one of two discrete values.
Special measures are taken to protect the qubits from entanglement – such as cooling them to temperatures close to absolute zero.
A typical algorithm using quantum parallelism begins by causing the information-carrying variables in some of the qubits to acquire both their values simultaneously. Consequently, regarding those qubits as a register representing (say) a number, the number of separate instances of the register as a whole is exponentially large: two to the power of the number of qubits. Then, for a period, classical computations are performed, during which waves of differentiation spread to some of the other qubits – but no further, because of the special measures that prevent this. Hence, information is processed separately in each of that vast number of autonomous histories. Finally, an interference process involving all the affected qubits combines the information in those histories into a single history. Because of the intervening computation, which has processed the information, the final state is not the same as the initial one.
Rather, it is more like this:

Y1…Ymany are intermediate results that depend on the input X. All of them are needed to compute the outcome f(X) efficiently.
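The exponential count of register instances can be sketched as a toy state vector (my own illustration; real quantum hardware does not explicitly store every history, and this classical simulation is itself exponentially costly):

```python
# Putting each of n qubits into both of its values at once yields 2**n
# simultaneous, equally weighted instances of the register.

def superpose_all(n_qubits: int) -> list[float]:
    """Equal-amplitude state over every value of an n-qubit register."""
    size = 2 ** n_qubits
    amp = size ** -0.5
    return [amp] * size

state = superpose_all(10)
instances = sum(1 for a in state if a != 0)
print(instances)                                    # 1024 instances from 10 qubits
print(sum(a * a for a in state))                    # amplitudes normalised to 1.0
```

Each added qubit doubles the number of histories the register occupies, which is where the ‘exponentially large’ claim above comes from.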
Quantum computers are limited by the laws of physics that govern quantum interference.
Only certain types of parallel computation can be performed with the help of the multiverse, in this way.
They are the ones for which the mathematics of quantum interference happens to be just right for combining into a single history the information that is needed for the final result.
In such computations, a quantum computer with only a few hundred qubits could perform far more computations in parallel than there are atoms in the visible universe. ‘Scaling’ the technology to larger numbers of qubits is the challenge (currently ~10 qubits).
When large objects are affected by a small influence, the outcome is usually that the large object is strictly unaffected. To see why, consider the Mach-Zehnder interferometer again: two instances of a single photon travel on two paths. On the way, they strike two different mirrors. Interference will happen only if the photon does not become entangled with the mirrors – but it will become entangled if either mirror retains the slightest record that it has been struck (for that would be a differential effect of the instances on the two different paths). Even a single quantum of change in the amplitude of the mirror’s vibration on its supports, for instance, would be enough to prevent the interference (the subsequent merging of the photon’s two instances). When one of the instances of the photon bounces off either mirror, its momentum changes, so the mirror’s momentum must change by an equal and opposite amount. Hence it seems that, in each history, one mirror but not the other must be left vibrating with slightly more or less energy after the photon has struck it. That energy change would be a record of which path the photon took, and hence the mirrors would be entangled with the photon. But that is not what occurs.
Remember that, at a sufficiently fine level of detail, what we crudely see as a single history of the mirror, resting passively or vibrating gently on its supports, is actually a vast number of histories with instances of all of its atoms continually splitting and rejoining. In particular, the total energy of the mirror takes a vast number of possible values around the average, ‘classical’ one.
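A hedged toy model of the entanglement rule (standard interference-visibility arithmetic; `detection_probability` is my own hypothetical helper, not terminology from the book): the strength of the interference is governed by the overlap between the two mirror states left behind by the photon’s two instances.

```python
import math

def detection_probability(phase: float, mirror_overlap: float) -> float:
    """P(photon exits the bright port) for a given inter-path phase.

    mirror_overlap = |<mirror_X | mirror_Y>|: 1 if the mirror keeps no
    record of the path, 0 if it records even one quantum of difference.
    """
    return 0.5 * (1.0 + mirror_overlap * math.cos(phase))

print(detection_probability(0.0, 1.0))   # 1.0: no record, full interference
print(detection_probability(0.0, 0.0))   # 0.5: entangled, outcome random
```

With no which-path record the outcome is deterministic; the instant the mirror carries even one quantum of record, the overlap collapses and the photon’s exit port becomes a 50/50 split – i.e. a splitting of histories instead of interference.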
So, how does a discrete change of a single quantum of energy occur without any discontinuity? Consider an atom absorbing a photon, taking up all of its energy. This energy transfer does not take place instantaneously.
How does it occur? At the beginning of the process, the atom is in its ‘ground state’, in which its electrons have the least possible energy allowed by Quantum Theory. At the end of the process, all the instances of the atom are still fungible, but now they are in the ‘excited state’, which has one additional quantum of energy.
This raises the obvious question: what is the atom like halfway through the process? Its instances are still fungible, but now half of them are in the ground state and half in the excited state. It is as if a discrete amount of money changed ownership gradually, through a continuously varying proportion of fungible instances (Deutsch uses the analogy of dollars in a bank to illustrate their fungibility: there is no such thing as David’s dollar and Amal’s dollar – they are not stored separately in the bank’s database; they are interchangeable, and only the amount belonging to each individual matters). This is how transitions between discrete states happen in a continuous way.
In classical physics, a ‘tiny effect’ always means a tiny change in some measurable quantity. In quantum physics, by contrast, physical variables are typically discrete and so cannot undergo tiny changes. Instead, a ‘tiny effect’ means a tiny change in the proportions of instances that have various discrete attributes. Relatedly, time within quantum mechanics is not fully understood. We can be fairly sure, though, that in whatever theory of quantum gravity eventually unifies Quantum Theory and General Relativity (or resolves the paradoxes between them), different times will turn out to be a special case of different universes.
In other words, time is an entanglement phenomenon, which places all equal clock readings into the same history. Whenever we observe anything – a scientific instrument or a galaxy or a human being – what we are actually seeing is a single-universe perspective on a larger object that extends some way into other universes. In some of those universes, the object looks exactly as it does to us; in others it looks different, or is absent altogether. In sum, we are channels of information flow. So are histories, and so are all relatively autonomous objects within histories; but we sentient beings are extremely unusual channels, along which (sometimes) knowledge grows.
This [knowledge growth] can have dramatic effects, not only within a history (where it can, for instance, have effects that do not diminish with distance), but also across the multiverse. Since the growth of knowledge is error-correction, and since there are many more ways of being wrong than right, knowledge-creating entities rapidly become more alike in different histories than other entities. Knowledge-creating processes are therefore unique: all other effects diminish with distance in space and become increasingly different across the multiverse in the long run.
So, let’s ponder some questions that arise as a result of this rather unique capability – knowledge creation:
What if there is something other than information flow that can cause coherent, emergent phenomena in the multiverse?
What if knowledge, or something other than knowledge, could emerge from that, and begin to have purposes of its own, and to conform the multiverse to those purposes, as we do? Could we communicate with it?
(We humans currently sit at the top rank of significance in the universe.) Anything else that could create explanations would rival or supersede us. And there is no bound on the future growth of explanation and, by extension, knowledge.
In sum:
The physical world is a multiverse, and its structure is determined by how information flows in it.
In many regions of the multiverse, information flows in quasi-autonomous streams called histories, one of which we call our ‘universe’.
We know of the multiverse, and can test the laws of quantum physics, because of the phenomenon of quantum interference.
Thus, a universe is not an exact feature of the multiverse but an approximate, emergent one.
One of the most unfamiliar and counter-intuitive things about the multiverse is fungibility.
The laws of motion in the multiverse are deterministic, and apparent randomness is due to initially fungible instances of objects becoming different.
In quantum physics, variables are typically discrete and how they change from one value to another is a multiverse process involving interference and fungibility.
This blog explains why the multiverse (also known as the ‘many worlds’ theory or ‘quantum theory’) is real, based on my reading of David Deutsch’s masterpieces, The Beginning of Infinity and The Fabric of Reality.
As I’ve pointed out in my previous posts, I highly recommend reading the books themselves. All errors of interpretation in this blog are mine only. Also, Brett Hall’s podcast series, TokCast, is a valuable supplement.
A future post will explore the implications of quantum theory regarding universality and computing.
Ok here we go. Strap yourself in. Quantum theory, and specifically the idea of multiple universes, warps the mind, and for me at least, sits uncomfortably. But that is the nature of reality. We have to think beyond our immediate intuitions and what we can physically observe.
First, Deutsch illustrates the reality of the multiverse through the use of a fictional story of a ‘transporter’, a starship’s teleportation device, to demonstrate how different universes are affected by each other.
So, let’s explore the fictional world Deutsch describes.
There are a few conditions on the flow of information:
One cannot send a message to the other universe;
One cannot change anything in one’s own universe sooner than light could reach that thing;
One cannot bring new information – even random information – into the world: everything that happens is determined by the laws of physics from what has gone before;
One can bring new knowledge into the world. Knowledge consists of explanations, and none of those conditions prevents the creation of new explanations.
And there are two parallel universes. You can think of them as two instances off the object.
In Quantum Theory (QT), two or more initially fungible instances of the observer become different. Thus, making their outcomes strictly unpredictable.
The multiverse is real because there are circumstances in which histories affect each other in ways that do not amount to communication, and the need to explain those effects provides MV theory its credence.
Though initially fungible, once the voltage surge occurred in only one universe, a wave of differentiation between the universes spread in all directions through space.
Since the leading edge of the wave travels at or near the speed of light, the head start that some directions have over others becomes an ever-smaller proportion of the total distance travelled, and so the further the wave travels the more nearly spherical it becomes. Deutsch calls this the sphere of differentiation.
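A quick numerical sketch (numbers mine, purely illustrative) shows why the asymmetry vanishes: a one-metre head start in one direction becomes a negligible fraction of the wavefront's radius almost immediately.

```python
# Toy illustration: a disturbance starts with a small head start `d` in one
# direction, then spreads at the speed of light `c`. The head start becomes
# a vanishing fraction of the radius, so the wavefront approaches a sphere.
c = 3.0e8          # speed of light, m/s
d = 1.0            # hypothetical 1-metre head start in one direction

for t in (1e-8, 1e-6, 1e-4):           # seconds after the event
    travelled = c * t
    asymmetry = d / (travelled + d)     # head start as a fraction of the radius
    print(f"t={t:g}s  radius~{travelled:.3g} m  asymmetry={asymmetry:.3g}")
```

Within a ten-thousandth of a second the asymmetry has fallen by several orders of magnitude, which is why the sphere of differentiation is, for all practical purposes, spherical.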
With regard to the sphere of differentiation, there are two conflicting common-sense intuitions: (a) that it must change very little; and (b) that it changes everything.
How (b) plays out, in a strict physical sense:
When news of an event reaches a planet – say, in the form of a pulse of photons from a communications laser – the beam will impart momentum to every atom it exposes, which is every atom on the half of the planet's surface facing the beam.
These atoms would then vibrate a little differently, affecting the atoms below them through interatomic forces. As each atom affected others, the effect would spread rapidly through the planet. Soon, every atom in the planet would have been affected – though most of them by unimaginably tiny amounts – breaking the fungibility between each atom and its other-universe counterpart. Hence, it would seem that nothing would be left fungible after the wave of differentiation had passed.
These two opposing intuitions reflect the ancient dichotomy between the discrete and the continuous.
The argument that everything in the sphere of differentiation must become different depends on the reality of extremely small physical changes – changes that would be many orders of magnitude too small to be measurable.
The existence of such changes stems from classical physics, in which fundamental quantities are continuously variable. In sharp contrast, QT puts forward the worldview of discrete variables.
In this respect, Quantum Theory supersedes classical physics.
Thus, for a typical physical quantity, there is a smallest possible change that it can undergo in a given situation.
e.g. There is a smallest possible amount of energy that can be transferred to any particular atom. The atom cannot absorb any less than that amount, which is called a ‘quantum’ of energy.
The practical effect of this is that – in the fictional multi-world story that Deutsch presents – not all atoms are changed by the arrival of the [inter-universe] message.
The typical response of a large physical object to very small influences is that most of its atoms remain strictly unchanged while, to obey the conservation laws, a few exhibit a discrete, relatively large change of one quantum.
In short, a trigger of a certain amount of energy is required for a change of one quantum. The world is binary in this respect: either an atom is affected or it is not. But, as we shall see, the change does not happen instantaneously.
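A toy model (my own, with illustrative numbers) captures the contrast with the classical picture: instead of a tiny influence being smeared continuously over every atom, a few whole quanta are absorbed by a few atoms, and every other atom is strictly unchanged.

```python
import random

random.seed(0)

N_ATOMS = 10**6        # atoms in the object (illustrative)
QUANTA = 3             # quanta of energy delivered by the tiny influence

# Each quantum is absorbed whole by exactly one atom; energy cannot be
# spread continuously across all atoms as classical physics would allow.
struck = random.sample(range(N_ATOMS), QUANTA)

changed = len(struck)
unchanged = N_ATOMS - changed
print(changed, "atoms change by one whole quantum;",
      unchanged, "atoms are strictly unchanged")
```

The point is not the particular numbers but the shape of the outcome: a discrete few atoms change by a relatively large amount, rather than all atoms changing by an unmeasurably small one.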
Now, to explore what the world is like half-way through. (To work out which atoms are liable to be affected by an influence, we need the fungibility of the atoms.)
The outcomes of experiments in QT are subjectively random (from the perspective of any one observer) even though everything that is happening is completely determined objectively.
This is the origin of quantum-mechanical randomness and probability; it is due to the measure that the theory provides for the multiverse, which is in turn due to what kinds of physical processes the theory allows and forbids. This random outcome is a ‘situation of diversity within fungibility’: the diversity is in the variable ‘what outcome are they going to see’. One can test the predicted value of the probability, but not the sequence of outcomes.
So, as you can see, QT poses a number of tough questions that are difficult to grapple with: what exactly is the difference between the instance of you that I can interact with and the ones that are imperceptible to me? The latter are in ‘other universes’. But universes consist only of the objects in them, so saying an instance is in ‘my universe’ in effect means only that I can interact with it. The upshot is that our laws of physics must say that every object carries within it information about which instances of it could interact with which instances of other objects. QT describes this information; it is known as entanglement information.
In quantum physics, under certain circumstances, the laws of motion allow histories to rejoin (become fungible again).
Let’s represent the splitting as follows:

Where X is the normal voltage and Y is the anomalous one introduced by the transporter, then the rejoining of histories can be represented as:

In an interference phenomenon, differentiated histories rejoin.
The presence of the Y-history interferes with what the transporter usually does to an X-history. Instead, the X and Y histories merge.
Note: as with the conservation of mass or any other conservation law, the total measure of all the histories remains constant.
Interference is the phenomenon that can provide the inhabitants of the multiverse with evidence of the existence of multiple histories in their world without allowing the histories to communicate.
e.g.

In short, multiple histories not only exist but affect each other strongly (that is, they behave differently according to whether the other is present or absent). Note: there is no inter-history communication. And splitting cannot occur faster than the speed of light.
This means the rejoining can only occur if no wave of differentiation has occurred. When a wave of differentiation, set off by two different values X and Y of some variable, has left an object, the object is entangled with all the differentially affected objects.

So, the rule is: interference can happen only in objects that are unentangled with the rest of the world.

If the object is entangled, then it cannot be made to undergo interference. Instead, what occurs is a splitting of history(ies).

When two or more values of a physical variable have differently affected something in the rest of the world, knock on effects typically continue indefinitely, with a wave of differentiation entangling more and more objects.
If the differential effects can all be undone, then interference between these original values becomes possible again; but the laws of QM dictate that undoing them requires fine control of all the affected objects, and that rapidly becomes infeasible. This is called decoherence.
In most situations, decoherence is very rapid, which is why splitting typically predominates over interference, and why interference – though ubiquitous on microscopic scales – is quite hard to demonstrate unambiguously in the laboratory.
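A rough sketch (my own toy model; the per-particle overlap of 0.99 is an assumption) shows why decoherence is so fast: interference visibility is suppressed by the overlap between the environment states correlated with X and with Y, and that overlap shrinks exponentially with the number of entangled particles.

```python
# Toy model: each environment particle entangled with the X/Y difference
# retains only a faint record, so the per-particle overlap of the two
# environment states is close to 1 (0.99 here, a made-up number).
PER_PARTICLE_OVERLAP = 0.99

def visibility(n_entangled: int) -> float:
    """Interference visibility after n particles have become entangled."""
    return PER_PARTICLE_OVERLAP ** n_entangled

for n in (10, 1_000, 10_000):
    print(f"{n} entangled particles -> visibility {visibility(n):.3g}")
```

Even with such faint per-particle records, a few thousand entangled particles reduce the visibility to something astronomically small – which is the sense in which undoing the differentiation "rapidly becomes infeasible".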
Nevertheless, it can be done, and quantum interference constitutes our main evidence of the existence of the multiverse and of what its laws are.
A real-life analogue of the above experiment is standard in quantum optics laboratories.
The experiment uses individual photons, and the variable being acted upon is not voltage (as in the fictional story Deutsch uses to show the effects of the multiverse) but which of two possible paths the photon is on.
One uses a semi-silvered mirror. When a photon strikes such a mirror, it bounces off in half the universes and passes straight through in the other half.

To show a re-merging of histories, one can set up an interferometer; namely the Mach-Zehnder interferometer, which performs these two transformations in quick succession:

The two ordinary mirrors (the black sloping bars) steer the photon from the first to the second semi-silvered mirror.
Note:
If a photon is introduced travelling rightwards (X) after the first mirror, or downwards (Y) after the first mirror, then it appears to emerge randomly rightwards or downwards.
But a photon introduced travelling as per the diagram invariably emerges rightwards, never downwards. By doing the experiment repeatedly with and without detectors on the paths, one can verify that only one photon is ever present per history, because only one of those detectors is ever observed to fire during such an experiment.
Then, the fact that the intermediate histories X and Y contribute to the deterministic final outcome X makes it inescapable that both are happening at the intermediate time.
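The two transformations can be sketched as a two-amplitude statevector. I use the Hadamard convention for a balanced beam splitter – a common textbook choice, not necessarily Deutsch's notation; real beam splitters are often modelled with complex phases, which changes which port is deterministic but not the determinism itself.

```python
import numpy as np

# Photon state: amplitudes for the two paths [rightwards (X), downwards (Y)].
X = np.array([1.0, 0.0])

# Balanced beam splitter in the Hadamard convention (one textbook choice).
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

after_one = H @ X          # photon past one semi-silvered mirror
after_two = H @ (H @ X)    # full Mach-Zehnder interferometer

print("one mirror :", np.round(after_one**2, 3))   # detection probabilities
print("two mirrors:", np.round(after_two**2, 3))
```

One mirror yields 50/50 probabilities – the subjectively random case – while two mirrors in succession interfere the X and Y histories back into a deterministic rightward outcome, which is the whole point of the argument: both intermediate histories must be real for the final determinism to occur.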
So, under the laws of quantum physics, elementary particles are undergoing such processes of their own accord all the time.
Moreover, histories may split into more than two – often into many trillions – each characterised by a slightly different direction of motion or difference in other physical variables of the elementary particle concerned.
The rate of growth in the number of distinct histories is quite mind-boggling – even though, thanks to interference, there is now a certain amount of spontaneous rejoining as well.
Because of this rejoining, the flow of information in the real multiverse is not divided into strictly autonomous subflows – branching, autonomous histories. Although there is still no communication between histories (in the sense of message-sending), they are intimately affecting each other, because the effect of interference on a history depends on what other histories are present. Further, not only is the multiverse no longer perfectly partitioned into histories, individual particles are not perfectly partitioned into instances.
e.g.
where X and Y represent different values of the position of a single particle:

Because these two groups of instances of the particle, initially at different positions, have gone through a moment of being fungible, there is no such thing as which of them has ended up at which final position.
This sort of interference is going on all the time, even for a single particle in a region of otherwise empty space.
There is no such thing as the ‘same’ instance of a particle at different times [after interference occurs; in different histories].
Even within the same history, particles in general do not retain their identities over time. For example, during a collision between two atoms, the histories of the event split:

So, for each particle individually, the event is rather like a collision with a semi-silvered mirror. Each atom plays the role of the mirror for the other atom.
But the multiversal view of both particles looks like this:

Where at the end of the collision some of the instances of each atom have become fungible with what was originally a different atom.
For the same reason, there is no such thing as the speed of one instance of the particle at a given location. Speed is defined as distance travelled divided by time taken, but that does not apply here, because there is no such thing as a particular instance of the particle over time. Instead, a collection of fungible instances of a particle in general has several speeds – meaning that in general they will do different things an instant later (another instance of ‘diversity within fungibility’).
Not only can a fungible collection with the same position have different speeds; a fungible group with the same speed can have different positions. Furthermore, it follows from the laws of quantum physics that, for any fungible collection of instances of a physical object, some of their attributes must be diverse – this is the ‘Heisenberg uncertainty principle’. Hence, for instance, an individual electron always has a range of different locations and a range of different speeds and directions of motion.
As a result, its typical behaviour is to spread out gradually in space. Its quantum mechanical law of motion resembles the law governing the spread of an ink blot – so if it is generally located in a very small region, it spreads out rapidly, and the larger it gets the more slowly it spreads.
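The ink-blot law just described can be made concrete with the standard free-particle Gaussian wave-packet formula from textbook quantum mechanics (my choice of illustration, not from Deutsch's book): a packet of initial width σ₀ grows as σ(t) = σ₀√(1 + (ħt/2mσ₀²)²), so a tightly confined electron spreads far faster, relative to its size, than a larger one.

```python
import numpy as np

HBAR = 1.054e-34     # reduced Planck constant, J*s
M_E = 9.109e-31      # electron mass, kg

def width(sigma0: float, t: float) -> float:
    """Width of a free Gaussian wave packet (standard textbook formula)."""
    return sigma0 * np.sqrt(1 + (HBAR * t / (2 * M_E * sigma0**2))**2)

t = 1e-12  # one picosecond
for sigma0 in (1e-10, 1e-9):   # atom-sized vs ten times larger
    print(f"sigma0={sigma0:g} m -> width after 1 ps: {width(sigma0, t):.3g} m")
```

The atom-sized packet balloons by a factor of thousands within a picosecond, while the larger packet grows far more slowly in relative terms – the behaviour the text describes: the smaller the blot, the faster it spreads.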
The entanglement information that it carries ensures that no two instances of it can ever contribute to the same history (or more precisely, at times and places where there are histories, it exists in instances which can never collide).
On motion in quantum physics:
If a particle’s range of speeds is centred not on zero but on some other value, then the whole ‘ink blot’ moves, with its centre approximately obeying the laws of motion of classical physics. This also explains how particles in the same history can be fungible, as in an atomic laser.
Two ink-blot particles, each of which is a multiversal object, can coincide perfectly in space, and their entanglement information can be such that no two of their instances are ever at the same point in the same history.
Now, put a proton into the middle of that gradually spreading cloud of instances of a single electron.
The proton has a positive charge, which attracts the negatively charged electron. As a result, the cloud stops spreading when its size is such that its tendency to spread outwards due to its uncertainty-principle diversity is exactly balanced by its attraction to the proton. The resulting structure is called an atom of hydrogen.
Historically, this explanation of what atoms are was one of the first triumphs of Quantum Theory.
An atom consists of a positively charged nucleus surrounded by negatively charged electrons. But positive and negative charges attract each other and, if unrestrained, accelerate towards each other, emitting energy in the form of electromagnetic radiation as they go. So it used to be a mystery why the electrons do not ‘fall’ onto the nucleus in a flash of radiation. Neither the nucleus nor the electrons individually has more than one ten-thousandth of the diameter of the atom, so what keeps them so far apart? And what makes atoms stable at that size?
In non-technical accounts, the structure of atoms is sometimes explained by analogy with the solar system: one imagines electrons in orbit around the nucleus like planets around the sun.
But the solar-system analogy does not match reality. For one thing, gravitationally bound objects do slowly spiral in, emitting gravitational radiation (the process has been observed for binary neutron stars), and the analogous electromagnetic process in an atom would be over in a fraction of a second. For another, the existence of solid matter, which consists of atoms packed closely together, is evidence that atoms cannot easily penetrate each other, yet solar systems certainly could.
Furthermore, it turns out that, in the hydrogen atom, the electron in its lowest-energy state is not orbiting at all, but is just sitting there like an ink blot – its uncertainty principle tendency to spread exactly balanced by the electrostatic force.
In this way, the phenomena of interference and diversity within fungibility are integral to the structure and stability of all static objects, including all solid bodies, just as they are integral to all motion.
Thanks to the strong internal interference that is continuously under way, a typical electron is an irreducibly multiversal object, not a collection of parallel-universe or parallel-histories objects. That is to say, it has multiple positions and multiple speeds without being divisible into autonomous sub-entities, each of which has one speed and one position. Even different electrons do not have completely separate identities. So the reality is an electron field throughout the whole of space, and disturbances spread through this field as waves, at the speed of light or below.
So it is clear that there is a field (or ‘waves’) in the multiverse for every individual particle that we observe in a particular universe.
A history is distinguished from the others by the values of its physical variables. In short, it is a channel of information flow. Histories preserve information because, although their contents change over time, they are approximately autonomous – that is to say, the changes in a particular history depend almost entirely on conditions inside it and not elsewhere. This is why, within a history, one can use classical physics to successfully predict some aspects of that history’s future from its past.
There are regions of the multiverse that contain short-lived histories, and others that do not even approximately contain histories. So, as is evident, every atom in an everyday object is a multiversal object, not partitioned into nearly autonomous instances and nearly autonomous histories.
The larger and more complex an object or process is, the less its gross behaviour is affected by interference. In short, interference is suppressed by entanglement. (Interference almost always occurs either very soon after splitting or not at all.)
Remember that whilst the effects of entanglement arising from a wave of differentiation can technically be undone, undoing them requires fine control of all the affected objects, which rapidly becomes infeasible – this is decoherence.
At that coarse-grained level of emergence, events in the multiverse consist of autonomous histories, with each coarse-grained history consisting of a swathe of many histories differing only in microscopic details but affecting each other through interference.
Spheres of differentiation tend to grow at nearly the speed of light, so, on the scale of everyday life and above, those coarse-grained histories can justly be called ‘universes’ in the ordinary sense of the word. They can usefully be called ‘parallel’ because they are nearly autonomous. To their inhabitants, each looks very like a single-universe world. Microscopic events which are accidentally amplified to that coarse-grained level are rare in any one coarse-grained history, but common in the multiverse as a whole. For example, consider a single cosmic-ray particle travelling towards Earth from deep space. That particle must be travelling in a range of slightly different directions, because the uncertainty principle implies that in the multiverse it must spread sideways like an ink blot as it travels. By the time it arrives, this ink blot may well be wider than the whole Earth – so most of it misses, and the rest strikes everywhere on the exposed surface. Remember, this is still a single particle, consisting of fungible instances. The next thing that happens is that those instances cease to be fungible, splitting through their interaction with atoms at their points of arrival into a finite but huge number of instances, each of which is the origin of a separate history. In each such history there is an autonomous instance of the cosmic-ray particle, which will dissipate its energy in creating a ‘cosmic-ray shower’ of electrically charged particles. Thus, in different histories, such a shower will occur at different locations. In some, that shower will provide a conducting path down which a lightning bolt will travel.
Every atom on the surface of the Earth will be struck by lightning in some history. In other histories, one of those cosmic-ray particles will strike a human cell, damaging some already damaged DNA in such a way as to make the cell cancerous. There exist other histories in which the course of a battle, or a war, is changed by such an event, or by a lightning bolt at exactly the right place, and time, or by any of countless other unlikely ‘random’ events. This makes it highly plausible that there exist histories in which events have played out differently, for better or worse.
Note:
There are no histories in which the fundamental constants of nature such as the speed of light or the charge on an electron are different.
There is a sense in which different laws of physics appear to be true for a period in some histories, because of a sequence of ‘unlikely accidents’. Quantum parallelism enables multiple independent computations to occur. Quantum computers are computers in which the information-carrying variables have been protected, by a variety of means, from becoming entangled with their surroundings. This allows a new mode of computation in which the flow of information is not confined to a single history. In a typical quantum computation, individual bits of information are represented in physical objects known as ‘qubits’ – quantum bits – of which there is a large variety of physical implementations, but always with two essential features:
Each qubit has a variable that can take one of two discrete values.
Special measures are taken to protect the qubits from entanglement – such as cooling them to temperatures close to absolute zero.
A typical algorithm using quantum parallelism begins by causing the information-carrying variables in some of the qubits to acquire both their values simultaneously. Consequently, regarding those qubits as a register representing (say) a number, the number of separate instances of the register as a whole is exponentially large: two to the power of the number of qubits. Then, for a period, classical computations are performed, during which waves of differentiation spread to some of the other qubits – but no further, because of the special measures that prevent this. Hence, information is processed separately in each of that vast number of autonomous histories. Finally, an interference process involving all the affected qubits combines the information in those histories into a single history. Because of the intervening computation, which has processed the information, the final state is not the same as the initial one.
Rather, it is more like this:

Y1…Ymany are intermediate results that depend on the input X. All of them are needed to compute the outcome f(X) efficiently.
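The exponential headcount of register instances can be sketched with a tiny classical simulation of a statevector (my own sketch; the function `f` is a hypothetical reversible step, chosen to be a permutation because real quantum steps must be invertible).

```python
import numpy as np

n = 10                                     # qubits in the register
dim = 2**n                                 # 2^n fungible instances
amps = np.full(dim, 1 / np.sqrt(dim))      # uniform superposition

def f(x: int) -> int:
    """Hypothetical reversible classical step (3 is coprime to 2^n,
    so this is a permutation of the register values)."""
    return (3 * x + 1) % dim

# 'Parallel' classical computation: every instance of the register is
# transformed by f at once, by permuting the amplitudes.
out = np.zeros_like(amps)
for x in range(dim):
    out[f(x)] += amps[x]

print("instances computed in parallel:", dim)
print("total measure conserved:", np.isclose(np.sum(out**2), 1.0))
```

Even at ten qubits the register already has 1,024 instances, and each added qubit doubles the count – the "two to the power of the number of qubits" growth described above.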
Quantum computers are limited by the laws of physics that govern quantum interference.
Only certain types of parallel computation can be performed with the help of the multiverse, in this way.
They are the ones for which the mathematics of quantum interference happens to be just right for combining into a single history the information that is needed for the final result.
In such computations, a quantum computer with only a few hundred qubits could perform far more computations in parallel than there are atoms in the visible universe. ‘Scaling’ the technology to larger numbers of qubits is the challenge (around ten qubits at the time of the book’s writing).
When a large object is affected by a small influence, the usual outcome is that the large object is left strictly unchanged. The reason: in the Mach-Zehnder interferometer, two instances of a single photon travel on two paths. On the way, they strike two different mirrors. Interference will happen only if the photon does not become entangled with the mirrors – but it will become entangled if either mirror retains the slightest record of having been struck (for that would be a differential effect of the instances on the two different paths). Even a single quantum of change in the amplitude of a mirror’s vibration on its supports, for instance, would be enough to prevent the interference (the subsequent merging of the photon’s two instances). When an instance of the photon bounces off either mirror and its momentum changes, the mirror’s momentum must change by an equal and opposite amount. Hence it seems that, in each history, one mirror but not the other must be left vibrating with slightly more or less energy after the photon has struck it. That energy change would be a record of which path the photon took, and hence the mirrors would be entangled with the photon. But that is not what occurs.
Remember that, at a sufficiently fine level of detail, what we crudely see as a single history of the mirror, resting passively or vibrating gently on its supports, is actually a vast number of histories with instances of all of its atoms continually splitting and rejoining. In particular, the total energy of the mirror takes a vast number of possible values around the average, ‘classical’ one.
So how does a discrete change – of a single quantum of energy – occur without any discontinuity? Consider the simplest case: an atom absorbs a photon, including all its energy. This energy transfer does not take place instantaneously.
How does it occur? At the beginning of the process, the atom is in its ‘ground state’, in which its electrons have the least possible energy allowed by Quantum Theory, and all its instances are fungible. At the end, all those instances are still fungible, but now they are in the ‘excited state’, which has one additional quantum of energy.
Which begs the obvious question: what is the atom like halfway through the process? Its instances are still fungible, but now half of them are in the ground state and half in the excited state. It is as if a whole, indivisible dollar changed ownership gradually from one person to another (Deutsch uses the analogy of dollars in a bank to illustrate fungibility: there is no such thing as David’s dollar and Amal’s dollar – they are not stored separately in the bank’s database; the dollars are interchangeable, and only the amount belonging to each individual matters). This is how transitions between discrete states happen in a continuous way.
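The halfway-through picture can be sketched with the standard two-level Rabi-oscillation formula (my choice of illustration, textbook quantum optics rather than anything from Deutsch's book): the proportion of fungible instances found in the excited state rises continuously from 0 to 1, even though each instance's energy is discrete.

```python
import math

OMEGA = 1.0   # Rabi frequency, arbitrary units (an assumed toy value)

def excited_fraction(t: float) -> float:
    """Proportion of fungible instances found in the excited state,
    following the standard Rabi formula sin^2(Omega*t/2)."""
    return math.sin(OMEGA * t / 2) ** 2

T = math.pi / OMEGA   # time for a complete ground -> excited transfer
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    t = frac * T
    print(f"t = {frac:.2f} T -> excited proportion {excited_fraction(t):.3f}")
```

At t = 0.5 T exactly half the instances are excited – the "half ground, half excited" situation the paragraph describes – and the discrete one-quantum transfer is completed smoothly by t = T.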
In classical physics, a ‘tiny effect’ always means a tiny change in some measurable quantities. In quantum physics, by contrast, physical variables are typically discrete and so cannot undergo tiny changes. Instead, a ‘tiny effect’ means a tiny change in the proportions of instances that have various discrete attributes. Relatedly, time within quantum mechanics is not fully understood. We can be fairly sure, though, that in whatever constitutes a future quantum theory of gravity (a unification of, or resolution to, the clashes between Quantum Theory and General Relativity), different times will turn out to be a special case of different universes.
In other words, time is an entanglement phenomenon, which places all equal clock readings into the same history. Whenever we observe anything – a scientific instrument or a galaxy or a human being – what we are actually seeing is a single-universe perspective on a larger object that extends some way into other universes. In some of those universes the object looks exactly as it does to us, in others it looks different, or is absent altogether. In sum, we are channels of information flow. So are histories, and so are all relatively autonomous objects within histories; but we sentient beings are extremely unusual channels, along which (sometimes) knowledge grows.
This [knowledge growth] can have dramatic effects, not only within a history (where it can, for instance, have effects that do not diminish with distance), but also across the multiverse. Since the growth of knowledge is a process of error-correction, and since there are many more ways of being wrong than right, knowledge-creating entities rapidly become more alike across different histories than other entities do. Knowledge-creating processes are therefore unique: all other effects diminish with distance in space and become increasingly different across the multiverse in the long run.
So let’s ponder some questions that arise from this rather unique capability – knowledge creation:
What if there is something other than information flow that can cause coherent, emergent phenomena in the multiverse?
What if knowledge, or something other than knowledge, could emerge from that, and begin to have purposes of its own, and to conform the multiverse to those purposes, as we do? Could we communicate with it?
(We are at the top rank of significance in the universe.) Anything else that could create explanations would be at least as significant as us humans. And there is no bound on the future growth of explanation and, by extension, knowledge.
In sum:
The physical world is a multiverse, and its structure is determined by how information flows in it.
In many regions of the multiverse, information flows in quasi-autonomous streams called histories, one of which we call our ‘universe’.
We know of the multiverse, and can test the laws of quantum physics, because of the phenomenon of quantum interference.
Thus, a universe is not an exact feature of the multiverse but an emergent one.
One of the most unfamiliar and counter-intuitive things about the multiverse is fungibility.
The laws of motion in the multiverse are deterministic, and apparent randomness is due to initially fungible instances of objects becoming different.
In quantum physics, variables are typically discrete and how they change from one value to another is a multiverse process involving interference and fungibility.
Personal blog. All views are mine only.