# Quantum Computers

By [Amal Varghese](https://paragraph.com/@amal-varghese) · 2023-04-11

---

The following is a summary of the chapter ‘Quantum Computers’, from David Deutsch’s book, _The Fabric of Reality._ I recommend reading the original work.

Quantum mechanical effects are dominant in all sufficiently small systems.

A quantum computer (QC) is a machine that uses uniquely quantum-mechanical effects, especially interference, to perform wholly new types of computation that would be impossible, even in principle, on any Turing machine, and hence on any classical computer.

QCs will be the first technology that allows useful tasks to be performed in collaboration between parallel universes.

A Quantum Computer would be capable of distributing components of a complex task among vast numbers of parallel universes, and then sharing the results.

_Universality:_ computers need to be universal (beyond merely Turing complete, including as VR generators), and must be built in a manner whereby they do not require impracticably large resources to render simple aspects of reality. In short, they must be efficient.

_Computational complexity theory_ calculates the amount of resources (such as time and memory) required to perform given computational tasks.

There is a distinction between tractable and intractable tasks.

_Tractability_ refers to the fact that the running time does not increase too sharply when we apply the same method to ever larger inputs; that is, the marginal cost of computing does not grow exponentially. Multiplying two numbers is a classic example.

By contrast, factorisation (the reverse of multiplication) seems far harder: the running time grows in geometric proportion (exponentially) as the number of digits grows.

Factorisation of large numbers is therefore an _intractable_ task (i.e. its difficulty increases exponentially with the size of the input).
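
To make the contrast concrete, here is a minimal Python sketch (mine, not the book's): multiplying two numbers is essentially instant, while factorising their product by the naive method of trial division takes a number of steps that grows exponentially with the number of digits. The particular primes and the trial-division method are chosen purely for illustration.

```python
import time

def smallest_factor_by_trial_division(n):
    """Naive factorisation: try every candidate divisor up to sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

p, q = 1_000_003, 1_000_033            # two 7-digit primes (illustrative only)
n = p * q                              # multiplication: effectively instantaneous

start = time.perf_counter()
print(smallest_factor_by_trial_division(n), f"({time.perf_counter() - start:.2f}s)")
# Adding a digit to each prime multiplies the trial-division work by about 10,
# while the cost of the multiplication itself barely changes.
```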

**Classical Physics**

_Chaos theory:_ there are limitations on predictability even in classical physics, stemming from the fact that almost all classical systems are inherently unstable. They exhibit an _extreme sensitivity to initial conditions_, and since we can never measure initial positions and velocities perfectly, long-term prediction becomes impossible.

Any error in the initial data, however slight, tends to grow exponentially and irregularly (chaotically) with time, so after a while the original, slightly imperfectly known state is no guide at all to what the system is doing.
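
A toy illustration of this sensitivity (my example, not the book's), using the chaotic logistic map: two trajectories started a hair's breadth apart soon differ completely.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, which is chaotic at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001   # the "true" state and a slightly mismeasured copy
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x - y):.6f}")
# Within a few dozen steps the difference is of order 1: the slightly imperfect
# initial data is no guide at all to what the system is doing.
```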

**Quantum Mechanics**

In Quantum Mechanics (QM), small deviations from a specified initial state tend to cause only small deviations from the predicted final state.

But there are other difficulties in QM:

The laws of QM require an object that is initially at a given position (in all universes) to ‘spread out’ in the multiverse (MV) sense.

e.g. a photon and its other-universe counterparts all start from the same point on a glowing filament, but then move in trillions of different directions.

When we later measure what has happened, we too become differentiated, as each copy of us sees what has happened in our particular universe.

e.g. if the object in question is the Earth’s atmosphere, then a hurricane (or, in general, any event) may have occurred in, say, 30% of universes and not in the remaining 70%.

Subjectively we perceive this as a single unpredictable or ‘random’ outcome, though from the MV point of view all the outcomes have actually happened. This is what makes the weather unpredictable.

(Remember in _classical physics_, the unpredictability stems from sensitivity to initial conditions).

In neither case will any amount of computation lessen the unpredictability.

(Remember: the intractability is a computational-resource issue; that is, we could readily make a prediction if _only_ we could perform the required computation, but we cannot do so, because the resources required become impractically large.)

In my previous blog on the [multiverse](https://mirror.xyz/0x9a7a1ab9151Ed11C3B9DDDb1FB9df3680eE8C54d/kLZzyMJ1Y-VwR_aJnFin25VHYNvI5p256gkXS-5FySM), I illustrated (using shoddy drawing skills) how a photon could split and later rejoin (known as _interference_), thus demonstrating the existence of parallel (but fungible) histories.

Here it is important to understand that for many other experiments quantum theory (QT) predicts a single, definite outcome, even though the universes differed at intermediate stages of the experiment; these are non-random interference phenomena, and QT predicts what the outcome will be.

Only when interference occurs between the two paths is the outcome predictable.

In the previously illustrated [shadow experiments](https://mirror.xyz/0x9a7a1ab9151Ed11C3B9DDDb1FB9df3680eE8C54d/kLZzyMJ1Y-VwR_aJnFin25VHYNvI5p256gkXS-5FySM), a single photon passes through a barrier containing some small holes and then falls on a screen. Suppose that there are a thousand holes in the barrier. There are places on the screen where the photon can fall (and does fall, in some universes), and places where it cannot fall. To calculate whether a particular point on the screen can ever receive the photon, we must calculate the combined effect of the thousand parallel-universe copies of the photon, one passing through each hole, to determine whether or not they are all prevented from reaching that point.

Thus, we must perform roughly a thousand times as much computation as we would if we were working out whether or not a classical particle would strike the specified point.
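
A minimal sketch of where that extra work comes from (my own illustration, with made-up numbers): one complex amplitude per hole, i.e. per interfering universe, must be summed to decide whether the photon can ever reach a given point on the screen.

```python
import cmath

def point_can_receive_photon(path_lengths, wavelength=500e-9):
    """Add one complex amplitude per hole (per interfering universe) and check
    whether they cancel at this point on the screen."""
    k = 2 * cmath.pi / wavelength                         # wavenumber of the photon
    total = sum(cmath.exp(1j * k * length) for length in path_lengths)
    return abs(total) / len(path_lengths) > 1e-3          # ~0 means a dark point

# 1000 holes => 1000 amplitudes to add up, versus a single classical trajectory.
# Path lengths below are invented purely to produce one bright and one dark point.
in_phase = [1.0 + h * 500e-9 for h in range(1000)]        # differ by a whole wavelength
out_of_phase = [1.0 + h * 250e-9 for h in range(1000)]    # differ by half a wavelength
print(point_can_receive_photon(in_phase), point_can_receive_photon(out_of_phase))
# True False
```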

Far higher multiplicities are possible when there are two or more interacting particles involved in an interference phenomenon.

Thus, if each of two interacting particles can be in a thousand different states, the pair can be in a million different states at an intermediate stage of the experiment, so there can be up to a million universes that differ in what this _pair_ of particles is doing.

And thus the number of different histories that we have to calculate if we want to predict what will happen in such cases increases exponentially with the number of interacting particles.
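
A quick back-of-the-envelope check of that exponential growth (not from the book): with a thousand possible paths per particle, the number of joint histories is 1000 raised to the number of particles.

```python
# Back-of-the-envelope only: 1000 possible paths per particle means
# 1000**n joint histories to track for n interacting particles.
for n_particles in (1, 2, 3, 10):
    print(f"{n_particles} particle(s): {1000 ** n_particles:.3e} histories")
```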

That is why the task of computing how a typical quantum system will behave is well and truly intractable (it requires inordinately large amounts of computation).

So unpredictability is not the issue; in fact, we see that in quantum phenomena outcomes are highly predictable.

That is because, in such phenomena, the same, definite outcome is the result of interference between vast numbers of universes that were different during the experiment.

All this is in principle predictable from QT and is not overly sensitive to the initial conditions.

What makes it hard to predict that in such experiments the outcome will always be the same is that doing so requires an inordinately large amount of computation.

_Intractability_ is in principle a greater impediment to universality than _unpredictability_ could ever be.

* * *

Quantum Mechanics, as seen by Feynman:

Whilst it might appear as if no practical rendering is possible for environments that show the effects of quantum interference, Feynman pointed out that if it [the rendering] requires so much computation to work out what will happen in an interference experiment, then the very act of setting up such an experiment and measuring its outcome is tantamount to performing a complex computation.

Thus, Feynman reasoned, it might after all be possible to render quantum environments efficiently, provided the computer were allowed to perform experiments on a real quantum-mechanical object.

The computer would choose what measurements to make on an auxiliary piece of quantum hardware as it went along, and would incorporate the results of the measurements into its computations.

The auxiliary quantum hardware would in effect be a computer too, e.g. an interferometer.

Today, it might be called a special-purpose Quantum Computer.

It can be programmed as follows:

*   Set up the mirrors in a certain geometry and then project a single photon at the first mirror.
    
*   In a non-random interference experiment the photon will always emerge in one particular direction, determined by the settings of the mirrors, and we could interpret that direction as indicating the result of the computation.
    
*   In a more complex environment, with several interacting particles, such a computation could of course become ‘intractable’. Yet, since we can readily obtain its result just by performing the experiment, it is not really intractable at all.
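
Here is a toy numerical sketch of such a non-random interference experiment (my own, not from the book), modelling a Mach-Zehnder-style arrangement of two 50/50 beam-splitters as 2x2 unitary matrices: the photon takes both arms at the intermediate stage, yet always emerges from one particular port, and that port can be read as the result of the ‘computation’.

```python
import numpy as np

# A 50/50 beam-splitter acting on the two-path state of a single photon.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

photon_in = np.array([1, 0])           # the photon enters through port 0
after_first = BS @ photon_in           # now in both arms at once (one per universe)
print(np.abs(after_first) ** 2)        # [0.5 0.5]: either arm, depending on the universe

photon_out = BS @ after_first          # the two paths interfere at the second splitter
print(np.abs(photon_out) ** 2)         # [0. 1.]: the photon always exits port 1
```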
    

Feynman conjectured that it would not be necessary to use a literal copy of the environment being rendered: it would be possible to find a much more easily constructed auxiliary device whose interference properties were nevertheless analogous to those of the target environment.

A normal computer could do the rest of the rendering, working through the analogy between the auxiliary device and the target environment.

Crucially, Feynman expected that it would be a tractable task. Further, he conjectured that all the quantum-mechanical properties of any target environment could be simulated by auxiliary devices of a particular type that he specified (namely an array of spinning atoms, each interacting with its neighbours).

He called the whole class of such devices a **universal quantum simulator**.
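
As a rough sketch of what simulating such an ‘array of spinning atoms’ costs on a classical computer (my own toy example; the specific nearest-neighbour couplings are an arbitrary illustrative choice, not Feynman's construction), note the 2**n-dimensional state vector: the classical cost grows exponentially with the number of spins, which is precisely the burden the quantum simulator would avoid.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])         # Pauli matrices for a single spin
Z = np.array([[1, 0], [0, -1]])

def embed(single, site, n):
    """Place a single-spin operator at `site` in an n-spin chain (identity elsewhere)."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

n = 3                                  # a tiny "array of spinning atoms"
H = sum(embed(Z, k, n) @ embed(Z, k + 1, n) for k in range(n - 1))  # neighbour coupling
H = H + 0.5 * sum(embed(X, k, n) for k in range(n))                 # local spin term

psi0 = np.zeros(2 ** n)
psi0[0] = 1.0                          # all three spins initially "up"
psi_t = expm(-1j * H * 1.0) @ psi0     # evolve for one unit of time
print(np.round(np.abs(psi_t) ** 2, 3)) # probabilities over the 2**3 = 8 basis states
```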

And beyond the simulator, in quantum physics there is also a universal quantum computer.

A universal quantum computer could perform any computation that any other quantum computer could perform, and it could render any finite physically possible environment in Virtual Reality (VR).

The time and resources required would not increase exponentially, making it _tractable_.

In short, the theory of computation is now the quantum theory of computation.

As I previously wrote in ‘The Multiverse is Real’ post, what changes in a quantum world is the proportion of universes in the entire multiverse, not the state of any one universe.

All changes are discrete.

![](https://storage.googleapis.com/papyrus_images/81b7b51e0bb92830b168ac42c594627523a8c9838a515f8db9aeb322b928643c.jpg)

Another way in which quantum physics is implicit in classical computation is that all practical implementations of Turing-type computers rely on such things as solid matter or magnetised materials, which could not exist in the absence of _quantum-mechanical effects._

e.g. any solid body consists of an array of atoms, which are themselves composed of electrically charged particles (electrons, and protons in the nuclei).

But recall classical chaos theory: any inaccuracy in the initial conditions grows exponentially, so over time the trajectories of such particles cannot be pinned down, let alone remain stable.

And because of classical chaos:

*   No array of charged particles could be stable under classical laws of motion. The positive and negative charged particles would simply move out of position and crash into each other, and the structure would disintegrate.
    
*   It is only the strong quantum interference between the various motions taken by the charged particles in parallel universes that prevents such catastrophes and makes solid matter possible.
    

Building a universal QC:

*   Detecting an interference phenomenon involves setting up an appropriate interaction between all the variables that have been different in the universes that contribute to the interference.
    
*   The more interacting particles are involved therefore, the harder it tends to be to engineer the interaction that would display the interference – that is, the result of the computation.
    
*   Among the many technical difficulties of working at the level of a single atom or single electron, one of the most important is that of preventing the environment from being affected by the different interfering sub-computations.
    

For, if a group of atoms is undergoing an interference phenomenon, and they differentially affect other atoms in the environment, then the interference can no longer be detected by measurements of the original group alone, and the group is no longer performing any useful quantum computation. This process can be described as _decoherence._

Note: it is the effect of the quantum computation on the outside world that causes _decoherence_.
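
A toy numpy sketch of decoherence (mine, not the book's): a qubit in a superposition has a coherent state capable of interference, but as soon as the two branches differentially affect even a single environment atom (modelled here by a CNOT interaction), measurements on the qubit alone can no longer reveal the interference; the off-diagonal terms of its reduced state vanish.

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)   # qubit in a superposition of 0 and 1
env = np.array([1, 0])                 # one environment atom, initially in state 0
psi = np.kron(plus, env)               # joint state: qubit (first) with environment (second)

# CNOT on (qubit, environment): the environment flips only if the qubit is 1,
# i.e. the two branches of the computation differentially affect the environment.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def reduced_qubit_state(psi):
    """Density matrix of the qubit alone, with the environment traced out."""
    m = psi.reshape(2, 2)              # rows: qubit basis states, columns: environment
    return m @ m.conj().T

print(np.round(reduced_qubit_state(psi), 2))          # off-diagonals 0.5: still coherent
print(np.round(reduced_qubit_state(CNOT @ psi), 2))   # off-diagonals 0.0: decohered
```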

Thus, the race is on to engineer sub-microscopic systems in which information carrying variables interact among themselves, but affect their environment as little as possible.

In the quantum theory of computation, the precise form of the interactions hardly matters.

Virtually any atomic-scale system of interacting bits, so long as it does not decohere, could be made to perform useful quantum computations.

Interference phenomena involving vast numbers of particles, such as superconductivity and superfluidity, are known, but it seems that none of them can be used to perform any interesting computations.

(At time of writing, only single-bit quantum computations can be easily performed in the laboratory)

Some physicists are skeptical – they believe that decoherence will never be reduced to the point where more than a few quantum-computational steps can be performed.

Universal quantum computers would make new classes of purely mathematical computations _tractable_, because rendering certain environments is tantamount to evaluating certain mathematical functions. Such functions have been found, a key one being the task of factorising large numbers.

It is likely that a _quantum factorisation engine_ will be built long before the full range of quantum computations are technologically feasible.

This is of great significance to cryptography (science of securely communicating and authenticating information).

Realistic communication networks may be global, with large, constantly changing sets of participants, so it is impracticable for all of them to meet physically and in advance to exchange cryptographic keys that would later allow them to communicate without fear of eavesdroppers.

The most secure known method (at time of writing) of public key cryptography depends on the _intractability_ of the problem of factorising large numbers, known as the _RSA crypto system._

It depends on a mathematical procedure whereby a message can be encoded using a large (say, 250-digit) number as a key.

The recipient can freely make this key public, because a message encoded with it can only be decoded given knowledge of the factors of that number. Thus I can choose two 125-digit prime numbers and keep them secret, but multiply them together and make their 250-digit product public. Anyone can send me a message using that number as the key, but only I can read the messages, because only I know the secret factors.
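
A toy sketch of the scheme with deliberately tiny primes (the standard 61 x 53 textbook example rather than 125-digit ones), purely to show that publishing the product does not give away the factors needed to decrypt.

```python
from math import gcd

p, q = 61, 53                  # my two secret primes (real keys use ~125-digit primes)
n = p * q                      # 3233: the public key, which anyone may know
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, chosen coprime to phi
d = pow(e, -1, phi)            # private exponent: computing it needs phi, i.e. the factors
assert gcd(e, phi) == 1

message = 65
ciphertext = pow(message, e, n)    # 2790: anyone can encrypt using the public n and e
decrypted = pow(ciphertext, d, n)  # 65: decrypting needs d, hence the secret factors
print(ciphertext, decrypted)
```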

However, a quantum computer running Shor’s algorithm could factorise such a number using only a few thousand arithmetic operations, which might well take only a matter of minutes.

Larger numbers won’t help either because the resources required by Shor’s algorithm increase only slowly with the _size_ of the number being factorised.

And as we know, in the quantum theory of computation factorisation is a very tractable task.

And whilst it is thought that, in the presence of a given level of decoherence, there would be a practical limit on the size of the number that could be factorised, there is no known lower limit on the rate of decoherence that can be achieved (technologically).

Therefore, we must predict that one day in the future the RSA crypto system, with any given length of key, may become insecure. (And in one sense it is insecure today: anyone who records an RSA-encrypted message now, and waits until they can buy a quantum factorisation engine with low enough decoherence, will be able to decode the message.)

**How does a factorisation engine operate?**

When a quantum factorisation engine (QFE) is factorising a 250 digit number, the number of interfering universes will be of the order of 10 to the power 500.

This staggeringly large number is the reason why Shor’s algorithm makes factorisation tractable.

(A few thousand operations are performed in each universe that contributes to the answer; all those computations run in parallel, in different universes, and share their results through interference.) Shor’s algorithm operates initially only on a set of universes that are identical to one another, and it causes them to become differentiated only within the confines of the factorisation engine.
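
Shor’s quantum circuit is beyond the scope of this summary, but its classical skeleton can be sketched. The quantum computer’s contribution is to find the period of a^x mod N, which is where the vast interference across universes happens; once the period is known, ordinary arithmetic extracts the factors. In the toy code below the period is found by exponentially slow brute force, purely as a stand-in for that quantum step, and the helper name is my own.

```python
from math import gcd

def find_period_classically(a, N):
    """Stand-in for the quantum core of Shor's algorithm: the smallest r > 0
    with a**r % N == 1, found here by brute force (exponentially slow)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N = 15          # the number to factorise (tiny, purely for illustration)
a = 7           # a base coprime to N (for some bases one must retry; this one works)
r = find_period_classically(a, N)      # the step a quantum computer does efficiently
assert r % 2 == 0 and pow(a, r // 2, N) != N - 1
print("period:", r, "factors:", gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))
# period: 4 factors: 3 5
```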

More broadly, beyond quantum computation, Shor’s algorithm is proof of the existence of multiple universes. That is because, when Shor’s algorithm has factorised a number, it has used 10 to the power 500 or so times the computational resources that can be seen to be present in the visible universe. But there are only about 10 to the power 80 atoms in the entire visible universe, which is minuscule compared with 10 to the power 500. In short, if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorise such a large number.

Therefore, the existence of parallel universes is the most compelling explanation for this computational capability.

An additional new task that quantum computers can perform, and that no classical computer could, is implementing a new, absolutely secure system of quantum cryptography. The first such special-purpose quantum computers, built in 1989, consisted of a pair of quantum cryptographic devices; messages are encoded in the states of individual photons emitted by a laser. Today, such machines can be built using existing technology because they need to perform their quantum computations on only one photon at a time.

The system’s security is based on the properties of quantum interference, thus giving it its classically unobtainable absolute security. No amount of future computation by any sort of computer, whether for millions or trillions of years, would be of any help to an eavesdropper on quantum-encrypted messages: for if one communicates through a medium exhibiting interference, one can detect eavesdroppers (not possible in classical physics).

If one makes any measurement on a quantum system, one alters its subsequent interference properties. The communication protocol relies on this effect. The communicating parties effectively set up repeated interference experiments, coordinating them over a public communication channel. Only if the interference passes the test for there having been no eavesdropper do they pass on to the next stage of the protocol, which is to use some of the transmitted information as a cryptographic key.
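
The best-known protocol of this kind is BB84 (the 1989 devices mentioned above were prototypes of it). Below is a toy classical simulation of its central idea under a simple intercept-resend attack, with hypothetical helper names of my own: measuring photons in transit disturbs them, so comparing a sample of the sifted key reveals an elevated error rate (around 25% in this simplified model) whenever someone has been listening.

```python
import random

def sifted_error_rate(n_photons, eavesdropper, seed=1):
    """Toy intercept-resend model of a BB84-style exchange: estimate the error
    rate in the sifted key; errors are how eavesdropping is detected."""
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n_photons):
        sent_bit = rng.getrandbits(1)
        basis = rng.getrandbits(1)                  # sender's random basis choice
        arriving_bit = sent_bit
        if eavesdropper and rng.getrandbits(1) != basis:
            # Eve measured in the wrong basis, disturbing the photon, so the
            # legitimate receiver's later measurement gives a random result.
            arriving_bit = rng.getrandbits(1)
        if rng.getrandbits(1) != basis:             # receiver's random basis choice
            continue                                # mismatched basis: sifted out
        kept += 1
        errors += arriving_bit != sent_bit
    return errors / kept

print("no eavesdropper:  ", sifted_error_rate(40000, False))   # ~0.0
print("with eavesdropper:", sifted_error_rate(40000, True))    # ~0.25
```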

At worst, a persistent eavesdropper might prevent any communication from taking place at all (by physically cutting a line, for example).

As for reading a message, only the intended recipient can do that, and the guarantee of that is provided by the laws of physics.

A limitation of quantum cryptography:

Because Quantum Cryptography relies on manipulating individual photons, it suffers from a major limitation.

Each photon that is successfully received, carrying one bit of the message, must somehow have been transmitted intact from the transmitter to the receiver. But every method of transmission involves losses, and if these are too heavy, the message will never arrive. Setting up relay stations (which is the remedy for this problem in existing communications systems) would compromise the security because an eavesdropper could, without being detected, monitor what goes on inside the station.

(As of 1997, when the book was written) the best existing quantum cryptographic systems use fibre-optic cables and have a range of around 10 km. Further advances in Quantum Cryptography are required for global communication.

**In sum:**

*   The laws of physics permit computers that can render every physically possible environment without using impractically large resources.
    
*   Universal computation is possible, and tractable.
    
*   Quantum phenomena may involve vast numbers of parallel universes and therefore may not be capable of being efficiently simulated within one universe.
    
*   However, this strong form of universality still holds, because quantum computers can efficiently render every physically possible quantum environment, even when vast numbers of universes are interacting.
    
*   Quantum computers can efficiently solve certain mathematical problems, such as factorisation, which are classically intractable, and can implement types of cryptography which are classically impossible.

---

*Originally published on [Amal Varghese](https://paragraph.com/@amal-varghese/quantum-computers)*
