About 100 billion optical components would be needed to create a practical quantum computer that uses light to process information. That is the conclusion of physicists in the UK, who have calculated how many components are required to make a fault-tolerant linear optical computer. Their comprehensive study found that the total number of required components for a photon-based computer would be at least five orders of magnitude larger than for a matter-based processor.
Unlike the components of conventional computers – which are extremely reliable – quantum logic devices are prone to failure. This is because the entities used to store and transmit information – quantum bits (qubits) – quickly lose their quantum nature when in contact with the outside world. Ion-based qubits, for example, must be kept in ultrahigh-vacuum conditions to minimize their contact with air molecules. One way of dealing with this fragility is to create a fault-tolerant quantum computer, in which a single “logical qubit” is distributed across a number of different “physical qubits” – the latter being ions, superconducting circuits or photons. The idea is that if one or more physical qubits fail, then the logical qubit can be recovered and the calculation can continue.
Photons have several properties that make them attractive for use as physical qubits. They can store quantum information in several different ways, including in their polarization. Furthermore, photons can travel hundreds of kilometres in air or optical fibres and still retain their quantum information. Also, it is relatively straightforward to create pairs of entangled photons for use as input to a quantum computer. What is difficult, however, is to get these photons to interact with each other within the quantum computer – something that is needed for most quantum-computing processes.
One option is to use nonlinear optical components that cause photons to interact. The problem with such devices, however, is that most photons will not interact and large numbers of input photons are required to get the desired output. In 2001 Emanuel Knill, Raymond Laflamme and Gerard Milburn realized that quantum computing could be achieved without having photons interact with each other. Called “linear optical quantum computing” (LOQC), the scheme uses entangled photons as input to a quantum computer. But instead of having these photons interact while in the computer, specific measurements are made on some of the photons, with the output photons providing the result of the desired calculation.
While fault-tolerant LOQC can be implemented using relatively simple optical components such as mirrors, beamsplitters and photon detectors, it requires large numbers of photons as input – and therefore large numbers of these components are needed. The process is also non-deterministic, so not every attempt at performing the computation will succeed, meaning that even more resources are required.
Now, Simon Benjamin and colleagues at Oxford and Bristol universities have calculated how many devices would actually be needed to create a fault-tolerant LOQC system. The researchers assumed that a practical quantum computer would require about 1000 logical qubits to execute useful quantum processes such as Shor’s factoring algorithm. They also assumed that each component in their model computer would lose one photon in 1000, and that the error rate of each component would be one in 100,000. These tolerances are currently not achievable, and Benjamin explains that if today’s tolerances were used, the size of the LOQC system would be even larger. “We chose numbers that are beyond the state-of-the-art but perhaps not impossible to achieve, and showed that even then, the overall resource costs are high,” he adds.
The team looked at the resources required to create a “3D cluster state” quantum computer based on LOQC. In this fault-tolerant approach, all of the entanglement required for the calculation is created before the calculation, which is then executed by performing measurements. “A 3D cluster state is an entity involving multiple photons woven together to form the ‘fabric’ of the computer,” explains Benjamin. “By ‘woven’ I mean that they are in a highly entangled state, and the entanglement process is costly because photons don’t naturally interact with each other.”
Billions of components
Indeed, the high cost is borne out in the team’s calculation of the resources needed to create a practical quantum computer using this scheme. They reckon that 100,000 detectors are required for each physical qubit in their hypothetical quantum computer. The numbers of mirrors and beamsplitters would also be about 100,000 each. Furthermore, each logical qubit in the computer would comprise 1000 physical qubits to ensure fault tolerance. This means that a whopping 100 billion detectors would be needed to build a practical quantum computer comprising 1000 logical qubits.
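The headline figure follows directly from multiplying the per-qubit counts together. A minimal sketch of the arithmetic, using the rounded figures quoted above (the variable names are illustrative, not from the paper):

```python
# Rounded resource figures quoted in the article
logical_qubits = 1_000            # needed for useful algorithms such as Shor's
physical_per_logical = 1_000      # overhead required for fault tolerance
detectors_per_physical = 100_000  # mirrors and beamsplitters are similar in number

total_detectors = logical_qubits * physical_per_logical * detectors_per_physical
print(f"{total_detectors:.1e}")   # prints 1.0e+11, i.e. about 100 billion
```

The same product also illustrates the comparison with matter-based qubits: the first two factors (about a million physical qubits) apply to any fault-tolerant platform, while the last factor of 100,000 is the extra cost specific to the linear-optical approach.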
Benjamin points out that the need for about 1000 physical qubits per logical qubit also applies to fault-tolerant systems made from ions or superconducting circuits. “Making a fault-tolerant quantum computer is hard and needs at least millions of physical qubits, but with the linear-optical approach there is the extra cost, another factor of 100,000 (or more) turning ‘hard’ into ‘so hard, it may be impossible’,” he explains.
While such a huge component count might seem like an insurmountable barrier to creating practical LOQC systems, Benjamin points out that there are ways forward. “In situations where small imperfections are acceptable, for example for simulators that predict chemical reactions, we don’t need full fault tolerance and then photonic machines may indeed be an elegant approach,” he says. Benjamin also told physicsworld.com that “there is another approach that I am a big fan of, where matter systems [such as ions] store and process information, but they are linked up optically”. “It’s a best-of-both-worlds picture where small, isolated matter systems do the processing, meanwhile photons do what they’re good at – communicating,” he says.
The calculations are described in Physical Review X.