Over the past few years, the big question in quantum computing has shifted from “can we make this work?” to “can we scale this?” It’s no longer a novelty when an algorithm is run on a small quantum computer – we’ve done it with a number of different technologies. The big question now: when can we run a useful problem on quantum hardware that clearly outperforms a traditional computer?

For that, we still need more qubits. And to consistently outperform classical computers on complicated problems, we’ll need enough qubits to perform error correction. That means thousands of qubits. So while there is currently a clear technology leader in qubit count (superconducting qubits called transmons), there’s always a chance that another technology will eventually scale better.

This possibility is what makes several results published today interesting. While there are differences between the three announced results, they all have one thing in common: high-quality qubits produced in silicon. After all, if there’s anything we know how to scale, it’s silicon-based technologies.


## Quality issues

The idea of making qubits out of silicon has some history, and we’ve made progress with the technology in the past. Indeed, fabricating qubits from silicon is relatively easy using techniques developed for the semiconductor industry. For example, the intentional contamination called “doping,” normally used to alter silicon’s electrical properties, can also incorporate atoms that act as qubits. Likewise, our ability to place wiring on silicon can be used to build structures that form quantum dots, in which an individual electron can be controlled.

The best part is that these approaches require very little space to implement, which means we could potentially squeeze a lot of qubits onto a single silicon chip. That’s a big contrast to alternative technologies like transmons and trapped ions, both of which are large enough that companies working with them are already talking about (or even implementing) spreading processors across multiple chips.

The problem so far has been that silicon-based qubits are rather error-prone. Ultimately, we want to use groups of these individual qubits as a single logical qubit that implements error correction. But if errors occur faster than they can be corrected, that won’t be possible. And so far, silicon-based qubits have been firmly on the wrong side of that error threshold.

## High-quality qubits

Two papers take a similar approach to improving the performance of quantum dot-based qubits. One is from a group of researchers based at Delft University of Technology, and the other is mainly from RIKEN in Japan, with some collaborators in Delft. Both groups used silicon with wiring on it to create quantum dots that each trapped a single electron. The spin of the trapped electron served as the basis for the qubit. And the two groups took a similar approach, testing their gates under a wide range of conditions to identify which ones tended to produce errors, then operating the qubits in ways that avoided those errors.

In the Delft work, entanglement of the two qubits was achieved by manipulating the quantum dots so that the wave functions of the trapped electrons overlapped. After optimizing how the hardware was operated, the researchers found that both single-qubit and two-qubit gate operations had fidelities above 99.5%. That’s above the threshold needed to make the most commonly considered form of quantum error correction work.

To show that the qubits could do something useful, the researchers used their two-qubit configuration to calculate the ground-state energy of molecular hydrogen. This calculation is relatively easy to do on conventional hardware, so the results can be verified.
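To see why that classical check is straightforward, here’s a minimal sketch (ours, not code from either paper): a two-qubit Hamiltonian is just a 4×4 matrix, so its ground-state energy can be found by exact diagonalization on any laptop. The Pauli-term structure below mirrors common two-qubit encodings of molecular hydrogen, but the coefficients are placeholders chosen purely for illustration, not values from the published work.

```python
import numpy as np

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A two-qubit molecular-hydrogen-style Hamiltonian has the form
#   H = g0*II + g1*ZI + g2*IZ + g3*ZZ + g4*XX + g5*YY
# The coefficients below are illustrative placeholders, NOT the
# values used in either paper.
g = [-0.45, 0.34, -0.43, 0.57, 0.09, 0.09]
terms = [(I2, I2), (Z, I2), (I2, Z), (Z, Z), (X, X), (Y, Y)]

# Build the full 4x4 matrix via Kronecker products and diagonalize it.
H = sum(c * np.kron(a, b) for c, (a, b) in zip(g, terms))
ground_energy = np.linalg.eigvalsh(H).min()  # exact ground-state energy
print(f"ground-state energy: {ground_energy:.4f}")
```

Because `eigvalsh` handles the Hermitian 4×4 matrix exactly, the quantum processor’s answer can be compared against a known-correct number.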

The RIKEN group did something similar and generally found that speeding up operations had a major effect on error rates. Again, addressing this problem produced gates with 99.5% fidelity, well above the threshold needed for error correction. To show that the gates worked, the team implemented a few quantum computing algorithms and showed that they completed with a success rate of around 97%.
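As a back-of-the-envelope illustration (ours, not the paper’s analysis), per-gate errors compound multiplicatively over a circuit, so a success rate near 97% is roughly what you’d expect from a short circuit built out of 99.5%-fidelity gates:

```python
# Rough model: if each gate succeeds with fidelity f, a circuit of
# n gates succeeds with probability roughly f**n (errors compound).
f = 0.995
for n in (1, 6, 10, 100):
    print(f"{n:>3} gates -> ~{f**n:.3f} success probability")
# With f = 0.995, six gates already bring success down to ~0.970.
```

This simple model also shows why fidelity has to keep climbing: at 100 gates, even 99.5% fidelity leaves a success probability near 60%, which is why error correction is needed for longer computations.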