Every quantum computing company talks about qubits. Fewer talk honestly about quantum error correction — and yet QEC is the single most important unsolved problem separating today's noisy quantum hardware from the fault-tolerant machines that will deliver real commercial value. Understanding why QEC is hard, and what it requires, is essential context for evaluating any claim about quantum computing's commercial timeline.
The Fragility Problem
Quantum information is extraordinarily fragile. A qubit maintains its quantum state — a superposition of 0 and 1 — only as long as it remains isolated from its environment. Any interaction with the outside world, even thermal noise or stray electromagnetic fields, causes the qubit's quantum state to degrade in a process called decoherence. Additionally, the quantum gate operations used to manipulate qubits are imperfect: every rotation, every two-qubit interaction, introduces a small error.
In classical computers, errors are so rare (roughly one per 10^17 operations) that they can effectively be ignored. Current physical qubits have error rates many orders of magnitude higher — typically between 0.1% and 1% per gate operation in the best systems. For any algorithm that requires more than a few hundred gates, error accumulation makes the result unreliable.
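A back-of-envelope calculation makes the point. If each gate fails independently with probability p, the chance that a circuit of N gates executes without a single error is (1 - p)^N. The sketch below uses illustrative error rates, not figures from any particular device:

```python
# Back-of-envelope: probability that a circuit runs with zero gate errors,
# assuming each gate fails independently with probability p (illustrative only).
def success_probability(gate_error_rate: float, num_gates: int) -> float:
    return (1.0 - gate_error_rate) ** num_gates

for p in (1e-2, 1e-3):                  # 99% and 99.9% gate fidelity
    for n in (100, 1_000, 10_000):
        print(f"p={p:.0e}, {n:>6} gates -> P(no error) = {success_probability(p, n):.3f}")
```

At a 0.1% error rate, even a thousand-gate circuit runs error-free only about a third of the time.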
The Idea Behind Quantum Error Correction
Quantum error correction is the set of techniques that allow us to detect and correct qubit errors without ever measuring (and thereby collapsing) the quantum state being protected. This sounds paradoxical — how do you check for errors in a quantum system without looking at it? The answer is indirect measurement.
In a QEC code, the logical qubit (the unit of quantum information you actually care about) is encoded across many physical qubits in a carefully chosen entangled state. Errors in individual physical qubits change this entangled state in detectable ways. By measuring certain correlations between physical qubits — called syndrome measurements — a QEC decoder can identify which physical qubit has erred and apply a corrective operation, all without ever directly measuring the logical qubit's state.
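To make this concrete, here is a minimal Python sketch of the simplest error-correcting code, the three-qubit bit-flip repetition code, simulated classically. A logical bit is spread across three physical bits, at most one of them is flipped by noise, and two parity checks between neighboring bits pinpoint the error without ever revealing the logical value. This is a toy classical analogue of syndrome measurement, not the surface code, and it ignores phase errors entirely:

```python
import random

def encode(logical_bit: int) -> list[int]:
    """Encode one logical bit redundantly across three physical bits."""
    return [logical_bit] * 3

def apply_bit_flip(physical: list[int], which: int) -> None:
    """Simulate a bit-flip error on one physical bit."""
    physical[which] ^= 1

def measure_syndrome(physical: list[int]) -> tuple[int, int]:
    """Parity checks between neighboring bits (analogue of stabilizer measurements).
    Each check reveals only whether two bits agree, never the bits themselves."""
    return physical[0] ^ physical[1], physical[1] ^ physical[2]

def correct(physical: list[int], syndrome: tuple[int, int]) -> None:
    """Map the syndrome to the unique single-bit error it identifies."""
    lookup = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    flipped = lookup[syndrome]
    if flipped is not None:
        physical[flipped] ^= 1

logical = 1
physical = encode(logical)
apply_bit_flip(physical, random.randrange(3))   # inject one random error
correct(physical, measure_syndrome(physical))
assert physical == [logical] * 3                 # error found and undone
```

Real QEC codes follow the same pattern, but their stabilizer measurements detect both bit-flip and phase-flip errors on the encoded quantum state.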
The most widely studied QEC code today is the surface code, in which a logical qubit is encoded in a two-dimensional array of physical qubits. The surface code is attractive because it only requires nearest-neighbor interactions between qubits — a property that makes it well-suited to physical qubit arrays. Its fault-tolerance threshold is around 1% physical error rate per gate, meaning that if your physical qubits are better than 99% fidelity, you can in principle achieve arbitrarily low logical error rates by using more physical qubits.
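A commonly quoted heuristic for this suppression is p_logical ≈ A · (p / p_th)^((d+1)/2), where d is the code distance, p the physical error rate per gate, p_th the threshold, and A an order-one constant. The sketch below plugs in assumed values (A = 0.1, p_th = 1%) purely to illustrate the scaling, not to model any specific device:

```python
# Heuristic surface-code scaling: below threshold, every two steps of code
# distance suppress the logical error rate by roughly a factor of p / p_th.
A = 0.1          # order-one prefactor (assumed)
P_TH = 1e-2      # ~1% fault-tolerance threshold (approximate)

def logical_error_rate(p_physical: float, distance: int) -> float:
    return A * (p_physical / P_TH) ** ((distance + 1) / 2)

def physical_qubits_per_logical(distance: int) -> int:
    # One common surface-code layout uses 2*d^2 - 1 physical qubits per logical qubit.
    return 2 * distance ** 2 - 1

for d in (3, 5, 11, 21):                 # physical error rate fixed at 0.1% here
    print(f"d={d:>2}: ~{physical_qubits_per_logical(d):>4} physical qubits, "
          f"p_logical ~ {logical_error_rate(1e-3, d):.1e}")
```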
The Overhead Is Enormous
Here is the uncomfortable truth that much of the quantum computing industry prefers not to dwell on: quantum error correction is expensive. Protecting a single logical qubit against errors requires dozens to hundreds of physical qubits, depending on the code and the required logical error rate. Running a commercially useful algorithm — factoring a cryptographically relevant integer, for example, or simulating a large molecular system — requires thousands of logical qubits. That means millions of physical qubits.
Current quantum processors, including the best superconducting and trapped-ion systems, have hundreds to low thousands of physical qubits. We are roughly three to four orders of magnitude away from the scale required for fault-tolerant commercial algorithms. This is not a criticism — it reflects the genuine difficulty of the engineering challenge, and progress has been remarkable. But it means that honest assessments of quantum computing's commercial timeline must account for this gap.
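The arithmetic behind that gap is simple, using deliberately round, illustrative numbers:

```python
import math

# Rough, illustrative estimate of the scaling gap (not a forecast).
logical_qubits_needed = 1_000        # order of magnitude for a useful algorithm
physical_per_logical  = 1_000        # order of magnitude for surface-code overhead
physical_needed = logical_qubits_needed * physical_per_logical

for qubits_today in (100, 1_000):    # roughly today's device sizes
    gap = physical_needed / qubits_today
    print(f"{qubits_today} qubits today -> gap of ~10^{math.log10(gap):.0f}")
```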
Why Physical Qubit Quality Is the Limiting Factor
The overhead of QEC depends steeply on how far the physical error rate sits below the fault-tolerance threshold. If your physical error rate is close to the threshold (say, 0.5%), you need very large QEC code distances and therefore enormous physical qubit overheads. If your physical error rate is well below the threshold (say, 0.01%), the required overhead drops dramatically.
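Using the same heuristic scaling as above, one can estimate the code distance, and hence the physical-qubit overhead, needed to reach a given logical error rate. The target, prefactor, and threshold below are assumptions chosen for illustration:

```python
A, P_TH = 0.1, 1e-2                      # assumed prefactor and ~1% threshold
TARGET_LOGICAL_ERROR = 1e-12             # illustrative target for a long algorithm

def required_distance(p_physical: float) -> int:
    """Smallest odd code distance whose estimated logical error rate meets the target."""
    d = 3
    while A * (p_physical / P_TH) ** ((d + 1) / 2) > TARGET_LOGICAL_ERROR:
        d += 2                           # surface-code distances are odd
    return d

for p in (5e-3, 1e-3, 1e-4):             # 0.5%, 0.1%, 0.01% physical error rates
    d = required_distance(p)
    overhead = 2 * d ** 2 - 1            # common surface-code layout
    print(f"p = {p:.0e}: distance {d}, ~{overhead} physical qubits per logical qubit")
```

Under these assumptions, the per-logical-qubit overhead differs by well over an order of magnitude between a 0.5% and a 0.01% physical error rate.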
This is why Groove Quantum's focus on achieving world-class two-qubit gate fidelity is not just about bragging rights. Every additional nine of fidelity (99%, 99.9%, 99.99%) dramatically reduces the physical qubit overhead required for fault tolerance. A processor with 99.9% two-qubit fidelity requires roughly 10x fewer physical qubits to achieve a given logical error rate than a processor at 99% fidelity. At the scale of millions of qubits, this difference is decisive.
Germanium Qubits and the QEC Roadmap
Germanium hole-spin qubits are particularly well-positioned for the QEC era for two related reasons. First, their demonstrated gate fidelity (exceeding 99% on a 10-qubit array) already meets the roughly 1% error-rate threshold for surface codes. Second, their physical architecture — a planar array of quantum dots controlled by gate electrodes — maps naturally onto the two-dimensional grid structure required by surface code implementations.
The nearest-neighbor connectivity of gate-defined quantum dot arrays is not an accident or a limitation: it is a feature. Surface codes are designed to work with nearest-neighbor interactions. Qubit platforms that have richer connectivity (like trapped ions with all-to-all coupling) are not necessarily better suited for surface code QEC — and the added connectivity often comes at the cost of gate speed and scalability.
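To see why a planar nearest-neighbor grid is a natural fit, the sketch below lays out an unrotated distance-d surface code on a (2d - 1) by (2d - 1) grid: data qubits occupy one sublattice, measurement (ancilla) qubits the other, and every stabilizer check involves only an ancilla's immediate grid neighbors. The layout is one standard textbook convention, not a description of any particular processor:

```python
def surface_code_layout(d: int):
    """Unrotated distance-d surface code on a (2d-1) x (2d-1) grid.
    Data qubits sit where row+col is even; ancilla (measure) qubits sit
    where row+col is odd. Each ancilla's stabilizer check involves only
    its up/down/left/right neighbors."""
    size = 2 * d - 1
    data, checks = [], {}
    for r in range(size):
        for c in range(size):
            if (r + c) % 2 == 0:
                data.append((r, c))
            else:
                checks[(r, c)] = [(r + dr, c + dc)
                                  for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                                  if 0 <= r + dr < size and 0 <= c + dc < size]
    return data, checks

data, checks = surface_code_layout(3)
print(f"distance 3: {len(data)} data qubits, {len(checks)} measure qubits")
# Every check is local: no stabilizer reaches beyond an ancilla's nearest neighbors.
assert all(abs(r - ar) + abs(c - ac) == 1
           for (ar, ac), nbrs in checks.items() for (r, c) in nbrs)
```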
The path to fault-tolerant quantum computing is long and technically demanding. But it is a path with clear engineering milestones, and germanium spin qubits are at the right point on that path to matter.