It is tempting to assess the quantum computing industry by reading press releases and counting qubits. I want to argue that this approach tells you almost nothing useful — and explain what to look for instead if you want to understand which programs are genuinely progressing toward commercial relevance.
The Qubit Count Trap
When a company announces a quantum processor with 1,000 qubits, or 5,000 qubits, or any large number, the natural question is: does that mean it's more powerful than a processor with 100 qubits? The answer, almost always, is not necessarily — and sometimes no.
Raw qubit count is a hardware metric that means little without context. A processor with 1,000 qubits at 95% two-qubit gate fidelity is dramatically less capable for computational purposes than a processor with 100 qubits at 99.9% fidelity. This is because quantum error correction, the mechanism by which noisy physical qubits are combined into reliable logical qubits, becomes dramatically more efficient as physical error rates fall. At 95% fidelity, the error rate sits above the threshold of leading codes such as the surface code (roughly 1%), so error correction fails outright: adding more physical qubits makes things worse, not better. At 99.9%, you need on the order of a thousand physical qubits per logical qubit for demanding applications. At 99.99%, a few hundred.
The implication is that a "smaller" processor with higher fidelity qubits may be closer to commercially useful computation than a "larger" processor with lower fidelity — even though the marketing materials suggest the opposite.
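The fidelity-to-overhead relationship can be sketched with a standard back-of-envelope surface-code model. The scaling heuristic p_logical ≈ (p_phys / p_threshold)^((d+1)/2) for code distance d, and the ~2d² physical-qubit cost per logical qubit, are common rough approximations; the threshold value and target logical error rate used here are illustrative assumptions, not measured parameters of any platform.

```python
def physical_qubits_per_logical(p_phys, p_target=1e-12, p_threshold=1e-2):
    """Rough surface-code overhead estimate (illustrative constants only).

    Uses the common scaling heuristic
        p_logical ~ (p_phys / p_threshold) ** ((d + 1) / 2)
    for code distance d, and roughly 2 * d**2 physical qubits per
    logical qubit. Returns None above threshold, where error
    correction makes things worse rather than better.
    """
    if p_phys >= p_threshold:
        return None
    ratio = p_phys / p_threshold
    d = 3  # smallest useful odd code distance
    while ratio ** ((d + 1) / 2) > p_target:
        d += 2  # surface-code distances are odd
    return 2 * d * d

for fidelity in (0.95, 0.999, 0.9999):
    p_phys = 1 - fidelity
    overhead = physical_qubits_per_logical(p_phys)
    label = "above threshold" if overhead is None else f"~{overhead} physical per logical"
    print(f"{fidelity:.2%} two-qubit fidelity: {label}")
```

The exact numbers shift with the assumed threshold and target error rate, but the qualitative conclusion is robust: overhead falls steeply as fidelity rises, and below threshold it is infinite.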
Platform-by-Platform Assessment
Superconducting qubits (IBM, Google, others): The most mature platform in terms of qubit count and public cloud access. IBM's Eagle, Heron, and subsequent processors have reached hundreds to thousands of physical qubits. Gate fidelities have improved significantly and are now competitive. The primary challenges are: operating temperatures (millikelvin), relatively large qubit footprint limiting physical density, and the complexity of microwave control wiring. IBM's roadmap to fault-tolerant computing is the most publicly detailed in the industry.
Trapped ions (IonQ, Quantinuum): Generally the highest fidelity platform in absolute terms, with best-reported two-qubit gate fidelities in the 99.5% to 99.9% range. The trade-off is gate speed: trapped-ion gates are orders of magnitude slower than those of solid-state platforms, and scaling beyond a few tens of qubits requires physically large traps or modular architectures with photonic interconnects. Quantinuum's H-series processors are genuinely impressive from a fidelity perspective.
Silicon spin qubits (Intel, Imec, academic programs): The closest cousin to germanium spin qubits: gate-defined quantum dots in a semiconductor, with CMOS-compatible fabrication. Silicon electron-spin qubits differ from germanium hole-spin qubits in some respects (only weak intrinsic spin-orbit coupling, so qubit control typically requires local microwave antennas or micromagnets), but the manufacturing advantages are similar. Intel's research program and the academic groups in Sydney have demonstrated significant progress. Competition is healthy and welcome.
Photonic qubits (PsiQuantum, Xanadu, others): Photons are intrinsically noise-resistant — they don't interact with the environment the way charged particles do — but making deterministic quantum gates between photons is hard. PsiQuantum has bet heavily on fault-tolerant photonic computing at large scale, using silicon photonics fab infrastructure. The approach is intellectually interesting but requires extremely high-precision single-photon sources and detectors at a scale not yet demonstrated.
Germanium spin qubits (Groove Quantum): Our platform is among the highest fidelity solid-state platforms, with the additional advantage of all-electric qubit control and full CMOS manufacturing compatibility. We are earlier in the qubit count scaling curve than superconducting platforms, but our fundamental device quality metrics are at or near the leading edge across the field.
What Metrics Actually Matter
If you want to evaluate a quantum computing program's progress, here is what to look for:

- Two-qubit gate fidelity, reported across multiple qubit pairs in an array, not just the best pair.
- Coherence time relative to gate time.
- The number of qubits over which high fidelity is demonstrated simultaneously.
- Progress on error correction: have logical qubits below the break-even point been demonstrated?
- And critically: is the fabrication approach compatible with the manufacturing scale required for fault-tolerant systems?
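Two of these metrics, coherence time relative to gate time and two-qubit fidelity, can be turned into rough capability indicators with simple heuristics: a platform fits about T2 / t_gate gates inside one coherence time, and runs about 1 / (1 − F) gates before the first expected error. The function and the platform numbers below are hypothetical order-of-magnitude placeholders for illustration, not measured values for any vendor.

```python
def circuit_budget(two_qubit_fidelity, coherence_time_s, gate_time_s):
    """Two rough, heuristic capability indicators for a qubit platform."""
    error_rate = 1 - two_qubit_fidelity
    return {
        # how many gates fit inside one coherence time
        "gates_per_coherence": coherence_time_s / gate_time_s,
        # expected number of gates before the first error (~ 1 / error rate)
        "error_limited_depth": 1 / error_rate,
    }

# Hypothetical platforms with order-of-magnitude placeholder numbers:
fast_but_noisy = circuit_budget(0.995, coherence_time_s=100e-6, gate_time_s=50e-9)
slow_but_clean = circuit_budget(0.999, coherence_time_s=1.0, gate_time_s=100e-6)

for name, budget in (("fast_but_noisy", fast_but_noisy),
                     ("slow_but_clean", slow_but_clean)):
    print(name, {k: round(v) for k, v in budget.items()})
```

Neither number alone decides a comparison: in this toy example the fast platform completes far more gates per second but fewer error-free gates per run, which is exactly why fidelity, coherence, and gate time have to be evaluated together.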
The Honest Timeline
No one in the quantum computing industry will deliver fault-tolerant, commercially relevant quantum computers in the next three to five years. Anyone claiming otherwise is either confused about what "fault-tolerant" means or is not being honest. The engineering challenges are real, the timelines are long, and the programs that will succeed are the ones building systematically rather than optimizing for press release metrics.
We are at the beginning of a decades-long engineering program that will eventually produce transformative computing technology. The companies that matter in 2035 and 2040 are the ones doing the hardest, most rigorous technical work today. Benchmarks — real ones, honestly reported — are how you tell them apart.