Large-scale quantum processor architecture

The quantum computing industry has a scaling problem. Not in the sense that scaling is impossible — the physics allows it — but in the sense that most current approaches to qubit hardware were designed and optimized for small-scale demonstrations rather than large-scale production. The engineering choices that make it easy to demonstrate ten qubits with high fidelity are often the same choices that make it hard to scale to a million.

At Groove Quantum, scalability has been a design constraint from day one, not an afterthought. To see how we think about the path from our current 10-qubit array to the million-qubit processors required for fault-tolerant quantum advantage, it helps to walk through the specific scaling challenges each layer of the hardware stack presents.

The Physical Layer: Qubit Density

The most fundamental scaling question is: how many qubits can fit on a chip? For germanium spin qubits, this question has an encouraging answer. Our quantum dots are defined by nanoscale gate electrodes patterned in a two-dimensional grid. The footprint of a single qubit — including its local gate electrodes and space for readout sensors — is on the order of 100 nm x 100 nm in current research devices. This is already comparable to the area of a single transistor in an advanced CMOS node.

At a 100 nm qubit pitch, a 1 mm x 1 mm area of chip can in principle accommodate 10^8 qubits. Of course, reaching that density requires solving many engineering problems that current research devices don't address: uniform material quality across a large area, high yield of working qubits in a dense array, and routing of control and readout signals without prohibitive overhead. But the fundamental geometry is on our side. Spin qubits are small.
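The arithmetic behind that figure is straightforward. A minimal sketch, using the pitch and chip area quoted above (real devices would lose area to routing, sensors, and defective regions):

```python
# Back-of-the-envelope qubit count for a dense spin-qubit array.
# Pitch and chip area are the figures quoted in the text; the result
# ignores all real-world overhead and is an upper bound only.
pitch_m = 100e-9       # 100 nm qubit pitch
chip_side_m = 1e-3     # 1 mm x 1 mm active area

qubits_per_side = chip_side_m / pitch_m      # 10^4 qubits per row
total_qubits = qubits_per_side ** 2          # 10^8 qubits total

print(f"~{total_qubits:.0e} qubits per square millimetre")
```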

The Classical Control Layer: Multiplexing

Each qubit in a germanium processor requires multiple gate voltage signals for initialization, control, and readout. In current small-scale devices, these signals come from individual wire connections running through the cryostat from room-temperature electronics. At the scale of millions of qubits, this approach is impossible: you cannot route millions of separate wires from room temperature to a chip at millikelvin temperatures.

The solution is multiplexing — sharing control lines among many qubits through switching circuitry located close to the qubit chip. Cryo-CMOS multiplexers operating at 4 Kelvin can potentially control thousands of qubits with far fewer input/output lines from room temperature. The key requirement is that the cryo-CMOS circuits introduce negligible noise and dissipate little enough power to stay within the cryostat's cooling budget.
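To make the wiring savings concrete, consider crossbar (row-column) addressing of a square array, where each qubit sits at the intersection of one row line and one column line. This particular scheme is an illustrative example, not necessarily the one used in any given cryo-CMOS design:

```python
import math

def crossbar_lines(n_qubits: int) -> int:
    """Shared control lines needed to address a square qubit array
    with one line per row and one per column (crossbar addressing)."""
    side = math.isqrt(n_qubits)
    assert side * side == n_qubits, "expects a square array"
    return 2 * side

for n in (10_000, 1_000_000):
    lines = crossbar_lines(n)
    print(f"{n:>9} qubits: {lines} shared lines "
          f"(vs. {n} dedicated wires, a {n // lines}x reduction)")
```

A million-qubit array drops from a million dedicated wires to a few thousand shared lines, which is the kind of reduction that makes the wiring problem tractable at millikelvin temperatures.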

Several research groups, including collaborators of ours at QuTech, have demonstrated cryo-CMOS multiplexer circuits operating at 4 K with acceptable noise performance. Integrating these circuits with qubit chips — either through proximity (chips side by side in the cryostat) or through 3D die stacking — is a clear engineering roadmap item for the next five years.

The Fabrication Layer: Yield and Uniformity

In semiconductor manufacturing, yield refers to the fraction of chips on a wafer that work correctly. Mature CMOS processes achieve yields above 95% even for complex chips with billions of transistors. Current quantum processor fabrication, done largely in research labs, has much lower yields — not because the fundamental processes are harder, but because the optimization effort applied to CMOS over decades has not yet been applied to qubit fabrication.
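To see why yield is so demanding at scale, here is a quick sketch assuming independent qubit failures and a chip that needs every qubit working. Both assumptions are simplifications, so treat the result as a rough bound rather than a design target:

```python
# Per-qubit yield needed to reach a target whole-array yield, assuming
# independent failures and no tolerance for dead qubits. Real QEC
# architectures can tolerate some defects; this is a rough bound only.
def required_qubit_yield(array_yield: float, n_qubits: int) -> float:
    return array_yield ** (1.0 / n_qubits)

for n in (10, 1_000, 1_000_000):
    p = required_qubit_yield(0.5, n)
    print(f"{n:>9}-qubit array at 50% array yield: "
          f"per-qubit yield >= {p:.6f}")
```

In practice, error-correction architectures can route around a modest fraction of dead qubits, but the exponent still explains why per-device yield must climb far beyond today's research-lab levels as arrays grow.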

Uniformity is a related challenge: for quantum error correction to work, all qubits in an array must have similar properties (qubit frequencies, coupling strengths, readout signals). Any systematic variation across the chip — from material non-uniformity, lithographic variation, or gate stack differences — degrades QEC performance and requires individual qubit calibration that doesn't scale.

Improving yield and uniformity in germanium qubit fabrication is a major focus of our device engineering team. It requires tight process control, statistical characterization across many devices, and feedback-optimized process tuning — the same approach that semiconductor fabs use to improve classical chip yield. The tools and methodologies exist; applying them rigorously to qubit fabrication is a matter of sustained engineering investment.
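A toy version of the statistical characterization step might flag devices that fall outside a uniformity band. The frequency data below is invented purely for illustration:

```python
import statistics

# Hypothetical wafer-scale characterization data: qubit resonance
# frequencies in GHz. These values are invented for illustration.
freqs_ghz = [1.502, 1.498, 1.505, 1.501, 1.493, 1.547, 1.499, 1.504]

mean = statistics.fmean(freqs_ghz)
sigma = statistics.stdev(freqs_ghz)

# Flag devices more than 2 sigma from the wafer mean as candidates
# for process investigation.
outliers = [f for f in freqs_ghz if abs(f - mean) > 2 * sigma]

print(f"mean = {mean:.4f} GHz, sigma = {sigma * 1e3:.1f} MHz")
print("flagged devices:", outliers)
```

The real pipeline would run this kind of analysis over thousands of devices and feed the flagged populations back into process tuning, which is exactly the loop classical fabs use.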

The Architecture Layer: Modular Quantum Computing

Even with perfect qubits, there are physical limits to how many qubits can be placed on a single chip and connected with high-fidelity two-qubit gates. At some point, the path to more qubits requires connecting multiple chips, whether through photonic links, microwave connections, or electrical interconnects.

Modular quantum computing — connecting multiple smaller quantum processors into a larger coherent system — is an active research area and likely the architecture of any practical large-scale quantum computer. The requirements on the inter-chip links are demanding: they must preserve quantum coherence across the connection and have fidelity comparable to on-chip gates, or the overhead of inter-chip error correction negates the benefit of connectivity.
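The sensitivity to link fidelity is easy to quantify: errors compound multiplicatively, so a circuit that crosses chip boundaries many times pays an exponential price. The fidelities below are invented to show the scaling, not measured values:

```python
# How fidelity compounds across repeated inter-chip operations,
# modelling each crossing as an independent success/failure event.
def circuit_fidelity(op_fidelity: float, n_ops: int) -> float:
    """Success probability after n_ops operations, each succeeding
    independently with probability op_fidelity."""
    return op_fidelity ** n_ops

for link_f in (0.999, 0.99, 0.95):
    f100 = circuit_fidelity(link_f, 100)
    print(f"link fidelity {link_f}: after 100 crossings -> {f100:.3f}")
```

At 99% link fidelity, a hundred inter-chip operations already reduce the success probability to roughly a third, which is why links must approach on-chip gate fidelities before modularity pays off.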

For germanium spin qubits, the natural inter-chip connection is through direct electrical bonding — physically connecting gate electrodes on adjacent chips to allow charge tunneling between quantum dots on different chips. This is a shorter-range, higher-fidelity option than photonic links, though it requires tight control of the bonding process. Alternative approaches using flying qubits encoded in photons or surface acoustic waves are also being explored.

Our Scaling Roadmap

We do not believe anyone will build a million-qubit quantum processor in the next five years. But we do believe that a systematic, engineering-disciplined approach to each layer of the scaling challenge — material quality, device uniformity, cryo-CMOS integration, modular architecture — can get us from today's 10-qubit demonstrations to hundreds of high-quality qubits by 2028, thousands by 2031, and the foundation for fault-tolerant processors by the mid-2030s. That is an aggressive but achievable roadmap, grounded in the genuine progress we see at every level of the stack.
