
Quantum Computing Fundamentals for Cryptographers


Why Cryptographers Need to Understand Quantum Computing

You do not need a physics degree to prepare for the post-quantum transition. What you need is a precise understanding of what quantum computers can and cannot do, when they will be able to break current cryptographic primitives, and how much uncertainty surrounds those timelines. This page provides exactly that — quantum computing concepts explained through the lens of cryptographic relevance, not academic physics.

The core message is simple: quantum computing threatens the mathematical hardness assumptions that underpin RSA, Diffie-Hellman, and elliptic curve cryptography. But the path from today’s noisy, small-scale quantum hardware to a machine that can actually factor a 2048-bit RSA key is extraordinarily difficult. Understanding why it is difficult — and how much progress has been made — is essential for making sound migration decisions.

Before proceeding, ensure you are familiar with the classical cryptographic primitives at risk, covered in Post-Quantum Cryptography Overview.


Qubits: The Fundamental Unit

Classical Bits vs. Qubits

A classical bit is either 0 or 1. A qubit (quantum bit) is a two-level quantum system that can exist in a superposition of both states simultaneously. Mathematically, a qubit’s state is represented as:

|ψ⟩ = α|0⟩ + β|1⟩

Where α and β are complex numbers called probability amplitudes, and |α|² + |β|² = 1. When you measure a qubit, you get 0 with probability |α|² and 1 with probability |β|², and the superposition collapses.
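This measurement rule is easy to check with a toy simulation. The sketch below (plain Python with invented helper names, not a real quantum SDK) samples outcome 0 with probability |α|²:

```python
import random

def measure(alpha: complex, beta: complex, trials: int = 100_000) -> float:
    """Simulate repeated measurement of alpha|0> + beta|1>.

    Returns the observed frequency of outcome 0, which should
    approach |alpha|^2 as trials grows."""
    p0 = abs(alpha) ** 2
    assert abs(p0 + abs(beta) ** 2 - 1) < 1e-9, "amplitudes must be normalized"
    zeros = sum(1 for _ in range(trials) if random.random() < p0)
    return zeros / trials

# Equal superposition: alpha = beta = 1/sqrt(2), so P(0) = P(1) = 0.5.
freq = measure(2 ** -0.5, 2 ** -0.5)
```

Running this repeatedly gives frequencies clustered around 0.5, never a direct view of the amplitudes themselves, which is exactly the collapse behavior described above.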

What this means for cryptography: Superposition is not “trying both values at once” in any simple parallel-computing sense. A common misconception is that n qubits let you try 2ⁿ solutions simultaneously. The reality is more subtle — quantum algorithms must be carefully structured so that correct answers are amplified (their probability increases) while incorrect answers interfere destructively (their probability decreases). This is why only specific problems — those with exploitable mathematical structure — benefit from quantum speedup.

Superposition in Practice

Consider a single qubit initialized to |0⟩. Applying a Hadamard gate puts it into equal superposition:

H|0⟩ = (|0⟩ + |1⟩) / √2

If you apply Hadamard gates to n qubits, you create a uniform superposition of all 2ⁿ possible bit strings. For a 20-qubit register, that is approximately one million states encoded simultaneously. For a 300-qubit register, that is more states than there are atoms in the observable universe.

The catch: you cannot simply “read out” all those states. Measurement gives you one outcome, probabilistically. The entire art of quantum algorithm design is structuring the computation so that the measurement overwhelmingly returns the answer you want.
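The exponential growth of the state vector can be sketched in a few lines, using nothing beyond the amplitude rules above (the function name is invented for illustration):

```python
# First column of the Hadamard matrix: H|0> = (|0> + |1>)/sqrt(2)
H_COL0 = [2 ** -0.5, 2 ** -0.5]

def uniform_superposition(n: int) -> list[float]:
    """Amplitude vector after applying H to each of n qubits starting in |0...0>."""
    amps = [1.0]
    for _ in range(n):
        # Tensor product: each existing amplitude splits equally across |0> and |1>.
        amps = [a * c for a in amps for c in H_COL0]
    return amps

amps = uniform_superposition(3)
# All 2^3 = 8 basis states carry amplitude 1/sqrt(8); probabilities sum to 1.
```

Note that storing the amplitudes classically takes 2ⁿ numbers, which is why classical simulation of general quantum circuits breaks down beyond roughly 50 qubits.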

Entanglement

Two qubits are entangled when their quantum states are correlated in a way that cannot be described independently. The canonical example is a Bell state:

|Φ⁺⟩ = (|00⟩ + |11⟩) / √2

Measuring the first qubit immediately determines the second qubit’s state, regardless of physical distance. This is not useful for faster-than-light communication (you still need a classical channel to compare results), but it is essential for:

  • Quantum error correction — encoding logical qubits across many entangled physical qubits
  • Quantum algorithms — creating correlations between input and output registers (critical for Shor’s algorithm)
  • Quantum key distribution (QKD) — detecting eavesdropping through Bell inequality violations

Cryptographic relevance: Entanglement is what allows Shor’s algorithm to correlate the output of modular exponentiation with the period of a function, enabling efficient factoring. Without entanglement, quantum computers would be no more powerful than probabilistic classical computers.
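The Bell-state correlation can be verified with a toy state-vector sampler (illustrative Python, not a quantum library):

```python
import random

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a 4-entry amplitude vector
# over the basis states 00, 01, 10, 11.
bell = [2 ** -0.5, 0.0, 0.0, 2 ** -0.5]

def sample(state: list[float]) -> tuple[int, int]:
    """Measure both qubits: pick a basis state with probability |amplitude|^2."""
    r, acc = random.random(), 0.0
    for idx, amp in enumerate(state):
        acc += abs(amp) ** 2
        if r < acc:
            return idx >> 1, idx & 1   # (first qubit, second qubit)
    return 1, 1                        # numerical edge case: last basis state

# Perfect correlation: the two bits always agree; 01 and 10 never occur.
outcomes = [sample(bell) for _ in range(1000)]
```

Each individual measurement is still random (roughly half 00, half 11), which is why entanglement alone carries no usable message.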

Decoherence: The Enemy of Quantum Computation

Decoherence is the process by which a qubit loses its quantum properties — superposition and entanglement — through unwanted interaction with its environment. For cryptographic analysis, decoherence is the primary reason why building a large-scale quantum computer is so difficult.

There are two key timescales:

  • T₁ (relaxation time) — How long before a qubit in the |1⟩ state spontaneously decays to |0⟩. Think of it as the “battery life” of the qubit’s energy state.
  • T₂ (dephasing time) — How long before the phase relationship between |0⟩ and |1⟩ is scrambled. This is typically shorter than T₁ and is the more relevant limit for computation.

| Platform | Typical T₁ | Typical T₂ | Gate Time | Operations Before Decoherence |
| --- | --- | --- | --- | --- |
| Superconducting (transmon) | 100–300 µs | 50–200 µs | 20–100 ns | ~1,000–10,000 |
| Trapped ion | 1–60 s | 0.5–10 s | 1–100 µs | ~10,000–100,000 |
| Neutral atom | 1–10 s | 0.1–1 s | 1–10 µs | ~10,000–100,000 |
| Photonic | N/A (photon lost on measurement) | N/A | ~ns | Limited by photon loss rates |
| Nitrogen-vacancy (NV) center | 1–10 ms | 1–2 ms | ~10–100 ns | ~10,000–100,000 |

Why this matters for Shor’s algorithm: Factoring RSA-2048 requires circuit depths on the order of 10¹⁰–10¹² gates. Even with the fastest gate times (~20 ns for superconducting), that translates to minutes or hours of continuous coherent computation. No physical qubit can maintain coherence for that long — which is precisely why quantum error correction is mandatory.
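The arithmetic behind this claim is simple enough to check directly. The figures below come from the table above and the source's low-end Shor depth estimate of 10¹⁰ gates:

```python
# Rough coherence budget: how many sequential gates fit inside T2?
def ops_before_decoherence(t2_seconds: float, gate_seconds: float) -> float:
    return t2_seconds / gate_seconds

# Transmon: ~100 us T2, ~20 ns two-qubit gates -> ~5,000 sequential operations.
transmon = ops_before_decoherence(100e-6, 20e-9)

shor_depth = 1e10                       # low-end circuit depth for RSA-2048
shortfall = shor_depth / transmon       # factor by which raw hardware falls short
```

The shortfall is around six orders of magnitude, which is the quantitative version of "error correction is mandatory".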

The Bloch Sphere: Visualizing Qubit States

For security professionals who prefer geometric intuition: every single-qubit state can be represented as a point on the Bloch sphere — a unit sphere where:

  • The north pole represents |0⟩
  • The south pole represents |1⟩
  • The equator represents equal superpositions with different phase relationships
  • Quantum gates correspond to rotations of this sphere
  • Decoherence can be visualized as the point “drifting” from its intended position — T₁ pulls toward the north pole (energy decay), T₂ smears the point around the z-axis (phase randomization)

Understanding the Bloch sphere helps demystify quantum gates: the Hadamard gate is a 180° rotation about the axis halfway between X and Z. The T gate is a 45° rotation about the Z axis. Every single-qubit operation is just a rotation.
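For readers who want to verify the geometry, the standard mapping from amplitudes to Bloch coordinates is x = 2·Re(conj(α)·β), y = 2·Im(conj(α)·β), z = |α|² − |β|². A short sketch in Python (the function name is illustrative):

```python
def bloch(alpha: complex, beta: complex) -> tuple[float, float, float]:
    """Map the single-qubit state alpha|0> + beta|1> to its (x, y, z) Bloch vector."""
    inner = alpha.conjugate() * beta
    x = 2 * inner.real
    y = 2 * inner.imag
    z = abs(alpha) ** 2 - abs(beta) ** 2
    return (x, y, z)

north = bloch(1 + 0j, 0 + 0j)                   # |0>: north pole (0, 0, 1)
plus = bloch(2 ** -0.5 + 0j, 2 ** -0.5 + 0j)    # H|0>: equator point (1, 0, 0)
```

This confirms the picture above: |0⟩ sits at the north pole, and the Hadamard gate rotates it down to the equator.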


Quantum Gates and Circuits

Quantum computation is performed by applying quantum gates — unitary transformations — to qubits. Unlike classical logic gates, quantum gates are reversible (every gate has an inverse) and operate on probability amplitudes rather than definite bit values.

Essential Gates for Cryptographic Context

| Gate | Symbol | Matrix | Qubits | Purpose |
| --- | --- | --- | --- | --- |
| Hadamard | H | (1/√2)[[1,1],[1,-1]] | 1 | Creates superposition; foundation of nearly every quantum algorithm |
| Pauli-X | X | [[0,1],[1,0]] | 1 | Quantum NOT gate; bit flip |
| Pauli-Z | Z | [[1,0],[0,-1]] | 1 | Phase flip; leaves \|0⟩ unchanged, maps \|1⟩ to -\|1⟩ |
| Phase (S) | S | [[1,0],[0,i]] | 1 | Quarter-turn phase rotation |
| T gate | T | [[1,0],[0,e^(iπ/4)]] | 1 | π/8 gate; critical for universal computation and fault-tolerant circuits |
| CNOT | CX | 4×4 controlled-NOT | 2 | Entangles two qubits; flips target if control is \|1⟩ |
| Toffoli | CCX | 8×8 controlled-controlled-NOT | 3 | Universal for classical computation; used in quantum arithmetic circuits |
| SWAP | SWAP | 4×4 swap matrix | 2 | Exchanges two qubit states; important for circuit routing on physical hardware |

The Universal Gate Set

Any quantum computation can be decomposed into a sequence of gates from a universal gate set. The standard fault-tolerant universal set is {H, T, CNOT} — the Clifford+T gate set. This matters for cryptographic analysis because:

  1. Resource estimation for breaking cryptographic schemes is typically expressed in T-gate count, since T gates are the most expensive operation in fault-tolerant quantum computing.
  2. The number of T gates required to run Shor’s algorithm against RSA-2048 determines the physical resources needed.
  3. Current estimates for factoring RSA-2048 require on the order of billions of T gates.

Quantum Circuit Concept

The following diagram illustrates the structure of a quantum circuit relevant to Shor’s algorithm — showing how qubits move through Hadamard gates, controlled operations, and measurement.

graph LR
    subgraph INPUT["Input Register (n qubits)"]
        Q0["|0⟩"] --> H0["H"]
        Q1["|0⟩"] --> H1["H"]
        Q2["|0⟩"] --> H2["H"]
        Qn["|0⟩"] --> Hn["H"]
    end

    subgraph ORACLE["Modular Exponentiation Oracle"]
        H0 --> ME["Controlled-U<br/>a^(2^k) mod N"]
        H1 --> ME
        H2 --> ME
        Hn --> ME
    end

    subgraph OUTPUT["Output Processing"]
        ME --> QFT["Quantum Fourier<br/>Transform (QFT⁻¹)"]
        QFT --> M0["Measure"]
        QFT --> M1["Measure"]
        QFT --> M2["Measure"]
        QFT --> Mn["Measure"]
    end

    subgraph WORK["Work Register (n qubits)"]
        W0["|1⟩"] --> ME
    end

    M0 --> CF["Classical<br/>Post-Processing<br/>(Continued Fractions)"]
    M1 --> CF
    M2 --> CF
    Mn --> CF
    CF --> PERIOD["Period r of<br/>a^r mod N = 1"]
    PERIOD --> FACTOR["Factors of N<br/>via GCD"]

Key insight: The modular exponentiation oracle is by far the most expensive component. For RSA-2048, this single operation dominates the entire resource budget — requiring billions of quantum gates operating on thousands of logical qubits.


Quantum Computational Complexity: BQP

What Is BQP?

BQP (Bounded-Error Quantum Polynomial Time) is the class of decision problems solvable by a quantum computer in polynomial time with error probability at most 1/3. It is the quantum analog of the classical complexity class BPP (Bounded-Error Probabilistic Polynomial Time).

graph TB
    subgraph COMPLEXITY["Computational Complexity Classes"]
        P["P<br/>Deterministic<br/>Polynomial Time"]
        BPP["BPP<br/>Probabilistic<br/>Polynomial Time"]
        BQP["BQP<br/>Quantum<br/>Polynomial Time"]
        NP["NP<br/>Nondeterministic<br/>Polynomial Time"]
        PSPACE["PSPACE"]

        P --> BPP
        BPP --> BQP
        BQP --> PSPACE
        P --> NP
        NP --> PSPACE
    end

    style BQP fill:#e74c3c,stroke:#c0392b,color:#fff
    style NP fill:#3498db,stroke:#2980b9,color:#fff
    style P fill:#2ecc71,stroke:#27ae60,color:#fff

What BQP Means for Cryptography

| Problem | Classical Complexity | Quantum Complexity | Cryptographic Impact |
| --- | --- | --- | --- |
| Integer factoring | Sub-exponential (best known: GNFS) | Polynomial (Shor’s) | RSA broken |
| Discrete logarithm | Sub-exponential | Polynomial (Shor’s) | DH, DSA, ECDH, ECDSA broken |
| Elliptic curve discrete log | Exponential (best known: Pollard’s rho) | Polynomial (Shor’s variant) | All ECC broken |
| AES key search | O(2ⁿ) | O(2^(n/2)) (Grover’s) | AES-128 reduced to 64-bit security; AES-256 remains secure |
| SHA-256 preimage | O(2²⁵⁶) | O(2¹²⁸) (Grover’s) | Still computationally infeasible |
| Lattice problems (LWE/SIS) | Exponential (best known) | Exponential (best known) | Basis for PQC — believed quantum-resistant |
| Hash-based signatures | Security from hash preimage resistance | Grover provides quadratic speedup only | SPHINCS+ secure with larger parameters |
| Code-based problems | Exponential (best known) | Exponential (best known) | McEliece / BIKE / HQC believed quantum-resistant |

The critical takeaway: BQP captures the problems quantum computers solve efficiently. Integer factoring and discrete logarithms fall within BQP (thanks to Shor’s algorithm), but most NP-complete problems — and the lattice/code-based problems that underpin post-quantum cryptography — are not known to be in BQP. This asymmetry is the entire foundation of the post-quantum migration.

What Quantum Computers Cannot Do

It is equally important to understand the limits:

  • NP-complete problems are not known to be in BQP. Quantum computers are not expected to solve SAT, traveling salesman, or graph coloring in polynomial time.
  • Grover’s algorithm provides only a quadratic speedup for unstructured search — significant but not revolutionary. Doubling key lengths neutralizes it.
  • Quantum supremacy/advantage demonstrations (Google’s Sycamore, 2019) solved problems with no practical application. They proved quantum hardware works but said nothing about cryptographic relevance.
  • No known quantum algorithm threatens the hardness of lattice problems, multivariate polynomial systems, or hash preimage resistance beyond the quadratic Grover speedup.

Current Quantum Hardware Landscape

The quantum computing industry is advancing rapidly, but the gap between current capabilities and what is needed to threaten cryptography remains vast. The following table summarizes the state of leading hardware efforts as of early 2026.

Leading Quantum Hardware Providers

| Provider | Technology | Qubits (Physical) | Two-Qubit Gate Error Rate | Notable Achievement |
| --- | --- | --- | --- | --- |
| IBM | Superconducting (transmon) | 1,386 (Flamingo, 2025) | ~0.1–0.5% | Largest superconducting processor; Qiskit ecosystem; 100,000-qubit roadmap by 2033 |
| Google | Superconducting (transmon) | 105 (Willow, 2024) | ~0.1–0.3% | Below-threshold error correction with surface codes; “quantum supremacy” (Sycamore, 2019) |
| Quantinuum | Trapped ion (Ba⁺) | 56 (H2, 2024) | ~0.1% (best: 0.03%) | Highest-fidelity two-qubit gates of any platform; all-to-all connectivity |
| IonQ | Trapped ion (Yb⁺/Ba⁺) | 36 (Forte Enterprise, 2025) | ~0.3–0.5% | Algorithmic qubits metric; commercial cloud access via AWS/Azure/GCP |
| PsiQuantum | Photonic (fusion-based) | Pre-commercial | N/A (photonic loss rates) | Targeting 1M+ physical qubits at scale; GlobalFoundries fab partnership |
| QuEra | Neutral atom (Rb) | 256 (2024) | ~0.5% | Dynamically reconfigurable atom arrays; Harvard/MIT collaboration; logical qubit demonstrations |
| Atom Computing | Neutral atom (Yb) | 1,225 (2024) | ~1–2% | Largest neutral atom array; acquired by Infleqtion pathway |

Quantum Computing Technology Comparison

| Property | Superconducting | Trapped Ion | Photonic | Neutral Atom | Topological |
| --- | --- | --- | --- | --- | --- |
| Operating Temp | ~15 mK (dilution fridge) | Room temp (vacuum chamber) | Room temp | ~µK (optical trap) | ~15 mK |
| Gate Speed | 10–100 ns | 1–100 µs | ~ns (measurement) | 1–10 µs | Theoretical |
| Connectivity | Nearest-neighbor (2D grid) | All-to-all | Photonic graph states | Reconfigurable | Non-abelian anyons |
| Coherence Time | 100–300 µs | 1–60 seconds | N/A (photons lost on measurement) | 1–10 seconds | Topologically protected (theory) |
| Scalability Path | Modular multi-chip | Shuttle ions between zones | Multiplexed photon sources | Larger atom arrays | Majorana zero modes |
| Key Advantage | Fast gates; mature fab | Low error rates; full connectivity | Room temp; fiber integration | Scalable arrays; mid-circuit measurement | Inherent error protection |
| Key Challenge | Short coherence; crosstalk | Slow gates; scaling beyond ~100 | Photon loss; non-deterministic gates | Higher error rates; atom loss | No physical demonstration yet |
| Leading Players | IBM, Google, Rigetti | Quantinuum, IonQ | PsiQuantum, Xanadu | QuEra, Atom Computing, Pasqal | Microsoft (Station Q) |
| TRL (Technology Readiness) | 5–6 | 5–6 | 3–4 | 4–5 | 2–3 |

Quantum Volume and Benchmarking

Raw qubit count is a poor measure of quantum computer capability. The industry has developed several benchmarks to provide more meaningful comparisons:

| Metric | What It Measures | Who Uses It | Limitations |
| --- | --- | --- | --- |
| Quantum Volume (QV) | Max square circuit (width = depth) executable with >2/3 success | IBM (originated), broadly adopted | Saturates at ~2¹⁵ for current hardware; favors all-to-all connectivity |
| CLOPS (Circuit Layer Operations Per Second) | Throughput — how many circuits per second | IBM | Does not measure quality, only speed |
| Algorithmic Qubits | Estimated qubits available after accounting for errors | IonQ | Vendor-specific; not independently validated |
| Logical Error Rate per Round | Error suppression in QEC cycles | Google, Quantinuum | Specific to QEC experiments; not general purpose |
| Application Benchmarks | Performance on specific algorithms (VQE, QAOA, etc.) | QED-C, academic groups | Problem-specific; hard to generalize |

Quantum Volume is the most widely cited benchmark. A QV of 2ⁿ means the device can reliably execute a random circuit on n qubits with depth n. As of 2026:

  • Quantinuum H2: QV = 2²⁰ (1,048,576) — highest reported
  • IonQ Forte: #AQ = 29 (Algorithmic Qubits, IonQ's own metric; not directly comparable to QV)
  • IBM Eagle: QV = 2⁷ (128) — limited by nearest-neighbor connectivity
  • Google Willow: Does not report QV; focuses on QEC metrics

The disconnect between IBM’s qubit count lead (1,386 qubits) and its modest quantum volume illustrates why qubit count alone is misleading. Connectivity, gate fidelity, and coherence time all interact to determine useful computational capacity.

What the Numbers Actually Mean

When a vendor announces “1,000 qubits,” context is everything:

  • Physical qubits ≠ useful computation. Without error correction, each physical qubit accumulates errors after a few hundred operations at best.
  • Gate fidelity matters more than qubit count. A 50-qubit machine with 99.9% two-qubit gate fidelity is far more useful than a 1,000-qubit machine with 99% fidelity.
  • Connectivity determines circuit depth. Nearest-neighbor architectures (superconducting) require SWAP operations to connect distant qubits, inflating circuit depth and error accumulation. All-to-all connectivity (trapped ions) avoids this overhead.
  • Circuit depth is limited by coherence time. If your qubits decohere after 100 µs and each gate takes 100 ns, your maximum circuit depth is roughly 1,000 operations — far too shallow for Shor’s algorithm on cryptographic key sizes.

Quantum Error Correction: The Fundamental Bottleneck

Quantum error correction (QEC) is the single most important factor determining when quantum computers will threaten cryptography. Without it, quantum computers are limited to shallow circuits on small problem sizes — nowhere near what is needed for Shor’s algorithm on RSA-2048.

Why Quantum Errors Are Different

Classical computers use discrete 0/1 values that can be trivially copied and checked. Quantum systems face fundamentally harder challenges:

  1. No-cloning theorem — You cannot copy an unknown quantum state. This rules out the simplest classical error correction strategy (redundant copies).
  2. Continuous errors — A qubit can drift by any amount on the Bloch sphere, not just flip between 0 and 1. Errors are continuous, not discrete.
  3. Measurement destroys information — You cannot inspect a qubit mid-computation without collapsing its superposition. Error detection must be indirect.
  4. Multiple error types — Qubits experience bit-flip errors (X), phase-flip errors (Z), and combinations (Y = iXZ). A code must correct all three.

Surface Codes: The Leading Approach

The surface code is the most studied and most practical QEC code for near-term hardware. It works by encoding a single logical qubit across a 2D grid of many physical qubits, with dedicated ancilla qubits performing continuous syndrome measurements to detect and correct errors without disturbing the encoded information.

| Property | Surface Code Details |
| --- | --- |
| Layout | 2D grid of data and ancilla qubits |
| Error threshold | ~1% per physical operation (below this, adding more qubits helps) |
| Code distance d | Corrects up to ⌊(d-1)/2⌋ errors; higher d = better protection |
| Physical-to-logical ratio | ~2d² physical qubits per logical qubit |
| Logical error rate | Suppressed exponentially: ~(p/p_th)^(d/2), where p is the physical error rate |
| Gate implementation | Clifford gates via lattice surgery; T gates via magic state distillation |
| T gate overhead | Each logical T gate requires ~15–20× additional physical qubits for distillation |
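The rules of thumb in this table can be combined into a quick estimator. This is a sketch using the table's own formulas, not a substitute for a full resource analysis:

```python
def surface_code_estimate(p_phys: float, d: int, p_th: float = 0.01):
    """Back-of-envelope surface code figures for one logical qubit.

    Uses the rule-of-thumb formulas quoted above: ~2d^2 physical qubits,
    up to floor((d-1)/2) correctable errors, and logical error rate
    suppressed as ~(p/p_th)^(d/2)."""
    physical_qubits = 2 * d * d
    correctable_errors = (d - 1) // 2
    logical_error_rate = (p_phys / p_th) ** (d / 2)
    return physical_qubits, correctable_errors, logical_error_rate

# Physical error rate 0.1% (current leading hardware), distance d = 27.
qubits, errs, p_log = surface_code_estimate(p_phys=1e-3, d=27)
```

With these inputs the logical error rate lands around 10⁻¹³ to 10⁻¹⁴, close to the ~10⁻¹⁵ target quoted below for RSA-2048, which is why distances of 27–31 appear in the resource estimates.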

Logical vs. Physical Qubits: The Overhead Problem

This is the single most misunderstood aspect of quantum computing in the cybersecurity community. Here is the math:

To factor RSA-2048 using Shor’s algorithm:

| Resource | Estimate |
| --- | --- |
| Logical qubits needed | ~4,000–6,000 |
| Code distance required | d = 27–31 (for target logical error rate ~10⁻¹⁵) |
| Physical qubits per logical qubit | ~1,500–2,000 (at d=27, plus ancillas) |
| T gate distillation overhead | ~15–20× additional physical qubits |
| Total physical qubits | ~6–20 million |
| Logical gate operations | ~10⁹–10¹² |
| Estimated runtime | Hours to days (depending on assumptions) |

These estimates come from peer-reviewed work by Gidney & Ekerå (2021), which optimized Shor’s algorithm for RSA-2048 and estimated approximately 20 million physical qubits with a runtime of about 8 hours. More optimistic analyses using improved compilation techniques reduce this to ~4–6 million physical qubits, but even these lower bounds exceed current hardware by three to four orders of magnitude.

Why “1,000 Qubits” Does Not Break RSA

Let’s put the numbers in perspective with a concrete comparison:

| Milestone | Physical Qubits | What It Can Do (Cryptographically) |
| --- | --- | --- |
| Current best (2026) | ~1,000–1,400 | Nothing cryptographically relevant; useful for chemistry simulations and optimization demos |
| Near-term (2027–2029) | ~10,000–100,000 | Early fault-tolerant demonstrations; factor very small numbers (~20–50 bits) with error correction |
| Medium-term (2030–2035) | ~100,000–1,000,000 | Factor numbers up to a few hundred bits; still insufficient for RSA-2048 |
| CRQC threshold | ~4,000,000–20,000,000 | Factor RSA-2048, compute elliptic curve discrete logs — all classical public-key cryptography broken |

The gap is not just qubit count. Even if you had millions of physical qubits today, you would need:

  • Error rates below the surface code threshold (~1%) consistently across the entire chip
  • Fast classical decoding to process syndrome measurements in real-time
  • Long coherence times relative to the total circuit depth (hours of computation)
  • Low-latency control electronics for millions of qubits simultaneously
  • Cryogenic infrastructure (for superconducting approaches) at unprecedented scale

Each of these is an active area of engineering research with no guaranteed timeline.

The Scaling Analogy: Where Are We Really?

To put the challenge in perspective, consider this analogy to classical computing history:

| Era | Classical Computing | Quantum Computing (Approximate Equivalent) |
| --- | --- | --- |
| Transistor invention (1947) | Proof that solid-state switching works | First qubit demonstrations (~1995–2000) |
| First IC (1958) | Integration of multiple transistors | Multi-qubit processors (~2010–2015) |
| Intel 4004 (1971) — 2,300 transistors | First commercial microprocessor | Current era (~1,000–1,400 qubits, 2024–2026) |
| Intel 80386 (1985) — 275,000 transistors | Usable personal computing | Projected ~2030–2035 |
| Modern CPU — billions of transistors | Cryptographically relevant classical computation | CRQC — projected ~2035–2050 |

This analogy is imperfect — quantum computing faces fundamentally different engineering challenges (error rates, cryogenics, decoherence) that classical scaling did not. But it illustrates that we are in the very early stages of a long engineering trajectory. Expecting a CRQC from today’s hardware is roughly equivalent to expecting a modern CPU from a 1971 Intel 4004.


The Path to a Cryptographically Relevant Quantum Computer (CRQC)

What Is a CRQC?

A Cryptographically Relevant Quantum Computer is defined as a quantum computer capable of breaking widely deployed public-key cryptographic algorithms — specifically, factoring 2048-bit RSA moduli or computing discrete logarithms on standardized elliptic curves (e.g., P-256) within a practical timeframe (hours to days).

A CRQC requires:

  1. Sufficient logical qubits (~4,000–6,000 for RSA-2048)
  2. Low enough logical error rates (~10⁻¹⁵ per gate) to complete trillions of operations
  3. A fault-tolerant architecture with real-time error correction
  4. Practical runtime (not years or decades per computation)

CRQC Timeline Estimates

Estimates for when a CRQC will exist vary dramatically depending on the source, assumptions about hardware progress, and whether breakthrough algorithms or architectures are anticipated.

| Source | Estimated CRQC Timeline | Key Assumptions |
| --- | --- | --- |
| NIST (2024) | “Not before 2035” | Conservative; assumes incremental progress |
| NSA (CNSA 2.0, 2022) | Migrate by 2035 | Implies concern about CRQC by mid-2030s |
| Global Risk Institute (2024) | ~15–20% chance by 2034, ~50% by 2039 | Annual expert survey; wide uncertainty |
| RAND Corporation (2023) | 2035–2050 range | Emphasizes engineering challenges |
| Michele Mosca (2023) | “Within 15–20 years” (from 2023) | Leading quantum risk researcher; 2038–2043 |
| IBM Roadmap (2025) | 100,000 qubits by 2033 | Self-reported; still far below CRQC threshold |
| Google Quantum AI (2024) | “Useful error-corrected QC by 2029” | Focused on scientific applications, not CRQC specifically |
| Chinese Academy of Sciences | Active programs; no public timeline | Significant government investment; opaque progress |
| Optimistic outliers | 2030–2032 | Assume algorithmic breakthroughs or novel architectures |
| Pessimistic outliers | 2050+ or never | Cite fundamental engineering barriers |

CRQC Timeline Visualization

timeline
    title Path to Cryptographically Relevant Quantum Computers
    section Current Era (2024-2027)
        Noisy Intermediate Scale (NISQ) : 1,000-5,000 physical qubits
        : Error rates ~0.1-1%
        : No cryptographic threat
        : Early logical qubit demonstrations
    section Near-Term (2028-2032)
        Early Fault-Tolerant : 10,000-100,000 physical qubits
        : Below-threshold error rates achieved
        : Small-scale error-corrected computations
        : Factor ~50-100 bit numbers with QEC
    section Medium-Term (2033-2038)
        Scaling Era : 100,000-1,000,000 physical qubits
        : Modular quantum computer architectures
        : Factor ~500-1000 bit numbers
        : HARVEST NOW DECRYPT LATER becomes acute concern
    section CRQC Era (2035-2045+)
        Cryptographically Relevant : 4,000,000-20,000,000 physical qubits
        : RSA-2048 and ECC P-256 broken
        : All classical PKC compromised
        : Organizations without PQC are exposed

The “Harvest Now, Decrypt Later” Problem

Even though a CRQC may be a decade or more away, the threat is already real for data with long confidentiality requirements. Adversaries — particularly nation-state intelligence agencies — are widely believed to be collecting encrypted traffic today with the intention of decrypting it once a CRQC becomes available.

The critical question for any organization:

Is the secrecy lifetime of my data longer than the time until a CRQC exists?

If you are protecting state secrets, medical records, financial data, or intellectual property that must remain confidential for 10–25 years, the answer may well be yes — meaning you need to migrate to post-quantum cryptography now, not when a CRQC is announced. This concept is explored further in Migration Planning and Strategies.

Mosca’s Theorem formalizes this risk calculation. If:

  • x = the time needed to migrate to quantum-safe cryptography
  • y = the secrecy lifetime of your data
  • z = the time until a CRQC exists

Then if x + y > z, your data is at risk and you must begin migration now. For most large organizations, x is 5–10 years (cryptographic inventory, protocol updates, testing, deployment), y varies by data type, and z is unknown but possibly 10–20 years. The math often demands immediate action. See Cryptographic Risk Assessment for guidance on applying Mosca’s inequality to your organization.
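Mosca's inequality is trivial to encode, which makes it useful as a first-pass screening tool. A minimal sketch with illustrative values:

```python
def mosca_at_risk(migration_years: float, secrecy_years: float,
                  crqc_years: float) -> bool:
    """Mosca's inequality: data is at risk if x + y > z, where
    x = migration time, y = secrecy lifetime, z = time until a CRQC."""
    return migration_years + secrecy_years > crqc_years

# Example: 7-year migration, 15-year secrecy requirement,
# CRQC assumed ~12 years out. 7 + 15 = 22 > 12, so: migrate now.
risk = mosca_at_risk(7, 15, 12)
```

The real work, of course, is in estimating x, y, and z honestly for your organization; the inequality only makes the conclusion unavoidable once you have.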


Quantum Algorithms That Threaten Cryptography

Shor’s Algorithm (1994)

Peter Shor’s algorithm solves the integer factoring problem and the discrete logarithm problem in polynomial time on a quantum computer. This is the existential threat to RSA, Diffie-Hellman, DSA, ECDH, and ECDSA.

How it works at a high level:

  1. Reduce factoring N to finding the period r of the function f(x) = aˣ mod N for a random a
  2. Use quantum phase estimation (Hadamard + controlled modular exponentiation + inverse QFT) to produce a measurement result that encodes r
  3. Classical post-processing via continued fractions extracts the period r from the measurement result
  4. Once r is known, compute gcd(a^(r/2) ± 1, N) to extract factors with high probability
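
The reduction to period finding, and the final gcd step, are purely classical and can be demonstrated on a toy modulus. In this sketch the period is found by brute force; that single step is where the quantum speedup lives:

```python
from math import gcd

def classical_period(a: int, n: int) -> int:
    """Find the period r of f(x) = a^x mod n by brute force.
    This is the step a quantum computer performs exponentially faster."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_demo(n: int, a: int) -> tuple[int, int]:
    """Shor's classical reduction: period -> factors via gcd."""
    r = classical_period(a, n)
    assert r % 2 == 0, "odd period: pick a different base a"
    half = pow(a, r // 2, n)
    return gcd(half - 1, n), gcd(half + 1, n)

# Factor N = 15 with a = 7: period r = 4, gcd(7^2 - 1, 15) = 3, gcd(7^2 + 1, 15) = 5.
factors = shor_classical_demo(15, 7)
```

For a 2048-bit N the brute-force loop would run for longer than the age of the universe, which is the entire point: everything except period finding is cheap.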

Resource requirements for RSA-2048:

| Resource | Gidney & Ekerå (2021) | Optimized Estimates (2024) |
| --- | --- | --- |
| Logical qubits | ~4,098 | ~2,048–3,000 |
| T gates | ~2.7 × 10⁹ | ~10⁹ |
| Toffoli gates | ~4.6 × 10⁹ | ~10⁹ |
| Physical qubits (surface code, d=27) | ~20 million | ~4–6 million |
| Runtime | ~8 hours | ~hours |
| Circuit depth | ~10¹² | ~10¹⁰–10¹¹ |

Grover’s Algorithm (1996)

Grover’s algorithm provides a quadratic speedup for unstructured search problems. Given a function with one solution among N possibilities, Grover’s finds it in O(√N) evaluations rather than the classical O(N).

Impact on symmetric cryptography:

| Algorithm | Classical Security | Post-Grover Security | Mitigation |
| --- | --- | --- | --- |
| AES-128 | 128-bit | 64-bit | Upgrade to AES-256 |
| AES-192 | 192-bit | 96-bit | Acceptable for most use cases |
| AES-256 | 256-bit | 128-bit | Fully quantum-resistant |
| SHA-256 (preimage) | 256-bit | 128-bit | Fully quantum-resistant |
| SHA-256 (collision) | 128-bit | ~85-bit (BHT algorithm) | Monitor; consider SHA-384/SHA-512 |
| HMAC-SHA-256 | 256-bit | 128-bit | Fully quantum-resistant |

Important nuance: Grover’s algorithm requires O(√N) sequential quantum operations. For AES-128, that means ~2⁶⁴ sequential quantum operations — each requiring coherent quantum computation. The actual cost of running Grover’s against AES-128 on a real quantum computer (including error correction overhead) may exceed the cost of classical brute force. NIST’s position is that AES-256 is sufficient; AES-128 is acceptable for most non-government applications even in a post-quantum world.

Regev’s Algorithm (2023): A Potential Shift

In 2023, Oded Regev proposed a new quantum factoring algorithm that could require significantly fewer quantum gates than Shor’s algorithm — at the cost of requiring more qubits and a classical preprocessing step using lattice reduction.

Key differences from Shor’s:

| Aspect | Shor’s Algorithm | Regev’s Algorithm |
| --- | --- | --- |
| Gate complexity | O(n³) for n-bit number | O(n^(3/2)) — potentially much fewer gates |
| Qubit count | O(n) logical qubits | O(n^(3/2)) — more qubits needed |
| Classical preprocessing | Minimal | Requires lattice reduction (LLL/BKZ) |
| Circuit depth | Very deep | Shallower — potentially more practical |
| Maturity | 30 years of optimization | New; not yet optimized |

The implication for CRQC timelines: if Regev’s algorithm (or future optimizations of it) proves practical, it could enable factoring on a quantum computer with fewer physical qubits and shallower circuits than Shor’s algorithm requires. Several research groups are actively working to optimize and resource-estimate this approach. This is a live area of research that security professionals should monitor through NIST Post-Quantum Standards updates.

Other Relevant Quantum Algorithms

| Algorithm | Problem | Speedup | Cryptographic Impact |
| --- | --- | --- | --- |
| Shor’s | Factoring, discrete log | Exponential | Breaks RSA, DH, ECC |
| Grover’s | Unstructured search | Quadratic | Halves symmetric key security |
| Simon’s | Period finding (black-box) | Exponential | Breaks some MACs/block cipher modes in superposition query model |
| BHT (Brassard–Høyer–Tapp) | Collision finding | Cube-root speedup | Modest impact on hash collisions |
| Quantum walks | Graph problems, element distinctness | Polynomial (problem-dependent) | Potential impact on graph-based cryptography |
| HHL (Harrow–Hassidim–Lloyd) | Linear systems of equations | Exponential (with caveats) | Potential impact on lattice-based schemes — under active study |
| Regev’s factoring (2023) | Factoring using quantum + classical | Potentially fewer qubits | Could reduce CRQC requirements; not yet practical |

Quantum Error Correction Deep Dive

Types of Quantum Errors

Before diving into error correction codes, it is essential to understand the error landscape. Quantum errors are fundamentally different from classical bit flips:

| Error Type | Description | Physical Cause | Notation |
| --- | --- | --- | --- |
| Bit flip (X error) | \|0⟩ ↔ \|1⟩ transition | Energy exchange with environment (T₁ decay) | Pauli X |
| Phase flip (Z error) | Relative phase randomized | Dephasing from magnetic field fluctuations (T₂) | Pauli Z |
| Bit-phase flip (Y error) | Combined bit and phase flip | Combined T₁ and T₂ processes | Pauli Y = iXZ |
| Leakage | Qubit escapes the computational subspace | Excitation to higher energy levels | Not a Pauli error |
| Crosstalk | Operation on one qubit affects neighbors | Electromagnetic coupling between qubits | Correlated errors |
| Measurement error | Incorrect readout of qubit state | Thermal noise in readout resonators | Classical post-processing error |

Correlated errors (where multiple qubits fail simultaneously due to a common cause) are particularly dangerous because they can defeat error correction codes designed for independent errors. Cosmic ray impacts on superconducting chips, for example, can cause correlated errors across hundreds of qubits simultaneously — a phenomenon Google documented in 2021. Addressing correlated errors is an active area of QEC research.

The Error Correction Threshold Theorem

The threshold theorem is the theoretical foundation for fault-tolerant quantum computing. It states:

If the physical error rate per gate is below a certain threshold (p < p_th), then arbitrarily long quantum computations can be performed with arbitrarily low logical error rates, provided you have enough physical qubits.

For the surface code, p_th ≈ 1%. Current leading hardware achieves physical error rates of 0.1–0.5%, which is below threshold. This is why Google’s Willow result (2024) — demonstrating that adding more physical qubits reduces logical error rates — was genuinely significant. It was the first experimental proof that we are on the right side of the error correction threshold.
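The threshold behavior can be sketched numerically using the standard surface-code approximation p_L ≈ A · (p/p_th)^((d+1)/2). The constants A and p_th below are illustrative assumptions, not measured hardware values:

```python
def logical_error_rate(p: float, d: int, p_th: float = 0.01, A: float = 0.1) -> float:
    """Common surface-code approximation: p_L ~ A * (p / p_th) ** ((d + 1) / 2).
    A and p_th are illustrative constants, not measured hardware values."""
    return A * (p / p_th) ** ((d + 1) // 2)

# Below threshold (p < p_th), raising the distance d suppresses logical errors:
for d in (3, 5, 7, 9):
    print(d, logical_error_rate(p=0.001, d=d))

# Above threshold (p > p_th), raising d makes the logical qubit *worse*:
for d in (3, 5, 7, 9):
    print(d, logical_error_rate(p=0.015, d=d))
```

The sign of the trend as d grows is exactly what Willow tested experimentally: below threshold, more qubits means fewer logical errors.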

Surface Code Architecture

graph TB
    subgraph SURFACE["Surface Code Patch (Distance d=5)"]
        subgraph ROW1["Row 1"]
            D1["Data"] --- A1["Ancilla<br/>(Z-check)"] --- D2["Data"] --- A2["Ancilla<br/>(X-check)"] --- D3["Data"]
        end
        subgraph ROW2["Row 2"]
            A3["Ancilla<br/>(X-check)"] --- D4["Data"] --- A4["Ancilla<br/>(Z-check)"] --- D5["Data"] --- A5["Ancilla<br/>(X-check)"]
        end
        subgraph ROW3["Row 3"]
            D6["Data"] --- A6["Ancilla<br/>(Z-check)"] --- D7["Data"] --- A7["Ancilla<br/>(X-check)"] --- D8["Data"]
        end
    end

    subgraph DECODER["Classical Decoder"]
        SYN["Syndrome<br/>Measurements"] --> DEC["Minimum Weight<br/>Perfect Matching"] --> CORR["Correction<br/>Operations"]
    end

    SURFACE --> SYN

    style SURFACE fill:#1a1a2e,stroke:#16213e,color:#e6e6e6
    style DECODER fill:#0f3460,stroke:#16213e,color:#e6e6e6

How it works:

  1. Data qubits store the encoded logical quantum state
  2. Ancilla qubits are measured every error correction cycle (~1 µs) to extract syndromes — patterns that indicate which errors likely occurred
  3. A classical decoder processes syndrome data in real-time and determines correction operations
  4. Corrections are applied (or, equivalently, tracked in software via the “Pauli frame”)

Code Distance and Error Suppression

The code distance d determines the error-correcting capability of the surface code. A distance-d code can correct up to ⌊(d-1)/2⌋ errors.

| Code Distance (d) | Physical Qubits per Logical Qubit | Errors Correctable | Logical Error Rate (at p = 0.1%) | Use Case |
| --- | --- | --- | --- | --- |
| 3 | ~17 | 1 | ~10⁻³ | Demonstrations only |
| 5 | ~49 | 2 | ~10⁻⁵ | Early fault-tolerant experiments |
| 7 | ~97 | 3 | ~10⁻⁷ | Small algorithms with moderate fidelity |
| 13 | ~337 | 6 | ~10⁻¹² | Useful quantum chemistry computations |
| 21 | ~881 | 10 | ~10⁻¹⁸ | Approaching cryptographic requirements |
| 27 | ~1,457 | 13 | ~10⁻²³ | RSA-2048 factoring requirement |
| 31 | ~1,921 | 15 | ~10⁻²⁶ | Conservative margin for Shor’s algorithm |
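The qubit counts in the table follow directly from the rotated surface code layout, roughly 2d² - 1 physical qubits (data plus ancilla) per logical qubit:

```python
def surface_code_overhead(d: int) -> tuple[int, int]:
    """Rotated surface code: ~2*d**2 - 1 physical qubits (data + ancilla)
    per logical qubit; a distance-d code corrects floor((d - 1) / 2) errors."""
    return 2 * d * d - 1, (d - 1) // 2

for d in (3, 5, 7, 13, 21, 27, 31):
    qubits, correctable = surface_code_overhead(d)
    print(f"d={d}: ~{qubits} physical qubits, corrects {correctable} errors")
```

Running this reproduces the qubit and error columns above (17 qubits at d=3 up to 1,921 at d=31).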

Magic State Distillation: The Hidden Cost

Clifford gates (H, S, CNOT) are relatively cheap on surface codes — they can be implemented transversally or via lattice surgery. But the T gate (required for universal computation) cannot be implemented this way: the Eastin-Knill theorem rules out a fully transversal universal gate set.

Instead, T gates require magic state distillation — a process that:

  1. Prepares many noisy copies of a special “magic state”
  2. Distills them into fewer, higher-fidelity copies through multiple rounds of error correction
  3. Uses these purified states to implement T gates

The overhead is enormous:

| Distillation Level | Input Magic States | Output Magic States | Output Error Rate |
| --- | --- | --- | --- |
| Level 1 | 15 | 1 | ~35p³ (p = input error rate) |
| Level 2 | 225 (15 × 15) | 1 | ~35(35p³)³ |
| Level 3 | 3,375 | 1 | Negligible |

For Shor’s algorithm on RSA-2048, billions of T gates are needed. Each one requires magic state distillation, with dedicated “distillation factories” consuming a significant fraction of the total physical qubits. Current estimates dedicate 40–60% of total physical qubits to magic state distillation alone.
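The cascade in the table is easy to reproduce: each 15-to-1 round consumes 15 inputs per output and suppresses the error rate as e → ~35e³ (the leading-order approximation). A sketch:

```python
def distill(p: float, levels: int) -> tuple[int, float]:
    """15-to-1 magic state distillation: each level consumes 15 inputs per
    output and maps error rate e -> ~35 * e**3 (leading-order approximation)."""
    inputs, err = 1, p
    for _ in range(levels):
        inputs *= 15
        err = 35 * err ** 3
    return inputs, err

for lvl in (1, 2, 3):
    raw, err = distill(p=0.01, levels=lvl)
    print(f"level {lvl}: {raw} raw states -> 1 output at error ~{err:.2e}")
```

With p = 1%, two levels already reach ~10⁻¹² at a cost of 225 raw states per output, which is why most resource estimates assume two-level distillation.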


Alternative QEC Approaches and Recent Advances

Beyond Surface Codes

The surface code is not the only path forward. Several alternative QEC approaches could dramatically change the CRQC timeline:

| QEC Code | Key Advantage | Status (2026) | Potential Impact on CRQC |
| --- | --- | --- | --- |
| Surface Code | Well-understood; compatible with 2D hardware | Below-threshold demos (Google Willow) | Baseline: ~20M physical qubits for RSA-2048 |
| Color Codes | Transversal Clifford gates; lower overhead | Experimental demonstrations | Could reduce qubit count by 30–50% |
| Quantum LDPC Codes | Much higher encoding rate; fewer physical qubits | Theoretical with early experiments | Could reduce qubits by 10–100×; potential game-changer |
| Concatenated Codes | Modular error correction | Original QEC approach; less efficient than surface codes | Unlikely to change timeline significantly |
| Floquet Codes | Dynamic error correction via periodic measurements | Active research | Could improve threshold and reduce overhead |
| Bosonic Codes (Cat/GKP) | Hardware-efficient; fewer physical qubits | Demonstrations in superconducting cavities | Could reduce overhead by encoding in oscillator modes |

Quantum LDPC Codes: The Potential Game-Changer

Quantum Low-Density Parity-Check (qLDPC) codes have emerged as the most promising alternative to surface codes. Unlike surface codes, which encode one logical qubit per ~2d² physical qubits, qLDPC codes can encode multiple logical qubits with a much better physical-to-logical ratio.

In 2024, IBM researchers proposed the gross code — a qLDPC code family that could potentially reduce the physical qubit requirement for RSA-2048 factoring by an order of magnitude. If these codes prove practical, the CRQC timeline could accelerate by several years.

However: qLDPC codes require non-local connectivity between qubits (connections beyond nearest neighbors), which is challenging for superconducting hardware but more natural for trapped-ion and neutral-atom architectures. The practical implementation of qLDPC codes remains an open research question.
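To see where the order-of-magnitude claim comes from, compare encoding rates. The gross code is a [[144, 12, 12]] code: 144 data qubits encoding 12 logical qubits at distance 12. The check-qubit count and the surface-code comparison below are illustrative assumptions, not published resource estimates:

```python
# Encoding-rate comparison with illustrative numbers. The gross code is a
# [[144, 12, 12]] qLDPC code: 144 data qubits encode 12 logical qubits at
# distance 12. Assume ~144 additional check qubits, and compare against a
# distance-13 surface code patch at ~2*d**2 physical qubits per logical qubit.

GROSS_PHYSICAL = 144 + 144        # data + check qubits (assumed layout)
GROSS_LOGICAL = 12
surface_per_logical = 2 * 13**2   # ~338 physical qubits per logical qubit
gross_per_logical = GROSS_PHYSICAL / GROSS_LOGICAL

print(f"surface code: ~{surface_per_logical} physical per logical")
print(f"gross code:   ~{gross_per_logical:.0f} physical per logical")
print(f"reduction:    ~{surface_per_logical / gross_per_logical:.0f}x")
```

Roughly an order of magnitude per logical qubit at comparable distance — consistent with the potential savings described above, if the connectivity challenge can be solved.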


The T Gate Problem: Why Universal Fault Tolerance Is So Hard

The difficulty of implementing T gates fault-tolerantly deserves special attention because it is the single largest contributor to CRQC resource estimates.

Why T Gates Are Special

The Clifford group (generated by H, S, and CNOT) is efficiently simulable on a classical computer by the Gottesman-Knill theorem. This means a quantum computer running only Clifford gates provides zero advantage over a classical computer. It is the T gate that adds computational universality and enables quantum speedup.

Unfortunately, the Eastin-Knill theorem proves that no quantum error-correcting code can implement a universal gate set entirely through transversal (qubit-by-qubit) operations. For the surface code, the T gate is the “hard” gate that requires magic state distillation.

Practical Impact on CRQC Resources

Consider the resource chain for a single logical T gate in a surface-code quantum computer:

  1. Prepare ~15 noisy magic states (each on its own logical qubit patch)
  2. Distill them into 1 higher-fidelity magic state (consuming ~15 logical qubit patches for one round)
  3. Inject the distilled state via a gate teleportation circuit
  4. Repeat for every T gate in the algorithm — billions of times for RSA-2048

This means the quantum computer must maintain dedicated “distillation factories” running continuously throughout the computation, consuming a large fraction of the total hardware. Reducing T gate count through better circuit compilation is one of the most active areas of quantum algorithm optimization.
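Back-of-envelope arithmetic shows why dedicated factories are unavoidable. Every number below is an assumption for illustration (a T count of ~3 × 10⁹ is the right order of magnitude for published RSA-2048 estimates; the cycles-per-round figure is hypothetical):

```python
# Back-of-envelope distillation factory load. All numbers are illustrative
# assumptions, not measured or published figures.

T_COUNT = 3e9              # assumed total T gates for RSA-2048 (order of magnitude)
RAW_PER_T = 15**2          # two 15-to-1 distillation levels -> 225 raw states per T gate
CYCLE_S = 1e-6             # ~1 microsecond per error-correction cycle
CYCLES_PER_ROUND = 100     # assumed QEC cycles per distillation round

raw_states = T_COUNT * RAW_PER_T
print(f"raw magic states consumed: ~{raw_states:.1e}")

# Producing T states strictly one at a time would already take days,
# which is why many factories must run in parallel:
serial_hours = T_COUNT * CYCLES_PER_ROUND * CYCLE_S / 3600
print(f"serial T-gate production: ~{serial_hours:.0f} hours")
```

Hitting a target runtime of hours rather than days requires many factories operating concurrently, which is where the 40–60% qubit overhead comes from.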

Recent advances in non-Clifford gate synthesis and catalyst states have reduced the T count for Shor’s algorithm by factors of 2–5 compared to early estimates, but the overhead remains the dominant cost.


Real-Time Decoding: The Classical Bottleneck

A commonly overlooked challenge is that quantum error correction requires real-time classical processing that keeps pace with the quantum computer.

Every ~1 µs, syndrome measurements from millions of qubits must be:

  1. Read out via control electronics
  2. Transmitted to classical decoders
  3. Decoded using algorithms like Minimum Weight Perfect Matching (MWPM) or Union-Find
  4. Fed back as corrections before the next error correction cycle

This creates a stringent latency requirement: the entire decode cycle must complete in ~1 µs. For millions of qubits, this requires custom classical hardware (FPGAs or ASICs) costing potentially billions of dollars in addition to the quantum hardware itself.
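The ~TB/s figure follows from simple arithmetic, with illustrative parameters (the qubit count, ancilla fraction, and cycle time below are assumptions):

```python
# Order-of-magnitude syndrome bandwidth; all parameters are illustrative.

N_PHYSICAL = 20_000_000    # ~20M physical qubits (CRQC-scale surface code machine)
ANCILLA_FRACTION = 0.5     # roughly half the qubits act as syndrome ancillas
CYCLE_S = 1e-6             # one syndrome bit per ancilla per ~1 us cycle

bits_per_second = N_PHYSICAL * ANCILLA_FRACTION / CYCLE_S
print(f"~{bits_per_second / 1e12:.0f} Tb/s of raw syndrome data")   # ~10 Tb/s
print(f"~{bits_per_second / 8 / 1e12:.2f} TB/s")                    # ~1.25 TB/s
```

Every one of those bits must also be decoded within the same ~1 µs window, which is why the decoder, not the quantum chip, may be the harder engineering problem.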

| Decoding Aspect | Requirement | Current Status |
| --- | --- | --- |
| Syndrome bandwidth | ~TB/s for millions of qubits | Demonstrated for ~100 qubits |
| Decode latency | ~1 µs per round | Achieved for small codes on FPGAs |
| Decoder accuracy | Near-optimal matching | MWPM and Union-Find algorithms available |
| Scalability | Millions of qubits in real-time | Major unsolved engineering challenge |
| Power consumption | Within cryostat thermal budget | Active research into cryo-CMOS |

Quantum Networking and Distributed Quantum Computing

A development worth monitoring is the emergence of quantum networking — connecting multiple quantum processors via entanglement distribution over optical fiber or free-space links. This has two implications for cryptographic security:

Quantum Key Distribution (QKD)

QKD uses quantum mechanics to distribute encryption keys with information-theoretic security. If an eavesdropper intercepts qubits in transit, the disturbance is detectable. The two main protocols are:

| Protocol | How It Works | Key Rate | Distance Limit | Status |
| --- | --- | --- | --- | --- |
| BB84 | Sender encodes bits in random bases; receiver measures in random bases; they compare bases | Moderate | ~100 km (fiber), ~1,200 km (satellite) | Commercially deployed (ID Quantique, Toshiba) |
| E91 (Ekert) | Uses entangled photon pairs; security from Bell inequality violation | Lower | Similar to BB84 | Research demonstrations |
| CV-QKD | Encodes in continuous variables (phase/amplitude of coherent states) | Higher throughput | ~50–80 km | Emerging commercial (ThinkQuantum) |

Important caveat for security professionals: QKD is not a replacement for post-quantum cryptography. QKD requires dedicated quantum hardware, has severe distance limitations, cannot authenticate parties (you still need classical authentication), and does not protect stored data. NIST, NSA, and NCSC (UK) have all published guidance stating that PQC — not QKD — is the recommended path for most organizations. QKD has niche applications in high-security point-to-point links (government, military, financial backbone networks) but is not a general-purpose solution.

Distributed Quantum Computing

Connecting multiple quantum processors via quantum links could enable a distributed CRQC — assembling cryptographically relevant capability from smaller, individually insufficient machines. This is analogous to how classical distributed computing enables problems no single machine could solve alone.

Current quantum networking is in its infancy (entanglement distribution rates of ~1 pair/second over metro distances), but if quantum networks mature, the CRQC timeline could be affected. A nation-state might assemble a CRQC from many moderate-scale quantum processors rather than building a single monolithic machine.

This is speculative but worth tracking. For now, the monolithic approach dominates all serious CRQC resource estimates.


Implications for Cryptographic Migration Planning

What We Know with Confidence

  1. Shor’s algorithm is mathematically correct. The algorithm works; the question is purely about hardware capability.
  2. Error correction works in principle. Google’s Willow (2024) demonstrated below-threshold logical error reduction.
  3. The physical qubit requirement is millions, not thousands. Even optimistic estimates require ~4 million physical qubits for RSA-2048.
  4. Current hardware is at ~1,000–1,400 physical qubits. Three to four orders of magnitude short.
  5. Symmetric cryptography is safe with appropriate key sizes (AES-256, SHA-384+).

What Remains Uncertain

  1. Hardware scaling trajectory — Will qubit counts double annually? Every 18 months? Every 3 years?
  2. Algorithmic breakthroughs — Could a new algorithm (like Regev’s 2023 factoring approach) dramatically reduce resource requirements?
  3. QEC code improvements — Could qLDPC codes or other approaches reduce the physical-to-logical ratio by 10× or more?
  4. Nation-state programs — Are classified quantum computing programs significantly ahead of publicly known capabilities?
  5. Alternative computing paradigms — Could analog quantum computing, quantum annealing, or hybrid approaches provide unexpected shortcuts?

Decision Framework for Security Professionals

Given these uncertainties, the prudent approach for security professionals is:

| Data Sensitivity | Secrecy Lifetime | Recommended Action | Urgency |
| --- | --- | --- | --- |
| Top Secret / National Security | 50+ years | Migrate to PQC immediately; implement hybrid schemes | Critical — now |
| Financial / Healthcare PII | 10–25 years | Begin PQC migration planning; inventory cryptographic assets | High — 2025–2027 |
| Enterprise / Commercial | 5–10 years | Monitor NIST standards; plan migration timeline | Medium — 2026–2030 |
| Transient / Session Data | Hours to days | Current cryptography sufficient; plan for future compliance | Low — align with deprecation schedules |
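This decision logic is often formalized as Mosca’s inequality: if x (required secrecy lifetime) plus y (migration time) exceeds z (years until a CRQC exists), the data is already at risk. A minimal encoding:

```python
def mosca_at_risk(secrecy_years: float, migration_years: float,
                  crqc_years: float) -> bool:
    """Mosca's theorem: data is at risk when x + y > z, where x is the
    required secrecy lifetime, y the migration time, and z the years
    until a cryptographically relevant quantum computer exists."""
    return secrecy_years + migration_years > crqc_years

# Healthcare PII: 25-year secrecy lifetime, 5-year migration, CRQC in ~15 years:
print(mosca_at_risk(25, 5, 15))   # True  -> start migrating now
# Transient session data under the same assumptions:
print(mosca_at_risk(0, 5, 15))    # False -> align with normal deprecation cycles
```

The parameter values are illustrative; the point is that the inequality forces an explicit comparison between data lifetimes and the range of CRQC estimates rather than a single predicted date.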

For detailed guidance on migration strategies, see NIST Post-Quantum Standards and Migration Planning and Strategies.


Geopolitical Dimension

Quantum computing development is a national security priority for multiple nations. The competitive landscape matters for CRQC timeline assessment:

| Country/Region | Estimated Public Investment (Cumulative) | Key Programs | Notable Capabilities |
| --- | --- | --- | --- |
| China | >$15 billion | Chinese Academy of Sciences; USTC (Hefei) | Jiuzhang (photonic advantage); Zuchongzhi (superconducting); extensive satellite QKD network |
| United States | >$5 billion (public) + significant private | DARPA US2QC; DOE quantum centers; NSF QLCIs | IBM, Google, IonQ, Quantinuum (private sector leads) |
| European Union | >€7 billion | Quantum Flagship Programme | Strong theory; Pasqal, IQM (startups) |
| United Kingdom | >£2.5 billion | National Quantum Strategy (2024) | Oxford Ionics; Riverlane (decoding) |
| Canada | >$1.5 billion (CAD) | National Quantum Strategy (2023) | Xanadu (photonic); D-Wave (annealing); Waterloo theory |
| Australia | >$1 billion (AUD) | Silicon Quantum Computing; CSIRO | Phosphorus-in-silicon approach (UNSW); unique technology path |
| Japan | >¥150 billion | Moonshot Program; RIKEN/Fujitsu partnership | Superconducting focus; strong industry collaboration |

Classification concern: The most advanced quantum computing research may be classified, particularly in China, the United States, and Russia. Public estimates of quantum progress represent a lower bound on actual capability. This uncertainty should factor into risk assessments — assume adversaries may be ahead of publicly known milestones.


Key Takeaways

  1. Qubits are not parallel classical bits. Superposition and entanglement enable specific algorithmic speedups, not general parallelism. Only problems with exploitable mathematical structure benefit.

  2. BQP captures what quantum computers can efficiently solve. Integer factoring and discrete logarithms are in BQP; lattice problems and hash functions are not (as far as we know). This asymmetry is why post-quantum cryptography works.

  3. Current quantum hardware is nowhere near cryptographically relevant. The gap between ~1,400 physical qubits (2026) and ~4–20 million physical qubits (CRQC) is enormous. Vendor qubit counts are misleading without error rates and connectivity context.

  4. Quantum error correction is the bottleneck, not qubit count. Surface codes require ~1,500 physical qubits per logical qubit, and magic state distillation consumes 40–60% of total resources. The overhead is staggering.

  5. CRQC timelines are genuinely uncertain. Expert estimates range from 2035 to 2050+. Plan for the possibility of earlier timelines while recognizing that the engineering challenges are formidable.

  6. “Harvest now, decrypt later” makes the threat immediate. Even if a CRQC is 15 years away, data encrypted today with RSA or ECDH may be at risk if it must remain confidential for more than 15 years.

  7. Migration decisions should be risk-based, not prediction-based. Do not wait for certainty about CRQC timelines. Assess your data’s secrecy lifetime against the range of CRQC estimates and act accordingly.


Common Misconceptions in the Cybersecurity Community

To conclude, here are the most frequent misunderstandings about quantum computing encountered in cybersecurity discussions:

| Misconception | Reality |
| --- | --- |
| “Quantum computers try every answer simultaneously” | Quantum parallelism requires carefully designed interference patterns. Only specific problems with mathematical structure can be exploited. Random brute-force does not benefit from quantum speedup. |
| “We need to worry about quantum computers breaking AES” | Grover’s algorithm provides only a quadratic speedup. AES-256 retains 128-bit security post-quantum — more than sufficient. The real threat is to public-key cryptography (RSA, ECC, DH). |
| “Quantum-resistant means quantum-proof” | Post-quantum algorithms are believed to be resistant to known quantum algorithms. No mathematical proof of security exists against all possible quantum attacks. This is analogous to classical cryptography — RSA is not proven hard either. |
| “A 1,000-qubit quantum computer can break encryption” | Breaking RSA-2048 requires millions of error-corrected physical qubits. Current 1,000-qubit systems cannot perform any cryptographically relevant computation. |
| “Quantum computing will make all encryption obsolete” | Symmetric cryptography (AES-256) and hash functions (SHA-384+) remain secure. Only public-key cryptography based on factoring and discrete logs is at risk. |
| “We have plenty of time before quantum threatens us” | The harvest-now-decrypt-later threat means data encrypted today may be decrypted in the future. Long-lived secrets need protection now. |
| “QKD solves the quantum threat” | QKD addresses key distribution only, requires dedicated hardware, has distance limitations, and does not protect stored data. PQC is the comprehensive solution for most organizations. |

Further Reading

  • Gidney & Ekera (2021) — “How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits” — the definitive resource estimate for Shor’s algorithm on RSA-2048
  • Google Quantum AI (2024) — Willow below-threshold error correction results
  • NIST IR 8413 — Status Report on the Third Round of the NIST Post-Quantum Cryptography Standardization Process
  • NSA CNSA 2.0 (2022) — Commercial National Security Algorithm Suite 2.0 guidance
  • Global Risk Institute Annual Survey — Expert assessments of quantum computing risk timelines
  • Nielsen & Chuang — Quantum Computation and Quantum Information — the standard textbook reference
  • Regev (2023) — “An Efficient Quantum Factoring Algorithm” — potential improvement over Shor’s resource requirements
  • Litinski (2019) — “A Game of Surface Codes” — accessible introduction to lattice surgery and magic state distillation
  • Mosca (2018) — “Cybersecurity in an Era with Quantum Computers” — foundational risk framework for the harvest-now-decrypt-later threat

Next Steps

For the next step in understanding the post-quantum landscape, continue to NIST Post-Quantum Standards, which covers the specific algorithms (ML-KEM, ML-DSA, SLH-DSA) selected to replace the vulnerable classical cryptographic schemes discussed on this page.

If you want to understand the specific lattice, code-based, and hash-based mathematical problems that resist quantum attack, see Post-Quantum Algorithm Families.

For practical guidance on when and how to begin migrating your organization’s cryptographic infrastructure, see Migration Planning and Strategies.