Quantum Computing Fundamentals for Cryptographers
Why Cryptographers Need to Understand Quantum Computing
You do not need a physics degree to prepare for the post-quantum transition. What you need is a precise understanding of what quantum computers can and cannot do, when they will be able to break current cryptographic primitives, and how much uncertainty surrounds those timelines. This page provides exactly that — quantum computing concepts explained through the lens of cryptographic relevance, not academic physics.
The core message is simple: quantum computing threatens the mathematical hardness assumptions that underpin RSA, Diffie-Hellman, and elliptic curve cryptography. But the path from today’s noisy, small-scale quantum hardware to a machine that can actually factor a 2048-bit RSA key is extraordinarily difficult. Understanding why it is difficult — and how much progress has been made — is essential for making sound migration decisions.
Before proceeding, ensure you are familiar with the classical cryptographic primitives at risk, covered in Post-Quantum Cryptography Overview.
Qubits: The Fundamental Unit
Classical Bits vs. Qubits
A classical bit is either 0 or 1. A qubit (quantum bit) is a two-level quantum system that can exist in a superposition of both states simultaneously. Mathematically, a qubit’s state is represented as:
|ψ⟩ = α|0⟩ + β|1⟩
Where α and β are complex numbers called probability amplitudes, and |α|² + |β|² = 1. When you measure a qubit, you get 0 with probability |α|² and 1 with probability |β|², and the superposition collapses.
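The Born rule above can be checked numerically. A minimal sketch in plain Python, with the amplitudes 3/5 and 4/5 chosen purely for illustration (any pair satisfying the normalization constraint works):

```python
import random

# A qubit state |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
# Amplitudes 3/5 and 4/5 are illustrative choices, not from the text.
alpha, beta = complex(3 / 5), complex(4 / 5)
assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1.0) < 1e-12  # normalization

def measure(a: complex, b: complex) -> int:
    """Born rule: return 0 with probability |a|^2, else 1."""
    return 0 if random.random() < abs(a) ** 2 else 1

# Repeated preparation-and-measurement converges on |beta|^2 = 0.64 for P(1).
shots = 100_000
ones = sum(measure(alpha, beta) for _ in range(shots))
print(f"P(1) estimated: {ones / shots:.3f}  (exact |beta|^2 = {abs(beta) ** 2:.3f})")
```

Note that each run of `measure` destroys the superposition: a single shot reveals one bit, never the amplitudes themselves.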
What this means for cryptography: Superposition is not “trying both values at once” in any simple parallel-computing sense. A common misconception is that n qubits let you try 2ⁿ solutions simultaneously. The reality is more subtle — quantum algorithms must be carefully structured so that correct answers are amplified (their probability increases) while incorrect answers interfere destructively (their probability decreases). This is why only specific problems — those with exploitable mathematical structure — benefit from quantum speedup.
Superposition in Practice
Consider a single qubit initialized to |0⟩. Applying a Hadamard gate puts it into equal superposition:
H|0⟩ = (|0⟩ + |1⟩) / √2
If you apply Hadamard gates to n qubits, you create a uniform superposition of all 2ⁿ possible bit strings. For a 20-qubit register, that is approximately one million states encoded simultaneously. For a 300-qubit register, that is more states than there are atoms in the observable universe.
The catch: you cannot simply “read out” all those states. Measurement gives you one outcome, probabilistically. The entire art of quantum algorithm design is structuring the computation so that the measurement overwhelmingly returns the answer you want.
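The state-vector arithmetic behind these claims is small enough to verify directly. A numpy sketch building the 20-qubit uniform superposition (the full 2²⁰-amplitude vector fits comfortably in memory at this size, which is exactly why classical simulation stops scaling long before cryptographic sizes):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1.0, 0.0])

# H|0> = (|0> + |1>)/sqrt(2): two equal amplitudes of 1/sqrt(2).
plus = H @ ket0
print(plus)

# Hadamards on n qubits build a uniform superposition over all 2^n bit strings.
n = 20
state = np.ones(1)
for _ in range(n):
    state = np.kron(state, plus)  # tensor product grows the register one qubit at a time

# 2^20 amplitudes, each 2^(-10); any single measurement outcome has probability 2^(-20).
print(state.size, state[0], float(np.sum(state ** 2)))
```

Each doubling of the register doubles the memory needed for classical simulation; at ~50 qubits this brute-force approach already exceeds any classical machine.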
Entanglement
Two qubits are entangled when their quantum states are correlated in a way that cannot be described independently. The canonical example is a Bell state:
|Φ⁺⟩ = (|00⟩ + |11⟩) / √2
Measuring the first qubit immediately determines the second qubit’s state, regardless of physical distance. This is not useful for faster-than-light communication (you still need a classical channel to compare results), but it is essential for:
- Quantum error correction — encoding logical qubits across many entangled physical qubits
- Quantum algorithms — creating correlations between input and output registers (critical for Shor’s algorithm)
- Quantum key distribution (QKD) — detecting eavesdropping through Bell inequality violations
Cryptographic relevance: Entanglement is what allows Shor’s algorithm to correlate the output of modular exponentiation with the period of a function, enabling efficient factoring. Without entanglement, quantum computers would be no more powerful than probabilistic classical computers.
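The Bell state above is produced by the standard two-gate recipe (Hadamard on the first qubit, then CNOT), which a few lines of numpy can verify:

```python
import numpy as np

# Build |Phi+> = (|00> + |11>)/sqrt(2): H on qubit 0, then CNOT (control 0, target 1).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])  # flips the target iff the control is |1>

ket00 = np.zeros(4)
ket00[0] = 1.0
bell = CNOT @ np.kron(H, I2) @ ket00
print(bell)  # amplitude 1/sqrt(2) on |00> and |11>, zero on |01> and |10>

# Perfectly correlated outcomes: P(00) = P(11) = 1/2, P(01) = P(10) = 0.
probs = bell ** 2
print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
```

The zero amplitudes on |01⟩ and |10⟩ are the correlation: measuring either qubit fixes the other, which no pair of independent single-qubit states can reproduce.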
Decoherence: The Enemy of Quantum Computation
Decoherence is the process by which a qubit loses its quantum properties — superposition and entanglement — through unwanted interaction with its environment. For cryptographic analysis, decoherence is the primary reason why building a large-scale quantum computer is so difficult.
There are two key timescales:
- T₁ (relaxation time) — How long before a qubit in the |1⟩ state spontaneously decays to |0⟩. Think of it as the “battery life” of the qubit’s energy state.
- T₂ (dephasing time) — How long before the phase relationship between |0⟩ and |1⟩ is scrambled. This is typically shorter than T₁ and is the more relevant limit for computation.
| Platform | Typical T₁ | Typical T₂ | Gate Time | Operations Before Decoherence |
|---|---|---|---|---|
| Superconducting (transmon) | 100–300 µs | 50–200 µs | 20–100 ns | ~1,000–10,000 |
| Trapped ion | 1–60 s | 0.5–10 s | 1–100 µs | ~10,000–100,000 |
| Neutral atom | 1–10 s | 0.1–1 s | 1–10 µs | ~10,000–100,000 |
| Photonic | N/A (photon lost on measurement) | N/A | ~ns | Limited by photon loss rates |
| Nitrogen-vacancy (NV) center | 1–10 ms | 1–2 ms | ~10–100 ns | ~10,000–100,000 |
Why this matters for Shor’s algorithm: Factoring RSA-2048 requires circuit depths on the order of 10¹⁰–10¹² gates. Even with the fastest gate times (~20 ns for superconducting), that translates to minutes or hours of continuous coherent computation. No physical qubit can maintain coherence for that long — which is precisely why quantum error correction is mandatory.
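The coherence budget is simple arithmetic. A back-of-envelope sketch using representative figures from the table above (illustrative midpoints, not measurements from any specific device):

```python
# Rough gates-per-coherence-window estimate: T2 divided by gate time.
# Representative midpoint values from the table above; real devices vary widely.
platforms = {
    #                  T2 (s)   gate time (s)
    "superconducting": (100e-6, 50e-9),
    "trapped ion":     (1.0,    10e-6),
}

for name, (t2, gate) in platforms.items():
    ops = t2 / gate
    print(f"{name}: ~{ops:,.0f} operations within one coherence window")

# Shor on RSA-2048 needs roughly 1e10-1e12 sequential gates. Even the best raw
# ratio above falls short by many orders of magnitude -- hence error correction.
shor_depth = 1e11
best_ops = 1.0 / 10e-6
print(f"shortfall vs. Shor circuit depth: ~{shor_depth / best_ops:,.0f}x")
```

The shortfall of roughly a million-fold is the quantitative reason quantum error correction is not optional for cryptanalytic workloads.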
The Bloch Sphere: Visualizing Qubit States
For security professionals who prefer geometric intuition: every single-qubit state can be represented as a point on the Bloch sphere — a unit sphere where:
- The north pole represents |0⟩
- The south pole represents |1⟩
- The equator represents equal superpositions with different phase relationships
- Quantum gates correspond to rotations of this sphere
- Decoherence can be visualized as the point “drifting” from its intended position — T₁ pulls toward the north pole (energy decay), T₂ smears the point around the z-axis (phase randomization)
Understanding the Bloch sphere helps demystify quantum gates: the Hadamard gate is a 180° rotation about the axis halfway between X and Z. The T gate is a 45° rotation about the Z axis. Every single-qubit operation is just a rotation.
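Two consequences of the rotation picture can be verified in a few lines: a 180° rotation applied twice is the identity (so H is its own inverse), and eight 45° Z-rotations compose to a full turn (so T⁸ = I). A numpy sketch:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = (X + Z) / np.sqrt(2)                   # Hadamard: 180-degree turn about the X+Z axis
T = np.diag([1, np.exp(1j * np.pi / 4)])   # T: 45-degree rotation about the Z axis

# A 180-degree rotation applied twice is the identity: H is self-inverse.
assert np.allclose(H @ H, np.eye(2))
# Eight 45-degree Z-rotations make a full turn: T^8 = I.
assert np.allclose(np.linalg.matrix_power(T, 8), np.eye(2))
print("rotation identities verified")
```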
Quantum Gates and Circuits
Quantum computation is performed by applying quantum gates — unitary transformations — to qubits. Unlike classical logic gates, quantum gates are reversible (every gate has an inverse) and operate on probability amplitudes rather than definite bit values.
Essential Gates for Cryptographic Context
| Gate | Symbol | Matrix | Qubits | Purpose |
|---|---|---|---|---|
| Hadamard | H | (1/√2)[[1,1],[1,-1]] | 1 | Creates superposition; foundation of nearly every quantum algorithm |
| Pauli-X | X | [[0,1],[1,0]] | 1 | Quantum NOT gate; bit flip |
| Pauli-Z | Z | [[1,0],[0,-1]] | 1 | Phase flip; leaves \|0⟩ unchanged, maps \|1⟩ to -\|1⟩ |
| Phase (S) | S | [[1,0],[0,i]] | 1 | Quarter-turn phase rotation |
| T Gate | T | [[1,0],[0,e^(iπ/4)]] | 1 | π/8 gate; critical for universal computation and fault-tolerant circuits |
| CNOT | CX | 4×4 controlled-NOT | 2 | Entangles two qubits; flips target if control is \|1⟩ |
| Toffoli | CCX | 8×8 controlled-controlled-NOT | 3 | Universal for classical computation; used in quantum arithmetic circuits |
| SWAP | SWAP | 4×4 swap matrix | 2 | Exchanges two qubit states; important for circuit routing on physical hardware |
The Universal Gate Set
Any quantum computation can be decomposed into a sequence of gates from a universal gate set. The standard fault-tolerant universal set is {H, T, CNOT} — the Clifford+T gate set. This matters for cryptographic analysis because:
- Resource estimation for breaking cryptographic schemes is typically expressed in T-gate count, since T gates are the most expensive operation in fault-tolerant quantum computing.
- The number of T gates required to run Shor’s algorithm against RSA-2048 determines the physical resources needed.
- Current estimates for factoring RSA-2048 require on the order of billions of T gates.
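Universality of {H, T, CNOT} rests on identities like the ones below: every other single-qubit gate in the table reduces to products of H and T. A numpy check of the standard decompositions:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])

# Common gates decompose into the {H, T} single-qubit fragment of Clifford+T:
S = T @ T        # S = T^2 (quarter-turn phase)
Z = S @ S        # Z = S^2 = T^4
X = H @ Z @ H    # X = H Z H (Hadamard conjugation swaps the X and Z bases)

assert np.allclose(S, np.diag([1, 1j]))
assert np.allclose(Z, np.diag([1, -1]))
assert np.allclose(X, np.array([[0, 1], [1, 0]]))
print("Clifford+T decompositions check out")
```

This is also why T-count is the cost metric: the Clifford gates (H, S, CNOT, ...) are cheap in the surface code, while each T requires magic state distillation.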
Quantum Circuit Concept
The following diagram illustrates the structure of a quantum circuit relevant to Shor’s algorithm — showing how qubits move through Hadamard gates, controlled operations, and measurement.
```mermaid
graph LR
    subgraph INPUT["Input Register (n qubits)"]
        Q0["|0⟩"] --> H0["H"]
        Q1["|0⟩"] --> H1["H"]
        Q2["|0⟩"] --> H2["H"]
        Qn["|0⟩"] --> Hn["H"]
    end
    subgraph ORACLE["Modular Exponentiation Oracle"]
        H0 --> ME["Controlled-U<br/>a^(2^k) mod N"]
        H1 --> ME
        H2 --> ME
        Hn --> ME
    end
    subgraph OUTPUT["Output Processing"]
        ME --> QFT["Quantum Fourier<br/>Transform (QFT⁻¹)"]
        QFT --> M0["Measure"]
        QFT --> M1["Measure"]
        QFT --> M2["Measure"]
        QFT --> Mn["Measure"]
    end
    subgraph WORK["Work Register (n qubits)"]
        W0["|1⟩"] --> ME
    end
    M0 --> CF["Classical<br/>Post-Processing<br/>(Continued Fractions)"]
    M1 --> CF
    M2 --> CF
    Mn --> CF
    CF --> PERIOD["Period r of<br/>a^r mod N = 1"]
    PERIOD --> FACTOR["Factors of N<br/>via GCD"]
```
Key insight: The modular exponentiation oracle is by far the most expensive component. For RSA-2048, this single operation dominates the entire resource budget — requiring billions of quantum gates operating on thousands of logical qubits.
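The number-theoretic skeleton around the oracle is entirely classical and can be emulated for a toy modulus. A sketch for N = 15 (the period-finding step is brute-forced here; that is precisely the step the quantum circuit accelerates):

```python
from math import gcd

# Classical emulation of Shor's pipeline for a toy modulus.
# Only the period-finding step below is what a quantum computer speeds up.
N, a = 15, 7

# Period finding (brute force here; exponential classically, polynomial on a QC):
r = next(r for r in range(1, N) if pow(a, r, N) == 1)
print(f"period of {a}^x mod {N}: r = {r}")

# Classical post-processing: gcd(a^(r/2) +/- 1, N) yields the factors.
assert r % 2 == 0  # if r is odd, retry with a different random a
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(f"{N} = {p} x {q}")
```

For N = 15 the period is 4 and the gcd step recovers the factors 3 and 5; for RSA-2048 the same skeleton applies, but the period-finding step is the part that costs billions of gates.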
Quantum Computational Complexity: BQP
What Is BQP?
BQP (Bounded-Error Quantum Polynomial Time) is the class of decision problems solvable by a quantum computer in polynomial time with error probability at most 1/3. It is the quantum analog of the classical complexity class BPP (Bounded-Error Probabilistic Polynomial Time).
```mermaid
graph TB
    subgraph COMPLEXITY["Computational Complexity Classes"]
        P["P<br/>Deterministic<br/>Polynomial Time"]
        BPP["BPP<br/>Probabilistic<br/>Polynomial Time"]
        BQP["BQP<br/>Quantum<br/>Polynomial Time"]
        NP["NP<br/>Nondeterministic<br/>Polynomial Time"]
        PSPACE["PSPACE"]
        P --> BPP
        BPP --> BQP
        BQP --> PSPACE
        P --> NP
        NP --> PSPACE
    end
    style BQP fill:#e74c3c,stroke:#c0392b,color:#fff
    style NP fill:#3498db,stroke:#2980b9,color:#fff
    style P fill:#2ecc71,stroke:#27ae60,color:#fff
```
What BQP Means for Cryptography
| Problem | Classical Complexity | Quantum Complexity | Cryptographic Impact |
|---|---|---|---|
| Integer factoring | Sub-exponential (best known: GNFS) | Polynomial (Shor’s) | RSA broken |
| Discrete logarithm | Sub-exponential | Polynomial (Shor’s) | DH, DSA, ECDH, ECDSA broken |
| Elliptic curve discrete log | Exponential (best known: Pollard’s rho) | Polynomial (Shor’s variant) | All ECC broken |
| AES key search | O(2ⁿ) | O(2^(n/2)) (Grover’s) | AES-128 reduced to 64-bit security; AES-256 remains secure |
| SHA-256 preimage | O(2²⁵⁶) | O(2¹²⁸) (Grover’s) | Still computationally infeasible |
| Lattice problems (LWE/SIS) | Exponential (best known) | Exponential (best known) | Basis for PQC — believed quantum-resistant |
| Hash-based signatures | Security from hash preimage resistance | Grover provides quadratic speedup only | SPHINCS+ secure with larger parameters |
| Code-based problems | Exponential (best known) | Exponential (best known) | McEliece / BIKE / HQC believed quantum-resistant |
The critical takeaway: BQP captures the problems quantum computers solve efficiently. Integer factoring and discrete logarithms fall within BQP (thanks to Shor’s algorithm), but most NP-complete problems — and the lattice/code-based problems that underpin post-quantum cryptography — are not known to be in BQP. This asymmetry is the entire foundation of the post-quantum migration.
What Quantum Computers Cannot Do
It is equally important to understand the limits:
- NP-complete problems are not known to be in BQP. Quantum computers are not expected to solve SAT, traveling salesman, or graph coloring in polynomial time.
- Grover’s algorithm provides only a quadratic speedup for unstructured search — significant but not revolutionary. Doubling key lengths neutralizes it.
- Quantum supremacy/advantage demonstrations (Google’s Sycamore, 2019) solved problems with no practical application. They proved quantum hardware works but said nothing about cryptographic relevance.
- No known quantum algorithm threatens the hardness of lattice problems, multivariate polynomial systems, or hash preimage resistance beyond the quadratic Grover speedup.
Current Quantum Hardware Landscape
The quantum computing industry is advancing rapidly, but the gap between current capabilities and what is needed to threaten cryptography remains vast. The following table summarizes the state of leading hardware efforts as of early 2026.
Leading Quantum Hardware Providers
| Provider | Technology | Qubits (Physical) | Two-Qubit Gate Error Rate | Notable Achievement |
|---|---|---|---|---|
| IBM | Superconducting (transmon) | 1,386 (Flamingo, 2025) | ~0.1–0.5% | Largest superconducting processor; Qiskit ecosystem; 100,000-qubit roadmap by 2033 |
| Google | Superconducting (transmon) | 105 (Willow, 2024) | ~0.1–0.3% | Below-threshold error correction with surface codes; “quantum supremacy” (Sycamore, 2019) |
| Quantinuum | Trapped ion (Ba⁺) | 56 (H2, 2024) | ~0.1% (best: 0.03%) | Highest fidelity 2-qubit gates of any platform; all-to-all connectivity |
| IonQ | Trapped ion (Yb⁺/Ba⁺) | 36 (Forte Enterprise, 2025) | ~0.3–0.5% | Algorithmic qubits metric; commercial cloud access via AWS/Azure/GCP |
| PsiQuantum | Photonic (fusion-based) | Pre-commercial | N/A (photonic loss rates) | Targeting 1M+ physical qubits at scale; GlobalFoundries fab partnership |
| QuEra | Neutral atom (Rb) | 256 (2024) | ~0.5% | Dynamically reconfigurable atom arrays; Harvard/MIT collaboration; logical qubit demonstrations |
| Atom Computing | Neutral atom (Yb) | 1,225 (2024) | ~1–2% | Largest neutral atom array; Infleqtion acquisition pathway |
Quantum Computing Technology Comparison
| Property | Superconducting | Trapped Ion | Photonic | Neutral Atom | Topological |
|---|---|---|---|---|---|
| Operating Temp | ~15 mK (dilution fridge) | Room temp (vacuum chamber) | Room temp | ~µK (optical trap) | ~15 mK |
| Gate Speed | 10–100 ns | 1–100 µs | ~ns (measurement) | 1–10 µs | Theoretical |
| Connectivity | Nearest-neighbor (2D grid) | All-to-all | Photonic graph states | Reconfigurable | Non-abelian anyons |
| Coherence Time | 100–300 µs | 1–60 seconds | N/A (photons lost on measurement) | 1–10 seconds | Topologically protected (theory) |
| Scalability Path | Modular multi-chip | Shuttle ions between zones | Multiplexed photon sources | Larger atom arrays | Majorana zero modes |
| Key Advantage | Fast gates; mature fab | Low error rates; full connectivity | Room temp; fiber integration | Scalable arrays; mid-circuit measurement | Inherent error protection |
| Key Challenge | Short coherence; crosstalk | Slow gates; scaling beyond ~100 | Photon loss; non-deterministic gates | Higher error rates; atom loss | No physical demonstration yet |
| Leading Players | IBM, Google, Rigetti | Quantinuum, IonQ | PsiQuantum, Xanadu | QuEra, Atom Computing, Pasqal | Microsoft (Station Q) |
| TRL (Technology Readiness) | 5–6 | 5–6 | 3–4 | 4–5 | 2–3 |
Quantum Volume and Benchmarking
Raw qubit count is a poor measure of quantum computer capability. The industry has developed several benchmarks to provide more meaningful comparisons:
| Metric | What It Measures | Who Uses It | Limitations |
|---|---|---|---|
| Quantum Volume (QV) | Max square circuit (width = depth) executable with >2/3 success | IBM (originated), broadly adopted | Saturates at ~2¹⁵ for current hardware; favors all-to-all connectivity |
| CLOPS (Circuit Layer Operations Per Second) | Throughput — how many circuits per second | IBM | Does not measure quality, only speed |
| Algorithmic Qubits | Estimated qubits available after accounting for errors | IonQ | Vendor-specific; not independently validated |
| Logical Error Rate per Round | Error suppression in QEC cycles | Google, Quantinuum | Specific to QEC experiments; not general purpose |
| Application Benchmarks | Performance on specific algorithms (VQE, QAOA, etc.) | QED-C, academic groups | Problem-specific; hard to generalize |
Quantum Volume is the most widely cited benchmark. A QV of 2ⁿ means the device can reliably execute a random circuit on n qubits with depth n. As of 2026:
- Quantinuum H2: QV = 2²⁰ (1,048,576) — highest result reported under the standard protocol
- IonQ Forte: QV = 2²⁹ (claimed using IonQ’s own methodology; not directly comparable)
- IBM Eagle: QV = 2⁷ (128) — limited by nearest-neighbor connectivity
- Google Willow: Does not report QV; focuses on QEC metrics
The disconnect between IBM’s qubit count lead (1,386 qubits) and its modest quantum volume illustrates why qubit count alone is misleading. Connectivity, gate fidelity, and coherence time all interact to determine useful computational capacity.
What the Numbers Actually Mean
When a vendor announces “1,000 qubits,” context is everything:
- Physical qubits ≠ useful computation. Without error correction, each physical qubit accumulates errors after a few hundred operations at best.
- Gate fidelity matters more than qubit count. A 50-qubit machine with 99.9% two-qubit gate fidelity is far more useful than a 1,000-qubit machine with 99% fidelity.
- Connectivity determines circuit depth. Nearest-neighbor architectures (superconducting) require SWAP operations to connect distant qubits, inflating circuit depth and error accumulation. All-to-all connectivity (trapped ions) avoids this overhead.
- Circuit depth is limited by coherence time. If your qubits decohere after 100 µs and each gate takes 100 ns, your maximum circuit depth is roughly 1,000 operations — far too shallow for Shor’s algorithm on cryptographic key sizes.
Quantum Error Correction: The Fundamental Bottleneck
Quantum error correction (QEC) is the single most important factor determining when quantum computers will threaten cryptography. Without it, quantum computers are limited to shallow circuits on small problem sizes — nowhere near what is needed for Shor’s algorithm on RSA-2048.
Why Quantum Errors Are Different
Classical computers use discrete 0/1 values that can be trivially copied and checked. Quantum systems face fundamentally harder challenges:
- No-cloning theorem — You cannot copy an unknown quantum state. This rules out the simplest classical error correction strategy (redundant copies).
- Continuous errors — A qubit can drift by any amount on the Bloch sphere, not just flip between 0 and 1. Errors are continuous, not discrete.
- Measurement destroys information — You cannot inspect a qubit mid-computation without collapsing its superposition. Error detection must be indirect.
- Multiple error types — Qubits experience bit-flip errors (X), phase-flip errors (Z), and combinations (Y = iXZ). A code must correct all three.
Surface Codes: The Leading Approach
The surface code is the most studied and most practical QEC code for near-term hardware. It works by encoding a single logical qubit across a 2D grid of many physical qubits, with dedicated ancilla qubits performing continuous syndrome measurements to detect and correct errors without disturbing the encoded information.
| Property | Surface Code Details |
|---|---|
| Layout | 2D grid of data and ancilla qubits |
| Error threshold | ~1% per physical operation (below this, adding more qubits helps) |
| Code distance d | Corrects up to ⌊(d-1)/2⌋ errors; higher d = better protection |
| Physical-to-logical ratio | ~2d² physical qubits per logical qubit |
| Logical error rate | Suppressed exponentially: ~(p/p_th)^(d/2) where p is physical error rate |
| Gate implementation | Clifford gates via lattice surgery; T gates via magic state distillation |
| T gate overhead | Each logical T gate requires ~15–20× additional physical qubits for distillation |
Logical vs. Physical Qubits: The Overhead Problem
This is the single most misunderstood aspect of quantum computing in the cybersecurity community. Here is the math:
To factor RSA-2048 using Shor’s algorithm:
| Resource | Estimate |
|---|---|
| Logical qubits needed | ~4,000–6,000 |
| Code distance required | d = 27–31 (for target logical error rate ~10⁻¹⁵) |
| Physical qubits per logical qubit | ~1,500–2,000 (at d=27, plus ancillas) |
| T gate distillation overhead | ~15–20× additional physical qubits |
| Total physical qubits | ~6–20 million |
| Logical gate operations | ~10⁹–10¹² |
| Estimated runtime | Hours to days (depending on assumptions) |
These estimates come from peer-reviewed work by Gidney & Ekerå (2021), which optimized Shor’s algorithm for RSA-2048 and estimated approximately 20 million physical qubits with a runtime of about 8 hours. More optimistic analyses using improved compilation techniques reduce this to ~4–6 million physical qubits, but even these lower bounds exceed current hardware by three to four orders of magnitude.
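The overhead math in the table above can be reproduced with the two formulas quoted earlier (logical error ~ (p/p_th)^(d/2), ~2d² physical qubits per logical qubit). A sketch with illustrative parameter choices (a 10⁻³ physical error rate and a mid-range logical qubit count):

```python
# Rough surface-code overhead model using the formulas quoted in the tables above.
# Parameter choices are illustrative assumptions, not a definitive resource estimate.
p, p_th = 1e-3, 1e-2        # physical error rate vs. surface code threshold (~1%)
target_logical = 1e-15      # per-gate logical error budget for Shor-scale circuits
logical_qubits = 5000       # mid-range of the 4,000-6,000 estimate

# Smallest odd code distance whose logical error rate meets the target:
d = 3
while (p / p_th) ** (d / 2) > target_logical:
    d += 2
physical_per_logical = 2 * d ** 2  # data + ancilla qubits per logical qubit

print(f"code distance d = {d}")
print(f"physical qubits per logical qubit ~ {physical_per_logical:,}")
print(f"total (before T-factory overhead) ~ {logical_qubits * physical_per_logical:,}")
```

With these inputs the model lands at d = 31 and roughly ten million physical qubits before distillation overhead, consistent with the range in the table; halving the physical error rate shrinks the required distance and the totals substantially, which is why gate fidelity improvements matter more than raw qubit counts.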
Why “1,000 Qubits” Does Not Break RSA
Let’s put the numbers in perspective with a concrete comparison:
| Milestone | Physical Qubits | What It Can Do (Cryptographically) |
|---|---|---|
| Current best (2026) | ~1,000–1,400 | Nothing cryptographically relevant; useful for chemistry simulations and optimization demos |
| Near-term (2027–2029) | ~10,000–100,000 | Early fault-tolerant demonstrations; factor very small numbers (~20–50 bits) with error correction |
| Medium-term (2030–2035) | ~100,000–1,000,000 | Factor numbers up to a few hundred bits; still insufficient for RSA-2048 |
| CRQC threshold | ~4,000,000–20,000,000 | Factor RSA-2048, compute elliptic curve discrete logs — all classical public-key cryptography broken |
The gap is not just qubit count. Even if you had millions of physical qubits today, you would need:
- Error rates below the surface code threshold (~1%) consistently across the entire chip
- Fast classical decoding to process syndrome measurements in real-time
- Long coherence times relative to the total circuit depth (hours of computation)
- Low-latency control electronics for millions of qubits simultaneously
- Cryogenic infrastructure (for superconducting approaches) at unprecedented scale
Each of these is an active area of engineering research with no guaranteed timeline.
The Scaling Analogy: Where Are We Really?
To put the challenge in perspective, consider this analogy to classical computing history:
| Era | Classical Computing | Quantum Computing (Approximate Equivalent) |
|---|---|---|
| Transistor invention (1947) | Proof that solid-state switching works | First qubit demonstrations (~1995–2000) |
| First IC (1958) | Integration of multiple transistors | Multi-qubit processors (~2010–2015) |
| Intel 4004 (1971) — 2,300 transistors | First commercial microprocessor | Current era (~1,000–1,400 qubits, 2024–2026) |
| Intel 80386 (1985) — 275,000 transistors | Usable personal computing | Projected ~2030–2035 |
| Modern CPU — billions of transistors | Cryptographically relevant classical computation | CRQC — projected ~2035–2050 |
This analogy is imperfect — quantum computing faces fundamentally different engineering challenges (error rates, cryogenics, decoherence) that classical scaling did not. But it illustrates that we are in the very early stages of a long engineering trajectory. Expecting a CRQC from today’s hardware is roughly equivalent to expecting a modern CPU from a 1971 Intel 4004.
The Path to a Cryptographically Relevant Quantum Computer (CRQC)
What Is a CRQC?
A Cryptographically Relevant Quantum Computer is defined as a quantum computer capable of breaking widely deployed public-key cryptographic algorithms — specifically, factoring 2048-bit RSA moduli or computing discrete logarithms on standardized elliptic curves (e.g., P-256) within a practical timeframe (hours to days).
A CRQC requires:
- Sufficient logical qubits (~4,000–6,000 for RSA-2048)
- Low enough logical error rates (~10⁻¹⁵ per gate) to complete trillions of operations
- A fault-tolerant architecture with real-time error correction
- Practical runtime (not years or decades per computation)
CRQC Timeline Estimates
Estimates for when a CRQC will exist vary dramatically depending on the source, assumptions about hardware progress, and whether breakthrough algorithms or architectures are anticipated.
| Source | Estimated CRQC Timeline | Key Assumptions |
|---|---|---|
| NIST (2024) | “Not before 2035” | Conservative; assumes incremental progress |
| NSA (CNSA 2.0, 2022) | Migrate by 2035 | Implies concern about CRQC by mid-2030s |
| Global Risk Institute (2024) | ~15–20% chance by 2034, ~50% by 2039 | Annual expert survey; wide uncertainty |
| RAND Corporation (2023) | 2035–2050 range | Emphasizes engineering challenges |
| Michele Mosca (2023) | “Within 15–20 years” (from 2023) | Leading quantum risk researcher; 2038–2043 |
| IBM Roadmap (2025) | 100,000 qubits by 2033 | Self-reported; still far below CRQC threshold |
| Google Quantum AI (2024) | “Useful error-corrected QC by 2029” | Focused on scientific applications, not CRQC specifically |
| Chinese Academy of Sciences | Active programs; no public timeline | Significant government investment; opaque progress |
| Optimistic outliers | 2030–2032 | Assume algorithmic breakthroughs or novel architectures |
| Pessimistic outliers | 2050+ or never | Cite fundamental engineering barriers |
CRQC Timeline Visualization
```mermaid
timeline
    title Path to Cryptographically Relevant Quantum Computers
    section Current Era (2024-2027)
        Noisy Intermediate Scale (NISQ) : 1,000-5,000 physical qubits
            : Error rates ~0.1-1%
            : No cryptographic threat
            : Early logical qubit demonstrations
    section Near-Term (2028-2032)
        Early Fault-Tolerant : 10,000-100,000 physical qubits
            : Below-threshold error rates achieved
            : Small-scale error-corrected computations
            : Factor ~50-100 bit numbers with QEC
    section Medium-Term (2033-2038)
        Scaling Era : 100,000-1,000,000 physical qubits
            : Modular quantum computer architectures
            : Factor ~500-1000 bit numbers
            : HARVEST NOW DECRYPT LATER becomes acute concern
    section CRQC Era (2035-2045+)
        Cryptographically Relevant : 4,000,000-20,000,000 physical qubits
            : RSA-2048 and ECC P-256 broken
            : All classical PKC compromised
            : Organizations without PQC are exposed
```
The “Harvest Now, Decrypt Later” Problem
Even though a CRQC may be a decade or more away, the threat is already real for data with long confidentiality requirements. Adversaries — particularly nation-state intelligence agencies — are widely believed to be collecting encrypted traffic today with the intention of decrypting it once a CRQC becomes available.
The critical question for any organization:
Is the secrecy lifetime of my data longer than the time until a CRQC exists?
If you are protecting state secrets, medical records, financial data, or intellectual property that must remain confidential for 10–25 years, the answer may well be yes — meaning you need to migrate to post-quantum cryptography now, not when a CRQC is announced. This concept is explored further in Migration Planning and Strategies.
Mosca’s Theorem formalizes this risk calculation. If:
- x = the time needed to migrate to quantum-safe cryptography
- y = the secrecy lifetime of your data
- z = the time until a CRQC exists
Then if x + y > z, your data is at risk and you must begin migration now. For most large organizations, x is 5–10 years (cryptographic inventory, protocol updates, testing, deployment), y varies by data type, and z is unknown but possibly 10–20 years. The math often demands immediate action. See Cryptographic Risk Assessment for guidance on applying Mosca’s inequality to your organization.
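Mosca's inequality reduces to a one-line check. A sketch with illustrative inputs (the year figures are hypothetical planning numbers, not predictions):

```python
# Mosca's inequality: if migration time + secrecy lifetime > time to CRQC,
# harvested ciphertext is already at risk. Inputs below are illustrative.
def mosca_at_risk(x_migration_years: float, y_secrecy_years: float,
                  z_crqc_years: float) -> bool:
    return x_migration_years + y_secrecy_years > z_crqc_years

# A large enterprise (7-year migration) protecting 15-year medical records,
# against a hypothetical 15-year CRQC horizon:
print(mosca_at_risk(7, 15, 15))   # at risk: migration should already be underway

# Short-lived session data (1-year secrecy) at the same organization:
print(mosca_at_risk(7, 1, 15))    # not at risk under these assumptions
```

The asymmetry is the point: the same organization can be simultaneously at risk for one data class and safe for another, which is why the analysis must be run per data type, not per organization.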
Quantum Algorithms That Threaten Cryptography
Shor’s Algorithm (1994)
Peter Shor’s algorithm solves the integer factoring problem and the discrete logarithm problem in polynomial time on a quantum computer. This is the existential threat to RSA, Diffie-Hellman, DSA, ECDH, and ECDSA.
How it works at a high level:
- Reduce factoring N to finding the period r of the function f(x) = aˣ mod N for a random a
- Use quantum phase estimation (Hadamard + controlled modular exponentiation + inverse QFT) to produce a measurement result correlated with r
- Extract the period r from the measurement result via classical post-processing with continued fractions
- Once r is known (and even), compute gcd(a^(r/2) ± 1, N) to obtain factors of N with high probability
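The continued-fractions step is ordinary classical arithmetic. A sketch using Python's `fractions` module for the toy case N = 15, a = 7 (the readout value 192 is an illustrative example of what the phase-estimation register might return):

```python
from fractions import Fraction

# Classical post-processing of a phase-estimation readout for N = 15, a = 7.
# The register measures an integer y with y / 2^m ~ s / r for some integer s.
m = 8
y = 192  # illustrative readout: 192/256 = 3/4, i.e. s = 3, r = 4

phase = Fraction(y, 2 ** m)
# Continued-fraction expansion with denominators bounded by N recovers s/r:
candidate = phase.limit_denominator(15)
r = candidate.denominator
print(f"{phase} -> convergent {candidate}, candidate period r = {r}")

# Verify the candidate really is a period of a^x mod N:
assert pow(7, r, 15) == 1
```

`Fraction.limit_denominator` performs exactly the bounded continued-fraction search Shor's analysis calls for; if gcd(s, r) > 1 the recovered denominator is a divisor of the true period and the quantum step is simply rerun.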
Resource requirements for RSA-2048:
| Resource | Gidney & Ekerå (2021) | Optimized Estimates (2024) |
|---|---|---|
| Logical qubits | ~4,098 | ~2,048–3,000 |
| T gates | ~2.7 × 10⁹ | ~10⁹ |
| Toffoli gates | ~4.6 × 10⁹ | ~10⁹ |
| Physical qubits (surface code, d=27) | ~20 million | ~4–6 million |
| Runtime | ~8 hours | ~hours |
| Circuit depth | ~10¹² | ~10¹⁰–10¹¹ |
Grover’s Algorithm (1996)
Grover’s algorithm provides a quadratic speedup for unstructured search problems. Given a function with one solution among N possibilities, Grover’s finds it in O(√N) evaluations rather than the classical O(N).
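The amplitude-amplification mechanics fit in a few lines of numpy. A minimal simulation for N = 8 items with one marked index (chosen arbitrarily), showing that ~(π/4)√N oracle calls concentrate nearly all probability on the solution:

```python
import math
import numpy as np

# Minimal Grover simulation over N = 2^n items with a single marked index.
n, marked = 3, 5                               # marked index is an arbitrary choice
N = 2 ** n
state = np.full(N, 1 / math.sqrt(N))           # start in uniform superposition

iterations = math.floor(math.pi / 4 * math.sqrt(N))  # optimal ~ (pi/4) sqrt(N)
for _ in range(iterations):
    state[marked] *= -1                        # oracle: phase-flip the solution
    state = 2 * state.mean() - state           # diffusion: inversion about the mean

print(f"{iterations} iterations; P(marked) = {state[marked] ** 2:.3f}")
```

For N = 8 this takes just 2 iterations to push the success probability above 90%; the classical expectation is N/2 = 4 oracle calls. The quadratic gap only becomes dramatic at cryptographic sizes, where √(2¹²⁸) = 2⁶⁴ is the difference between feasible and infeasible.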
Impact on symmetric cryptography:
| Algorithm | Classical Security | Post-Grover Security | Mitigation |
|---|---|---|---|
| AES-128 | 128-bit | 64-bit | Upgrade to AES-256 |
| AES-192 | 192-bit | 96-bit | Acceptable for most use cases |
| AES-256 | 256-bit | 128-bit | Fully quantum-resistant |
| SHA-256 (preimage) | 256-bit | 128-bit | Fully quantum-resistant |
| SHA-256 (collision) | 128-bit | ~85-bit (BHT algorithm) | Monitor; consider SHA-384/SHA-512 |
| HMAC-SHA-256 | 256-bit | 128-bit | Fully quantum-resistant |
Important nuance: Grover’s algorithm requires O(√N) sequential quantum operations. For AES-128, that means ~2⁶⁴ sequential quantum operations — each requiring coherent quantum computation. The actual cost of running Grover’s against AES-128 on a real quantum computer (including error correction overhead) may exceed the cost of classical brute force. NIST’s position is that AES-256 is sufficient; AES-128 is acceptable for most non-government applications even in a post-quantum world.
Regev’s Algorithm (2023): A Potential Shift
In 2023, Oded Regev proposed a new quantum factoring algorithm that could require significantly fewer quantum gates than Shor’s algorithm — at the cost of requiring more qubits and a classical preprocessing step using lattice reduction.
Key differences from Shor’s:
| Aspect | Shor’s Algorithm | Regev’s Algorithm |
|---|---|---|
| Gate complexity | O(n³) for n-bit number | O(n^(3/2)) — potentially much fewer gates |
| Qubit count | O(n) logical qubits | O(n^(3/2)) — more qubits needed |
| Classical preprocessing | Minimal | Requires lattice reduction (LLL/BKZ) |
| Circuit depth | Very deep | Shallower — potentially more practical |
| Maturity | 30 years of optimization | New; not yet optimized |
The implication for CRQC timelines: if Regev’s algorithm (or future optimizations of it) proves practical, it could enable factoring on a quantum computer with fewer physical qubits and shallower circuits than Shor’s algorithm requires. Several research groups are actively working to optimize and resource-estimate this approach. This is a live area of research that security professionals should monitor through NIST Post-Quantum Standards updates.
Other Relevant Quantum Algorithms
| Algorithm | Problem | Speedup | Cryptographic Impact |
|---|---|---|---|
| Shor’s | Factoring, discrete log | Exponential | Breaks RSA, DH, ECC |
| Grover’s | Unstructured search | Quadratic | Halves symmetric key security |
| Simon’s | Period finding (black-box) | Exponential | Breaks some MACs/block cipher modes in superposition query model |
| BHT (Brassard–Høyer–Tapp) | Collision finding | Cube-root (O(N^(1/3))) | Modest impact on hash collisions |
| Quantum walks | Graph problems, element distinctness | Polynomial (problem-dependent) | Potential impact on graph-based cryptography |
| HHL (Harrow-Hassidim-Lloyd) | Linear systems of equations | Exponential (with caveats) | Potential impact on lattice-based schemes — under active study |
| Regev’s factoring (2023) | Factoring using quantum + classical | Potentially fewer qubits | Could reduce CRQC requirements; not yet practical |
Quantum Error Correction Deep Dive
Types of Quantum Errors
Before diving into error correction codes, it is essential to understand the error landscape. Quantum errors are fundamentally different from classical bit flips:
| Error Type | Description | Physical Cause | Notation |
|---|---|---|---|
| Bit flip (X error) | |0⟩ ↔ |1⟩ transition | Energy exchange with environment (T₁ decay) | Pauli X |
| Phase flip (Z error) | Relative phase randomized | Dephasing from magnetic field fluctuations (T₂) | Pauli Z |
| Bit-phase flip (Y error) | Combined bit and phase flip | Combined T₁ and T₂ processes | Pauli Y = iXZ |
| Leakage | Qubit escapes the computational subspace | Excitation to higher energy levels | Not a Pauli error |
| Crosstalk | Operation on one qubit affects neighbors | Electromagnetic coupling between qubits | Correlated errors |
| Measurement error | Incorrect readout of qubit state | Thermal noise in readout resonators | Classical post-processing error |
Correlated errors (where multiple qubits fail simultaneously due to a common cause) are particularly dangerous because they can defeat error correction codes designed for independent errors. Cosmic ray impacts on superconducting chips, for example, can cause correlated errors across hundreds of qubits simultaneously — a phenomenon Google documented in 2021. Addressing correlated errors is an active area of QEC research.
The Error Correction Threshold Theorem
The threshold theorem is the theoretical foundation for fault-tolerant quantum computing. It states:
If the physical error rate per gate is below a certain threshold (p < p_th), then arbitrarily long quantum computations can be performed with arbitrarily low logical error rates, provided you have enough physical qubits.
For the surface code, p_th ≈ 1%. Current leading hardware achieves physical error rates of 0.1–0.5%, which is below threshold. This is why Google’s Willow result (2024) — demonstrating that adding more physical qubits reduces logical error rates — was genuinely significant. It was the first experimental proof that we are on the right side of the error correction threshold.
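The threshold theorem can be made quantitative with the standard heuristic for surface-code error suppression, p_L ≈ A · (p/p_th)^((d+1)/2). The sketch below uses illustrative constants (A ≈ 0.1 is a common choice; the exact prefactor and exponent depend on the decoder and noise model, and the table later in this section reflects somewhat more optimistic constants), so treat the outputs as order-of-magnitude estimates only:

```python
# Heuristic surface-code logical error rate: p_L ~ A * (p/p_th)^((d+1)/2).
# Below threshold (p < p_th), raising the code distance d suppresses
# errors exponentially; above threshold, more qubits make things worse.

def logical_error_rate(p: float, d: int,
                       p_th: float = 0.01, A: float = 0.1) -> float:
    return A * (p / p_th) ** ((d + 1) / 2)

# At p = 0.1% (a tenth of the ~1% surface-code threshold), each +2 in
# distance buys roughly another factor-of-10 suppression:
for d in (3, 5, 7):
    print(f"d={d}: p_L ~ {logical_error_rate(1e-3, d):.1e}")

# Above threshold the trend reverses -- which is why Willow's
# below-threshold demonstration was the milestone that mattered:
assert logical_error_rate(2e-2, 7) > logical_error_rate(2e-2, 3)
```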
Surface Code Architecture
graph TB
subgraph SURFACE["Surface Code Patch (Distance d=5)"]
subgraph ROW1["Row 1"]
D1["Data"] --- A1["Ancilla<br/>(Z-check)"] --- D2["Data"] --- A2["Ancilla<br/>(X-check)"] --- D3["Data"]
end
subgraph ROW2["Row 2"]
A3["Ancilla<br/>(X-check)"] --- D4["Data"] --- A4["Ancilla<br/>(Z-check)"] --- D5["Data"] --- A5["Ancilla<br/>(X-check)"]
end
subgraph ROW3["Row 3"]
D6["Data"] --- A6["Ancilla<br/>(Z-check)"] --- D7["Data"] --- A7["Ancilla<br/>(X-check)"] --- D8["Data"]
end
end
subgraph DECODER["Classical Decoder"]
SYN["Syndrome<br/>Measurements"] --> DEC["Minimum Weight<br/>Perfect Matching"] --> CORR["Correction<br/>Operations"]
end
SURFACE --> SYN
style SURFACE fill:#1a1a2e,stroke:#16213e,color:#e6e6e6
style DECODER fill:#0f3460,stroke:#16213e,color:#e6e6e6
How it works:
- Data qubits store the encoded logical quantum state
- Ancilla qubits are measured every error correction cycle (~1 µs) to extract syndromes — patterns that indicate which errors likely occurred
- A classical decoder processes syndrome data in real-time and determines correction operations
- Corrections are applied (or, equivalently, tracked in software via the “Pauli frame”)
Code Distance and Error Suppression
The code distance d determines the error-correcting capability of the surface code. A distance-d code can correct up to ⌊(d-1)/2⌋ errors.
| Code Distance (d) | Physical Qubits per Logical Qubit | Errors Correctable | Logical Error Rate (at p=0.1%) | Use Case |
|---|---|---|---|---|
| 3 | ~17 | 1 | ~10⁻³ | Demonstrations only |
| 5 | ~49 | 2 | ~10⁻⁵ | Early fault-tolerant experiments |
| 7 | ~97 | 3 | ~10⁻⁷ | Small algorithms with moderate fidelity |
| 13 | ~337 | 6 | ~10⁻¹² | Useful quantum chemistry computations |
| 21 | ~881 | 10 | ~10⁻¹⁸ | Approaching cryptographic requirements |
| 27 | ~1,457 | 13 | ~10⁻²³ | RSA-2048 factoring requirement |
| 31 | ~1,921 | 15 | ~10⁻²⁶ | Conservative margin for Shor’s algorithm |
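The qubit counts in the table follow directly from the rotated surface code layout: a distance-d patch uses d² data qubits plus d² − 1 ancillas, and corrects up to ⌊(d−1)/2⌋ errors. A minimal sketch of that arithmetic:

```python
# Resource arithmetic for a rotated surface code patch of distance d:
# d^2 data qubits + (d^2 - 1) ancilla qubits per logical qubit.

def physical_qubits(d: int) -> int:
    return 2 * d * d - 1

def correctable_errors(d: int) -> int:
    return (d - 1) // 2

for d in (3, 5, 7, 13, 21, 27, 31):
    print(f"d={d:2d}: {physical_qubits(d):5,} physical qubits per logical, "
          f"corrects {correctable_errors(d):2d} errors")
```

At d = 27, the ~1,500-to-1 physical-to-logical ratio means the few thousand logical qubits needed for RSA-2048 already imply millions of physical qubits, before counting distillation factories.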
Magic State Distillation: The Hidden Cost
Clifford gates (H, S, CNOT) are relatively cheap on surface codes, implemented transversally or via lattice surgery. But the T gate (required for universal computation) has no cheap fault-tolerant implementation: the Eastin-Knill theorem rules out any code with a fully transversal universal gate set.
Instead, T gates require magic state distillation — a process that:
- Prepares many noisy copies of a special “magic state”
- Distills them into fewer, higher-fidelity copies through multiple rounds of error correction
- Uses these purified states to implement T gates
The overhead is enormous:
| Distillation Level | Input Magic States | Output Magic States | Output Error Rate |
|---|---|---|---|
| Level 1 | 15 | 1 | ~35p³ (p = input error rate) |
| Level 2 | 225 (15 × 15) | 1 | ~35(35p³)³ |
| Level 3 | 3,375 | 1 | Negligible |
For Shor’s algorithm on RSA-2048, billions of T gates are needed. Each one requires magic state distillation, with dedicated “distillation factories” consuming a significant fraction of the total physical qubits. Current estimates dedicate 40–60% of total physical qubits to magic state distillation alone.
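The cascaded cost of the 15-to-1 protocol can be sketched numerically. This uses the ~35p³ suppression factor quoted in the table (the exact constant varies with the specific distillation protocol), so the numbers are illustrative:

```python
# Cascaded 15-to-1 magic state distillation:
# each level consumes 15 inputs per output and maps the error rate
# p -> ~35 * p^3 (protocol-dependent constant).

def distill(p: float, levels: int) -> tuple[int, float]:
    """Return (raw magic states consumed, output error rate)."""
    inputs = 15 ** levels
    for _ in range(levels):
        p = 35 * p ** 3
    return inputs, p

for levels in (1, 2, 3):
    inputs, err = distill(1e-2, levels)
    print(f"level {levels}: {inputs:5,} raw states -> 1 output, "
          f"error ~ {err:.1e}")
```

Starting from a 1% input error rate, two levels already reach ~10⁻¹² — which is why two or three levels, repeated billions of times for Shor's algorithm, dominate the hardware budget.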
Alternative QEC Approaches and Recent Advances
Beyond Surface Codes
The surface code is not the only path forward. Several alternative QEC approaches could dramatically change the CRQC timeline:
| QEC Code | Key Advantage | Status (2026) | Potential Impact on CRQC |
|---|---|---|---|
| Surface Code | Well-understood; compatible with 2D hardware | Below-threshold demos (Google Willow) | Baseline: ~20M physical qubits for RSA-2048 |
| Color Codes | Transversal Clifford gates; lower overhead | Experimental demonstrations | Could reduce qubit count by 30–50% |
| Quantum LDPC Codes | Much higher encoding rate; fewer physical qubits | Theoretical with early experiments | Could reduce qubits by 10–100×; potential game-changer |
| Concatenated Codes | Modular error correction | Original QEC approach; less efficient than surface | Unlikely to change timeline significantly |
| Floquet Codes | Dynamic error correction via periodic measurements | Active research | Could improve threshold and reduce overhead |
| Bosonic Codes (Cat/GKP) | Hardware-efficient; fewer physical qubits | Demonstrations in superconducting cavities | Could reduce overhead by encoding in oscillator modes |
Quantum LDPC Codes: The Potential Game-Changer
Quantum Low-Density Parity-Check (qLDPC) codes have emerged as the most promising alternative to surface codes. Unlike surface codes, which encode one logical qubit per ~2d² physical qubits, qLDPC codes can encode multiple logical qubits with a much better physical-to-logical ratio.
In 2024, IBM researchers proposed the gross code — a qLDPC code family that could potentially reduce the physical qubit requirement for RSA-2048 factoring by an order of magnitude. If these codes prove practical, the CRQC timeline could accelerate by several years.
However: qLDPC codes require non-local connectivity between qubits (connections beyond nearest neighbors), which is challenging for superconducting hardware but more natural for trapped-ion and neutral-atom architectures. The practical implementation of qLDPC codes remains an open research question.
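The encoding-rate advantage can be made concrete. The sketch below compares the reported gross code parameters ([[144, 12, 12]]: 12 logical qubits at distance 12 in 144 data plus 144 check qubits) against hosting the same 12 logical qubits in separate surface code patches; this is a cruder comparison than the one in IBM's paper, so treat the ratio as illustrative:

```python
# Encoding-rate comparison: gross code vs per-logical surface code
# patches. Counts include check/ancilla qubits in both cases.

def surface_code_qubits(logical: int, d: int) -> int:
    """One rotated surface code patch (2d^2 - 1 qubits) per logical qubit."""
    return logical * (2 * d * d - 1)

gross_total = 144 + 144                      # [[144,12,12]]: data + checks
surface_total = surface_code_qubits(12, 13)  # nearest odd distance >= 12

print(f"gross code  : 12 logical qubits in {gross_total} physical")
print(f"surface code: 12 logical qubits in {surface_total:,} physical")
print(f"saving      : ~{surface_total / gross_total:.0f}x fewer physical qubits")
```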
The T Gate Problem: Why Universal Fault Tolerance Is So Hard
The difficulty of implementing T gates fault-tolerantly deserves special attention because it is the single largest contributor to CRQC resource estimates.
Why T Gates Are Special
The Clifford group (generated by H, S, and CNOT) is efficiently simulable on a classical computer by the Gottesman-Knill theorem. This means a quantum computer running only Clifford gates provides zero advantage over a classical computer. It is the T gate that adds computational universality and enables quantum speedup.
Unfortunately, the Eastin-Knill theorem proves that no quantum error-correcting code can implement a universal gate set entirely through transversal (qubit-by-qubit) operations. For the surface code, the T gate is the “hard” gate that requires magic state distillation.
Practical Impact on CRQC Resources
Consider the resource chain for a single logical T gate in a surface-code quantum computer:
- Prepare ~15 noisy magic states (each on its own logical qubit patch)
- Distill them into 1 higher-fidelity magic state (consuming ~15 logical qubit patches for one round)
- Inject the distilled state via a gate teleportation circuit
- Repeat for every T gate in the algorithm — billions of times for RSA-2048
This means the quantum computer must maintain dedicated “distillation factories” running continuously throughout the computation, consuming a large fraction of the total hardware. Reducing T gate count through better circuit compilation is one of the most active areas of quantum algorithm optimization.
Recent advances in non-Clifford gate synthesis and catalyst states have reduced the T count for Shor’s algorithm by factors of 2–5x compared to early estimates, but the overhead remains the dominant cost.
Real-Time Decoding: The Classical Bottleneck
A commonly overlooked challenge is that quantum error correction requires real-time classical processing that keeps pace with the quantum computer.
Every ~1 µs, syndrome measurements from millions of qubits must be:
- Read out via control electronics
- Transmitted to classical decoders
- Decoded using algorithms like Minimum Weight Perfect Matching (MWPM) or Union-Find
- Fed back as corrections before the next error correction cycle
This creates a stringent latency requirement: the entire decode cycle must complete in ~1 µs. For millions of qubits, this requires custom classical hardware (FPGAs or ASICs) costing potentially billions of dollars in addition to the quantum hardware itself.
| Decoding Aspect | Requirement | Current Status |
|---|---|---|
| Syndrome bandwidth | ~TB/s for millions of qubits | Demonstrated for ~100 qubits |
| Decode latency | ~1 µs per round | Achieved for small codes on FPGAs |
| Decoder accuracy | Near-optimal matching | MWPM and Union-Find algorithms available |
| Scalability | Millions of qubits in real-time | Major unsolved engineering challenge |
| Power consumption | Within cryostat thermal budget | Active research into cryo-CMOS |
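The bandwidth figure in the table is straightforward to reproduce. The sketch below assumes one syndrome bit per ancilla qubit per ~1 µs cycle, with roughly half of all physical qubits serving as ancillas; real systems add readout metadata and soft (analog) information, so actual rates are higher:

```python
# Back-of-the-envelope syndrome bandwidth for surface-code QEC.
# Assumption: ~half the physical qubits are measurement ancillas,
# each producing one syndrome bit per error correction cycle.

def syndrome_bandwidth_gbps(physical_qubits: int,
                            cycle_us: float = 1.0) -> float:
    ancillas = physical_qubits // 2
    bits_per_second = ancillas * (1e6 / cycle_us)  # one bit per cycle
    return bits_per_second / 1e9                   # -> Gbit/s

for n in (1_000, 100_000, 20_000_000):
    print(f"{n:>12,} physical qubits: "
          f"~{syndrome_bandwidth_gbps(n):,.1f} Gbit/s of syndrome data")
```

At the ~20 million qubits of the baseline RSA-2048 estimate this gives ~10 Tbit/s (over a terabyte per second) of syndrome data that must be decoded with ~1 µs latency, continuously, for hours.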
Quantum Networking and Distributed Quantum Computing
A development worth monitoring is the emergence of quantum networking — connecting multiple quantum processors via entanglement distribution over optical fiber or free-space links. This has two implications for cryptographic security:
Quantum Key Distribution (QKD)
QKD uses quantum mechanics to distribute encryption keys with information-theoretic security. If an eavesdropper intercepts qubits in transit, the disturbance is detectable. The main protocol families are:
| Protocol | How It Works | Key Rate | Distance Limit | Status |
|---|---|---|---|---|
| BB84 | Sender encodes bits in random bases; receiver measures in random bases; they compare bases | Moderate | ~100 km (fiber), ~1,200 km (satellite) | Commercially deployed (ID Quantique, Toshiba) |
| E91 (Ekert) | Uses entangled photon pairs; security from Bell inequality violation | Lower | Similar to BB84 | Research demonstrations |
| CV-QKD | Encodes in continuous variables (phase/amplitude of coherent states) | Higher throughput | ~50–80 km | Emerging commercial (ThinkQuantum) |
Important caveat for security professionals: QKD is not a replacement for post-quantum cryptography. QKD requires dedicated quantum hardware, has severe distance limitations, cannot authenticate parties (you still need classical authentication), and does not protect stored data. NIST, NSA, and NCSC (UK) have all published guidance stating that PQC — not QKD — is the recommended path for most organizations. QKD has niche applications in high-security point-to-point links (government, military, financial backbone networks) but is not a general-purpose solution.
Distributed Quantum Computing
Connecting multiple quantum processors via quantum links could enable a distributed CRQC — assembling cryptographically relevant capability from smaller, individually insufficient machines. This is analogous to how classical distributed computing enables problems no single machine could solve alone.
Current quantum networking is in its infancy (entanglement distribution rates of ~1 pair/second over metro distances), but if quantum networks mature, the CRQC timeline could be affected. A nation-state might assemble a CRQC from many moderate-scale quantum processors rather than building a single monolithic machine.
This is speculative but worth tracking. For now, the monolithic approach dominates all serious CRQC resource estimates.
Implications for Cryptographic Migration Planning
What We Know with Confidence
- Shor’s algorithm is mathematically correct. The algorithm works; the question is purely about hardware capability.
- Error correction works in principle. Google’s Willow (2024) demonstrated below-threshold logical error reduction.
- The physical qubit requirement is millions, not thousands. Even optimistic estimates require ~4 million physical qubits for RSA-2048.
- Current hardware is at ~1,000–1,400 physical qubits. Three to four orders of magnitude short.
- Symmetric cryptography is safe with appropriate key sizes (AES-256, SHA-384+).
What Remains Uncertain
- Hardware scaling trajectory — Will qubit counts double annually? Every 18 months? Every 3 years?
- Algorithmic breakthroughs — Could a new algorithm (like Regev’s 2023 factoring approach) dramatically reduce resource requirements?
- QEC code improvements — Could qLDPC codes or other approaches reduce the physical-to-logical ratio by 10× or more?
- Nation-state programs — Are classified quantum computing programs significantly ahead of publicly known capabilities?
- Alternative computing paradigms — Could analog quantum computing, quantum annealing, or hybrid approaches provide unexpected shortcuts?
Decision Framework for Security Professionals
Given these uncertainties, the prudent approach for security professionals is:
| Data Sensitivity | Secrecy Lifetime | Recommended Action | Urgency |
|---|---|---|---|
| Top Secret / National Security | 50+ years | Migrate to PQC immediately; implement hybrid schemes | Critical — now |
| Financial / Healthcare PII | 10–25 years | Begin PQC migration planning; inventory cryptographic assets | High — 2025–2027 |
| Enterprise / Commercial | 5–10 years | Monitor NIST standards; plan migration timeline | Medium — 2026–2030 |
| Transient / Session Data | Hours to days | Current cryptography sufficient; plan for future compliance | Low — align with deprecation schedules |
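The table above is an instance of Mosca's inequality (see Mosca 2018 in Further Reading): if the time your data must stay secret (x) plus your migration time (y) exceeds the time until a CRQC exists (z), you are already exposed to harvest-now-decrypt-later. A minimal sketch of that decision rule, with illustrative inputs:

```python
# Mosca's inequality as a decision rule: at risk if x + y > z, where
# x = secrecy lifetime, y = migration time, z = years until a CRQC.
# All three are estimates -- sweep z across the expert-estimate range.

def at_risk(secrecy_years: float, migration_years: float,
            crqc_years: float) -> bool:
    return secrecy_years + migration_years > crqc_years

# Example: healthcare PII with a 20-year secrecy lifetime and a
# 5-year migration, against CRQC arrival in 9, 14, or 30 years:
for crqc in (9, 14, 30):
    status = "AT RISK" if at_risk(20, 5, crqc) else "covered"
    print(f"CRQC in {crqc:2d} years: {status}")
```

Note that the verdict flips only at the optimistic end of the CRQC range, which is exactly why long-lifetime data warrants migration planning now rather than waiting for timeline certainty.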
For detailed guidance on migration strategies, see NIST Post-Quantum Standards and Migration Planning and Strategies.
Geopolitical Dimension
Quantum computing development is a national security priority for multiple nations. The competitive landscape matters for CRQC timeline assessment:
| Country/Region | Estimated Public Investment (Cumulative) | Key Programs | Notable Capabilities |
|---|---|---|---|
| China | >$15 billion | Chinese Academy of Sciences; USTC (Hefei) | Jiuzhang (photonic advantage); Zuchongzhi (superconducting); extensive satellite QKD network |
| United States | >$5 billion (public) + significant private | DARPA US2QC; DOE quantum centers; NSF QLCIs | IBM, Google, IonQ, Quantinuum (private sector leads) |
| European Union | >€7 billion | Quantum Flagship Programme | Strong theory; Pasqal, IQM (startups) |
| United Kingdom | >£2.5 billion | National Quantum Strategy (2024) | Oxford Ionics; Riverlane (decoding) |
| Canada | >$1.5 billion (CAD) | National Quantum Strategy (2023) | Xanadu (photonic); D-Wave (annealing); Waterloo theory |
| Australia | >$1 billion (AUD) | Silicon Quantum Computing; CSIRO | Phosphorus-in-silicon approach (UNSW); unique technology path |
| Japan | >¥150 billion | Moonshot Program; RIKEN/Fujitsu partnership | Superconducting focus; strong industry collaboration |
Classification concern: The most advanced quantum computing research may be classified, particularly in China, the United States, and Russia. Public estimates of quantum progress represent a lower bound on actual capability. This uncertainty should factor into risk assessments — assume adversaries may be ahead of publicly known milestones.
Key Takeaways
- Qubits are not parallel classical bits. Superposition and entanglement enable specific algorithmic speedups, not general parallelism. Only problems with exploitable mathematical structure benefit.
- BQP captures what quantum computers can efficiently solve. Integer factoring and discrete logarithms are in BQP; lattice problems and hash functions are not (as far as we know). This asymmetry is why post-quantum cryptography works.
- Current quantum hardware is nowhere near cryptographically relevant. The gap between ~1,400 physical qubits (2026) and ~4–20 million physical qubits (CRQC) is enormous. Vendor qubit counts are misleading without error rates and connectivity context.
- Quantum error correction is the bottleneck, not qubit count. Surface codes require ~1,500 physical qubits per logical qubit, and magic state distillation consumes 40–60% of total resources. The overhead is staggering.
- CRQC timelines are genuinely uncertain. Expert estimates range from 2035 to 2050+. Plan for the possibility of earlier timelines while recognizing that the engineering challenges are formidable.
- “Harvest now, decrypt later” makes the threat immediate. Even if a CRQC is 15 years away, data encrypted today with RSA or ECDH may be at risk if it must remain confidential for more than 15 years.
- Migration decisions should be risk-based, not prediction-based. Do not wait for certainty about CRQC timelines. Assess your data’s secrecy lifetime against the range of CRQC estimates and act accordingly.
Common Misconceptions in the Cybersecurity Community
To conclude, here are the most frequent misunderstandings about quantum computing encountered in cybersecurity discussions:
| Misconception | Reality |
|---|---|
| “Quantum computers try every answer simultaneously” | Quantum parallelism requires carefully designed interference patterns. Only specific problems with mathematical structure can be exploited. Random brute-force does not benefit from quantum speedup. |
| “We need to worry about quantum computers breaking AES” | Grover’s algorithm provides only a quadratic speedup. AES-256 retains 128-bit security post-quantum — more than sufficient. The real threat is to public-key cryptography (RSA, ECC, DH). |
| “Quantum-resistant means quantum-proof” | Post-quantum algorithms are believed to be resistant to known quantum algorithms. No mathematical proof of security exists against all possible quantum attacks. This is analogous to classical cryptography — RSA is not proven hard either. |
| “A 1,000-qubit quantum computer can break encryption” | Breaking RSA-2048 requires millions of error-corrected physical qubits. Current 1,000-qubit systems cannot perform any cryptographically relevant computation. |
| “Quantum computing will make all encryption obsolete” | Symmetric cryptography (AES-256) and hash functions (SHA-384+) remain secure. Only public-key cryptography based on factoring and discrete logs is at risk. |
| “We have plenty of time before quantum threatens us” | The harvest-now-decrypt-later threat means data encrypted today may be decrypted in the future. Long-lived secrets need protection now. |
| “QKD solves the quantum threat” | QKD addresses key distribution only, requires dedicated hardware, has distance limitations, and does not protect stored data. PQC is the comprehensive solution for most organizations. |
Further Reading
- Gidney & Ekera (2021) — “How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits” — the definitive resource estimate for Shor’s algorithm on RSA-2048
- Google Quantum AI (2024) — Willow below-threshold error correction results
- NIST IR 8413 — Status Report on the Third Round of the NIST Post-Quantum Cryptography Standardization Process
- NSA CNSA 2.0 (2022) — Commercial National Security Algorithm Suite 2.0 guidance
- Global Risk Institute Annual Survey — Expert assessments of quantum computing risk timelines
- Nielsen & Chuang — Quantum Computation and Quantum Information — the standard textbook reference
- Regev (2023) — “An Efficient Quantum Factoring Algorithm” — potential improvement over Shor’s resource requirements
- Litinski (2019) — “A Game of Surface Codes” — accessible introduction to lattice surgery and magic state distillation
- Mosca (2018) — “Cybersecurity in an Era with Quantum Computers” — foundational risk framework for the harvest-now-decrypt-later threat
Next Steps
For the next step in understanding the post-quantum landscape, continue to NIST Post-Quantum Standards, which covers the specific algorithms (ML-KEM, ML-DSA, SLH-DSA) selected to replace the vulnerable classical cryptographic schemes discussed on this page.
If you want to understand the specific lattice, code-based, and hash-based mathematical problems that resist quantum attack, see Post-Quantum Algorithm Families.
For practical guidance on when and how to begin migrating your organization’s cryptographic infrastructure, see Migration Planning and Strategies.