Shor's Algorithm & Grover's Algorithm

Why These Two Algorithms Matter

The entire post-quantum cryptography migration — billions of dollars in infrastructure changes, years of standardization work, urgent government mandates — traces back to two quantum algorithms. Shor’s algorithm (1994) demonstrates that a sufficiently large quantum computer can factor integers and compute discrete logarithms in polynomial time, shattering the mathematical assumptions underneath RSA, Diffie-Hellman, and elliptic curve cryptography. Grover’s algorithm (1996) provides a quadratic speedup for unstructured search, effectively halving the security level of symmetric ciphers and hash functions.

Understanding these algorithms — not just their existence, but their mechanics, resource requirements, and practical limitations — is essential for any security professional making migration decisions. The difference between “quantum computers will break everything” and “quantum computers will break specific things under specific conditions on specific timelines” is the difference between panic and informed planning.

This page provides a rigorous walkthrough of both algorithms, current resource estimates for cryptographically relevant attacks, and an honest assessment of the gap between theoretical capability and engineering reality.


Shor’s Algorithm

The Problem: Integer Factoring

RSA’s security rests on the computational hardness of factoring the product of two large primes. Given a public key N = p × q where p and q are each ~1,024-bit primes (for RSA-2048), the best-known classical algorithm — the General Number Field Sieve (GNFS) — requires sub-exponential time:

GNFS complexity: exp(O(n^(1/3) × (log n)^(2/3))), where n is the bit length of N

For RSA-2048, this translates to roughly 2^112 operations — well beyond any classical computing capability for the foreseeable future.

Peter Shor showed in 1994 that a quantum computer can factor N in polynomial time: O((log N)^2 × (log log N) × (log log log N)). For RSA-2048, this reduces the effective work to something manageable in hours or days on a sufficiently large quantum computer, rather than billions of years.

The Core Insight: Reducing Factoring to Period Finding

Shor’s algorithm does not attack factoring directly. Instead, it exploits a well-known number-theoretic reduction: factoring an integer N can be reduced to finding the period of a modular exponentiation function.

Here is the chain of reasoning:

  1. Choose a random integer a such that 1 < a < N and gcd(a, N) = 1. (If gcd(a, N) > 1, you have already found a factor — but this is astronomically unlikely for large N.)

  2. Define the function f(x) = a^x mod N. This function is periodic. There exists some integer r (the order of a modulo N) such that a^r ≡ 1 (mod N), and f(x) = f(x + r) for all x.

  3. If you can find r, and r is even, then:

    • a^r − 1 ≡ 0 (mod N)
    • (a^(r/2) − 1)(a^(r/2) + 1) ≡ 0 (mod N)
    • With high probability, gcd(a^(r/2) − 1, N) or gcd(a^(r/2) + 1, N) is a non-trivial factor of N.
  4. Classical period finding is hard — it requires exponential time for the same reason factoring does. But quantum period finding, using the Quantum Fourier Transform, is efficient.

The entire algorithm is a hybrid classical-quantum procedure. The quantum computer handles the period-finding step; everything else — the random selection, GCD computation, verification — runs on a classical machine.
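This division of labor can be sketched in a few lines of Python. The find_order function below is a classical brute-force stand-in for the quantum period-finding subroutine, so the sketch works only for toy moduli, but the surrounding classical logic is exactly the chain of reasoning above:

```python
import math
import random

def find_order(a, N):
    """Classical stand-in for quantum period finding: the smallest r with
    a^r = 1 (mod N). Exponential in the bit length of N, which is exactly
    the step Shor's algorithm makes efficient."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_factor(N):
    """Hybrid outline of Shor's algorithm (toy odd composites only)."""
    while True:
        a = random.randrange(2, N)
        if (g := math.gcd(a, N)) > 1:
            return g                    # lucky guess: a shares a factor with N
        r = find_order(a, N)            # the quantum subroutine, simulated classically
        if r % 2 == 1:
            continue                    # odd order: retry with a new a
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                    # a^(r/2) = -1 (mod N): retry
        return math.gcd(y - 1, N)       # guaranteed non-trivial at this point

print(shor_factor(15))                  # prints 3 or 5
```

Since y satisfies y² ≡ 1 (mod N) with y ≢ ±1, both gcd(y − 1, N) and gcd(y + 1, N) are non-trivial factors, which is why the final step never needs to retry.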

Step-by-Step Walkthrough

The following walkthrough traces the full algorithm at a level that makes the quantum mechanics concrete without requiring a physics background.

Step 1: Classical Preprocessing

Choose a random a with 1 < a < N. Compute gcd(a, N) using the Euclidean algorithm. If the result is not 1, output the factor and stop. Otherwise, proceed to the quantum subroutine.

Step 2: Prepare the Quantum Registers

Initialize two quantum registers:

  • Register 1 (input register): n qubits, where n = ⌈log₂ N⌉, prepared in a uniform superposition of all values from 0 to 2^n − 1 using Hadamard gates on each qubit. (Rigorous treatments give this register 2n qubits, so that its size 2^(2n) is at least N² and the continued-fractions step later has enough precision to recover the period; this walkthrough writes 2^n for the register size in either case.)
  • Register 2 (output register): n qubits, initialized to |0⟩.

After applying Hadamard gates to Register 1, the combined state is:

|ψ⟩ = (1/√2^n) × Σ |x⟩|0⟩    for x = 0 to 2^n − 1

This is the power of quantum computing: Register 1 simultaneously represents every possible input value.

Step 3: Compute Modular Exponentiation

Apply the unitary operation that maps |x⟩|0⟩ → |x⟩|a^x mod N⟩. This is the most resource-intensive step in the entire algorithm. The state becomes:

|ψ⟩ = (1/√2^n) × Σ |x⟩|a^x mod N⟩

Every value of f(x) = a^x mod N is now computed simultaneously across the superposition. Critically, because f is periodic with period r, the output register contains repeated values — all inputs x that map to the same output form an arithmetic progression with common difference r.

Step 4: Apply the Quantum Fourier Transform

Apply the Quantum Fourier Transform (QFT) to Register 1. The QFT is the quantum analog of the discrete Fourier transform, and it maps the periodic structure in the input register to sharp peaks at frequencies that are multiples of 2^n / r.

The QFT over 2^n elements maps:

|x⟩ → (1/√2^n) × Σ exp(2πi × x × k / 2^n) |k⟩    for k = 0 to 2^n − 1

Because the input register’s amplitudes have period r, the QFT concentrates the probability amplitude at values of k that are close to integer multiples of 2^n / r. All other values interfere destructively and have negligible probability.
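This peak structure can be checked classically at toy scale with an ordinary discrete Fourier transform. The sketch below (plain Python; the 6-qubit register and period r = 4 are illustrative choices, with r dividing 2^n so the peaks are exact) builds the comb of amplitudes left in Register 1 after Register 2 is measured, then transforms it:

```python
import cmath
import math

n_qubits = 6
Q = 2 ** n_qubits        # size of Register 1 (2^n)
r = 4                    # period of f(x) = a^x mod N; r divides Q here, so peaks are exact
offset = 2               # which residue class survived the measurement of Register 2

# Amplitude profile of Register 1 after Register 2 is measured:
# uniform over the comb {offset, offset + r, offset + 2r, ...}, zero elsewhere.
support = [x for x in range(Q) if x % r == offset]
norm = 1 / math.sqrt(len(support))
amp = [norm if x % r == offset else 0.0 for x in range(Q)]

# Discrete Fourier transform of the amplitudes (what the QFT computes).
freq = [sum(amp[x] * cmath.exp(2j * cmath.pi * x * k / Q) for x in range(Q)) / math.sqrt(Q)
        for k in range(Q)]
prob = [abs(c) ** 2 for c in freq]

peaks = [k for k in range(Q) if prob[k] > 1e-9]
print(peaks)             # [0, 16, 32, 48]: exactly the multiples of Q/r = 16
```

When r does not divide 2^n, the peaks are slightly smeared around the multiples of 2^n / r rather than exact, which is why the continued-fractions step in the next section is needed.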

Step 5: Measurement

Measure Register 1. The result will be some value k ≈ j × 2^n / r for some integer j. From this measured value, you can extract r using the continued fractions algorithm — a classical post-processing step that efficiently finds the best rational approximation to k / 2^n.
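Python's fractions module can stand in for this post-processing step. The numbers below are illustrative: a toy modulus N = 35 with a = 2 (true order r = 12), an 11-qubit input register, and a hypothetical measurement outcome k = 853 ≈ 5 × 2048 / 12:

```python
from fractions import Fraction

N = 35                  # toy modulus with a = 2: true order r = 12 (2^12 = 4096 = 117*35 + 1)
Q = 2 ** 11             # input register size, chosen so Q >= N^2
k = 853                 # hypothetical measurement outcome, close to j * Q / r = 5 * 2048 / 12

# Best rational approximation to k/Q with denominator below N yields j/r in lowest terms.
approx = Fraction(k, Q).limit_denominator(N - 1)
r = approx.denominator
print(approx, r)        # 5/12 12  -> candidate period r = 12

assert pow(2, r, N) == 1    # verification step: 2^12 = 1 (mod 35)
```

When the random j happens to share a factor with r, the recovered denominator is a proper divisor of r; the verification a^r ≡ 1 (mod N) then fails and the algorithm simply reruns.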

Step 6: Classical Post-Processing

Given the candidate period r:

  1. Verify that a^r ≡ 1 (mod N).
  2. If r is odd, repeat the algorithm with a new random a.
  3. If r is even, compute gcd(a^(r/2) ± 1, N).
  4. If either GCD yields a non-trivial factor, output it. Otherwise, repeat.

The algorithm succeeds with probability at least 1 − 1/2^k after k repetitions. In practice, it typically finds a factor within a few attempts.

flowchart TD
    A["Choose random a, 1 < a < N"] --> B{"gcd(a, N) = 1?"}
    B -- "No → factor found" --> Z["Output factor"]
    B -- "Yes" --> C["Prepare superposition<br/>|0⟩ + |1⟩ + ... + |2ⁿ-1⟩"]
    C --> D["Compute a^x mod N<br/>in superposition"]
    D --> E["Apply Quantum<br/>Fourier Transform"]
    E --> F["Measure → get k ≈ j·2ⁿ/r"]
    F --> G["Continued fractions<br/>→ extract r"]
    G --> H{"r even AND<br/>a^(r/2) ≢ -1 (mod N)?"}
    H -- "No" --> A
    H -- "Yes" --> I["Compute<br/>gcd(a^(r/2) ± 1, N)"]
    I --> J{"Non-trivial<br/>factor found?"}
    J -- "No" --> A
    J -- "Yes" --> Z

    style A fill:#1a1a2e,stroke:#e94560,color:#eee
    style C fill:#1a1a2e,stroke:#0f3460,color:#eee
    style D fill:#1a1a2e,stroke:#0f3460,color:#eee
    style E fill:#1a1a2e,stroke:#0f3460,color:#eee
    style F fill:#1a1a2e,stroke:#0f3460,color:#eee
    style G fill:#1a1a2e,stroke:#e94560,color:#eee
    style I fill:#1a1a2e,stroke:#e94560,color:#eee
    style Z fill:#16213e,stroke:#e94560,color:#eee

The Role of the Quantum Fourier Transform

The QFT is the engine that makes Shor’s algorithm work. It is worth understanding why.

A periodic signal in the time domain becomes a set of sharp peaks in the frequency domain — this is the fundamental property of Fourier transforms, classical or quantum. The QFT exploits this: the periodic structure of a^x mod N in the computational basis is transformed into a frequency-domain representation where the period r can be read off from the peak positions.

What makes the quantum Fourier transform special is its efficiency. The classical discrete Fourier transform over N elements requires O(N log N) operations (via the FFT). The QFT over N elements requires only O((log N)^2) quantum gates. This exponential speedup is possible because the QFT operates on quantum amplitudes — it processes 2^n amplitudes using only n qubits and O(n^2) gates.

The QFT circuit decomposes into a sequence of Hadamard gates and controlled phase-rotation gates. For an n-qubit register:

  1. Apply a Hadamard gate to the first qubit.
  2. Apply controlled phase rotations from all subsequent qubits to the first, with rotation angles π/2, π/4, …, π/2^(n−1).
  3. Repeat the pattern for the second qubit, then the third, and so on.
  4. Reverse the order of the qubits (swap first with last, etc.).

The total gate count is n(n−1)/2 controlled rotations plus n Hadamard gates — quadratic in the number of qubits, which is logarithmic in N. This is the source of Shor’s polynomial-time complexity.
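The decomposition is small enough to verify by direct simulation. The following sketch (plain Python, three qubits, qubit 0 treated as the most significant bit) applies exactly the gate sequence above to a state vector and checks the result against the defining DFT amplitudes:

```python
import cmath
import math

n = 3                                     # number of qubits in the register
Q = 2 ** n

def bit(i, q):
    """Bit of index i at qubit q (qubit 0 = most significant)."""
    return (i >> (n - 1 - q)) & 1

def apply_h(state, q):
    """Hadamard gate on qubit q of an n-qubit state vector."""
    out = state[:]
    for i in range(Q):
        if bit(i, q) == 0:
            j = i | (1 << (n - 1 - q))    # partner index with qubit q set
            out[i] = (state[i] + state[j]) / math.sqrt(2)
            out[j] = (state[i] - state[j]) / math.sqrt(2)
    return out

def apply_cphase(state, q1, q2, theta):
    """Controlled phase rotation: multiply by e^(i*theta) where both qubits are 1."""
    return [a * cmath.exp(1j * theta) if bit(i, q1) and bit(i, q2) else a
            for i, a in enumerate(state)]

def qft(state):
    for j in range(n):
        state = apply_h(state, j)
        for k in range(j + 1, n):         # rotation angles pi/2, pi/4, ...
            state = apply_cphase(state, k, j, math.pi / 2 ** (k - j))
    # final qubit reversal: swap first with last, etc.
    return [state[int(format(i, f"0{n}b")[::-1], 2)] for i in range(Q)]

x = 5                                     # any basis state |x> will do
state = [0.0] * Q
state[x] = 1.0
result = qft(state)
expected = [cmath.exp(2j * cmath.pi * x * k / Q) / math.sqrt(Q) for k in range(Q)]
assert max(abs(a - b) for a, b in zip(result, expected)) < 1e-12
```

The simulator stores all 2^n amplitudes explicitly, which is exactly what a classical machine cannot avoid; the quantum circuit touches only n qubits with the same gate sequence.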

A Concrete Example: Factoring 15

To make the algorithm tangible, consider factoring N = 15 (the smallest non-trivial example, and the first number factored by a quantum computer — IBM, 2001).

  1. Choose a = 7. gcd(7, 15) = 1, so proceed.
  2. Compute f(x) = 7^x mod 15:

| x | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 7^x mod 15 | 1 | 7 | 4 | 13 | 1 | 7 | 4 | 13 |

  3. The period is r = 4. The sequence repeats every 4 steps.
  4. Compute factors: r = 4 is even. Compute:
    • a^(r/2) = 7^2 = 49
    • gcd(49 − 1, 15) = gcd(48, 15) = 3
    • gcd(49 + 1, 15) = gcd(50, 15) = 5
  5. Result: 15 = 3 × 5.

In this toy example, the period is trivially visible in the table. For RSA-2048, the period r is roughly 2,048 bits long — completely invisible classically, but efficiently extractable by the QFT operating on a quantum superposition of all 2^2048 values simultaneously.

The gap between this toy example and a real cryptographic attack is not conceptual — the algorithm works identically. The gap is purely in scale: the number of qubits, the depth of the modular exponentiation circuit, and the error correction required to keep everything coherent across billions of gate operations.

Modular Exponentiation: The Bottleneck

The modular exponentiation step — computing a^x mod N across the entire superposition — dominates the resource cost of Shor’s algorithm. While the QFT requires only O(n^2) gates for an n-qubit register, the modular exponentiation requires O(n^3) gates (using schoolbook multiplication) or O(n^2 × log n) gates (using more advanced techniques like Karatsuba or Schönhage-Strassen adapted for reversible circuits).

Each modular multiplication must be implemented as a reversible quantum circuit — no information can be discarded, because quantum computation is unitary. Classical algorithms freely overwrite intermediate results; quantum circuits must “uncompute” temporary values to avoid entangling ancilla qubits with the computation. This reversibility requirement roughly doubles the gate count compared to an irreversible classical implementation.
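The structure of the circuit mirrors classical square-and-multiply: bit i of the exponent register controls one modular multiplication by the classically precomputable power a^(2^i), so the whole unitary is n controlled multiplications of n-bit numbers, each costing O(n^2) bit operations, hence the O(n^3) total. A classical sketch of that controlled-multiplication structure:

```python
def controlled_modexp(a, x, N, n_bits):
    """a^x mod N built the way the quantum circuit is structured: bit i of the
    exponent register controls one modular multiplication by a^(2^i)."""
    result = 1
    power = a % N                       # a^(2^0), then squared at each step
    for i in range(n_bits):
        if (x >> i) & 1:                # in the circuit: a controlled multiplication
            result = (result * power) % N
        power = (power * power) % N     # classically precomputable: a^(2^(i+1))
    return result

assert controlled_modexp(7, 11, 15, 4) == pow(7, 11, 15)
```

In the quantum setting, result lives in a register, the controls are qubits of the exponent register, and each multiplication must additionally be made reversible, which is where the uncomputation overhead described above comes in.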

The key optimizations in Gidney and Ekerå (2021) include:

  • Windowed arithmetic: Batching multiple controlled additions to reduce the number of sequential T-gates.
  • Coset representation: Representing numbers modulo N in a form that eliminates expensive modular reduction steps.
  • Measurement-based uncomputation: Using mid-circuit measurements and classically-controlled corrections instead of quantum uncomputation, trading qubit count for circuit depth.

These optimizations collectively reduced the estimated qubit count for RSA-2048 from ~8,000 logical qubits (earlier estimates) to ~4,099 logical qubits, and reduced the T-gate count by roughly an order of magnitude.

Extension to Discrete Logarithm: Breaking ECC and Diffie-Hellman

Shor’s algorithm generalizes beyond integer factoring to the hidden subgroup problem over abelian groups. The discrete logarithm problem — given g, h, and a prime p, find x such that g^x ≡ h (mod p) — is another instance of this problem.

The adaptation works as follows:

  1. Prepare two input registers in uniform superposition.
  2. Compute g^a × h^b mod p for all combinations of a and b.
  3. Apply the QFT to both input registers.
  4. Measure and extract the relationship between a and b that reveals x.
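The "relationship between a and b" in step 4 is linear: g^a × h^b = g^(a + xb), so the pairs (a, b) that map to the identity form the line a + x·b ≡ 0 (mod r), where r is the order of g. The QFT is what exposes this line; classically it can be checked by brute force on toy parameters (p = 23, g = 5, and secret x = 13 below are arbitrary illustrative choices):

```python
import math

p, g, x = 23, 5, 13                  # toy group and secret exponent (illustrative)
h = pow(g, x, p)

# Order of g modulo p (brute force at toy scale).
r = next(k for k in range(1, p) if pow(g, k, p) == 1)

# All pairs (a, b) with g^a * h^b = 1 (mod p): the "hidden subgroup".
line = [(a, b) for a in range(r) for b in range(r)
        if (pow(g, a, p) * pow(h, b, p)) % p == 1]

# Every such pair lies on the line a + x*b = 0 (mod r) ...
assert all((a + x * b) % r == 0 for a, b in line)

# ... so any pair whose b is invertible mod r reveals the secret x.
a, b = next((a, b) for a, b in line if math.gcd(b, r) == 1)
print((-a * pow(b, -1, r)) % r)      # prints 13
```

The quantum algorithm samples points on this line (in the Fourier domain) instead of enumerating the group, which is the entire difference between toy scale and cryptographic scale.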

This directly breaks:

  • Diffie-Hellman key exchange (both classical DH and ECDH): An attacker can compute the shared secret from the public values.
  • DSA and ECDSA signatures: An attacker can recover the private signing key from the public key.
  • ElGamal encryption: An attacker can recover plaintext from ciphertext and the public key.

For elliptic curve cryptography, the discrete logarithm problem is defined over the group of points on an elliptic curve rather than over integers modulo a prime. Shor’s algorithm adapts to this setting using the group operation (point addition) instead of modular multiplication. The key observation is that the elliptic curve group is still abelian, so the hidden subgroup framework applies directly.

The practical consequence is severe: ECC offers no advantage over RSA against quantum attack. While ECC keys are much shorter than RSA keys for equivalent classical security (256-bit ECC ≈ 3,072-bit RSA), the quantum resources needed to break ECC are also proportionally smaller. A quantum computer that can break RSA-2048 requires ~4,000 logical qubits; one that can break P-256 ECC requires ~2,300 logical qubits.


Quantum Resource Estimates for Shor’s Algorithm

The question “when will quantum computers break RSA?” reduces to: how many qubits and gates does Shor’s algorithm actually require, and when will hardware reach those thresholds?

Resource estimates have improved dramatically as researchers have optimized the modular exponentiation circuit. The landmark paper by Gidney and Ekerå (2021) — “How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits” — provides the most concrete estimates to date, accounting for quantum error correction overhead.

Logical Qubit and Gate Estimates

The following table summarizes the best-known resource estimates for running Shor’s algorithm against common cryptographic targets. Logical qubits are error-corrected qubits; the physical qubit count is much larger (see the next section).

| Algorithm | Target | Logical Qubits | T-gates (approx.) | Estimated Runtime |
| --- | --- | --- | --- | --- |
| Shor’s (factoring) | RSA-2048 | ~4,099 | ~2.7 × 10^11 | 8 hours* |
| Shor’s (factoring) | RSA-3072 | ~6,145 | ~6.1 × 10^11 | ~18 hours* |
| Shor’s (factoring) | RSA-4096 | ~8,194 | ~1.1 × 10^12 | ~34 hours* |
| Shor’s (ECDLP) | ECC P-256 | ~2,330 | ~1.3 × 10^11 | ~4 hours* |
| Shor’s (ECDLP) | ECC P-384 | ~3,484 | ~2.9 × 10^11 | ~9 hours* |

*Runtime estimates assume a surface code error correction scheme with a physical error rate of ~10^-3, a code distance sufficient for the computation, and a reaction time (classical decoding latency) of ~1 μs. These are optimistic but not unreasonable projections for future hardware. Estimates are based on Gidney & Ekerå (2021) for RSA and Roetteler et al. (2017) / Häner et al. (2020) for ECC, with interpolation for intermediate key sizes.

From Logical to Physical Qubits

The numbers above describe logical qubits — fault-tolerant qubits constructed from many noisy physical qubits via quantum error correction. The dominant error correction scheme in current roadmaps is the surface code, which encodes one logical qubit in a 2D grid of physical qubits.

The overhead depends on the code distance d, which determines how many physical errors can be corrected:

  • Physical qubits per logical qubit: ~2d^2 (for a standard surface code patch)
  • Required code distance scales with circuit depth and physical error rate
  • For RSA-2048 at a physical error rate of 10^-3: d ≈ 27, requiring ~1,458 physical qubits per logical qubit

This yields the following physical qubit estimates:

| Target | Logical Qubits | Code Distance | Physical Qubits (approx.) |
| --- | --- | --- | --- |
| RSA-2048 | ~4,099 | ~27 | ~6.0 million |
| RSA-3072 | ~6,145 | ~27 | ~9.0 million |
| RSA-4096 | ~8,194 | ~27 | ~12.0 million |
| ECC P-256 | ~2,330 | ~27 | ~3.4 million |
| ECC P-384 | ~3,484 | ~27 | ~5.1 million |
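The arithmetic behind these figures is just the ~2d² surface-code formula applied to the logical qubit counts; a quick sanity check:

```python
def physical_qubits(logical, d=27):
    """Surface-code overhead: ~2 * d^2 physical qubits per logical qubit."""
    return logical * 2 * d ** 2                  # 2 * 27^2 = 1,458 per logical qubit

targets = {"RSA-2048": 4099, "RSA-3072": 6145, "RSA-4096": 8194,
           "ECC P-256": 2330, "ECC P-384": 3484}
for name, logical in targets.items():
    print(f"{name}: ~{physical_qubits(logical) / 1e6:.1f} million physical qubits")
```

This reproduces the table to within rounding; the jump from ~6 million to Gidney and Ekerå's headline 20 million comes from the distillation factories and routing overhead discussed next.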

Gidney and Ekerå’s headline figure of 20 million noisy qubits for RSA-2048 accounts for additional overhead from magic state distillation factories (used to implement T-gates fault-tolerantly) and routing space between logical qubit patches. This is the more realistic figure for hardware planning.

What These Numbers Mean

As of early 2026, the largest quantum processors have ~1,100–1,200 physical qubits (IBM Condor, Atom Computing), with roadmaps targeting 10,000+ physical qubits by 2028–2029. The gap between current hardware and the ~20 million physical qubits needed for RSA-2048 is roughly four orders of magnitude.

However, three factors complicate simple extrapolation:

  1. Error rates must also improve. Current physical error rates are ~10^-3 to 10^-2. The estimates above assume 10^-3 consistently across all qubits and gates. Achieving this at scale is an unsolved engineering problem.

  2. Alternative approaches may reduce requirements. Research into higher-rate quantum error correcting codes (e.g., quantum LDPC codes, color codes) could reduce the physical-to-logical qubit ratio significantly. Some proposals suggest 10× to 100× improvements over surface codes.

  3. Algorithmic improvements continue. The Gidney-Ekerå estimate is already a 20× improvement over earlier estimates by Fowler et al. (2012). Further circuit optimizations, better classical co-processing, or entirely new algorithmic approaches could shift the timeline.


Impact on Public-Key Cryptography

The following table summarizes the impact of Shor’s algorithm on deployed public-key cryptosystems:

| Cryptosystem | Usage | Quantum Attack | Classical Security | Post-Quantum Security | Impact |
| --- | --- | --- | --- | --- | --- |
| RSA-2048 | Encryption, signatures, key exchange | Shor’s (factoring) | ~112 bits | Broken | Total compromise of confidentiality and authenticity |
| RSA-3072 | Encryption, signatures | Shor’s (factoring) | ~128 bits | Broken | Total compromise |
| RSA-4096 | Encryption, signatures | Shor’s (factoring) | ~140 bits | Broken | Total compromise; larger keys do not help |
| ECDSA P-256 | Signatures (TLS, code signing) | Shor’s (ECDLP) | ~128 bits | Broken | Forged signatures, impersonation |
| ECDSA P-384 | Signatures (government, high-assurance) | Shor’s (ECDLP) | ~192 bits | Broken | Total compromise |
| ECDH P-256 | Key exchange (TLS, Signal, WireGuard) | Shor’s (ECDLP) | ~128 bits | Broken | Past session keys recoverable (harvest-now-decrypt-later) |
| Ed25519 | Signatures (SSH, cryptocurrency) | Shor’s (ECDLP) | ~128 bits | Broken | Private key recovery from public key |
| X25519 | Key exchange (TLS 1.3, Signal) | Shor’s (ECDLP) | ~128 bits | Broken | Past and future sessions compromised |
| DH-2048 (finite field) | Legacy key exchange | Shor’s (DLP) | ~112 bits | Broken | Total compromise |
| ElGamal | Encryption (PGP) | Shor’s (DLP) | Varies | Broken | Plaintext recovery |

The critical takeaway: increasing key sizes does not mitigate the quantum threat for any system based on factoring or discrete logarithms. Shor’s algorithm runs in polynomial time against all of them. Moving from RSA-2048 to RSA-4096 roughly doubles the qubit count and quadruples the runtime of the quantum attack — a trivial barrier compared to the exponential security margin these key sizes provide against classical attack.


Grover’s Algorithm

Grover’s algorithm (Lov Grover, 1996) addresses a fundamentally different problem from Shor’s. Given a black-box function f that returns 1 for exactly one input (the “marked item”) out of N possible inputs, and 0 for all others, how many queries to f are needed to find the marked item?

  • Classically: O(N) queries — in the worst case, you must check every input.
  • Grover’s algorithm: O(√N) queries — a quadratic speedup.

This is provably optimal for quantum algorithms in this setting (Bennett et al., 1997). No quantum algorithm can do better than O(√N) for unstructured search.

How Amplitude Amplification Works

Grover’s algorithm is an instance of amplitude amplification, a technique that systematically increases the probability of measuring the correct answer. The intuition is geometric.

Initial State

Prepare a uniform superposition over all N inputs:

|ψ⟩ = (1/√N) × Σ |x⟩    for x = 0 to N − 1

Each basis state has amplitude 1/√N, meaning each has probability 1/N of being measured. For a 128-bit key space, this is 1/2^128 — vanishingly small.

The Oracle

The oracle is a quantum operation that flips the amplitude (phase) of the marked item:

|x⟩ → −|x⟩    if f(x) = 1
|x⟩ → |x⟩     if f(x) = 0

This is a reflection about the subspace orthogonal to the marked state.

The Diffusion Operator

After the oracle marks the target, the diffusion operator reflects all amplitudes about their mean. The unmarked states sit slightly above the mean (the negative marked amplitude drags the mean just below their common value), so they each shrink a little; the marked state, flipped to negative and therefore far below the mean, is reflected to roughly three times its original magnitude. Repeating this pair of reflections steadily funnels amplitude into the marked state.

Iteration

Each Grover iteration — oracle followed by diffusion — rotates the state vector by a fixed angle toward the marked item. After approximately π/4 × √N iterations, the marked state has amplitude close to 1, and measurement yields the correct answer with high probability.

Overshooting is possible: too many iterations rotate past the optimal point and decrease the success probability. The algorithm must be stopped at the right moment.
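The loop is short enough to simulate directly. In the sketch below the "oracle" simply knows the target index, which is fine for demonstrating amplitude amplification; in a real key-recovery attack the oracle would be a full reversible cipher circuit:

```python
import math

def grover(n_items, target):
    """Simulate Grover search over n_items entries; return the probability of
    measuring the target after the optimal number of iterations."""
    amp = [1 / math.sqrt(n_items)] * n_items     # uniform superposition
    iterations = int(math.pi / 4 * math.sqrt(n_items))
    for _ in range(iterations):
        amp[target] = -amp[target]               # oracle: flip the target's phase
        mean = sum(amp) / n_items                # diffusion: reflect about the mean
        amp = [2 * mean - a for a in amp]
    return amp[target] ** 2

print(f"{grover(1024, 42):.4f}")                 # 0.9995 after just 25 iterations
```

For comparison, 25 classical random guesses into the same 1,024-entry space succeed with probability about 25/1024 ≈ 2.4%.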

flowchart LR
    A["Uniform<br/>Superposition<br/>H⊗ⁿ|0⟩"] --> B["Oracle<br/>Flip phase of<br/>target state"]
    B --> C["Diffusion<br/>Reflect about<br/>mean amplitude"]
    C --> D{"≈ π/4 × √N<br/>iterations?"}
    D -- "No" --> B
    D -- "Yes" --> E["Measure<br/>→ target state<br/>with high probability"]

    style A fill:#1a1a2e,stroke:#0f3460,color:#eee
    style B fill:#1a1a2e,stroke:#e94560,color:#eee
    style C fill:#1a1a2e,stroke:#0f3460,color:#eee
    style E fill:#16213e,stroke:#e94560,color:#eee

The Geometric Picture

The most illuminating way to understand Grover’s algorithm is geometrically. Define two states:

  • |w⟩: the target (marked) state.
  • |s’⟩: the uniform superposition over all non-target states.

The initial uniform superposition |ψ⟩ can be decomposed as:

|ψ⟩ = sin(θ)|w⟩ + cos(θ)|s'⟩

where sin(θ) = 1/√N    (the initial amplitude of the target)

Each Grover iteration rotates the state by an angle of 2θ toward |w⟩ in the two-dimensional plane spanned by |w⟩ and |s’⟩. After k iterations:

|ψ_k⟩ = sin((2k+1)θ)|w⟩ + cos((2k+1)θ)|s'⟩

The probability of measuring the target state is sin^2((2k+1)θ). This reaches its maximum (close to 1) when (2k+1)θ ≈ π/2, giving:

k ≈ π/(4θ) ≈ (π/4)√N

This geometric picture also makes clear why overshooting is a problem: if you iterate past the optimal k, the state rotates away from |w⟩, and the success probability drops. For a single target in a space of 2^128, the optimal number of iterations is approximately 2^63 — an enormous but well-defined number.
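Plugging the formulas in confirms both the iteration count and the overshoot behavior (N = 2^20 is an arbitrary toy size):

```python
import math

N = 2 ** 20                                        # toy search-space size
theta = math.asin(1 / math.sqrt(N))

def success(k):
    """Probability of measuring the target after k Grover iterations."""
    return math.sin((2 * k + 1) * theta) ** 2

k_opt = round(math.pi / (4 * theta))               # ~ (pi/4) * sqrt(N) = 804
print(k_opt, round(success(k_opt), 6))             # near-certain success at the optimum
print(round(success(2 * k_opt), 6))                # keep iterating and it collapses
```

Running twice the optimal number of iterations rotates the state almost all the way back past the target, so the success probability drops to nearly zero rather than merely plateauing.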

When there are t marked items out of N total (for example, multiple valid keys that produce the same plaintext-ciphertext pair, which is relevant for block ciphers with short blocks), Grover’s algorithm finds any one of them in:

O(√(N/t)) iterations

For cryptographic key search against AES-128 with a single known plaintext-ciphertext pair and a 128-bit block, the key is essentially unique (t ≈ 1), so multi-target search does not help the attacker. However, for hash preimage search where many preimages exist (because the message space is much larger than the hash output space), t can be large, providing additional speedup beyond the basic √N estimate.

This nuance matters for security analysis: the effective Grover speedup depends on the specific problem structure, not just the key space size.

Impact on Symmetric Cryptography

Grover’s algorithm applies to symmetric key recovery as an unstructured search problem: given a known plaintext-ciphertext pair (P, C), search for the key k such that E_k(P) = C.

| Cipher | Classical Key Space | Grover’s Effective Security | Equivalent Classical Security |
| --- | --- | --- | --- |
| AES-128 | 2^128 | 2^64 operations | 64-bit — within reach of brute force |
| AES-192 | 2^192 | 2^96 operations | 96-bit — marginal |
| AES-256 | 2^256 | 2^128 operations | 128-bit — considered secure |
| ChaCha20 | 2^256 | 2^128 operations | 128-bit — considered secure |
| 3DES (2-key) | 2^112 | 2^56 operations | 56-bit — trivially broken |
| 3DES (3-key) | 2^168 | 2^84 operations | 84-bit — insufficient margin |

The headline: AES-128’s security is halved from 128 bits to 64 bits, placing it in a range that is computationally reachable (2^64 is well under a minute of work for the Bitcoin network’s aggregate SHA-256 hash rate). AES-256 drops to 128 bits, which remains well within accepted security margins.

Impact on Hash Functions

Grover’s algorithm also applies to hash function attacks:

Preimage attacks (finding m such that H(m) = h):

  • Classical: O(2^n) for an n-bit hash
  • Grover’s: O(2^(n/2))
  • SHA-256 preimage: 2^256 → 2^128 (still secure)
  • SHA-1 preimage (160-bit): 2^160 → 2^80 (marginal)

Collision attacks (finding m₁ ≠ m₂ such that H(m₁) = H(m₂)):

  • Classical birthday attack: O(2^(n/2))
  • Quantum BHT algorithm (Brassard-Høyer-Tapp): O(2^(n/3))
  • SHA-256 collision: 2^128 → 2^85 (reduced but likely adequate)
  • SHA-1 collision: 2^80 → 2^53 (dangerously weak — already classically broken)

| Hash Function | Output Bits | Classical Preimage | Quantum Preimage | Classical Collision | Quantum Collision |
| --- | --- | --- | --- | --- | --- |
| SHA-1 | 160 | 2^160 | 2^80 | 2^80* | 2^53 |
| SHA-256 | 256 | 2^256 | 2^128 | 2^128 | 2^85 |
| SHA-384 | 384 | 2^384 | 2^192 | 2^192 | 2^128 |
| SHA-512 | 512 | 2^512 | 2^256 | 2^256 | 2^171 |
| SHA3-256 | 256 | 2^256 | 2^128 | 2^128 | 2^85 |
| BLAKE3 | 256 | 2^256 | 2^128 | 2^128 | 2^85 |

*SHA-1 collisions have been demonstrated classically (SHAttered, 2017; Shambles, 2020), so the quantum collision figure is academic.

Why “Just Double the Key Size” Is an Oversimplification

The common advice — “Grover’s halves security, so just double the key size” — is directionally correct but misses critical nuances:

1. Grover’s Requires Sequential Quantum Operations

Each Grover iteration must be executed sequentially — the output of one oracle query feeds into the next diffusion step. Unlike classical brute force, which parallelizes trivially across millions of processors, Grover’s algorithm does not parallelize well.

If you split the search across p quantum processors, each processor searches a key space of N/p and requires O(√(N/p)) iterations. The total quantum work is p × O(√(N/p)) = O(√(N × p)), strictly more than the O(√N) a single processor performs. Cutting the runtime by a factor of s therefore requires s² processors: the speedup from parallelism scales only as the square root of the processor count.

For AES-128: a single quantum processor needs ~2^64 sequential oracle evaluations. Even at an optimistic rate of one oracle evaluation per microsecond (fast, given that each evaluation is a deep error-corrected AES circuit), this is roughly 585,000 years. Parallelizing across 2^32 (~4 billion) quantum processors reduces the workload to ~2^48 evaluations per processor — still ~8.9 years each, and now on four billion fault-tolerant machines. The parallelization overhead is brutal.
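The scaling law is easy to tabulate. Each processor performs ~√(N/p) sequential oracle calls, but the fleet's total work grows as √(N × p):

```python
import math

N = 2 ** 128                          # AES-128 key space
for log_p in (0, 16, 32, 64):         # fleet sizes p = 2^log_p quantum processors
    p = 2 ** log_p
    per_proc = math.isqrt(N // p)     # ~sqrt(N/p) sequential oracle calls per processor
    total = p * per_proc              # ~sqrt(N*p) oracle calls fleet-wide
    print(f"p = 2^{log_p:>2}: 2^{per_proc.bit_length() - 1} sequential calls each, "
          f"2^{total.bit_length() - 1} in total")
```

A millionfold fleet buys only a thousandfold speedup, while the total number of oracle calls (and hence hardware-time) grows without bound; classical brute force, by contrast, splits into p shares of N/p evaluations with no total-work penalty.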

2. Oracle Implementation Cost

Each Grover oracle evaluation requires implementing the full AES encryption circuit as a reversible quantum circuit. This is far more expensive than a classical AES evaluation:

  • A quantum AES-128 circuit requires approximately 2,953 logical qubits and 1.5 × 10^8 T-gates per oracle call (Grassl et al., 2016; Jaques et al., 2020).
  • Each T-gate requires magic state distillation, adding physical qubit overhead.
  • The total physical qubit count for a single Grover iteration against AES-128 is estimated at hundreds of thousands to millions of physical qubits.

The quantum computer running Grover’s against AES-128 would need to sustain coherent operation across millions of physical qubits for hundreds of years. This is not a realistic near-term or medium-term threat.

3. Error Correction Overhead Scales Differently

For Shor’s algorithm, the circuit depth is polynomial, so error correction overhead is manageable. For Grover’s algorithm against cryptographic key spaces, the circuit depth is O(2^64) or higher. The probability of a logical error accumulates over the full computation, requiring either:

  • Higher code distances (more physical qubits per logical qubit), or
  • Accepting a non-trivial failure probability and repeating the algorithm.

Both options significantly increase total resource requirements beyond the naive “2^64 evaluations” estimate.

4. The Practical Implication

For all currently deployed symmetric ciphers with 128-bit or larger keys, Grover’s algorithm is not an existential threat. It is a theoretical concern that justifies using 256-bit keys (AES-256, ChaCha20) as a prudent defense-in-depth measure, but it is not the urgent, near-term danger that Shor’s algorithm poses to public-key cryptography.

NIST’s guidance is clear: AES-256 and SHA-256 are considered quantum-resistant for the foreseeable future. The migration urgency is entirely on the public-key side.


Combined Quantum Threat Assessment

The following table provides a unified view of how quantum computing affects the major cryptographic primitives:

| Primitive | Specific Algorithm | Current Security Level | Quantum Attack | Post-Quantum Security | Migration Urgency |
| --- | --- | --- | --- | --- | --- |
| Asymmetric encryption | RSA-2048 | 112-bit | Shor’s (polynomial) | Broken | Critical |
| Asymmetric encryption | RSA-4096 | 140-bit | Shor’s (polynomial) | Broken | Critical |
| Key exchange | ECDH P-256 | 128-bit | Shor’s (polynomial) | Broken | Critical (harvest-now-decrypt-later) |
| Key exchange | X25519 | 128-bit | Shor’s (polynomial) | Broken | Critical (harvest-now-decrypt-later) |
| Digital signatures | ECDSA P-256 | 128-bit | Shor’s (polynomial) | Broken | High |
| Digital signatures | Ed25519 | 128-bit | Shor’s (polynomial) | Broken | High |
| Symmetric encryption | AES-128 | 128-bit | Grover’s (quadratic) | 64-bit (theoretical) | Moderate — upgrade to AES-256 |
| Symmetric encryption | AES-256 | 256-bit | Grover’s (quadratic) | 128-bit | Low — already quantum-resistant |
| Hash functions | SHA-256 | 128-bit (collision) | BHT (cube root) | 85-bit (collision) | Low — adequate margin |
| Hash functions | SHA-384 | 192-bit (collision) | BHT (cube root) | 128-bit (collision) | None — fully quantum-resistant |
| MACs | HMAC-SHA-256 | 256-bit | Grover’s (quadratic) | 128-bit | None — fully quantum-resistant |

quadrantChart
    title Quantum Impact vs. Migration Urgency
    x-axis Low Impact --> High Impact
    y-axis Low Urgency --> High Urgency
    quadrant-1 "Immediate Action Required"
    quadrant-2 "Plan Migration"
    quadrant-3 "Monitor"
    quadrant-4 "Upgrade When Convenient"
    "RSA-2048": [0.95, 0.95]
    "ECDH P-256": [0.93, 0.98]
    "ECDSA P-256": [0.90, 0.85]
    "Ed25519": [0.88, 0.83]
    "DH-2048": [0.92, 0.75]
    "AES-128": [0.45, 0.40]
    "AES-256": [0.10, 0.10]
    "SHA-256": [0.20, 0.15]
    "SHA-384": [0.05, 0.05]
    "3DES": [0.70, 0.60]

The Gap Between Theory and Practice

Common Misconceptions

Before examining the state of hardware, it is worth addressing misconceptions that frequently surface in security briefings and vendor marketing:

“Quantum computers will break all encryption.” False. Quantum computers threaten specific mathematical problems (factoring, discrete logarithm). Symmetric encryption and hash functions face only a quadratic speedup, which is manageable with existing key sizes. Post-quantum public-key algorithms (lattice-based, hash-based, code-based) are specifically designed to resist quantum attack. The correct statement is: “Quantum computers will break all currently deployed public-key cryptography.”

“We need quantum-safe symmetric encryption.” No. AES-256 is already quantum-safe. The “quantum-safe” migration is exclusively about public-key algorithms — key exchange, digital signatures, and key encapsulation. Symmetric primitives need only a key-size upgrade in some cases (AES-128 to AES-256), which is a configuration change rather than an algorithmic migration.

“A 1,000-qubit quantum computer can already break RSA.” No. A 1,000-physical-qubit computer cannot execute Shor’s algorithm against any cryptographically relevant key size. You need ~4,000 logical qubits, each composed of ~1,500 physical qubits, plus factory space for magic state distillation. The total is ~20 million physical qubits. Current 1,000-qubit machines are four orders of magnitude short.

“Quantum annealing (D-Wave) is a threat to cryptography.” No demonstrated threat. Quantum annealing solves optimization problems and has not been shown to provide any advantage for factoring or discrete logarithm computation compared to classical algorithms. D-Wave’s machines operate on different physical principles from gate-based quantum computers and cannot run Shor’s algorithm.

“Post-quantum algorithms are quantum algorithms.” A persistent confusion. Post-quantum algorithms run on classical computers. They are called “post-quantum” because they resist attack by quantum computers. They have nothing to do with quantum computing hardware.

Where We Are Today (2026)

The state of quantum computing hardware as of early 2026:

  • Qubit count: The largest announced processors have ~1,100–1,200 physical qubits. IBM, Google, Quantinuum, IonQ, and Atom Computing all have roadmaps toward 10,000+ physical qubits by 2028–2029.
  • Error rates: Two-qubit gate error rates are in the range of 10^-3 to 10^-2 for the best platforms (trapped ions, some superconducting architectures). Surface code error correction requires error rates consistently below ~10^-2 to provide any benefit, and below ~10^-3 for practical overhead ratios.
  • Logical qubits demonstrated: Small-scale quantum error correction has been demonstrated with 2–10 logical qubits. Google’s 2024 experiment showed a surface code logical qubit with below-threshold error rates — a genuine milestone, but ~4 orders of magnitude from cryptographic relevance.
  • Longest coherent computations: Current systems can sustain coherent computation for hundreds to thousands of two-qubit gate layers. Shor’s algorithm for RSA-2048 requires billions of gate layers.
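The ~1,500-physical-qubits-per-logical-qubit figure quoted above can be reproduced from the standard surface-code scaling heuristic p_logical ≈ 0.1 × (p_phys / p_threshold)^((d+1)/2). A minimal sketch follows; the 0.1 prefactor, the 10^-2 threshold, the 10^-15 target logical error rate, and the 2d² qubit count are common rules of thumb, not exact values:

```python
def surface_code_overhead(p_phys_exp=3, p_threshold_exp=2, target_exp=15):
    """Smallest odd code distance d such that
        p_logical ~ 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2)
    drops below 10**-target_exp. Works in base-10 exponents so the
    arithmetic stays exact. Rule-of-thumb model, not a rigorous estimate."""
    ratio_exp = p_phys_exp - p_threshold_exp   # p_phys/p_threshold = 10**-ratio_exp
    d = 3
    while 1 + ratio_exp * (d + 1) // 2 < target_exp:  # the 0.1 prefactor is +1 in the exponent
        d += 2
    return d, 2 * d * d                        # ~2*d^2 physical qubits per logical qubit

d, per_logical = surface_code_overhead()       # p_phys = 10^-3, target = 10^-15
print(d, per_logical, 4000 * per_logical)      # 27 1458 5832000
```

At a physical error rate of 10^-3, this toy model lands on distance 27 and ~1,458 physical qubits per logical qubit, so 4,000 logical qubits need roughly 6 million physical qubits for data alone; magic state distillation factories and routing push the total toward the ~20 million figure cited elsewhere on this page.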

Timeline Estimates

Responsible timeline estimation requires acknowledging deep uncertainty. The following represents the range of informed opinion as of 2026:

Optimistic (10–15 years / 2036–2041): Assumes continued exponential improvement in qubit count and quality, successful scaling of quantum error correction, and no fundamental engineering barriers. Proponents point to historical analogs (transistor scaling, classical computing progress) and substantial ongoing investment ($30+ billion globally).

Moderate (15–25 years / 2041–2051): Assumes current progress continues but encounters expected engineering challenges: wiring and control electronics scaling, cryogenic cooling at scale (for superconducting qubits), defect rates in large qubit arrays. This is the median view among quantum computing researchers surveyed in the Global Risk Institute’s annual reports.

Pessimistic (25+ years / 2051+): Assumes fundamental engineering barriers that are not yet apparent, diminishing returns from current qubit technologies, or the need for entirely new qubit architectures to achieve the required scale and fidelity. Proponents note that no quantum advantage for any practically useful problem has been demonstrated at scale.

The “Never” position: A minority view, but not a fringe one. Some physicists argue that the engineering challenges of maintaining coherence across millions of qubits may be fundamentally intractable — not just hard, but physically impossible at the required fidelity. This position is not mainstream, but it is held by credible researchers.

Hardware Platform Comparison

Different quantum computing platforms are competing to reach cryptographic relevance, each with distinct advantages and bottlenecks:

| Platform | Leading Orgs | Physical Qubits (2026) | Two-Qubit Gate Error Rate | Key Advantage | Key Challenge |
|---|---|---|---|---|---|
| Superconducting | IBM, Google, Rigetti | ~1,200 | ~10^-3 | Fast gate times (~100 ns), mature fabrication | Requires millikelvin cooling, limited connectivity |
| Trapped Ion | Quantinuum, IonQ | ~56 (fully connected) | ~10^-3 to 10^-4 | High fidelity, all-to-all connectivity | Slow gate times (~ms), scaling bottleneck |
| Neutral Atom | Atom Computing, QuEra, Pasqal | ~1,200 | ~10^-2 | Large arrays, native multi-qubit gates | Higher error rates, newer technology |
| Photonic | PsiQuantum, Xanadu | N/A (different model) | Varies | Room temperature, networking potential | Loss rates, deterministic gate challenge |
| Topological | Microsoft | 0 (in development) | Projected ~10^-6 | Hardware-level error protection | Unproven at scale, recent first demonstrations |

For Shor’s algorithm against RSA-2048, the superconducting and trapped ion platforms are currently the most mature. However, Microsoft’s topological approach — if its theoretical error rate projections hold — could dramatically reduce the error correction overhead, potentially shrinking the physical qubit requirement from ~20 million to ~1 million or fewer. Microsoft’s 2025 announcement of a topological qubit prototype is significant but remains far from cryptographic relevance.

The Harvest-Now-Decrypt-Later Problem

Regardless of when a cryptographically relevant quantum computer (CRQC) arrives, data encrypted today with RSA or ECC-based key exchange is at risk now if it has a secrecy requirement that extends beyond the CRQC timeline.

The threat model:

  1. An adversary intercepts and stores encrypted traffic today.
  2. The adversary waits until a CRQC is available.
  3. The adversary uses the CRQC to break the key exchange, recovering session keys.
  4. The adversary decrypts the stored traffic.

This applies to any data where confidentiality matters for decades: classified government communications, trade secrets, medical records, legal communications, financial data. For these use cases, the migration deadline is not “when CRQCs arrive” — it is now.

This is why organizations like NSA (CNSA 2.0), NIST, and the European Commission have mandated or strongly recommended beginning the transition to post-quantum key exchange immediately, even though the quantum threat is not imminent. The urgency is asymmetric: signatures can be migrated later (an adversary would need a CRQC at the time of attack), but encryption must be migrated before the data is transmitted.

For a detailed breakdown of post-quantum algorithm standards and migration strategies, see NIST PQC Standards and Migration Planning.


Implications for Compliance and Regulatory Frameworks

Understanding Shor’s and Grover’s algorithms is not just a technical exercise — it directly informs compliance obligations that are already in effect or imminent:

United States:

  • NSA CNSA 2.0 (2022): Requires National Security Systems to begin transitioning to post-quantum algorithms. Software and firmware signing must begin using post-quantum algorithms by 2025; web browsers, servers, and cloud services by 2025; all other uses by 2030–2033.
  • OMB Memorandum M-23-02 (2022): Requires federal agencies to inventory cryptographic systems and submit migration plans, explicitly citing the threat from Shor’s algorithm.
  • NIST SP 800-208 (2020): Approved hash-based signature scheme (LMS/XMSS) for firmware signing — a direct response to the Shor’s algorithm threat against RSA/ECDSA firmware signatures.

European Union:

  • ENISA PQC Recommendation (2024): Advises EU member states to begin planning PQC migration, with particular urgency for long-lived confidentiality (citing the harvest-now-decrypt-later threat).
  • ANSSI (France), BSI (Germany): Both agencies have published guidance recommending hybrid PQC deployment for key exchange starting immediately.

Financial Sector:

  • PCI DSS v4.0.1: While it does not yet mandate PQC, it requires organizations to assess emerging cryptographic threats — which explicitly includes quantum computing — as part of their cryptographic inventory and risk assessment.
  • SWIFT CSCF: The SWIFT Customer Security Programme is monitoring PQC developments and is expected to incorporate PQC requirements in future versions of the control framework.

The regulatory landscape reinforces the technical analysis: Shor’s algorithm drives the urgency (public-key migration), while Grover’s algorithm drives the simpler remediation (symmetric key size upgrades).


Comparing Shor’s and Grover’s: A Summary

It is essential to understand these algorithms as fundamentally different threats that demand different responses.

| Dimension | Shor’s Algorithm | Grover’s Algorithm |
|---|---|---|
| Speedup type | Exponential (polynomial vs. sub-exponential) | Quadratic (√N vs. N) |
| Targets | Factoring, discrete logarithm (RSA, ECC, DH) | Unstructured search (symmetric keys, hash preimages) |
| Impact | Total break — increasing key size cannot help | Halves effective security — doubling key size restores it |
| Parallelization | Efficient classical post-processing | Poor — square-root scaling with parallel processors |
| Practical threat | Real and urgent (once CRQC exists) | Theoretical — resource requirements are prohibitive for 128+ bit keys |
| Migration response | Replace algorithms entirely (PQC) | Increase key sizes (AES-128 → AES-256) |
| Timeline sensitivity | High — must migrate before CRQC + data lifetime | Low — AES-256 is already widely deployed |
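The asymmetry between the two threats can be made concrete in a few lines of illustrative arithmetic:

```python
def grover_effective_bits(key_bits: int) -> int:
    # Grover's quadratic speedup turns a 2^n search into ~2^(n/2) quantum
    # iterations, halving the effective security level of a symmetric key.
    return key_bits // 2

def grover_parallel_speedup(machines: int) -> float:
    # Grover parallelizes poorly: K machines give only a sqrt(K) speedup,
    # whereas classical brute force parallelizes linearly in K.
    return machines ** 0.5

print(grover_effective_bits(128))       # AES-128 -> 64-bit effective security
print(grover_effective_bits(256))       # AES-256 -> 128-bit effective security
print(grover_parallel_speedup(10_000))  # 10,000 machines -> only 100x faster
```

For Shor's algorithm no analogous function exists: effective security against a CRQC drops to essentially zero regardless of key size, which is why migration rather than key growth is the only remedy.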

Quantum Cost Models: Thinking Like an Attacker

Security professionals should evaluate the quantum threat through the lens of attack economics, not just theoretical possibility. Even when a CRQC exists, the cost of running Shor’s algorithm will not be zero.

Cost Dimensions

A quantum attack on RSA-2048 would require:

  • Capital cost: Building or leasing access to a ~20-million-qubit quantum computer. If qubit costs follow even optimistic projections ($1–$100 per physical qubit for manufacturing plus cooling infrastructure), the capital cost for RSA-2048 factoring would be $20 million to $2 billion.
  • Operating cost: Cryogenic cooling (for superconducting qubits) at millikelvin temperatures consumes significant power. Estimates range from 10–100 MW for a system at this scale — comparable to a small power plant.
  • Time cost: 8 hours of continuous coherent computation, during which the machine cannot be used for other tasks.
  • Expertise cost: Operating a fault-tolerant quantum computer at this scale requires specialized knowledge that will initially be scarce.
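The capital-cost range quoted above is back-of-envelope arithmetic; the per-qubit costs are the speculative projections from the bullet, not real prices:

```python
def capital_cost_range(physical_qubits: int,
                       cost_per_qubit_low: float = 1.0,
                       cost_per_qubit_high: float = 100.0):
    # Speculative $1-$100 per physical qubit (manufacturing plus cooling),
    # as assumed in the bullet above.
    return (physical_qubits * cost_per_qubit_low,
            physical_qubits * cost_per_qubit_high)

low, high = capital_cost_range(20_000_000)
print(f"${low:,.0f} to ${high:,.0f}")   # $20,000,000 to $2,000,000,000
```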

Who Can Afford This?

The first entities to possess CRQCs will almost certainly be:

  1. Nation-state intelligence agencies (NSA, GCHQ, MSS, GRU equivalents) — These organizations have both the budget and the motivation. Classified quantum computing programs likely exist in several countries.
  2. Major cloud providers (Google, IBM, Microsoft, Amazon) — These companies are building quantum computers commercially and will offer “quantum computing as a service.” The question is whether they will offer (or be compelled to restrict) access to cryptanalytically relevant workloads.
  3. Well-funded research institutions — Large national labs and university consortia may reach cryptographic relevance through publicly funded programs.

For the first several years after CRQCs become feasible, the cost will restrict attacks to high-value targets: government communications, defense contractors, financial infrastructure, critical intellectual property. The “script kiddie with a quantum laptop” scenario is decades beyond initial CRQC capability, if it ever materializes.

This does not reduce the urgency of migration — nation-states are precisely the adversaries that justify the effort — but it does inform risk prioritization. An organization holding routine commercial data with a 5-year secrecy requirement faces a different risk profile than one holding military secrets with a 50-year requirement.

The Mosca Inequality

Michele Mosca formalized the migration urgency question with a simple inequality:

If  x + y > z,  then migration is already overdue.

Where:
  x = time to migrate your systems to post-quantum cryptography
  y = time your data must remain confidential (shelf life)
  z = time until a CRQC is available

For example: if migrating your infrastructure takes 5 years (x = 5), your classified data must remain secret for 25 years (y = 25), and a CRQC arrives in 15 years (z = 15), then x + y = 30 > 15 = z. You are already 15 years late.

Most large enterprises estimate x at 5–10 years for full migration. Government agencies and critical infrastructure operators often have y values of 20–50+ years. If z is anywhere in the 10–25 year range (the moderate consensus), the inequality is already violated for many organizations.
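The inequality is trivial to encode as a helper for running your own numbers; the example values below are the ones from the text:

```python
def mosca_deficit(x_migration_years: float,
                  y_secrecy_years: float,
                  z_crqc_years: float) -> float:
    """Mosca's inequality: if x + y > z, migration is overdue.
    Returns the number of years by which you are late (positive = overdue)."""
    return x_migration_years + y_secrecy_years - z_crqc_years

# 5-year migration, 25-year secrecy requirement, CRQC in 15 years:
print(mosca_deficit(5, 25, 15))   # 15 -> already 15 years overdue
```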


Emerging Research and Alternative Approaches

Improvements to Shor’s Algorithm

Research continues to reduce the resource requirements for quantum factoring:

  • Ekerå-Håstad (2017): Demonstrated that the discrete logarithm variant of Shor’s algorithm can be made more efficient than the factoring variant when the factors are of known approximate size — which they always are for RSA keys.
  • Regev (2023): Proposed a multidimensional variant of Shor’s algorithm that uses O(n^(3/2)) quantum gates instead of O(n^2), at the cost of requiring O(√n) independent runs. The total quantum resources may be lower for certain parameter ranges, but the analysis is ongoing.
  • Quantum-classical hybrid approaches: Several groups are exploring algorithms that use smaller quantum computers combined with classical lattice-reduction techniques. These could potentially reduce the quantum hardware requirements at the cost of increased classical computation.

Quantum Error Correction Advances

The physical-to-logical qubit ratio is arguably the most important parameter for CRQC timelines. Recent advances are worth tracking:

  • Google’s surface code milestone (2024): Demonstrated that increasing code distance from d = 3 to d = 5 actually reduced the logical error rate — the first time scaling up error correction yielded improvement rather than adding more noise. This crossed a critical threshold, proving that the surface code approach can work in principle.
  • Quantum LDPC codes: Classical Low-Density Parity-Check codes have quantum analogs that promise much higher encoding rates than the surface code. A surface code with distance d uses O(d^2) physical qubits per logical qubit. Quantum LDPC codes can achieve O(d) or even O(d^(4/5)) physical qubits per logical qubit for the same error protection. If practically implemented, this could reduce physical qubit requirements by 10–100x.
  • Color codes and Floquet codes: Alternative topological codes that may offer better T-gate implementation (surface codes require expensive magic state distillation for T-gates, while some alternative codes support transversal T-gates directly). Eliminating magic state distillation could remove roughly 50–70% of the total physical qubit overhead.
  • Real-time decoding: Error correction requires classical processing to identify and correct errors faster than they accumulate. The decoding latency directly affects the achievable circuit depth. Recent advances in FPGA-based and ASIC-based decoders have reduced latency from milliseconds to microseconds, which is essential for the deep circuits required by Shor’s algorithm.
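To make the overhead comparison above concrete, here is the scaling arithmetic in a few lines. The 2d² surface-code rule is a common approximation, and the linear LDPC constant factor c = 4 is assumed purely for illustration; real constructions vary widely:

```python
def surface_code_qubits(d: int) -> int:
    # Surface code: ~2*d^2 physical qubits per logical qubit.
    return 2 * d * d

def ldpc_code_qubits(d: int, c: int = 4) -> int:
    # Hypothetical linear-scaling quantum LDPC code: ~c*d physical qubits
    # per logical qubit. The constant c = 4 is assumed for illustration only.
    return c * d

d = 27   # roughly the distance needed for Shor-scale circuits
print(surface_code_qubits(d), ldpc_code_qubits(d))   # 1458 108
```

At this distance the assumed LDPC code needs roughly 13x fewer physical qubits, consistent with the 10–100x range quoted above.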

Hybrid Classical-Quantum Attacks

An active research frontier explores whether smaller (nearer-term) quantum computers could assist classical algorithms to break cryptography:

  • Quantum-enhanced lattice sieving: Combining Grover’s search with classical lattice sieving algorithms could reduce the cost of lattice attacks, potentially weakening lattice-based post-quantum schemes. However, current analysis suggests the speedup is modest (a small constant factor, not a change in asymptotic complexity) and unlikely to threaten NIST’s chosen parameter sets.
  • Partial quantum factoring: Using a small quantum computer to find partial information about the factors of N, then completing the factorization classically. Regev’s 2023 algorithm moves in this direction, but the classical post-processing requires solving a lattice problem that itself requires substantial (though polynomial) classical resources.
  • Quantum preprocessing for GNFS: Using quantum subroutines to accelerate specific steps of the General Number Field Sieve. This could potentially reduce the exponent in the sub-exponential classical complexity, but no concrete proposal has demonstrated a practical advantage.

These hybrid approaches are worth monitoring because they could lower the CRQC threshold — a quantum computer with “only” thousands of physical qubits might contribute meaningfully to cryptanalysis when combined with classical supercomputing resources. Current analysis suggests this is unlikely to change the picture for RSA-2048 or ECC P-256, but the research is ongoing.

Beyond Shor’s and Grover’s

Other quantum algorithms with cryptographic implications:

  • Quantum walks: Can provide speedups for certain graph problems, potentially relevant to isogeny-based cryptography (see Isogeny-Based Cryptography for how the SIDH break used related techniques).
  • Quantum annealing: D-Wave’s quantum annealers have been proposed for combinatorial optimization, including some formulations of cryptanalysis. However, no demonstrated advantage over classical algorithms has been shown for cryptographically relevant problems.
  • Variational quantum algorithms (VQE, QAOA): Hybrid quantum-classical algorithms that might provide advantages with near-term noisy quantum computers. Their application to cryptanalysis is speculative, and most researchers consider them unlikely to threaten deployed cryptography.

Real-World Scenario: Evaluating Your Organization’s Risk

Consider a concrete scenario to illustrate how these technical details translate into risk decisions.

Scenario: A healthcare organization stores patient records encrypted with TLS 1.2 using ECDHE-P256 key exchange and AES-128-GCM. Records must remain confidential for 50 years (HIPAA retention and patient privacy requirements). The organization estimates a 7-year migration timeline to post-quantum TLS.

Analysis using the Mosca Inequality:

  • x (migration time) = 7 years
  • y (secrecy requirement) = 50 years
  • z (CRQC timeline) = 15–25 years (moderate estimate)
  • x + y = 57 years >> z = 15–25 years

This organization’s migration is already decades overdue.

Specific threats:

  1. Key exchange (ECDHE-P256): Vulnerable to Shor’s algorithm. Any traffic captured today can be decrypted once a CRQC arrives. The organization should assume nation-state adversaries are already collecting healthcare data for intelligence value (pharmaceutical research, government official medical records, etc.).
  2. Symmetric encryption (AES-128-GCM): Grover’s reduces effective security to 64 bits. While a practical Grover’s attack on AES-128 is far beyond any foreseeable quantum computer, the reduced security margin (64 bits) falls below most compliance frameworks’ minimum thresholds. Upgrading to AES-256 is prudent.
  3. Digital signatures (on records and audit logs): If the organization uses ECDSA for integrity verification, an attacker with a CRQC could forge signatures, potentially altering medical records. However, since this requires a CRQC at the time of forgery (not retroactively), signature migration is less urgent than key exchange.

Recommended priority:

  1. Immediately deploy hybrid key exchange (e.g., X25519 + ML-KEM-768) for all new TLS connections.
  2. Upgrade from AES-128 to AES-256 (configuration change, minimal risk).
  3. Plan signature migration to ML-DSA or SLH-DSA within the next 3–5 years.
  4. Re-encrypt archived data under post-quantum key encapsulation if retroactive confidentiality is required.
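Step 1's hybrid key exchange derives the session key from both shared secrets, so the connection stays safe as long as either component resists attack. Below is a minimal sketch of the concatenate-then-KDF combiner pattern; the single-block HKDF is a simplification of RFC 5869, and the random byte strings are placeholders standing in for real X25519 and ML-KEM-768 outputs:

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    # Minimal single-block HKDF (RFC 5869): extract with a zero salt,
    # then expand one block. Sufficient for outputs up to 32 bytes.
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

def hybrid_combine(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive one session key from both shared secrets. An attacker must
    break BOTH the classical and the post-quantum component to recover it."""
    return hkdf_sha256(classical_secret + pq_secret, b"example-hybrid-kex")

# Placeholders for a real X25519 shared secret and an ML-KEM-768
# decapsulated secret; a deployment would use an actual crypto library.
ecdh_secret = os.urandom(32)
mlkem_secret = os.urandom(32)
session_key = hybrid_combine(ecdh_secret, mlkem_secret)
assert len(session_key) == 32
```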

For algorithm-specific guidance, see NIST PQC Standards. For step-by-step migration procedures, see Migration Planning.


Practical Takeaways for Security Professionals

  1. Shor’s algorithm is the existential threat. It completely breaks all deployed public-key cryptography. No amount of key size increase helps. The only defense is migrating to post-quantum algorithms.

  2. Grover’s algorithm is a manageable concern. For symmetric cryptography, it provides a quadratic speedup that is mitigated by using 256-bit keys (AES-256). The practical resource requirements make Grover’s attacks on 128+ bit keys infeasible for the foreseeable future, but AES-256 adoption is cheap insurance.

  3. The timeline is uncertain but the direction is clear. Whether CRQCs arrive in 10 years or 30, they will arrive. The cryptographic community is unanimous on this point, even if the timeline is debated.

  4. Harvest-now-decrypt-later makes the timeline question partially irrelevant. For long-lived secrets, the migration deadline for key exchange is today, not the day CRQCs arrive.

  5. Resource estimates keep improving (in the attacker’s favor). The qubit requirements for Shor’s algorithm have dropped by orders of magnitude over two decades of optimization. Assume this trend continues.

  6. Focus migration efforts on key exchange first, then signatures, then symmetric upgrades. This priority ordering reflects both the threat urgency (harvest-now-decrypt-later) and the practical difficulty of migration (key exchange changes are typically less disruptive than signature changes in PKI). See Migration Planning for detailed guidance.


Frequently Misunderstood Points

“Quantum supremacy means quantum computers can already break crypto”

Quantum supremacy (or “quantum advantage”) demonstrations — Google’s Sycamore (2019), various follow-ups — show that quantum computers can outperform classical computers on specific, artificially constructed problems. These problems have no relationship to cryptography. The random circuit sampling problem solved by Sycamore requires ~50 qubits and ~20 gate layers. Shor’s algorithm for RSA-2048 requires ~4,000 logical qubits and billions of gate layers. The gap is not incremental — it is qualitative.

“Lattice-based PQC is quantum-resistant because there is no quantum algorithm for lattices”

More precisely: no efficient quantum algorithm is known for the lattice problems underlying schemes like ML-KEM and ML-DSA. This is not the same as a proof that no such algorithm exists. The best-known quantum speedup for lattice problems is modest (Grover-like, providing at most a quadratic improvement to certain sieving steps). NIST’s parameter selection includes significant security margins to account for unknown future improvements, but the absence of a proof of quantum hardness for lattices is a legitimate — if unlikely to be realized — risk. See Lattice-Based Cryptography for a deeper analysis.

“My data is only sensitive for 5 years, so I don’t need to worry about PQC”

This reasoning is valid if the data has truly no value after 5 years and CRQCs are more than 5 years away. But consider: even “low-value” data can be combined with other harvested data to produce high-value intelligence. Authentication credentials harvested today might provide access to systems that still exist in 10 years. And the CRQC timeline carries significant uncertainty — planning for the median estimate while ignoring the tail risk is poor security practice.

“AES-256 with a pre-shared key eliminates the quantum threat”

Technically correct for the data encrypted under that specific key — if the key is established through a quantum-safe channel (truly pre-shared, derived from a PQC KEM, or distributed via QKD). However, pre-shared keys do not scale for most real-world deployments (web servers, API gateways, mobile apps). The practical solution requires public-key cryptography, which means PQC migration.

“Quantum Key Distribution (QKD) makes PQC unnecessary”

QKD provides information-theoretic security for key exchange: its security rests on quantum physics rather than computational assumptions, so it does not weaken with advances in quantum computing or algorithms (though practical implementations can still be vulnerable to side-channel attacks). However, QKD has severe practical limitations:

  • Requires dedicated fiber-optic or free-space optical links (cannot run over existing internet infrastructure).
  • Maximum range of ~100–500 km per link without quantum repeaters (which do not yet exist at scale).
  • Low key generation rates (kbps to Mbps, depending on distance and technology).
  • Does not provide authentication — it must be combined with classical or post-quantum authentication.
  • Extremely expensive compared to software-only PQC solutions.

QKD is appropriate for point-to-point links between high-security facilities (e.g., government data centers, financial exchanges). For everything else — web traffic, email, cloud services, IoT — PQC is the practical solution. The two technologies are complementary, not competitive, but PQC addresses the vast majority of real-world deployment scenarios.


Further Reading

Key Milestones in Quantum Factoring Experiments

The following timeline tracks actual quantum factoring demonstrations — useful for calibrating how far hardware has progressed:

| Year | Achievement | Qubits Used | Method |
|---|---|---|---|
| 2001 | IBM factors 15 = 3 × 5 | 7 qubits (NMR) | Shor’s algorithm |
| 2012 | Factors 21 = 3 × 7 | 10 qubits (photonic) | Compiled Shor’s variant |
| 2012 | Factors 143 = 11 × 13 | 4 qubits (adiabatic) | Quantum annealing approach |
| 2014 | Factors 56,153 | 4 qubits | Heavily compiled/classical-assisted |
| 2019 | Google achieves quantum supremacy | 53 qubits | Not factoring (random circuit sampling) |
| 2023 | Claims of factoring RSA-2048 with ~372 qubits | Theoretical | Schnorr/QAOA hybrid — not validated by community |
| 2024 | Google demonstrates below-threshold QEC | 105 physical qubits (1 logical) | Surface code, not factoring |
| 2025 | Microsoft announces topological qubit | 1 qubit | Architecture demonstration only |

The 2023 claim by Bao Yan et al. (proposing to combine the Schnorr lattice-based factoring approach with QAOA) received significant media attention but was thoroughly rebutted by the cryptography community. The classical portion of their algorithm does not scale efficiently, and the quantum “speedup” was not demonstrated. This is a cautionary example of quantum hype outpacing rigorous analysis.

The honest assessment: as of 2026, no quantum computer has factored a number that a pocket calculator could not factor faster. The demonstrated quantum factoring record using an honest implementation of Shor’s algorithm remains N = 21 (factored on 10 qubits in 2012). The gap between this and RSA-2048 is approximately 600 orders of magnitude in the size of the number.
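The classical post-processing half of Shor's algorithm is easy to demonstrate for the record-holding N = 21: once the period r of a mod N is known (the only step that requires a quantum computer for large N), the factors fall out of two gcd computations. A sketch with the period found by brute force:

```python
from math import gcd

def multiplicative_order(a: int, n: int) -> int:
    # The period r with a^r = 1 (mod n) -- the quantity quantum
    # period-finding computes. Brute-forced here, which is only
    # feasible because n is tiny.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_postprocess(n: int, a: int):
    """Classical half of Shor's algorithm: turn the period of a mod n
    into a factorization of n (returns None for unlucky choices of a)."""
    r = multiplicative_order(a, n)
    if r % 2 == 1:
        return None              # odd period: retry with another base a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None              # trivial square root of 1: retry
    return sorted((gcd(y - 1, n), gcd(y + 1, n)))

print(shor_postprocess(21, 2))   # [3, 7]
```

For N = 21 with base a = 2, the period is r = 6, so gcd(2³ − 1, 21) = 7 and gcd(2³ + 1, 21) = 3 recover the factors. Everything quantum in Shor's algorithm exists to compute r when brute force is hopeless.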

Foundational Papers

  • P. Shor, “Algorithms for Quantum Computation: Discrete Logarithms and Factoring” (1994) — the original paper.
  • L. Grover, “A Fast Quantum Mechanical Algorithm for Database Search” (1996) — Grover’s original proposal.
  • C. Gidney and M. Ekerå, “How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits” (2021) — the most cited resource estimate paper.

Resource Estimates

  • M. Roetteler, M. Naehrig, K. Svore, and K. Lauter, “Quantum Resource Estimates for Computing Elliptic Curve Discrete Logarithms” (2017)
  • S. Jaques, M. Naehrig, M. Roetteler, and F. Virdia, “Implementing Grover Oracles for Quantum Key Search on AES and LowMC” (2020)
  • M. Grassl, B. Langenberg, M. Roetteler, and R. Steinwandt, “Applying Grover’s Algorithm to AES” (2016)

Context and Timeline Analysis

  • Global Risk Institute, “Quantum Threat Timeline Report” (published annually)
  • M. Mosca, “Cybersecurity in an Era with Quantum Computers: Will We Be Ready?” (2018)
  • National Academies of Sciences, “Quantum Computing: Progress and Prospects” (2019)

Standards and Migration

  • NIST, “Post-Quantum Cryptography Standardization Process” — FIPS 203 (ML-KEM), FIPS 204 (ML-DSA), FIPS 205 (SLH-DSA)
  • NSA, “CNSA 2.0 — Cybersecurity Advisory” (2022)
  • ETSI, “Quantum Safe Cryptography” technical reports (QSC working group)
  • BSI, “Migration to Post-Quantum Cryptography” technical guideline (2024)

Accessible Overviews

  • S. Aaronson, “Shor, I’ll Do It” — an accessible explanation of Shor’s algorithm aimed at computer scientists (blog post, regularly updated)
  • A. Montanaro, “Quantum Algorithms: An Overview” — Reviews of Modern Physics (2016), covering both Shor’s and Grover’s in context
  • NIST, “Post-Quantum Cryptography FAQ” — plain-language answers to common questions about the quantum threat

Previous: Quantum Computing Fundamentals — the physics and computation model underlying these algorithms.

Next: NIST PQC Standards — the algorithms designed to resist both Shor’s and Grover’s attacks.