NIST PQC Standardization Process
Overview
The National Institute of Standards and Technology (NIST) Post-Quantum Cryptography Standardization Process is the most consequential cryptographic standards competition since the AES selection in 2001. Spanning nearly a decade — from the initial call for proposals in December 2016 to the publication of FIPS 203, 204, and 205 in August 2024 — the process evaluated 82 candidate algorithms across multiple elimination rounds, subjected them to years of global cryptanalysis, and ultimately selected four algorithms for standardization with a fifth (HQC) announced in March 2025.
This page covers the complete timeline, the selection criteria NIST applied, the winners and their properties, the algorithms that were broken or eliminated along the way, the naming conventions adopted for the final standards, Round 4 status, and global alignment efforts. For background on why post-quantum cryptography is necessary, see Why Post-Quantum Cryptography Matters. For technical details on the mathematical foundations underpinning the selected algorithms, see Mathematical Foundations.
1. Why NIST Ran a Competition
The Precedent: AES and SHA-3
NIST has a well-established track record of running open cryptographic competitions. The Advanced Encryption Standard (AES) competition (1997-2001) selected Rijndael from 15 candidates. The SHA-3 competition (2007-2012) selected Keccak from 64 submissions. Both processes demonstrated that open, multi-year, community-driven evaluation produces standards with significantly higher confidence than closed-door selection.
The PQC competition followed the same model but with a critical difference: the threat being addressed — large-scale quantum computers — does not yet exist in operational form. NIST was standardizing defenses against a future capability, which meant that the process had to balance urgency (harvest-now-decrypt-later attacks are happening today) against the risk of standardizing algorithms that might themselves be broken by novel classical or quantum attacks.
The Catalyst: NSA’s CNSA Suite Announcement
In August 2015, the NSA’s Information Assurance Directorate published Commercial National Security Algorithm (CNSA) Suite guidance that explicitly acknowledged the quantum threat and signaled the need for quantum-resistant algorithms. This was a watershed moment — when the agency responsible for protecting classified U.S. communications publicly states that current public-key cryptography has an expiration date, the broader community takes notice.
NIST followed in April 2016 with NISTIR 8105, “Report on Post-Quantum Cryptography,” which laid out the technical landscape and telegraphed the forthcoming standardization effort.
The Call for Proposals
On December 20, 2016, NIST published the official call for proposals with a submission deadline of November 30, 2017. The call specified two categories of algorithms:
- Public-key encryption / Key Encapsulation Mechanisms (KEMs): Replacements for algorithms like RSA-OAEP, ECDH, and ECIES
- Digital signatures: Replacements for RSA-PSS, ECDSA, and EdDSA
NIST explicitly stated that it expected to standardize more than one algorithm in each category to provide algorithmic diversity — a lesson learned from the over-reliance on RSA and the need for backup options if a single mathematical family is broken.
2. The Complete Timeline
Mermaid Timeline Diagram
```mermaid
graph LR
A["Dec 2016<br/>Call for<br/>Proposals"] --> B["Nov 2017<br/>82 Submissions<br/>Received"]
B --> C["Jan 2018<br/>69 Accepted to<br/>Round 1"]
C --> D["Jan 2019<br/>26 Advance to<br/>Round 2"]
D --> E["Jul 2020<br/>7 Finalists +<br/>8 Alternates"]
E --> F["Jul 2022<br/>4 Winners<br/>Announced"]
F --> G["Aug 2024<br/>FIPS 203, 204,<br/>205 Published"]
G --> H["Mar 2025<br/>HQC Selected<br/>(Round 4)"]
style A fill:#1a1a2e,stroke:#e94560,color:#eee
style B fill:#1a1a2e,stroke:#e94560,color:#eee
style C fill:#1a1a2e,stroke:#16213e,color:#eee
style D fill:#1a1a2e,stroke:#16213e,color:#eee
style E fill:#1a1a2e,stroke:#0f3460,color:#eee
style F fill:#1a1a2e,stroke:#e94560,color:#eee
style G fill:#1a1a2e,stroke:#e94560,color:#eee
style H fill:#1a1a2e,stroke:#e94560,color:#eee
```
Detailed Chronology
| Date | Event | Details |
|---|---|---|
| Dec 2016 | Call for proposals | NIST publishes formal submission requirements and evaluation criteria |
| Nov 2017 | Submission deadline | 82 complete submissions received from teams worldwide |
| Dec 2017 | First PQC Standardization Conference | Held at NIST, Gaithersburg, MD; submitters present their algorithms |
| Jan 2018 | Round 1 begins | 69 submissions accepted (13 withdrawn or rejected for completeness issues) |
| Apr 2018 | Second PQC Conference | Continued presentations and early cryptanalytic results |
| Jan 2019 | Round 2 announced | 26 algorithms advance (17 KEMs, 9 signatures) |
| Aug 2019 | Third PQC Conference | Focus on performance benchmarking and implementation security |
| Jul 2020 | Round 3 announced | 7 finalists selected, 8 alternate candidates retained for continued study |
| Jun 2021 | Fourth PQC Conference | Deep dives into finalist security analysis |
| Jul 2022 | Winners announced | CRYSTALS-Kyber (KEM), CRYSTALS-Dilithium, Falcon, SPHINCS+ (signatures) selected |
| Aug 2023 | Draft FIPS published | FIPS 203, 204, 205 drafts released for public comment |
| Aug 2024 | Final FIPS published | FIPS 203 (ML-KEM), FIPS 204 (ML-DSA), FIPS 205 (SLH-DSA) officially released |
| Mar 2025 | HQC selected | HQC announced as additional KEM standard (Round 4 winner); FN-DSA (Falcon) draft expected |
3. Selection Criteria
NIST evaluated candidates across multiple dimensions, weighting them differently for KEMs and signature schemes. Understanding these criteria is essential for security professionals who need to justify algorithm selection to stakeholders.
3.1 Security
Security was the primary criterion, evaluated at multiple levels:
- Claimed security level: NIST defined five security strength categories mapped to existing symmetric and hash-based benchmarks:
| Level | Security Requirement |
|---|---|
| 1 | At least as hard to break as AES-128 (exhaustive key search) |
| 2 | At least as hard to break as SHA-256 (collision search) |
| 3 | At least as hard to break as AES-192 (exhaustive key search) |
| 4 | At least as hard to break as SHA-384 (collision search) |
| 5 | At least as hard to break as AES-256 (exhaustive key search) |
- Underlying hardness assumptions: How well-studied is the mathematical problem? Lattice problems (LWE, MLWE, NTRU) have decades of cryptanalytic attention. Code-based problems (syndrome decoding) date to McEliece (1978). Hash-based signatures rest on the minimal assumption that the hash function is secure. Isogeny-based problems (CSIDH, SIDH) were comparatively newer and less battle-tested — a fact that proved consequential.
- Proof quality: Does the scheme have a tight reduction to a well-known hard problem, or does it rely on heuristic security arguments? NIST valued schemes with clean, well-understood security proofs.
- Resistance to side-channel attacks: Can the algorithm be implemented securely in constant time? Are there inherent structural properties that make side-channel protection difficult?
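The five security categories lend themselves to a small lookup. The sketch below is a simplification: the bit counts are the classical work factors of the reference attacks, whereas NIST's formal definitions are stated in terms of classical and quantum gate counts. The helper `minimum_category` is illustrative, not part of any NIST specification.

```python
# NIST category -> (reference attack, classical work factor in bits).
# Simplified: NIST's formal definitions count classical and quantum gates,
# not bare bit-security numbers.
CATEGORIES = {
    1: ("AES-128 key search", 128),
    2: ("SHA-256 collision search", 128),
    3: ("AES-192 key search", 192),
    4: ("SHA-384 collision search", 192),
    5: ("AES-256 key search", 256),
}

def minimum_category(target_bits):
    """Smallest NIST category whose reference attack costs at least target_bits."""
    for level in sorted(CATEGORIES):
        if CATEGORIES[level][1] >= target_bits:
            return level
    raise ValueError(f"no NIST category reaches {target_bits}-bit classical security")
```

For example, a requirement of 192-bit classical security maps to category 3 (AES-192), while 128 bits is already met by category 1.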
3.2 Performance
Performance was evaluated holistically, not as a single metric:
- Key generation time: How long does it take to generate a keypair?
- Encapsulation / signing time: Throughput for the core cryptographic operation
- Decapsulation / verification time: Critical for high-volume server workloads
- Suitability for constrained environments: Can it run on IoT devices, smart cards, embedded systems?
NIST explicitly stated that performance was secondary to security but would serve as a tiebreaker between schemes with comparable security arguments.
3.3 Key and Ciphertext / Signature Sizes
This criterion had enormous practical weight. Post-quantum algorithms universally produce larger keys, ciphertexts, and signatures than their classical counterparts. The impact on protocols like TLS, SSH, X.509 certificates, and DNSSEC varies dramatically by algorithm family:
| Classical Algorithm | Public Key | Signature / Ciphertext |
|---|---|---|
| RSA-2048 | 256 bytes | 256 bytes |
| ECDSA P-256 | 64 bytes | 64 bytes |
| Ed25519 | 32 bytes | 64 bytes |
Post-quantum sizes range from slightly larger (lattice-based KEMs) to dramatically larger (code-based KEMs, hash-based signatures). The full comparison is in Section 8.
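To make the size impact concrete, here is a back-of-the-envelope comparison of a three-certificate chain signed with ECDSA P-256 versus ML-DSA-44, using the sizes quoted in this page's tables. This is a rough model: each certificate is counted as one public key plus one signature, and real X.509 encoding and extension overhead is ignored.

```python
# Rough model: each certificate in a chain contributes one public key and
# one signature; real X.509 adds encoding and extension overhead on top.
ECDSA_P256 = {"public_key": 64, "signature": 64}      # bytes, from the table above
ML_DSA_44  = {"public_key": 1312, "signature": 2420}  # bytes, FIPS 204 Level 2

def chain_overhead(alg, certs=3):
    return certs * (alg["public_key"] + alg["signature"])

classical_bytes = chain_overhead(ECDSA_P256)   # 3 * 128  = 384
pq_bytes        = chain_overhead(ML_DSA_44)    # 3 * 3732 = 11196
growth          = pq_bytes / classical_bytes   # roughly 29x more handshake bytes
```

Even this crude model shows why size was weighted so heavily: swapping one signature algorithm multiplies certificate-chain bytes by more than an order of magnitude.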
3.4 Implementation Characteristics
- Algorithm simplicity: Simpler algorithms have smaller attack surfaces for implementation bugs
- Flexibility: Can the algorithm support multiple security levels from the same core design?
- Patent status: NIST required that selected algorithms be available royalty-free worldwide
- Existing implementation quality: Availability of reference and optimized implementations
- Misuse resistance: How badly can an implementer break the security by making common mistakes?
4. The FIPS Winners
On August 13, 2024, NIST published three Federal Information Processing Standards — the culmination of nearly eight years of evaluation.
FIPS Winners Table
| Standard | Algorithm Name | Original Submission Name | Type | Mathematical Family | Security Levels |
|---|---|---|---|---|---|
| FIPS 203 | ML-KEM | CRYSTALS-Kyber | Key Encapsulation Mechanism | Module Lattices (MLWE) | 1, 3, 5 |
| FIPS 204 | ML-DSA | CRYSTALS-Dilithium | Digital Signature | Module Lattices (MLWE/MSIS) | 2, 3, 5 |
| FIPS 205 | SLH-DSA | SPHINCS+ | Digital Signature | Hash-based (stateless) | 1, 3, 5 |
A fourth algorithm, Falcon (now FN-DSA), was also selected for standardization but its FIPS publication is still pending as of early 2025 due to the complexity of specifying its discrete Gaussian sampling correctly.
4.1 FIPS 203: ML-KEM (CRYSTALS-Kyber)
What it does: ML-KEM is a key encapsulation mechanism — it allows two parties to establish a shared secret key over an insecure channel. It replaces ECDH / RSA key exchange in protocols like TLS 1.3, SSH, and IPsec.
How it works: ML-KEM is based on the Module Learning With Errors (MLWE) problem. The public key is a noisy system of linear equations over a polynomial ring. Encapsulation adds additional noise to encode the shared secret, and decapsulation uses the secret key to strip the noise and recover the shared secret. The “module” variant structures the lattice as a matrix of polynomial ring elements, providing a favorable balance between security reduction quality and efficiency.
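The "noisy linear equations" idea can be demonstrated with a toy Regev-style LWE scheme that encrypts a single bit. This is an illustration of the principle only, not ML-KEM: the parameters are far too small to be secure, there is no Fujisaki-Okamoto transform, and ML-KEM works over polynomial rings with ciphertext compression. Only the modulus q = 3329 is borrowed from the real scheme.

```python
import random

Q = 3329     # modulus (same q as ML-KEM, for flavor; everything else is toy-sized)
N_DIM = 16   # secret dimension (real schemes work in polynomial rings, n = 256)
M = 32       # number of noisy equations in the public key

rng = random.Random(42)

def keygen():
    # Public key: a noisy linear system (A, b = A*s + e mod Q); secret key: s.
    s = [rng.randrange(Q) for _ in range(N_DIM)]
    A = [[rng.randrange(Q) for _ in range(N_DIM)] for _ in range(M)]
    e = [rng.choice([-1, 0, 1]) for _ in range(M)]   # small per-equation noise
    b = [(sum(A[i][j] * s[j] for j in range(N_DIM)) + e[i]) % Q for i in range(M)]
    return s, (A, b)

def encrypt(pk, bit):
    # Combine a random subset of the equations, then hide the bit in the
    # high-order half of the scalar (bit * Q/2) so small noise cannot flip it.
    A, b = pk
    r = [rng.randrange(2) for _ in range(M)]
    u = [sum(r[i] * A[i][j] for i in range(M)) % Q for j in range(N_DIM)]
    v = (sum(r[i] * b[i] for i in range(M)) + bit * (Q // 2)) % Q
    return u, v

def decrypt(s, ct):
    # Strip the structured part using s; what remains is bit*Q/2 plus small noise.
    u, v = ct
    d = (v - sum(u[j] * s[j] for j in range(N_DIM))) % Q
    return 1 if Q // 4 <= d < 3 * Q // 4 else 0

secret, public_key = keygen()
ct_zero = encrypt(public_key, 0)
ct_one = encrypt(public_key, 1)
```

Decryption works because the accumulated noise (at most 32 here) is far smaller than Q/4, so the decrypted value lands near 0 for a zero bit and near Q/2 for a one bit.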
Parameter sets:
| Parameter Set | Security Level | Public Key | Ciphertext | Shared Secret | Encaps Time (approx) |
|---|---|---|---|---|---|
| ML-KEM-512 | 1 (AES-128) | 800 bytes | 768 bytes | 32 bytes | ~30 us |
| ML-KEM-768 | 3 (AES-192) | 1,184 bytes | 1,088 bytes | 32 bytes | ~50 us |
| ML-KEM-1024 | 5 (AES-256) | 1,568 bytes | 1,568 bytes | 32 bytes | ~70 us |
Why it won: Kyber offered the best overall balance of security, performance, and size among the KEM candidates. Its public keys and ciphertexts are dramatically smaller than code-based alternatives (Classic McEliece has ~260 KB public keys). The MLWE problem is well-studied with over a decade of cryptanalytic attention. Kyber’s algebraic structure enables highly efficient implementations using Number Theoretic Transform (NTT) operations.
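The role of the NTT can be sketched with a toy transform over a small NTT-friendly prime. These are not Kyber's parameters (ML-KEM uses q = 3329, n = 256, and a negacyclic variant), and the transform is written O(n²) for clarity rather than as the fast radix-2 butterfly; the point is only the convolution theorem that makes polynomial multiplication cheap.

```python
# Toy NTT over Z_257 with n = 16; multiply polynomials mod (x^16 - 1).
Q = 257                            # small NTT-friendly prime: 16 divides Q - 1
N = 16
OMEGA = pow(3, (Q - 1) // N, Q)    # 3 generates Z_257*, so this is a primitive 16th root

def ntt(a):
    # Evaluate a at the N powers of OMEGA (O(N^2) for clarity; real code uses
    # an O(N log N) butterfly network).
    return [sum(a[j] * pow(OMEGA, i * j, Q) for j in range(N)) % Q for i in range(N)]

def intt(A):
    n_inv = pow(N, Q - 2, Q)           # modular inverses via Fermat's little theorem
    w_inv = pow(OMEGA, Q - 2, Q)
    return [n_inv * sum(A[j] * pow(w_inv, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

def poly_mul(a, b):
    # Convolution theorem: multiplication becomes cheap pointwise products.
    return intt([x * y % Q for x, y in zip(ntt(a), ntt(b))])

a = [1, 2, 3] + [0] * (N - 3)      # 1 + 2x + 3x^2
b = [5, 0, 1] + [0] * (N - 3)      # 5 + x^2
product = poly_mul(a, b)           # coefficients of (1 + 2x + 3x^2)(5 + x^2)
```

Multiplying in the transform domain is what lets ML-KEM implementations avoid quadratic-cost schoolbook polynomial multiplication.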
Key implementation considerations:
- Requires a good source of randomness for encapsulation
- Constant-time implementation is straightforward compared to NTRU or code-based schemes
- The Fujisaki-Okamoto transform provides CCA2 security from a CPA-secure core scheme
- ML-KEM-768 is the recommended default for most applications (NIST Level 3)
4.2 FIPS 204: ML-DSA (CRYSTALS-Dilithium)
What it does: ML-DSA is a digital signature scheme — it allows a signer to produce a signature that any verifier can check, providing authentication and non-repudiation. It replaces ECDSA, EdDSA, and RSA signatures.
How it works: ML-DSA is based on the Module Learning With Errors (MLWE) and Module Short Integer Solution (MSIS) problems over polynomial rings. Signing uses a Fiat-Shamir-with-aborts approach: the signer generates a random masking vector, computes a commitment, derives a challenge via hashing, and computes the response. If the response vector is “too large” (would leak information about the secret key), the signer rejects and restarts. This rejection sampling is critical for security.
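The abort loop can be simulated to see why signing time varies. The model below is a toy: it treats coefficients as independent scalars, uses ML-DSA-44's gamma1 and beta magnitudes but stands in a uniform random value for the challenge-times-secret term, and omits the scheme's other rejection conditions.

```python
import random

rng = random.Random(2024)

GAMMA = 2**17    # per-coefficient masking range (gamma1 in ML-DSA-44 is 2^17)
BETA = 78        # per-coefficient bound on c*s (ML-DSA-44's beta = tau * eta)
N = 1024         # response coefficients (4 polynomials of 256 coefficients)

def attempts_for_one_signature():
    """Count how many restarts the Fiat-Shamir-with-aborts loop needs."""
    attempts = 0
    while True:
        attempts += 1
        for _ in range(N):
            y = rng.randint(-(GAMMA - 1), GAMMA - 1)   # fresh random mask
            z = y + rng.randint(-BETA, BETA)           # response = mask + (challenge * secret)
            if abs(z) > GAMMA - 1 - BETA:              # would leak the secret: abort
                break
        else:
            return attempts                            # all coefficients safe: release z

average = sum(attempts_for_one_signature() for _ in range(500)) / 500
# In this toy model, signing needs a little under two attempts on average;
# the real scheme has additional rejection conditions and averages more.
```

The per-coefficient rejection probability is tiny, but with over a thousand coefficients the whole-vector restart probability is substantial, which is why ML-DSA signing latency is variable but bounded.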
Parameter sets:
| Parameter Set | Security Level | Public Key | Signature | Signing Time (approx) |
|---|---|---|---|---|
| ML-DSA-44 | 2 (SHA-256 collision) | 1,312 bytes | 2,420 bytes | ~100 us |
| ML-DSA-65 | 3 (AES-192) | 1,952 bytes | 3,309 bytes | ~160 us |
| ML-DSA-87 | 5 (AES-256) | 2,592 bytes | 4,627 bytes | ~250 us |
Why it won: Dilithium was the most balanced signature scheme across all evaluation criteria. Its signatures are larger than Falcon’s but the implementation is dramatically simpler — no discrete Gaussian sampling, no floating-point arithmetic, no complex tree structures. NIST explicitly valued this simplicity, noting that implementation errors are a leading source of real-world cryptographic vulnerabilities. Dilithium’s signing and verification are both fast, and the scheme handles batching well.
Key implementation considerations:
- Rejection sampling means signing time is variable (but bounded and typically fast)
- The deterministic signing variant (derandomized) eliminates the need for runtime randomness during signing
- Public keys are larger than ECDSA (1.3 KB vs 64 bytes) which impacts certificate chains and X.509 workflows
- ML-DSA-65 is the recommended default for general-purpose use
4.3 FIPS 205: SLH-DSA (SPHINCS+)
What it does: SLH-DSA is a stateless hash-based digital signature scheme. It serves as the conservative backup to lattice-based signatures — if MLWE/MSIS are ever broken (by quantum or classical means), SLH-DSA remains secure as long as the underlying hash function is secure.
How it works: SLH-DSA constructs a hypertree of many-time signatures built from one-time signature schemes (WOTS+). Signing traverses this tree structure, selecting leaves via a few-time signature scheme (FORS — Forest of Random Subsets). Security rests solely on standard properties of the underlying hash function (SHA-256 or SHAKE256), chiefly preimage and second-preimage resistance; the construction is deliberately designed not to depend on collision resistance. No number-theoretic assumptions are required.
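Why is stateless random leaf selection safe? A birthday-bound estimate makes it plausible. This is illustrative only: SLH-DSA actually derives the leaf index from a hash of the message and a randomizer rather than picking uniformly at random, and the hypertree height varies by parameter set; h = 64 is used here for round numbers.

```python
import math

H = 64    # hypertree height used for illustration; SLH-DSA sets use h in the 63-68 range

def index_reuse_probability(signatures):
    """Birthday bound on any two signatures selecting the same FORS leaf."""
    return 1 - math.exp(-signatures**2 / (2 * 2**H))

p_million = index_reuse_probability(2**20)   # ~10^6 signatures: negligible
p_billion = index_reuse_probability(2**30)   # ~10^9 signatures: around 3%
```

Even when a leaf index does repeat, the consequence is only that one FORS few-time key signs twice, which degrades security gracefully rather than breaking the scheme outright — this is why a few-time scheme, not a one-time scheme, sits at the leaves.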
Parameter sets (selected subset):
| Parameter Set | Security Level | Public Key | Signature | Signing Time (approx) |
|---|---|---|---|---|
| SLH-DSA-SHA2-128s | 1 | 32 bytes | 7,856 bytes | ~60 ms |
| SLH-DSA-SHA2-128f | 1 | 32 bytes | 17,088 bytes | ~4 ms |
| SLH-DSA-SHA2-192s | 3 | 48 bytes | 16,224 bytes | ~100 ms |
| SLH-DSA-SHA2-256s | 5 | 64 bytes | 29,792 bytes | ~200 ms |
| SLH-DSA-SHA2-256f | 5 | 64 bytes | 49,856 bytes | ~15 ms |
The “s” (small) variants produce smaller signatures but are slower. The “f” (fast) variants sign faster but produce larger signatures. This speed/size tradeoff is a fundamental characteristic of the hypertree design.
Why it was included: SLH-DSA is not the fastest or smallest signature scheme, but it provides crucial algorithmic diversity. Its security rests on symmetric-key cryptographic assumptions only — the most conservative and well-understood assumptions in all of cryptography. If a future breakthrough breaks lattice problems, SLH-DSA remains standing. NIST explicitly designated it as the “conservative option” and recommended it for applications where long-term security is paramount and performance is less critical.
Key implementation considerations:
- Signing is orders of magnitude slower than ML-DSA (milliseconds vs microseconds)
- Signatures are large (8-50 KB depending on parameter set), which is problematic for bandwidth-constrained protocols
- Verification is fast relative to signing
- Stateless design eliminates the state management burden of XMSS/LMS (which are already standardized in NIST SP 800-208)
- Public keys are very small (32-64 bytes) — the smallest of any PQC signature scheme
4.4 Pending: FN-DSA (Falcon)
What it is: FN-DSA (FFT-over-NTRU-lattice Digital Signature Algorithm) is NIST's designation for Falcon (Fast-Fourier Lattice-based Compact Signatures over NTRU), a signature scheme based on the NTRU lattice problem. It was selected alongside Dilithium and SPHINCS+ in July 2022, but its standardization has been delayed.
Why it matters: Falcon produces the smallest signatures of any lattice-based scheme selected by NIST:
| Parameter Set | Security Level | Public Key | Signature |
|---|---|---|---|
| FN-DSA-512 | 1 | 897 bytes | 666 bytes |
| FN-DSA-1024 | 5 | 1,793 bytes | 1,280 bytes |
At 666 bytes for Level 1, Falcon signatures are roughly 3.6x smaller than Dilithium’s. This makes Falcon attractive for protocols where signature size dominates bandwidth (certificate chains, blockchain, DNSSEC).
Why the delay: Falcon’s core operation requires sampling from a discrete Gaussian distribution over lattice cosets. This requires either:
- Double-precision floating-point arithmetic (which has portability and constant-time implementation challenges), or
- Integer-only approximations (which must be specified with extreme care to avoid subtle security degradations)
NIST is taking additional time to ensure the specification is precise enough that independent implementations will be interoperable and secure. A draft FIPS for FN-DSA is expected in 2025.
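To see why this is delicate, here is a naive floating-point rejection sampler for a discrete Gaussian over the integers: exactly the kind of straightforward implementation the FN-DSA specification must rule out, since it is neither constant-time nor bit-identical across floating-point environments. The width and tail cut are illustrative values, not Falcon's actual per-node parameters.

```python
import math
import random

rng = random.Random(9)

SIGMA = 4.0              # illustrative width only; Falcon uses different, varying sigmas
TAIL = int(12 * SIGMA)   # truncate the negligible tail

def sample_discrete_gaussian():
    # Rejection sampling: propose uniformly, accept with probability proportional
    # to the Gaussian weight. The exp() call and the data-dependent loop length
    # are precisely the side-channel and portability hazards described above.
    while True:
        z = rng.randint(-TAIL, TAIL)
        if rng.random() < math.exp(-z * z / (2 * SIGMA * SIGMA)):
            return z

samples = [sample_discrete_gaussian() for _ in range(20_000)]
mean = sum(samples) / len(samples)
std = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
```

A correct sampler is easy to write; a constant-time, portable, precisely specified one is the hard part, and that is what has held up the FN-DSA draft.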
Risk factors: Falcon’s implementation complexity means that misuse and side-channel vulnerabilities are more likely than with Dilithium. Organizations should default to ML-DSA unless they have a specific, measured need for Falcon’s smaller signatures and are confident in their implementation quality.
5. What Didn’t Make It — and Why
The NIST PQC process was also notable for the algorithms that were broken or eliminated. These failures provided invaluable lessons about the maturity of different mathematical assumptions.
5.1 SIKE — Catastrophically Broken (July 2022)
Algorithm: Supersingular Isogeny Key Encapsulation
What happened: On July 30, 2022, Wouter Castryck and Thomas Decru published a devastating attack that completely broke SIKE using a classical computer. The attack exploited the auxiliary torsion point information that SIKE published as part of its public key — information that was known to be theoretically risky but was thought to be safe in practice.
Impact: The attack ran on a single classical computer. The Level 1 parameter set (SIKEp434) fell in roughly an hour on one core, and even the highest security level (SIKEp751, targeting NIST Level 5) fell in under a day. This was not a marginal attack that slightly reduced the security level — it was a total break that recovered secret keys outright.
Technical root cause: The Castryck-Decru attack built on the torsion-point attack framework of Galbraith-Petit-Shani-Ti (GPST) combined with a 1997 theorem of Kani on reducible abelian surfaces (gluing elliptic curves along their torsion subgroups). Using this machinery, the attackers could recover the secret isogeny from the auxiliary torsion-point images. The fundamental mistake was publishing torsion point images alongside the public key — a design decision that seemed necessary for non-interactive key exchange but created a fatal information leakage.
Aftermath: SIKE was immediately withdrawn from Round 4 consideration. The break also cast doubt on the broader SIDH (Supersingular Isogeny Diffie-Hellman) framework, though related isogeny constructions like CSIDH (which do not publish torsion point information) remain unbroken. The incident validated NIST’s decision to require multiple algorithm families — if SIKE had been the sole KEM selection, the entire standardization effort would have been compromised.
Lesson for security professionals: The maturity of the underlying mathematical assumption matters enormously. Isogeny-based cryptography was the youngest family under consideration, with the least accumulated cryptanalytic attention. “Novel” and “efficient” are not always virtues in cryptographic standardization.
Timeline of the collapse:
- 2016-2021: SIKE progressed through rounds with no practical attacks; some theoretical concerns about torsion point information were published but dismissed as non-exploitable
- July 2022: NIST advanced SIKE to Round 4, expressing continued confidence
- July 30, 2022: Castryck and Decru posted their preprint to ePrint
- Early August 2022: Independent reproductions confirmed the break within days; additional researchers (Maino-Martindale, Robert) published improved and generalized attacks
- August 2022: The SIKE team acknowledged the break and withdrew from the competition
The speed of the collapse — from "Round 4 candidate" to "completely broken" in a matter of weeks — underscored why algorithmic diversity and conservative assumptions are essential in standardization.
5.2 Rainbow — Broken Before Round 3 Concluded (February 2022)
Algorithm: Rainbow (multivariate quadratic signature scheme)
What happened: In February 2022, Ward Beullens published “Breaking Rainbow Takes a Weekend on a Laptop,” demonstrating a practical key-recovery attack against all proposed Rainbow parameter sets. The attack combined a rectangular MinRank attack with the specific structure of Rainbow’s layered construction to reduce the security levels dramatically below their claimed targets.
Impact: The Level 1 parameter set (targeting 128-bit security) was broken in approximately 53 hours on a standard laptop. Higher parameter sets fell with proportionally more (but still feasible) compute.
Technical root cause: Rainbow is a layered unbalanced Oil-and-Vinegar (UOV) construction. Beullens showed that Rainbow’s specific layered structure leaked information about the secret key’s internal decomposition, enabling a rectangular MinRank attack that is far more efficient against Rainbow than against generic multivariate systems. The broader UOV scheme (without Rainbow’s layered structure) was not directly affected.
Aftermath: Rainbow was removed from NIST consideration. The broader multivariate quadratic (MQ) signature family continues to be studied, and UOV variants (without the layered structure) remain candidates in other standardization efforts. Notably, UOV has been submitted to NIST’s additional signature call and is considered a strong candidate precisely because it avoids the layered structure that made Rainbow vulnerable.
Lesson for security professionals: Structural optimizations that improve performance can simultaneously introduce exploitable mathematical structure. Rainbow’s layered design made keys smaller and operations faster than plain UOV, but those same layers created the attack vector. This is a recurring theme in cryptographic design — efficiency and security are often in tension.
5.3 GeMSS
Algorithm: Great Multivariate Short Signature
Status: Round 3 alternate, eliminated at the end of Round 3. GeMSS produced very small signatures but had extremely large public keys (hundreds of kilobytes to megabytes) and very slow signing. Cryptanalytic progress further reduced confidence in its security margins.
5.4 Classic McEliece
Algorithm: Classic McEliece (code-based KEM)
Status: Advanced to Round 4 but was not selected. Classic McEliece is based on the Niederreiter dual of the original McEliece cryptosystem (1978), making it one of the oldest and most conservative post-quantum proposals. Its security reduction to the well-studied syndrome decoding problem is tight and well-understood.
Why it wasn’t selected: Public keys are enormous — approximately 261 KB for NIST Level 1, scaling to over 1 MB for Level 5. This makes Classic McEliece impractical for most interactive protocols (TLS handshakes, SSH key exchange) where key material must be transmitted. NIST acknowledged that Classic McEliece has one of the strongest security arguments of any submission but concluded that the key sizes preclude general-purpose standardization.
Classic McEliece may still find a role in specific applications where public keys can be pre-distributed (e.g., long-lived device certificates, firmware signing) or where the strongest possible security argument is required regardless of bandwidth cost.
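The pre-distribution argument can be quantified: because Classic McEliece's ciphertexts are far smaller than ML-KEM's, total bytes transferred eventually favor it once the one-time key transfer is amortized. The model below uses the Level 1 sizes quoted on this page and counts pure bandwidth, ignoring storage cost and protocol framing.

```python
# Bytes on the wire to ship one public key plus k ciphertexts (Level 1 sizes
# from the tables on this page); ignores storage cost and protocol framing.
def total_bytes(public_key, ciphertext, k):
    return public_key + k * ciphertext

MCELIECE = (261_120, 128)   # Classic McEliece 348864
ML_KEM_512 = (800, 768)

# First k at which McEliece's tiny ciphertexts outweigh its huge one-time key
break_even = next(k for k in range(1, 10_000)
                  if total_bytes(*MCELIECE, k) < total_bytes(*ML_KEM_512, k))
# break_even == 407: only after ~400 encapsulations against the same key
```

In an interactive handshake the key is sent once per connection, so this break-even is never reached, which is exactly why NIST judged the key sizes disqualifying for general-purpose use.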
5.5 NTRU
Algorithm: NTRU (lattice-based KEM)
Status: Round 3 finalist, not selected. NTRU was one of the oldest lattice-based schemes (proposed in 1996) and had significant industry deployment history. It was ultimately passed over in favor of Kyber, which offered better performance and comparable security from a closely related (MLWE vs NTRU) mathematical foundation. NIST concluded that standardizing both Kyber and NTRU would provide insufficient algorithmic diversity since both are lattice-based.
5.6 Other Notable Eliminations
| Algorithm | Family | Round Eliminated | Primary Reason |
|---|---|---|---|
| BIKE | Code-based KEM | Round 4 (not selected) | IND-CCA2 security proof concerns; decoding failure rate |
| FrodoKEM | Lattice-based KEM | Round 3 (alternate) | Conservative unstructured-LWE design, but significantly slower and larger than Kyber |
| NewHope | Lattice-based KEM | Round 2 | Merged effort into Kyber; similar design space |
| SABER | Lattice-based KEM | Round 3 finalist | Very similar performance profile to Kyber; NIST chose one lattice KEM |
| Picnic | Zero-knowledge signature | Round 3 alternate | Signature sizes too large; signing too slow for general use |
| LUOV | Multivariate signature | Round 2 | Cryptanalytic attacks reduced security margins |
6. Naming Changes: From Submissions to Standards
When NIST publishes FIPS standards, algorithms receive new standardized names. This has created confusion in the community, as the research literature, early implementations, and popular press use the original submission names, while the official standards use the new designations.
Name Mapping
| Original Submission Name | NIST Standard Name | Standard Document | Naming Convention |
|---|---|---|---|
| CRYSTALS-Kyber | ML-KEM | FIPS 203 | Module Lattice - Key Encapsulation Mechanism |
| CRYSTALS-Dilithium | ML-DSA | FIPS 204 | Module Lattice - Digital Signature Algorithm |
| SPHINCS+ | SLH-DSA | FIPS 205 | Stateless Hash-based - Digital Signature Algorithm |
| Falcon | FN-DSA | (pending) | FFT over NTRU lattice - Digital Signature Algorithm |
| HQC | (not yet assigned) | (pending) | To be determined; may retain the HQC name or receive a code-based designation |
Why the Renaming?
NIST standardized names follow a descriptive convention that encodes the algorithm family and function type directly in the name. This approach:
- Eliminates trademark issues: Original names like “Kyber” and “Dilithium” (from Star Wars and Star Trek, respectively) have potential trademark complications
- Communicates function: “KEM” and “DSA” immediately tell an implementer what the algorithm does
- Communicates family: “ML” (Module Lattice) and “SLH” (Stateless Hash-based) signal the underlying mathematical approach
- Aligns with NIST naming conventions: Consistent with how NIST names other standards (AES, SHA, DSA)
Practical Impact
Security professionals should be aware that:
- Library implementations may use either name (or both): e.g., `kyber768` vs `ML-KEM-768`
- IETF RFCs and drafts have adopted the NIST names for new specifications
- Legacy documentation and academic papers will continue to reference the original names
- Configuration files, API names, and protocol negotiation identifiers may use either naming scheme depending on when the software was developed
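A defensive pattern when integrating multiple libraries is to normalize both naming schemes at a single boundary. This is a minimal sketch using the mapping from the table above; real code would also need to handle parameter-set suffixes (e.g., `-768`) and library-specific spellings.

```python
# Submission name -> NIST standard name (from the mapping table above)
SUBMISSION_TO_NIST = {
    "CRYSTALS-Kyber": "ML-KEM",
    "CRYSTALS-Dilithium": "ML-DSA",
    "SPHINCS+": "SLH-DSA",
    "Falcon": "FN-DSA",
}

def canonical_name(name):
    """Return the NIST name whether given a submission name or a NIST name."""
    if name in SUBMISSION_TO_NIST:
        return SUBMISSION_TO_NIST[name]
    if name in SUBMISSION_TO_NIST.values():
        return name
    raise ValueError(f"unrecognized algorithm name: {name!r}")
```

Centralizing the translation keeps configuration files and logs consistent even when the underlying libraries disagree on naming.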
7. Round 4 and Ongoing Selections
Round 4 KEMs
After announcing the initial winners in July 2022, NIST advanced four additional KEM candidates to a fourth evaluation round:
| Algorithm | Family | Key Feature | Status |
|---|---|---|---|
| HQC | Code-based | Hamming Quasi-Cyclic codes | Selected (March 2025) |
| BIKE | Code-based | Bit Flipping Key Encapsulation | Not selected |
| Classic McEliece | Code-based | Niederreiter dual of McEliece (1978) | Not selected (key sizes) |
| SIKE | Isogeny-based | Supersingular isogeny DH | Broken (July 2022); withdrawn |
HQC: The Round 4 Winner
In March 2025, NIST announced the selection of HQC (Hamming Quasi-Cyclic) as an additional KEM standard. HQC is based on the decisional syndrome decoding problem for structured (quasi-cyclic) codes, providing algorithmic diversity from ML-KEM’s lattice-based foundation.
HQC properties:
| Parameter Set | Security Level | Public Key | Ciphertext |
|---|---|---|---|
| HQC-128 | 1 | ~2,249 bytes | ~4,497 bytes |
| HQC-192 | 3 | ~4,522 bytes | ~9,042 bytes |
| HQC-256 | 5 | ~7,245 bytes | ~14,485 bytes |
HQC’s keys and ciphertexts are significantly larger than ML-KEM’s: public keys are roughly 3-4.5x larger and ciphertexts roughly 6-9x larger, depending on the parameter set. However, HQC provides critical backup diversity: if a breakthrough attack targets MLWE (the problem underlying ML-KEM), HQC remains secure because its security rests on code-based assumptions, an entirely separate mathematical foundation.
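Using the parameter tables on this page, the overhead relative to ML-KEM works out as follows (sizes in bytes; the HQC figures are the approximate values quoted above):

```python
# (public key, ciphertext) in bytes at each NIST level, from the tables on this page
ML_KEM = {1: (800, 768), 3: (1184, 1088), 5: (1568, 1568)}
HQC    = {1: (2249, 4497), 3: (4522, 9042), 5: (7245, 14485)}

ratios = {
    level: (HQC[level][0] / ML_KEM[level][0],   # public-key overhead
            HQC[level][1] / ML_KEM[level][1])   # ciphertext overhead
    for level in (1, 3, 5)
}
# Level 1: pk ~2.8x, ct ~5.9x; Level 5: pk ~4.6x, ct ~9.2x
```

The overhead grows with the security level, which matters when budgeting handshake bytes for Level 5 deployments.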
Why HQC over BIKE or Classic McEliece?
- vs BIKE: HQC has a cleaner IND-CCA2 security argument. BIKE’s decoding failure rate introduced complexity in the CCA transformation that reviewers found less satisfactory.
- vs Classic McEliece: While Classic McEliece has the strongest security argument, its public keys (261 KB+) are impractical for most use cases. HQC provides code-based diversity with manageable key sizes.
Additional Signature Candidates
NIST also issued a separate call for additional digital signature algorithms, with submissions due June 2023, focused on schemes that do not rely on structured lattices. The call drew 50 submissions, of which 40 advanced to the first evaluation round. NIST is evaluating candidates that offer:
- Shorter signatures than SLH-DSA
- Different security assumptions from ML-DSA
- Suitability for specific use cases (e.g., certificate transparency, IoT)
Notable submissions under evaluation include:
| Candidate | Family | Key Feature | Signature Size (Level 1) |
|---|---|---|---|
| UOV | Multivariate (Oil-and-Vinegar) | Well-studied multivariate scheme; no layered structure (unlike broken Rainbow) | ~96 bytes |
| MAYO | Multivariate (whipped UOV) | Compressed UOV variant with smaller public keys | ~321 bytes |
| SQIsign | Isogeny-based | Extremely compact signatures (~177 B); slow signing | ~177 bytes |
| CROSS | Code-based (zero-knowledge) | Based on restricted syndrome decoding | ~5-13 KB |
| LESS | Code-based (equivalence) | Based on linear code equivalence problem | ~6-8 KB |
| HAWK | Lattice-based | Based on the lattice isomorphism problem; avoids Falcon's floating-point Gaussian sampling | ~555 bytes |
Selections from this additional call are expected to be announced in 2026-2027. SQIsign is particularly interesting — it offers signature sizes competitive with classical ECDSA (~177 bytes at Level 1) but with very slow signing (~1 second). If SQIsign survives cryptanalysis, it could become the preferred scheme for applications where signature size is critical and signing speed is not (e.g., certificate authorities, root-of-trust operations).
NIST PQC Round Flow Diagram
The following diagram illustrates the complete elimination and selection flow across all rounds.
```mermaid
graph TD
START["82 Submissions<br/>(Nov 2017)"] --> R1_FILTER["Initial Screening"]
R1_FILTER -->|"13 withdrawn/rejected"| R1["Round 1<br/>69 Candidates"]
R1 -->|"43 eliminated"| R2["Round 2<br/>26 Candidates"]
R2 -->|"11 eliminated"| R3["Round 3<br/>15 Candidates"]
R3 --> FINALISTS["7 Finalists"]
R3 --> ALTERNATES["8 Alternates"]
FINALISTS --> KEM_F["KEM Finalists"]
FINALISTS --> SIG_F["Signature Finalists"]
KEM_F --> KYBER["CRYSTALS-Kyber<br/>(lattice)"]
KEM_F --> NTRU_F["NTRU<br/>(lattice)"]
KEM_F --> SABER_F["SABER<br/>(lattice)"]
KEM_F --> MCELIECE_F["Classic McEliece<br/>(code-based)"]
SIG_F --> DILITHIUM["CRYSTALS-Dilithium<br/>(lattice)"]
SIG_F --> FALCON["Falcon<br/>(lattice/NTRU)"]
SIG_F --> SPHINCS["SPHINCS+<br/>(hash-based)"]
ALTERNATES --> R4["Round 4 KEMs"]
R4 --> HQC_R4["HQC<br/>(code-based)"]
R4 --> BIKE_R4["BIKE<br/>(code-based)"]
R4 --> SIKE_R4["SIKE<br/>(isogeny)"]
R4 --> MCE_R4["Classic McEliece<br/>(code-based)"]
KYBER -->|"FIPS 203"| WIN_KEM["ML-KEM ✓"]
DILITHIUM -->|"FIPS 204"| WIN_SIG1["ML-DSA ✓"]
SPHINCS -->|"FIPS 205"| WIN_SIG2["SLH-DSA ✓"]
FALCON -->|"Pending FIPS"| WIN_SIG3["FN-DSA ✓"]
HQC_R4 -->|"Mar 2025"| WIN_KEM2["HQC ✓"]
SIKE_R4 -->|"BROKEN Jul 2022"| BROKEN1["✗ Castryck-Decru<br/>Attack"]
style WIN_KEM fill:#16213e,stroke:#e94560,color:#eee
style WIN_SIG1 fill:#16213e,stroke:#e94560,color:#eee
style WIN_SIG2 fill:#16213e,stroke:#e94560,color:#eee
style WIN_SIG3 fill:#16213e,stroke:#0f3460,color:#eee
style WIN_KEM2 fill:#16213e,stroke:#0f3460,color:#eee
style BROKEN1 fill:#e94560,stroke:#e94560,color:#fff
style START fill:#1a1a2e,stroke:#e94560,color:#eee
```
8. Comprehensive Algorithm Comparison
The following table compares all major algorithms that reached the finalist stage or beyond, providing the metrics that matter most for deployment decisions.
KEM Comparison (NIST Level 1 / equivalent)
| Algorithm | Public Key | Ciphertext | Shared Secret | KeyGen | Encaps | Decaps | Status |
|---|---|---|---|---|---|---|---|
| ML-KEM-512 | 800 B | 768 B | 32 B | ~25 us | ~30 us | ~30 us | FIPS 203 |
| NTRU-HPS-509 | 699 B | 699 B | 32 B | ~40 us | ~15 us | ~15 us | Not selected |
| SABER-LightSaber | 672 B | 736 B | 32 B | ~30 us | ~35 us | ~35 us | Not selected |
| HQC-128 | ~2,249 B | ~4,497 B | 32 B | ~50 us | ~80 us | ~120 us | Selected (R4) |
| Classic McEliece 348864 | 261,120 B | 128 B | 32 B | ~200 ms | ~30 us | ~50 us | Not selected |
| BIKE-L1 | ~1,541 B | ~1,573 B | 32 B | ~500 us | ~60 us | ~800 us | Not selected |
| SIKE-p434 | 330 B | 346 B | 16 B | ~5 ms | ~8 ms | ~8 ms | Broken |
Key observations:
- ML-KEM dominates in the balance of speed and size — no other finalist matches its overall profile
- Classic McEliece has the smallest ciphertext (128 bytes!) but enormous public keys
- SIKE had the smallest keys and ciphertexts of all candidates, which made its collapse particularly dramatic
- HQC provides the code-based diversity that NIST wanted at a moderate size cost
Signature Comparison (NIST Level 1-2 / equivalent)
| Algorithm | Public Key | Signature | KeyGen | Sign | Verify | Status |
|---|---|---|---|---|---|---|
| ML-DSA-44 | 1,312 B | 2,420 B | ~80 us | ~100 us | ~50 us | FIPS 204 |
| FN-DSA-512 | 897 B | 666 B | ~8 ms | ~400 us | ~80 us | Selected (pending FIPS) |
| SLH-DSA-SHA2-128f | 32 B | 17,088 B | ~1 us | ~4 ms | ~250 us | FIPS 205 |
| SLH-DSA-SHA2-128s | 32 B | 7,856 B | ~1 us | ~60 ms | ~3 ms | FIPS 205 |
| Rainbow-I | 161,600 B | 66 B | ~50 ms | ~1 ms | ~1 ms | Broken |
| Picnic-L1-full | 35 B | 34,036 B | ~2 us | ~50 ms | ~30 ms | Not selected |
| GeMSS-128 | 352,188 B | 33 B | ~15 s | ~15 s | ~5 ms | Not selected |
Key observations:
- Rainbow had the smallest signatures (66 bytes!) — smaller than classical ECDSA — before it was broken
- GeMSS had even smaller signatures (33 bytes) but with enormous keys and signing times in the seconds range
- SLH-DSA trades speed for the strongest security assumption (hash functions only)
- ML-DSA is the clear general-purpose winner — good at everything, best at nothing except simplicity
- FN-DSA produces the smallest signatures of any NIST-selected post-quantum algorithm
Classical vs Post-Quantum Size Comparison
For perspective on the migration impact:
| Operation | Classical | Post-Quantum (recommended) | Size Increase |
|---|---|---|---|
| Key exchange (public key) | ECDH P-256: 64 B | ML-KEM-768: 1,184 B | ~18.5x |
| Key exchange (ciphertext) | ECDH P-256: 64 B | ML-KEM-768: 1,088 B | ~17x |
| Signature (public key) | Ed25519: 32 B | ML-DSA-65: 1,952 B | ~61x |
| Signature (signature) | Ed25519: 64 B | ML-DSA-65: 3,309 B | ~52x |
| TLS 1.3 handshake overhead | ~1 KB crypto | ~8 KB crypto (hybrid) | ~8x |
These increases are manageable for most modern networks but can be significant for constrained environments, satellite links (see Satellite Security for related challenges), and protocols with tight size limits.
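The multipliers in the table follow directly from the raw byte counts. As a quick sanity check, using the sizes listed above:

```python
# Byte sizes taken from the comparison table above (classical vs recommended PQC).
pairs = {
    "key exchange public key": (64, 1184),   # ECDH P-256 vs ML-KEM-768
    "key exchange ciphertext": (64, 1088),   # ECDH P-256 vs ML-KEM-768
    "signature public key":    (32, 1952),   # Ed25519 vs ML-DSA-65
    "signature":               (64, 3309),   # Ed25519 vs ML-DSA-65
}

for name, (classical, pq) in pairs.items():
    print(f"{name}: {classical} B -> {pq} B ({pq / classical:.1f}x)")
```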
Protocol-Specific Impact Analysis
DNSSEC: Perhaps the most severely impacted protocol. DNS responses are constrained by UDP packet sizes (typically 1,232 bytes for EDNS). A single ML-DSA-65 signature (3,309 bytes) exceeds this limit, forcing TCP fallback and significantly increasing DNS resolution latency. DNSSEC migration to PQC signatures is an active area of IETF research with no clear solution yet — SLH-DSA’s 7-17 KB signatures and FN-DSA’s 666-byte signatures represent opposite ends of the tradeoff space.
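To make the DNSSEC constraint concrete, here is a minimal sketch; the 1,232-byte EDNS buffer and the signature sizes come from this page, while the 300-byte unsigned response is an assumed example value:

```python
EDNS_UDP_LIMIT = 1232  # common EDNS0 UDP buffer size, in bytes

# Signature sizes from the comparison tables on this page.
SIGNATURE_BYTES = {
    "FN-DSA-512": 666,
    "ML-DSA-65": 3309,
    "SLH-DSA-SHA2-128s": 7856,
}

def fits_in_udp(base_response_bytes: int, algorithm: str) -> bool:
    """True if the response plus one signature stays within the EDNS UDP limit."""
    return base_response_bytes + SIGNATURE_BYTES[algorithm] <= EDNS_UDP_LIMIT

# Assuming a 300-byte unsigned response, only FN-DSA-512 avoids TCP fallback.
for alg in SIGNATURE_BYTES:
    print(alg, "fits" if fits_in_udp(300, alg) else "forces TCP fallback")
```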
IoT and Embedded Systems: Constrained devices (ARM Cortex-M class) face challenges with ML-KEM’s memory requirements for NTT computation (~20 KB RAM for ML-KEM-768). Stack-optimized implementations exist but require careful engineering. SLH-DSA verification on constrained devices is feasible but signing is typically offloaded to more capable hardware.
Blockchain and Distributed Ledgers: Every signature is stored permanently and verified by every node. The 50x increase in signature size from ECDSA to ML-DSA has direct cost implications for blockchain storage and bandwidth. Some blockchain projects are exploring aggregate or batch verification techniques to mitigate the impact.
For detailed migration planning, see PQC Migration Strategies.
9. Global Alignment and International Positions
The NIST PQC process was U.S.-led but has had global influence. However, not all national bodies have aligned perfectly with NIST’s selections.
European Union
ETSI (European Telecommunications Standards Institute): ETSI’s Quantum-Safe Cryptography (QSC) working group has published guidance generally aligned with NIST selections. ETSI TR 103 616 provides a framework for quantum-safe migration, and the organization recommends ML-KEM and ML-DSA as primary algorithms.
BSI (German Federal Office for Information Security): The BSI has taken a notably more cautious position than NIST. Key recommendations include:
- Mandatory hybrid mode: BSI requires that post-quantum algorithms be deployed alongside classical algorithms (e.g., ML-KEM + ECDH) during the transition period, ensuring that security is maintained even if the PQC algorithm is broken
- FrodoKEM endorsement: BSI has expressed preference for FrodoKEM (based on unstructured LWE) over ML-KEM, arguing that the algebraic structure in MLWE could theoretically be exploited. NIST carried FrodoKEM into Round 3 as an alternate candidate but ultimately did not standardize it, citing its larger sizes and slower performance
- Higher parameter recommendations: BSI recommends higher security parameter sets than the minimums NIST designates as acceptable
ANSSI (French National Agency for the Security of Information Systems): ANSSI has published specific positions:
- Endorses NIST’s ML-KEM and ML-DSA selections
- Requires hybrid key exchange (combining classical and post-quantum) for all applications handling classified or sensitive data
- Published technical guidance on implementing hybrid TLS with ML-KEM + X25519
- Recommends FN-DSA (Falcon) for applications where signature size is critical
Japan
CRYPTREC (Cryptography Research and Evaluation Committees): Japan’s cryptographic evaluation body has:
- Added ML-KEM and ML-DSA to the CRYPTREC “candidate” recommended list (not yet the primary “e-Government” list)
- Published evaluation reports on all NIST finalists
- Conducted independent performance benchmarking on Japanese computing infrastructure
- Expressed interest in lattice-based schemes but has not yet issued migration timelines
Other National Positions
| Country/Body | Position | Notable Differences from NIST |
|---|---|---|
| UK (NCSC) | Recommends preparing for PQC migration; no mandated timeline yet | Emphasizes hybrid deployment; cautious on timeline |
| Canada (CCCS) | Aligned with NIST selections | Co-authored guidance with NSA/CISA on PQC migration |
| Australia (ASD) | Follows NIST recommendations | Integrated PQC guidance into ISM (Information Security Manual) |
| South Korea (NIS) | Evaluating NIST selections for KCS (Korean Cryptographic Standard) | Conducting independent evaluation track |
| China (OSCCA) | Developing independent PQC standards | Lattice-based focus; separate algorithm selection process |
| ISO/IEC | Working on ISO/IEC 18033-x updates | Will likely standardize NIST-selected algorithms with ISO identifiers |
| India (CCA/STQC) | Monitoring NIST process | No independent PQC standardization effort announced |
| Singapore (CSA) | Aligned with NIST; published quantum-safe transition guide | Emphasis on financial sector migration |
| NATO | Developing quantum-resistant communications standards | Classified timeline; hybrid deployment mandated for new systems |
China’s Independent Path
China’s approach to PQC standardization deserves special attention. The Office of the State Commercial Cryptography Administration (OSCCA) has historically maintained separate cryptographic standards (SM2, SM3, SM4) rather than adopting NIST standards directly. For PQC, China is:
- Running its own evaluation process through the Chinese Association for Cryptologic Research (CACR)
- Focusing heavily on lattice-based constructions, particularly schemes derived from NTRU and LWE
- Publishing research on quantum computing capabilities that informs its timeline estimates
- Developing PQC standards that may or may not be interoperable with NIST selections
For organizations operating across jurisdictions, this divergence means potentially maintaining multiple PQC algorithm implementations — one set for compliance with U.S./European standards and another for Chinese market requirements.
The Hybrid Deployment Consensus
One area of broad international agreement is the recommendation for hybrid deployment during the transition period. Hybrid mode combines a classical algorithm with a post-quantum algorithm such that the resulting scheme is secure if either component is secure. For example:
- Hybrid KEM: X25519 + ML-KEM-768 (used in Chrome, Cloudflare, and AWS since 2023-2024)
- Hybrid signatures: ECDSA P-256 + ML-DSA-44 (for certificate chains)
The rationale is simple: post-quantum algorithms are younger and less battle-tested than ECDH and ECDSA. Hybrid mode provides a safety net during the transition period. Most national bodies recommend hybrid deployment for at least the first 5-10 years of PQC adoption.
How hybrid combining works: In a hybrid KEM, both component KEMs (e.g., X25519 and ML-KEM-768) are executed independently, and the resulting shared secrets are combined using a key derivation function (KDF). The combined shared secret is secure as long as at least one of the two component KEMs remains unbroken. This “security OR” property means hybrid deployment is strictly better than either component alone during a period of uncertainty.
The overhead is additive: the hybrid key share in a TLS ClientHello is the X25519 key share (32 bytes) plus the ML-KEM-768 key share (1,184 bytes), for a total of ~1,216 bytes. This fits comfortably within a single TCP packet and has negligible latency impact on modern networks — Google’s measurements showed less than 1 ms of additional handshake time for X25519+ML-KEM-768 hybrid key exchange in Chrome.
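The combining step can be sketched with a standard-library HKDF. This is an illustrative sketch, not the TLS construction: the salt, the `info` label, and the concatenation order are assumptions for demonstration, whereas real protocols (e.g. the IETF hybrid design for TLS 1.3) fix these details precisely and feed the result into the protocol's key schedule.

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869) with HMAC-SHA256."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF-Expand (RFC 5869) with HMAC-SHA256."""
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def combine_hybrid_secrets(ss_classical: bytes, ss_pq: bytes) -> bytes:
    # Concatenation order, zero salt, and the info label are illustrative
    # choices; an attacker must break BOTH component KEMs to recover the key.
    prk = hkdf_extract(salt=b"\x00" * 32, ikm=ss_classical + ss_pq)
    return hkdf_expand(prk, info=b"hybrid-kem-demo", length=32)

ss_x25519 = os.urandom(32)   # stand-in for the X25519 shared secret
ss_mlkem = os.urandom(32)    # stand-in for the ML-KEM-768 shared secret
key = combine_hybrid_secrets(ss_x25519, ss_mlkem)
assert len(key) == 32
```

Because the KDF output depends on both inputs, the combined key stays unpredictable as long as either component secret does.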
10. Lessons Learned from the NIST PQC Process
The nine-year standardization process yielded insights that extend beyond the specific algorithms selected.
10.1 Algorithmic Diversity Is Non-Negotiable
NIST selected algorithms from three distinct mathematical families (module lattices, hash-based, code-based) deliberately. The SIKE and Rainbow breaks demonstrated that even well-studied schemes can fall suddenly. A standards ecosystem that depends on a single mathematical assumption is fragile. Security architects should plan for the possibility that any one family could be broken and ensure their systems can swap algorithms without fundamental re-architecture.
10.2 Maturity of the Underlying Problem Matters
The algorithms that survived the full process are based on problems with the deepest cryptanalytic heritage:
- Lattice problems (LWE, NTRU): Studied since the 1990s
- Syndrome decoding: Studied since 1978 (McEliece)
- Hash function security: Studied since the 1970s
The algorithm based on the newest assumption (SIKE, based on SIDH from 2011) was the one that fell completely. This does not mean novel mathematical approaches are worthless — but it does mean that the bar for confidence is proportional to the time the community has spent trying to break the problem.
10.3 Performance Is a Gating Factor, Not Just a Nice-to-Have
Several schemes with excellent security arguments (Classic McEliece, FrodoKEM) were not selected primarily because their sizes made deployment impractical. A cryptographic algorithm that is too slow or too large to deploy is not a solution — it is an academic contribution. The winning algorithms found the balance point between security and usability.
10.4 Implementation Complexity Is a Security Property
NIST’s decision to prioritize Dilithium over Falcon for the primary signature standard was driven significantly by implementation risk. Falcon’s discrete Gaussian sampling is difficult to implement in constant time, creating a larger surface for side-channel attacks. In a world where most cryptographic vulnerabilities come from implementation errors rather than algorithmic breaks, simplicity has direct security value.
10.5 Open Processes Work
The PQC competition’s open structure — public submissions, public analysis, multiple conference rounds, years of worldwide cryptanalysis — was directly responsible for identifying the SIKE and Rainbow breaks before they were standardized. A closed selection process might have missed these attacks until after deployment. The transparency of the process also built trust and facilitated global adoption of the results.
10.6 The Timeline Pressure Is Real
NIST began the process in 2016. Final standards were published in 2024. Full ecosystem migration will take until at least 2030-2035. If large-scale quantum computers arrive before migration is complete, the “harvest-now-decrypt-later” data collected during the gap is irrecoverable. This timeline risk is why multiple national bodies now treat PQC migration as an active, ongoing project rather than a future planning exercise.
The critical insight is that the “harvest-now-decrypt-later” threat means the effective deadline for protecting confidential data is not when quantum computers arrive — it is today. Data encrypted with RSA or ECDH that is intercepted and stored now can be decrypted retroactively once a cryptographically relevant quantum computer exists. For data with confidentiality requirements of 10+ years (medical records, national security intelligence, trade secrets, legal communications), the migration deadline has already passed.
10.7 Standards Are Necessary but Not Sufficient
Publishing FIPS 203/204/205 was a critical milestone, but standards documents alone do not protect systems. The real work lies in:
- Library implementation: Converting mathematical specifications into constant-time, side-channel-resistant code
- Protocol integration: Updating TLS, SSH, IPsec, S/MIME, DNSSEC, and dozens of other protocols to negotiate and use PQC algorithms
- Testing and validation: NIST’s Cryptographic Algorithm Validation Program (CAVP) must develop test vectors and validation suites for ML-KEM, ML-DSA, and SLH-DSA
- Hardware acceleration: Future CPUs and HSMs will need dedicated instructions for NTT operations and polynomial arithmetic, similar to AES-NI for AES
- Ecosystem coordination: Every link in the chain — from CAs to browsers to load balancers to embedded devices — must update in a coordinated fashion
This “last mile” problem is where most of the remaining effort lies. The algorithms are selected; the engineering challenge of deploying them globally is just beginning.
For migration strategies and implementation guidance, see PQC Migration Strategies.
11. Impact on Protocols and Standards
The NIST selections trigger cascading updates across the entire cryptographic ecosystem.
TLS 1.3
The IETF has published or is developing RFCs for integrating PQC into TLS:
- Hybrid key exchange: X25519MLKEM768 (combining X25519 and ML-KEM-768) is already deployed at scale by Cloudflare, Google Chrome, and AWS
- PQC-only key exchange: ML-KEM key exchange groups defined for TLS 1.3
- PQC authentication: ML-DSA and SLH-DSA signature algorithms for TLS certificate verification (larger impact due to certificate chain sizes)
X.509 and PKI
Post-quantum certificates are significantly larger than classical ones:
- An ML-DSA-65 certificate adds ~5 KB over an ECDSA certificate
- A typical TLS certificate chain (3 certificates) with ML-DSA-65 adds ~15 KB to the handshake
- Certificate transparency logs, OCSP responses, and CRLs all grow proportionally
- Hybrid certificates (containing both classical and PQC keys/signatures) are being specified to enable gradual migration
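The chain-size figures above follow directly from the per-certificate numbers. A rough estimate (real certificates add ASN.1 encoding and extension overhead on top of the raw cryptographic material):

```python
# ML-DSA-65 sizes from the comparison tables on this page, in bytes.
ML_DSA_65_PUBKEY = 1952
ML_DSA_65_SIG = 3309

def chain_crypto_bytes(depth: int) -> int:
    """Cryptographic material in a certificate chain: each certificate
    carries one subject public key and one issuer signature."""
    return depth * (ML_DSA_65_PUBKEY + ML_DSA_65_SIG)

print(chain_crypto_bytes(1))  # single certificate: 5,261 B (~5 KB)
print(chain_crypto_bytes(3))  # 3-certificate chain: 15,783 B (~15 KB)
```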
SSH
OpenSSH has integrated post-quantum key exchange since version 9.0 (April 2022):
- `sntrup761x25519-sha512@openssh.com` — a hybrid combining Streamlined NTRU Prime (a lattice-based KEM) with X25519
- ML-KEM-based key exchange is expected to replace this once ML-KEM integration is finalized in the SSH specification
IPsec / IKEv2
RFC 9370 defines multiple key exchanges for IKEv2, enabling hybrid PQC + classical key exchange. ML-KEM key exchange groups are being defined for production use.
Code Signing and Firmware Updates
- Larger signatures impact firmware images where signature verification happens on constrained devices
- SLH-DSA may be preferred for firmware signing due to its conservative security argument and the fact that verification (not signing) is the constrained operation
- ML-DSA is preferred for code signing where latency matters and signatures are verified on general-purpose hardware
S/MIME and Email
Post-quantum S/MIME integration is being developed through IETF drafts. Key challenges include:
- Email clients must handle larger certificate chains embedded in every signed message
- ML-DSA-65 signatures add ~3.3 KB per signature; a triple-signed email chain adds ~10 KB
- Gateway-based encryption can use ML-KEM for key exchange, but end-to-end encryption requires PQC key material in user certificates
- Backward compatibility with legacy email clients that do not understand PQC algorithms requires dual-signature approaches
FIDO2 / WebAuthn
The FIDO Alliance is evaluating PQC integration for hardware security keys and passkeys:
- Hardware authenticators (YubiKey, Titan) are constrained in storage and computation
- ML-DSA-44 is the most likely candidate for FIDO2 PQC signatures due to its reasonable key and signature sizes
- Attestation certificate chains will need PQC-aware verification in relying parties
- Timeline for PQC FIDO2 specifications is expected in 2026-2027
12. Preparing for the Transition
Immediate Actions for Security Teams
- Inventory cryptographic usage: Identify all systems using RSA, ECDH, ECDSA, or EdDSA for key exchange, signatures, or encryption
- Prioritize by data sensitivity: Systems protecting data with long confidentiality requirements (healthcare, financial, government, legal) should migrate first
- Deploy hybrid key exchange: TLS 1.3 hybrid key exchange (X25519 + ML-KEM-768) is production-ready and should be enabled immediately for key exchange
- Test certificate chain sizes: Evaluate the impact of PQC signatures on your certificate infrastructure, load balancers, and client connections
- Engage vendors: Ensure your cryptographic library vendors (OpenSSL, BoringSSL, NSS, wolfSSL) have roadmaps for FIPS 203/204/205 support
Algorithm Selection Guidance
| Use Case | Recommended Algorithm | Rationale |
|---|---|---|
| General key exchange | ML-KEM-768 (hybrid with X25519) | Best balance of security, performance, and size |
| General signatures | ML-DSA-65 | Simplest implementation, good performance |
| Conservative / long-term signatures | SLH-DSA-SHA2-192s | Hash-only security assumption |
| Bandwidth-constrained signatures | FN-DSA-512 (when available) | Smallest PQC signatures |
| Backup / diversity KEM | HQC-192 (when available) | Code-based diversity from ML-KEM |
Library and Framework Support (as of early 2025)
| Library | ML-KEM | ML-DSA | SLH-DSA | FN-DSA | HQC |
|---|---|---|---|---|---|
| liboqs (Open Quantum Safe) | Yes | Yes | Yes | Yes | Yes |
| OpenSSL 3.5+ | Yes | Yes | Yes | Planned | Planned |
| BoringSSL | Yes (hybrid TLS) | Planned | No | No | No |
| wolfSSL | Yes | Yes | Yes | No | No |
| AWS-LC | Yes | Yes | No | No | No |
| Bouncy Castle | Yes | Yes | Yes | Yes | Yes |
Common Migration Pitfalls
Security teams undertaking PQC migration should be aware of these frequently encountered issues:
- Assuming classical algorithms can be removed immediately: Hybrid deployment exists for a reason. Removing classical algorithms before PQC implementations are battle-tested creates unnecessary risk. Plan for at least 5 years of hybrid operation.
- Ignoring certificate chain bloat: A single ML-DSA-65 certificate is manageable. A chain of 3-4 certificates with cross-signatures can exceed 15 KB of cryptographic overhead. Test your entire PKI chain, not just leaf certificates.
- Underestimating performance variance: PQC algorithms have wider performance variance across hardware platforms than classical algorithms. ML-KEM on a server-class CPU with AVX2 is very fast; on a Cortex-M4 without hardware acceleration, it requires careful optimization. Benchmark on your actual target hardware.
- Forgetting about key storage: ML-DSA-65 public keys (1,952 bytes) stored in LDAP directories, DNS records, or blockchain systems accumulate storage costs at scale. An organization with 100,000 employees storing PQC public keys in Active Directory will see measurable storage growth.
- Neglecting cryptographic agility: The PQC landscape is still evolving. Systems should be designed with algorithm agility — the ability to swap algorithms via configuration rather than code changes. This was a hard lesson from the SHA-1 deprecation process and applies even more strongly to PQC.
- Skipping the inventory step: You cannot migrate what you have not inventoried. Many organizations discover cryptographic dependencies in unexpected places during PQC readiness assessments — embedded firmware, legacy protocols, third-party SaaS integrations, and hardware security modules with non-upgradeable firmware.
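The cryptographic-agility pitfall is often addressed with a registry pattern: call sites look schemes up by a configuration string instead of hard-coding an algorithm, so a swap is a config change rather than a code change. A minimal sketch; the names here are hypothetical, and the HMAC-SHA256 placeholder merely stands in for real signature implementations such as ML-DSA-65:

```python
import hashlib
import hmac
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class SignatureScheme:
    name: str
    sign: Callable[[bytes, bytes], bytes]      # (key, message) -> signature
    verify: Callable[[bytes, bytes, bytes], bool]  # (key, message, sig) -> ok

_REGISTRY: Dict[str, SignatureScheme] = {}

def register(scheme: SignatureScheme) -> None:
    _REGISTRY[scheme.name] = scheme

def get_scheme(config_name: str) -> SignatureScheme:
    # The algorithm choice comes from configuration, not from call sites.
    return _REGISTRY[config_name]

# Placeholder "scheme" for the sketch: HMAC-SHA256 as a keyed tag.
# A real deployment would register ML-DSA, SLH-DSA, etc. here instead.
def _hmac_sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def _hmac_verify(key: bytes, msg: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(_hmac_sign(key, msg), tag)

register(SignatureScheme("demo-hmac-sha256", _hmac_sign, _hmac_verify))

scheme = get_scheme("demo-hmac-sha256")  # name would come from config in practice
tag = scheme.sign(b"key", b"message")
assert scheme.verify(b"key", b"message", tag)
```

Migrating to a new algorithm then means registering it and updating one configuration value, with no changes to the code paths that sign and verify.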
Key Compliance Deadlines
| Mandate | Deadline | Requirement |
|---|---|---|
| U.S. OMB M-23-02 | By 2035 | Federal agencies must migrate to PQC for all systems |
| NSA CNSA 2.0 | 2025-2033 (phased) | Software/firmware signing by 2025; web servers by 2025; networking by 2026; OS/infrastructure by 2027; custom/legacy by 2033 |
| U.S. Executive Order 14028 | Ongoing | Requires cryptographic inventory and migration planning |
| CISA PQC Migration Guidance | Ongoing | Recommends immediate adoption of hybrid key exchange |
| BSI TR-02102 | Ongoing | Recommends hybrid PQC for German government systems |
Summary
The NIST PQC Standardization Process represents the largest and most rigorous cryptographic algorithm evaluation in history. From 82 submissions to three published FIPS standards and two additional selections, the process delivered:
- ML-KEM (FIPS 203): The primary key encapsulation mechanism for the post-quantum era, based on module lattices
- ML-DSA (FIPS 204): The primary digital signature scheme, prioritizing implementation simplicity and balanced performance
- SLH-DSA (FIPS 205): The conservative hash-based backup signature scheme with the strongest security assumptions
- FN-DSA (pending): The compact lattice-based signature for bandwidth-constrained applications
- HQC (selected March 2025): The code-based backup KEM providing algorithmic diversity from ML-KEM
The process also demonstrated the importance of open evaluation (SIKE and Rainbow were broken during the competition, not after deployment), algorithmic diversity (no single mathematical family should be trusted exclusively), and implementation simplicity as a first-class security property.
For the mathematical foundations underlying these algorithms, see Mathematical Foundations. For practical migration guidance, see PQC Migration Strategies. For the quantum computing threat model that motivates this entire effort, see Quantum Computing Threat Model.
Further Reading
- NIST FIPS 203 — Module-Lattice-Based Key-Encapsulation Mechanism Standard (ML-KEM)
- NIST FIPS 204 — Module-Lattice-Based Digital Signature Standard (ML-DSA)
- NIST FIPS 205 — Stateless Hash-Based Digital Signature Standard (SLH-DSA)
- NIST IR 8413 — Status Report on the Third Round of the NIST Post-Quantum Cryptography Standardization Process
- NIST SP 800-208 — Recommendation for Stateful Hash-Based Signature Schemes (XMSS, LMS)
- Castryck & Decru (2022) — “An efficient key recovery attack on SIDH” (the paper that broke SIKE)
- Beullens (2022) — “Breaking Rainbow Takes a Weekend on a Laptop”
- CISA/NSA/NIST Joint Guidance — “Quantum-Readiness: Migration to Post-Quantum Cryptography”
- ETSI TR 103 616 — Migration strategies and recommendations for quantum-safe schemes