Tools, Testing & Resources

Overview

Migrating to post-quantum cryptography demands more than algorithm selection and library integration. It demands tooling — the scanners that find classical crypto buried in your codebase, the test frameworks that prove your PQC implementation is correct, the benchmarks that tell you whether ML-KEM-768 can survive your latency budget, and the formal verification tools that provide mathematical assurance when decades of cryptanalysis have not yet had time to do the same. The PQC ecosystem’s tooling maturity lags behind its algorithmic maturity, but the gap is closing fast.

This page catalogs the practical toolchain for PQC migration: discovery tools that build your cryptographic inventory, testing infrastructure that validates implementations, benchmarking suites that quantify performance, formal verification frameworks that prove correctness, and the learning resources and vendor tracking tables that keep your migration plan grounded in reality. For the migration methodology that consumes these tools’ outputs, see Migration Strategies & Crypto Agility. For the library ecosystem these tools validate, see Real-World Implementations & Libraries.


1. Crypto Discovery Tools

Before you can migrate, you need to know what you have. Cryptographic discovery — building a comprehensive inventory of every algorithm, key length, certificate, protocol version, and hardcoded cryptographic constant in your environment — is the foundational step of any PQC migration. The tools in this section automate that inventory.

1.1 IBM Quantum Safe Explorer (QSE)

IBM’s Quantum Safe Explorer is an enterprise-grade cryptographic discovery and risk assessment platform. It operates across three dimensions: source code scanning, binary analysis, and network traffic inspection.

Capabilities:

  • Source code scanning. Parses Java, C/C++, Python, Go, and .NET codebases to identify cryptographic API calls — javax.crypto.*, OpenSSL function invocations, crypto/tls configuration, .NET System.Security.Cryptography usage. Maps each call to the underlying algorithm and key size.
  • Binary analysis. Scans compiled artifacts (JARs, shared libraries, container images) for cryptographic signatures without requiring source access. Identifies statically linked crypto libraries by matching known function signatures and constant tables.
  • CBOM generation. Produces a Cryptography Bill of Materials — a structured inventory (typically JSON/XML) of every cryptographic asset discovered. The CBOM format aligns with emerging OWASP and IETF standards for cryptographic transparency.
  • Quantum risk scoring. Classifies each discovered cryptographic usage as quantum-safe, quantum-vulnerable, or unknown. Assigns risk scores based on algorithm type, key size, and data sensitivity context.
  • Remediation guidance. Maps each vulnerable finding to specific PQC replacement recommendations (e.g., RSA-2048 key exchange to ML-KEM-768, ECDSA-P256 signatures to ML-DSA-65).

What it finds that manual review misses:

  • Transitive dependencies pulling in vulnerable crypto (your code uses a library that uses a library that calls RSA)
  • Hardcoded algorithm identifiers in configuration files (cipher suite strings, JCE provider configurations)
  • Certificate pinning configurations that will break during PQC certificate transitions
  • Custom protocol implementations using raw cryptographic primitives

# Example QSE CLI invocation (simplified)
qse scan --source ./my-application \
         --output cbom.json \
         --format cbom-v1 \
         --risk-assessment quantum \
         --include-dependencies
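
The CBOM output can be consumed programmatically for triage and reporting. A minimal sketch in Python, assuming a simplified JSON shape — the `components` and `quantumSafety` field names here are illustrative, not the exact QSE schema:

```python
import json

# Hypothetical, simplified CBOM fragment; real CBOMs carry far more
# metadata (key sizes, certificate details, dependency provenance).
SAMPLE_CBOM = json.loads("""
{
  "components": [
    {"algorithm": "RSA-2048",    "location": "auth/Signer.java", "quantumSafety": "vulnerable"},
    {"algorithm": "ML-KEM-768",  "location": "tls/config.go",    "quantumSafety": "safe"},
    {"algorithm": "AES-256-GCM", "location": "storage/crypt.py", "quantumSafety": "safe"},
    {"algorithm": "ECDSA-P256",  "location": "api/tokens.py",    "quantumSafety": "vulnerable"}
  ]
}
""")

def summarize(cbom):
    """Count discovered assets per quantum-risk category."""
    counts = {}
    for comp in cbom["components"]:
        counts[comp["quantumSafety"]] = counts.get(comp["quantumSafety"], 0) + 1
    return counts

def vulnerable_locations(cbom):
    """List the code locations that need PQC migration."""
    return [c["location"] for c in cbom["components"]
            if c["quantumSafety"] == "vulnerable"]

print(summarize(SAMPLE_CBOM))           # {'vulnerable': 2, 'safe': 2}
print(vulnerable_locations(SAMPLE_CBOM))
```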

1.2 sslyze: TLS Configuration Scanning

sslyze is an open-source TLS scanner that has added PQC-specific probes. It tests live servers to determine which cipher suites, key exchange mechanisms, and signature algorithms are offered and accepted.

PQC-relevant capabilities:

# Scan a server for PQC TLS support
sslyze --regular --json_out results.json target.example.com

# Test TLS 1.3 support (hybrid key exchange groups, where offered,
# show up in the TLS 1.3 scan results)
sslyze --tlsv1_3 target.example.com

What to look for in results:

  • Key exchange groups. Check for x25519_mlkem768, secp256r1_mlkem768, or other hybrid KEM groups in the TLS 1.3 supported_groups extension.
  • Signature algorithms. Look for ML-DSA OIDs in the signature_algorithms extension.
  • Certificate chain algorithms. Verify whether the server’s certificate chain uses PQC or hybrid signatures.
  • Downgrade behavior. Confirm the server correctly negotiates PQC when the client offers it and falls back gracefully when it does not.
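
These checks can be automated over the JSON output. A sketch, assuming a heavily simplified result shape — the real sslyze schema is deeper and version-dependent, so treat `server_scan_results` and the flat group lists below as illustrative:

```python
import json

# Hypothetical, flattened slice of a scan result for illustration only.
RESULT = json.loads("""
{
  "server_scan_results": [{
    "hostname": "target.example.com",
    "supported_groups": ["x25519", "x25519_mlkem768", "secp256r1"],
    "signature_algorithms": ["ecdsa_secp256r1_sha256", "mldsa65"]
  }]
}
""")

HYBRID_GROUPS = {"x25519_mlkem768", "secp256r1_mlkem768", "secp384r1_mlkem1024"}

def pqc_readiness(result):
    """Per host: which hybrid KEM groups and PQC sigalgs are offered."""
    findings = {}
    for scan in result["server_scan_results"]:
        groups = set(scan["supported_groups"])
        findings[scan["hostname"]] = {
            "hybrid_kex": sorted(groups & HYBRID_GROUPS),
            "pqc_sigalgs": [s for s in scan["signature_algorithms"] if "mldsa" in s],
        }
    return findings

print(pqc_readiness(RESULT))
```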

1.3 Cryptosense: Enterprise Crypto Inventory

Cryptosense provides agent-based and agentless cryptographic discovery for enterprise environments. It differs from source-code scanners by operating at the runtime level — intercepting actual cryptographic operations as they occur.

Key differentiators:

  • Runtime interception. Instruments JVMs, .NET runtimes, and native crypto libraries (OpenSSL, NSS, CNG) to capture every cryptographic operation in production traffic. This catches crypto usage that static analysis misses — dynamically loaded algorithms, runtime-selected cipher suites, ephemeral key generation patterns.
  • Policy engine. Define organizational cryptographic policies (e.g., “no RSA below 3072 bits,” “no SHA-1 for signatures,” “all key exchanges must be quantum-safe by Q4 2027”) and automatically flag violations.
  • Compliance mapping. Maps findings against NIST SP 800-131A, CNSA 2.0 timelines, PCI DSS cryptographic requirements, and custom compliance frameworks.
  • Integration. Feeds into SIEM platforms (Splunk, Sentinel), ticketing systems (Jira, ServiceNow), and CI/CD pipelines for continuous cryptographic compliance.
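
The policy-engine idea reduces to rule functions evaluated over inventory records. A sketch mirroring the example policies above — the record fields and rule encoding are illustrative, not Cryptosense's actual rule syntax:

```python
# Hypothetical runtime-inventory records; a real agent would capture these
# by intercepting crypto API calls.
INVENTORY = [
    {"op": "key_exchange", "algorithm": "RSA",    "bits": 2048},
    {"op": "signature",    "algorithm": "SHA-1",  "bits": None},
    {"op": "key_exchange", "algorithm": "ML-KEM", "bits": 768},
]

def violates(record):
    """Return the violated policy, or None if the record is compliant."""
    if record["algorithm"] == "RSA" and record["bits"] < 3072:
        return "no RSA below 3072 bits"
    if record["algorithm"] == "SHA-1" and record["op"] == "signature":
        return "no SHA-1 for signatures"
    if record["op"] == "key_exchange" and record["algorithm"] not in ("ML-KEM",):
        return "all key exchanges must be quantum-safe"
    return None

violations = [(r["algorithm"], v) for r in INVENTORY if (v := violates(r))]
print(violations)
# [('RSA', 'no RSA below 3072 bits'), ('SHA-1', 'no SHA-1 for signatures')]
```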

1.4 CryptoGuard: Static Analysis for Crypto Misuse

CryptoGuard is a static analysis tool specifically designed to detect cryptographic misuse in source code. While general-purpose SAST tools flag some crypto issues, CryptoGuard understands cryptographic semantics — it knows that using ECB mode is not just a “code quality” issue but a confidentiality-breaking vulnerability.

Detection categories relevant to PQC migration:

  • Hardcoded cryptographic keys and IVs
  • Use of deprecated algorithms (DES, 3DES, RC4, MD5 for integrity)
  • Insufficient key lengths (RSA < 2048, ECC < 256)
  • Incorrect API usage (reused nonces, missing authentication tags)
  • Non-constant-time comparisons of cryptographic values

In the PQC context, CryptoGuard helps you find the code that must change — every location where a quantum-vulnerable algorithm is instantiated, configured, or assumed.
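
As a toy illustration of the idea, a regex pass over Java-style source can flag the literal call sites the detection categories above describe. Real tools like CryptoGuard perform interprocedural dataflow analysis; this sketch catches only direct instantiations:

```python
import re

# Patterns for quantum-vulnerable JCA instantiations (illustrative subset).
PATTERNS = {
    "RSA keygen": re.compile(r'KeyPairGenerator\.getInstance\("RSA"\)'),
    "ECDSA sig":  re.compile(r'Signature\.getInstance\("SHA\d+withECDSA"\)'),
    "ECDH agree": re.compile(r'KeyAgreement\.getInstance\("ECDH"\)'),
}

SOURCE = '''
KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
Signature sig = Signature.getInstance("SHA256withECDSA");
Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
'''

def scan(text):
    """Return (line number, finding label) pairs for each match."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pat in PATTERNS.items():
            if pat.search(line):
                hits.append((lineno, label))
    return hits

print(scan(SOURCE))  # [(2, 'RSA keygen'), (3, 'ECDSA sig')]
```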

1.5 Custom Scripts: OpenSSL and Nmap

For targeted PQC testing, custom scripts built on standard tools remain indispensable.

OpenSSL s_client for PQC testing:

# Test ML-KEM hybrid key exchange against a server
openssl s_client -connect target.example.com:443 \
  -groups x25519_mlkem768 \
  -tls1_3 \
  -brief

# List all available PQC groups in your OpenSSL build
openssl list -kem-algorithms | grep -i ml-kem

# Test PQC certificate verification
openssl s_client -connect target.example.com:443 \
  -sigalgs "mldsa65" \
  -CAfile pqc-root-ca.pem \
  -verify 4

# Generate ML-KEM keypair for testing
openssl genpkey -algorithm ML-KEM-768 -out mlkem768_key.pem

Nmap NSE scripts:

# Enumerate TLS cipher suites including PQC
nmap --script ssl-enum-ciphers -p 443 target.example.com

# Custom NSE script for PQC group detection
# (Community scripts available for TLS 1.3 group enumeration)
nmap --script tls-supported-groups -p 443 target.example.com

Bulk scanning script for PQC readiness across your infrastructure:

#!/bin/bash
# pqc-scan.sh — Scan hosts for PQC TLS support

PQC_GROUPS="x25519_mlkem768:secp256r1_mlkem768:secp384r1_mlkem1024"

while IFS= read -r host; do
  echo "=== Scanning $host ==="
  for group in ${PQC_GROUPS//:/ }; do
    result=$(echo | openssl s_client -connect "$host:443" \
      -groups "$group" -tls1_3 2>&1)
    if echo "$result" | grep -q "Server Temp Key"; then
      echo "  [SUPPORTED] $group"
    else
      echo "  [NOT SUPPORTED] $group"
    fi
  done
done < hosts.txt

1.6 Code Scanning: Semgrep and CodeQL

Semgrep rules for deprecated crypto detection:

# semgrep-pqc-rules.yaml
rules:
  - id: quantum-vulnerable-rsa-keygen
    patterns:
      - pattern: KeyPairGenerator.getInstance("RSA")
    message: >
      RSA key generation detected. RSA is vulnerable to quantum
      attack via Shor's algorithm. Plan migration to ML-KEM
      (FIPS 203) for key encapsulation or ML-DSA (FIPS 204)
      for digital signatures.
    severity: WARNING
    metadata:
      category: pqc-migration
      cwe: "CWE-327"
      references:
        - https://csrc.nist.gov/projects/post-quantum-cryptography

  - id: quantum-vulnerable-ecdsa
    patterns:
      - pattern: Signature.getInstance("SHA256withECDSA")
    message: >
      ECDSA signature detected. Elliptic curve cryptography
      is vulnerable to quantum attack. Migrate to ML-DSA
      (FIPS 204) or SLH-DSA (FIPS 205).
    severity: WARNING
    metadata:
      category: pqc-migration

  - id: quantum-vulnerable-ecdh
    patterns:
      - pattern: KeyAgreement.getInstance("ECDH")
    message: >
      ECDH key agreement detected. Migrate to ML-KEM
      (FIPS 203) hybrid key encapsulation.
    severity: WARNING
    metadata:
      category: pqc-migration

# Run Semgrep with PQC rules
semgrep --config semgrep-pqc-rules.yaml ./src/

CodeQL queries for cryptographic inventory:

/**
 * @name Quantum-vulnerable cryptographic algorithm usage
 * @description Finds all uses of RSA, ECDSA, ECDH, and
 *              DH that will require PQC migration.
 * @kind problem
 * @problem.severity warning
 * @id java/quantum-vulnerable-crypto
 */

import java

from MethodAccess ma
where
  ma.getMethod().hasName("getInstance") and
  ma.getMethod().getDeclaringType().hasQualifiedName("javax.crypto", "Cipher") and
  ma.getArgument(0).(StringLiteral).getValue().regexpMatch(".*RSA.*|.*EC.*")
select ma, "Quantum-vulnerable algorithm: " + ma.getArgument(0).(StringLiteral).getValue()

2. Testing Frameworks

Testing PQC implementations is more critical — and more complex — than testing classical crypto. These algorithms are newer, the implementations less mature, and the attack surface less thoroughly explored by decades of cryptanalysis. Rigorous testing compensates for the time the algorithms have not yet spent in the wild.

2.1 OQS Test Infrastructure

The Open Quantum Safe project maintains comprehensive test infrastructure that serves as both a validation tool and a reference for building your own PQC test suites.

Test categories:

  • Functional correctness. For every algorithm: generate keypair, encapsulate/sign, decapsulate/verify. Repeat across all parameter sets. Verify round-trip correctness.
  • Known Answer Tests (KATs). Compare implementation outputs against NIST-provided test vectors. Any deviation indicates a bug — there is no “acceptable variance” in deterministic cryptographic operations.
  • Memory safety. Valgrind and AddressSanitizer runs across all operations to detect buffer overflows, use-after-free, and uninitialized memory reads.
  • Constant-time verification. Using tools like ctgrind and timecop to verify that secret-dependent operations do not exhibit timing variations.

# Build and run OQS tests
cd liboqs && mkdir build && cd build
cmake -DOQS_BUILD_TESTS=ON ..
make -j$(nproc)

# Run all KEM tests
ctest --test-dir tests -R test_kem

# Run all signature tests
ctest --test-dir tests -R test_sig

# Run specific algorithm test
./tests/test_kem ML-KEM-768

# Run with verbose output for debugging
./tests/test_kem ML-KEM-768 --verbose
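
The functional round trip exercised by these tests can be shown without liboqs installed, using a deliberately toy "KEM" (a hash of random bytes, with no security whatsoever) so the harness structure itself is runnable. Real suites apply exactly this loop to each algorithm and parameter set:

```python
import os, hashlib

def toy_keygen():
    """Toy stand-in for KEM key generation — NOT a real KEM."""
    sk = os.urandom(32)
    pk = hashlib.sha256(b"pk" + sk).digest()
    return pk, sk

def toy_encaps(pk):
    """'Ciphertext' is raw randomness; shared secret hashes pk with it."""
    r = os.urandom(32)
    ss = hashlib.sha256(pk + r).digest()
    return r, ss

def toy_decaps(sk, ct):
    """Recompute pk from sk, then derive the same shared secret."""
    pk = hashlib.sha256(b"pk" + sk).digest()
    return hashlib.sha256(pk + ct).digest()

def round_trip_ok(iterations=100):
    """The pattern: keygen -> encaps -> decaps -> secrets must match."""
    for _ in range(iterations):
        pk, sk = toy_keygen()
        ct, ss_enc = toy_encaps(pk)
        if toy_decaps(sk, ct) != ss_enc:
            return False
    return True

print(round_trip_ok())  # True
```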

2.2 NIST ACVP for PQC

The Automated Cryptographic Validation Protocol (ACVP) is NIST’s framework for automated testing of cryptographic implementations. ACVP is essential for FIPS 140-3 validation, and its extension to PQC algorithms is a critical milestone for enterprise adoption.

How ACVP works:

  1. Registration. The implementation under test (IUT) registers its claimed algorithm capabilities with the ACVP server — which algorithms, parameter sets, and operations it supports.
  2. Test vector generation. The ACVP server generates test vectors tailored to the IUT’s capabilities. For ML-KEM, this includes key generation vectors, encapsulation vectors (with known public keys), and decapsulation vectors (with known secret keys and ciphertexts).
  3. Response submission. The IUT processes each vector and returns results.
  4. Validation. The server compares responses against known-correct answers.
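
Step 4 reduces to comparing responses against known-correct answers keyed by test-case id. A simplified sketch — the hex values and data shape below are hypothetical, not the ACVP wire format:

```python
# Server-side known answers vs. responses from the implementation under
# test (IUT), keyed by test-case id. Values are made-up hex strings.
expected  = {1: "a05d7e", 2: "9f31c0"}
responses = {1: "a05d7e", 2: "deadbeef"}  # IUT got test case 2 wrong

def validate(expected, responses):
    """Pass/fail verdict per test case; missing responses fail."""
    return {tc: ("pass" if responses.get(tc) == ans else "fail")
            for tc, ans in expected.items()}

print(validate(expected, responses))  # {1: 'pass', 2: 'fail'}
```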

PQC-specific ACVP test types:

| Algorithm | ACVP Test Category | What It Validates |
|---|---|---|
| ML-KEM | AFT (Algorithm Functional Test) | Key generation, encapsulation, decapsulation |
| ML-KEM | VAL (Validation Test) | Decapsulation with invalid ciphertexts (implicit rejection) |
| ML-DSA | AFT | Key generation, signature generation, verification |
| ML-DSA | GDT (Generated Data Test) | Non-deterministic signature generation with hedged randomness |
| SLH-DSA | AFT | All parameter sets, both deterministic and randomized signing |

# Example ACVP client interaction (using libacvp)
acvp_app --server acvp.nist.gov \
         --port 443 \
         --alg ML-KEM-768 \
         --kat \
         --cert_file client.pem \
         --key_file client.key

2.3 Interoperability Testing

PQC interoperability is the difference between “my implementation works” and “my implementation works with everyone else’s implementation.” The OQS project maintains an interop test server, and several industry efforts coordinate multi-vendor testing.

OQS interop test server:

The OQS project hosts public-facing TLS servers configured with various PQC algorithm combinations. These endpoints let you test your client implementation against a known-good server without deploying your own infrastructure.

# Test your client against OQS interop servers
# (endpoints subject to change — check oqs project for current URLs)
openssl s_client -connect test.openquantumsafe.org:6000 \
  -groups x25519_mlkem768 -tls1_3

# Verify certificate chain with PQC signatures
openssl s_client -connect test.openquantumsafe.org:6001 \
  -CAfile oqs-root.pem -verify_return_error

Industry interop testing initiatives:

  • IETF Hackathons. Regular interop events where implementers test PQC-enabled TLS, X.509, and S/MIME implementations against each other.
  • NIST NCCoE PQC Migration Project. Coordinates multi-vendor interoperability testing with real enterprise infrastructure.
  • Cloudflare-Google interop. Chrome and Cloudflare conducted the largest real-world PQC interop test by enabling X25519+ML-KEM-768 hybrid key exchange across production traffic.

2.4 Fuzzing PQC Implementations

Fuzzing is particularly valuable for PQC implementations because these codebases are younger and have received less scrutiny than OpenSSL’s RSA implementation (which has been fuzzed for decades).

AFL++ fuzzing targets for PQC:

// fuzz_mlkem_decaps.c — AFL++ harness for ML-KEM decapsulation
#include <oqs/oqs.h>
#include <stdio.h>   /* fread */
#include <stdlib.h>  /* malloc, free */

int main(int argc, char **argv) {
    OQS_KEM *kem = OQS_KEM_new(OQS_KEM_alg_ml_kem_768);
    if (!kem) return 1;

    uint8_t *public_key = malloc(kem->length_public_key);
    uint8_t *secret_key = malloc(kem->length_secret_key);
    uint8_t *shared_secret = malloc(kem->length_shared_secret);

    // Generate a valid keypair
    OQS_KEM_keypair(kem, public_key, secret_key);

    // Read fuzz input as ciphertext
    uint8_t *ciphertext = malloc(kem->length_ciphertext);
    size_t len = fread(ciphertext, 1, kem->length_ciphertext, stdin);

    if (len == kem->length_ciphertext) {
        // This should never crash, even with malformed input
        // ML-KEM uses implicit rejection — invalid ciphertexts
        // produce a pseudorandom shared secret, not an error
        OQS_KEM_decaps(kem, shared_secret, ciphertext, secret_key);
    }

    free(public_key); free(secret_key);
    free(shared_secret); free(ciphertext);
    OQS_KEM_free(kem);
    return 0;
}
# Compile with AFL++ instrumentation
AFL_USE_ASAN=1 afl-clang-fast fuzz_mlkem_decaps.c \
  -I/usr/local/include -L/usr/local/lib -loqs -o fuzz_target

# Create seed corpus from valid ciphertexts
mkdir corpus && ./generate_valid_ciphertexts > corpus/seed_0

# Launch fuzzer
afl-fuzz -i corpus -o findings -m none -- ./fuzz_target

libFuzzer integration:

// libfuzzer_mlkem.c
#include <oqs/oqs.h>
#include <stdint.h>
#include <stddef.h>

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    OQS_KEM *kem = OQS_KEM_new(OQS_KEM_alg_ml_kem_768);
    if (!kem || size < kem->length_ciphertext + kem->length_secret_key) {
        OQS_KEM_free(kem);
        return 0;
    }

    uint8_t shared_secret[32];  // ML-KEM shared secrets are 32 bytes
    // First length_ciphertext bytes of the input are the ciphertext;
    // the remainder is treated as the secret key
    OQS_KEM_decaps(kem, shared_secret, data, data + kem->length_ciphertext);

    OQS_KEM_free(kem);
    return 0;
}

2.5 Known Answer Tests (KATs)

KATs are the gold standard for verifying that a cryptographic implementation produces correct output. NIST publishes KAT vectors for all standardized PQC algorithms, and any conforming implementation must reproduce them exactly.

Structure of a KAT file:

# ML-KEM-768 Known Answer Test Vectors
# Format: count, seed, pk, sk, ct, ss

count = 0
seed = 061550234D158C5EC95595FE04EF7A25767F2E24CC2BC479D09D86DC9ABCFDE7...
pk = A72C3...
sk = 07638BB1...
ct = B58B82D...
ss = A05D7E...

count = 1
seed = D81C4D8D734FCBFBEADE3D3F8A039FAA2A2C9957E835AD55B22E75BF57BB556A...
...
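
A minimal stdlib parser for this key-value format makes the comparison step scriptable. The sample vectors below are truncated illustrations; real values come from the NIST-published files:

```python
def parse_kat(text):
    """Parse a NIST-style .rsp KAT file into a list of vector dicts."""
    vectors, current = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment headers
        key, _, value = line.partition(" = ")
        if key == "count" and current:
            vectors.append(current)  # a new count starts a new vector
            current = {}
        current[key] = value
    if current:
        vectors.append(current)
    return vectors

SAMPLE = """\
# ML-KEM-768 Known Answer Test Vectors (truncated sample values)
count = 0
seed = 061550234D158C5E
pk = A72C3
ss = A05D7E

count = 1
seed = D81C4D8D734FCBFB
pk = B9921A
ss = 77F00C
"""

vectors = parse_kat(SAMPLE)
print(len(vectors), vectors[0]["count"], vectors[1]["seed"])
```

With the file parsed, verification is a loop: run your implementation on each `seed` and assert byte-exact equality against `pk`, `sk`, `ct`, and `ss`.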

Running KATs against your implementation:

# Using the NIST-provided KAT runner
./PQCgenKAT_kem

# Comparing output
diff PQCkemKAT_2400.rsp expected_PQCkemKAT_2400.rsp

# Any difference = implementation bug. No exceptions.

Why KATs are necessary but not sufficient:

KATs verify functional correctness on known inputs. They do not verify:

  • Constant-time behavior (side-channel resistance)
  • Memory safety (buffer handling, allocation failures)
  • Randomness quality (entropy source, DRBG seeding)
  • Edge cases not covered by NIST vectors (algorithm-specific corner cases)

KATs are your first gate — pass them before proceeding to deeper testing. For the algorithm-level details that inform what these tests validate, see Lattice-Based Cryptography and NIST PQC Standardization.


3. Benchmarking

Performance is a deployment gating factor. PQC algorithms have different performance profiles than their classical counterparts — ML-KEM is faster than RSA key exchange but produces larger ciphertexts, ML-DSA signatures are larger than ECDSA, SLH-DSA signing is orders of magnitude slower. You need concrete numbers for your hardware, your workloads, and your latency budgets.

3.1 SUPERCOP

SUPERCOP (System for Unified Performance Evaluation Related to Cryptographic Operations and Primitives) is the most comprehensive cryptographic benchmarking framework, maintained by Daniel J. Bernstein and Tanja Lange.

What makes SUPERCOP unique:

  • Exhaustive compilation. SUPERCOP compiles each implementation with dozens of different compiler flag combinations and reports the fastest result. This eliminates “I forgot to enable AVX-512” from distorting benchmarks.
  • Multiple implementations per algorithm. Tests reference code, optimized C, assembly, and vectorized implementations side by side.
  • Standardized measurement. Uses CPU cycle counts (via rdtsc or platform equivalents) for hardware-independent comparison.
  • Reproducible. Given the same hardware and SUPERCOP version, results are deterministic.

# Download and run SUPERCOP (warning: full run takes hours)
wget https://bench.cr.yp.to/supercop/supercop-YYYYMMDD.tar.xz
tar xf supercop-YYYYMMDD.tar.xz
cd supercop-YYYYMMDD

# Run benchmarks
./do-part init
./do-part crypto_kem
./do-part crypto_sign

# Results appear in bench/ directory

3.2 liboqs Speed Benchmarks

For quick, targeted benchmarks of PQC algorithms, the liboqs speed test is the fastest path to actionable numbers.

# Build liboqs with speed tests
cd liboqs/build
cmake -DOQS_BUILD_SPEED=ON -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)

# Run all KEM benchmarks
./tests/speed_kem

# Run specific algorithm
./tests/speed_kem ML-KEM-768

# Sample output:
# ML-KEM-768:
#   keygen:    avg     28,451 cycles (   0.010 ms)
#   encaps:    avg     36,782 cycles (   0.013 ms)
#   decaps:    avg     41,206 cycles (   0.015 ms)

# Run all signature benchmarks
./tests/speed_sig

# Sample output (approximate, varies by hardware):
# ML-DSA-65:
#   keygen:    avg    112,340 cycles (   0.040 ms)
#   sign:      avg    287,650 cycles (   0.103 ms)
#   verify:    avg    118,900 cycles (   0.043 ms)

3.3 eBACS Benchmark Database

The eBACS (ECRYPT Benchmarking of Cryptographic Systems) project maintains an online database of SUPERCOP results across hundreds of machines. This is your reference for comparing PQC algorithm performance across different CPU architectures without running benchmarks yourself.

How to use eBACS effectively:

  1. Find your CPU architecture. eBACS results are indexed by machine. Find a machine matching your deployment target (e.g., Intel Ice Lake, AMD Zen 4, ARM Neoverse).
  2. Compare operation costs. For KEM migration, compare crypto_kem results for ML-KEM against crypto_dh results for X25519. For signature migration, compare crypto_sign results for ML-DSA against ECDSA.
  3. Account for parameter sets. ML-KEM-512 is faster than ML-KEM-1024, but provides NIST Level 1 vs. Level 5 security. Ensure you compare algorithms at equivalent security levels.
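
Step 3 can be made concrete with the FIPS 203 parameters. A small Python lookup using the published encapsulation-key and ciphertext lengths:

```python
# FIPS 203 ML-KEM parameter sets: claimed NIST security level,
# encapsulation-key size, and ciphertext size (bytes).
ML_KEM = {
    "ML-KEM-512":  {"nist_level": 1, "ek_bytes": 800,  "ct_bytes": 768},
    "ML-KEM-768":  {"nist_level": 3, "ek_bytes": 1184, "ct_bytes": 1088},
    "ML-KEM-1024": {"nist_level": 5, "ek_bytes": 1568, "ct_bytes": 1568},
}

def at_level(level):
    """Parameter sets claiming a given NIST security level."""
    return [name for name, p in ML_KEM.items() if p["nist_level"] == level]

# X25519 sits at roughly Level 1, so benchmarking it against
# ML-KEM-1024 conflates security level with algorithm cost.
print(at_level(3))  # ['ML-KEM-768']
```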

3.4 Running Your Own Benchmarks

Synthetic benchmarks tell you algorithm speed. Production-representative benchmarks tell you whether your system will meet its SLAs.

# Step 1: Benchmark raw algorithm performance
./tests/speed_kem ML-KEM-768 --iterations 10000

# Step 2: Benchmark TLS handshake with PQC
# Using openssl s_time for handshake throughput
openssl s_server -cert pqc-cert.pem -key pqc-key.pem \
  -groups x25519_mlkem768 -tls1_3 -www &
SERVER_PID=$!

openssl s_time -connect localhost:4433 \
  -new -time 30 -groups x25519_mlkem768

kill $SERVER_PID

# Step 3: Benchmark with realistic traffic patterns
# Using h2load for HTTP/2 with PQC TLS
h2load -n 10000 -c 100 -t 4 https://localhost:4433/

# Step 4: Compare classical vs PQC handshake latency
# Classical baseline
openssl s_time -connect localhost:4433 -new -time 30 \
  -groups x25519

# PQC hybrid
openssl s_time -connect localhost:4433 -new -time 30 \
  -groups x25519_mlkem768

3.5 Interpreting Results: What Metrics Matter

Not all benchmark numbers are equally actionable. Here is what to focus on:

| Metric | What It Tells You | When It Matters |
|---|---|---|
| CPU cycles per operation | Raw computational cost, hardware-independent | Algorithm selection, comparing implementations |
| Wall-clock latency (ms) | User-perceived delay | TLS handshake budgets, API response times |
| Throughput (ops/sec) | Sustained capacity under load | High-volume services, certificate issuance |
| Public key size (bytes) | Bandwidth and storage impact | Certificate chains, IoT/constrained devices |
| Ciphertext/signature size (bytes) | Wire format overhead | MTU constraints, packet fragmentation |
| Memory consumption (KB) | RAM requirements during operations | Embedded systems, HSMs, serverless functions |
| Key generation time | Cost of fresh key material | Ephemeral key exchange, short-lived certificates |

Common benchmarking mistakes to avoid:

  • Benchmarking debug builds. Always use -O2 or -O3 with release flags. Debug builds can be 10-50x slower.
  • Ignoring CPU frequency scaling. Pin CPU frequency during benchmarks or measure in cycles, not milliseconds.
  • Single-threaded only. Test concurrent performance — contention on shared RNG state or memory allocators changes the picture.
  • Neglecting key sizes. An algorithm that is fast but produces 1.5 KB public keys may be impractical for your certificate chain depth.
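
The key-size point is easy to quantify. A back-of-envelope sketch of handshake wire overhead, using the X25519 share size (32 bytes each way) and the ML-KEM-768 encapsulation-key (1184 bytes) and ciphertext (1088 bytes) sizes; TLS record framing is ignored:

```python
# Approximate key-share bytes exchanged during a TLS 1.3 handshake:
# (client -> server, server -> client). Hybrid shares concatenate the
# classical share with the ML-KEM encapsulation key / ciphertext.
SHARE_SIZES = {
    "x25519":          (32, 32),
    "x25519_mlkem768": (32 + 1184, 32 + 1088),
}

def keyshare_bytes(group):
    """Total key-share bytes on the wire for one handshake."""
    client, server = SHARE_SIZES[group]
    return client + server

classical = keyshare_bytes("x25519")
hybrid = keyshare_bytes("x25519_mlkem768")
print(classical, hybrid, hybrid - classical)  # 64 2336 2272
```

Roughly 2.2 KB of extra handshake data per connection — usually negligible on broadband, but material for constrained links and near typical MTU sizes.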

4. Formal Verification

Post-quantum algorithms have not survived 40 years of cryptanalysis like RSA. They have not been deployed in billions of TLS connections like ECDH. The mathematical security proofs exist, but the gap between a proven-secure specification and a bug-free C implementation is where vulnerabilities live. Formal verification — mathematically proving that code correctly implements its specification — narrows this gap.

4.1 EasyCrypt for PQC Proofs

EasyCrypt is a proof assistant designed specifically for cryptographic security proofs. It enables machine-checked proofs that a cryptographic scheme meets its security definitions (IND-CCA2, EUF-CMA, etc.) under specified assumptions.

PQC applications:

  • ML-KEM security proof. EasyCrypt has been used to formalize the IND-CCA2 security proof of the Fujisaki-Okamoto transform used in ML-KEM, proving that the construction is secure in the (quantum) random oracle model assuming the hardness of Module-LWE.
  • Reduction tightness. Formal verification of security reduction tightness helps quantify the concrete security level — the gap between the assumed hard problem’s difficulty and the scheme’s provable security.
  • Composition proofs. Verifying that hybrid constructions (classical + PQC) maintain security even if one component is broken.

(* Simplified EasyCrypt proof sketch for KEM IND-CCA2 *)
require import MLKEM.
require import FO_Transform.

module MLKEM_CCA = FO_Transform(MLKEM_CPA).

lemma mlkem_cca_security &m :
  `| Pr[IND_CCA(MLKEM_CCA, A).main() @ &m : res] - 1%r/2%r | <=
    `| Pr[MLWE(B1(A)).main() @ &m : res] - 1%r/2%r | +
    Pr[Correctness(MLKEM_CPA).main() @ &m : res].

4.2 Jasmin/Libjade: Verified Implementations

The Jasmin programming language and its associated Libjade library represent the most ambitious effort to produce formally verified, high-performance PQC implementations.

Architecture:

  • Jasmin language. A low-level programming language designed for writing cryptographic code that is both verifiable and compilable to efficient assembly. Jasmin sits between C and assembly — it gives you control over registers, memory layout, and instruction selection while remaining amenable to formal verification.
  • Libjade. A library of PQC implementations written in Jasmin, with proofs of functional correctness and constant-time execution. Current implementations include ML-KEM (all parameter sets) and ML-DSA.
  • Verification stack. Jasmin programs are verified using the EasyCrypt proof assistant. The toolchain proves: (1) the Jasmin code implements the specification correctly, (2) the compiled assembly preserves these properties, and (3) the code executes in constant time.

Why this matters:

Specification (FIPS 203)
    ↓  [proven correct by EasyCrypt]
Jasmin implementation
    ↓  [verified compilation]
x86-64 assembly
    ↓  [constant-time proven]
Production binary

Every step in this chain is mathematically verified. Compare this to a typical C implementation where the gap between spec and code is bridged only by code review and testing.

4.3 HACL*: Verified Crypto Library

HACL* (High-Assurance Cryptographic Library) is a formally verified cryptographic library written in F*, a functional programming language with dependent types and a proof assistant built in.

PQC-relevant capabilities:

  • Verified Kyber/ML-KEM. HACL* includes a verified implementation of ML-KEM with proofs of functional correctness, memory safety, and secret independence (no branching on secret data).
  • Extraction to C. F* code is extracted to C through the KaRaMeL tool, producing readable, auditable C code that inherits the verification guarantees of the F* source.
  • Integration. HACL* code is used in production by Mozilla Firefox (via NSS), the Linux kernel, and the Tezos blockchain.

Verification guarantees provided by HACL*:

| Property | What It Means | How It Is Verified |
|---|---|---|
| Functional correctness | Output matches specification for all inputs | F* type-checking against spec |
| Memory safety | No buffer overflows, use-after-free, or null dereferences | F* linear types and bounds checking |
| Secret independence | No branching or memory access patterns depend on secret data | Information flow analysis in F* |
| Side-channel resistance | Constant-time execution for all secret-dependent operations | Verified at the F* level, preserved through extraction |

4.4 Why Formal Verification Matters More for PQC

Classical cryptographic implementations have benefited from decades of real-world deployment. OpenSSL’s RSA implementation has been scrutinized by thousands of researchers, broken and fixed dozens of times, and hardened through the accumulated experience of 25+ years of production use. PQC implementations have had none of this.

The verification gap:

  • Lattice-based schemes require careful handling of polynomial arithmetic, NTT (Number Theoretic Transform) operations, and modular reduction. Off-by-one errors in these operations can leak secret key bits.
  • ML-KEM’s implicit rejection mechanism (returning a pseudorandom value for invalid ciphertexts instead of an error) must be implemented exactly correctly — any deviation creates an oracle that can be exploited to recover the secret key.
  • Side-channel resistance is harder for PQC because the operations are more complex. Constant-time polynomial multiplication is more difficult to get right than constant-time modular exponentiation.
  • Novel failure modes. Decryption failure rates in lattice schemes create attack vectors that do not exist for RSA or ECC. An implementation that slightly increases the failure probability can enable key recovery attacks.

For deeper context on the algorithm-level properties that make verification critical, see Lattice-Based Cryptography and Other PQC Families.


5. PQC Migration Toolchain Flow

The following diagram shows how the tools described above fit together in a PQC migration workflow:

flowchart TD
    A[Discovery Phase] --> B[Assessment Phase]
    B --> C[Implementation Phase]
    C --> D[Validation Phase]
    D --> E[Deployment Phase]

    A1[IBM QSE / Cryptosense] --> A
    A2[Semgrep / CodeQL Rules] --> A
    A3[sslyze / nmap Scans] --> A

    B1[CBOM Analysis] --> B
    B2[Risk Scoring Matrix] --> B
    B3[Vendor Readiness Check] --> B

    C1[liboqs / OpenSSL 3.5+] --> C
    C2[Library Integration] --> C
    C3[Protocol Updates] --> C

    D1[KAT Validation] --> D
    D2[ACVP Testing] --> D
    D3[Interop Testing] --> D
    D4[Fuzzing Campaigns] --> D
    D5[Formal Verification] --> D

    E1[Benchmarking Suite] --> E
    E2[Canary Deployment] --> E
    E3[Monitoring & Rollback] --> E

    style A fill:#e1f5fe
    style B fill:#fff3e0
    style C fill:#e8f5e9
    style D fill:#fce4ec
    style E fill:#f3e5f5

6. Testing Pyramid for PQC

graph TD
    subgraph Testing Pyramid
        L1["<b>Unit Tests</b><br/>KATs, algorithm correctness,<br/>individual function tests"]
        L2["<b>Integration Tests</b><br/>Library API tests, TLS handshake,<br/>certificate chain validation"]
        L3["<b>Security Tests</b><br/>Fuzzing, side-channel analysis,<br/>constant-time verification"]
        L4["<b>Interop Tests</b><br/>Multi-vendor TLS, cross-platform,<br/>hybrid negotiation"]
        L5["<b>Performance Tests</b><br/>Benchmarks, load testing,<br/>latency regression"]
        L6["<b>Formal Verification</b><br/>EasyCrypt proofs, Jasmin/Libjade,<br/>HACL* verified code"]
    end

    L1 --> L2 --> L3 --> L4 --> L5 --> L6

    style L1 fill:#c8e6c9
    style L2 fill:#a5d6a7
    style L3 fill:#81c784
    style L4 fill:#66bb6a
    style L5 fill:#4caf50
    style L6 fill:#388e3c,color:#fff

7. Learning Resources

7.1 Curated Resource Table

| Resource | Type | Level | Focus | Description |
|---|---|---|---|---|
| NIST PQC Project Page | Official portal | All levels | Standards, submissions, reports | Authoritative source for all NIST PQC standardization documents, draft standards, and call for proposals |
| PQCrypto Conference Series | Academic conference | Advanced | Research frontiers | Annual conference dedicated to post-quantum cryptography research, proceedings published by Springer |
| Mathematics of Isogenies for Post-Quantum Cryptography (Galbraith) | Textbook | Advanced | Mathematical foundations | Deep treatment of the algebraic geometry and number theory underlying isogeny-based schemes |
| Post-Quantum Cryptography (Bernstein, Buchmann, Dahmen) | Textbook | Intermediate-Advanced | Comprehensive survey | The foundational textbook covering all PQC families with rigorous mathematical treatment |
| CryptoHack PQC Challenges | Interactive platform | Beginner-Intermediate | Hands-on learning | Browser-based cryptography challenges including lattice problems, LWE, and PQC scheme attacks |
| Cloudflare PQC Blog Series | Blog posts | Intermediate | Deployment practice | Practical insights from deploying PQC at scale, including real-world performance data and lessons learned |
| IETF PQC Working Group | Standards body | Intermediate-Advanced | Protocol integration | Drafts and RFCs for integrating PQC into TLS, X.509, IKEv2, and other IETF protocols |
| A Decade of Lattice Cryptography (Peikert) | Survey paper | Advanced | Lattice theory | Comprehensive survey of lattice-based cryptographic constructions and their security foundations |
| NIST SP 800-227 | Guidelines | Intermediate | Migration planning | NIST recommendations for migrating to post-quantum cryptographic standards |
| SAFEcrypto EU Project Publications | Research outputs | Advanced | Implementation | Publications on secure architectures for future emerging cryptography, including side-channel resistance |

For security engineers starting PQC migration:

  1. Foundation (Weeks 1-2). Read NIST SP 800-227 and the NIST PQC FAQ. Understand why migration is needed and the timeline pressures.
  2. Algorithm literacy (Weeks 3-4). Work through CryptoHack lattice challenges. Read the overview sections of FIPS 203/204/205 (not the full specifications).
  3. Hands-on implementation (Weeks 5-6). Build a test application using liboqs. Run the OQS test suite. Generate PQC certificates and establish a TLS 1.3 connection with hybrid key exchange.
  4. Migration planning (Weeks 7-8). Run crypto discovery tools against your codebase. Build a CBOM. Assess vendor readiness using the tracker below.
  5. Deep specialization (ongoing). Follow the PQCrypto conference, subscribe to the pqc-forum mailing list, and monitor NIST’s ongoing additional-signature standardization.

8. Vendor PQC Readiness Tracker

The following tables capture the state of PQC support across major technology vendors. PQC adoption is evolving rapidly — treat these as a snapshot and verify current status before making migration decisions.

8.1 Cloud Providers

| Vendor | Product/Service | PQC Support | Algorithm | Status | Notes |
|---|---|---|---|---|---|
| AWS | KMS | Hybrid key agreement | ML-KEM-768 | GA | TLS connections to KMS endpoints use hybrid PQC |
| AWS | ACM | PQC certificates | ML-DSA-65 | Preview | Certificate issuance with PQC signatures |
| AWS | s2n-tls | Hybrid key exchange | X25519+ML-KEM-768 | GA | Default for AWS SDK connections |
| Azure | Key Vault | PQC key types | ML-KEM, ML-DSA | Preview | Managed HSM support for PQC key operations |
| Azure | Azure TLS | Hybrid key exchange | X25519+ML-KEM-768 | GA | Enabled for Azure Front Door, Application Gateway |
| GCP | Cloud KMS | Hybrid key agreement | ML-KEM-768 | GA | BoringSSL-backed, enabled by default for gRPC |
| GCP | Certificate Authority Service | PQC certificates | ML-DSA-65 | Preview | Hybrid certificate issuance |

8.2 Browser Vendors

| Vendor | Browser | PQC Support | Algorithm | Status | Notes |
|---|---|---|---|---|---|
| Google | Chrome | Hybrid key exchange | X25519+ML-KEM-768 | GA (default) | Enabled by default since Chrome 124; uses BoringSSL |
| Mozilla | Firefox | Hybrid key exchange | X25519+ML-KEM-768 | GA (default) | Enabled by default since Firefox 128; uses NSS |
| Apple | Safari | Hybrid key exchange | X25519+ML-KEM-768 | GA | Enabled via Apple CryptoKit, macOS 15+ / iOS 18+ |
| Microsoft | Edge | Hybrid key exchange | X25519+ML-KEM-768 | GA (default) | Inherits Chromium PQC support |

8.3 OS Vendors

| Vendor | OS | PQC Support | Algorithm | Status | Notes |
|---|---|---|---|---|---|
| Microsoft | Windows | CNG API PQC | ML-KEM, ML-DSA | Preview | Available via Windows CNG in Windows 11 24H2+ |
| Apple | macOS / iOS | CryptoKit PQC | ML-KEM-768 | GA | Available via Apple CryptoKit, macOS 15+ / iOS 18+ |
| Canonical | Ubuntu | OpenSSL 3.5+ PQC | ML-KEM, ML-DSA, SLH-DSA | Available | Via OpenSSL 3.5 in Ubuntu 25.04+ |
| Red Hat | RHEL | OpenSSL PQC | ML-KEM, ML-DSA | Preview | RHEL 10 ships OpenSSL 3.5 with PQC providers |
| SUSE | SLES | OpenSSL PQC | ML-KEM, ML-DSA | Preview | Available through OpenSSL 3.5 backport |

8.4 Network Equipment

| Vendor | Product Line | PQC Support | Algorithm | Status | Notes |
|---|---|---|---|---|---|
| Cisco | IOS-XE Routers | IKEv2 hybrid PQC | ML-KEM-768 | Preview | Limited to specific platforms (Catalyst 8000 series) |
| Cisco | Meraki | None | N/A | Roadmap | Planned for future firmware updates |
| Palo Alto | PAN-OS Firewalls | TLS inspection PQC passthrough | N/A | GA | Passes through PQC TLS without breaking inspection |
| Palo Alto | GlobalProtect VPN | Hybrid PQC key exchange | ML-KEM-768 | Preview | IKEv2 with PQC hybrid key exchange |
| Fortinet | FortiOS | IPsec PQC hybrid | ML-KEM-768 | Preview | Available in FortiOS 7.6+ for site-to-site VPN |
| Juniper | Junos OS | IKEv2 hybrid PQC | ML-KEM-768 | Preview | Limited platform support (SRX5000 series) |

8.5 Certificate Authorities

| CA | PQC Support | Algorithm | Status | Notes |
|---|---|---|---|---|
| DigiCert | Hybrid PQC certificates | ML-DSA + RSA/ECDSA | GA | Dual-signed certificates for gradual migration |
| DigiCert | Pure PQC certificates | ML-DSA-65, ML-DSA-87 | Preview | For testing and early adoption |
| Sectigo | Hybrid PQC certificates | ML-DSA + ECDSA | Preview | Available through Sectigo Certificate Manager |
| Entrust | PQC-ready certificates | ML-DSA | Preview | Entrust PKI platform supports PQC issuance |
| Let’s Encrypt | None | N/A | Roadmap | Evaluating PQC for ACME protocol integration |
| Google Trust Services | Hybrid PQC certificates | ML-DSA-65 | Preview | For Google Cloud customers |

9. Migration Templates

9.1 Cryptographic Inventory Spreadsheet Structure

A well-structured cryptographic inventory is the foundation of PQC migration planning. Use this structure as a template:

| Column | Description | Example Values |
|---|---|---|
| Asset ID | Unique identifier for the cryptographic usage | CRYPTO-001, CRYPTO-002 |
| System/Application | Which system uses this crypto | Payment Gateway, Auth Service |
| Component | Specific component within the system | TLS termination, JWT signing |
| Algorithm | Current algorithm in use | RSA-2048, ECDSA-P256, AES-256-GCM |
| Usage | How the algorithm is used | Key exchange, digital signature, encryption |
| Key Size | Current key length | 2048 bits, 256 bits |
| Library/Provider | Implementation providing the algorithm | OpenSSL 3.0, BoringSSL, AWS-LC |
| Quantum Risk | Risk classification | High (broken by Shor’s), Medium (weakened by Grover’s), Low (quantum-safe) |
| PQC Replacement | Target post-quantum algorithm | ML-KEM-768, ML-DSA-65, AES-256 (no change) |
| Data Sensitivity | Classification of protected data | PII, financial, healthcare, public |
| Data Lifespan | How long data must remain confidential | 5 years, 25 years, indefinite |
| Migration Priority | Priority based on risk and sensitivity | P0 (immediate), P1 (within 1 year), P2 (within 3 years) |
| Dependencies | Other systems affected by migration | Upstream CA, partner API, HSM firmware |
| Owner | Responsible team/individual | Platform Security Team, Alice Smith |
| Status | Current migration status | Not Started, In Progress, Testing, Complete |
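The template translates naturally into code. The sketch below is illustrative, not a standard schema: it models one inventory row as a typed record (field names mirror a subset of the columns above) and derives a migration priority from an example policy that treats quantum-vulnerable key exchange over long-lived data as most urgent, matching the HNDL reasoning in the next section.

```python
from dataclasses import dataclass

# One row of the cryptographic inventory. Field names mirror the template
# columns; the priority rule below is an example policy, not a standard.

@dataclass
class CryptoAsset:
    asset_id: str
    system: str
    component: str
    algorithm: str
    usage: str                 # "key exchange", "digital signature", "encryption"
    quantum_risk: str          # "High", "Medium", "Low"
    data_lifespan_years: int
    pqc_replacement: str
    status: str = "Not Started"

def migration_priority(asset: CryptoAsset) -> str:
    """Example policy: vulnerable key exchange protecting long-lived data
    is P0, because captured traffic can be decrypted later (HNDL)."""
    if (asset.quantum_risk == "High" and asset.usage == "key exchange"
            and asset.data_lifespan_years >= 5):
        return "P0 (immediate)"
    if asset.quantum_risk == "High":
        return "P1 (within 1 year)"
    if asset.quantum_risk == "Medium":
        return "P2 (within 3 years)"
    return "P3 (monitor)"

tls = CryptoAsset("CRYPTO-001", "Payment Gateway", "TLS termination",
                  "ECDHE-P256", "key exchange", "High", 25, "ML-KEM-768")
print(migration_priority(tls))  # -> P0 (immediate)
```

Keeping the inventory in a structured form like this (rather than only a spreadsheet) lets you regenerate priorities automatically as risk classifications or data lifespans change.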

9.2 Risk Assessment Matrix

Evaluate each cryptographic asset against two dimensions: quantum vulnerability and data exposure impact.

```text
                    DATA EXPOSURE IMPACT
                    Low          Medium       High         Critical
                 ┌────────────┬────────────┬────────────┬────────────┐
  Symmetric      │            │            │            │            │
  (AES-128+),    │   ACCEPT   │   ACCEPT   │  MONITOR   │  MONITOR   │
  Grover-        │            │            │            │            │
  weakened       ├────────────┼────────────┼────────────┼────────────┤
  Hash-based     │            │            │            │            │
  (SHA-256+)     │   ACCEPT   │  MONITOR   │  MONITOR   │    PLAN    │
                 │            │            │            │            │
                 ├────────────┼────────────┼────────────┼────────────┤
  RSA/ECC        │            │            │            │            │
  signatures     │  MONITOR   │    PLAN    │  MIGRATE   │  MIGRATE   │
  (auth only)    │            │            │    SOON    │    NOW     │
                 ├────────────┼────────────┼────────────┼────────────┤
  RSA/ECC/DH     │            │            │            │            │
  key exchange   │    PLAN    │  MIGRATE   │  MIGRATE   │  MIGRATE   │
  (HNDL risk)    │            │    SOON    │    NOW     │    NOW     │
                 └────────────┴────────────┴────────────┴────────────┘
                  (rows: QUANTUM VULNERABILITY, least to most urgent)
```

HNDL = “Harvest Now, Decrypt Later” — encrypted data captured today can be decrypted when quantum computers arrive. This makes key exchange migration more urgent than signature migration for long-lived confidential data. For the mathematical basis of this urgency model, see Migration Strategies & Crypto Agility.
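The matrix lends itself to direct encoding, so the prioritization step can run over the whole inventory instead of being applied by hand. A minimal sketch (the category and function names are ours; the cell values are taken verbatim from the matrix):

```python
# Lookup-table encoding of the risk assessment matrix above.
# Rows: quantum vulnerability class; columns: data exposure impact.

IMPACTS = ("low", "medium", "high", "critical")

RISK_MATRIX = {
    "symmetric":    ("ACCEPT",  "ACCEPT",       "MONITOR",      "MONITOR"),
    "hash":         ("ACCEPT",  "MONITOR",      "MONITOR",      "PLAN"),
    "signature":    ("MONITOR", "PLAN",         "MIGRATE SOON", "MIGRATE NOW"),
    "key_exchange": ("PLAN",    "MIGRATE SOON", "MIGRATE NOW",  "MIGRATE NOW"),
}

def assess(crypto_class: str, impact: str) -> str:
    """Return the recommended action for one cryptographic asset."""
    return RISK_MATRIX[crypto_class][IMPACTS.index(impact.lower())]

print(assess("key_exchange", "critical"))  # -> MIGRATE NOW
print(assess("symmetric", "high"))         # -> MONITOR
```

Note how the key-exchange row escalates fastest: that asymmetry is the HNDL effect made explicit.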

9.3 Migration Project Plan Outline

```text
PQC Migration Project Plan
==========================

Phase 0: Discovery (Weeks 1-4)
  [ ] Run crypto discovery tools across all codebases
  [ ] Scan network infrastructure with sslyze/nmap
  [ ] Build CBOM for all applications
  [ ] Identify all certificate chains and their algorithms
  [ ] Catalog HSM/KMS usage and vendor PQC readiness

Phase 1: Assessment (Weeks 5-8)
  [ ] Classify each crypto asset by quantum risk
  [ ] Score data sensitivity and lifespan
  [ ] Apply risk assessment matrix to prioritize migration
  [ ] Assess vendor PQC readiness (use tracker above)
  [ ] Identify dependency chains that block migration
  [ ] Estimate migration effort per asset (T-shirt sizing)

Phase 2: Architecture (Weeks 9-12)
  [ ] Design crypto-agile architecture patterns
  [ ] Select PQC algorithms per use case
  [ ] Choose hybrid vs. pure PQC strategy per system
  [ ] Define library/provider strategy (OpenSSL 3.5, AWS-LC, etc.)
  [ ] Plan certificate infrastructure migration (CA readiness)
  [ ] Document rollback procedures

Phase 3: Implementation (Weeks 13-24)
  [ ] Upgrade TLS libraries to PQC-capable versions
  [ ] Implement hybrid key exchange for highest-priority systems
  [ ] Migrate signature schemes where PQC certificates are available
  [ ] Update HSM firmware/configuration for PQC support
  [ ] Modify application code for crypto-agile algorithm selection

Phase 4: Validation (Weeks 25-30)
  [ ] Execute KAT validation for all PQC implementations
  [ ] Run ACVP testing where FIPS validation is required
  [ ] Conduct interoperability testing with partners/vendors
  [ ] Perform security testing (fuzzing, side-channel analysis)
  [ ] Run performance benchmarks under production-like load
  [ ] Validate compliance with CNSA 2.0 / organizational policy

Phase 5: Deployment (Weeks 31-36)
  [ ] Canary deployment to non-critical systems
  [ ] Monitor for compatibility issues, performance regression
  [ ] Gradual rollout to production systems
  [ ] Update monitoring and alerting for PQC-specific metrics
  [ ] Document lessons learned and update runbooks

Phase 6: Ongoing (Continuous)
  [ ] Monitor NIST standardization updates
  [ ] Track vendor PQC support evolution
  [ ] Re-assess risk as quantum computing advances
  [ ] Validate crypto agility by rotating algorithms
  [ ] Maintain cryptographic inventory (CBOM) as systems change
```
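Phase 3's "crypto-agile algorithm selection" item deserves a concrete shape. One common pattern is a registry that maps policy-controlled algorithm names to implementations, so rotating from a hybrid to a pure PQC KEM later is a configuration change rather than a code change. This is a minimal sketch; the registry contents are placeholder strings standing in for real KEM objects (e.g. from a liboqs binding), and `POLICY` is a hypothetical config source.

```python
from typing import Callable, Dict

class KemRegistry:
    """Crypto-agility pattern: callers ask for a named KEM, configuration
    decides which concrete algorithm that name resolves to."""

    def __init__(self) -> None:
        self._factories: Dict[str, Callable[[], object]] = {}

    def register(self, name: str, factory: Callable[[], object]) -> None:
        self._factories[name] = factory

    def create(self, name: str) -> object:
        if name not in self._factories:
            raise ValueError(f"KEM {name!r} not available - check crypto policy")
        return self._factories[name]()

registry = KemRegistry()
# Placeholder factories; in practice these would construct real KEM objects.
registry.register("ML-KEM-768", lambda: "mlkem768-instance")
registry.register("X25519+ML-KEM-768", lambda: "hybrid-instance")

# Application code reads the algorithm name from policy, never hardcodes it.
POLICY = {"kem": "X25519+ML-KEM-768"}
kem = registry.create(POLICY["kem"])
print(kem)  # -> hybrid-instance
```

Phase 6's "validate crypto agility by rotating algorithms" then becomes a test: change `POLICY["kem"]` and confirm the system still works end to end.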

9.4 Testing Checklist

Use this checklist for each PQC implementation before deployment:

```text
PQC Implementation Testing Checklist
====================================

Functional Correctness
  [ ] All NIST KAT vectors pass
  [ ] Round-trip tests (keygen → encaps/sign → decaps/verify)
  [ ] Edge cases: zero-length messages, maximum-length messages
  [ ] Error handling: invalid inputs, truncated data, corrupted keys
  [ ] Implicit rejection (ML-KEM): invalid ciphertext returns pseudorandom value

Security Testing
  [ ] Constant-time verification (ctgrind/timecop)
  [ ] Memory safety (Valgrind, AddressSanitizer, MSan)
  [ ] Fuzzing campaign (minimum 24 hours, AFL++ or libFuzzer)
  [ ] No secret-dependent branching (verified via binary analysis)
  [ ] RNG quality: entropy source validated, DRBG properly seeded

Performance Validation
  [ ] Benchmark on target hardware (not just dev laptops)
  [ ] Latency within SLA at p50, p95, p99
  [ ] Throughput meets capacity requirements
  [ ] Memory consumption within budget
  [ ] Key/ciphertext/signature sizes fit protocol constraints (MTU, etc.)

Interoperability
  [ ] Tested against OQS interop server
  [ ] Tested against at least 2 independent implementations
  [ ] Hybrid negotiation: graceful fallback when peer lacks PQC support
  [ ] Certificate chain validation with PQC signatures
  [ ] Backward compatibility: classical-only clients still work

Compliance & Governance
  [ ] Algorithm selection matches organizational policy
  [ ] FIPS 140-3 validation status verified (if required)
  [ ] CNSA 2.0 timeline compliance verified
  [ ] Change management approval obtained
  [ ] Rollback procedure documented and tested
```
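The round-trip item in the checklist has a simple canonical shape: keygen, encapsulate against the public key, decapsulate with the secret key, and require the two shared secrets to match. The harness below sketches that shape against a minimal KEM interface. `ToyKem` is a deliberately insecure placeholder so the harness runs standalone; in practice you would substitute a real binding (for example via liboqs), and real validation must also run the NIST KAT vectors, which this round-trip check does not replace.

```python
import hashlib
import os

class ToyKem:
    """Placeholder KEM: NOT secure. Exists only so the harness is runnable."""

    def keygen(self):
        sk = os.urandom(32)
        pk = hashlib.sha256(sk).digest()   # toy: pk is a hash of sk
        return pk, sk

    def encaps(self, pk):
        ct = os.urandom(32)                # toy: "ciphertext" is random
        ss = hashlib.sha256(pk + ct).digest()
        return ct, ss

    def decaps(self, sk, ct):
        pk = hashlib.sha256(sk).digest()
        return hashlib.sha256(pk + ct).digest()

def roundtrip_ok(kem, iterations: int = 100) -> bool:
    """Keygen -> encaps -> decaps must yield matching shared secrets."""
    for _ in range(iterations):
        pk, sk = kem.keygen()
        ct, ss_enc = kem.encaps(pk)
        if kem.decaps(sk, ct) != ss_enc:
            return False
    return True

print(roundtrip_ok(ToyKem()))  # -> True
```

The same harness structure extends to the checklist's error-handling items: feed truncated or bit-flipped ciphertexts into `decaps` and assert the implementation fails in the documented way (for ML-KEM, implicit rejection returning a pseudorandom value rather than an error).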

10. Quick-Reference: Tool Selection Guide

Choosing the right tool for each stage of your PQC migration:

| Migration Stage | Primary Tool | Alternatives | Output |
|---|---|---|---|
| Source code inventory | IBM QSE, Semgrep rules | CodeQL, CryptoGuard | CBOM, vulnerability list |
| Network scanning | sslyze | nmap NSE, custom OpenSSL scripts | TLS configuration report |
| Runtime crypto discovery | Cryptosense | Custom eBPF probes | Crypto operation inventory |
| Functional testing | OQS test suite, KATs | Custom test harnesses | Pass/fail per algorithm |
| FIPS validation testing | NIST ACVP | Manual CAVP submission | ACVP certificate |
| Interop testing | OQS interop server | IETF hackathon infrastructure | Compatibility matrix |
| Security testing | AFL++, libFuzzer | Honggfuzz, custom fuzzers | Crash reports, coverage |
| Benchmarking | liboqs speed, SUPERCOP | Custom harnesses, h2load | Cycles, latency, throughput |
| Formal verification | Jasmin/Libjade, HACL* | EasyCrypt proofs | Machine-checked proofs |
| Vendor tracking | Manual research | NIST NCCoE reports | Vendor readiness matrix |
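For the source-code inventory stage, a handful of custom Semgrep rules can bootstrap a CBOM before heavier tooling is in place. The rule below is an illustrative sketch, not a published rule: the `id`, `message`, and `pattern` are ours, and it assumes Python code that calls `rsa.generate_private_key` from the `cryptography` library. Semgrep's rule schema (`rules`, `id`, `languages`, `pattern`, `message`, `severity`) is as documented.

```yaml
rules:
  - id: inventory-rsa-keygen
    languages: [python]
    severity: WARNING
    message: Classical RSA key generation found - record in the crypto inventory (CBOM)
    pattern: rsa.generate_private_key(...)
```

Run with `semgrep --config rules.yaml src/` and feed the findings into the inventory spreadsheet from Section 9.1; analogous rules cover ECDSA, classical DH, and deprecated hash usage.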

Summary

PQC migration tooling is maturing rapidly, but no single tool covers the full migration lifecycle. A complete PQC toolchain requires discovery tools to find what needs changing, testing frameworks to validate the changes, benchmarking suites to ensure performance viability, and formal verification to supply the assurance that newer implementations cannot yet draw from the decades of battle-testing their classical predecessors enjoyed.

The practical path forward: start with discovery (Section 1) to build your cryptographic inventory, use the risk assessment matrix (Section 9.2) to prioritize, validate with the testing checklist (Section 9.4), benchmark on your actual hardware (Section 3.4), and track vendor readiness (Section 8) to know when your dependencies are ready. For the broader migration framework that ties these tools together, see Migration Strategies & Crypto Agility. For the specific library choices that your testing will validate, see Real-World Implementations & Libraries. For protocol-level integration guidance, see PQC in Protocols.