
PQC in Protocols


Overview

Post-quantum cryptographic algorithms do not exist in a vacuum. They must be integrated into the real-world protocols that billions of devices use daily — TLS, SSH, IPsec, Signal, WireGuard, X.509 PKI, DNSSEC, and more. Each protocol has its own architecture, message size constraints, round-trip assumptions, and ecosystem of implementations. Dropping ML-KEM or ML-DSA into these protocols is not a matter of simply swapping one algorithm for another. Every protocol presents unique engineering challenges: ClientHello fragmentation in TLS, UDP packet size limits in DNSSEC, bandwidth constraints on mobile in Signal, and certificate chain bloat in X.509 PKI.

This page provides a protocol-by-protocol analysis of post-quantum integration — what works today, what is still in draft, what is fundamentally broken by PQC size increases, and what creative engineering solutions the community has devised. For background on the algorithms themselves, see Lattice-Based Cryptography and Hash-Based Signatures. For an understanding of why this migration is urgent, see Classical Cryptography at Risk.


1. TLS 1.3

1.1 The Most Critical Protocol

TLS 1.3 protects the vast majority of internet traffic. Every HTTPS connection, every API call, every cloud service interaction relies on TLS. Making TLS quantum-resistant is the single highest-priority protocol migration in the PQC transition — and fortunately, it is also the most advanced.

The TLS 1.3 handshake has two cryptographic dependencies vulnerable to quantum attack:

  1. Key exchange (currently ECDH via X25519 or P-256) — broken by Shor’s algorithm, enabling passive harvest-now-decrypt-later attacks
  2. Authentication (server certificate signature, typically ECDSA or RSA) — broken by Shor’s algorithm, but requires an active man-in-the-middle attack at handshake time

The asymmetry matters: key exchange is the more urgent concern because encrypted traffic captured today can be decrypted retroactively once a quantum computer exists. Authentication is only vulnerable to real-time attacks, which require a functioning quantum computer at the moment of the handshake.

1.2 X25519MLKEM768 Hybrid Key Exchange

The primary mechanism for PQC in TLS 1.3 is hybrid key exchange — combining a classical algorithm (X25519) with a post-quantum KEM (ML-KEM-768) so that the connection is secure as long as either algorithm remains unbroken.

The specific hybrid construction standardized for TLS is X25519MLKEM768, defined in the IETF draft draft-ietf-tls-ecdhe-mlkem (adopted by the TLS working group in 2024; an RFC number has not yet been assigned). The construction works as follows:

  1. The client generates an X25519 ephemeral key pair and an ML-KEM-768 encapsulation key
  2. Both public keys are sent in the ClientHello key_share extension
  3. The server performs X25519 key agreement and ML-KEM-768 encapsulation
  4. The server returns its X25519 public key and the ML-KEM-768 ciphertext in the ServerHello key_share
  5. Both parties concatenate the X25519 shared secret and the ML-KEM-768 shared secret
  6. The concatenated secret feeds into the TLS 1.3 key schedule (HKDF-Extract)
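The concatenate-and-extract logic in steps 5-6 can be sketched as follows. This is a simplified illustration, not the exact TLS 1.3 key schedule: the salt value, hash choice, and the precise position in the schedule are fixed by the specification, and the random byte strings stand in for real X25519 and ML-KEM-768 outputs.

```python
import hashlib
import hmac
import os

# Stand-ins for the two shared secrets (in a real handshake these come
# from X25519 key agreement and ML-KEM-768 encapsulation/decapsulation)
x25519_ss = os.urandom(32)
mlkem_ss = os.urandom(32)

# TLS 1.3's HKDF-Extract(salt, IKM) is simply HMAC-Hash(salt, IKM)
def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

# The hybrid input keying material is the concatenation of both shared
# secrets; the exact ordering is fixed by the draft (shown here as one
# example), and the all-zero salt is a simplification
hybrid_ikm = mlkem_ss + x25519_ss
handshake_input = hkdf_extract(b"\x00" * 32, hybrid_ikm)
assert len(handshake_input) == 32
```

Because the extract step consumes the concatenation of both secrets, an attacker must know both to predict the output — which is exactly the "secure as long as either algorithm holds" guarantee.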

The hybrid approach is critical for the transition period. It provides a security guarantee that pure PQC cannot: even if ML-KEM is broken by a novel classical attack (as happened to SIKE), the connection remains protected by X25519. Conversely, when quantum computers arrive, X25519 falls but ML-KEM holds.

1.3 Handshake Size Impact

The most immediate practical consequence of PQC in TLS is increased handshake size. The table below compares classical, hybrid, and pure PQC key exchange sizes:

Component                | X25519 (Classical) | X25519MLKEM768 (Hybrid)  | ML-KEM-768 (Pure PQC)
Client key share         | 32 bytes           | 1,216 bytes (32 + 1,184) | 1,184 bytes
Server key share         | 32 bytes           | 1,120 bytes (32 + 1,088) | 1,088 bytes
Total KE overhead        | 64 bytes           | 2,336 bytes              | 2,272 bytes
ClientHello total        | ~250 bytes         | ~1,450 bytes             | ~1,420 bytes
ServerHello total        | ~120 bytes         | ~1,200 bytes             | ~1,170 bytes
Full handshake (KE only) | ~370 bytes         | ~2,650 bytes             | ~2,590 bytes
Handshake increase       | baseline           | ~7.2x                    | ~7.0x

These numbers represent the key exchange contribution only. When PQC signatures are added for authentication (see Section 1.5), the total handshake size increases dramatically further.

1.4 ClientHello Fragmentation and Middlebox Issues

A classical TLS 1.3 ClientHello fits comfortably in a single TCP segment (typically under 300 bytes of TLS payload). With X25519MLKEM768, the ClientHello grows to approximately 1,450 bytes — still within a single TCP segment in most configurations, but large enough to trigger problems with some network middleboxes.

The fragmentation problem arises from two sources:

  1. TCP segmentation: If the ClientHello exceeds the path MTU (typically 1,500 bytes for Ethernet minus IP and TCP headers, leaving ~1,460 bytes of payload), it must be split across multiple TCP segments. Some middleboxes (firewalls, load balancers, DPI engines) reassemble only the first TCP segment and fail when the ClientHello is incomplete.

  2. TLS record layer fragmentation: TLS allows a single handshake message to span multiple TLS records, but some middlebox implementations assume the ClientHello is contained in a single record.

Google’s experiments deploying X25519MLKEM768 in Chrome revealed that approximately 0.5% of connections failed due to middlebox interference — a significant number at internet scale. The mitigations deployed include:

  • Split ClientHello: Sending the ClientHello across exactly two TLS records, with a carefully chosen split point, to work around middleboxes that choke on single large records
  • Fallback mechanisms: If the hybrid handshake fails, Chrome retries with classical-only key exchange
  • ECH (Encrypted Client Hello) interaction: ECH already causes larger ClientHellos, so PQC compounds an existing challenge

For pure PQC signatures in client certificates or certificate chains, the problem becomes more severe. An ML-DSA-65 signature is 3,309 bytes — a single certificate could push the handshake to 5-10 KB, requiring multiple round trips and dramatically increasing connection setup latency.

1.5 Certificate Chain Size Explosion with PQC Signatures

While hybrid key exchange is deployable today, PQC authentication is a much harder problem. The issue is certificate chain size:

Signature Algorithm | Signature Size | Public Key Size | Typical Cert Size
ECDSA (P-256)       | 64 bytes       | 64 bytes        | ~800 bytes
RSA-2048            | 256 bytes      | 256 bytes       | ~1,200 bytes
ML-DSA-44           | 2,420 bytes    | 1,312 bytes     | ~4,500 bytes
ML-DSA-65           | 3,309 bytes    | 1,952 bytes     | ~6,000 bytes
ML-DSA-87           | 4,627 bytes    | 2,592 bytes     | ~8,000 bytes
SLH-DSA-SHA2-128s   | 7,856 bytes    | 32 bytes        | ~8,700 bytes
SLH-DSA-SHA2-128f   | 17,088 bytes   | 32 bytes        | ~17,900 bytes

A typical TLS certificate chain contains 2-3 certificates (end-entity + intermediate + root). With ECDSA, the entire chain is roughly 2.5 KB. With ML-DSA-65, it balloons to approximately 18 KB. With SLH-DSA variants, it can exceed 50 KB.

This size explosion has cascading effects:

  • Handshake latency: Multiple TCP round trips needed to deliver the certificate chain
  • QUIC amplification limits: QUIC restricts server responses to 3x the client’s initial message until the client is verified, making large certificate chains problematic
  • Embedded and IoT devices: Constrained devices with limited memory and bandwidth cannot handle 50 KB handshakes
  • CDN and edge computing: Connection setup time directly impacts page load performance

1.6 Merkle Tree Certificates Proposal

To address the certificate size problem, researchers have proposed Merkle Tree Certificates (MTCs) — a fundamentally different approach to TLS authentication that sidesteps the signature size issue entirely.

The core idea: instead of embedding a signature in each certificate, the CA publishes a Merkle tree of all valid certificates at regular intervals (e.g., hourly). The server proves its certificate is in the tree by providing a Merkle inclusion proof — a sequence of hashes — rather than a signature.

How it works:

  1. The CA collects all certificate issuance requests over a time window
  2. The CA builds a Merkle tree with each leaf being a certificate
  3. The CA publishes the tree root (a single hash, ~32 bytes)
  4. Servers obtain their Merkle inclusion proof (log₂(n) hashes for n certificates)
  5. During TLS handshake, the server sends its certificate + inclusion proof instead of a signed certificate
  6. The client verifies the inclusion proof against the known tree root

The authentication payload shrinks from thousands of bytes (PQC signature) to a few hundred bytes (Merkle path). For a tree with 1 million certificates, the inclusion proof is approximately 20 hashes x 32 bytes = 640 bytes — dramatically smaller than any PQC signature.
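The inclusion-proof mechanics can be illustrated with a toy tree. This is a sketch of the general Merkle technique, not the exact structure in the draft (which defines its own leaf encoding and hashing); eight dummy "certificates" keep the tree a clean power of two.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Toy CA batch: 8 dummy certificates (power of two keeps the sketch simple)
certs = [f"cert-{i}".encode() for i in range(8)]

# Build the tree bottom-up; levels[0] holds leaf hashes, levels[-1] the root
levels = [[h(c) for c in certs]]
while len(levels[-1]) > 1:
    prev = levels[-1]
    levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
root = levels[-1][0]  # the ~32-byte value the CA publishes

def inclusion_proof(idx: int) -> list:
    """Sibling hashes from leaf to root: log2(n) hashes total."""
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])  # sibling at this level
        idx //= 2
    return path

def verify(cert: bytes, idx: int, path: list, root: bytes) -> bool:
    node = h(cert)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root

proof = inclusion_proof(5)
assert verify(certs[5], 5, proof, root)           # valid proof accepted
assert not verify(b"rogue-cert", 5, proof, root)  # forgery rejected
```

For 8 leaves the proof is 3 hashes (96 bytes); for 1 million leaves it is 20 hashes (640 bytes), matching the figure above — the proof grows with log₂(n), not with signature size.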

Trade-offs:

  • Requires a transparency-log-style infrastructure for distributing tree roots
  • Certificates are only valid after the next tree publication (introducing issuance latency)
  • Revocation semantics change — a certificate is revoked by excluding it from the next tree
  • Requires clients to have a recent tree root, implying a background update mechanism

The IETF draft (draft-davidben-tls-merkle-tree-certs) is under active development and represents one of the most promising long-term solutions to PQC authentication in TLS.

1.7 Browser and Server Support Status

As of early 2026, deployment of PQC in TLS is well underway for key exchange:

Implementation      | X25519MLKEM768              | PQC Signatures   | Notes
Chrome/Chromium     | Enabled by default (v131+)  | Experimental     | First major browser to ship hybrid KE
Firefox             | Enabled by default (v134+)  | Not yet          | Followed Chrome’s lead
Safari/WebKit       | Enabled by default          | Not yet          | Apple platforms support via Security framework
OpenSSL 3.5         | Supported                   | ML-DSA supported | Provider-based architecture
BoringSSL           | Enabled by default          | Experimental     | Powers Chrome and Android
AWS s2n-tls         | Supported                   | Not yet          | Used across AWS services
Cloudflare          | Enabled by default          | Research         | Handles ~20% of web traffic
Nginx               | Via OpenSSL 3.5             | Via OpenSSL 3.5  | Configuration required
Caddy               | Via Go 1.24 crypto/tls      | Not yet          | Go’s standard library support

The key exchange transition is proceeding rapidly. PQC authentication remains the harder problem and is unlikely to see broad deployment until the certificate size issue is resolved.

QUIC-specific challenges:

QUIC (RFC 9000) uses TLS 1.3 for its handshake but imposes additional constraints. The QUIC anti-amplification limit restricts the server’s initial response to 3x the size of the client’s first flight until the client’s address is validated. With classical TLS, the server’s certificate chain fits within this limit. With PQC certificates (18+ KB for an ML-DSA-65 chain), the server may exceed the amplification limit before completing the handshake, requiring an additional round trip for address validation. This adds latency to every new QUIC connection — directly impacting web performance for HTTP/3.

Additionally, QUIC’s 0-RTT (early data) resumption mechanism relies on server parameters cached from a previous connection. If the server rotates to a PQC certificate between connections, cached state tied to the old certificate may no longer be valid, forcing a full handshake. Certificate caching strategies will need to account for the transition period, during which servers may present different certificate types.
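The amplification-limit problem can be checked with back-of-the-envelope arithmetic using the approximate sizes quoted in Sections 1.3 and 1.5 (all figures are the rough estimates from those sections, not measured values):

```python
# QUIC anti-amplification: before address validation, the server may send
# at most 3x the bytes received in the client's first flight
client_first_flight = 1_450          # hybrid ClientHello, bytes (Section 1.3)
budget = 3 * client_first_flight     # 4,350 bytes

classical_chain = 2_500              # ECDSA 3-cert chain (Section 1.5)
mldsa65_chain = 18_000               # ML-DSA-65 3-cert chain (Section 1.5)

# ServerHello (~1,200 B) plus the certificate chain must fit in the budget
# to avoid an extra round trip for address validation
assert 1_200 + classical_chain <= budget   # fits: handshake completes in 1 RTT
assert 1_200 + mldsa65_chain > budget      # exceeds: extra RTT required
```

The classical chain fits with room to spare; the ML-DSA-65 chain overshoots the budget by more than 4x, which is why PQC certificates force the additional validation round trip.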

sequenceDiagram
    participant C as Client
    participant S as Server

    Note over C,S: PQC TLS 1.3 Handshake (X25519MLKEM768)

    C->>S: ClientHello<br/>+ key_share: X25519 pk (32B) || ML-KEM-768 ek (1,184B)<br/>+ supported_groups: X25519MLKEM768<br/>~1,450 bytes total

    Note over S: Generate X25519 shared secret<br/>Encapsulate ML-KEM-768 → ciphertext + shared secret<br/>Concatenate both shared secrets

    S->>C: ServerHello<br/>+ key_share: X25519 pk (32B) || ML-KEM-768 ct (1,088B)<br/>~1,200 bytes total

    S->>C: EncryptedExtensions
    S->>C: Certificate (ECDSA for now, PQC future)
    S->>C: CertificateVerify (ECDSA sig)
    S->>C: Finished

    Note over C: Derive X25519 shared secret<br/>Decapsulate ML-KEM-768 → shared secret<br/>Concatenate → HKDF-Extract into key schedule

    C->>S: Finished

    Note over C,S: Application Data (encrypted with hybrid-derived keys)

2. SSH

2.1 sntrup761x25519-sha512 — Quantum-Resistant Key Exchange

OpenSSH was one of the earliest protocols to deploy post-quantum key exchange in production. Since OpenSSH 8.5 (released March 2021), the hybrid key exchange method sntrup761x25519-sha512@openssh.com has been available. Since OpenSSH 9.0 (released April 2022), it has been the default key exchange method — making SSH arguably the most PQC-ready major protocol.

The construction combines:

  • sntrup761: Streamlined NTRU Prime, a lattice-based KEM with a 761-dimensional lattice. Not the NIST-standardized ML-KEM, but a well-analyzed alternative from the NTRU family designed by Daniel J. Bernstein and collaborators. It was an alternate candidate in Round 3 of the NIST process.
  • X25519: The same elliptic curve Diffie-Hellman construction used throughout modern cryptography
  • SHA-512: Hash function for key derivation

The shared secrets from both algorithms are combined using SHA-512 to produce the session key material. Like the TLS hybrid approach, the connection is secure as long as either algorithm holds.
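The combination step can be sketched as below. This is a simplified illustration: OpenSSH’s actual KEX hashing covers the full exchange transcript (identification strings, KEXINIT payloads, host key, and both secrets), and the random byte strings here stand in for real sntrup761 and X25519 outputs.

```python
import hashlib
import os

# Stand-ins for the two shared secrets (real values come from sntrup761
# decapsulation and X25519 key agreement)
sntrup_ss = os.urandom(32)
x25519_ss = os.urandom(32)

# Both secrets are hashed together with SHA-512; session keys are then
# derived from this combined value (transcript hashing omitted here)
combined = hashlib.sha512(sntrup_ss + x25519_ss).digest()
assert len(combined) == 64
```

As with the TLS hybrid, an attacker who breaks only one of the two algorithms still lacks one input to the hash and cannot reconstruct the session keys.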

2.2 Key Exchange Size Impact

The size impact on SSH key exchange is meaningful but manageable:

Key Exchange Method             | Client → Server          | Server → Client          | Total
curve25519-sha256 (classical)   | 32 bytes                 | 32 bytes                 | 64 bytes
sntrup761x25519-sha512 (hybrid) | 1,190 bytes (1,158 + 32) | 1,071 bytes (1,039 + 32) | 2,261 bytes
Increase                        | 37x                      | 33x                      | 35x

In absolute terms, the increase from 64 bytes to ~2.3 KB is trivial for modern networks. SSH connections are typically long-lived, so the one-time key exchange overhead is amortized over the session duration. Even on high-latency satellite links, the additional data fits within a single packet exchange.

2.3 Host Key and User Authentication PQC Options

While key exchange is solved, SSH authentication presents the same challenges as TLS:

Host keys: The server’s host key is used to authenticate the server to the client. Classical host key types include ssh-ed25519 and ssh-rsa. PQC host key types are not yet standardized for OpenSSH, but experimental support exists:

  • ML-DSA-65 host keys would add ~1,952 bytes to the public key and ~3,309 bytes to the signature
  • SLH-DSA host keys offer smaller public keys (32 bytes) but much larger signatures
  • The known_hosts file size would increase substantially with PQC public keys

User authentication: Public key authentication requires the user to hold a PQC private key and the server to store the corresponding public key in authorized_keys. The same size considerations as host keys apply. For organizations using SSH certificates (via ssh-keygen -s), certificate sizes will increase with PQC signature algorithms.

Current status: OpenSSH has not yet standardized PQC signature algorithms for host keys or user authentication. The IETF SSH working group is developing drafts for ML-DSA integration, but deployment is likely 1-2 years behind key exchange.

2.4 Configuration Examples

Verifying PQC key exchange is active (OpenSSH 9.0+, default):

# Check which key exchange was negotiated
ssh -v user@host 2>&1 | grep "kex:"
# Expected output: kex: sntrup761x25519-sha512@openssh.com

# Verify server supports hybrid KE
ssh -Q kex | grep sntrup
# Expected: sntrup761x25519-sha512@openssh.com

Explicitly configuring PQC key exchange in sshd_config:

# /etc/ssh/sshd_config
# Prefer hybrid PQC, fall back to classical
KexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org

# Once PQC host keys are available (future):
# HostKeyAlgorithms ml-dsa-65@openssh.com,ssh-ed25519

Client-side configuration in ~/.ssh/config:

# ~/.ssh/config
Host *
    KexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256
    # Disable classical-only fallback for high-security environments:
    # KexAlgorithms sntrup761x25519-sha512@openssh.com

Verifying the negotiated algorithm after connection:

# Full debug output showing KE algorithm
ssh -vvv user@host 2>&1 | grep -E "kex_input_kexinit|kex: algorithm"

# For automated verification in scripts
ssh -o BatchMode=yes -v user@host exit 2>&1 | \
    grep "kex: sntrup761x25519" && echo "PQC: active" || echo "PQC: inactive"

3. IPsec/IKEv2

3.1 RFC 9370 — Multiple Key Exchanges in IKEv2

IPsec uses the Internet Key Exchange protocol version 2 (IKEv2) to establish Security Associations (SAs). Classical IKEv2 performs a single Diffie-Hellman key exchange during the IKE_SA_INIT exchange. To add post-quantum security, RFC 9370 (published October 2023) defines a mechanism for performing multiple key exchanges within a single IKEv2 negotiation.

The design philosophy is deliberately conservative: rather than modifying the core IKE_SA_INIT exchange, RFC 9370 introduces Additional Key Exchange (AKE) payloads in subsequent IKE_INTERMEDIATE exchanges. This approach:

  • Maintains backward compatibility with existing IKEv2 implementations
  • Allows each key exchange to use a different algorithm (e.g., classical ECDH + post-quantum ML-KEM)
  • Permits more than two key exchanges if desired (e.g., ECDH + ML-KEM + Classic McEliece for defense in depth)
  • Keeps the security proofs of the core IKEv2 protocol intact

3.2 How Additional KE Payloads Work

The RFC 9370 negotiation flow extends the standard IKEv2 exchange:

Initiator                          Responder
---------                          ---------

IKE_SA_INIT:
HDR, SAi1, KEi(DH), Ni  -------->
                         <--------  HDR, SAr1, KEr(DH), Nr

    (Classical DH completed — derive initial SK_e/SK_a)

IKE_INTERMEDIATE (1st additional KE):
HDR, SK{KEi(ML-KEM-768)} -------->
                         <--------  HDR, SK{KEr(ML-KEM-768 ciphertext)}

    (ML-KEM shared secret mixed into key derivation)

IKE_INTERMEDIATE (2nd additional KE, optional):
HDR, SK{KEi(McEliece)}   -------->
                         <--------  HDR, SK{KEr(McEliece ciphertext)}

    (McEliece shared secret mixed into key derivation)

IKE_AUTH:
HDR, SK{IDi, AUTH, ...}  -------->
                         <--------  HDR, SK{IDr, AUTH, ...}

Each additional key exchange produces an independent shared secret. The final SA key material is derived by mixing all shared secrets together using the standard IKEv2 PRF+ function. The security guarantee is that the SA is secure as long as any one of the key exchanges produces a secret unknown to the attacker.
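The "mix all shared secrets" property can be sketched as a chained PRF. This is deliberately simplified: RFC 9370 specifies the exact prf/prf+ derivation including nonces and the running SK_d, and the random byte strings below are stand-ins for real key exchange outputs.

```python
import hashlib
import hmac
import os

def prf(key: bytes, data: bytes) -> bytes:
    # IKEv2 negotiates its PRF; HMAC-SHA-384 is shown as one example
    return hmac.new(key, data, hashlib.sha384).digest()

# Stand-ins for the per-exchange shared secrets
dh_ss = os.urandom(48)        # classical (EC)DH from IKE_SA_INIT
mlkem_ss = os.urandom(32)     # 1st additional KE (IKE_INTERMEDIATE)
mceliece_ss = os.urandom(32)  # optional 2nd additional KE

# Chain each secret into the running key material so the final SA keys
# depend on every exchange (nonces and prf+ expansion omitted)
sk_d = prf(b"\x00" * 48, dh_ss)
for extra in (mlkem_ss, mceliece_ss):
    sk_d = prf(sk_d, extra)
assert len(sk_d) == 48
```

Because each additional secret re-keys the chain, recovering the final key material requires knowing every shared secret — the defense-in-depth guarantee described above.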

Key technical details:

  • The SA payload in IKE_SA_INIT includes Additional_Key_Exchange transform types to negotiate which post-quantum algorithms will be used
  • Each IKE_INTERMEDIATE exchange is encrypted and authenticated under the keys derived from all previous exchanges
  • If the responder does not support additional key exchanges, the negotiation falls back to classical-only (graceful degradation)
  • The protocol supports up to seven additional key exchanges (transform types 6-12)

3.3 Implementation Support Status

Implementation      | RFC 9370     | PQC Algorithms                | Production Ready | Notes
strongSwan          | Yes (6.0+)   | ML-KEM-512/768/1024, FrodoKEM | Yes              | Most mature implementation; uses liboqs
Libreswan           | Yes (5.0+)   | ML-KEM-768, ML-KEM-1024       | Yes              | Available in RHEL 9.4+ and Fedora 40+
Windows IKEv2       | No           | No                            | No               | No announced timeline
Cisco IOS-XE        | Experimental | ML-KEM-768                    | No               | Lab availability only
Juniper Junos       | No           | No                            | No               | Tracking RFC 9370
Linux kernel (XFRM) | N/A          | N/A                           | N/A              | Kernel handles ESP, not IKE; no kernel changes needed

strongSwan configuration example:

# /etc/swanctl/conf.d/pqc-hybrid.conf
connections {
    pqc-vpn {
        version = 2
        proposals = aes256-sha384-x25519-ke1_mlkem768
        local {
            auth = pubkey
            certs = server.pem
        }
        remote {
            auth = pubkey
        }
        children {
            net {
                mode = tunnel
                esp_proposals = aes256gcm128
            }
        }
    }
}

The ke1_mlkem768 in the proposal string specifies ML-KEM-768 as the first additional key exchange (beyond the classical x25519 exchange). strongSwan handles the IKE_INTERMEDIATE exchanges transparently.

Libreswan configuration example:

# /etc/ipsec.d/pqc-connection.conf
conn pqc-tunnel
    authby=rsasig
    left=%defaultroute
    leftcert=host.pem
    right=203.0.113.1
    rightid=@peer.example.com
    # Enable hybrid PQC key exchange
    ikev2=yes
    intermediate=yes
    ike=aes256-sha2_384-ecp384+mlkem768
    esp=aes256gcm16
    auto=start

Operational considerations for IPsec:

  • SA rekey overhead: IPsec SAs are rekeyed periodically (typically every 1-8 hours). Each rekey with RFC 9370 requires the additional IKE_INTERMEDIATE exchanges, adding 1-2 round trips and ~2.3 KB of key exchange data. For VPN concentrators handling thousands of tunnels, the aggregate rekey bandwidth and CPU cost should be benchmarked.
  • IKE fragmentation: RFC 7383 (IKE message fragmentation) is important for PQC deployments. Large KE payloads (especially Classic McEliece at ~1 MB) may exceed path MTU and require IKE-level fragmentation — distinct from IP fragmentation and handled within the IKE protocol itself.
  • HSM integration: Hardware Security Modules used for IKE authentication may not yet support PQC algorithms for signing. The additional key exchanges themselves do not require HSM support (they are ephemeral), but if PQC signatures are added to IKE_AUTH in the future, HSM firmware updates will be necessary.

4. Signal Protocol

4.1 PQXDH — Post-Quantum Extended Diffie-Hellman

The Signal Protocol, used by Signal, WhatsApp, Google Messages (RCS), and others, introduced PQXDH (Post-Quantum Extended Diffie-Hellman) in September 2023 as a replacement for the X3DH (Extended Triple Diffie-Hellman) initial key agreement protocol. PQXDH adds an ML-KEM-768 key encapsulation to the existing X3DH Diffie-Hellman exchanges, providing hybrid post-quantum security for the initial key agreement.

Why this matters: Signal’s forward secrecy and future secrecy properties depend on the initial key agreement. If an attacker records the initial handshake and later gains access to a quantum computer, they could derive all subsequent message keys from the compromised initial secret. PQXDH prevents this harvest-now-decrypt-later scenario.

4.2 How X25519 + ML-KEM-768 Integrates into the Ratchet

Signal’s Double Ratchet protocol consists of three layers:

  1. Initial key agreement (X3DH / PQXDH) — establishes the first root key
  2. Diffie-Hellman ratchet — updates the root key with new DH exchanges on each message round-trip
  3. Symmetric ratchet — derives per-message keys using a KDF chain

PQXDH modifies layer 1 while leaving layers 2 and 3 unchanged. The protocol works as follows:

Sender (Alice) → Recipient (Bob):

  1. Bob publishes a signed pre-key bundle containing:

    • Identity key (IK_B): X25519 long-term key
    • Signed pre-key (SPK_B): X25519 medium-term key
    • One-time pre-key (OPK_B): X25519 ephemeral key (optional)
    • PQ pre-key (PQPK_B): ML-KEM-768 encapsulation key (new in PQXDH)
  2. Alice generates an ephemeral X25519 key pair (EK_A) and performs:

    • DH1 = X25519(IK_A, SPK_B)
    • DH2 = X25519(EK_A, IK_B)
    • DH3 = X25519(EK_A, SPK_B)
    • DH4 = X25519(EK_A, OPK_B) — if one-time pre-key available
    • SS = ML-KEM-768.Encapsulate(PQPK_B) — produces shared secret + ciphertext
  3. The master secret is derived as:

    master_secret = HKDF(DH1 || DH2 || DH3 || [DH4] || SS)
  4. Alice sends Bob: her identity key, ephemeral key, ML-KEM-768 ciphertext, and the encrypted first message

  5. Bob decapsulates ML-KEM-768 using his PQ private key, performs the same DH computations, and derives the same master secret
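The derivation in steps 2-3 can be sketched as below. This is a simplified single-block HKDF; the real PQXDH specification fixes domain-separation prefixes, info strings, and encodings, and the random byte strings stand in for actual X25519 and ML-KEM-768 outputs.

```python
import hashlib
import hmac
import os

# Stand-ins for the four DH outputs and the ML-KEM-768 shared secret
dh1, dh2, dh3, dh4 = (os.urandom(32) for _ in range(4))
ss = os.urandom(32)  # from ML-KEM-768 encapsulation against PQPK_B

def hkdf(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    # Minimal single-block HKDF-SHA256 (extract, then one expand round)
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

# master_secret = HKDF(DH1 || DH2 || DH3 || [DH4] || SS)
master_secret = hkdf(dh1 + dh2 + dh3 + dh4 + ss, b"PQXDH-sketch")
assert len(master_secret) == 32
```

Note that the ML-KEM shared secret SS enters the same KDF as the DH outputs: breaking the X25519 exchanges alone (e.g., with a future quantum computer) still leaves SS unknown, so the master secret remains safe.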

The DH ratchet in layer 2 continues to use X25519 exclusively. Adding ML-KEM to every ratchet step would be possible but is not currently implemented — the overhead would be significant for high-frequency messaging. The initial PQXDH exchange provides harvest-now-decrypt-later protection, while the ongoing DH ratchet provides forward secrecy against classical attackers.

4.3 Key Size Implications for Mobile

Mobile devices face unique constraints that make PQC integration challenging:

Component                          | X3DH (Classical) | PQXDH (Hybrid)                 | Increase
Pre-key bundle (stored on server)  | ~130 bytes       | ~1,314 bytes (+1,184 ML-KEM ek) | ~10x
Initial message overhead           | ~96 bytes        | ~1,184 bytes (+1,088 ML-KEM ct) | ~12x
Pre-key storage (100 one-time keys)| ~3.2 KB          | ~121 KB                         | ~38x
Bandwidth per new conversation     | ~226 bytes       | ~2,498 bytes                    | ~11x

For individual conversations, the overhead is negligible on modern mobile networks. The scaling concern is pre-key storage on Signal’s servers: each user uploads a batch of one-time pre-keys, and adding ML-KEM-768 encapsulation keys multiplies the per-key storage by roughly 10x. For Signal’s hundreds of millions of users, this represents a significant infrastructure cost.

Signal’s implementation mitigates this by:

  • Using “last-resort” PQ pre-keys that are reused when one-time PQ pre-keys are exhausted (with a security trade-off: loss of per-conversation PQ one-time key uniqueness)
  • Implementing pre-key rotation schedules that balance security against storage/bandwidth costs
  • Compressing pre-key bundles where possible

4.4 Future Directions: PQ Ratcheting

The current PQXDH design provides PQC protection only for the initial key agreement. The ongoing Double Ratchet still uses X25519 for its DH ratchet steps, meaning that message keys derived after the initial exchange are protected against harvest-now-decrypt-later (by the initial PQ exchange) but the forward secrecy mechanism itself relies on classical DH.

Research is ongoing into PQ-secure ratcheting — replacing the X25519 DH ratchet with ML-KEM-based ratcheting. The challenges are significant:

  • Asymmetry of KEMs: DH is symmetric (both parties contribute public keys and derive the same shared secret). KEMs are asymmetric (one party encapsulates, the other decapsulates). This changes the ratchet semantics — the party who generated the encapsulation key must be tracked, and pre-key distribution becomes necessary at every ratchet step.
  • Bandwidth: Each ratchet step with ML-KEM-768 would add ~1,184 bytes (encapsulation key) or ~1,088 bytes (ciphertext) to each message. For high-frequency messaging, this overhead is non-trivial on mobile networks.
  • Battery and CPU: While individual ML-KEM operations are fast, performing encapsulation/decapsulation on every message exchange has measurable battery impact on mobile devices when aggregated across thousands of daily messages.

The most likely approach is a periodic PQ ratchet — performing ML-KEM re-keying every N ratchet steps (e.g., every 100 messages or every hour) rather than on every message. This provides bounded exposure to quantum attack on the ratchet while keeping per-message overhead manageable.
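One way the periodic approach could look is sketched below. This is purely illustrative — no deployed protocol works exactly this way — and the random bytes stand in for a real ML-KEM-768 round trip; the interval, labels, and KDF are hypothetical choices.

```python
import hashlib
import hmac
import os

REKEY_INTERVAL = 100  # hypothetical: fresh ML-KEM secret every 100 messages

def kdf(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

root_key = os.urandom(32)
rekeys = 0
for msg_no in range(1, 251):
    if msg_no % REKEY_INTERVAL == 0:
        pq_secret = os.urandom(32)  # stand-in for an ML-KEM-768 exchange
        root_key = kdf(root_key, b"pq-rekey" + pq_secret)
        rekeys += 1
    # per-message keys still come from the (classically ratcheted) chain
    message_key = kdf(root_key, b"msg" + msg_no.to_bytes(4, "big"))

assert rekeys == 2  # messages 100 and 200 triggered a PQ re-key
```

The trade-off is visible in the counter: 250 messages cost only 2 KEM exchanges (~2.3 KB of extra traffic) instead of 250, while limiting how much ratchet state a quantum attacker could unwind to at most one interval.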


5. WireGuard

5.1 WireGuard’s Design Constraints

WireGuard’s cryptographic design is intentionally minimalist and opinionated: it uses a fixed set of primitives (X25519, ChaCha20-Poly1305, BLAKE2s) with no algorithm negotiation. This simplicity is a security feature — there is no version downgrade attack, no cipher suite negotiation complexity, no legacy algorithm support. But it also means that adding PQC cannot be done through configuration or algorithm selection. Any change to WireGuard’s cryptography requires modifying the protocol itself.

The WireGuard team has been reluctant to modify the core protocol for PQC, preferring to wait until PQC algorithms are thoroughly battle-tested. This created a gap: users who need post-quantum protection today have no in-protocol option.

5.2 Rosenpass — PQ Overlay Protocol

Rosenpass fills this gap with an elegant architectural approach: it runs as a separate protocol alongside WireGuard, performs a post-quantum key exchange, and feeds the resulting shared secret into WireGuard’s pre-shared key (PSK) mechanism — a feature that WireGuard already supports specifically for this purpose.

How Rosenpass works:

  1. Rosenpass runs as a userspace daemon alongside the WireGuard interface
  2. It performs a hybrid key exchange using ML-KEM-768 (and optionally Classic McEliece) between peers
  3. The resulting PQ shared secret is installed as WireGuard’s pre-shared key (PSK) via the wg set interface
  4. WireGuard mixes the PSK into its Noise IK handshake key derivation
  5. Rosenpass periodically re-keys (default: every 2 minutes) to maintain forward secrecy for the PQ component
  6. If Rosenpass fails or is disabled, WireGuard continues to function with classical-only security

The key insight is that WireGuard’s PSK slot was designed exactly for this use case. From the WireGuard whitepaper: “A pre-shared key […] for adding an additional layer of symmetric-key cryptographic protection, for post-quantum resistance.” Rosenpass exploits this foresight without touching WireGuard’s code.

Deployment example:

# /etc/rosenpass/rosenpass.toml
public_key = "/etc/rosenpass/rp-public"
secret_key = "/etc/rosenpass/rp-secret"
listen = ["0.0.0.0:9999"]

[[peers]]
public_key = "/etc/rosenpass/peer-public"
endpoint = "203.0.113.1:9999"
wg_interface = "wg0"
wg_peer = "WG_PEER_PUBLIC_KEY_BASE64"

# Generate Rosenpass key pair
rosenpass gen-keys --secret-key /etc/rosenpass/rp-secret \
                   --public-key /etc/rosenpass/rp-public

# Start Rosenpass daemon (will automatically set WireGuard PSK)
rosenpass exchange-config /etc/rosenpass/rosenpass.toml

Architecture:

┌─────────────────────────────────────────────┐
│                 Application                  │
├─────────────────────────────────────────────┤
│              WireGuard (wg0)                │
│   X25519 Noise IK + PSK from Rosenpass      │
├──────────────────┬──────────────────────────┤
│   WireGuard UDP  │   Rosenpass UDP          │
│   (port 51820)   │   (port 9999)            │
├──────────────────┴──────────────────────────┤
│              Network Interface               │
└─────────────────────────────────────────────┘

Limitations:

  • Adds a separate daemon to manage, monitor, and update
  • The Rosenpass protocol itself has not undergone the same level of audit as WireGuard core
  • PSK rotation requires brief coordination between Rosenpass and WireGuard, creating a small window where the PSK may be stale
  • Does not protect WireGuard’s static key authentication (only the session key exchange gets PQ protection)

Alternative approaches under discussion:

Beyond Rosenpass, the WireGuard community is exploring longer-term options:

  • WireGuard v2: A potential future protocol revision that natively incorporates ML-KEM into the Noise IK handshake pattern. No timeline or commitment exists, but the WireGuard team has acknowledged the eventual necessity.
  • Noise protocol PQC patterns: The Noise Protocol Framework (which WireGuard is built on) has proposed extensions for PQC KEMs. If adopted, WireGuard could incorporate PQC through a new Noise pattern rather than a from-scratch protocol redesign.
  • PresharedKey rotation scripts: For organizations that cannot deploy Rosenpass, manually rotating WireGuard PSKs derived from a PQC key agreement (performed out-of-band) provides a basic level of PQ protection, though without the automation and forward secrecy of Rosenpass.
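For the last option, the install step could be scripted along the lines below. This is a hypothetical sketch: the PSK bytes would come from whatever out-of-band PQC key agreement the organization runs, the peer public key is a placeholder, and the actual `wg set` invocation is left commented out.

```python
import base64
import os
import tempfile

# Hypothetical: 32 bytes of keying material from an out-of-band PQC key
# agreement (e.g., an ML-KEM exchange performed by separate tooling)
psk = os.urandom(32)
psk_b64 = base64.b64encode(psk).decode()

# WireGuard expects the preshared key base64-encoded, read from a file
with tempfile.NamedTemporaryFile("w", suffix=".psk", delete=False) as f:
    f.write(psk_b64 + "\n")
    psk_path = f.name

# `wg set <iface> peer <pubkey> preshared-key <file>` installs the PSK;
# the peer public key below is a placeholder
cmd = ["wg", "set", "wg0", "peer", "WG_PEER_PUBLIC_KEY_BASE64",
       "preshared-key", psk_path]
# subprocess.run(cmd, check=True)  # run on a host with WireGuard tools
os.unlink(psk_path)
```

Like Rosenpass, this mixes the PQ-derived secret into WireGuard’s handshake via the PSK slot — but rotation frequency, failure handling, and forward secrecy are now the operator’s responsibility.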

6. X.509 PKI

6.1 The Certificate Size Explosion Problem

X.509 certificates are the foundation of internet trust — used in TLS, code signing, email (S/MIME), document signing, and enterprise authentication. The PKI ecosystem was designed around RSA and ECDSA, where certificates are compact (800-1,200 bytes). Post-quantum signatures fundamentally change the arithmetic.

Certificate size comparison (end-entity certificate, typical fields):

Algorithm     | Public Key | Signature | Certificate | 3-Cert Chain
ECDSA P-256   | 64 B       | 64 B      | ~800 B      | ~2.5 KB
RSA-2048      | 256 B      | 256 B     | ~1,200 B    | ~3.6 KB
ML-DSA-44     | 1,312 B    | 2,420 B   | ~4,500 B    | ~13.5 KB
ML-DSA-65     | 1,952 B    | 3,309 B   | ~6,000 B    | ~18 KB
ML-DSA-87     | 2,592 B    | 4,627 B   | ~8,000 B    | ~24 KB
SLH-DSA-128s  | 32 B       | 7,856 B   | ~8,700 B    | ~26 KB
SLH-DSA-128f  | 32 B       | 17,088 B  | ~17,900 B   | ~54 KB

Note that each certificate in a chain contains both a public key (the subject’s) and a signature (the issuer’s). In a chain of n certificates, you have n public keys and n signatures. The chain verification cost scales linearly with both size and cryptographic operations.
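This scaling can be reproduced with a back-of-envelope model: each certificate is one public key, one signature, and a fixed amount of X.509 metadata. The ~700 B metadata figure below is an assumption chosen to approximate the table above, not a measured constant.

```python
OVERHEAD = 700  # bytes of X.509 metadata per certificate (assumed)

ALGS = {  # (public key bytes, signature bytes), from the table above
    "ECDSA P-256":  (64, 64),
    "ML-DSA-44":    (1312, 2420),
    "ML-DSA-65":    (1952, 3309),
    "SLH-DSA-128s": (32, 7856),
}

def chain_size(alg: str, depth: int = 3) -> int:
    """Approximate size of a depth-n chain where every certificate
    uses the same algorithm: n public keys + n signatures + metadata."""
    pk, sig = ALGS[alg]
    return depth * (pk + sig + OVERHEAD)

for alg in ALGS:
    print(f"{alg:13s} 3-cert chain ~ {chain_size(alg) / 1024:.1f} KB")
```

The model lands within a few percent of the table (e.g. ~17.5 KB for an ML-DSA-65 chain), confirming that the size explosion is driven almost entirely by the cryptographic material itself.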

6.2 Composite Certificates vs. Dual Certificates

Two competing approaches exist for transitioning X.509 to PQC:

Composite Certificates (draft-ounsworth-pq-composite-sigs, draft-ietf-lamps-pq-composite-sigs):

A single certificate contains both a classical and a PQC public key, signed by the CA using both a classical and a PQC signature algorithm. The verifier must validate both signatures for the certificate to be accepted.

Certificate:
    Subject Public Key:
        Classical: ECDSA P-256 public key (64 bytes)
        PQC: ML-DSA-65 public key (1,952 bytes)
    Issuer Signature:
        Classical: ECDSA P-256 signature (64 bytes)
        PQC: ML-DSA-65 signature (3,309 bytes)
    Total cryptographic payload: ~5,389 bytes (roughly 5,261 bytes more than a classical-only certificate)

Advantages:

  • Single certificate to manage per identity
  • Atomic trust decision (both algorithms verified together)
  • Existing PKI infrastructure (CT logs, OCSP, CRLs) works with one certificate per entity

Disadvantages:

  • Certificates are very large — effectively the sum of both algorithms
  • Both algorithms must be supported by the verifier; no graceful degradation
  • New OIDs required for every composite combination

Dual Certificates (also called “parallel certificates”):

Two separate certificates are issued for the same identity — one classical, one PQC. The server presents whichever certificate the client can verify, or both.

Advantages:

  • Each certificate is a standard X.509 certificate — no format changes
  • Graceful degradation: classical-only clients use the classical certificate
  • Allows independent rotation and revocation of classical and PQC certificates

Disadvantages:

  • Doubles the certificate management burden for every entity
  • CAs must issue, track, and revoke two certificates per entity
  • Certificate transparency logs must handle twice the volume
  • The binding between the two certificates (proving they belong to the same entity) requires additional mechanisms

6.3 CA/Browser Forum Requirements

The CA/Browser Forum (CA/B Forum) governs the policies for publicly-trusted TLS certificates. Their requirements directly affect PQC certificate deployment:

Current status (as of early 2026):

  • The CA/B Forum Baseline Requirements do not yet permit PQC-only certificates for publicly-trusted TLS
  • Composite certificate profiles are under discussion in the Server Certificate Working Group (SCWG)
  • The maximum certificate validity period (currently 398 days, moving toward 90 days) compounds the PQC transition challenge — shorter-lived certificates mean more frequent issuance but also faster algorithm agility
  • Certificate Transparency (CT) log operators are evaluating the storage and bandwidth implications of PQC-sized certificates
  • OCSP response size will increase if stapled OCSP responses contain PQC signatures

Expected timeline:

  • 2026-2027: Ballot proposals for composite or hybrid certificate profiles
  • 2027-2028: First publicly-trusted PQC certificates (likely composite ML-DSA + ECDSA)
  • 2028+: PQC-only certificate profiles (once classical algorithms are formally deprecated)

6.4 Certificate Chain Validation Performance

Beyond size, PQC signatures significantly impact chain validation time:

Algorithm      | Sign Time | Verify Time | 3-Cert Chain Verify
ECDSA P-256    | ~0.05 ms  | ~0.12 ms    | ~0.36 ms
RSA-2048       | ~1.5 ms   | ~0.03 ms    | ~0.09 ms
ML-DSA-65      | ~0.15 ms  | ~0.12 ms    | ~0.36 ms
SLH-DSA-128s   | ~2,500 ms | ~3.5 ms     | ~10.5 ms
SLH-DSA-128f   | ~45 ms    | ~1.8 ms     | ~5.4 ms

ML-DSA verification is competitive with ECDSA — the performance concern is primarily about size, not computation. SLH-DSA, however, is dramatically slower for signing (making it impractical for high-volume CA operations with the “s” parameter sets) and measurably slower for verification.

For environments where verification latency matters (e.g., TLS session establishment on embedded devices, OCSP stapling at scale), ML-DSA is the clear choice over SLH-DSA. SLH-DSA’s advantage is its conservative security foundation (hash functions only), making it a backup if lattice problems are broken — but its performance profile limits practical deployment. For deeper analysis of these trade-offs, see Hash-Based Signatures.


7. DNSSEC

7.1 The Signature Size Problem

DNSSEC is arguably the protocol most fundamentally challenged by post-quantum cryptography. The protocol signs DNS records with digital signatures that must be delivered alongside query responses — and DNS is built on UDP with severe size constraints.

The constraint chain:

  1. Original DNS: 512-byte UDP limit (RFC 1035)
  2. EDNS0 (RFC 6891): Extends the UDP payload to a negotiated maximum, typically 1,232-4,096 bytes
  3. Path MTU: Practical maximum of ~1,232 bytes to avoid IP fragmentation (RFC 8900 recommendation)
  4. TCP fallback: Available but adds latency (connection setup) and server load; many resolvers and authoritative servers deprioritize TCP

A typical DNSSEC response for a signed A record contains:

  • The A record itself (~16 bytes of RDATA)
  • An RRSIG record containing the signature (the signature itself plus ~40 bytes of metadata)
  • Potentially NSEC/NSEC3 records for authenticated denial of existence

With ECDSA P-256 (algorithm 13), the RRSIG signature is 64 bytes, and a signed response fits comfortably in a single UDP packet. With RSA-2048 (algorithm 8), the signature is 256 bytes — larger but still manageable.

PQC signature sizes in DNSSEC context:

Algorithm      | Signature Size | Typical Signed Response | Fits in 1,232 B UDP?
ECDSA P-256    | 64 B           | ~250 B                  | Yes
RSA-2048       | 256 B          | ~450 B                  | Yes
ML-DSA-44      | 2,420 B        | ~2,600 B                | No
ML-DSA-65      | 3,309 B        | ~3,500 B                | No
SLH-DSA-128s   | 7,856 B        | ~8,050 B                | No
SLH-DSA-128f   | 17,088 B       | ~17,300 B               | No

Every PQC signature algorithm exceeds the recommended UDP maximum. This means PQC DNSSEC would require either:

  • Mandatory TCP fallback for most signed responses — fundamentally changing DNS operational characteristics, or
  • IP fragmentation for UDP responses — unreliable, frequently blocked by middleboxes, and a source of cache poisoning vulnerabilities

Neither option is acceptable for the high-throughput, low-latency requirements of DNS resolution at scale.
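The fit check is simple arithmetic. The sketch below uses the signature sizes from the table; the ~180 B figure for the non-signature portion of a minimal signed response (header, question, A record, RRSIG metadata) is an assumption chosen to match the table's totals.

```python
UDP_BUDGET = 1232  # bytes; the widely recommended EDNS0 maximum
BASE = 180         # non-signature bytes in a minimal signed response (assumed)

SIG_SIZES = {
    "ECDSA P-256": 64,
    "RSA-2048": 256,
    "ML-DSA-44": 2420,
    "ML-DSA-65": 3309,
    "SLH-DSA-128s": 7856,
}

for alg, sig in SIG_SIZES.items():
    total = BASE + sig
    verdict = "fits in UDP" if total <= UDP_BUDGET else "needs TCP or fragmentation"
    print(f"{alg:13s} ~{total:,} B -> {verdict}")
```

Even the smallest PQC signature, ML-DSA-44, produces a response more than double the budget, so no amount of metadata trimming alone closes the gap.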

7.2 SLH-DSA and ML-DSA Feasibility Analysis

ML-DSA-44 is the smallest viable PQC signature for DNSSEC, but its 2,420-byte signatures still double the typical DNS response size beyond the recommended UDP maximum. Analysis:

  • Zone signing: ML-DSA-44 signing is fast (~0.1 ms), making zone signing feasible even for large zones
  • Response size: A signed A record response would be ~2,600 bytes — requiring either TCP or large EDNS0 with fragmentation risk
  • NSEC3 responses: Authenticated denial-of-existence responses include multiple RRSIG records, compounding the size problem
  • Key rollovers: ZSK rollovers with ML-DSA would temporarily require both old and new DNSKEY records in the zone apex, further inflating responses

SLH-DSA is essentially infeasible for DNSSEC:

  • Even the smallest variant (SLH-DSA-SHA2-128s at 7,856 bytes) produces responses that are 6x the recommended UDP maximum
  • Signing time for “s” variants (~2,500 ms) makes dynamic signing (used by many authoritative servers) impractical
  • Zone transfer (AXFR/IXFR) bandwidth would increase by orders of magnitude

Potential mitigations being researched:

  • Signature aggregation: Using a single signature to cover multiple DNS records, reducing the per-record overhead
  • Compact DNSSEC: Redesigning the DNSSEC wire format to reduce metadata overhead
  • Hybrid approaches: ECDSA signatures for real-time validation with PQC signatures available via TCP for high-security resolvers
  • Key-based validation: Similar to Merkle Tree Certificates (see Section 1.6), using inclusion proofs instead of per-record signatures

7.3 Zone Transfer Implications

Zone transfers (AXFR for full, IXFR for incremental) are used to replicate DNS zones between primary and secondary authoritative servers. PQC signatures dramatically increase zone sizes:

Zone Scenario          | ECDSA P-256 | ML-DSA-44 | Increase
1,000 RRsets           | ~160 KB     | ~2.8 MB   | 17.5x
100,000 RRsets         | ~16 MB      | ~280 MB   | 17.5x
Large TLD (10M RRsets) | ~1.6 GB     | ~28 GB    | 17.5x

For large zone operators (TLDs, large hosting providers), the storage, bandwidth, and transfer time implications are severe. Zone transfer intervals may need to be extended, secondary server infrastructure may need significant capacity upgrades, and the provisioning pipeline for DNS hosting platforms would need to handle order-of-magnitude increases in data volume.
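The table's scaling follows directly from the per-RRset footprint (record plus RRSIG plus metadata). The per-RRset byte counts below are assumptions consistent with the table, not measurements from a specific implementation.

```python
# Approximate per-RRset footprint in a signed zone (assumed, matching
# the table above): record + RRSIG + metadata.
PER_RRSET = {"ECDSA P-256": 160, "ML-DSA-44": 2800}  # bytes

def zone_size_bytes(alg: str, rrsets: int) -> int:
    """Signed zone size is linear in the number of RRsets."""
    return PER_RRSET[alg] * rrsets

for rrsets in (1_000, 100_000, 10_000_000):
    ecdsa = zone_size_bytes("ECDSA P-256", rrsets)
    mldsa = zone_size_bytes("ML-DSA-44", rrsets)
    print(f"{rrsets:>10,} RRsets: {ecdsa:>13,} B -> {mldsa:>14,} B ({mldsa / ecdsa:.1f}x)")
```

Because the relationship is linear, the 17.5x factor is constant across zone sizes; only the absolute numbers, and hence the operational pain, grow.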

The DNSSEC community’s consensus is that a purely “swap the algorithm” approach will not work for DNS. Fundamental protocol-level changes will be needed, and these changes will likely take 5-10 years to design, standardize, implement, and deploy — making DNSSEC the longest pole in the PQC transition tent.

7.4 Operational Considerations for DNS Operators

Even before PQC algorithms are standardized for DNSSEC, DNS operators should begin preparing:

Resolver infrastructure: Recursive resolvers will need to handle significantly larger responses. Buffer sizes, cache memory allocation, and bandwidth provisioning all need to be re-evaluated. A resolver handling 100,000 queries per second with PQC DNSSEC would see response bandwidth increase from approximately 25 MB/s (ECDSA) to approximately 260 MB/s (ML-DSA-44) — a 10x jump.
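The bandwidth figures above come from a straightforward rate-times-size calculation, using the typical response sizes from the Section 7.1 table:

```python
QPS = 100_000  # queries per second handled by the resolver
RESPONSE = {"ECDSA P-256": 250, "ML-DSA-44": 2600}  # bytes, from the table above

for alg, size in RESPONSE.items():
    mb_per_s = QPS * size / 1_000_000
    print(f"{alg}: {mb_per_s:.0f} MB/s at {QPS:,} qps")
```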

TCP readiness: Many DNS deployments treat TCP as a fallback mechanism that handles a small fraction of total traffic (typically less than 5%). With PQC signatures, TCP may become the primary transport for signed responses. DNS infrastructure — authoritative servers, resolvers, load balancers, and monitoring systems — must be tested and provisioned for a dramatic increase in TCP query volume.

DNSKEY record sizes: The zone apex DNSKEY RRset contains both the ZSK and KSK public keys plus their signatures. With ML-DSA-44, the DNSKEY RRset alone would be approximately 7-8 KB (two public keys at 1,312 bytes each plus signatures). During key rollovers, when old and new keys coexist, this can double. Priming queries for root zone trust anchors would require TCP unconditionally.

Monitoring and alerting: DNS monitoring tools that track response sizes, TCP fallback rates, and fragmentation rates should be instrumented now. Baseline measurements with current ECDSA signatures will be invaluable for comparison when PQC algorithms are eventually deployed.

For a broader view of cryptographic algorithm families and their properties, see Other PQC Families.


8. Protocol Readiness Summary

The following table summarizes the PQC readiness status across all major protocols:

Protocol    | PQC KEM Status                          | PQC Sig Status                             | Key Standards                           | Estimated Timeline
TLS 1.3     | Deployed (X25519MLKEM768)               | Research/Draft (Merkle Tree Certs)         | RFC 9999 (draft), draft-ietf-tls-mlkem  | KEM: Now; Sigs: 2028+
SSH         | Default (sntrup761x25519, OpenSSH 9.0+) | Not yet standardized                       | openssh-portable                        | KEM: Now; Sigs: 2027+
IPsec/IKEv2 | Standardized (RFC 9370)                 | Uses existing IKEv2 auth (no PQC sigs yet) | RFC 9370                                | KEM: Now; Sigs: 2027+
Signal      | Deployed (PQXDH)                        | N/A (symmetric auth after KE)              | Signal PQXDH Spec                       | Deployed
WireGuard   | Via Rosenpass (PSK injection)           | N/A (static key auth)                      | Rosenpass protocol                      | Available now
X.509 PKI   | N/A (PKI is signatures)                 | Draft (composite certs)                    | draft-ietf-lamps-pq-composite-sigs      | 2027-2028
DNSSEC      | N/A (DNSSEC is signatures)              | Research (fundamental challenges)          | None yet                                | 2030+
QUIC        | Via TLS 1.3 (inherits)                  | Via TLS 1.3 (inherits)                     | Inherits TLS standards                  | Inherits TLS timeline
S/MIME      | Standardized (RFC 9629)                 | Draft (ML-DSA in CMS)                      | RFC 9629, draft-ietf-lamps-cms-ml-dsa   | 2027+
The same data can be plotted as an urgency-vs-readiness priority matrix (Mermaid quadrantChart source):

quadrantChart
    title Protocol PQC Migration Priority Matrix
    x-axis Low Urgency --> High Urgency
    y-axis Low Readiness --> High Readiness

    SSH: [0.85, 0.92]
    TLS KEM: [0.95, 0.88]
    Signal: [0.70, 0.90]
    IPsec: [0.75, 0.70]
    WireGuard: [0.50, 0.55]
    TLS Sigs: [0.80, 0.30]
    X.509 PKI: [0.70, 0.25]
    DNSSEC: [0.40, 0.10]
    S/MIME: [0.30, 0.35]

9. Cross-Protocol Themes and Challenges

9.1 Lessons from Early Deployment

The large-scale deployments of PQC in TLS (Chrome, Cloudflare) and SSH (OpenSSH) have surfaced several operational lessons that are relevant across all protocols:

Performance is not the bottleneck — compatibility is. ML-KEM-768 operations are fast enough that cryptographic computation adds negligible latency. The real deployment blockers are network-level: middlebox incompatibilities, packet size assumptions in monitoring tools, and firewall rules that whitelist based on expected message sizes.

Gradual rollout with telemetry is essential. Chrome deployed X25519Kyber768 (later X25519MLKEM768) to 1% of stable traffic, then 10%, then 50%, monitoring for connection failures at each stage. This ramp-up pattern — deploy, measure, fix, expand — should be the template for every protocol transition.

Library support drives adoption. Protocols where the dominant library (OpenSSL, BoringSSL, libssh) ships PQC by default see rapid adoption. Protocols that require users to compile custom libraries or patch source code see near-zero adoption outside research environments. Library maintainers are the most important stakeholders in the PQC transition.

Configuration complexity is the enemy. OpenSSH made sntrup761x25519 the default — no configuration required. TLS requires the client and server to independently opt in to hybrid key shares. IPsec requires explicit proposal configuration with new transform type keywords. The less configuration required, the faster adoption proceeds.

9.2 The KEM vs. Signature Asymmetry

A clear pattern emerges across all protocols: PQC key exchange/encapsulation is deployable today, while PQC signatures remain a hard problem. This is because:

  1. KEMs are session-ephemeral: The key exchange data is not stored, cached, or replayed. Larger key shares are a one-time cost per connection.
  2. Signatures are stored and replicated: Certificates, DNSSEC records, and signed artifacts persist in caches, databases, and log systems. Size increases have multiplicative effects across the infrastructure.
  3. KEM sizes are manageable: ML-KEM-768 adds ~2.3 KB to a handshake. ML-DSA-65 adds ~5.3 KB per signature — and a certificate chain may contain 3-6 signatures.
  4. Hybrid KEMs are straightforward: Concatenate two key shares, concatenate two shared secrets, derive the key. Hybrid signatures are more complex (composite OIDs, dual verification, failure semantics).
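Point 4's recipe can be sketched in a few lines. The combiner below is modeled loosely on the concatenation approach used by hybrid TLS key shares; the context label, concatenation order, and single-block expand are illustrative assumptions, since real protocols fix these details in their specifications.

```python
import hashlib
import hmac

def hybrid_shared_secret(ecdh_ss: bytes, mlkem_ss: bytes, transcript: bytes) -> bytes:
    """Combine a classical and a PQC shared secret by concatenation,
    then derive the session key with an HKDF-style extract/expand
    (SHA-256). Security holds as long as EITHER input stays secret."""
    ikm = ecdh_ss + mlkem_ss                                  # concatenated secrets
    prk = hmac.new(b"", ikm, hashlib.sha256).digest()         # extract
    return hmac.new(prk, b"hybrid-kem" + transcript + b"\x01",
                    hashlib.sha256).digest()                  # one-block expand

# An attacker must recover BOTH inputs to reproduce the output:
k = hybrid_shared_secret(b"\x11" * 32, b"\x22" * 32, b"handshake-transcript")
assert k != hybrid_shared_secret(b"\x00" * 32, b"\x22" * 32, b"handshake-transcript")
```

Contrast this with hybrid signatures, where there is no equivalent one-line combiner: the verifier must make an explicit policy decision about what happens when one of the two signatures fails.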

9.3 The Middlebox Problem

Every protocol that transits the public internet encounters middleboxes — firewalls, load balancers, DPI engines, proxies, and NAT devices — that make assumptions about protocol message sizes. PQC consistently breaks these assumptions:

  • TLS: ClientHello exceeds expected single-record size
  • DNS: Responses exceed UDP packet size expectations
  • IPsec: IKE_INTERMEDIATE exchanges are unfamiliar to some firewalls
  • SSH: Larger key exchange init messages may trigger IDS false positives

Testing PQC protocol deployments against real-world middlebox populations is essential and cannot be done in laboratory environments alone. Google, Cloudflare, and other large-scale operators have been conducting these experiments, and their findings consistently reveal a long tail of incompatible network equipment.

9.4 Algorithm Agility vs. Simplicity

PQC integration forces a tension between two design principles:

  • Algorithm agility: The ability to swap cryptographic algorithms without changing the protocol. TLS and IPsec are designed for agility; WireGuard deliberately rejects it.
  • Simplicity: Fewer options mean fewer bugs, smaller attack surfaces, and easier analysis. Every negotiable algorithm is a potential downgrade target.

The hybrid transition period compounds this tension: protocols must now support classical-only, hybrid, and eventually PQC-only configurations, multiplying the complexity. The lesson from TLS’s history with cipher suite proliferation (hundreds of combinations, many insecure) should inform the PQC transition — agility is necessary, but must be constrained.

9.5 Timeline Implications for Practitioners

For security professionals planning PQC migration:

  1. Immediate (now): Enable hybrid PQC key exchange in TLS, SSH, and IPsec. These are deployed, standardized (or near-standardized), and impose minimal operational cost.
  2. Near-term (2026-2028): Prepare PKI infrastructure for PQC certificates. Test composite certificate issuance, evaluate CA vendor readiness, size certificate storage and bandwidth accordingly.
  3. Medium-term (2028-2030): Deploy PQC authentication in TLS and SSH. Evaluate Merkle Tree Certificates or equivalent solutions for the certificate size problem.
  4. Long-term (2030+): Address DNSSEC. This will require protocol-level redesign and cannot be solved by algorithm substitution alone.

For detailed guidance on planning the organizational transition, see NIST PQC Standardization. For analysis of the mathematical hardness assumptions underpinning these protocol changes, see Lattice-Based Cryptography and Code-Based Cryptography.


10. Key Takeaways

  1. PQC key exchange is a solved deployment problem — TLS, SSH, IPsec, Signal, and WireGuard all have production-ready hybrid KEM support today. If you are not using it, start now.

  2. PQC signatures are the hard frontier — certificate sizes, DNSSEC response sizes, and chain validation overhead remain unsolved at scale. Merkle Tree Certificates and signature aggregation are promising but not yet standardized.

  3. Hybrid is the correct transition strategy — every production deployment uses hybrid (classical + PQC) constructions. Pure PQC is inappropriate until algorithms have accumulated decades of cryptanalytic confidence.

  4. Protocol-specific constraints dominate — a single PQC algorithm cannot be naively substituted into every protocol. Each protocol’s message size limits, round-trip assumptions, and ecosystem constraints require tailored solutions.

  5. DNSSEC is the most challenging protocol — the fundamental mismatch between PQC signature sizes and DNS’s UDP-centric architecture means that DNSSEC will require protocol-level redesign, not just algorithm substitution.

  6. The middlebox problem is real — laboratory testing is insufficient. Large-scale experiments on the public internet consistently reveal compatibility issues that no amount of standards compliance can predict.

  7. Start with key exchange, then authentication — this is not just a practical prioritization but reflects the threat model: harvest-now-decrypt-later attacks (defeated by PQC KEM) are happening today, while real-time authentication forgery (requiring PQC signatures to prevent) requires a functioning quantum computer at attack time.