
Harvest Now, Decrypt Later (HNDL)


Overview

Of all the threats posed by quantum computing to cryptography, Harvest Now, Decrypt Later (HNDL) is the only one that is already active. Every other quantum-cryptographic risk — breaking RSA key exchange, forging digital signatures, compromising blockchain consensus — requires a cryptographically relevant quantum computer (CRQC) that does not yet exist. HNDL does not. It requires only storage capacity and patience, both of which adversaries have in abundance.

The premise is straightforward: an adversary intercepts and stores encrypted network traffic today, with the intention of decrypting it once a sufficiently powerful quantum computer becomes available. The encrypted data sits in cold storage — inert, unreadable, but perfectly preserved — until the day Shor’s algorithm can be run against the key exchange that protected it. On that day, every intercepted session becomes plaintext.

This is not a theoretical concern. Intelligence agencies have a documented history of bulk collection programs that capture encrypted traffic without the immediate ability to decrypt all of it. The economic calculus is simple: storage is cheap and getting cheaper, the data never expires, and the potential intelligence value of retroactively decrypting years of diplomatic cables, corporate communications, or military logistics is enormous. HNDL is not a future risk. It is a present-tense collection activity whose impact is deferred.

This page examines the HNDL threat model in detail: who is collecting, what they are collecting, how to assess which data is at risk, the mathematical framework for determining when migration becomes urgent, and the concrete organizational steps required to neutralize the threat before a CRQC materializes. For background on the algorithms that make HNDL viable, see Shor’s Algorithm & Grover’s Algorithm. For the migration strategies that address it, see Migration Strategies & Crypto Agility and Hybrid Cryptography Approaches.


1. The HNDL Threat Model

1.1 What HNDL Is

Harvest Now, Decrypt Later is a deferred cryptanalysis attack. It exploits a fundamental asymmetry in modern public-key cryptography: the confidentiality of encrypted data depends on the key exchange remaining secure not just at the time of transmission, but for the entire lifetime of the data’s sensitivity. If the key exchange is broken at any future point, the data is compromised retroactively.

In a standard TLS 1.3 session, the client and server perform an ephemeral key exchange — typically X25519 or ECDHE with P-256 — to establish a shared secret. This shared secret is used to derive symmetric session keys (AES-256-GCM or ChaCha20-Poly1305), which encrypt the actual application data. The symmetric encryption is quantum-resistant (AES-256 retains approximately 128-bit security under Grover’s algorithm). But the key exchange that produced the symmetric keys is not. An adversary who records the full TLS handshake and the encrypted payload can, once they have a CRQC, run Shor’s algorithm against the key exchange parameters to recover the shared secret, derive the session keys, and decrypt the entire session.

The critical point: the symmetric cipher’s quantum resistance is irrelevant if the key exchange is compromised. AES-256 is not the weak link. The ephemeral Diffie-Hellman or ECDH key exchange is. This is why HNDL specifically targets data in transit — data encrypted at rest using symmetric keys derived from passwords or hardware tokens is not vulnerable to this particular attack vector (though it may have other quantum exposure).
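
To make this concrete, the sketch below shows why the key exchange is the only hard step. It is a simplification, not the real TLS 1.3 key schedule: the single-HKDF derivation and the function names are illustrative assumptions, and it relies on the Python cryptography package. Once Shor’s algorithm yields the shared secret from a recorded handshake, everything that follows is routine classical computation.

```python
# Sketch: what an adversary does AFTER Shor's algorithm recovers the
# ECDH shared secret from a recorded handshake. This is a simplified
# stand-in for the TLS 1.3 key schedule, not a faithful implementation.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_recorded_session(shared_secret: bytes,
                             transcript: bytes,
                             nonce: bytes,
                             ciphertext: bytes) -> bytes:
    """Derive the session key from the recovered key-exchange secret,
    then decrypt the recorded payload. AES-256-GCM is not attacked at
    all -- its key is simply re-derived, exactly as the endpoints did."""
    session_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,                 # 256-bit AES key
        salt=None,
        info=transcript,           # stand-in for transcript binding
    ).derive(shared_secret)
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)
```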

1.2 The Attack Lifecycle

HNDL follows a five-phase lifecycle:

graph LR
    A["Phase 1<br/>Collection"] --> B["Phase 2<br/>Storage"]
    B --> C["Phase 3<br/>Wait"]
    C --> D["Phase 4<br/>Decryption"]
    D --> E["Phase 5<br/>Exploitation"]

    style A fill:#e74c3c,color:#fff
    style B fill:#e67e22,color:#fff
    style C fill:#f1c40f,color:#000
    style D fill:#3498db,color:#fff
    style E fill:#8e44ad,color:#fff

Phase 1 — Collection. The adversary positions themselves to intercept encrypted traffic. This may involve tapping fiber-optic cables, compromising network infrastructure, exploiting BGP hijacking to redirect traffic, or operating malicious nodes within anonymity networks. The collection is indiscriminate — the adversary captures everything, because they cannot read the contents to determine what is valuable. Selectivity comes later.

Phase 2 — Storage. Captured traffic is written to long-term storage. Given the declining cost of storage media, retaining petabytes of encrypted traffic for decades is economically trivial for a nation-state actor. The data is indexed by source, destination, timestamp, and any available metadata (which is often unencrypted — IP addresses, packet sizes, timing patterns, SNI fields in TLS). This metadata enables selective decryption later, so the adversary does not need to decrypt everything, only the sessions of interest.

Phase 3 — Wait. The adversary waits for a CRQC to become available. This phase could last years or decades. The cost during this phase is only storage maintenance — which continues to become cheaper over time.

Phase 4 — Decryption. Once a CRQC is operational, the adversary selectively decrypts sessions of interest. Using the metadata index from Phase 2, they identify high-value targets — sessions between specific IP addresses, connections to specific government or corporate domains, communications from specific geographic regions. For each target session, they extract the key exchange parameters from the recorded TLS handshake, run Shor’s algorithm to recover the shared secret, derive the session keys, and decrypt the payload. This process is computationally expensive per-session but massively parallelizable and can be applied to specific sessions of interest rather than the entire corpus.

Phase 5 — Exploitation. The decrypted data is exploited for intelligence, economic advantage, blackmail, or strategic positioning. The value depends on the data type and its remaining sensitivity at the time of decryption. Government secrets classified for 50 years retain full value. A session token that expired in 2024 does not.

1.3 Why HNDL Is the Most Immediate Quantum Threat

Every other quantum-cryptographic threat requires the adversary to have a CRQC at the time of the attack. Forging a digital signature requires a CRQC now. Breaking a live TLS session requires a CRQC now. Defeating a blockchain’s hash-based proof-of-work requires a CRQC now.

HNDL is different. The adversary acts now — collecting and storing data — and only needs the CRQC later. The implication is stark: every day that passes with classical-only key exchange in production, adversaries accumulate more data that will be retroactively decryptable. The window of vulnerability is not the future date when a CRQC is built. The window is open right now and has been for years. Every TLS session using ECDH or X25519 without a post-quantum component that has been observed by an adversary is a session whose confidentiality has an expiration date.

This is why organizations like CISA, NSA, ANSSI, BSI, and the UK NCSC have issued guidance that HNDL is the primary motivator for early adoption of hybrid cryptography. You do not need to wait for a CRQC to be affected by the quantum threat. You are affected today.


2. Nation-State Collection Programs

2.1 Known and Suspected Programs

The existence of bulk data collection programs is well-documented, primarily through the Snowden disclosures (2013) and subsequent reporting. While these revelations focused on capabilities as they existed over a decade ago, the infrastructure and institutional incentives have only grown since.

UPSTREAM and PRISM (NSA, United States). The UPSTREAM program involved direct tapping of fiber-optic cables carrying internet backbone traffic, collecting data as it transited key network junctions. PRISM involved obtaining data directly from technology companies under FISA court orders. Both programs collected encrypted data that could not be immediately decrypted, with the explicit understanding that future capabilities might enable access. Internal NSA documents referenced the need to “collect it all” — the philosophy that storing data now, regardless of immediate decryptability, is an investment in future intelligence capability.

TEMPORA (GCHQ, United Kingdom). Britain’s GCHQ operated TEMPORA, which tapped undersea fiber-optic cables landing in the UK, buffering full traffic content for roughly three days and metadata for up to 30 days for analysis. The sheer volume — reported at 21 petabytes per day in 2012 — far exceeded the capacity for real-time analysis, confirming the “store first, analyze later” model.

Other programs. Other nations with sophisticated signals intelligence capabilities — including China (technical reconnaissance bureaus under the PLA Strategic Support Force), Russia (SORM system requiring ISP-level interception capabilities), France (DGSE’s access to submarine cable landing stations), and Israel (Unit 8200) — are assessed by Western intelligence agencies to maintain comparable or more targeted collection programs. The details are less public, but the incentive structure is identical.

2.2 Undersea Cable Tapping

Approximately 95% of intercontinental internet traffic traverses undersea fiber-optic cables. There are roughly 600 active submarine cable systems globally, and the number of cable landing stations — the physical points where cables come ashore — is relatively small. These landing stations represent natural chokepoints for signals intelligence collection.

Tapping undersea cables is technically demanding but well within nation-state capability. Methods include:

  • Landing station access: Legal or covert access to the facilities where cables terminate. This is the most common approach, as cable landing stations are located in national territory and subject to domestic law (or domestic intelligence mandates).
  • Submarine tapping: Physically accessing cables on the ocean floor using specialized submarines or remotely operated vehicles. The USS Jimmy Carter (SSN-23) was reportedly modified specifically for undersea cable operations.
  • Optical splitters: Modern fiber-optic taps use optical splitters that divert a fraction of the light signal without interrupting the cable’s operation. This can be installed at landing stations or at any accessible point along the cable route.

The relevance to HNDL is direct: undersea cable taps provide access to enormous volumes of encrypted international traffic — exactly the type of data (diplomatic communications, international financial transactions, cross-border corporate traffic) that retains high intelligence value over long time horizons.

2.3 The Economics of Storage

The economic argument for HNDL has strengthened dramatically over time. Storage costs have followed a relentless downward trajectory:

| Year | Cost per Terabyte (HDD) | Cost per Petabyte |
|------|-------------------------|-------------------|
| 2010 | ~$50 | ~$50,000 |
| 2015 | ~$30 | ~$30,000 |
| 2020 | ~$15 | ~$15,000 |
| 2025 | ~$10 | ~$10,000 |

For a nation-state intelligence agency with a multi-billion-dollar budget, storing petabytes of intercepted traffic is a rounding error. The NSA’s Utah Data Center (completed 2014) was estimated to have an initial storage capacity measured in exabytes. Even at 2014 prices, the capital cost of storage was a small fraction of the facility’s total $1.5 billion budget. The ongoing cost of powering and cooling that storage is the larger expense — but it is still trivial relative to the intelligence value of the stored data.

The economics only improve with time. An adversary who begins collecting today faces lower storage costs per petabyte than at any previous point. And the data they are collecting today — protected by classical key exchange — will be decryptable by a CRQC without any need for the algorithms to change. The cost-benefit analysis is overwhelmingly favorable for collection.
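
A quick sanity check on these figures: the sketch below prices a fixed archive at each era’s cost per terabyte, using the (illustrative) values from the table above.

```python
# Capital cost of a fixed archive, priced with the table's cost curve.
COST_PER_TB_USD = {2010: 50, 2015: 30, 2020: 15, 2025: 10}

def archive_capex(petabytes: float, year: int) -> float:
    """Purchase cost of enough HDD capacity for the given archive size."""
    return petabytes * 1_000 * COST_PER_TB_USD[year]

# A 100 PB archive -- months of selectively captured backbone traffic:
for year in sorted(COST_PER_TB_USD):
    print(f"{year}: ${archive_capex(100, year):>12,.0f}")
# 2010: $5,000,000 ... 2025: $1,000,000 -- a rounding error against a
# multi-billion-dollar intelligence budget, and still falling.
```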

2.4 The “Collect It All” Mentality

The strategic logic of bulk collection programs is that the cost of not collecting exceeds the cost of collecting. Storage is cheap. Analysis capability improves over time. And data that was not collected cannot be retroactively obtained. This creates a systematic bias toward over-collection: when in doubt, collect.

For HNDL specifically, this mentality means that adversaries are likely collecting more encrypted traffic than they could ever hope to selectively decrypt, operating on the assumption that future quantum capability will enable targeted decryption of whatever specific sessions turn out to be interesting. They do not need to know today which encrypted sessions contain valuable intelligence. They only need to know that some of them do — and that storing all of them costs less than the risk of missing the critical ones.


3. Data Sensitivity and Time-Value Analysis

Not all data is equally vulnerable to HNDL. The key variable is the sensitivity duration — how long the data retains its value to the adversary relative to the expected timeline for CRQC availability. Data that becomes worthless within months has negligible HNDL risk. Data that remains sensitive for decades is critically exposed.

3.1 Data Sensitivity vs. Time-Value Matrix

| Data Type | Sensitivity Duration | HNDL Risk Level | Migration Priority |
|-----------|----------------------|-----------------|--------------------|
| Government classified (TOP SECRET/SCI) | 50–75 years | Critical | Immediate |
| National infrastructure SCADA/ICS configs | 20–40 years | Critical | Immediate |
| Health records (patient data) | Patient lifetime (~60+ years) | Critical | Immediate |
| Intelligence sources and methods | Indefinite | Critical | Immediate |
| Long-term strategic plans (military/corporate) | 10–25 years | High | Near-term |
| Trade secrets and proprietary IP | Varies (5–30 years) | High | Near-term |
| Financial records and transactions | 7–15 years | High | Near-term |
| Legal communications (privilege) | Duration of matter + appeals | High | Near-term |
| Personal communications (private) | Lifetime of individuals | Medium | Planned |
| Corporate email (general) | 3–7 years | Medium | Planned |
| Authentication credentials | Until rotation (hours–months) | Low | Standard cycle |
| Session tokens | Hours | Negligible | Standard cycle |
| Ephemeral messaging (forward secrecy) | Already expired if PFS used | Negligible | Standard cycle |

The table reveals a clear pattern: data whose sensitivity outlasts the expected CRQC timeline is at critical HNDL risk. If you expect a CRQC within 15 years, any data that remains sensitive beyond 2041 is already at risk today. Government secrets classified for 50 years are already living on borrowed time.

3.2 Sector-Specific Analysis

Government and defense. The sensitivity duration of classified information is defined by regulation, not by business judgment. TOP SECRET material in the United States is classified for up to 25 years by default, with extensions to 50 or 75 years for sensitive sources and methods. NATO and Five Eyes classified material follows similar timelines. Any encrypted communication containing classified content that is intercepted today will be decryptable long before it is scheduled for declassification. This makes government and defense the single highest-priority sector for HNDL mitigation. The NSA’s CNSA 2.0 mandate reflects this reality.

Healthcare. HIPAA requires protection of patient health information (PHI) for the lifetime of the patient plus a retention period. A health record created for a 25-year-old patient today must remain confidential until at least 2086 — sixty years from now. Electronic health records transmitted between hospitals, clinics, laboratories, and insurance providers via HL7 FHIR interfaces or health information exchanges (HIEs) are typically protected by TLS with classical key exchange. Every one of these transmissions is an HNDL target. The healthcare sector faces additional challenges: EHR systems are vendor-controlled, update cycles are slow, and regulatory certification (HITRUST, SOC 2 for healthcare) adds latency to cryptographic changes.

Financial services. Financial data has a complex sensitivity profile. Individual transaction data (credit card purchases, wire transfers) has relatively short sensitivity — typically 3–7 years for fraud and audit purposes. But correspondent banking records, SWIFT messages, and trade finance documentation can retain sensitivity for 15–25 years due to sanctions enforcement, anti-money-laundering (AML) investigations, and litigation discovery. Proprietary trading algorithms and risk models, if transmitted over networks, can retain competitive value for decades.

Legal sector. Attorney-client privileged communications are protected for the duration of the legal matter plus any appeal period, which can span decades for complex litigation, patent disputes, or regulatory investigations. Law firms transmitting privileged communications via email or document sharing platforms are creating HNDL targets with indefinite sensitivity duration. The legal profession’s relatively slow adoption of encryption technologies exacerbates this risk.

3.3 The Metadata Problem

Even when payload data has limited time-value, metadata can retain sensitivity far longer. A decrypted email from 2024 might contain stale information. But the fact that two specific individuals communicated, at a specific time, from specific locations, can remain operationally sensitive indefinitely — particularly for intelligence sources, whistleblowers, journalists, dissidents, and covert operatives.

TLS metadata is partially visible without decryption (IP addresses, timing, sizes), but full decryption reveals additional metadata within the encrypted payload: email headers, API endpoints, authentication identities, and internal routing information. This extended metadata may have a sensitivity duration far exceeding the payload content itself.

3.4 The Aggregation Effect

Individual data points may have limited value, but aggregated data from multiple decrypted sources can produce intelligence far exceeding the sum of its parts. Consider an adversary who retroactively decrypts:

  • A defense contractor’s email (revealing project names and personnel)
  • A hotel booking system’s traffic (revealing travel patterns of those personnel)
  • A government employee’s personal email (revealing family connections and financial pressures)
  • A medical provider’s records (revealing health conditions that create leverage)

No single source is a complete intelligence picture. But correlated across sources, they produce a comprehensive profile suitable for recruitment, coercion, or operational planning. This aggregation effect means that even data with moderate individual sensitivity can become critically sensitive when combined with other HNDL-collected sources. Organizations that dismiss their data as “not interesting enough” fail to account for its value as one piece of a larger puzzle.


4. Mosca’s Theorem: A Framework for Urgency

4.1 The Theorem

Michele Mosca formalized the HNDL urgency question in what is now known as Mosca’s theorem (sometimes called the “Mosca inequality”). It provides a simple framework for determining when an organization must begin its post-quantum migration:

If X + Y > Z, then you must act now.

Where:

  • X = the number of years the data must remain secure (shelf life)
  • Y = the number of years required to migrate systems to PQC (migration time)
  • Z = the number of years until a CRQC is available (threat timeline)

The logic is straightforward. If the sum of your data’s required security lifetime and your migration time exceeds the time until a CRQC exists, you are already too late. The data you are generating today will be decryptable before you can protect it, and the window for starting migration has already closed.
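
The inequality reduces to a one-line check. The sketch below encodes it in Python; the example values mirror the worked examples in Sections 4.2–4.4.

```python
# Mosca's inequality: act now if X + Y > Z (all values in years).
def must_act_now(x_shelf_life: float, y_migration: float, z_crqc: float) -> bool:
    """True if data generated today outlives its cryptographic protection."""
    return x_shelf_life + y_migration > z_crqc

print(must_act_now(50, 10, 15))  # government classified -> True
print(must_act_now(30, 5, 15))   # healthcare provider   -> True
print(must_act_now(2, 3, 15))    # e-commerce platform   -> False
```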

gantt
    title Mosca's Theorem — The Critical Question
    dateFormat  YYYY
    axisFormat  %Y
    section Time Horizons
    Data must remain secure (X)       :x, 2026, 2076
    Migration time needed (Y)         :y, 2026, 2036
    CRQC arrives (Z)                  :milestone, z, 2041, 0d
    section Assessment
    X + Y deadline                    :crit, deadline, 2026, 2086

If the milestone (CRQC arrival) falls before the end of the “Data must remain secure” bar, the data is at risk. If it falls before the “Migration time needed” bar ends, migration must begin immediately. If X + Y > Z, you needed to start yesterday.

4.2 Worked Example 1: Government Classified Data

Scenario: A government agency handling TOP SECRET intelligence reports.

| Parameter | Value | Rationale |
|-----------|-------|-----------|
| X (shelf life) | 50 years | Classification duration for TOP SECRET material |
| Y (migration time) | 10 years | Legacy classified networks, accreditation cycles, allied interoperability |
| Z (CRQC estimate) | 15 years | Moderate consensus estimate (2041) |

Calculation: X + Y = 50 + 10 = 60 years. Z = 15 years. Since 60 > 15, the inequality is satisfied by an enormous margin.

Interpretation: In theoretical terms, this agency needed to begin its PQC migration decades ago. In practical terms, migration must begin immediately and with the highest possible urgency. Even if the agency could complete migration in 5 years instead of 10, the data generated today will remain sensitive for 50 years — far beyond any reasonable CRQC timeline. Every day of delay adds another day’s worth of interceptable classified traffic to the adversary’s collection.

timeline
    title Government Classified Data — Mosca's Theorem
    2026 : Migration must begin
    2036 : Migration complete (optimistic)
    2041 : CRQC available (estimate)
    2076 : Data declassification

Status: Already too late for data generated before migration completes. Immediate action required.

gantt
    title Worked Example 1 — Government Classified Data
    dateFormat  YYYY
    axisFormat  %Y
    section Timelines
    Migration window (Y=10yr)         :active, mig, 2026, 2036
    CRQC arrival (Z=15yr)             :milestone, crqc, 2041, 0d
    Data sensitivity (X=50yr)         :data, 2026, 2076
    section Risk
    Unprotected data exposure         :crit, risk, 2026, 2036

The red bar represents data generated during the migration period — data that is transmitted under classical-only encryption and will be decryptable once the CRQC arrives. Even with an aggressive 10-year migration, a full decade of classified communications is exposed.

4.3 Worked Example 2: Healthcare Provider

Scenario: A regional hospital system transmitting electronic health records (EHR) between facilities.

| Parameter | Value | Rationale |
|-----------|-------|-----------|
| X (shelf life) | 30 years | Patient records for a 40-year-old remain sensitive for their lifetime |
| Y (migration time) | 5 years | EHR vendor dependency, HL7/FHIR protocol updates, compliance certification |
| Z (CRQC estimate) | 15 years | Moderate consensus estimate (2041) |

Calculation: X + Y = 30 + 5 = 35 years. Z = 15 years. Since 35 > 15, the inequality is satisfied.

Interpretation: The healthcare provider must begin migration now. The 5-year migration window is realistic if started immediately — the provider would complete migration by ~2031, before the estimated CRQC arrival. However, any delay compresses the available safety margin. If migration is deferred to 2029, the provider would not complete until ~2034, still before the CRQC estimate but with the safety margin cut from ten years to seven. And all patient data transmitted between now and migration completion remains vulnerable to retroactive decryption.

Status: Must begin migration planning immediately. Every year of delay is unrecoverable.

gantt
    title Worked Example 2 — Healthcare Provider
    dateFormat  YYYY
    axisFormat  %Y
    section Timelines
    Migration window (Y=5yr)          :active, mig, 2026, 2031
    CRQC arrival (Z=15yr)             :milestone, crqc, 2041, 0d
    Data sensitivity (X=30yr)         :data, 2026, 2056
    section Assessment
    Safe window after migration       :done, safe, 2031, 2041

The healthcare provider has a feasible path if migration begins immediately. The 10-year gap between migration completion (2031) and estimated CRQC arrival (2041) provides a reasonable safety margin. But this margin shrinks one-for-one with delay: a 5-year postponement halves it, and a 10-year postponement eliminates it entirely.

4.4 Worked Example 3: E-Commerce Platform

Scenario: An online retailer processing credit card transactions and customer orders.

| Parameter | Value | Rationale |
|-----------|-------|-----------|
| X (shelf life) | 2 years | PCI DSS data retention limits; card numbers rotate; order data becomes stale |
| Y (migration time) | 3 years | Cloud-native infrastructure, TLS library updates, CDN and WAF compatibility |
| Z (CRQC estimate) | 15 years | Moderate consensus estimate (2041) |

Calculation: X + Y = 2 + 3 = 5 years. Z = 15 years. Since 5 < 15, the inequality is not satisfied.

Interpretation: This retailer has significant runway. Data generated today will be worthless long before a CRQC exists. The migration can be planned on a standard upgrade cycle without emergency measures. However, the retailer should still factor PQC into upcoming architectural decisions — selecting PQC-compatible TLS libraries, ensuring crypto agility, and monitoring CRQC timeline estimates for any acceleration.

Status: Plan and prepare, but no emergency action required. Integrate PQC into standard upgrade cycles.

gantt
    title Worked Example 3 — E-Commerce Platform
    dateFormat  YYYY
    axisFormat  %Y
    section Timelines
    Migration window (Y=3yr)          :active, mig, 2026, 2029
    Data sensitivity (X=2yr)          :data, 2026, 2028
    CRQC arrival (Z=15yr)             :milestone, crqc, 2041, 0d
    section Assessment
    Data expired before CRQC          :done, safe, 2028, 2041

The e-commerce data expires (loses sensitivity) in 2028 — thirteen years before the estimated CRQC. Even if the CRQC timeline is dramatically compressed, the data is worthless long before quantum decryption becomes feasible. Migration is still good practice but is not urgent.

4.5 Sensitivity Analysis

Mosca’s theorem is only as reliable as its inputs, and all three parameters carry uncertainty:

X (shelf life) uncertainty: Organizations often underestimate how long data remains sensitive. “We only keep records for 7 years” ignores the fact that an adversary may have already copied those records. Shelf life should be assessed from the adversary’s perspective, not the organization’s retention policy.

Y (migration time) uncertainty: Migration always takes longer than planned. Dependencies on vendors, standards bodies, hardware refresh cycles, and compliance recertification create cascading delays. A realistic Y estimate should include buffer for supply chain, testing, and rollback scenarios.

Z (CRQC timeline) uncertainty: This is the most uncertain parameter. Estimates range from 10 to 30+ years, and a breakthrough in quantum error correction could compress the timeline dramatically. Risk-averse organizations should use the lower bound of credible CRQC estimates for planning purposes.
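
One way to operationalize this uncertainty is to replace the three point estimates with distributions and estimate the probability that the inequality holds. The sketch below does exactly that; the specific ranges are illustrative assumptions, not calibrated forecasts.

```python
# Monte Carlo version of Mosca's theorem under parameter uncertainty.
import random

def p_exposed(trials: int = 100_000) -> float:
    """Estimate P(X + Y > Z) with all three parameters drawn at random."""
    hits = 0
    for _ in range(trials):
        x = random.uniform(25, 40)        # shelf life: 25-40 years
        y = random.triangular(4, 12, 6)   # migration: skewed toward overruns
        z = random.triangular(8, 30, 15)  # CRQC arrival: wide, mode at 15
        hits += (x + y) > z
    return hits / trials

print(f"P(data exposed before it expires): {p_exposed():.0%}")
```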


5. CRQC Timeline Estimates

The timeline for a cryptographically relevant quantum computer — one capable of running Shor’s algorithm against RSA-2048 or P-256 — is the most contested variable in the entire PQC discussion. Estimates vary widely based on assumptions about error correction overhead, hardware scaling, and the pace of engineering breakthroughs.

5.1 Current Estimates

| Source | Estimated CRQC Date | Confidence | Notes |
|--------|---------------------|------------|-------|
| Global Risk Institute (2024 survey) | 2035–2045 | Medium | Annual expert survey; median response shifted earlier in recent years |
| RAND Corporation (2023) | 2040–2050 | Low–Medium | Emphasized engineering challenges over theoretical capability |
| Chinese Academy of Sciences (2024) | 2035–2040 | Medium | Published roadmap for photonic and superconducting approaches |
| IBM Quantum Roadmap (2025) | 2035+ | Medium | Based on published hardware scaling targets through 100K+ qubits |
| Google Quantum AI (2025) | 2035–2040 | Medium | Willow chip error correction results suggest accelerating timeline |
| NIST (assessment, 2024) | “Within 15 years is plausible” | Medium | Motivated urgency of FIPS 203/204/205 publication |
| BSI, Germany (2024) | 2035–2040 | Medium | Planning assumption for federal infrastructure migration |
| NSA CNSA 2.0 (2022) | Not publicly estimated | Not stated | Actions imply urgency: mandated PQC by 2035 for NSS |
| ANSSI, France (2024) | “Cannot exclude before 2030” | Low | Precautionary principle; recommends immediate hybrid deployment |
| Michele Mosca (2023) | “1/6 chance by 2033” | Low–Medium | Probability-based framing rather than point estimate |
| Arthur Herman, Hudson Institute (2024) | 2030–2035 | Low | Aggressive timeline; influenced by classified briefings |

5.2 Why Estimates Keep Shifting Earlier

Several factors are compressing CRQC timeline estimates:

  1. Error correction breakthroughs. Google’s Willow chip (2024) demonstrated below-threshold error correction for the first time at meaningful scale, suggesting that the engineering path to fault-tolerant computation is viable. Microsoft’s topological qubit announcements (2025) — if validated — could dramatically reduce the physical-to-logical qubit overhead.

  2. Investment acceleration. Global quantum computing investment has exceeded $40 billion cumulatively, with significant government funding programs in the US ($2.7B via the National Quantum Initiative), EU (€1B Quantum Flagship), China (estimated $15B+), and others. This capital is accelerating hardware development timelines.

  3. Algorithmic improvements. Theoretical improvements to Shor’s algorithm and related quantum algorithms continue to reduce the qubit count and gate depth required for cryptographically relevant computations. Recent papers have proposed approaches that could reduce RSA-2048 factoring requirements by 10–100x compared to earlier estimates.

  4. Classified programs. The urgency of government PQC mandates — particularly NSA’s CNSA 2.0 timeline requiring PQC adoption for national security systems by 2035 — suggests that classified assessments may place the CRQC timeline earlier than public estimates. Intelligence agencies do not typically mandate expensive infrastructure changes with a 20-year buffer.

5.3 Understanding the Estimates

Several important caveats apply to any CRQC timeline estimate:

Point estimates are misleading. A statement like “a CRQC will arrive in 2040” implies a precision that does not exist. Probability distributions are more appropriate: “there is a 10% chance of a CRQC by 2035, a 50% chance by 2045, and a 90% chance by 2060.” For HNDL risk management, the relevant number is the lower tail of the distribution — the earliest plausible date — because data collected today will be exposed at that earliest date.

Public estimates may lag classified assessments. Government mandates (NSA CNSA 2.0, CISA advisories) requiring PQC adoption by 2035 may be informed by classified quantum computing programs whose progress is not reflected in public timelines. When an intelligence agency mandates a cryptographic transition with a specific deadline, that deadline is typically informed by threat intelligence about adversary capabilities, not by academic consensus estimates.

Discontinuous breakthroughs are possible. CRQC timelines assume a roughly continuous pace of improvement. A theoretical breakthrough in quantum error correction, a new qubit modality with dramatically lower error rates, or a mathematical advance that reduces the quantum resources required for Shor’s algorithm could compress the timeline discontinuously. The history of technology is punctuated by such breakthroughs, and planning exclusively for gradual improvement is risky.

5.4 The Irrelevance of Precise Estimates for HNDL

For HNDL specifically, the precise CRQC date matters less than for other quantum threats. The reason: HNDL is already happening. Whether a CRQC arrives in 2035 or 2050, data being collected today will eventually be decryptable. The question is not “will a CRQC arrive before my data becomes worthless?” (Mosca’s theorem answers that). The question is “is my data being collected right now?” For any organization whose traffic traverses networks accessible to nation-state adversaries — which is to say, virtually every organization — the answer is likely yes.


6. Evidence of HNDL Activity

6.1 Public Reporting and Intelligence Assessments

Direct evidence of HNDL activity is, by nature, difficult to obtain — a successful HNDL collection operation is invisible to the victim. However, multiple lines of evidence support the assessment that HNDL is actively occurring:

Snowden disclosures (2013). Documents revealed that the NSA and allied agencies routinely collected encrypted traffic that they could not immediately decrypt, storing it for future cryptanalysis. The stated rationale was not quantum computing — it was the possibility of obtaining keys through other means (rubber-hose cryptanalysis, key server compromise, implementation vulnerabilities). But the infrastructure and practice of storing encrypted traffic is directly applicable to quantum HNDL.

CISA and NSA joint advisory (2022). The advisory explicitly cited “harvest now, decrypt later” as a current threat and recommended immediate adoption of quantum-resistant cryptography for sensitive data. Government agencies do not issue joint advisories for theoretical threats — the language implies assessed intelligence that HNDL collection is occurring.

French ANSSI assessment (2024). ANSSI’s guidance on post-quantum migration explicitly referenced the HNDL threat as a present-tense concern, stating that “the confidentiality of currently exchanged data is already threatened” by quantum-capable adversaries planning for future decryption.

Congressional briefings (2023–2025). Multiple US congressional committees received classified briefings on quantum threats to national security, resulting in bipartisan legislation accelerating PQC migration timelines. The urgency of legislative action suggests the classified threat assessments are more alarming than public estimates.

Chinese quantum research pace. China’s quantum computing program, backed by estimated investment exceeding $15 billion, has produced a series of claims regarding quantum computational advantage (Jiuzhang for photonic systems, Zuchongzhi for superconducting qubits). While these demonstrations are not cryptographically relevant, they signal sustained commitment and rapid capability development. Western intelligence assessments consistently identify China as one of the most likely actors to develop early CRQC capability — and simultaneously one of the most likely to conduct HNDL collection at scale through its extensive domestic internet infrastructure and reported access to international cable systems.

6.2 Network Traffic Analysis Indicators

While specific HNDL collection operations are covert, network security researchers have documented anomalous traffic patterns consistent with large-scale collection:

  • Unexplained traffic duplication. Network operators have observed traffic being mirrored to unknown destinations, particularly at international peering points and cable landing stations.
  • BGP hijacking for interception. Multiple documented incidents of BGP route manipulation redirected traffic through specific autonomous systems, providing opportunities for traffic copying. While some of these were attributed to misconfigurations, others showed patterns consistent with intentional interception.
  • Increased storage procurement. Open-source intelligence (OSINT) tracking of government procurement contracts shows sustained investment in high-capacity storage infrastructure by intelligence agencies worldwide, well beyond what operational analysis requirements would justify.

6.3 The Scale of the Problem

To understand the magnitude of potential HNDL collection, consider the scale of global internet traffic:

  • Global internet traffic in 2025 is approximately 5 exabytes per day (5,000 petabytes).
  • Approximately 95% of web traffic is encrypted (TLS), virtually all using classical key exchange.
  • Undersea cables carry an estimated 4.5 exabytes per day of this traffic.
  • At current storage costs ($10/TB), storing 1% of daily global traffic (50 PB) costs approximately $500,000 — per day. This is within the budget of any major intelligence agency.

Of course, no adversary stores all global traffic. Selective collection at key chokepoints — cable landing stations, major peering exchanges, cloud provider interconnects — is far more efficient. An adversary tapping the top 20 submarine cables could capture a disproportionate share of high-value international traffic (diplomatic, financial, military) while ignoring the bulk of consumer streaming and social media traffic.
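
The chokepoint arithmetic is easy to reproduce. The sketch below uses the figures above (roughly 5 EB/day of global traffic, $10/TB storage) as inputs.

```python
# Daily storage cost of capturing a fraction of global internet traffic.
GLOBAL_TRAFFIC_PB_PER_DAY = 5_000   # ~5 EB/day (figure from above)
COST_PER_TB_USD = 10                # ~2025 HDD pricing

def daily_capture_cost(fraction: float) -> float:
    """Cost of retaining one day's capture at the given fraction."""
    captured_tb = GLOBAL_TRAFFIC_PB_PER_DAY * fraction * 1_000
    return captured_tb * COST_PER_TB_USD

print(f"${daily_capture_cost(0.01):,.0f} per day")  # 1% -> $500,000/day
```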

The stored corpus grows daily, and it never needs to be discarded. Unlike perishable intelligence, encrypted traffic captured today is just as decryptable in 2045 as it will be in 2035. There is no expiration date on the ciphertext.

6.4 The Retroactive Decryption Problem

HNDL creates a unique challenge for security assessment: you cannot detect it after the fact. Unlike a conventional data breach — which leaves forensic artifacts, triggers monitoring alerts, and can be contained once detected — HNDL collection is passive. The adversary makes a perfect copy of the traffic. The original transmission proceeds normally. There is no alteration, no anomaly, no indicator of compromise.

This means that conventional breach detection and incident response frameworks are entirely ineffective against HNDL. There is no alert to investigate, no IOC to hunt for, no containment action to take. The “breach” has already occurred — the data is already in the adversary’s possession — but it will not become exploitable until the CRQC arrives. By the time the impact materializes, the collection event may be decades in the past, well beyond any log retention period.

Breach notification frameworks fare no better. Built around the assumption that a breach can be detected, reported, and remediated, they are fundamentally mismatched to HNDL. There is no “breach date” to report. The data was collected years or decades ago. The compromise only becomes apparent when decrypted data surfaces, at which point the original collection is ancient history, the systems involved may no longer exist, and the individuals affected may not even remember the communications that were compromised.

This is why HNDL must be addressed proactively through cryptographic migration, not reactively through monitoring and incident response. It represents a paradigm shift in how organizations must think about data protection: the threat is not “someone might break in tomorrow” but “someone already has our data and is waiting for the key.”


7. Organizational Response Framework

7.1 Data Classification for Quantum Risk

The first step in an HNDL response strategy is classifying data assets by their quantum risk exposure. This is distinct from traditional data classification (which focuses on current confidentiality requirements) — quantum risk classification adds the temporal dimension of sensitivity duration.

graph TD
    A["Data Asset Inventory"] --> B{"Sensitivity Duration<br/>> 10 years?"}
    B -->|Yes| C{"Transmitted over<br/>public networks?"}
    B -->|No| D["Standard PQC<br/>Migration Cycle"]
    C -->|Yes| E{"Currently protected<br/>by PQC/hybrid?"}
    C -->|No| F["Lower HNDL Risk<br/>— Encrypt at Rest"]
    E -->|Yes| G["Adequate Protection<br/>— Monitor & Maintain"]
    E -->|No| H["CRITICAL HNDL RISK<br/>— Immediate Action"]

    style H fill:#e74c3c,color:#fff
    style G fill:#27ae60,color:#fff
    style D fill:#3498db,color:#fff
    style F fill:#f39c12,color:#fff

Organizations should build a quantum risk register that maps every data flow against:

  1. Data type and sensitivity duration (from the time-value matrix in Section 3)
  2. Transmission path (internal network only, VPN, public internet, cross-border)
  3. Current cryptographic protection (algorithm, key size, protocol version)
  4. Adversary access likelihood (traffic traverses known collection points, geopolitical exposure)

Data flows that score high on all four dimensions require immediate attention.
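
A minimal sketch of such a register entry is shown below. The field names, thresholds, and priority labels are illustrative assumptions rather than a standard schema; they mirror the decision tree above.

```python
# Quantum risk register entry: the four dimensions from the list above.
from dataclasses import dataclass

@dataclass
class DataFlow:
    name: str
    sensitivity_years: float   # 1: how long the data stays sensitive
    public_path: bool          # 2: transmitted over public networks?
    pqc_protected: bool        # 3: hybrid/PQC key exchange in place?
    collection_likely: bool    # 4: traverses likely collection points?

def hndl_priority(flow: DataFlow) -> str:
    if flow.sensitivity_years <= 10:
        return "STANDARD: normal PQC migration cycle"
    if flow.pqc_protected:
        return "ADEQUATE: monitor and maintain"
    if flow.public_path and flow.collection_likely:
        return "CRITICAL: immediate action"
    return "HIGH: near-term migration"

ehr = DataFlow("EHR exchange via HIE", 60, True, False, True)
print(hndl_priority(ehr))  # CRITICAL: immediate action
```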

7.2 Network Segmentation and Traffic Protection Priorities

Not all network traffic carries equal HNDL risk. Organizations should prioritize protection based on a tiered approach:

Tier 1 — Immediate quantum protection required:

  • Government classified networks and diplomatic communications
  • Healthcare data exchange (HL7 FHIR, HIE transactions)
  • Long-term financial records and correspondent banking
  • Intellectual property and R&D communications
  • Legal privilege communications
  • Source-identifying intelligence or journalistic data

Tier 2 — Near-term migration (12–24 months):

  • General corporate communications involving strategic planning
  • Supply chain management systems
  • HR and personnel records
  • Customer databases with long-lived PII

Tier 3 — Standard migration cycle:

  • Transactional e-commerce data
  • Marketing and analytics data
  • Public-facing content management
  • Short-lived session data

Network segmentation can reduce HNDL exposure by ensuring that the highest-value data flows are isolated and protected first, rather than waiting for a complete enterprise-wide migration. Practical segmentation approaches include:

  • Dedicated network paths for Tier 1 data, physically or logically separated from general corporate traffic
  • Separate TLS termination for high-sensitivity endpoints, allowing PQC deployment without affecting all services simultaneously
  • Data loss prevention (DLP) rules that prevent Tier 1 data from being transmitted over non-PQC-protected channels
  • Zero-trust architecture that validates cryptographic posture as part of access decisions — denying connections that do not meet minimum quantum-resistance requirements for the data classification being accessed

7.3 Immediate Actions

Organizations can take several steps today that materially reduce HNDL exposure without requiring a complete PQC migration:

Encrypt at rest with AES-256. Data stored using AES-256 with keys derived from non-public-key sources (passwords, HSM-generated keys, key wrapping) is not vulnerable to Shor’s algorithm. Ensuring all stored data uses AES-256 (not AES-128, which has only ~64-bit security under Grover’s algorithm) provides a quantum-resistant baseline for data at rest. This does not protect data in transit — but it ensures that data which has already been received and stored is safe.

Deploy hybrid TLS. Hybrid TLS — combining classical key exchange (X25519) with a post-quantum KEM (ML-KEM-768) — provides immediate HNDL protection for data in transit. Major browsers (Chrome, Firefox, Edge) and cloud providers (AWS, Cloudflare, Google Cloud) already support hybrid TLS. For server-to-server communication, libraries like Open Quantum Safe (OQS) provide drop-in hybrid TLS support. See Real-World Implementations & Libraries for deployment guidance.
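
One way to verify hybrid support on a given endpoint is to offer only the hybrid group and see whether the handshake completes. The rough probe below (a sketch, not production tooling) shells out to a local OpenSSL 3.5+ s_client binary; the output parsing is approximate and version-dependent.

```python
# Probe a server for X25519MLKEM768 support: offer ONLY the hybrid
# group, so the handshake can succeed only if the server supports it.
# Assumes an OpenSSL 3.5+ binary on PATH; output format varies by version.
import subprocess

def negotiates_hybrid(host: str, port: int = 443) -> bool:
    proc = subprocess.run(
        ["openssl", "s_client", "-connect", f"{host}:{port}",
         "-groups", "X25519MLKEM768", "-brief"],
        input=b"", capture_output=True, timeout=10,
    )
    # With -brief, s_client reports the negotiated protocol on stderr.
    return b"TLSv1.3" in proc.stderr

print(negotiates_hybrid("cloudflare.com"))
```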

Enable Perfect Forward Secrecy (PFS). PFS using ephemeral key exchange ensures that compromising a server’s long-term private key does not retroactively compromise past sessions. While PFS does not protect against HNDL (the ephemeral key exchange itself is the target), it does limit the blast radius: each session requires a separate quantum computation to decrypt. An adversary cannot break one session and derive keys for all sessions. This makes mass decryption more expensive and forces the adversary to expend quantum computation per-session.

Minimize data transmission over public networks. Where possible, use dedicated circuits, MPLS networks, or VPN tunnels for high-sensitivity data flows. This does not eliminate HNDL risk (VPN tunnels still use classical key exchange), but it reduces the likelihood that traffic is captured at public peering points and cable landing stations.

Implement TLS 1.3 everywhere. TLS 1.3 eliminates insecure cipher suites, mandates PFS, and provides a clean extension mechanism for adding PQC key exchange. Organizations still running TLS 1.2 (or worse) should upgrade as a prerequisite for PQC migration. TLS 1.3’s design, with its simplified handshake and mandatory AEAD ciphers, is better suited to hybrid PQC integration.

Audit and eliminate RSA key transport. Some legacy systems still use RSA key transport (where the client encrypts a pre-master secret with the server’s RSA public key) rather than ephemeral Diffie-Hellman. RSA key transport is doubly dangerous for HNDL: not only is the key exchange quantum-vulnerable, but there is no forward secrecy — compromising the server’s long-term RSA key (via quantum computation) retroactively compromises every recorded session that used that key. Eliminating RSA key transport in favor of ephemeral ECDHE (and ultimately PQC hybrid) is a critical prerequisite.

7.4 Protocol-Specific Immediate Guidance

TLS. Deploy hybrid key exchange (X25519MLKEM768) on all externally-facing TLS endpoints. For TLS 1.3, this uses the hybrid key share mechanism defined in draft-ietf-tls-hybrid-design. Most modern TLS libraries (OpenSSL 3.5+, BoringSSL, AWS-LC, wolfSSL) support ML-KEM hybrid modes. CDN providers (Cloudflare, AWS CloudFront, Fastly) offer hybrid TLS as a configuration option. The handshake size increase (roughly 1.2 KB for the ML-KEM-768 public key in the client’s key share and 1.1 KB for the ciphertext in the server’s response, about 2.3 KB in total) is negligible for most deployments but should be tested against middleboxes, firewalls, and IDS/IPS systems that may enforce TLS message size limits.

VPN. IPsec VPNs using IKEv2 can integrate PQC key exchange through additional key exchange payloads (RFC 9370 defines the framework). WireGuard, with its fixed Noise protocol handshake using X25519, requires protocol-level changes for PQC — the Rosenpass project provides a PQC wrapper for WireGuard that adds ML-KEM key exchange as an outer layer. OpenVPN has experimental PQC support via the OQS-OpenVPN fork. Organizations using site-to-site VPNs for high-sensitivity traffic should prioritize PQC VPN deployment, as VPN traffic is often concentrated on predictable, high-value routes that are attractive collection targets.

SSH. OpenSSH has shipped hybrid key exchange (sntrup761x25519-sha512@openssh.com, a hybrid of Streamlined NTRU Prime and X25519) since version 8.5 and has enabled it by default since version 9.0. This should be set as the preferred key exchange algorithm on all SSH servers handling sensitive administrative access. Note that SSH key exchange protects the session, but SSH host keys and user authentication keys (RSA, ECDSA, Ed25519) are also quantum-vulnerable; replacing them with PQC signature algorithms such as ML-DSA is not yet supported in stock OpenSSH and currently requires patched or experimental builds.
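
A quick local check of what the installed OpenSSH client offers is shown below; `ssh -Q kex` lists the supported key exchange algorithms. The ML-KEM identifier is the name used in recent OpenSSH releases and should be treated as an assumption for older builds.

```python
# List the hybrid PQC key exchange algorithms the local ssh client supports.
import subprocess

HYBRID_KEX = {
    "sntrup761x25519-sha512@openssh.com",  # default since OpenSSH 9.0
    "mlkem768x25519-sha256",               # newer releases (assumed name)
}

kex = set(subprocess.run(["ssh", "-Q", "kex"],
                         capture_output=True, text=True).stdout.split())
print("hybrid PQC kex available:", sorted(HYBRID_KEX & kex))
```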

Email. S/MIME and PGP-encrypted email present unique HNDL challenges because encrypted emails are typically stored indefinitely in mailboxes, creating a durable HNDL target. Organizations using email encryption for sensitive communications should evaluate hybrid S/MIME implementations and consider whether encrypted email is the appropriate channel for data with long sensitivity durations.

7.5 Medium-Term Strategy: Full PQC Migration for High-Value Data Paths

The immediate actions in Section 7.3 are stop-gap measures. The definitive solution is a full migration to post-quantum cryptography for all high-value data paths. This means:

Phase 1 — Inventory and assessment (3–6 months). Complete the quantum risk classification exercise described in Section 7.1. Identify every cryptographic dependency in high-value data paths: TLS libraries, VPN implementations, SSH configurations, API gateways, database connections, message queues, and inter-service communication.

Phase 2 — Pilot hybrid deployment (6–12 months). Deploy hybrid TLS (X25519 + ML-KEM-768) on the highest-priority data paths identified in the inventory. Monitor for performance impact (ML-KEM adds approximately 1–2KB to TLS handshakes and negligible latency). Validate compatibility with intermediaries (CDNs, WAFs, load balancers, proxies). See PQC in Protocols for protocol-specific guidance.

Phase 3 — Expand to all Tier 1 and Tier 2 data paths (12–24 months). Roll out hybrid cryptography across all high and medium-priority data flows. Update VPN configurations, SSH deployments, and internal service meshes. Engage with vendors whose products do not yet support PQC.

Phase 4 — Enterprise-wide PQC (24–48 months). Complete migration across all remaining systems. Replace classical-only key exchange everywhere. Establish ongoing crypto agility to accommodate future algorithm changes (the NIST standardization process is ongoing, and additional algorithms may be standardized).

Phase 5 — Validation and continuous monitoring. Verify that no classical-only key exchange remains in production. Implement monitoring to detect configuration drift that could re-introduce classical-only connections. Conduct regular cryptographic assessments to validate PQC deployment completeness. Deploy cryptographic bill-of-materials (CBOM) tooling to maintain continuous visibility into algorithm usage across the enterprise.


8. HNDL Risk Decision Matrix

The following matrix synthesizes the data sensitivity analysis, Mosca’s theorem, and organizational response framework into a decision tool:

quadrantChart
    title HNDL Risk Assessment Matrix
    x-axis "Low Adversary Access" --> "High Adversary Access"
    y-axis "Short Sensitivity Duration" --> "Long Sensitivity Duration"
    quadrant-1 "CRITICAL: Migrate Immediately"
    quadrant-2 "HIGH: Migrate Within 12 Months"
    quadrant-3 "LOW: Standard Upgrade Cycle"
    quadrant-4 "MEDIUM: Plan and Prioritize"
    "Classified intel over internet": [0.9, 0.95]
    "Health records via cloud": [0.75, 0.85]
    "Banking SWIFT messages": [0.7, 0.7]
    "Corporate email (cloud)": [0.65, 0.4]
    "Internal API traffic": [0.2, 0.5]
    "E-commerce transactions": [0.6, 0.15]
    "IoT sensor data": [0.3, 0.1]

The quadrant positions drive action:

  • Upper-right (Critical): Data with long sensitivity duration transmitted over adversary-accessible networks. Migration should already be underway.
  • Upper-left (High): Long-lived sensitive data on relatively controlled networks. Migration should begin within 12 months.
  • Lower-right (Medium): Short-lived data on exposed networks. Plan PQC integration into upcoming upgrade cycles.
  • Lower-left (Low): Short-lived data on controlled networks. Standard migration timeline is acceptable.

9. Quantifying HNDL Exposure

9.1 Measuring Current Exposure

Organizations can estimate their HNDL exposure using concrete metrics:

Volume of quantum-vulnerable traffic. Audit TLS connections across all endpoints and classify by key exchange algorithm. Any connection using ECDHE, X25519, DHE, or RSA key transport without a PQC component is quantum-vulnerable. Express this as a percentage of total traffic and as absolute volume (GB/day of quantum-vulnerable traffic). This becomes the baseline metric for tracking migration progress.

Weighted exposure score. Multiply the volume of quantum-vulnerable traffic by the sensitivity duration of the data it carries. A server transmitting 10GB/day of classified data with a 50-year sensitivity produces a far higher weighted exposure than a server transmitting 100GB/day of transactional data with a 2-year sensitivity. This weighted score provides a prioritization metric that accounts for both volume and consequence.

Geographic exposure. Map data flows against known or suspected surveillance jurisdictions. Traffic that crosses international boundaries, transits undersea cables, or traverses known interception infrastructure receives a higher exposure multiplier. Traffic that remains on dedicated private circuits within a single trusted jurisdiction receives a lower multiplier.
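
The sketch below combines the three metrics into a single per-flow score. The multiplicative form and the example weights are illustrative assumptions.

```python
# Weighted HNDL exposure: volume x sensitivity duration x geographic factor.
def weighted_exposure(gb_per_day: float,
                      sensitivity_years: float,
                      geo_multiplier: float = 1.0) -> float:
    return gb_per_day * sensitivity_years * geo_multiplier

# The comparison from the text: low-volume classified traffic outweighs
# high-volume transactional traffic once sensitivity duration is factored in.
classified = weighted_exposure(10, 50, geo_multiplier=2.0)     # 1000.0
transactional = weighted_exposure(100, 2, geo_multiplier=1.0)  # 200.0
print(classified > transactional)  # True
```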

9.2 Cost-Benefit Framework

The cost of HNDL mitigation must be weighed against the cost of retroactive data compromise:

Cost of migration. Hybrid TLS deployment on modern infrastructure is relatively inexpensive — often a configuration change and library update. The primary costs are testing, validation, and vendor coordination, not the cryptographic operations themselves. ML-KEM-768 adds approximately 0.1–0.3ms of latency per handshake and roughly 2.3KB of additional handshake data, which is negligible for most workloads. The real expense is organizational: project management, change control, compliance recertification, and the opportunity cost of engineering time.

Cost of inaction. If an adversary decrypts 10 years of intercepted government communications, the intelligence damage is incalculable. If a competitor decrypts a pharmaceutical company’s drug trial data, the economic loss can reach billions. If a hospital’s patient records are retroactively exposed, the regulatory penalties, litigation costs, and reputational damage are severe. These costs are speculative — they depend on whether the adversary has actually collected the traffic and whether a CRQC materializes — but the expected value calculation favors migration for any organization handling long-lived sensitive data.

Asymmetry of regret. Migrating to PQC and discovering that a CRQC never arrives (or arrives too late to matter) costs the organization a modest investment in cryptographic modernization — which has independent security benefits (crypto agility, modern protocols, reduced technical debt). Failing to migrate and discovering that an adversary has decrypted decades of sensitive communications is a catastrophic, irreversible loss. The asymmetry of regret strongly favors action.


10. Frequently Misunderstood Aspects of HNDL

10.1 “We Use AES-256, So We’re Safe”

This is the single most common misconception. AES-256 is indeed quantum-resistant — it retains approximately 128 bits of security under Grover’s algorithm, which is more than sufficient. But AES-256 only protects the data once the symmetric key is established. If that symmetric key was derived through a classical key exchange (ECDH, X25519, RSA), the adversary does not attack the AES encryption. They attack the key exchange, recover the symmetric key, and then use it to decrypt the AES-encrypted payload trivially. The symmetric cipher’s strength is irrelevant when the key it protects is derived from a quantum-vulnerable exchange.

10.2 “Perfect Forward Secrecy Protects Against HNDL”

PFS means each session uses an ephemeral key pair, so compromising the server’s long-term key does not retroactively compromise past sessions. This is excellent security hygiene and remains important. But PFS does not protect against HNDL because the ephemeral key exchange itself is the target. The adversary does not need the server’s long-term key — they record the ephemeral public values from the handshake and use Shor’s algorithm to compute the ephemeral shared secret directly. PFS raises the cost of HNDL (one quantum computation per session, rather than one computation to derive all sessions), but it does not prevent it.

10.3 “Our Data Isn’t Interesting Enough to Collect”

Bulk collection programs do not discriminate based on perceived value. The adversary collects everything and selects later. Your traffic may be collected not because it is specifically targeted, but because it transits the same fiber-optic cables as traffic that is targeted. Once collected, the adversary has your encrypted data indefinitely, regardless of whether they originally intended to capture it. Additionally, data that seems uninteresting today — a vendor’s internal emails, a logistics company’s shipping records, a hospital’s appointment scheduling — can become intelligence gold when combined with other decrypted sources. The value of any individual data set increases when correlated with others.

10.4 “We Can Just Re-encrypt Everything When Quantum Computers Arrive”

This misunderstands the HNDL attack model. The adversary already has a copy of your encrypted traffic from before you migrated. Re-encrypting your current data does nothing about the historical copy in the adversary’s possession. You cannot retroactively protect data that has already been transmitted under classical-only encryption. The only defense is to ensure data is protected with quantum-resistant cryptography at the time of transmission. There is no post-hoc fix for HNDL.


11. HNDL and Regulatory/Compliance Landscape

11.1 Government Mandates

Governments have responded to the HNDL threat with increasing urgency:

  • US Executive Order 14028 (2021) and NSM-10 (2022): Directed federal agencies to inventory cryptographic systems and begin planning quantum-resistant migration. NSA’s CNSA 2.0 suite mandates PQC for national security systems by 2035.
  • EU Recommendation on PQC (2024): European Commission recommended member states begin PQC migration for critical infrastructure, explicitly citing HNDL as the motivating threat.
  • France (ANSSI, 2024): Published detailed PQC migration guidance requiring hybrid cryptography for all new systems handling sensitive government data.
  • Germany (BSI, 2024): Updated Technical Guidelines to recommend PQC-hybrid for all government communications systems.
  • UK (NCSC, 2024): Published guidance identifying HNDL as a “current threat to long-term secrets” and recommending prioritized PQC deployment.

11.2 Industry Standards

Compliance frameworks are beginning to incorporate quantum risk:

  • PCI DSS: The PCI Council has initiated research on quantum impacts to payment security. While no PQC requirements exist yet, the short sensitivity duration of payment data means HNDL risk is relatively low for pure transaction data (though cardholder databases with long retention periods are a different matter).
  • HIPAA: No explicit quantum requirements, but the “reasonable safeguards” standard will likely be interpreted to require PQC for health data once NIST standards are finalized — which they now are (FIPS 203, 204, 205).
  • SOC 2 / ISO 27001: Cryptographic control requirements will increasingly be assessed against quantum risk as auditors incorporate PQC readiness into control evaluations.

Organizations that proactively address HNDL risk position themselves ahead of regulatory mandates, avoiding the scramble that inevitably follows when compliance deadlines arrive.

11.3 Liability and Duty-of-Care Considerations

An emerging legal question is whether organizations that fail to migrate to PQC in a timely manner will face liability for data compromised through retroactive quantum decryption. If an organization is aware of the HNDL threat (and awareness is now difficult to deny, given the volume of government advisories), and if quantum-resistant alternatives are available and economically feasible (which they are, as of the FIPS 203/204/205 publication), the argument that failure to migrate constitutes negligence becomes increasingly tenable.

This is analogous to the evolution of breach liability over the past two decades. In the early 2000s, organizations that failed to encrypt data at rest were seen as unfortunate victims of breaches. By 2020, failure to encrypt was widely viewed as negligence. The same trajectory is likely for failure to deploy quantum-resistant cryptography — what is considered “best practice” today will become “minimum standard” within 5–10 years, and organizations that delayed migration will face legal and regulatory consequences for data that was avoidably exposed to HNDL collection.


12. Building an HNDL-Specific Threat Model

12.1 Threat Modeling Process

A structured HNDL threat model should answer five questions:

  1. What data do we transmit that has a sensitivity duration exceeding 10 years? Catalog every data flow, classifying by sensitivity duration. Focus on data in transit, not data at rest (which can be protected separately with symmetric encryption).

  2. What networks does that data traverse? Map the physical path of each data flow. Does it cross public internet? Transit international boundaries? Pass through known surveillance jurisdictions? Use cloud infrastructure hosted in specific countries?

  3. What cryptography currently protects that data in transit? Identify every key exchange algorithm in use. Any flow using ECDH, X25519, RSA key exchange, or DH without a post-quantum component is vulnerable.

  4. What is our realistic migration timeline for each data flow? Assess vendor dependencies, protocol constraints, testing requirements, and compliance recertification timelines. Be realistic — migration always takes longer than planned.

  5. Does Mosca’s inequality hold for any of these data flows? For each flow, compute X + Y and compare against your chosen Z. Any flow where X + Y > Z requires immediate action.

The output of this threat model should be a prioritized matrix of data flows, ranked by HNDL risk score (a composite of sensitivity duration, adversary access likelihood, and current cryptographic protection). This matrix directly informs the migration priority order from Section 7.

flowchart TD
    A["Identify sensitive data flows"] --> B["Classify sensitivity duration"]
    B --> C["Map network paths"]
    C --> D["Audit current crypto"]
    D --> E["Apply Mosca's theorem"]
    E --> F{"X + Y > Z?"}
    F -->|Yes| G["Add to immediate<br/>migration queue"]
    F -->|No| H["Add to standard<br/>migration cycle"]
    G --> I["Prioritize by<br/>risk score"]
    H --> I
    I --> J["Execute migration<br/>plan"]

    style G fill:#e74c3c,color:#fff
    style H fill:#3498db,color:#fff
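
As a minimal sketch of how the prioritized matrix above might be computed, the Python below scores hypothetical data flows against Mosca’s inequality and a simple composite risk score. The 15-year CRQC horizon, the scoring formula, and the example flows are all assumptions chosen for illustration, not recommended values.

from dataclasses import dataclass

@dataclass
class DataFlow:
    name: str
    shelf_life_years: float   # X: how long the data stays sensitive
    migration_years: float    # Y: realistic time to deploy PQC for this flow
    adversary_access: float   # 0..1 likelihood the traffic is collected
    pq_protected: bool        # hybrid/PQC key exchange already in place?

CRQC_HORIZON_YEARS = 15.0     # Z: assumed time until a CRQC (assumption)

def mosca_violated(flow: DataFlow) -> bool:
    # Mosca's inequality: X + Y > Z means the data will still be
    # sensitive when a CRQC can decrypt it -- migration is already urgent.
    return flow.shelf_life_years + flow.migration_years > CRQC_HORIZON_YEARS

def risk_score(flow: DataFlow) -> float:
    # Composite of sensitivity duration, adversary access likelihood, and
    # current cryptographic protection (weighting is illustrative only).
    if flow.pq_protected:
        return 0.0
    return flow.shelf_life_years * flow.adversary_access

flows = [
    DataFlow("patient-records-sync", 50, 3, 0.6, False),
    DataFlow("payment-authorizations", 1, 2, 0.8, False),
    DataFlow("partner-vpn", 20, 4, 0.4, True),
]

for f in sorted(flows, key=risk_score, reverse=True):
    queue = "IMMEDIATE" if mosca_violated(f) and not f.pq_protected else "standard"
    print(f"{f.name:28s} score={risk_score(f):5.1f}  queue={queue}")

The queue assignment and ranking would feed the migration priority order from Section 7; the point is that the Mosca test becomes mechanical once X, Y, and Z are estimated per flow.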

12.2 Integrating HNDL into Existing Security Programs

HNDL should not exist as a standalone initiative. It integrates into existing security programs:

  • Risk management: Add HNDL to the enterprise risk register as a strategic risk with a long time horizon. Quantify potential impact using data valuation and sensitivity analysis.
  • Architecture review: All new system designs should be assessed for HNDL exposure. Any new system transmitting long-lived sensitive data over public networks should deploy hybrid cryptography from day one.
  • Vendor management: Require PQC roadmaps from vendors handling sensitive data. Include PQC capability requirements in procurement evaluations and contract renewals.
  • Incident response: Acknowledge that HNDL “breaches” are undetectable and not amenable to traditional incident response. The response is migration, not containment.
  • Board reporting: HNDL risk should be communicated to senior leadership as a strategic risk comparable to climate risk or supply chain risk — a long-horizon threat requiring sustained investment and organizational commitment.
  • Security awareness training: Include HNDL in security awareness programs so that employees handling sensitive data understand why cryptographic migration is necessary and why data handling practices (minimizing transmission of sensitive data over public networks) matter now, not just after a CRQC exists.
  • Third-party risk management: Assess HNDL exposure through third-party connections. If a business partner handles your sensitive data and transmits it using classical-only encryption, your data is exposed regardless of your own cryptographic posture. Include PQC readiness in third-party security assessments and contractual requirements.

Summary

HNDL is not a future threat. It is a present-tense collection activity whose exploitation is deferred until quantum computing matures. Adversaries — particularly nation-state intelligence agencies — are intercepting and storing encrypted traffic today, indexed by metadata for selective future decryption. The economics of storage make this activity cheap. The intelligence value of retroactive decryption makes it irresistible.

The key takeaways for security professionals:

  1. HNDL is happening now. Collection does not require a CRQC. Only decryption does. Every day of classical-only key exchange adds to the adversary’s stored corpus.

  2. Mosca’s theorem provides the decision framework. If X (data shelf life) + Y (migration time) > Z (time to CRQC), you are already behind. For government, healthcare, and any organization with data sensitivity exceeding 10–15 years, the inequality is satisfied today.

  3. Symmetric encryption does not solve the problem. AES-256 is quantum-resistant, but it does not protect data in transit if the key exchange that established the symmetric key was classical. The attack targets the key exchange, not the cipher.

  4. Hybrid TLS is available today. Hybrid key exchange (X25519 + ML-KEM-768) is supported by major browsers, cloud providers, and TLS libraries. Deploying it is the single most impactful action an organization can take against HNDL.

  5. There is no retroactive fix. Data transmitted under classical-only encryption cannot be re-protected. The adversary’s copy of your encrypted traffic is permanent. The only defense is protecting data with quantum-resistant cryptography at the time of transmission.

  6. The cost of inaction exceeds the cost of action. PQC migration is a modest engineering investment with independent security benefits. Retroactive quantum decryption of sensitive data is a catastrophic, irreversible loss. The asymmetry of regret favors migration.

The window to protect data against retroactive quantum decryption closes a little more each day. Every session transmitted under classical-only key exchange is another entry in the adversary’s collection — an entry that cannot be recalled, re-encrypted, or revoked. The only defense is proactive migration to quantum-resistant cryptography, starting with the data that matters most. For a structured approach to that migration, see Migration Strategies & Crypto Agility. For the specific algorithms and protocol integrations that make it possible, see Lattice-Based Cryptography and PQC in Protocols.