Frameworks & Methodologies
Overview
Red team operations are only as effective as the structure behind them. Ad hoc adversary simulation produces inconsistent results, leaves gaps in coverage, and makes it difficult to measure improvement over time. Industry frameworks and methodologies exist to impose rigor on the process — defining phases, deliverables, quality benchmarks, and repeatable procedures that transform red teaming from an art into a disciplined practice.
The landscape of red teaming frameworks spans multiple origins: some emerged from the penetration testing community (PTES), others from financial regulators seeking assurance that systemically important institutions can withstand sophisticated attacks (CBEST, TIBER-EU), and still others from military doctrine adapted for the cyber domain (Cyber Kill Chain). Each carries its own assumptions about threat models, engagement scope, and the role of threat intelligence.
This page provides a comprehensive survey of the major frameworks, kill chain models, and engagement methodologies that underpin modern red team operations. Understanding these is essential not just for red team operators, but for the defenders, executives, and regulators who commission, scope, and consume the results of red team engagements.
Framework Comparison
The following table provides a high-level comparison of the major frameworks and methodologies referenced throughout this page.
| Framework | Origin | Primary Sector | Scope | Intelligence-Led | Certification Required | Year |
|---|---|---|---|---|---|---|
| PTES | Community / Industry | General | Full pentest lifecycle | Optional | No | 2009 |
| CBEST | Bank of England | UK Financial Services | Threat-intelligence-led red team | Yes | Yes (CREST accredited) | 2014 |
| TIBER-EU | European Central Bank | EU Financial Sector | Intelligence-based ethical red team | Yes | Yes (varies by jurisdiction) | 2018 |
| CREST STAR | CREST | Global / Cross-sector | Simulated targeted attack & response | Yes | Yes (CREST certified) | 2016 |
| Cyber Kill Chain | Lockheed Martin | Defense / General | Intrusion analysis model | No | No | 2011 |
| Unified Kill Chain | Paul Pols | Academic / General | Extended intrusion model | No | No | 2017 |
| NIST CSF | NIST | US / Global Critical Infrastructure | Risk management framework | No | No | 2014 (2.0: 2024) |
| MITRE ATT&CK | MITRE | Global / Cross-sector | Adversary TTP knowledge base | No | No | 2015 |
Penetration Testing Execution Standard (PTES)
Background
The Penetration Testing Execution Standard was developed by a group of information security practitioners to define a baseline for penetration testing engagements. Although it was designed primarily for penetration testing rather than full adversary simulation, PTES provides a structural foundation that many red team methodologies build upon or reference.
The Seven Phases
PTES defines seven distinct phases for a complete engagement:
1. Pre-Engagement Interactions. This phase covers everything that happens before technical work begins. It includes scoping discussions, defining rules of engagement, establishing communication channels, identifying emergency contacts, and securing legal authorization. For red team engagements, this phase is particularly critical because the expanded scope (physical access, social engineering, multi-month timelines) demands more detailed agreements than a standard penetration test.
Key activities:
- Scope definition and boundary setting
- Legal authorization and contracts
- Communication plan and escalation procedures
- Timeline and milestone agreement
- Data handling and classification requirements
2. Intelligence Gathering. The intelligence gathering phase encompasses both passive and active reconnaissance. Passive techniques include OSINT collection, domain enumeration, social media analysis, and public records research. Active techniques involve network scanning, service fingerprinting, and direct interaction with target systems.
For red teams, this phase is significantly more extensive than for standard penetration tests. Operators may spend weeks or months in this phase, building detailed target profiles that include organizational structure, key personnel, technology stack, physical facility layouts, and supply chain relationships.
3. Threat Modeling. PTES prescribes a structured approach to threat modeling that combines intelligence gathered in phase two with an understanding of the target’s business context. The goal is to identify the most likely and most impactful attack paths before any exploitation occurs.
Red teams extend this phase to include adversary emulation planning — selecting specific threat actors to emulate based on the organization’s threat landscape and mapping their known TTPs to potential attack vectors against the target.
4. Vulnerability Analysis. This phase involves identifying weaknesses in the target environment through automated scanning, manual testing, and analysis of gathered intelligence. It covers network vulnerabilities, application flaws, configuration weaknesses, and human factors.
Red teams often de-emphasize automated vulnerability scanning in favor of manual analysis and custom exploit development, as automated tools generate noise that may alert defenders and do not reflect how sophisticated adversaries operate.
5. Exploitation. The exploitation phase is where identified vulnerabilities are actually leveraged to gain access. PTES emphasizes precision over breadth — exploiting only what is necessary to achieve engagement objectives rather than exploiting every vulnerability found.
For red teams, exploitation is governed by the principle of stealth. The goal is to gain access through the path of least resistance and highest operational security, often favoring social engineering, supply chain compromise, or zero-day exploitation over noisy network-based attacks.
6. Post-Exploitation. Post-exploitation covers everything that happens after initial access: privilege escalation, lateral movement, persistence establishment, data exfiltration, and achieving engagement objectives. PTES defines this phase in terms of the value of compromised systems and data to the target organization.
This is where red team engagements diverge most significantly from penetration tests. Red teams may operate in a target environment for weeks, establishing multiple persistence mechanisms, moving laterally through segmented networks, and carefully timing their actions to avoid detection. The MITRE ATT&CK framework provides detailed taxonomies that are invaluable during this phase.
7. Reporting. The final phase produces the deliverable that justifies the entire engagement. PTES defines a reporting structure that includes an executive summary, technical findings, risk ratings, and remediation guidance.
Red team reports differ from penetration test reports in their emphasis on narrative. A red team report tells the story of the engagement: what the adversary did, what the defenders detected (and missed), and what organizational changes would have disrupted the attack chain. Detailed guidance on effective reporting is covered in Metrics & Reporting.
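The sequential discipline PTES imposes can be made concrete in code. The following is an illustrative sketch (not part of the standard itself) that models the seven phases as an ordered checklist and refuses to close a later phase while an earlier one is still open — the `Engagement` tracker is a hypothetical helper.

```python
# Hypothetical sketch: the seven PTES phases as an ordered lifecycle.
# Phase names come from the standard; the tracker class is illustrative.
from enum import IntEnum


class PtesPhase(IntEnum):
    PRE_ENGAGEMENT = 1
    INTELLIGENCE_GATHERING = 2
    THREAT_MODELING = 3
    VULNERABILITY_ANALYSIS = 4
    EXPLOITATION = 5
    POST_EXPLOITATION = 6
    REPORTING = 7


class Engagement:
    def __init__(self) -> None:
        self.completed = set()

    def complete(self, phase: PtesPhase) -> None:
        # Enforce the sequential PTES lifecycle: every earlier phase
        # must be signed off before a later one can be closed out.
        missing = {p for p in PtesPhase if p < phase} - self.completed
        if missing:
            raise ValueError(
                f"cannot close {phase.name}; open phases: "
                f"{sorted(p.name for p in missing)}"
            )
        self.completed.add(phase)


engagement = Engagement()
engagement.complete(PtesPhase.PRE_ENGAGEMENT)
engagement.complete(PtesPhase.INTELLIGENCE_GATHERING)
```

In practice red teams bend this strict ordering (reconnaissance continues throughout execution, for example), which is exactly the kind of deviation the rules of engagement should document.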
Strengths and Limitations
Strengths:
- Provides a comprehensive, structured lifecycle for testing engagements
- Well-understood across the industry, creating common vocabulary
- Flexible enough to adapt to red team operations
- Freely available and community-maintained
Limitations:
- Designed for penetration testing, not adversary simulation — lacks explicit guidance on threat intelligence integration, adversary emulation, and purple team collaboration
- Has not been significantly updated since its initial release
- Does not prescribe specific quality standards or assessor qualifications
- No formal certification or accreditation program
CBEST (Bank of England)
Regulatory Context
CBEST is an intelligence-led penetration testing framework developed by the Bank of England in collaboration with CREST and the UK financial sector. It was designed to assess the cyber resilience of UK financial institutions — banks, insurers, financial market infrastructures, and other systemically important firms — against sophisticated, persistent threat actors.
CBEST was one of the first regulatory frameworks to mandate that red team testing be driven by genuine threat intelligence rather than generic vulnerability scanning. This intelligence-led approach ensures that tests simulate realistic threats relevant to the specific institution rather than theoretical attack scenarios.
Framework Structure
CBEST engagements consist of three primary phases:
Phase 1: Threat Intelligence. A CBEST-accredited threat intelligence provider conducts a targeted threat intelligence assessment for the institution under test. This assessment identifies:
- Threat actors with the motivation and capability to target the specific institution
- Attack vectors and TTPs associated with those threat actors
- The institution’s external attack surface and exposure
- Relevant intelligence from recent campaigns against the financial sector
The threat intelligence report defines specific attack scenarios for the red team to execute. This is the defining characteristic of CBEST: the red team’s objectives and methods are derived from real-world threat intelligence, not from the red team’s own preferences or capabilities.
Phase 2: Penetration Testing (Red Team Execution). A CBEST-accredited penetration testing provider executes the attack scenarios defined in the threat intelligence phase. The red team operates under strict rules of engagement and attempts to achieve objectives that represent realistic adversary goals — such as accessing payment systems, exfiltrating customer data, or disrupting critical business services.
Key characteristics of CBEST red team execution:
- Testing targets live production systems, not lab environments
- Social engineering and physical access testing are in scope
- Engagements typically run for 10-12 weeks
- Only a small “control group” within the institution is aware of the test
- The blue team (SOC, incident response) is not informed in advance
Phase 3: Reporting and Remediation. The reporting phase produces a detailed account of the engagement, including:
- Attack narrative describing each phase of the operation
- Detection and response assessment — what the blue team detected and how they responded
- Findings mapped to control weaknesses
- Strategic and tactical remediation recommendations
- Comparison of institutional resilience against the identified threat landscape
The regulator (Bank of England / PRA / FCA) receives a summary of findings and tracks remediation through supervisory processes.
Accreditation Requirements
CBEST mandates that both the threat intelligence provider and the penetration testing provider hold CREST accreditation. Individual assessors must hold relevant CREST certifications (CCT INF, CCT APP, or equivalent), and the provider organization must demonstrate mature quality assurance processes.
TIBER-EU
Background and Purpose
TIBER-EU (Threat Intelligence-Based Ethical Red Teaming for the European Union) was published by the European Central Bank (ECB) in 2018 as a pan-European framework for intelligence-led red teaming of financial entities. It was heavily influenced by CBEST and the Dutch TIBER-NL framework, and was designed to be adopted and implemented by EU member states through national implementations.
The framework aims to standardize how financial entities across Europe test their cyber resilience against sophisticated adversaries, while allowing individual member states flexibility in implementation details and regulatory oversight.
Three-Phase Structure
TIBER-EU engagements follow three clearly defined phases:
```mermaid
graph LR
    subgraph Phase1["Phase 1: Preparation"]
        A1["Scope Definition"] --> A2["Procurement of<br/>TI & RT Providers"]
        A2 --> A3["Establish<br/>Control Team"]
    end
    subgraph Phase2["Phase 2: Testing"]
        B1["Generic Threat<br/>Landscape (GTL)"] --> B2["Targeted Threat<br/>Intelligence (TTI)"]
        B2 --> B3["Red Team<br/>Test Execution"]
    end
    subgraph Phase3["Phase 3: Closure"]
        C1["Red Team<br/>Report"] --> C2["Blue Team<br/>Report"]
        C2 --> C3["Replay &<br/>Workshop"]
        C3 --> C4["Remediation<br/>Plan"]
        C4 --> C5["TIBER-EU<br/>Attestation"]
    end
    Phase1 --> Phase2 --> Phase3
    style Phase1 fill:#1a1a2e,stroke:#e94560,color:#ffffff
    style Phase2 fill:#1a1a2e,stroke:#0f3460,color:#ffffff
    style Phase3 fill:#1a1a2e,stroke:#16213e,color:#ffffff
```
Phase 1: Preparation
The preparation phase establishes the governance and scope for the test:
- Scope definition: Identifying the critical functions of the financial entity that will be the target of the test. These are the functions whose disruption would impact financial stability, market integrity, or consumer protection.
- Control team formation: A small group (typically 3-5 people) within the entity who are aware of the test. This group manages the engagement and acts as the interface between the entity, the TI provider, the RT provider, and the regulator.
- Procurement: Selecting a threat intelligence provider and a red team provider. TIBER-EU requires that these are separate entities to maintain independence between intelligence and execution.
Phase 2: Testing
The testing phase consists of three sequential sub-phases:
1. Generic Threat Landscape (GTL): The TI provider produces a report on the general threat landscape facing the financial sector in the relevant jurisdiction. This identifies active threat actors, emerging attack trends, and sector-wide vulnerabilities. The GTL provides context but is not institution-specific.
2. Targeted Threat Intelligence (TTI): The TI provider conducts targeted intelligence gathering on the specific entity. This produces detailed attack scenarios tailored to the entity’s technology stack, organizational structure, geographic presence, and identified threat actors. The TTI report directly informs the red team’s test plan.
3. Red Team Test: The red team executes the scenarios defined in the TTI report against the entity’s live production environment. The test typically runs for 10-12 weeks. The red team attempts to compromise critical functions, move laterally, establish persistence, and demonstrate impact — all while evading the entity’s detection and response capabilities.
Phase 3: Closure
The closure phase is where organizational learning occurs:
- Red team report: Detailed account of the engagement, including attack narrative, tools and techniques used, systems compromised, and objectives achieved or not achieved.
- Blue team report: The entity’s security operations team produces their own account of what they detected, how they responded, and what they missed. This is produced before the blue team sees the red team report.
- Replay workshop: A joint session where the red team walks through the engagement with both the control team and the blue team. This is the most valuable learning moment in the entire process.
- Remediation plan: The entity produces a plan to address identified weaknesses.
- TIBER-EU attestation: A formal document confirming that the test was conducted in accordance with the TIBER-EU framework.
National Implementations
TIBER-EU is a framework, not a regulation. Individual member states adopt it through national implementations:
| Country | National Implementation | Lead Authority | Status |
|---|---|---|---|
| Netherlands | TIBER-NL | De Nederlandsche Bank | Active (predecessor to TIBER-EU) |
| Germany | TIBER-DE | Deutsche Bundesbank | Active |
| Belgium | TIBER-BE | National Bank of Belgium | Active |
| Ireland | TIBER-IE | Central Bank of Ireland | Active |
| Italy | TIBER-IT | Banca d’Italia | Active |
| Spain | TIBER-ES | Banco de España | Active |
| Finland | TIBER-FI | Bank of Finland | Active |
| Denmark | TIBER-DK | Danmarks Nationalbank | Active |
| Sweden | TIBER-SE | Sveriges Riksbank | Active |
| Portugal | TIBER-PT | Banco de Portugal | Active |
| Romania | TIBER-RO | National Bank of Romania | Active |
| Austria | TIBER-AT | Oesterreichische Nationalbank | Active |
| Luxembourg | TIBER-LU | Banque centrale du Luxembourg | Active |
DORA and the Evolution of TIBER-EU
The EU’s Digital Operational Resilience Act (DORA), which entered into force in January 2023 and applies from January 2025, formally codified threat-led penetration testing (TLPT) as a regulatory requirement for significant financial entities. DORA’s TLPT provisions are directly based on the TIBER-EU framework, effectively transforming what was previously a voluntary framework into a regulatory mandate for in-scope institutions.
CREST
Organization and Mission
CREST (Council of Registered Ethical Security Testers) is an international not-for-profit accreditation and certification body for the technical information security industry. Founded in the UK, CREST provides assurance that security testing companies and individual practitioners meet defined standards of competence, ethics, and professionalism.
CREST STAR Methodology
The Simulated Targeted Attack and Response (STAR) methodology is CREST’s framework for adversary simulation engagements. STAR is designed to be used by CREST-accredited companies and certified practitioners to conduct intelligence-led red team operations.
Key elements of STAR:
- Threat intelligence-driven scoping: Engagements begin with threat intelligence gathering to identify realistic adversary profiles and attack scenarios
- Controlled attack simulation: Structured execution of attack scenarios against live environments
- Detection and response assessment: Explicit evaluation of the target’s ability to detect and respond to the simulated attack
- Collaborative debrief: Joint sessions between the red team and the target’s security team to maximize learning
Certification Hierarchy
CREST maintains a hierarchy of individual certifications relevant to red teaming:
| Certification | Level | Focus Area |
|---|---|---|
| CREST Practitioner Security Analyst (CPSA) | Entry | Foundational security testing |
| CREST Registered Penetration Tester (CRT) | Intermediate | Infrastructure or application testing |
| CREST Certified Infrastructure Tester (CCT INF) | Advanced | Infrastructure penetration testing |
| CREST Certified Application Tester (CCT APP) | Advanced | Application security testing |
| CREST Certified Simulated Attack Manager (CCSAM) | Expert | Red team engagement management |
| CREST Certified Simulated Attack Specialist (CCSAS) | Expert | Red team technical execution |
The CCSAM and CCSAS certifications are specifically designed for red team practitioners and are required for CBEST and certain TIBER-EU national implementations.
Global Recognition
CREST has expanded beyond the UK and now operates member companies and examination centers across Europe, Asia-Pacific, the Middle East, Africa, and the Americas. Several national regulators beyond the UK recognize CREST accreditation as a qualifying standard for mandated security testing.
Lockheed Martin Cyber Kill Chain
The Seven Phases
The Cyber Kill Chain, published by Lockheed Martin in 2011, is an intrusion analysis model derived from military kill chain doctrine. It describes the sequential phases an adversary must complete to achieve their objective in a network intrusion.
```mermaid
graph TD
    R["1. Reconnaissance<br/>Target identification<br/>& information gathering"] --> W["2. Weaponization<br/>Coupling exploit with<br/>backdoor into payload"]
    W --> D["3. Delivery<br/>Transmitting weapon<br/>to target environment"]
    D --> E["4. Exploitation<br/>Triggering the exploit<br/>to execute code"]
    E --> I["5. Installation<br/>Installing backdoor<br/>or persistent access"]
    I --> C["6. Command & Control<br/>Establishing channel<br/>for remote control"]
    C --> A["7. Actions on Objectives<br/>Achieving the<br/>adversary's goals"]
    style R fill:#16213e,stroke:#e94560,color:#ffffff
    style W fill:#1a1a2e,stroke:#e94560,color:#ffffff
    style D fill:#16213e,stroke:#0f3460,color:#ffffff
    style E fill:#1a1a2e,stroke:#0f3460,color:#ffffff
    style I fill:#16213e,stroke:#e94560,color:#ffffff
    style C fill:#1a1a2e,stroke:#e94560,color:#ffffff
    style A fill:#16213e,stroke:#0f3460,color:#ffffff
```
1. Reconnaissance. The adversary identifies and researches the target. This includes harvesting email addresses, identifying employees on social media, discovering internet-facing infrastructure, and mapping the organization’s technology stack. Both passive (OSINT) and active (scanning, probing) techniques are employed.
2. Weaponization. The adversary creates a deliverable payload by coupling an exploit with a backdoor. This might involve generating a malicious document with an embedded exploit, creating a trojanized application, or crafting a custom implant. The target is not directly involved in this phase — it occurs entirely within the adversary’s infrastructure.
3. Delivery. The adversary transmits the weaponized payload to the target environment. Common delivery vectors include spear-phishing emails with malicious attachments, watering hole attacks on websites frequented by target employees, and USB drops. The delivery mechanism is selected based on intelligence gathered during reconnaissance.
4. Exploitation. The payload triggers, exploiting a vulnerability to execute the adversary’s code on the target system. This might exploit a software vulnerability (buffer overflow, use-after-free), a configuration weakness, or a human factor (user enables macros, clicks a link, enters credentials).
5. Installation. The adversary installs a persistent backdoor or remote access tool on the compromised system. This ensures continued access even if the initial exploitation vector is patched or the user closes the malicious document. Persistence mechanisms include registry run keys, scheduled tasks, DLL hijacking, and bootkit installation.
6. Command and Control (C2). The compromised system establishes a communication channel back to the adversary’s infrastructure. C2 channels are designed to blend with normal network traffic — using HTTPS, DNS tunneling, or legitimate cloud services. The adversary now has interactive, remote control over the compromised system.
7. Actions on Objectives. The adversary pursues their ultimate goal: data exfiltration, intellectual property theft, system disruption, ransomware deployment, espionage, or preparation for future operations. This phase may involve lateral movement to additional systems, privilege escalation, and extended dwell time within the network.
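The model’s defensive premise — that disrupting any single phase breaks the whole chain — can be sketched in a few lines. This is an illustrative example, not from the Lockheed Martin paper: the detection names are hypothetical, and the helper simply finds the earliest phase at which an existing control would interrupt the intrusion.

```python
# Illustrative sketch: the seven Kill Chain phases as an ordered sequence.
KILL_CHAIN = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command & Control", "Actions on Objectives",
]

def earliest_disruption(coverage):
    """Return the first phase where a control detects or blocks the intrusion.

    Earlier disruption is always preferable: the adversary is denied
    everything downstream of the broken link.
    """
    for phase in KILL_CHAIN:
        if coverage.get(phase):
            return phase
    return None

# Hypothetical control coverage for an organization.
coverage = {
    "Delivery": True,           # mail gateway strips macro-enabled documents
    "Command & Control": True,  # egress proxy flags beaconing patterns
}
print(earliest_disruption(coverage))  # Delivery
```

A red team report can use the same structure in reverse: for each phase the operators completed undetected, the corresponding defensive layer demonstrably failed.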
Criticisms and Limitations
The Cyber Kill Chain, while influential, has drawn significant criticism:
- Perimeter-centric: The model assumes the adversary must breach a network perimeter, which does not account for insider threats, supply chain compromises, or cloud-native attacks where there is no traditional perimeter.
- Linear progression: The model implies a strictly sequential process. In practice, adversaries iterate, skip phases, and operate multiple parallel attack chains simultaneously.
- Incomplete post-exploitation coverage: The “Actions on Objectives” phase collapses a vast range of post-compromise activities into a single step. Lateral movement, privilege escalation, defense evasion, and data staging are inadequately represented.
- Defender bias: The model was designed from the defender’s perspective for intrusion detection and disruption. It does not fully capture the adversary’s decision-making process or the complexity of modern attack campaigns.
- No coverage of non-malware attacks: Living-off-the-land techniques, credential abuse, and social engineering campaigns that do not involve traditional malware payloads are poorly represented.
Modern Relevance
Despite its limitations, the Cyber Kill Chain remains valuable as a communication tool — particularly for explaining adversary operations to non-technical stakeholders. It provides an intuitive, sequential narrative that makes complex intrusions understandable. Many organizations use it as a starting point and layer additional frameworks (MITRE ATT&CK, Unified Kill Chain) on top for operational depth. The MITRE ATT&CK framework, in particular, provides the granular TTP mapping that the Kill Chain lacks.
Unified Kill Chain
Addressing the Gaps
The Unified Kill Chain (UKC), developed by Paul Pols in 2017, extends and refines the Lockheed Martin Cyber Kill Chain by integrating concepts from MITRE ATT&CK and addressing the original model’s most significant limitations. The UKC expands the seven-phase model to eighteen phases organized across three meta-phases, providing substantially more granularity for post-exploitation activities.
Three Meta-Phases and Eighteen Phases
Initial Foothold (Gaining Access)
| Phase | Description |
|---|---|
| 1. Reconnaissance | Researching, identifying, and selecting targets |
| 2. Resource Development | Preparing tools, infrastructure, and capabilities |
| 3. Delivery | Transporting weaponized payload to the target |
| 4. Social Engineering | Manipulating human targets to enable access |
| 5. Exploitation | Taking advantage of vulnerabilities to execute code |
| 6. Persistence | Ensuring continued access through system restarts and credential changes |
| 7. Defense Evasion | Avoiding detection by security monitoring and controls |
| 8. Command & Control | Establishing remote communication channel |
Network Propagation (Moving Through the Environment)
| Phase | Description |
|---|---|
| 9. Pivoting | Using compromised systems to access otherwise unreachable networks |
| 10. Discovery | Gaining knowledge about the internal environment and systems |
| 11. Privilege Escalation | Gaining higher-level permissions than initially obtained |
| 12. Execution | Running adversary-controlled code on target systems |
| 13. Credential Access | Stealing credentials for legitimate accounts |
| 14. Lateral Movement | Moving between systems in the target environment |
Actions on Objectives (Achieving Goals)
| Phase | Description |
|---|---|
| 15. Collection | Gathering data relevant to the adversary’s objectives |
| 16. Exfiltration | Removing collected data from the target environment |
| 17. Impact | Disrupting, destroying, or manipulating target systems and data |
| 18. Objectives | Achieving the adversary’s strategic and operational goals |
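The three tables above translate naturally into a lookup structure. The sketch below is a hypothetical convenience (the phase and meta-phase names follow the tables; the `meta_phase` helper is not part of the UKC itself) that a reporting tool might use to bucket observed activity.

```python
# Illustrative sketch: the Unified Kill Chain's eighteen phases grouped
# into its three meta-phases, per the tables above.
UNIFIED_KILL_CHAIN = {
    "Initial Foothold": [
        "Reconnaissance", "Resource Development", "Delivery",
        "Social Engineering", "Exploitation", "Persistence",
        "Defense Evasion", "Command & Control",
    ],
    "Network Propagation": [
        "Pivoting", "Discovery", "Privilege Escalation", "Execution",
        "Credential Access", "Lateral Movement",
    ],
    "Actions on Objectives": [
        "Collection", "Exfiltration", "Impact", "Objectives",
    ],
}

def meta_phase(phase):
    """Return the meta-phase a given UKC phase belongs to."""
    for meta, phases in UNIFIED_KILL_CHAIN.items():
        if phase in phases:
            return meta
    raise KeyError(phase)

# Sanity check: 8 + 6 + 4 = 18 phases in total.
assert sum(len(p) for p in UNIFIED_KILL_CHAIN.values()) == 18
print(meta_phase("Lateral Movement"))  # Network Propagation
```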
Key Improvements Over the Original Kill Chain
- Explicit social engineering phase: Recognizes that human manipulation is a distinct attack phase, not merely a delivery mechanism
- Resource development phase: Captures the adversary’s pre-attack infrastructure and tooling preparation
- Detailed post-exploitation model: Fourteen of the eighteen phases cover activities after initial access, compared to just two in the original Kill Chain
- Non-linear execution: The UKC explicitly acknowledges that phases can be executed in parallel, repeated, or skipped
- Integration with ATT&CK: Each UKC phase maps directly to MITRE ATT&CK tactics, providing a bridge between the high-level kill chain view and the granular TTP detail
NIST Cybersecurity Framework Integration
Mapping Red Teaming to NIST CSF Functions
The NIST Cybersecurity Framework (CSF) organizes cybersecurity risk management into six core functions (Govern was added in CSF 2.0). Red team engagements produce findings and insights that map directly to each function, making the NIST CSF a useful lens for translating red team results into enterprise risk language.
| NIST CSF Function | Red Team Relevance | Example Findings |
|---|---|---|
| Govern | Tests organizational governance structures, risk management processes, and oversight mechanisms | Inadequate oversight of third-party access; risk acceptance decisions not informed by realistic threat scenarios; policy gaps identified through engagement |
| Identify | Red team reconnaissance reveals the organization’s actual attack surface, often identifying assets and exposures unknown to the security team | Shadow IT discovered during OSINT; unmanaged internet-facing systems; incomplete asset inventories; third-party connections not in CMDB |
| Protect | Red team exploitation directly tests protective controls — firewalls, endpoint protection, access controls, network segmentation, and security awareness training | Endpoint detection bypassed with custom payload; MFA not enforced on VPN; network segmentation failures allowing lateral movement; employees susceptible to spear-phishing |
| Detect | The red team’s ability to operate undetected is a direct measure of the organization’s detection capabilities | Dwell time metrics; C2 traffic not flagged by SIEM; data exfiltration not detected by DLP; anomalous authentication patterns missed |
| Respond | If detected, the organization’s incident response is tested in real-time against a realistic adversary | Incident response playbooks not followed; containment actions insufficient; communication breakdowns between SOC and management; time to containment metrics |
| Recover | Red team reporting informs recovery planning by identifying which systems would require remediation and restoration after a real compromise | Backup integrity validation; recovery time estimates; identification of single points of failure; business continuity plan gaps |
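The mapping in the table lends itself to a simple report-generation step. The sketch below is hypothetical — the findings are invented examples in the spirit of the table — and shows how raw red team findings could be grouped under the six CSF 2.0 functions in their canonical order.

```python
# Illustrative sketch: grouping red team findings by NIST CSF 2.0 function
# for executive reporting. The findings themselves are hypothetical.
from collections import defaultdict

CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

findings = [
    ("Detect", "C2 beaconing over HTTPS not flagged by SIEM"),
    ("Protect", "MFA not enforced on VPN gateway"),
    ("Identify", "Unmanaged internet-facing host found during OSINT"),
    ("Detect", "Staged exfiltration archive not detected by DLP"),
]

def group_by_function(findings):
    grouped = defaultdict(list)
    for function, description in findings:
        if function not in CSF_FUNCTIONS:
            raise ValueError(f"unknown CSF function: {function}")
        grouped[function].append(description)
    # Preserve the canonical CSF function ordering in the report,
    # omitting functions with no findings.
    return {f: grouped[f] for f in CSF_FUNCTIONS if grouped[f]}

report = group_by_function(findings)
for function, items in report.items():
    print(f"{function}: {len(items)} finding(s)")
```

Framing output this way lets a risk committee see at a glance which CSF functions the engagement stressed, and which produced no evidence either way.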
Practical Application
Organizations can use this mapping to structure their red team reporting in terms familiar to senior leadership and board members. Rather than presenting a list of technical vulnerabilities, the red team report can frame findings as gaps within specific CSF functions, tying them directly to the organization’s risk management framework.
This approach is particularly effective when communicating red team results to regulators, auditors, and board-level risk committees who are familiar with NIST CSF but may not understand the technical details of the engagement. See Metrics & Reporting for detailed guidance on structuring red team reports for executive audiences.
Intelligence-Led Testing
The Role of Threat Intelligence
Intelligence-led testing represents a paradigm shift from traditional vulnerability-focused security assessments to threat-focused adversary simulation. The core principle is simple: test against the threats that actually target your organization, not against generic vulnerability catalogs.
This approach requires integrating cyber threat intelligence (CTI) into every phase of the red team engagement — from scoping and planning through execution and reporting.
Threat Intelligence Integration Points
Pre-Engagement: Threat Profiling. Before the engagement begins, a CTI analyst (either internal or from a dedicated TI provider) produces a threat profile for the target organization. This profile identifies:
- Relevant threat actors: APT groups, financially motivated criminals, hacktivists, or insiders with demonstrated interest in the target’s sector, geography, or technology stack
- Historical campaigns: Previous attacks against the organization, its sector, or its supply chain
- TTPs: The specific tools, techniques, and procedures used by identified threat actors, mapped to MITRE ATT&CK
- Attack surface exposure: The organization’s external footprint as it would appear to an adversary conducting reconnaissance
- Emerging threats: New vulnerabilities, exploit kits, or attack techniques that are relevant to the organization’s technology stack
During Engagement: Adversary Emulation. The red team uses the threat profile to design and execute realistic attack scenarios. Rather than using their preferred tools and techniques, operators emulate the specific TTPs of the identified threat actors. This ensures the engagement tests the organization’s defenses against the threats most likely to materialize.
Adversary emulation plans typically specify:
- Initial access vectors (e.g., spear-phishing with macro-enabled documents, exploitation of public-facing applications)
- Malware or implant characteristics (e.g., specific C2 protocols, evasion techniques)
- Lateral movement methods (e.g., pass-the-hash, remote service exploitation)
- Persistence mechanisms (e.g., scheduled tasks, registry modifications)
- Data collection and exfiltration methods
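A plan of this shape is often easiest to maintain as structured data keyed by phase. The sketch below uses real MITRE ATT&CK technique IDs (e.g., T1566.001, T1190), but the plan itself and the helper function are illustrative assumptions, not a published emulation plan.

```python
# Hypothetical emulation plan for a fictional actor. The technique IDs are
# genuine MITRE ATT&CK identifiers; everything else is illustrative.
EMULATION_PLAN = {
    "initial_access": [
        ("T1566.001", "Spear-phishing with macro-enabled attachment"),
        ("T1190", "Exploit public-facing application"),
    ],
    "persistence": [
        ("T1053.005", "Scheduled task"),
        ("T1547.001", "Registry run key"),
    ],
    "lateral_movement": [
        ("T1550.002", "Pass-the-hash"),
    ],
    "exfiltration": [
        ("T1041", "Exfiltration over C2 channel"),
    ],
}

def techniques(plan: dict) -> set[str]:
    """Flatten a plan into the set of ATT&CK technique IDs it exercises."""
    return {tid for steps in plan.values() for tid, _ in steps}
```

Keeping the plan as data makes coverage reporting trivial: the flattened ID set can be diffed against the threat profile's TTPs to show which actor techniques the engagement did and did not exercise.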
Post-Engagement: Intelligence Feedback Loop
After the engagement, findings are fed back into the CTI function. The red team’s experience operating inside the target environment provides valuable intelligence about the organization’s actual defensive posture, which can be compared against the threat landscape to assess residual risk.
CTI Provider Selection
For organizations that lack internal CTI capabilities, selecting an external CTI provider is critical. Key evaluation criteria include:
- Source access: Does the provider have access to diverse intelligence sources — open source, deep/dark web, technical feeds, human intelligence networks, and industry sharing groups?
- Sector expertise: Does the provider have demonstrated experience analyzing threats to your specific sector?
- Analytical rigor: Does the provider employ structured analytical techniques (Analysis of Competing Hypotheses, Diamond Model) rather than relying solely on automated indicator feeds?
- Actionable output: Does the provider produce threat assessments that can directly inform red team planning, or only raw indicators of compromise?
- Independence: For regulated engagements (CBEST, TIBER-EU), the CTI provider must be organizationally independent from the red team provider.
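One common way to make provider selection defensible is a weighted scorecard over the criteria above. A minimal sketch, assuming a 0–5 rating scale and illustrative weights:

```python
# Illustrative weights for the five evaluation criteria listed above.
# The weights and the 0-5 scale are assumptions, not a standard.
CRITERIA_WEIGHTS = {
    "source_access": 0.25,
    "sector_expertise": 0.25,
    "analytical_rigor": 0.20,
    "actionable_output": 0.20,
    "independence": 0.10,
}

def score_provider(ratings: dict[str, int]) -> float:
    """Weighted average of 0-5 ratings; requires every criterion to be rated."""
    missing = CRITERIA_WEIGHTS.keys() - ratings.keys()
    if missing:
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)
```

For regulated engagements, "independence" is better treated as a hard pass/fail gate before scoring than as a weighted criterion, since CBEST and TIBER-EU require it outright.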
Red Team Engagement Lifecycle
End-to-End Process
The following diagram illustrates the complete lifecycle of a red team engagement, from initial scoping through remediation validation.
```mermaid
graph TD
    subgraph PreEngagement["Pre-Engagement"]
        PE1["Scoping &<br/>Objectives"] --> PE2["Rules of<br/>Engagement"]
        PE2 --> PE3["Legal<br/>Authorization"]
        PE3 --> PE4["Threat<br/>Intelligence"]
        PE4 --> PE5["Attack Plan<br/>Development"]
    end
    subgraph Execution["Execution"]
        EX1["Reconnaissance<br/>& OSINT"] --> EX2["Initial<br/>Access"]
        EX2 --> EX3["Establish<br/>Persistence"]
        EX3 --> EX4["Privilege<br/>Escalation"]
        EX4 --> EX5["Lateral<br/>Movement"]
        EX5 --> EX6["Objective<br/>Completion"]
    end
    subgraph PostEngagement["Post-Engagement"]
        PO1["Evidence<br/>Collection"] --> PO2["Report<br/>Writing"]
        PO2 --> PO3["Executive<br/>Debrief"]
        PO3 --> PO4["Technical<br/>Replay"]
        PO4 --> PO5["Remediation<br/>Planning"]
        PO5 --> PO6["Remediation<br/>Validation"]
    end
    PreEngagement --> Execution --> PostEngagement
    style PreEngagement fill:#1a1a2e,stroke:#e94560,color:#ffffff
    style Execution fill:#1a1a2e,stroke:#0f3460,color:#ffffff
    style PostEngagement fill:#1a1a2e,stroke:#16213e,color:#ffffff
```
Phase Details
Pre-Engagement
The pre-engagement phase establishes the foundation for a successful engagement. Rushing this phase invariably leads to scope disputes, legal complications, or test results that do not address the organization’s actual risk concerns.
- Scoping and Objectives: Define what the engagement is intended to demonstrate. Objectives should be expressed in business terms (“Can an adversary access the SWIFT payment system?”) rather than technical terms (“Can you get domain admin?”). Clear objectives ensure the red team’s activities remain focused and the results are meaningful to stakeholders.
- Rules of Engagement: The formal agreement governing how the engagement will be conducted. Covered in detail in the ROE section below.
- Legal Authorization: Written authorization from an individual with legal authority to permit the testing activities. This is not optional and must be obtained before any technical activity begins. For engagements spanning multiple jurisdictions, legal authorization may need to be obtained in each jurisdiction.
- Threat Intelligence: As described in the intelligence-led testing section, threat intelligence informs the engagement’s attack scenarios and adversary profile.
- Attack Plan Development: The red team develops a detailed plan mapping threat intelligence to specific attack paths, tools, and techniques. The plan includes contingencies for detection and alternative approaches.
Execution
- Reconnaissance and OSINT: Deep intelligence gathering on the target, including technical infrastructure, personnel, business processes, physical locations, and supply chain relationships. This phase may begin weeks before active testing.
- Initial Access: Executing the planned attack vector to gain a foothold in the target environment. This might involve spear-phishing, exploitation of external-facing applications, physical access, or supply chain compromise.
- Establish Persistence: Installing mechanisms to maintain access if the initial vector is closed. This typically involves deploying multiple persistence mechanisms across different systems to ensure resilience.
- Privilege Escalation: Elevating access from the initial foothold to the level required to achieve engagement objectives. This may involve exploiting local vulnerabilities, abusing misconfigurations, or leveraging harvested credentials.
- Lateral Movement: Moving from the initially compromised system to additional systems required to reach the objective. Careful operational security is essential to avoid triggering detection during lateral movement.
- Objective Completion: Achieving the defined engagement objectives — accessing target data, demonstrating impact on critical systems, or simulating adversary end-goals like ransomware deployment (in a controlled manner).
Post-Engagement
- Evidence Collection: Compiling all screenshots, logs, tool outputs, and artifacts gathered during the engagement. Thorough evidence collection is essential for credible reporting and replay sessions.
- Report Writing: Producing the engagement deliverable. This includes an executive summary, a detailed attack narrative, findings with severity ratings, detection and response assessment, and remediation recommendations.
- Executive Debrief: Presenting key findings and strategic recommendations to senior leadership. This presentation focuses on business risk and organizational resilience rather than technical details.
- Technical Replay: Walking through the engagement step-by-step with the blue team and technical stakeholders. This is the primary learning opportunity and should be conducted in a collaborative, non-adversarial manner.
- Remediation Planning: The organization develops a plan to address identified weaknesses, prioritized by risk and feasibility.
- Remediation Validation: The red team re-tests specific findings to confirm that remediations are effective. This closes the loop and provides assurance that the investment in red teaming has resulted in measurable security improvement. Case studies of this process in practice are covered in Real Engagements.
Rules of Engagement (ROE) Template
Purpose
The Rules of Engagement document is the governing agreement for a red team engagement. It defines what is permitted, what is prohibited, how emergencies are handled, and how the engagement will be managed. A well-crafted ROE protects both the red team and the target organization.
Key Sections
The following outline represents the essential sections of a comprehensive ROE document:
1. Authorization and Scope
- Name and title of authorizing individual
- Date and duration of authorization
- Specific systems, networks, and facilities in scope
- Systems, networks, and facilities explicitly out of scope
- Geographic and jurisdictional boundaries
- Third-party systems and the authorization status for each
2. Objectives
- Primary engagement objectives (stated in business terms)
- Secondary objectives (if primary objectives are achieved early)
- Specific adversary profile or threat scenario being emulated
- Success criteria and how objective completion will be demonstrated
3. Authorized Activities
- Technical testing: network exploitation, application attacks, wireless testing
- Social engineering: phishing, vishing, physical pretexting
- Physical access: tailgating, badge cloning, facility intrusion
- Data handling: what data can be accessed, copied, or exfiltrated
- Denial of service: whether any form of availability impact is permitted
- Specific tools or techniques that are pre-approved or pre-prohibited
4. Constraints and Limitations
- Systems or data that must not be modified under any circumstances
- Maximum acceptable impact thresholds (e.g., no disruption to production systems)
- Testing windows (time-of-day restrictions, blackout periods)
- Geographic restrictions on operator locations
- Legal or regulatory constraints (e.g., GDPR implications for accessing personal data)
5. Communication Plan
- Primary and backup communication channels between red team and control group
- Check-in frequency and format
- Escalation procedures for unexpected findings (e.g., discovery of existing compromise)
- Status reporting cadence and format
6. Emergency Procedures
- Emergency contact list (control group, IT operations, physical security, legal)
- “Get out of jail free” protocol — what happens if a red team operator is confronted by security or law enforcement
- Incident deconfliction — how to determine if a detected event is the red team or a real adversary
- System impact response — procedures if red team activity causes unintended disruption
- Emergency stop (“kill switch”) procedures and who can invoke them
7. Evidence and Data Handling
- What evidence the red team will collect (screenshots, logs, tool output)
- How sensitive data encountered during the engagement will be handled
- Data encryption and storage requirements
- Data retention period and destruction procedures
- Chain of custody requirements for evidence
8. Reporting Requirements
- Deliverables and their format
- Report distribution list and classification
- Draft review process and timeline
- Final delivery date and method
9. Legal Provisions
- Liability limitations and indemnification
- Non-disclosure agreements
- Compliance with applicable laws and regulations
- Insurance requirements
10. Signatures
- Authorizing executive (target organization)
- Red team lead (testing organization)
- Legal counsel (both parties, if required)
- Date of execution
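Sections 1 (scope) and 4 (testing windows) of this outline lend themselves to machine-readable encoding, so operator tooling can refuse out-of-scope targets automatically rather than relying on memory mid-engagement. A minimal sketch, assuming illustrative networks and a UTC testing window:

```python
import ipaddress
from datetime import datetime, timezone

# Encode ROE scope (section 1) and testing windows (section 4) as data.
# The networks and hours below are illustrative assumptions, not a template value.
IN_SCOPE = [ipaddress.ip_network("203.0.113.0/24")]
OUT_OF_SCOPE = [ipaddress.ip_network("203.0.113.64/27")]  # explicit carve-out always wins
TESTING_HOURS_UTC = range(1, 5)  # 01:00-04:59 UTC

def target_permitted(ip: str, when: datetime) -> bool:
    """True only if ip is in scope, not carved out, and inside the testing window."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in OUT_OF_SCOPE):
        return False
    if not any(addr in net for net in IN_SCOPE):
        return False
    return when.astimezone(timezone.utc).hour in TESTING_HOURS_UTC
```

Checking the carve-out list first mirrors how ROE scope disputes are usually resolved: an explicit out-of-scope entry overrides any broader in-scope range that contains it.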
Critical Considerations
- Specificity matters: A vague ROE creates ambiguity that can lead to disputes, legal exposure, or test results that stakeholders question. Be precise about what is and is not permitted.
- Living document: The ROE should include provisions for amendments during the engagement. If the red team discovers an unexpected situation, there must be a process for modifying the rules without halting the entire engagement.
- Legal review: The ROE should always be reviewed by legal counsel for both parties before execution. Red team activities, by their nature, involve actions that could be illegal without proper authorization.
- Regulatory alignment: For regulated engagements (CBEST, TIBER-EU), the ROE must comply with framework-specific requirements and may require regulatory approval.
Choosing the Right Framework
Selecting the appropriate framework or methodology depends on several factors:
Regulatory requirements: If your organization is subject to CBEST, TIBER-EU, or similar regulatory mandates, the framework choice is predetermined. Focus on understanding the specific requirements of your national implementation.
Organizational maturity: Organizations new to red teaming may benefit from starting with PTES-structured engagements that test fundamental controls before progressing to intelligence-led frameworks that assume a baseline of security maturity.
Objectives: If the goal is to test detection and response capabilities, frameworks with explicit blue team assessment components (CBEST, TIBER-EU, CREST STAR) are most appropriate. If the goal is to identify technical vulnerabilities in depth, PTES may be more suitable.
Budget and timeline: Intelligence-led frameworks require separate threat intelligence and red team engagements, typically running 3-6 months end-to-end. PTES-structured engagements can be conducted in shorter timeframes with smaller budgets.
Threat landscape: Organizations facing sophisticated, persistent threats (nation-state actors, advanced cybercriminal groups) benefit most from intelligence-led frameworks that emulate specific adversaries. Organizations primarily concerned with opportunistic threats may not need this level of sophistication.
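The selection factors above can be collapsed into a simple decision helper. The mapping below is a rough, illustrative restatement of this guidance, not a normative rule:

```python
def suggest_framework(regulated: bool, mature: bool, goal: str) -> str:
    """Illustrative starting-point mapping of the selection factors above.

    'goal' is a free-form label; only 'detection_and_response' is treated
    specially here (an assumption of this sketch).
    """
    if regulated:
        # Regulatory mandate predetermines the choice.
        return "CBEST / TIBER-EU (per national implementation)"
    if not mature:
        # Test fundamental controls before intelligence-led engagements.
        return "PTES-structured engagement"
    if goal == "detection_and_response":
        return "CREST STAR or intelligence-led red team"
    return "PTES with MITRE ATT&CK TTP mapping"
```

In practice the output is a starting point for scoping discussion, since experienced providers blend elements of several frameworks per engagement.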
In practice, experienced red team organizations draw from multiple frameworks, assembling a methodology tailored to each engagement. The MITRE ATT&CK framework serves as a common language that bridges these different approaches, providing granular TTP mapping regardless of which overarching methodology is employed.
Summary
Red teaming frameworks provide the structure that transforms ad hoc adversary simulation into repeatable, measurable, and defensible security assessments. The key takeaways:
- PTES provides a solid foundational lifecycle for testing engagements but lacks specific guidance on threat intelligence integration and adversary emulation
- CBEST and TIBER-EU represent the gold standard for intelligence-led red teaming in the financial sector, with DORA extending this approach into regulatory mandate across the EU
- CREST provides the accreditation and certification infrastructure that underpins quality assurance for red team providers and practitioners
- The Cyber Kill Chain remains a valuable communication tool despite its limitations, particularly for explaining adversary operations to non-technical audiences
- The Unified Kill Chain addresses the original Kill Chain’s gaps with significantly more granularity in post-exploitation phases
- NIST CSF provides a bridge between red team findings and enterprise risk management language
- Intelligence-led testing ensures that red team engagements simulate realistic threats rather than theoretical attack scenarios
- Rules of Engagement are the legal and operational foundation that protects both the red team and the target organization
The next page covers the MITRE ATT&CK framework in depth — the common language that ties these frameworks together at the tactical level.