Red Teaming Fundamentals

What Is Red Teaming?

Red teaming is the practice of rigorously challenging plans, policies, systems, and assumptions by adopting an adversary’s perspective. In cybersecurity, a red team is an authorized group of professionals who emulate real-world attackers — their tactics, techniques, and procedures (TTPs) — to test an organization’s detection and response capabilities under realistic conditions.

Unlike vulnerability assessments or standard penetration tests, red teaming is objective-driven. The red team is given a mission — exfiltrate specific data, compromise a critical system, disrupt a business process — and must find a way to achieve it, just as a real adversary would. The value lies not in a list of vulnerabilities, but in a demonstrated narrative of what an attacker can actually accomplish against your defenses.

Red teaming answers the question every CISO eventually faces: “If a sophisticated adversary targeted us today, what would actually happen?”


History of Red Teaming

Red teaming did not originate in cybersecurity. Its roots stretch back centuries, and understanding that lineage is essential for appreciating why the discipline works the way it does.

Prussian Kriegsspiel (1812)

The modern concept of adversarial simulation traces to the Prussian Kriegsspiel (literally “war game”), first presented at the Prussian court by Georg Leopold von Reisswitz in 1812 and refined into its mature form by his son, a Prussian artillery officer, in 1824. The system used detailed maps, dice-driven combat resolution, and — critically — two opposing teams with an umpire controlling information flow. Each side could only see what their scouts reported, forcing commanders to make decisions under uncertainty.

The Prussian General Staff adopted Kriegsspiel as a standard training tool after the Napoleonic Wars. It gave officers experience in thinking through an adversary’s options and likely responses — the intellectual foundation of all red teaming that followed. Prussia’s military success in the wars of German unification (1864–1871) was attributed in part to this systematic adversarial thinking.

World War II and Operational Deception

Both Allied and Axis powers employed dedicated teams to think from the enemy’s perspective during World War II. The British XX Committee (Double-Cross System) ran turned German agents, requiring analysts to constantly model German intelligence expectations. The U.S. Navy’s OP-20-G codebreaking unit had to think like Japanese naval commanders to prioritize which communications to decrypt and analyze.

Post-war, the concept of dedicated adversarial analysis became embedded in Western military doctrine.

Cold War and the RAND Corporation (1950s–1980s)

The RAND Corporation formalized red teaming as an analytical methodology during the Cold War. RAND analysts were tasked with thinking like Soviet military planners — identifying how the USSR might launch a first strike, exploit NATO weaknesses, or respond to U.S. nuclear posture changes.

Key contributions from this era:

  • Herman Kahn’s escalation theory, which required modeling adversary decision-making under nuclear threat
  • Albert Wohlstetter’s basing studies, which used adversarial analysis to expose vulnerabilities in Strategic Air Command’s bomber deployment
  • RAND’s “Red Team/Blue Team” terminology, which became the standard nomenclature still used today

The fundamental insight from RAND’s work was that organizations are systematically bad at identifying their own weaknesses. Cognitive biases — groupthink, confirmation bias, anchoring — prevent defenders from seeing what attackers see. An independent adversarial team, freed from organizational assumptions, consistently found vulnerabilities that internal teams missed.

Tiger Teams (1970s)

In the 1970s, the U.S. government began assembling Tiger Teams — small groups of experts authorized to break into government facilities and systems to test their security. The term originated in engineering (NASA used “tiger teams” for critical failure analysis), but the security application marked a pivotal shift: organizations were now deliberately attacking themselves.

Notable Tiger Team activities included:

  • Physical penetration tests of military installations and nuclear facilities
  • TEMPEST assessments testing electromagnetic emanation security
  • Early computer security evaluations of mainframe systems, documented in reports like the 1970 Ware Report and the 1972 Anderson Report

The Anderson Report (1972) is particularly significant. James P. Anderson’s work for the U.S. Air Force explicitly described the concept of a “penetration team” — external experts authorized to attempt computer system compromise using the same methods a hostile intelligence agency would employ. This is the direct ancestor of modern red teaming.

Transition to Cybersecurity (1990s–2000s)

As organizations became network-dependent in the 1990s, the military red teaming concept migrated to information security:

  • 1997: “Eligible Receiver” — The NSA conducted a classified red team exercise against Department of Defense networks. Using only publicly available tools and techniques, the red team demonstrated the ability to disrupt military command and control. The results were alarming enough to catalyze the creation of what eventually became U.S. Cyber Command.
  • 1998: “Solar Sunrise” — While not a red team exercise (it was an actual intrusion by teenagers), the incident validated Eligible Receiver’s findings and accelerated government cybersecurity investment.
  • 2001–2003: Post-9/11 red teaming — The 9/11 Commission Report (2004) explicitly cited failures of imagination and adversarial thinking. The Department of Homeland Security and intelligence community expanded red team programs significantly.
  • 2002: “Millennium Challenge 2002” — A $250 million military wargame in which the red team commander, Lt. Gen. Paul Van Riper, used unconventional tactics to sink a significant portion of the Blue fleet on the first day. When the exercise was reset with constraints on red team behavior, it became a cautionary tale about the importance of allowing red teams genuine freedom to operate.

Modern Red Teaming (2010s–Present)

The 2010s saw red teaming professionalize and expand into the private sector:

  • 2013: MITRE ATT&CK framework development began, providing a common language for describing adversary behavior (see MITRE ATT&CK for a deep dive)
  • 2015–2018: Commercial red team services became mainstream, offered by consultancies and specialized firms
  • 2018: TIBER-EU (Threat Intelligence-Based Ethical Red Teaming) framework published by the European Central Bank, establishing standards for financial sector red teaming
  • 2020s: Assumed breach engagements became the norm, reflecting the reality that perimeter compromise is often trivial
  • 2023 onward: AI-augmented red teaming emerged, with both red and blue teams leveraging machine learning for attack path discovery and anomaly detection

timeline
    title Evolution of Red Teaming
    1812 : Prussian Kriegsspiel
         : Formalized adversarial wargaming
    1950s : RAND Corporation
          : Cold War adversarial analysis
          : Red/Blue team terminology coined
    1970s : Tiger Teams
          : Government penetration testing
          : Anderson Report (1972)
    1997 : Eligible Receiver
         : First major cyber red team exercise
    2002 : Millennium Challenge
         : Lessons in red team freedom
    2013 : MITRE ATT&CK
         : Common adversary behavior language
    2018 : TIBER-EU Framework
         : Industry red team standards
    2020s : Assumed Breach Era
          : AI-augmented operations

Definitions: The Color Spectrum of Security Teams

Security operations use a color-coded team model. Understanding each role is essential for effective red teaming.

Red Team

The offensive team. Red teamers emulate real-world adversaries to test an organization’s defenses end-to-end. They operate covertly, using the same tactics an actual attacker would — phishing, exploitation, lateral movement, data exfiltration — with the goal of achieving specific objectives without being detected.

A red team engagement is not about finding every vulnerability. It is about demonstrating realistic attack paths and testing whether the blue team can detect and respond to sophisticated threats.

Blue Team

The defensive team. Blue team members are responsible for detecting, responding to, and remediating security incidents. This includes SOC analysts, incident responders, threat hunters, and security engineers. During a red team engagement, the blue team typically does not know an exercise is occurring — they respond to red team activity as they would to a real intrusion.

Purple Team

A collaborative function (not always a permanent team) where red and blue team members work together to maximize defensive improvement. Purple teaming involves the red team revealing their techniques and the blue team tuning detections in near-real-time. This iterative approach accelerates the feedback loop between offense and defense.

Purple teaming is not a replacement for red teaming — it serves a different purpose. Red teaming tests defenses under realistic conditions; purple teaming optimizes those defenses through direct collaboration.

White Team

The oversight team. White team members are the trusted agents who know the engagement is occurring, manage rules of engagement, serve as the communication bridge between the red team and organizational leadership, and can halt operations if safety concerns arise. The white team typically includes the CISO or a senior security leader and legal counsel.

Adversary Simulation vs. Penetration Testing

These terms are often used interchangeably, but they describe fundamentally different activities:

Adversary simulation (red teaming) emulates a specific threat actor’s TTPs against a specific organization to test detection and response. The goal is realism. The red team models a plausible adversary — an APT group, a ransomware operator, an insider threat — and executes their playbook.

Penetration testing identifies and exploits vulnerabilities within a defined scope to assess technical security posture. The goal is coverage. The pentester methodically tests systems for weaknesses and documents findings.

The distinction matters because it determines methodology, reporting, and organizational value. A penetration test might find 47 vulnerabilities rated by severity. A red team engagement might find 3 attack paths that demonstrate how an adversary could steal your most sensitive data without triggering a single alert. Both are valuable; they serve different purposes.


Comparison: Red Team vs. Pentest vs. Vulnerability Assessment vs. Bug Bounty

Understanding when to use each approach is a critical decision for security leaders.

| Dimension | Red Team | Penetration Test | Vulnerability Assessment | Bug Bounty |
|---|---|---|---|---|
| Scope | Full organization (people, process, technology) | Defined systems, networks, or applications | Defined asset inventory | Defined assets (usually external-facing) |
| Duration | 4–12 weeks typical; some run continuously | 1–4 weeks typical | Days to weeks (scanning) | Ongoing / continuous |
| Methodology | Adversary emulation based on threat intelligence; objective-driven | Structured testing methodology (OWASP, PTES, OSSTMM) | Automated scanning with manual validation | Researcher-driven; varies by individual |
| Stealth | Maximum stealth required; detection = failure for the red team | Limited stealth; defender awareness varies | No stealth; fully coordinated | No stealth; researchers submit findings |
| Attacker Knowledge | Black box or assumed breach; minimal prior info | White, gray, or black box | Typically full asset knowledge | Black box; external perspective |
| Reporting | Attack narrative with detection gaps, business impact | Vulnerability list with severity ratings, remediation | Scan results with risk ratings | Individual vulnerability reports |
| Primary Value | Tests detection & response; reveals realistic attack paths | Identifies exploitable vulnerabilities | Broad vulnerability identification at scale | Continuous external testing; diverse perspectives |
| Cost | $100K–$500K+ per engagement | $15K–$150K per engagement | $5K–$50K per assessment | Bounty payouts ($500–$100K+ per finding) |
| When to Use | Mature security program; need to test blue team | Pre-launch, compliance, periodic assurance | Regular hygiene; asset discovery | Supplement internal testing; external perspective |

Practitioner insight: These approaches are complementary, not competing. A mature security program uses all four. Vulnerability assessments provide breadth, penetration tests provide depth on specific systems, bug bounties provide continuous external pressure, and red team engagements provide the realistic end-to-end validation that ties everything together.


The Adversary Mindset

Technical skills are necessary for red teaming but insufficient. What distinguishes a red teamer from a penetration tester is how they think. The adversary mindset is a cognitive framework that can be developed and practiced.

Think Like an Attacker

Real attackers do not follow methodologies. They do not limit themselves to a scope document. They look for the path of least resistance to their objective, and they exploit whatever they find — technical vulnerabilities, human psychology, business process weaknesses, supply chain dependencies.

Adopting the adversary mindset means asking:

  • “What do I want?” — Start with the objective, not the target list. An attacker wants data, access, disruption, or leverage. Everything else is a means to that end.
  • “What would I do if I had no rules?” — Then determine which of those approaches can be safely and legally simulated within the engagement rules.
  • “What assumptions are the defenders making?” — Then violate those assumptions. If they assume attacks come from the internet, attack from the inside. If they assume phishing targets email, use voice calls or physical access. If they trust a particular vendor, compromise the vendor relationship.
  • “What is the easiest path, not the most impressive?” — Attackers are pragmatic. They will use a stolen password before they write a zero-day exploit. Complexity is the enemy of operational security.

Objective-Driven Thinking

A red team operation is structured around objectives, not targets. The difference is profound:

  • Target-focused: “Compromise the domain controller” — This is a pentest deliverable.
  • Objective-focused: “Demonstrate the ability to exfiltrate customer PII to an external system without triggering any SOC alerts” — This is a red team objective.

Objective-driven operations force the red team to chain together multiple capabilities — initial access, persistence, privilege escalation, lateral movement, collection, and exfiltration — in a way that tests the full defensive stack. Each phase must succeed, and each phase must remain undetected.

The OODA Loop

Colonel John Boyd’s OODA Loop (Observe-Orient-Decide-Act) is a decision-making framework from fighter combat doctrine that applies directly to red team operations.

graph LR
    O[Observe<br/>Gather intel on<br/>target environment] --> OR[Orient<br/>Analyze findings<br/>against objectives]
    OR --> D[Decide<br/>Select optimal<br/>attack path]
    D --> A[Act<br/>Execute the<br/>chosen technique]
    A --> O

    style O fill:#1a365d,stroke:#2b6cb0,color:#fff
    style OR fill:#2c5282,stroke:#2b6cb0,color:#fff
    style D fill:#2b6cb0,stroke:#63b3ed,color:#fff
    style A fill:#3182ce,stroke:#63b3ed,color:#fff

Applied to red teaming:

| OODA Phase | Red Team Application | Example |
|---|---|---|
| Observe | Reconnaissance, network enumeration, monitoring blue team response | Identify that the SOC monitors east-west traffic but not cloud API calls |
| Orient | Analyze observations against mission objectives; update mental model of the environment | Realize the cloud environment is a blind spot; reorient attack path through Azure |
| Decide | Select the next technique, target, or pivot based on current understanding | Decide to move C2 traffic to Azure Functions to blend with legitimate cloud traffic |
| Act | Execute the chosen action; collect results | Deploy serverless C2 redirector; establish new communication channel |

The red team’s goal is to cycle through the OODA loop faster than the blue team. When the red team acts before the blue team can orient to their previous action, the defenders fall behind and lose the initiative. This mirrors real-world adversary behavior — advanced threat actors operate at a tempo that outpaces defender response.
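To make the loop concrete, here is a minimal Python sketch of an OODA cycle as a control structure. It is purely illustrative: the phase functions and the canned observation are hypothetical stand-ins for real tradecraft, not any actual tooling API.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """One piece of intelligence gathered about the target environment."""
    source: str
    detail: str

@dataclass
class OODAState:
    """Working state carried around the loop."""
    objective: str
    observations: list = field(default_factory=list)

def observe(state: OODAState) -> None:
    # Stand-in for recon, enumeration, or watching the blue team's response.
    state.observations.append(
        Observation("recon", "SOC monitors east-west traffic but not cloud API calls"))

def orient(state: OODAState) -> list:
    # Weigh observations against the objective; here, a crude filter for
    # anything that looks like a monitoring gap.
    return [o for o in state.observations if "not" in o.detail]

def decide(gaps: list) -> str:
    # Pick the path of least resistance exposed by orientation.
    return f"route C2 through blind spot: {gaps[0].detail}" if gaps else "keep observing"

def act(decision: str) -> None:
    # Execute, then feed results back into the next Observe phase.
    print(f"[act] {decision}")

state = OODAState(objective="exfiltrate staged data undetected")
for cycle in range(2):  # each iteration is one full trip around the loop
    observe(state)
    act(decide(orient(state)))
```

The structural point is the feedback edge: each Act feeds the next Observe, so the side that iterates faster sets the tempo.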

Cognitive Biases That Favor the Attacker

Red teamers exploit the same cognitive biases that have undermined defenders throughout history. Understanding these biases is both a weapon for the red team and a warning for the blue team:

  • Normalcy bias — Defenders assume that because nothing bad has happened, nothing bad will happen. Unusual activity is rationalized as benign. Red teams exploit this by making their activity look routine.
  • Confirmation bias — Defenders see what they expect to see. A SOC analyst who has investigated 500 false positives will interpret the 501st alert the same way, even if this one is real.
  • Anchoring — Once a defender forms an initial assessment (“it is just a misconfigured service account”), they resist updating that assessment even when new evidence contradicts it.
  • Automation bias — Over-reliance on automated tools creates blind spots. If the SIEM does not alert, defenders assume nothing happened. Red teams deliberately operate below detection thresholds.
  • Sunk cost fallacy — Organizations continue to invest in security controls that are not working because they have already spent heavily on them. Red teams expose these failures.

Understanding and exploiting these biases is not manipulation — it is realistic adversary emulation. Real threat actors depend on these same cognitive weaknesses.

Creative Problem Solving

Red teamers regularly encounter situations where the obvious path is blocked. A firewall rule prevents outbound connections. An EDR product catches the standard tool. A physical door requires a badge the team does not have. The adversary mindset treats these as constraints to work around, not barriers to stop at.

Techniques for creative problem solving:

  • Inversion: Instead of asking “How do I get past this control?” ask “What would make this control irrelevant?”
  • Analogy: How have other industries solved similar access problems? Social engineering techniques often borrow from confidence games, sales psychology, and intelligence tradecraft.
  • Assumption mapping: List every assumption the defenders are making. Each assumption is a potential attack vector.
  • Time shifting: If a control is strong during business hours, is it equally strong at 2 AM on a holiday weekend?

Persistence and Patience

Real adversaries — particularly nation-state actors and organized crime groups — are patient. They spend weeks or months in reconnaissance. They establish multiple persistence mechanisms. They wait for opportunities rather than forcing them.

Red teams must cultivate the same patience. Rushing leads to detection. An engagement where the red team is caught on day two because they moved too aggressively has limited value compared to one where they operated undetected for six weeks and demonstrated a complete attack chain.


Organizational Maturity Requirements

Red teaming is not appropriate for every organization at every stage. Commissioning a red team engagement before the organization is ready wastes budget and produces frustration rather than improvement.

Prerequisites for Red Teaming

Before engaging a red team, an organization should have:

  1. A functioning blue team or SOC — There must be someone to detect and respond to red team activity. If no one is watching, the red team will succeed trivially, and the engagement will prove nothing beyond what was already known.

  2. Established security controls — Firewalls, endpoint detection, log aggregation, identity management. These do not need to be perfect, but they need to exist. The red team tests whether controls work; they cannot test controls that are absent.

  3. Incident response capability — A documented and practiced IR process. The engagement will generate incidents that the blue team must handle. If there is no IR process, the organization needs tabletop exercises before red teaming.

  4. Executive buy-in and understanding — Leadership must understand that a red team engagement will reveal failures. If findings will be met with blame rather than improvement, the engagement will be counterproductive. The CISO must have executive support to act on findings.

  5. Threat intelligence context — Understanding which adversaries target your industry and what they are after. This intelligence informs the red team’s emulation profile. Without it, the red team is simulating a generic attacker rather than a relevant one. See MITRE ATT&CK for frameworks that structure this intelligence.

  6. Prior penetration testing — The organization should have addressed findings from at least one penetration test cycle. Red teaming is wasted if the team achieves objectives by exploiting well-known, unpatched vulnerabilities that a basic pentest would have found.

Security Program Maturity Model

The following maturity model helps organizations assess readiness for different types of security testing:

| Maturity Level | Description | Security Testing Appropriate | Key Characteristics |
|---|---|---|---|
| Level 1: Initial | Ad hoc security; no formal program | Vulnerability scanning only | No dedicated security staff; reactive to incidents; compliance-driven at best |
| Level 2: Developing | Basic controls deployed; part-time security staff | Vulnerability assessments, basic penetration tests | Firewall and AV deployed; some logging; no centralized monitoring; limited IR capability |
| Level 3: Defined | Formal security program; dedicated team; documented processes | Comprehensive penetration testing, initial purple team exercises | SOC established; SIEM deployed; IR plan documented and practiced; basic threat hunting |
| Level 4: Managed | Metrics-driven security; proactive threat hunting; mature SOC | Full red team engagements, advanced purple teaming | 24/7 SOC; threat intelligence program; automated response capabilities; regular IR exercises |
| Level 5: Optimizing | Continuous improvement; adversary emulation program; security embedded in culture | Continuous red teaming, assumed breach exercises, custom adversary emulation | Dedicated internal red team; purple team cadence; threat-informed defense; security champions across the business |

Practitioner insight: Most organizations that request red teaming are at Level 2 or 3. This is not a criticism — it reflects the reality that red teaming has become a buzzword, and organizations often want it before they are ready. A responsible red team provider will assess maturity and recommend the appropriate engagement type. A Level 2 organization benefits far more from a thorough penetration test than a red team engagement where the team achieves all objectives on the first day.


When to Red Team: A Decision Framework

Deciding whether to conduct a red team engagement requires evaluating several factors; a checklist sketch follows the indicator lists below.

Indicators That an Organization Is Ready

  • The SOC has been operational for 12+ months and has matured its detection capabilities
  • The organization has completed at least two penetration test cycles and remediated critical findings
  • An incident response plan exists, has been tested via tabletop exercises, and has been executed during at least one real incident
  • Executive leadership has explicitly requested adversary simulation and understands the implications
  • The organization faces realistic threat actor targeting (based on industry, data holdings, or geopolitical exposure)
  • There is budget and organizational will to act on red team findings

Indicators That an Organization Is NOT Ready

  • No centralized logging or monitoring capability
  • The last penetration test found critical unpatched vulnerabilities that remain unaddressed
  • No incident response plan or the plan has never been tested
  • Security team would view red team findings as a personal attack rather than an improvement opportunity
  • Leadership expects red teaming to “prove security is working” rather than to identify gaps
  • No threat intelligence context — the organization cannot articulate who might attack them or why
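As a rough illustration, the indicators above can be folded into a simple checklist evaluation. This is a hypothetical sketch; the criteria names and thresholds paraphrase the lists in this section and are not drawn from any formal standard.

```python
# Hypothetical readiness checklist, paraphrasing the indicators above.
READINESS_CRITERIA = {
    "SOC operational for 12+ months": True,
    "Two pentest cycles completed and remediated": True,
    "IR plan tested via tabletop and real incident": True,
    "Executive leadership requested adversary simulation": True,
    "Realistic threat actor targeting established": True,
    "Budget and will to act on findings": False,
}

def recommend_engagement(criteria: dict) -> str:
    """Map the checklist to a rough engagement recommendation."""
    missing = [name for name, met in criteria.items() if not met]
    if not missing:
        return "Ready for a full red team engagement."
    if len(missing) <= 2:
        return "Close these gaps first (purple teaming may help): " + "; ".join(missing)
    return "Not ready: start with penetration testing and IR tabletop exercises."

print(recommend_engagement(READINESS_CRITERIA))
```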

Common Anti-Patterns

Red teaming too early: The most common mistake. An organization at maturity Level 2 commissions a red team engagement. The red team compromises the domain within hours using basic techniques. The report says “everything is broken.” The organization is overwhelmed, does not know where to start, and the investment yields minimal actionable improvement.

Red teaming as compliance checkbox: Some organizations conduct red team engagements purely to satisfy regulatory requirements or board expectations. The engagement is scoped so narrowly (or the red team so constrained) that it cannot produce meaningful findings. This creates a false sense of security.

Red teaming without follow-through: The engagement produces an excellent report. The report goes into a drawer. No detections are improved, no processes are changed, no controls are hardened. The same attack paths will work again next year.

Punishing the blue team: If the blue team is penalized for failing to detect red team activity, the organization has fundamentally misunderstood the purpose of the exercise. Red teaming is a tool for improvement, not evaluation. Blue teams that fear red team exercises will resist the program, hide failures, or lobby for engagement restrictions that reduce realism.


The Red Team Engagement Lifecycle

A red team engagement follows a structured lifecycle, even though the operational phase is deliberately unstructured and adaptive.

graph TD
    A[1. Scoping & Planning<br/>Define objectives, RoE,<br/>threat profile] --> B[2. Threat Intelligence<br/>Research target org,<br/>select adversary TTPs]
    B --> C[3. Reconnaissance<br/>OSINT, infrastructure<br/>mapping, target ID]
    C --> D[4. Initial Access<br/>Phishing, exploitation,<br/>physical, supply chain]
    D --> E[5. Post-Exploitation<br/>Persistence, privilege<br/>escalation, lateral movement]
    E --> F[6. Objective Execution<br/>Data exfiltration,<br/>business impact demo]
    F --> G[7. Reporting & Debrief<br/>Attack narrative,<br/>detection gap analysis]
    G --> H[8. Remediation Support<br/>Purple team follow-up,<br/>detection tuning]
    H --> I[9. Retest & Validation<br/>Verify improvements,<br/>close gaps]

    style A fill:#1a365d,stroke:#2b6cb0,color:#fff
    style B fill:#1a365d,stroke:#2b6cb0,color:#fff
    style C fill:#2c5282,stroke:#2b6cb0,color:#fff
    style D fill:#c53030,stroke:#e53e3e,color:#fff
    style E fill:#c53030,stroke:#e53e3e,color:#fff
    style F fill:#c53030,stroke:#e53e3e,color:#fff
    style G fill:#2c5282,stroke:#2b6cb0,color:#fff
    style H fill:#2f855a,stroke:#48bb78,color:#fff
    style I fill:#2f855a,stroke:#48bb78,color:#fff

Each phase of this lifecycle is covered in detail across the subsequent pages of this topic. For command-and-control infrastructure, see C2 Frameworks. For the tactical execution mapped to ATT&CK, see MITRE ATT&CK.


The Red Team Charter

Every red team — whether internal or contracted — should operate under a formal charter. The charter is a foundational document that defines the team’s authority, purpose, and boundaries.

Purpose Statement

The charter should articulate why the red team exists in clear, business-aligned language:

“The Red Team exists to provide realistic adversary simulation that identifies gaps in [Organization]’s ability to prevent, detect, and respond to targeted attacks against our critical assets and business operations.”

The purpose should be tied to organizational risk management, not just technical security testing.

Scope of Authority

The charter must define:

  • What the red team is authorized to do — Specific techniques, target categories, and environments
  • What is explicitly out of scope — Production systems with safety implications, specific business-critical windows, partner/customer systems
  • Escalation procedures — How the red team reports imminent dangers discovered during an engagement (e.g., an active real-world breach)
  • Evidence handling — How the red team stores, protects, and eventually destroys sensitive data obtained during engagements
  • Legal protections — Authorization documentation that protects red team members from criminal or civil liability (the “get out of jail free” letter)

Reporting Structure

Red team reporting structure has a direct impact on effectiveness:

| Reporting Model | Advantages | Disadvantages |
|---|---|---|
| Reports to CISO | Close alignment with security strategy; fast remediation loops | CISO may suppress unfavorable findings; potential conflict of interest |
| Reports to CRO/Risk | Independence from security org; objective assessment | May lack technical context for scoping; slower remediation coordination |
| Reports to Board/Audit Committee | Maximum independence; strongest governance | Most politically complex; may create adversarial relationship with security team |
| External (contracted) | Fresh perspective; no organizational bias | Less institutional knowledge; engagement boundaries limit persistence |

Practitioner insight: The most effective model in practice is a hybrid — an internal red team that reports to the CISO for day-to-day operations but has a dotted-line to the audit committee or CRO for annual reporting. This provides operational efficiency while maintaining independence. External red teams should supplement, not replace, internal capability.

Rules of Engagement (RoE)

The Rules of Engagement are the operational guardrails for every engagement. They are negotiated between the red team and the white team before operations begin and documented in writing. A thorough RoE document includes:

  • Authorized targets — Specific networks, systems, physical locations, and personnel in scope
  • Prohibited actions — Techniques that could cause harm (e.g., denial of service against production systems, social engineering of executives who have not consented)
  • Operational windows — Times when testing is permitted and any blackout periods (see the guardrail sketch after this list)
  • Communication protocols — Emergency contact information, check-in schedules, incident escalation procedures
  • Deconfliction process — How the red team and white team distinguish red team activity from real attacks. This typically involves a secure communication channel and timestamp logging
  • Data handling requirements — Classification, storage, transmission, and destruction requirements for any data the red team accesses
  • Stop conditions — Specific events that require the red team to halt operations immediately (e.g., discovery of an active real-world breach, potential safety impact)
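Several of these guardrails, notably authorized targets and operational windows, lend themselves to automated pre-action checks in red team tooling. The following is a minimal sketch under that assumption; the idea of encoding the RoE as data, and every value in it (networks, blackout hours), is an illustrative choice, not a standard format.

```python
from datetime import datetime, time
from ipaddress import ip_address, ip_network

# Hypothetical RoE encoded as data: authorized ranges and a blackout window.
AUTHORIZED_NETWORKS = [ip_network("10.20.0.0/16")]          # in-scope targets
BLACKOUT_START, BLACKOUT_END = time(8, 0), time(18, 0)      # no testing in business hours

def action_permitted(target_ip: str, now: datetime) -> bool:
    """Return True only if the target is in scope and outside the blackout window."""
    in_scope = any(ip_address(target_ip) in net for net in AUTHORIZED_NETWORKS)
    in_blackout = BLACKOUT_START <= now.time() <= BLACKOUT_END
    return in_scope and not in_blackout

# 02:30 on a weekend, in-scope host: permitted.
print(action_permitted("10.20.5.17", datetime(2026, 3, 1, 2, 30)))    # True
# Same time, out-of-scope host: blocked.
print(action_permitted("192.168.1.5", datetime(2026, 3, 1, 2, 30)))   # False
```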

Key Principles of Red Teaming

These principles distinguish professional red teaming from ad hoc offensive security testing.

1. Assume Breach Mindset

The most valuable red team engagements increasingly start from an assumed breach position. Rather than spending weeks on initial access (which, for most organizations, is achievable through commodity phishing), the red team begins with a foothold — a compromised workstation, a set of valid credentials, or access to an internal network segment.

This approach maximizes the engagement’s value by focusing time on the harder questions: Can the organization detect lateral movement? Can they identify privilege escalation? Will they notice data staging and exfiltration? These are the capabilities that matter against sophisticated adversaries who have already bypassed perimeter defenses.

Assumed breach does not mean initial access testing is unimportant. It means the organization has made a strategic decision about where red team time creates the most value.

2. Objective-Based Operations

Every red team engagement must have clearly defined objectives that align with organizational risk. Good objectives are (a structured sketch follows this list):

  • Specific: “Access the SWIFT payment system and demonstrate the ability to initiate a transaction” — not “test the financial systems”
  • Measurable: Success or failure must be unambiguous
  • Threat-informed: Derived from realistic adversary targeting of the organization’s actual crown jewels
  • Business-relevant: Framed in terms executives understand — revenue impact, regulatory exposure, reputational damage
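One way to keep objectives honest against these four criteria is to capture them as structured data. The sketch below is a hypothetical schema; the field names and the SWIFT example simply restate the objective style described above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RedTeamObjective:
    """Hypothetical schema mirroring the four criteria above."""
    statement: str          # specific: names the system and the action
    success_criterion: str  # measurable: unambiguous pass/fail condition
    threat_basis: str       # threat-informed: the adversary behavior it models
    business_impact: str    # business-relevant: framed for executives

objective = RedTeamObjective(
    statement="Access the SWIFT payment system and demonstrate the ability "
              "to initiate a transaction",
    success_criterion="A test transaction reaches pending state with no SOC alert",
    threat_basis="Financially motivated intrusion sets targeting payment systems",
    business_impact="Direct monetary loss and regulatory exposure",
)
print(objective.statement)
```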

3. Realistic Threat Emulation

The red team should emulate adversaries that plausibly target the organization. A regional bank does not need emulation of a nation-state APT using zero-day exploits. A defense contractor does. Threat emulation profiles should be built from:

  • Threat intelligence on adversary groups known to target the organization’s industry
  • MITRE ATT&CK mappings of those groups’ known TTPs (see MITRE ATT&CK)
  • Tool selection that reflects adversary capability — using C2 frameworks and techniques appropriate to the emulated threat tier

This does not mean the red team only uses techniques the threat actor has been observed using. It means the red team operates at a comparable capability level and targets the same type of objectives the adversary would pursue.
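An emulation profile is often easiest to work with as structured data that maps the emulated actor's TTPs to planned engagement activity. In the sketch below the ATT&CK technique IDs are real, but the profile layout, the actor description, and the capability tier field are illustrative assumptions.

```python
# Hypothetical emulation profile. The ATT&CK technique IDs are real;
# the layout, actor description, and tooling tier are illustrative.
EMULATION_PROFILE = {
    "emulated_actor": "financially motivated ransomware operator (placeholder)",
    "capability_tier": "criminal / commodity tooling",
    "ttps": {
        "T1566.001": "Initial access via spearphishing attachment",
        "T1059.001": "Execution through PowerShell",
        "T1021.002": "Lateral movement over SMB admin shares",
        "T1041": "Exfiltration over the C2 channel",
    },
}

for technique_id, plan in EMULATION_PROFILE["ttps"].items():
    print(f"{technique_id}: {plan}")
```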

4. Minimal Footprint

Professional red teams operate with the minimum footprint necessary to achieve objectives. This means:

  • Minimal tooling on target — Prefer living-off-the-land techniques over deploying custom malware when feasible
  • Clean operational infrastructure — Dedicated, compartmentalized infrastructure for each engagement; nothing reused across clients
  • Controlled data access — Access only the data necessary to prove the objective was achieved; do not exfiltrate real sensitive data when a screenshot of access is sufficient
  • Thorough cleanup — Remove all persistence mechanisms, tools, and accounts created during the engagement. Document everything that was deployed so the organization can verify removal

5. Continuous Documentation

Red teams must document every action taken during an engagement; a logging sketch follows the list below. This serves multiple purposes:

  • Legal protection — Proves the team operated within authorized scope
  • Reproducibility — Allows the organization to understand exactly what happened and replay the attack for purple team exercises
  • Detection mapping — Every red team action is an opportunity to ask “should this have been detected?” and map the answer against existing detection coverage
  • Deconfliction — If the blue team detects activity, the white team can quickly determine whether it is the red team or a real adversary
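A minimal sketch of per-action logging, assuming a JSON-lines format: the field names are illustrative choices rather than a standard. Recording an ATT&CK technique ID with each action supports the detection-mapping purpose, and the timestamp supports deconfliction.

```python
import json
from datetime import datetime, timezone

def log_action(technique_id: str, target: str, description: str,
               detected: bool | None = None) -> str:
    """Emit one timestamped engagement log entry as a JSON line.

    `detected` is typically back-filled during detection-gap analysis."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "technique_id": technique_id,  # ATT&CK ID, for detection mapping
        "target": target,
        "description": description,
        "detected": detected,
    }
    line = json.dumps(entry)
    print(line)  # in practice: append to a protected, tamper-evident log store
    return line

log_action("T1021.002", "10.20.5.17",
           "Lateral movement to file server via admin share")
```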

6. Safety First

The red team must never cause unintended harm. This principle is non-negotiable:

  • No denial of service to production systems unless explicitly authorized and controlled
  • No destruction of data — ever, under any circumstances
  • Immediate escalation of any discovered real-world compromise or safety-critical vulnerability
  • Respect for people — Social engineering should be conducted professionally and within agreed boundaries. Manipulation tactics should not cause lasting psychological distress or target individuals in vulnerable situations

Building the Foundation

Red teaming is a discipline, not a service. It requires organizational maturity to commission, technical and creative skill to execute, and institutional courage to act on the findings. The principles and frameworks in this page establish the foundation — the subsequent pages in this topic build upon it with tactical depth.

Recommended reading order for this topic:

  1. Fundamentals (this page) — History, philosophy, principles
  2. Planning & Scoping — Engagement planning, RoE development, threat profiling
  3. Reconnaissance & OSINT — Intelligence gathering techniques
  4. MITRE ATT&CK — Adversary behavior framework and TTP mapping
  5. Initial Access Techniques — Getting in: phishing, exploitation, physical
  6. C2 Frameworks — Command and control infrastructure
  7. Lateral Movement & Privilege Escalation — Moving through the environment
  8. Reporting & Debrief — Communicating findings effectively
  9. Purple Teaming — Collaborative improvement
  10. Building a Red Team Program — Standing up an internal capability

Summary

| Principle | Core Question | Anti-Pattern |
|---|---|---|
| Assume Breach | “What happens after the perimeter fails?” | Spending entire engagement on initial access |
| Objective-Based | “What does the adversary actually want?” | Reporting vulnerability counts instead of attack narratives |
| Realistic Emulation | “Would a real threat actor do this?” | Using techniques beyond the emulated adversary’s capability |
| Minimal Footprint | “Am I leaving unnecessary traces?” | Deploying heavy tooling when living-off-the-land suffices |
| Continuous Documentation | “Can I prove every action I took?” | Relying on memory for timeline reconstruction |
| Safety First | “Could this cause unintended harm?” | Testing production systems without safeguards |

Red teaming is the most rigorous form of security testing available to an organization. It combines centuries of adversarial thinking methodology with modern offensive security capabilities to answer the question that no other assessment can: What would actually happen if a skilled, motivated adversary targeted us?

The discipline demands:

  • Historical awareness — Understanding where red teaming came from and why it works
  • The adversary mindset — Thinking like an attacker, not a tester
  • Organizational readiness — Ensuring the investment produces actionable improvement
  • Professional principles — Operating safely, ethically, and within authorized boundaries
  • Commitment to follow-through — Using findings to drive measurable defensive improvement

With these fundamentals in place, an organization is ready to move from theory to practice.