
Lessons from Real Engagements


Red teaming is not a theoretical exercise. Every engagement carries real legal risk, real career consequences, and real potential for things to go catastrophically wrong. The difference between a successful operation and a career-ending disaster often comes down to preparation, documentation, and discipline — not technical skill.

This page compiles hard-won lessons from publicly documented incidents, conference talks, and the collective experience of the offensive security community. These are the mistakes that seasoned operators have made so you don’t have to.

“The best red teamers aren’t the ones who never make mistakes — they’re the ones who build systems that prevent the same mistake from happening twice.”

For foundational concepts on red team operations, see Fundamentals. For structured approaches to engagements, see Frameworks & Methodologies.


The Coalfire Courthouse Incident (2019)

Perhaps no single event has done more to reshape how the offensive security industry thinks about authorization, scope, and deconfliction than the arrest of Gary DeMercurio and Justin Wynn in September 2019.

What Happened

Coalfire Labs was contracted by the State Court Administration (SCA) of Iowa to conduct physical penetration testing of county courthouses across the state. On September 11, 2019, two Coalfire consultants — Gary DeMercurio and Justin Wynn — were conducting an authorized physical intrusion test of the Dallas County Courthouse in Adel, Iowa, when they were arrested by the Dallas County Sheriff’s Department.

Despite carrying a signed authorization letter (often called a “get out of jail free” letter), the consultants were charged with third-degree burglary (a Class D felony) and possession of burglary tools. They spent the night in jail.

What Went Wrong

The incident exposed multiple systemic failures in how the engagement was scoped, authorized, and communicated:

  1. Ambiguous authority chain: The SCA believed it had the authority to authorize testing of all county courthouses. Dallas County disagreed, asserting that the county — not the state — controlled the physical building. This jurisdictional ambiguity was never resolved before testing began.

  2. Law enforcement was not notified: Local police and the county sheriff had no knowledge that authorized testing was taking place. When the alarm triggered, they responded exactly as they should — by arresting the intruders.

  3. Scope confusion: The Rules of Engagement (ROE) did not clearly delineate the boundaries of physical access authorization. The authorization letter referenced “electronic” security testing but the physical penetration component was poorly documented.

  4. Time-of-day restrictions were violated or unclear: The consultants were operating after hours (around midnight), which raised additional suspicion and complicated the legal defense.

  5. No deconfliction protocol existed: There was no mechanism to quickly verify authorization with a trusted contact when law enforcement arrived on scene.

The Aftermath

  • Both consultants faced felony charges that took months to resolve
  • Coalfire’s reputation suffered significant damage despite being a well-respected firm
  • The incident sparked industry-wide conversations about authorization practices
  • Iowa eventually passed legislation (House File 2205) addressing authorized security testing
  • The charges against both consultants were ultimately dropped, but not before they endured significant personal and professional hardship

Key Takeaways

  • Never assume authorization from one entity covers all entities in scope — verify independently with each party that has physical or legal control
  • Always notify local law enforcement (or have the client do so) before physical testing
  • Carry multiple forms of authorization — written letter, emergency contact numbers, and a direct line to someone who can confirm authorization 24/7
  • Document everything obsessively — time stamps, photos, GPS coordinates, entry/exit logs
  • When in doubt, stop — no engagement finding is worth a felony charge

OPSEC Failures in the Wild

Operational Security (OPSEC) failures can burn infrastructure, expose tools, compromise future engagements, and in extreme cases, endanger operators. The following are real categories of failures observed across the industry.

Common OPSEC Failures

| Failure Category | What Happens | Impact | Prevention |
| --- | --- | --- | --- |
| IP address reuse | Same attack infrastructure used across multiple engagements | Attribution across clients, burned IPs, potential legal exposure | Fresh infrastructure per engagement, destroy after use |
| VirusTotal uploads | Custom tools/payloads uploaded for "testing" or by automated sandboxes | Permanent public attribution, tool signatures exposed to AV vendors | Never upload to VT, use local sandboxes, control payload distribution |
| Geolocation mismatch | C2 traffic originates from unexpected countries (VPS in Russia for US engagement) | Blue team immediate escalation, potential law enforcement involvement | Use regionally appropriate infrastructure, document IP ranges with client |
| Tool signature reuse | Same unique beacon configuration, watermarks, or tool modifications across engagements | Cross-engagement attribution, tool burned for future use | Unique configurations per engagement, rotate tool signatures |
| Social media exposure | Posting about active engagements, checking in at client locations | Client identification, engagement timing exposure, trust violation | Strict social media blackout during engagements |
| Hotel WiFi operations | Running attacks from hotel networks near client site | Trivial geolocation, network logging by hotel, potential legal issues | Use dedicated mobile hotspots, VPN chains, or remote infrastructure |
| DNS leaks | Operator machine DNS queries leak real infrastructure details | C2 infrastructure exposed, operator attribution | DNS over HTTPS, dedicated DNS resolvers, split-tunnel VPN |
| Clipboard mistakes | Pasting client data into wrong chat window or document | Data breach, NDA violation, client trust destruction | Dedicated VMs per engagement, clipboard isolation |
| Recon data spillage | Storing recon data in shared cloud drives or unencrypted local storage | Client data exposure if operator machine compromised | Encrypted storage, per-engagement data segregation |
| Git repository exposure | Pushing engagement-specific tools or configs to public repos | Tool exposure, client identification, methodology leak | Separate private repos per engagement, .gitignore discipline |

The VirusTotal Problem in Detail

VirusTotal is perhaps the single most dangerous OPSEC threat to red team operations. Here’s why:

  • Every file uploaded to VirusTotal is permanently stored and available to premium subscribers (which includes nation-states and threat intelligence companies)
  • Automated sandboxes at organizations may forward suspicious files to VT — even your carefully crafted payloads
  • YARA rules can be written against your unique tool signatures and will retroactively match against every file ever uploaded
  • Threat intel companies actively hunt for red team tools on VT to build detections

Mitigation strategies:

  • Never upload anything to VirusTotal — use local analysis tools (such as strings, file, PE analysis tools, or local sandbox VMs)
  • Inform the client’s security team that your payloads should NOT be uploaded to external sandboxes
  • Use payload obfuscation that varies per engagement to prevent cross-engagement correlation
  • Monitor VirusTotal for your tool signatures (defensive awareness)
  • Design payloads to be ephemeral — they should not survive a reboot or leave persistent artifacts
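
One practical piece of the "never upload" discipline is keeping a local hash inventory: you can search VirusTotal by SHA-256 (which reveals nothing new to anyone) instead of ever submitting a sample. A minimal sketch; the directory layout and function name are illustrative:

```python
import hashlib
import pathlib

def tool_hashes(directory):
    """SHA-256 every file under `directory` so operators can *search*
    VirusTotal by hash rather than uploading the file itself."""
    hashes = {}
    for path in sorted(pathlib.Path(directory).rglob("*")):
        if path.is_file():
            hashes[path.name] = hashlib.sha256(path.read_bytes()).hexdigest()
    return hashes
```

Searching for a hash that VT has never seen returns no result and exposes nothing; uploading the file would expose it permanently.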

Infrastructure Discipline

Every engagement should follow a strict infrastructure lifecycle:

  1. Provision — fresh VPS, domains aged appropriately, unique TLS certificates
  2. Configure — unique C2 profiles, redirectors, DNS records
  3. Operate — all traffic flows through redirectors, never expose real infrastructure
  4. Document — log all infrastructure details for the final report
  5. Destroy — tear down ALL infrastructure within 24-48 hours of engagement end
  6. Verify — confirm destruction, check for lingering DNS records or cloud resources
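
The final verification step can be partially automated. A hedged sketch of a post-teardown DNS check (the function name and workflow are illustrative, not a standard tool): every engagement domain should fail to resolve once destruction is complete.

```python
import socket

def lingering_dns(domains):
    """Return any engagement domains that still resolve after teardown.
    An empty list is the expected result once destruction is verified."""
    still_live = []
    for domain in domains:
        try:
            socket.gethostbyname(domain)
            still_live.append(domain)  # still resolves: teardown incomplete
        except socket.gaierror:
            pass  # no resolution, as expected after destruction
    return still_live
```

A similar sweep should cover cloud resources (orphaned VPS instances, elastic IPs, TLS certificates) using the provider's inventory APIs.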

Authorization & ROE Documentation

The Rules of Engagement (ROE) document and authorization letter are your most important legal protections. A handshake agreement or verbal authorization is worth nothing when law enforcement arrives at 2 AM.

What a Proper Authorization Letter Must Contain

At minimum, a legally defensible authorization letter should include:

  1. Authorizing party identification — full legal name, title, and authority to authorize testing
  2. Company/organization identification — legal entity name, address, registration details
  3. Scope definition — explicitly enumerated:
    • IP address ranges and CIDR blocks
    • Domain names and subdomains
    • Physical building addresses and specific areas
    • Cloud environments (account IDs, subscription IDs)
    • Personnel in scope for social engineering (or explicit exclusions)
  4. Time window — exact start date/time and end date/time, including timezone
  5. Authorized activities — what techniques are permitted (network scanning, exploitation, phishing, physical access, social engineering, wireless testing)
  6. Prohibited activities — explicit list of what must NOT be done (DoS attacks, data destruction, specific systems to avoid)
  7. Operator identification — names and contact information of all authorized testers
  8. Emergency contacts — 24/7 reachable contacts at the client organization, with phone numbers
  9. Deconfliction contacts — who to call if law enforcement or incident response is triggered
  10. Legal signatures — signed and dated by an authorized representative with legal authority
  11. Legal review attestation — confirmation that the client’s legal counsel has reviewed the authorization

Authorization Letter Best Practices

  • Carry physical copies on your person during physical engagements — phones can be confiscated
  • Have a digital copy accessible from a phone or separate device not carried in your toolkit
  • Include a QR code or URL linking to a verification page the client controls
  • List the law firm that reviewed the authorization so law enforcement can verify independently
  • Include photos of authorized testers to prevent impersonation claims
  • Have the client’s legal counsel sign in addition to the technical sponsor
  • Obtain authorization from EVERY entity with jurisdiction — don’t assume a parent company can authorize testing of a subsidiary’s physical locations

Pre-Engagement Checklist

  • Signed authorization letter received and verified
  • Legal review completed by client’s counsel
  • Scope boundaries documented and confirmed in writing
  • Emergency and deconfliction contacts tested (actually call them)
  • Time windows confirmed with all stakeholders
  • Insurance coverage verified (professional liability / E&O)
  • Local laws reviewed for engagement jurisdiction
  • Data handling procedures agreed upon
  • Communication channels established (encrypted)
  • Incident response escalation path documented
  • Physical access authorization confirmed with building management
  • Law enforcement notification completed (if physical testing)
  • NDA executed between all parties
  • Prior engagement artifacts cleaned from operator machines
  • Fresh infrastructure provisioned and tested

The Legal Landscape

Understanding the legal landscape is non-negotiable for red team operators. Ignorance of the law is not a defense, and the laws governing computer access and physical security testing vary significantly by jurisdiction.

CFAA — Computer Fraud and Abuse Act (United States)

The CFAA (18 U.S.C. Section 1030) is the primary federal statute governing unauthorized computer access in the United States.

Key provisions relevant to red teaming:

  • Section 1030(a)(2) — Obtaining information from a protected computer without authorization
  • Section 1030(a)(5) — Knowingly causing damage to a protected computer
  • Section 1030(a)(7) — Threatening to damage a computer to extort something of value

Critical CFAA considerations:

  • “Authorization” is poorly defined — the CFAA does not clearly define what constitutes “authorization,” leading to inconsistent court interpretations
  • Exceeding authorized access is treated the same as unauthorized access in many circuits — if your ROE says “do not access the HR database” and you do, you may have committed a federal crime
  • The Van Buren decision (2021) — the Supreme Court narrowed the interpretation of “exceeds authorized access” to mean accessing areas of a computer that are off-limits, not merely using permitted access for improper purposes. This is significant for red teamers because it clarifies that the scope of your authorization matters enormously
  • State laws may be stricter — many states have their own computer crime statutes that can be more restrictive than the CFAA
  • Civil liability — the CFAA includes a civil cause of action, meaning you can be sued even if criminal charges aren’t filed

Computer Crime Laws by Jurisdiction

| Jurisdiction | Primary Law | Key Considerations | Physical Testing | Social Engineering |
| --- | --- | --- | --- | --- |
| United States (Federal) | CFAA (18 U.S.C. 1030) | Authorization must be explicit; scope matters post-Van Buren | State trespass laws apply separately | Wiretapping laws vary by state (one-party vs. two-party consent) |
| United Kingdom | Computer Misuse Act 1990 | Three core offenses: unauthorized access, intent to commit further offenses, unauthorized modification | Trespass is primarily civil, not criminal (unless aggravated) | Regulation of Investigatory Powers Act (RIPA) considerations |
| European Union | Directive 2013/40/EU + national laws | GDPR applies to any PII encountered during testing | Varies by member state | GDPR data processing requirements for phishing simulations |
| Germany | Section 202a StGB (Penal Code) | "Overcoming specific security measures" is a key element | StGB Section 123 (trespass) | Strict data protection under BDSG |
| Australia | Criminal Code Act 1995, Part 10.7 | Unauthorized access, modification, or impairment | State-level trespass laws | Telecommunications (Interception) Act |
| Canada | Criminal Code Section 342.1 | Unauthorized use of computer, mischief to data | Provincial trespass laws | PIPEDA for data protection |

GDPR Considerations for European Engagements

When conducting red team operations in or against organizations subject to GDPR, additional precautions are required:

  • PII encountered during testing must be handled according to GDPR principles — you cannot simply exfiltrate and store employee data without appropriate safeguards
  • Phishing simulations that collect personal data (credentials, personal information) require a lawful basis for processing — typically “legitimate interest” with a documented assessment
  • Data minimization — only collect the minimum PII necessary to demonstrate the finding, then securely delete
  • Right to erasure — employees whose data you capture during testing may have rights under GDPR
  • Cross-border data transfers — if your red team infrastructure is outside the EU and you’re exfiltrating data (even for testing), transfer mechanisms apply
  • Document your GDPR compliance approach in the ROE and have the client’s DPO sign off
  • Breach notification — if you discover an actual breach during testing (not your simulated one), the client has 72 hours to notify the supervisory authority under Article 33

Wiretapping and Recording Laws

Physical and social engineering engagements often involve recording (audio, video, or screen capture). Be aware:

  • United States: Wiretapping laws vary by state. Eleven states require all-party consent for recording conversations (California, Connecticut, Florida, Illinois, Maryland, Massachusetts, Michigan, Montana, New Hampshire, Pennsylvania, Washington). The remaining states allow one-party consent
  • United Kingdom: Recording is generally permitted if one party consents, but RIPA governs interception of communications
  • EU: Generally requires all-party consent for recording, with variations by member state
  • Always get explicit permission in the ROE for any recording activities, specifying exactly what will be recorded and how recordings will be stored, protected, and destroyed

Deconfliction

Deconfliction is the process of ensuring that authorized red team activity is not confused with actual malicious activity, preventing unnecessary incident response escalation, law enforcement involvement, or worse.

Why Deconfliction Matters

Without proper deconfliction:

  • The SOC may escalate your activity to law enforcement, resulting in real investigations
  • Incident response teams may spend thousands of dollars and hundreds of hours responding to your simulated attack
  • Evidence of your testing may be submitted to threat intelligence platforms, polluting threat data
  • In extreme cases, operators may face arrest (as in the Coalfire incident)
  • The client may invoke their cyber insurance, creating contractual and legal complications

Deconfliction Protocol Flow

```mermaid
flowchart TD
    A[Red Team Activity Detected by Blue Team] --> B{Is activity in deconfliction log?}
    B -->|Yes| C[Trusted Agent confirms authorized activity]
    C --> D[Blue team continues monitoring without intervention]
    D --> E[Document detection for report]

    B -->|No| F{Contact Trusted Agent}
    F -->|Reached| G{Trusted Agent confirms?}
    G -->|Yes - Known activity| D
    G -->|No - Unknown activity| H[REAL INCIDENT — Escalate immediately]

    F -->|Not Reached| I[Escalate to backup deconfliction contact]
    I -->|Reached| G
    I -->|Not Reached| J[Follow standard IR procedures]
    J --> K[Red team pauses all operations]
    K --> L[Establish communication through any channel]

    H --> M[Red team pauses operations]
    M --> N[Full IR investigation initiated]
    N --> O[Determine if real threat actor or unknown red team activity]

    style H fill:#ff4444,color:#fff
    style N fill:#ff4444,color:#fff
    style D fill:#44aa44,color:#fff
    style E fill:#44aa44,color:#fff
```

The Trusted Agent Model

The Trusted Agent (sometimes called the “White Cell” or “Deconfliction Contact”) is a small group of individuals at the client organization who know that a red team engagement is occurring. This group must be:

  • As small as possible to maintain test integrity — typically the CISO, one SOC manager, and one IR lead
  • Available 24/7 during the engagement window via phone (not just email)
  • Empowered to make decisions — they must be able to immediately confirm or deny that activity is authorized
  • Disciplined about secrecy — they must not tip off the broader SOC team, which would undermine the test
  • Briefed on the engagement scope — they need enough detail to make deconfliction decisions without seeing the full operational plan

Deconfliction Without Compromising Test Integrity

The challenge of deconfliction is balancing security against test validity. Here are proven approaches:

  1. Sealed envelope method: Provide the trusted agent with a sealed envelope containing engagement details (IP ranges, time windows, general techniques). They open it ONLY if they need to deconflict. This preserves test integrity while enabling rapid verification.

  2. Hash-based verification: Provide SHA-256 hashes of your C2 domains and IP addresses to the trusted agent. They can verify observed activity against hashes without knowing the actual infrastructure in advance.

  3. Code word system: Establish a code word that operators can provide to the trusted agent (or even law enforcement, via the trusted agent) to quickly verify authorization.

  4. Scheduled check-ins: Regular check-in calls (daily or weekly) where the red team provides a general summary of activity timing without revealing specific techniques. The trusted agent can then pre-clear time windows with the SOC.

  5. Deconfliction log: Maintain a running log of all red team activity timestamps, source IPs, and target systems. The trusted agent can query this log when suspicious activity is reported.
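
The hash-based verification approach is straightforward to implement. A minimal sketch (the normalization rule, lowercasing here, must be agreed on in the ROE; everything else is illustrative): the red team hands over only fingerprints, and the trusted agent checks observed indicators against them without learning the infrastructure in advance.

```python
import hashlib

def fingerprint(indicator):
    """Normalize and hash a single indicator (domain or IP)."""
    return hashlib.sha256(indicator.strip().lower().encode()).hexdigest()

def make_fingerprints(infrastructure):
    """Red team side: produce the hash set handed to the trusted agent."""
    return {fingerprint(i) for i in infrastructure}

def is_red_team_activity(observed, fingerprints):
    """Trusted agent side: verify an observed indicator without ever
    seeing the plaintext infrastructure list in advance."""
    return fingerprint(observed) in fingerprints
```

One caveat worth noting: plain SHA-256 over short, guessable indicators can be brute-forced offline, so a keyed hash (HMAC with a shared secret) would be a stronger choice in practice; the unkeyed version is shown for brevity.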

Real Incidents of Friendly Fire

The industry has seen numerous cases of deconfliction failures:

  • SOC escalation to FBI: A financial institution’s SOC detected red team lateral movement, classified it as APT activity, and reported it to the FBI’s IC3. The resulting investigation disrupted both the engagement and the client’s operations for weeks.
  • Threat intel contamination: Red team C2 domains were reported to threat intelligence feeds, causing the infrastructure to be blocked across the client’s entire industry sector.
  • Insurance claim triggered: A red team phishing campaign was detected, escalated as a breach, and the client’s CISO (who was not a trusted agent) filed a cyber insurance claim before the engagement could be deconflicted.
  • Physical security response: During a physical penetration test, a building security guard escalated to local police, who responded with weapons drawn. The operator had an authorization letter but the tense confrontation could have ended very differently.

Client Communication

Effective client communication is what separates a professional red team from a group of hackers with a contract. Communication failures can destroy client relationships, create legal exposure, and undermine the value of the entire engagement.

Pre-Engagement Alignment

Before any technical work begins, ensure alignment on:

  • Objectives: What is the client actually trying to learn? “Test our security” is not a sufficient objective. Push for specifics: “Can an external attacker reach PCI cardholder data?” or “Can a phishing attack lead to domain compromise?”
  • Success criteria: How will the client measure engagement success? Number of findings? Specific objectives achieved? Detection rates?
  • Risk appetite: What level of disruption is acceptable? Can you send mass phishing campaigns? Can you exploit production systems? What about after-hours testing?
  • Reporting expectations: What format? What level of detail? Who is the audience (technical team vs. board of directors)?
  • Timeline and milestones: When are check-ins? When is the draft report due? When is the final presentation?

During-Engagement Communication

  • Daily or weekly check-ins (as agreed in the ROE) — brief status updates on progress, any issues, and upcoming activity
  • Immediate notification is required for:
    • Discovery of an active, real breach or compromise by an actual threat actor
    • Systems becoming unstable or unresponsive as a result of testing
    • Discovery of critical vulnerabilities that pose immediate risk (e.g., unauthenticated RCE on an internet-facing system)
    • Any contact with law enforcement
    • Scope boundary ambiguity that needs clarification before proceeding
  • Never go silent for extended periods — if the client doesn’t hear from you, they worry
  • Document all communications — email confirmations of verbal agreements, meeting notes, scope change approvals

Handling Scope Creep

Scope creep is one of the most common engagement risks. It typically manifests as:

  • Client asks “while you’re in there, can you also test…” mid-engagement
  • Red team discovers adjacent systems and wants to pursue them
  • Initial access leads to a different network segment than originally scoped

How to handle it:

  1. Stop and document the scope change request
  2. Assess the legal implications — is the new scope covered by the existing authorization?
  3. Get written approval before proceeding — an email from the authorized contact is the minimum
  4. Update the ROE if the change is significant
  5. Adjust timeline and cost if necessary — scope creep without timeline adjustment leads to rushed work and missed findings

Managing Client Pushback

Sometimes clients push back during engagements:

  • “Stop phishing our employees, it’s causing too much disruption” — Document the request, comply, and note in the report that the phishing phase was curtailed at client request. This is their prerogative.
  • “Don’t exploit that vulnerability, it’s too risky” — Respect the boundary, document the finding as-is, and note that exploitation was not attempted per client instruction.
  • “We need you to pause for a business-critical event” — Comply immediately. Business operations always take priority over testing.
  • “Can you make the report less scary?” — Never sanitize findings to make them less severe. You can adjust language for audience, but the technical truth must be preserved. This is an ethical line.

War Stories & Lessons

The following scenarios are generalized and anonymized composites based on publicly shared experiences from conference talks, blog posts, and industry discussions by firms such as SpecterOps, TrustedSec, NetSPI, and others. No specific client information is disclosed.

The Domain Admin in 20 Minutes

Scenario: An internal red team engagement at a mid-size financial firm. The objective was to assess internal network security posture starting from an assumed-breach position (a workstation on the corporate LAN).

What happened:

  1. Initial enumeration with BloodHound revealed a Kerberos delegation misconfiguration — a service account had unconstrained delegation enabled on a member server
  2. The service account’s SPN was vulnerable to Kerberoasting — the password hash was cracked in under 3 minutes using a modest GPU rig (the password was Summer2019!)
  3. The cracked service account had local admin rights on two servers, one of which had a Domain Admin’s credentials cached in memory (extracted via Mimikatz / credential dumping)
  4. From initial access to Domain Admin: 18 minutes

Lessons:

  • Delegation misconfigurations are among the most impactful Active Directory findings — most organizations don’t audit them
  • Service account password policies are often neglected (long-lived, weak passwords)
  • Credential caching on servers is a persistent and underestimated risk
  • The attack chain involved zero exploits — only misconfigurations and weak credentials
  • This scenario plays out in some variation in roughly 60-70% of internal assessments
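
Weak service-account passwords like the one above follow predictable season-plus-year patterns that defenders can audit for directly. A hedged sketch (the pattern is illustrative, not exhaustive; real password audits crack the organization's own hashes against much larger rule sets):

```python
import re

# Season + four-digit year + optional special character, e.g. "Summer2019!"
SEASONAL = re.compile(
    r"^(spring|summer|fall|autumn|winter)\d{4}[!@#$%^&*]?$", re.IGNORECASE
)

def is_predictable(password):
    """True for season+year passwords like the one cracked in this scenario."""
    return bool(SEASONAL.match(password))
```

Service accounts deserve the opposite treatment: long random passwords (25+ characters) or, better, Group Managed Service Accounts that rotate automatically.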

Phishing the CEO’s Assistant

Scenario: A social engineering engagement targeting a technology company’s executive leadership.

What happened:

  1. OSINT revealed the CEO’s executive assistant through LinkedIn, corporate press releases, and conference speaker bios
  2. A pretext was crafted: a “journalist” requesting an interview with the CEO for an industry publication
  3. Initial contact was made via LinkedIn, building rapport over several days
  4. A “calendar scheduling link” was sent, which actually led to a credential harvesting page mimicking the company’s SSO portal
  5. The assistant entered credentials, which provided access to the CEO’s calendar, email (via delegation), and shared drives
  6. The CEO’s email contained board materials, M&A discussions, and unannounced product plans

Lessons:

  • Executive assistants are high-value targets with extensive access but often less security training than the executives they support
  • Multi-day social engineering campaigns with proper rapport building have dramatically higher success rates than mass phishing
  • Email delegation and shared calendar access often provide nearly the same access as compromising the executive directly
  • This engagement led to an overhaul of the client’s delegation policies and executive staff security training
  • The ethical handling of discovered sensitive business information (M&A data) was critical — it was reported but never read in detail

The Printer That Owned the Network

Scenario: External penetration test of a healthcare organization.

What happened:

  1. External scanning revealed a network-connected multifunction printer with a web interface exposed to the internet (misconfigured firewall rule)
  2. The printer was using default credentials (admin/admin)
  3. The printer’s configuration pages revealed internal IP addressing, LDAP server addresses, and a service account used for scan-to-email functionality
  4. The LDAP service account credentials were stored in the printer’s configuration and extractable
  5. Those credentials provided authenticated access to the internal Active Directory
  6. From the AD foothold, standard lateral movement techniques led to domain compromise within hours

Lessons:

  • IoT and OT devices (printers, cameras, building management systems) are consistently the weakest entry points
  • Default credentials on network devices remain one of the most prevalent vulnerabilities in enterprise environments
  • Printers often store service account credentials in recoverable formats
  • Network segmentation (or lack thereof) determines whether a compromised printer is a minor finding or a total compromise
  • This pattern — default creds on peripheral device leading to domain compromise — is extremely common

Cloud Misconfiguration Leading to Full Compromise

Scenario: Cloud security assessment of a SaaS company running on AWS.

What happened:

  1. An S3 bucket enumeration revealed a publicly readable bucket containing application deployment packages
  2. The deployment packages contained hardcoded AWS access keys for a CI/CD service account
  3. The CI/CD service account had overly permissive IAM policies — it could assume roles in the production account
  4. Role chaining through the CI/CD account led to a production admin role
  5. Production admin had access to RDS databases containing customer data, Secrets Manager entries, and the ability to modify production infrastructure

Lessons:

  • Cloud misconfigurations are the new perimeter vulnerabilities — they are exploited at scale by real attackers
  • Hardcoded credentials in deployment artifacts are a systemic problem, especially in CI/CD pipelines
  • IAM policy complexity in cloud environments creates a massive attack surface that most organizations don’t fully understand
  • The blast radius of a single exposed credential can be enormous in cloud environments with role chaining
  • See Metrics & Reporting for how to quantify and communicate cloud risk effectively
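
The overly permissive policies in step 3 of this chain are detectable with static checks. A minimal sketch over an IAM policy document (real policy linters go much further; the policy shape follows the standard AWS JSON grammar):

```python
import json

def wildcard_statements(policy_json):
    """Return Sids of Allow statements with wildcard actions or resources."""
    findings = []
    for stmt in json.loads(policy_json).get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if stmt.get("Effect") == "Allow" and (
            any(a == "*" or a.endswith(":*") for a in actions)
            or "*" in resources
        ):
            findings.append(stmt.get("Sid", "<unnamed>"))
    return findings
```

Running checks like this in the CI/CD pipeline itself catches the exact failure mode described here: a service account quietly accumulating `sts:*` or `Resource: "*"` grants.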

Physical Access to Domain Admin via Drop Box

Scenario: Combined physical and network penetration test of a financial services firm.

What happened:

  1. The red team conducted physical reconnaissance of the target building, identifying a loading dock with minimal security
  2. Wearing high-visibility vests and carrying clipboards, operators entered through the loading dock during a delivery window
  3. A network drop box (Raspberry Pi with cellular modem) was planted under a desk in a conference room — connected to an open Ethernet port
  4. The drop box established a reverse tunnel to the red team’s infrastructure, providing remote access to the internal network
  5. From the internal network position, the team followed a standard Active Directory attack chain to achieve domain compromise within two days

Lessons:

  • Physical security and network security cannot be assessed in isolation — they are deeply interconnected
  • Social engineering through authority cues (uniforms, clipboards, confidence) remains devastatingly effective
  • Conference rooms are ideal drop box locations — they have network ports, power outlets, and irregular occupancy
  • Network Access Control (NAC) would have prevented the drop box from obtaining network access — the client had none
  • The combination of physical access and network exploitation represents a realistic threat model that most organizations don’t test

Do’s and Don’ts

Engagement Conduct

DO:

  • Maintain detailed, timestamped logs of all activity
  • Screenshot or record evidence of every finding as you discover it
  • Test your tools in a lab environment before using them against the client
  • Have a rollback plan for every exploitation attempt
  • Encrypt all client data at rest and in transit
  • Use dedicated hardware and VMs for each engagement
  • Maintain a clean separation between personal and engagement activities
  • Report immediately if you cause any unintended disruption
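
The first two DO items (timestamped logs, per-finding evidence) lend themselves to a simple append-only activity log that is written before each action, not reconstructed afterward. A minimal sketch in Python; the file name and fields are illustrative, not a standard:

```python
import json
from datetime import datetime, timezone

def log_action(logfile, operator, action, target, note=""):
    """Append one timestamped engagement-log entry as a JSON line."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),  # UTC avoids timezone disputes later
        "operator": operator,
        "action": action,
        "target": target,
        "note": note,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: record the scan before launching it, with the authorizing reference
log_action("engagement.log", "op1", "nmap -sS top-1000", "10.0.4.0/24",
           "authorized under SOW section 3.2")
```

Append-only JSON lines keep the log trivially parseable for the report while making after-the-fact edits obvious.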

DON’T:

  • Access systems or data outside the agreed scope — even if you stumble upon them
  • Continue testing if you’re unsure whether an action is in scope
  • Store client data on personal devices or cloud storage
  • Discuss engagement details on social media, in public spaces, or with unauthorized individuals
  • Use client network access for personal activities
  • Modify or delete client data unless explicitly authorized and necessary
  • Ignore signs of a real breach just because you’re focused on your objectives
  • Rush through exploitation without understanding the potential impact

Operator Discipline

DO:

  • Sleep. Fatigued operators make mistakes that have legal consequences
  • Maintain situational awareness — know what’s running, what’s connected, and what’s exposed
  • Use the buddy system for physical engagements — never operate alone
  • Keep your authorization letter accessible at all times during physical tests
  • Practice de-escalation techniques for confrontational situations during physical tests
  • Know when to walk away — ego has no place in professional red teaming

DON’T:

  • Consume alcohol or substances during or immediately before an engagement
  • Rely on unfamiliar tools without practice — a failed exploit in production is not the time to learn
  • Let competitive drive push you past scope boundaries
  • Record phone calls without understanding local wiretapping laws
  • Tailgate into areas not covered by your authorization
  • Bluff law enforcement — be honest, calm, and provide your authorization documentation

Evidence Handling

DO:

  • Maintain chain of custody for all evidence and artifacts
  • Use cryptographic hashing to ensure evidence integrity
  • Store evidence in encrypted containers with access controls
  • Retain evidence for the agreed-upon period after the engagement
  • Securely destroy evidence according to the data handling agreement
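
Hashing each artifact at collection time makes the later integrity check trivial: re-hash before the report ships and compare. A minimal sketch in Python; the file names are placeholders:

```python
import hashlib
from pathlib import Path

def sha256_file(path, chunk_size=65536):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, recorded_digest):
    """True if the file still matches the digest recorded at collection time."""
    return sha256_file(path) == recorded_digest

# Record the digest when the evidence is collected...
evidence = Path("finding-042-screenshot.png")
evidence.write_bytes(b"placeholder evidence bytes")
digest = sha256_file(evidence)

# ...and re-verify before delivering the report
assert verify(evidence, digest)
```

Storing the digest alongside the chain-of-custody record (who collected it, when, from where) gives each artifact a verifiable identity independent of its storage location.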

DON’T:

  • Transfer evidence over unencrypted channels
  • Store evidence in locations accessible to unauthorized personnel
  • Commingle evidence from different engagements
  • Retain client data longer than the agreed retention period
  • Use client data for demonstrations, training, or marketing without explicit written permission

Ethical Considerations

Red teaming operates in a unique ethical space. You are authorized to break into systems, deceive people, and exploit vulnerabilities — but authorization does not absolve you of ethical responsibility.

Responsible Disclosure Within Engagements

During an engagement, you may discover vulnerabilities in third-party products or services. Handle these carefully:

  • If the vulnerability is in a third-party product (not the client’s custom code), coordinate with the client on whether to report to the vendor through responsible disclosure
  • Zero-day vulnerabilities discovered during engagements present a particular ethical challenge — the client hired you to test their security, but the vulnerability may affect millions of others
  • Discuss disclosure policies in the pre-engagement meeting — establish clear guidelines for how third-party vulnerabilities will be handled
  • Never use zero-days discovered during one engagement on another client without proper disclosure and vendor awareness

Handling Discovered Criminal Activity

Occasionally, red team engagements uncover evidence of actual criminal activity — insider threats, fraud, existing compromises by real threat actors, or illegal content on systems.

Recommended approach:

  1. Stop and document what you found — preserve evidence without disturbing it
  2. Notify the trusted agent immediately through the established communication channel
  3. Do not investigate further — you are not law enforcement and additional investigation may contaminate evidence
  4. Follow the client’s direction — they will involve their legal counsel and potentially law enforcement
  5. Document your actions thoroughly — what you saw, when you saw it, what you did and didn’t do afterward
  6. Consult your own legal counsel if the discovery involves content that may create reporting obligations (e.g., certain types of illegal content have mandatory reporting requirements in many jurisdictions)

Protecting Employee PII

Red team engagements, especially social engineering campaigns, inevitably involve employee personal information:

  • Collect only what’s necessary — you need enough to demonstrate the finding, not a complete dossier
  • Anonymize or pseudonymize in reports where possible — “User A in the Finance department” rather than “Jane Smith, Senior Accountant”
  • Phishing campaign results should focus on aggregate statistics, not individual naming and shaming
  • Credential dumps should be reported as password policy findings, not as lists of individual users with weak passwords (unless specific accounts represent high-risk findings)
  • Securely destroy employee PII after the engagement and reporting are complete
  • Never use employee information gathered during one engagement for any other purpose
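
Pseudonymization only works if it is consistent: the same employee must map to the same label everywhere in the report so that findings remain correlatable without naming anyone. A minimal sketch in Python; the label format and departments are illustrative:

```python
class Pseudonymizer:
    """Replace real names with stable report labels like 'User A (Finance)'."""

    def __init__(self):
        self._labels = {}

    def label(self, real_name, department):
        # The same person always gets the same label across the whole report.
        # This sketch handles up to 26 subjects (labels A through Z).
        if real_name not in self._labels:
            letter = chr(ord("A") + len(self._labels))
            self._labels[real_name] = f"User {letter} ({department})"
        return self._labels[real_name]

pseud = Pseudonymizer()
pseud.label("Jane Smith", "Finance")   # "User A (Finance)"
pseud.label("Bob Jones", "IT")         # "User B (IT)"
pseud.label("Jane Smith", "Finance")   # still "User A (Finance)"
```

The real-name-to-label mapping itself is employee PII and belongs with the evidence, under the same encryption and destruction schedule, never in the deliverable.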

Avoiding Unnecessary Damage

The goal of red teaming is to improve security, not to prove how much damage you can cause:

  • Prefer simulation over actual exploitation when the finding can be demonstrated without real impact
  • Avoid destructive techniques unless specifically authorized and necessary
  • Don’t exfiltrate real data when synthetic data would prove the same point — take screenshots of file listings rather than copying actual files
  • Restore systems to their pre-test state when possible
  • If you discover a fragile system, report it as a finding rather than testing it to failure

Ethical Boundaries

Some actions remain ethically out of bounds regardless of authorization:

  • Never target individuals’ personal devices or accounts — even if they use them for work (unless explicitly in scope and legally authorized)
  • Never use information gathered during an engagement for personal benefit — financial, competitive, or otherwise
  • Never share client vulnerabilities with other clients, competitors, or public audiences without explicit permission
  • Never fabricate findings — if you didn’t find a vulnerability, don’t invent one to justify the engagement cost
  • Never use coercion, threats, or intimidation during social engineering beyond what is proportionate and authorized
  • Always consider the human impact — the person who clicked the phishing link is not the problem; the system that allowed it is

Post-Engagement Checklist

  • All client data collected during the engagement inventoried
  • Evidence encrypted and stored according to data handling agreement
  • All attack infrastructure decommissioned and verified destroyed
  • DNS records, cloud resources, and domain registrations cleaned up
  • Operator machines wiped of engagement-specific data
  • Backdoors and persistence mechanisms removed from client systems
  • Client notified of all persistence mechanisms planted (with removal instructions)
  • Draft report delivered within agreed timeline
  • Debrief meeting scheduled with client stakeholders
  • Lessons learned documented internally
  • Tools and techniques catalogued for future reference (internally only)
  • Client feedback solicited
  • Invoice and contractual obligations completed
  • Data retention countdown initiated (destroy after agreed period)

Conclusion

The lessons on this page are not theoretical. Every scenario described has played out — some publicly, many more behind NDAs. The red team community learns from these experiences collectively, which is why public post-mortems and conference talks sharing lessons learned (while protecting client confidentiality) are so valuable.

The technical skills of exploitation, lateral movement, and persistence are necessary but not sufficient. The operators who build lasting careers in red teaming are the ones who master the non-technical dimensions: legal awareness, client communication, ethical judgment, OPSEC discipline, and meticulous documentation.

For guidance on structuring your findings into actionable reports, see Metrics & Reporting. For the foundational frameworks that guide professional engagements, see Frameworks & Methodologies.

“In red teaming, the thing that ends your career is never a technical failure — it’s always a process failure.”