Enabling Comparison Of Planned Vs Executed Vs Detected Techniques
Overview
Enabling comparison of planned vs. executed vs. detected techniques is a detection assessment practice that appears across the penetration testing workflows in this knowledge base. It supports higher-level security analysis, investigation, monitoring, and validation activity rather than serving as an end in itself.
What It Is
In this knowledge base, the practice is best understood as a measurement layer for red team operations: by recording which techniques were planned, which were actually executed, and which were detected by defenders, it gives analysts and defenders a structured way to examine evidence, model system behavior, and reason about security state.
How It Works
The comparison works by turning technical inputs into more interpretable outputs at the system level: the emulation plan, the operators' execution record, and defensive alerting are reconciled to expose gaps between what was intended, what was done, and what was seen. Across the source skills, it appears as part of larger analysis, investigation, monitoring, or validation loops rather than as a standalone end state.
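The reconciliation described above can be sketched as simple set operations over ATT&CK technique IDs. This is a minimal illustration; the technique IDs and the three input sets are hypothetical exercise data, not values from the source material.

```python
# Illustrative exercise data: ATT&CK technique IDs per category.
planned  = {"T1566.001", "T1059.001", "T1053.005", "T1003.001", "T1021.002"}
executed = {"T1566.001", "T1059.001", "T1053.005", "T1003.001"}
detected = {"T1566.001", "T1003.001"}

not_executed   = planned - executed    # planned but skipped or blocked
detection_gaps = executed - detected   # executed but never alerted on
scope_drift    = executed - planned    # executed outside the agreed plan
detection_rate = len(detected & executed) / len(executed)

print(f"Not executed:   {sorted(not_executed)}")
print(f"Detection gaps: {sorted(detection_gaps)}")
print(f"Scope drift:    {sorted(scope_drift)}")
print(f"Detection rate: {detection_rate:.0%}")
```

The three difference sets map directly onto exercise outcomes: `not_executed` feeds the coverage discussion, `detection_gaps` drives detection engineering, and `scope_drift` flags rules-of-engagement issues.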
Core Concepts
- red team
- adversary emulation
- MITRE ATT&CK
- Cobalt Strike
- detection assessment
- penetration testing
Typical Workflow
- Threat actor selection: Select an adversary group relevant to the organization's industry. For financial services, emulate FIN7 or Lazarus Group. For healthcare, emulate APT41 or FIN12. Map the selected adversary's known TTPs from MITRE ATT&CK.
- Objective definition: Define measurable objectives such as "Access customer financial data from the core banking system" or "Demonstrate ability to deploy ransomware across the domain"
- Attack chain execution: Execute the emulated adversary's techniques phase by phase across the kill chain:
- 1. Initial Access (TA0001): Phishing, exploiting public-facing applications, or supply chain compromise
- 2. Execution (TA0002): PowerShell, scripting, exploitation for client execution
- 3. Persistence (TA0003): Scheduled tasks, registry modifications, implant deployment
- 4. Privilege Escalation (TA0004): Token impersonation, exploitation for privilege escalation
- 5. Defense Evasion (TA0005): Process injection, timestomping, indicator removal
- 6. Credential Access (TA0006): LSASS dumping, Kerberoasting, credential stuffing
- 7. Lateral Movement (TA0008): Remote services, pass-the-hash, remote desktop
- 8. Collection/Exfiltration (TA0009/TA0010): Data staging, exfiltration over C2
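The phased plan above can be captured as a simple data structure so that the "planned" side of the comparison is explicit before execution begins. The tactic-to-technique mapping below is an illustrative sketch, not a prescribed plan; technique IDs are examples drawn from MITRE ATT&CK.

```python
# Hypothetical emulation plan: (tactic ID, tactic name, planned techniques).
emulation_plan = [
    ("TA0001", "Initial Access",       ["T1566.001"]),            # phishing attachment
    ("TA0002", "Execution",            ["T1059.001"]),            # PowerShell
    ("TA0003", "Persistence",          ["T1053.005"]),            # scheduled task
    ("TA0004", "Privilege Escalation", ["T1134.001"]),            # token impersonation
    ("TA0005", "Defense Evasion",      ["T1055", "T1070.006"]),   # injection, timestomp
    ("TA0006", "Credential Access",    ["T1003.001", "T1558.003"]),  # LSASS, Kerberoast
    ("TA0008", "Lateral Movement",     ["T1550.002", "T1021.001"]), # pass-the-hash, RDP
    ("TA0009", "Collection",           ["T1074.001"]),            # local data staging
    ("TA0010", "Exfiltration",         ["T1041"]),                # exfil over C2
]

# Flatten into the planned-technique set that the exercise later
# compares against what was executed and what was detected.
planned = {t for _, _, techniques in emulation_plan for t in techniques}
print(f"{len(emulation_plan)} phases, {len(planned)} planned techniques")
```

Keeping the plan in a structured form lets the post-exercise comparison attribute every gap back to a specific kill-chain phase.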
Use Cases
- Assessing an organization's ability to detect, respond to, and contain a realistic adversary operation
- Testing the effectiveness of the security operations center (SOC), incident response team, and threat hunting capabilities
- Validating security investments by simulating attacks that chain multiple vulnerabilities and techniques
- Evaluating the organization's security posture against specific threat actors (nation-state, ransomware groups, insider threats)
- Meeting regulatory requirements for adversary simulation (TIBER-EU, CBEST, AASE, iCAST)
Common Pitfalls
- Operating too aggressively and getting detected immediately, which provides no test of the Blue Team's advanced detection capabilities
- Using exclusively custom tools instead of the living-off-the-land techniques that real adversaries prefer
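To turn SOC-effectiveness testing into a reportable result, the executed-vs-detected comparison can be broken down per tactic. The scorecard below is a minimal sketch over hypothetical per-tactic exercise data; the tactic and technique IDs are illustrative.

```python
# Illustrative per-tactic exercise results.
executed_by_tactic = {
    "TA0001": {"T1566.001"},               # initial access
    "TA0006": {"T1003.001", "T1558.003"},  # credential access
    "TA0008": {"T1021.001"},               # lateral movement
}
detected_by_tactic = {
    "TA0001": {"T1566.001"},
    "TA0006": {"T1003.001"},
    "TA0008": set(),
}

# For each tactic, what fraction of executed techniques produced a detection?
scorecard = {}
for tactic, executed in executed_by_tactic.items():
    caught = executed & detected_by_tactic.get(tactic, set())
    scorecard[tactic] = len(caught) / len(executed)

for tactic, rate in sorted(scorecard.items()):
    print(f"{tactic}: {rate:.0%} of executed techniques detected")
```

A per-tactic breakdown highlights where detection engineering effort should go, e.g. a tactic with a 0% rate is a larger gap than a partially covered one.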
Limitations
- Output still depends on context, data quality, and surrounding analysis.
- The tool should be interpreted as part of a broader workflow, not as a complete answer by itself.
- Capabilities and visibility vary depending on environment, integrations, and available inputs.
Related Tools
- Cobalt Strike, Malleable C2 Profiles, MITRE ATT&CK Navigator, Mythic, Sliver, WireGuard
Sources
- executing-red-team-exercise