TL;DR: MITRE’s AADAPT framework offers a structured way to understand attacker techniques targeting digital-asset and tokenized-value systems. This guide shows how to operationalize AADAPT through red teaming: map your value flows, align components to AADAPT tactics, prioritize high-impact attack paths, run safe offensive tests, and capture the telemetry needed to validate detection and response. The result is a practical, evidence-driven approach to strengthening digital asset security and reducing the risk of economic loss.
When MITRE published Adversarial Actions in Digital Asset Payment Techniques (AADAPT), it addressed a fast-forming blind spot: the lack of a common language for adversary behavior against systems that represent, move, or manage digital value. For organizations that leverage blockchain technology without operating the underlying infrastructure, such as banks piloting token settlement, cloud providers running validators, or manufacturers using smart contracts for provenance, that language matters because it connects technical failure modes to economic harm.
From an offensive security perspective, this marks a critical shift. When value becomes code, the security problem changes from “prevent server compromise” to “can we detect and contain adversarial manipulation of business logic and asset flows?”
AADAPT is a framework. Red teams operationalize it.
Frameworks classify behavior; red teaming tells you whether that behavior will actually succeed against your environment.
AADAPT catalogs adversary techniques specific to digital-asset environments, including flash loans, oracle manipulation, cross-chain evasion, and credential abuse. Your role as a practitioner is to convert those techniques into repeatable scenarios that answer two operational questions:
- Can an adversary extract value or materially corrupt economic logic?
- If they attempt that, will our detection and containment mechanisms interrupt the attack before irreversible loss?
Start with value flows, not tools
Effective red teaming begins with understanding where value lives, how it moves, and which systems influence it. Organizations that treat tokenization or digital-asset functionality as “just another feature” often overlook the operational pathways that attackers target first. A clear view of value flows creates the foundation for meaningful adversary testing.
- Inventory value-bearing components.
Begin by identifying every system, service, or workflow that creates, stores, authorizes, or moves digital value. This goes beyond smart contracts or protocol logic. Consider the full stack that participates in value operations, such as custodial signing services (HSM/KMS), smart-contract admin paths, oracles and data feeds, bridge/bridge-router integrations, settlement pipelines, and any automation that triggers economic action.
The goal is to map every system that could meaningfully influence value or business logic.
- Map those components to AADAPT tactics.
Once you have a clear picture of value-bearing systems, connect them to the adversarial behaviors AADAPT describes. This helps organizations understand how attackers may approach each component, e.g., recon, initial access, execution, credential abuse, defense evasion, impact/fraud.
It creates a structured way for organizations to think about where attackers will probe, what tactics they may employ, and which scenarios should be represented in testing. It also ensures red team engagements evaluate both the supporting infrastructure and the logic that governs value.
- Prioritize by feasibility and impact.
Once you understand where value flows through the system, focus on the areas where an attacker is most likely to gain meaningful leverage. Prioritization should balance two factors: how easy a weakness is to exploit and how directly it could affect value or business logic.
Organizations should consider two broad categories when determining where red teams should spend their time:
- Operational weaknesses such as exposed credentials in source control, misconfigured cloud permissions, leaked API keys, or overly permissive CI/CD pipelines. These issues appear frequently in real environments and often provide the quickest route to systems that influence value.
- Value-specific weaknesses such as unreliable data feeds, privileged signing workflows, administrative upgrade paths, or transaction logic that can be influenced or bypassed. These areas require deeper analysis but directly affect how value is created, transferred, or authorized.
By prioritizing targets that pair realistic attacker feasibility with meaningful business impact, organizations ensure their red team efforts focus on the weaknesses most likely to drive real-world risk.
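To make the mapping concrete, here is a minimal sketch (in Python) of an inventory that ties value-bearing components to the AADAPT-style tactics they are exposed to and scores them by feasibility and impact. The component names, tactic labels, and 1-5 scores are illustrative assumptions, not a canonical taxonomy.

```python
from dataclasses import dataclass

@dataclass
class ValueComponent:
    name: str
    tactics: list[str]   # AADAPT-style tactic labels this component is exposed to
    feasibility: int     # 1 (hard to reach) .. 5 (trivially reachable)
    impact: int          # 1 (cosmetic) .. 5 (direct loss of funds)

    @property
    def priority(self) -> int:
        # Simple feasibility x impact product; swap in your own risk model.
        return self.feasibility * self.impact

# Illustrative inventory entries; replace with your own value-bearing systems.
inventory = [
    ValueComponent("custodial-signing-service", ["credential-abuse", "execution"], 3, 5),
    ValueComponent("price-oracle-aggregator", ["manipulation", "impact-fraud"], 4, 4),
    ValueComponent("bridge-router", ["defense-evasion", "impact-fraud"], 2, 5),
    ValueComponent("ci-cd-deploy-pipeline", ["initial-access", "execution"], 4, 3),
]

# Highest-priority entries are the paths the red team exercises first.
for component in sorted(inventory, key=lambda c: c.priority, reverse=True):
    print(f"{component.priority:>2}  {component.name}: {', '.join(component.tactics)}")
```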
Execute safe, realistic tests
Tokenized systems carry special risk: a test that goes wrong can move or destroy real value. Do not run destructive tests against live value without exhaustive controls.
- Use forked mainnets or isolated testnets that replicate state and liquidity. These give realistic behavior without risking customer funds.
- Define blast radius and rollback plans in contracts. Every offensive test must have an agreed rollback or mitigation strategy.
- Notify upstream/downstream stakeholders (custodians, exchanges, third-party oracles) where tests touch shared infrastructure.
- Formalize legal and compliance signoffs. Offensive testing that touches financial rails needs explicit authorization and oversight.
Discipline here is how you run useful, repeatable experiments that produce defensible outcomes.
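One concrete piece of that discipline is a pre-flight guard in the test harness that refuses to run unless it is pointed at an approved, isolated environment. The sketch below assumes the fork exposes a standard Ethereum JSON-RPC endpoint; because many fork tools inherit the upstream chain ID, it allow-lists endpoints rather than chain IDs. The hosts and URL are placeholders for your environment.

```python
import json
import urllib.request
from urllib.parse import urlparse

# Only endpoints on this list may receive offensive traffic (isolated forks/testnets).
ALLOWED_HOSTS = {"127.0.0.1", "localhost", "fork.internal.example"}

def rpc_call(url: str, method: str, params: list) -> dict:
    # Minimal JSON-RPC POST using only the standard library.
    payload = json.dumps({"jsonrpc": "2.0", "id": 1, "method": method, "params": params}).encode()
    request = urllib.request.Request(url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request, timeout=5) as response:
        return json.loads(response.read())

def assert_safe_target(rpc_url: str) -> None:
    host = urlparse(rpc_url).hostname
    if host not in ALLOWED_HOSTS:
        raise RuntimeError(f"Refusing to run: {host!r} is not an approved isolated environment")
    # Confirm the fork is live before any offensive step executes.
    latest_block = int(rpc_call(rpc_url, "eth_blockNumber", [])["result"], 16)
    print(f"Fork at {host} is live at block {latest_block}; scenario may proceed.")

assert_safe_target("http://127.0.0.1:8545")
```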
Telemetry to collect
Conducting offensive tests without proper telemetry wastes the effort. Instead, we recommend capturing data that maps directly to the attack chain and to your detection hypotheses:
- Chain-level logs: full transaction traces, nonces, gas usage, contract call graphs, and block timestamps.
- Application and API logs: who/what initiated the on-chain transaction (API key, session, service account).
- KMS/HSM telemetry: signing requests, key policy evaluation, failed/denied signings, and signer identity correlation.
- Oracle/Feed metadata: feed version, timestamps, aggregation composition, deviation thresholds.
- Bridge/Swap traces: source/destination chain IDs, bridge operator logs, and intermediary routing.
- Anomaly markers: sudden spikes in transaction value, velocity anomalies, contract reentry patterns, or outlier gas usage.
Instrument these with consistent identifiers, so you can reconstruct an attack sequence end-to-end and map each observable to an AADAPT technique.
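A minimal sketch of what a normalized telemetry event could look like, assuming each source (chain, API, KMS, oracle, bridge) is parsed into a common shape with shared identifiers. Field names and values are illustrative, not a standard schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class TelemetryEvent:
    scenario_id: str      # one red team scenario run
    correlation_id: str   # ties related events together (e.g. a transaction hash or request ID)
    source: str           # "chain", "api", "kms", "oracle", "bridge"
    observed_at: str      # ISO-8601 UTC timestamp
    technique: str        # AADAPT technique this observable maps to
    details: dict         # source-specific payload (gas used, signer, feed value, ...)

# Illustrative example: one chain-level observable from a flash-loan scenario run.
event = TelemetryEvent(
    scenario_id="flashloan-scenario-01",
    correlation_id="0xabc123",                      # placeholder transaction hash
    source="chain",
    observed_at=datetime.now(timezone.utc).isoformat(),
    technique="economic-manipulation",              # placeholder technique label
    details={"gas_used": 2_150_000, "call_depth": 7},
)

print(json.dumps(asdict(event), indent=2))
```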
From findings to defenses: The purple-team cadence
Offensive testing serves as a signal that drives detection and response improvements. Use a tight purple-team loop to ensure detections are tested and validated.
- Run the offensive scenario. Capture telemetry and detection outcomes.
- Map each step to an AADAPT technique and list the observables that occurred (or didn’t).
- Hypothesize detection rules or telemetry gaps (e.g., “flag asset-manipulation events when observed behavior diverges from mathematical expectations”).
- Implement minimal detection logic or enrich telemetry.
- Re-test until time-to-detect and containment meet your risk threshold.
Quantify progress: number of AADAPT techniques exercised, mean time to detect (MTTD), mean time to contain (MTTC), and estimated prevented loss under comparable production conditions. Those metrics speak to executives and regulators.
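Computing MTTD and MTTC from scenario telemetry is straightforward once each run records when the attack started, when it was first detected, and when it was contained. A small illustrative sketch with made-up timestamps:

```python
from datetime import datetime
from statistics import mean

# Each entry is one purple-team scenario run; timestamps are illustrative.
runs = [
    {"started": "2024-09-10T14:00:00", "detected": "2024-09-10T14:07:30", "contained": "2024-09-10T14:26:00"},
    {"started": "2024-09-17T09:15:00", "detected": "2024-09-17T09:18:10", "contained": "2024-09-17T09:41:00"},
]

def minutes_between(start: str, end: str) -> float:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60

mttd = mean(minutes_between(r["started"], r["detected"]) for r in runs)
mttc = mean(minutes_between(r["started"], r["contained"]) for r in runs)
print(f"MTTD: {mttd:.1f} min, MTTC: {mttc:.1f} min across {len(runs)} scenario runs")
```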
Maturity model — what leaders should measure
Security teams should track readiness across three axes:
- Visibility: Are all value paths observable? (Chain, API, KMS, oracles)
- Coverage: How many high-priority AADAPT techniques have been exercised?
- Response: Can you contain and remediate before irreversible economic loss?
Pragmatic milestones:
- Inventory & mapping complete.
- One end-to-end AADAPT scenario executed and detection rules implemented.
- Quarterly rotation covering new AADAPT techniques and purple-team follow-ups to reduce MTTD/MTTC.
Realistic offensive scenarios (practical, actionable)
To get started, we’ve included a few example attack templates you can adapt to your environment. Each includes the objective, execution outline, and what you should be measuring:
Scenario A — Flash-Loan Economic Manipulation
- Objective: Determine whether transient capital can change contract state to the organization’s detriment (drain funds, trigger liquidation).
- Execution: Use a forked mainnet/testnet with realistic liquidity. Execute a flash-loan sequence that manipulates price or liquidity pools to trigger the target contract path.
- Measure: Successful state change, time-to-detection (chain-analytics, SIEM, alerts), and controls triggered (circuit breakers, oracle thresholds).
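On the measurement side, a simple chain-analytics heuristic for this scenario might flag large single-block swings in a pool's implied price, the signature most flash-loan manipulations leave behind. The sketch below assumes you already export per-block reserve snapshots; the threshold and field names are illustrative.

```python
DEVIATION_THRESHOLD = 0.15   # flag >15% implied-price move between consecutive blocks

# Illustrative per-block snapshots of one pool's reserves.
snapshots = [
    {"block": 100, "reserve_a": 1_000_000, "reserve_b": 500_000},
    {"block": 101, "reserve_a": 1_010_000, "reserve_b": 505_000},
    {"block": 102, "reserve_a": 2_400_000, "reserve_b": 210_000},  # manipulated block
    {"block": 103, "reserve_a": 1_015_000, "reserve_b": 507_000},  # state reverts
]

def implied_price(snapshot: dict) -> float:
    # Constant-product style price proxy: reserve_b per unit of reserve_a.
    return snapshot["reserve_b"] / snapshot["reserve_a"]

for previous, current in zip(snapshots, snapshots[1:]):
    deviation = abs(implied_price(current) - implied_price(previous)) / implied_price(previous)
    if deviation > DEVIATION_THRESHOLD:
        # Both the manipulation and the snap-back will alert, which is expected.
        print(f"ALERT block {current['block']}: implied price moved {deviation:.0%} in one block")
```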
Scenario B — Oracle/Data-Feed Poisoning
- Objective: Assess whether corrupted inputs drive unwanted automated behavior.
- Execution: Deploy a fake feed or modify feed aggregation in an isolated environment. Observe contract behavior when feed deviates outside normal bounds.
- Measure: Detection of anomalous feed values, triggering of fail safes, human/automated intervention latency, and downstream business logic impact.
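The fail-safe this scenario exercises often boils down to a deviation guard: reject or escalate feed updates that stray too far from independent references. A minimal sketch, assuming you can read the candidate update and a few reference prices; the 2% bound and values are illustrative.

```python
from statistics import median

MAX_DEVIATION = 0.02   # reject updates more than 2% away from the reference median

def accept_feed_update(candidate: float, reference_prices: list[float]) -> bool:
    reference = median(reference_prices)
    deviation = abs(candidate - reference) / reference
    if deviation > MAX_DEVIATION:
        # Downstream logic should fail safe (pause, require manual review) instead of acting.
        print(f"REJECT: candidate {candidate} deviates {deviation:.1%} from reference {reference}")
        return False
    return True

# A poisoned update the red team injects in the isolated environment (illustrative values):
accept_feed_update(candidate=1_320.0, reference_prices=[1_612.4, 1_610.9, 1_614.2])
```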
Scenario C — Credential / Signing Abuse
- Objective: Test multi-party signing and key governance under insider or automation compromise.
- Execution: Simulate compromised signing (automated job, CI/CD token, or operator account) and attempt unauthorized admin actions or contract upgrades.
- Measure: Audit logs, KMS/HSM alerts, abnormal signing patterns, and policy enforcement (multisig thresholds, usage throttles).
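Detection for this scenario typically lives in KMS/HSM audit analytics. The sketch below assumes signing records have been normalized into simple event dicts; it flags admin-capable actions from unapproved identities plus bursts of requests from a single identity. Signer names, actions, and limits are illustrative.

```python
from collections import Counter

APPROVED_ADMIN_SIGNERS = {"ops-multisig-1", "ops-multisig-2"}
BURST_LIMIT = 5   # max signing requests per identity in the observed window

# Illustrative normalized KMS/HSM audit records.
signing_events = [
    {"signer": "ci-deploy-token", "action": "contract_upgrade", "at": "2024-09-10T14:05:11Z"},
    {"signer": "ops-multisig-1", "action": "transfer_approval", "at": "2024-09-10T14:06:02Z"},
] + [{"signer": "ci-deploy-token", "action": "transfer_approval", "at": "2024-09-10T14:07:00Z"}] * 6

# Rule 1: admin-capable actions must come from the approved signer set.
for event in signing_events:
    if event["action"] == "contract_upgrade" and event["signer"] not in APPROVED_ADMIN_SIGNERS:
        print(f"ALERT: unapproved signer {event['signer']} requested {event['action']}")

# Rule 2: flag abnormal signing volume from any single identity.
request_counts = Counter(event["signer"] for event in signing_events)
for signer, count in request_counts.items():
    if count > BURST_LIMIT:
        print(f"ALERT: {signer} issued {count} signing requests in the window (limit {BURST_LIMIT})")
```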
Scenario D — Cross-Chain Evasion & Traceability
- Objective: Evaluate whether assets moved across chains can be traced, and whether the attacker can obscure attribution quickly enough to outpace your response.
- Execution: Simulate rapid bridging of assets using common bridge flows and mixers in a forked environment.
- Measure: Time to correlate on-chain movements, effectiveness of chain-analysis feeds, and forensics completeness.
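Measuring traceability usually means correlating a bridge exit on the source chain with a mint or release on a destination chain by amount and time window. A rough sketch, assuming you have both event streams; the fee haircut, tolerance, and window are illustrative.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)   # max gap between source exit and destination mint
AMOUNT_TOLERANCE = 0.01          # destination amount within 1% of fee-adjusted source amount
FEE_HAIRCUT = 0.997              # assume roughly 0.3% bridge fee

# Illustrative event streams from the forked environment.
source_exits = [{"tx": "0xaaa", "amount": 250_000.0, "at": "2024-09-10T14:10:00"}]
destination_mints = [
    {"chain": "chain-B", "tx": "0xbbb", "amount": 249_300.0, "at": "2024-09-10T14:21:00"},
    {"chain": "chain-C", "tx": "0xccc", "amount": 90_000.0, "at": "2024-09-10T14:15:00"},
]

def parse(timestamp: str) -> datetime:
    return datetime.fromisoformat(timestamp)

for exit_event in source_exits:
    expected = exit_event["amount"] * FEE_HAIRCUT
    for mint in destination_mints:
        in_window = abs(parse(mint["at"]) - parse(exit_event["at"])) <= WINDOW
        amount_match = abs(mint["amount"] - expected) / expected <= AMOUNT_TOLERANCE
        if in_window and amount_match:
            print(f"Correlated {exit_event['tx']} -> {mint['chain']}:{mint['tx']} "
                  f"({mint['amount']} vs expected ~{expected:.0f})")
```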
Why offensive security is the forward edge
AADAPT gives us a language for attacker behavior. Offensive security gives that language teeth. If you want to know whether a flash loan can actually drain funds, whether an oracle tweak can trigger mass liquidations, or whether a signing automation can be abused, you need to simulate the attack and measure your system’s response.
For cybersecurity professionals, that simulation is where the hard answers live. Controls and policies are necessary, but if untested, they’re guesses. Red teaming against AADAPT techniques turns guesswork into data: exploitable path, detection signal, and remediation roadmap. If your program has not yet treated tokenization as a first-class risk domain, start there.
If you want to move from theory to proof, explore Bishop Fox’s Red Team services. We design AADAPT-aligned engagements, run controlled adversary simulations in safe environments, and deliver the telemetry-backed findings and remediation playbooks your SOC and leadership can act on. Contact us to schedule an assessment or tabletop the scenarios most likely to impact your business.