The OWASP Agentic AI Threats and Mitigations framework is the broadest of OWASP’s three AI security lists, covering 15 threats. It goes deeper than the Agentic Top 10 (which focuses on the top risks) and the LLM Top 10 (which focuses on model-level risks).

Pipelock covers 12 of the 15 threats: seven with strong coverage, two moderate, and three partial.

Coverage at a glance

| # | Threat | Coverage |
|-----|--------|----------|
| T1 | Memory Poisoning | Strong |
| T2 | Tool Misuse | Strong |
| T3 | Privilege Compromise | Strong |
| T4 | Resource Overload | Partial |
| T5 | Cascading Hallucination Attacks | Out of scope |
| T6 | Intent Breaking & Goal Manipulation | Moderate |
| T7 | Misaligned & Deceptive Behaviors | Strong |
| T8 | Repudiation & Untraceability | Strong |
| T9 | Identity Spoofing & Impersonation | Partial |
| T10 | Overwhelming Human-in-the-Loop | Not yet addressed |
| T11 | Unexpected RCE and Code Attacks | Moderate |
| T12 | Agent Communication Poisoning | Strong |
| T13 | Rogue Agents in Multi-Agent Systems | Strong |
| T14 | Human Attacks on Multi-Agent Systems | Partial |
| T15 | Human Manipulation | Out of scope |

Strong coverage (7 threats)

T1: Memory Poisoning

Malicious data injected into agent memory corrupts future decisions. Poisoned workspace files, config, or context documents alter agent behavior long after the initial injection.
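A common defense against this class of attack is integrity baselining: hash every workspace file, then flag any drift. The sketch below illustrates the technique in general terms; it is not Pipelock's actual implementation, and the function names are illustrative.

```python
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict[str, str]:
    """Record a SHA-256 hash for every file under root."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def detect_drift(baseline: dict[str, str], root: str) -> set[str]:
    """Return paths added, removed, or modified since the baseline was taken."""
    current = snapshot(root)
    changed = {p for p, h in current.items() if baseline.get(p) != h}
    removed = set(baseline) - set(current)
    return changed | removed
```

Any path reported by `detect_drift` is a candidate poisoned memory artifact and can be quarantined before the agent reads it again.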

T2: Tool Misuse

Agents misuse legitimate tools due to injection, misalignment, or unsafe delegation.

T3: Privilege Compromise

Unauthorized escalation or misuse of permissions. Leaked credentials let agents operate beyond their intended scope.

T7: Misaligned & Deceptive Behaviors

Agents act deceptively due to misaligned objectives. A compromised agent may exfiltrate data while appearing to function normally.

T8: Repudiation & Untraceability

Agent actions can’t be reliably traced or accounted for. Insufficient logging makes incident reconstruction impossible.
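The standard countermeasure is a tamper-evident audit trail. A minimal sketch of one approach, hash-chaining, where each entry commits to the previous entry's hash so deletion or rewriting breaks the chain (illustrative only; Pipelock's log format is not shown here):

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained audit log: each entry includes the previous
    entry's hash, so removing or editing any record invalidates the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def append(self, agent: str, action: str) -> dict:
        entry = {"ts": time.time(), "agent": agent, "action": action, "prev": self._prev}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

With a chain like this, incident reconstruction reduces to replaying entries in order and checking `verify()` to confirm none were dropped or altered.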

T12: Agent Communication Poisoning

False or malicious information injected into inter-agent communication channels.

T13: Rogue Agents in Multi-Agent Systems

Compromised or misaligned agents disrupt coordinated operations through shared resources.


Moderate coverage (2 threats)

T6: Intent Breaking & Goal Manipulation

Attackers alter or redirect agent goals toward unintended actions.

Overlaps with T1 and T12. Response scanning catches explicit “ignore previous instructions” patterns. Does not detect subtle goal manipulation through carefully crafted context.
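Pattern-based response scanning of this kind can be sketched as a small rule set of case-insensitive regexes. The patterns below are illustrative examples only, not Pipelock's actual rule set:

```python
import re

# Illustrative injection phrases; a production scanner would maintain a much
# larger and regularly updated rule set.
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.IGNORECASE),
    re.compile(r"disregard\s+your\s+system\s+prompt", re.IGNORECASE),
    re.compile(r"you\s+are\s+now\s+in\s+developer\s+mode", re.IGNORECASE),
]

def scan_response(text: str) -> list[str]:
    """Return the patterns that match, so the caller can block or flag the response."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
```

This is exactly why coverage here is moderate: regexes catch explicit phrasing, but a goal-manipulation payload expressed as innocuous-looking context produces no match.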

T11: Unexpected RCE and Code Attacks

Unsafe code generation leads to remote code execution. Agents execute attacker-controlled code or exfiltrate results.

Containment: pipelock sandbox provides Landlock + seccomp + network namespace isolation on Linux, and sandbox-exec profiles on macOS (alpha). On Windows, see Anthropic srt.


Partial coverage (3 threats)

T4: Resource Overload

Attackers exhaust resources to disrupt performance.

Per-domain rate limiting, response size limits, and request timeouts cover network-level resource consumption. Does not address CPU/memory exhaustion from agent compute.
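Per-domain rate limiting is typically a token bucket keyed by domain: each domain refills at a steady rate and can burst up to a cap. A minimal sketch of that idea (illustrative, not Pipelock's implementation):

```python
import time
from collections import defaultdict

class DomainRateLimiter:
    """Token-bucket limiter keyed by domain: refills `rate` tokens per second,
    allows bursts up to `burst`. Requests beyond the budget are rejected."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)   # each domain starts with a full bucket
        self.last = defaultdict(time.monotonic)    # last refill time per domain

    def allow(self, domain: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[domain]
        self.last[domain] = now
        self.tokens[domain] = min(self.burst, self.tokens[domain] + elapsed * self.rate)
        if self.tokens[domain] >= 1:
            self.tokens[domain] -= 1
            return True
        return False
```

Because buckets are per-domain, an attacker flooding one endpoint cannot starve the agent's legitimate traffic to others, but nothing here constrains the agent's own CPU or memory use.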

T9: Identity Spoofing & Impersonation

Adversaries impersonate agents or users.

Ed25519 signing provides agent identity verification for files. Per-agent profiles with listener binding provide spoof-proof identity for proxy traffic. No certificate-based agent authentication yet.
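Ed25519 file signing in general looks like the sketch below, which uses the third-party `cryptography` package; Pipelock's own key storage and signing flow are not shown, and the function names are illustrative:

```python
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_file(private_key: Ed25519PrivateKey, data: bytes) -> bytes:
    """Produce a detached Ed25519 signature over the file contents."""
    return private_key.sign(data)

def verify_file(public_key: Ed25519PublicKey, data: bytes, signature: bytes) -> bool:
    """True iff the signature matches the data under this agent's public key."""
    try:
        public_key.verify(signature, data)
        return True
    except InvalidSignature:
        return False
```

A verifier holding only the agent's public key can confirm which agent produced a file; an impersonator without the private key cannot forge a valid signature.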

T14: Human Attacks on Multi-Agent Systems

Humans exploit inter-agent trust to trigger cascading failures.

Integrity monitoring and signing create trust boundaries. Audit logging enables detection. No automated trust policy enforcement between agents yet.


Not addressed (3 threats)

T5: Cascading Hallucination Attacks

False information from one model spreads through interconnected systems.

Out of scope. Hallucination detection requires model-level semantic analysis, not network-layer scanning.

T10: Overwhelming Human-in-the-Loop

Attackers overload human overseers with excessive approval requests to reduce scrutiny.

Pipelock’s HITL feature (action: ask) prompts for approval but has no rate limiting or batching of approval requests. High-volume flooding could reduce human attention. Approval rate limiting and auto-escalation are on the roadmap.
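One shape such flood protection could take is a sliding-window gate in front of the human prompt: once approval requests exceed a threshold within a window, fail closed instead of prompting. This is a hypothetical sketch of a roadmap item, not an existing Pipelock feature:

```python
import time
from collections import deque

class ApprovalGate:
    """If more than `max_requests` approval prompts arrive within `window`
    seconds, stop asking the human and auto-deny (fail closed) until the
    flood subsides."""

    def __init__(self, max_requests: int, window: float):
        self.max_requests, self.window = max_requests, window
        self.times = deque()  # timestamps of recent approval prompts

    def request_approval(self, description: str, prompt_human) -> bool:
        now = time.monotonic()
        while self.times and now - self.times[0] > self.window:
            self.times.popleft()
        if len(self.times) >= self.max_requests:
            return False  # flood detected: deny without consuming human attention
        self.times.append(now)
        return prompt_human(description)
```

Failing closed preserves scrutiny: the attacker's flood denies its own requests rather than wearing down the reviewer into rubber-stamping them.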

T15: Human Manipulation

Exploiting user trust in AI to deceive humans into unsafe actions.

This is social engineering at the human-AI interaction layer. Pipelock operates at the infrastructure layer, not the conversation layer.


The three OWASP frameworks

  1. Top 10 for LLM Applications (2025): model and application risks. 7/10 covered.
  2. Top 10 for Agentic Applications (ASI01-ASI10): agent-specific risks. 10/10 covered.
  3. Agentic AI Threats and Mitigations (T1-T15): this page. Broadest framework. 12/15 covered.

Further reading