The OWASP Top 10 for Large Language Model Applications (2025) is the most widely referenced framework for LLM security risks. It covers everything from prompt injection to misinformation, spanning model-level and application-level threats.

Pipelock is a network-layer tool. It sits between AI agents and the internet, scanning HTTP, WebSocket, and MCP traffic. Some of these threats fall squarely in that layer. Others are about model internals, and those are out of scope by design.

Coverage at a glance

#       Threat                              Coverage
LLM01   Prompt Injection                    Strong
LLM02   Sensitive Information Disclosure    Strong
LLM03   Supply Chain Vulnerabilities        Partial
LLM04   Data and Model Poisoning            Out of scope
LLM05   Improper Output Handling            Moderate
LLM06   Excessive Agency                    Strong
LLM07   System Prompt Leakage               Moderate
LLM08   Vector and Embedding Weaknesses     Out of scope
LLM09   Misinformation                      Out of scope
LLM10   Unbounded Consumption               Partial

7 of 10 covered: 3 strong, 2 moderate, 2 partial. The 3 out-of-scope threats concern model training, vector embeddings, and output truthfulness, not network security.


LLM01: Prompt Injection (Strong)

Attackers craft inputs that override the model’s instructions. Direct injection comes through user input. Indirect injection comes through fetched content, tool results, or documents the model reads. This is the #1 risk for a reason.

Pipelock catches indirect injection at every entry point it proxies: HTTP responses, WebSocket messages, and MCP traffic.

Actions: block (reject), strip (redact matched text), warn (log and pass through), or ask (human approval in the terminal).

Gap: Regex-based detection. Novel injection patterns that don’t match known templates can slip through. Classifier-based detection is on the roadmap.
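The detect-then-act flow described above can be sketched in a few lines of Python. The patterns, function names, and action semantics here are illustrative assumptions, not Pipelock's actual rule set (the ask action is omitted because it needs an interactive terminal):

```python
import re

# Hypothetical injection templates; real rule sets are much larger.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

def scan(content: str, action: str = "block") -> str:
    """Scan fetched content and apply the configured action on a match."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(content):
            if action == "block":
                # Reject the whole payload before the agent sees it.
                raise ValueError("injection pattern matched; content rejected")
            if action == "strip":
                # Redact only the matched text, pass the rest through.
                content = pattern.sub("[REDACTED]", content)
            if action == "warn":
                # Log and deliver unchanged.
                print(f"warning: matched /{pattern.pattern}/")
    return content
```

The useful property is that the action is policy, not code: the same scanner serves strict and permissive deployments.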


LLM02: Sensitive Information Disclosure (Strong)

The model leaks API keys, credentials, PII, or proprietary information through network requests. This happens when an agent gets tricked (via injection) into exfiltrating secrets, or when the model includes credential material in tool calls unprompted.

Pipelock has six detection layers for this:

  1. DLP pattern matching: 46 built-in patterns covering AWS, GCP, Azure, GitHub, Stripe, OpenAI, Anthropic, and 31 other providers.
  2. Environment variable leak detection: scans for the proxy’s own env var values in outbound traffic, both raw and base64 encoded.
  3. Entropy analysis: flags high-entropy URL segments and subdomains that look like encoded secrets, even without a pattern match.
  4. Domain blocklist: pastebin, transfer.sh, requestbin, ngrok, and other exfiltration targets blocked by default.
  5. Cross-request exfiltration detection (CEE): tracks secret fragments across multiple requests. An agent that splits a key across 5 URLs is still caught.
  6. Data budget: per-domain and global byte budgets cap how much data an agent can send anywhere.
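Of the six layers, entropy analysis (layer 3) is the easiest to picture. A minimal sketch, assuming Shannon entropy with a length cutoff and threshold; the exact values and helper names here are made up, not Pipelock's parameters:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in s."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_secret(segment: str, threshold: float = 3.5) -> bool:
    """Flag a URL segment or subdomain that is long enough and random
    enough to plausibly be an encoded secret."""
    # Short segments can't carry enough entropy to be interesting.
    return len(segment) >= 16 and shannon_entropy(segment) > threshold
```

Natural-language path segments score low because letters repeat; base64- or hex-encoded key material scores near the maximum for its alphabet, which is why this layer catches secrets no pattern knows about.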

Bypassing one layer isn’t enough when there are six.
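The cross-request layer (layer 5) can likewise be sketched as fragment bookkeeping: remember which windows of a known secret have already left the proxy and alert once cumulative leakage crosses a threshold. Chunk size and the coverage threshold below are assumptions for illustration:

```python
class FragmentTracker:
    """Track leaked fragments of one known secret across many requests."""

    def __init__(self, secret: str, chunk: int = 8, threshold: float = 0.5):
        self.secret = secret
        self.chunk = chunk              # sliding-window size, in characters
        self.threshold = threshold      # fraction of windows seen before alerting
        self.seen: set[int] = set()     # offsets of windows already leaked

    def observe(self, outbound: str) -> bool:
        """Record secret fragments found in this request; return True once
        enough of the secret has escaped, in any order, over any number
        of requests."""
        windows = len(self.secret) - self.chunk + 1
        for i in range(windows):
            if self.secret[i:i + self.chunk] in outbound:
                self.seen.add(i)
        return len(self.seen) / max(1, windows) >= self.threshold
```

Because state accumulates across requests, splitting a key into pieces only delays the alert; it does not avoid it.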


LLM03: Supply Chain Vulnerabilities (Partial)

Malicious or tampered tools, packages, or plugins compromise the application through its dependencies. In the MCP world, this means poisoned tool servers.

Pipelock covers the MCP side of this threat.

Not covered: dependency scanning, model provenance, package integrity. Use Trivy or Dependabot for that.


LLM04: Data and Model Poisoning (Out of Scope)

Training data gets manipulated to embed backdoors or biases into the model itself.

Pipelock operates at runtime, not during model development. Training data integrity is a model-level concern. Nothing a network proxy can do about it.


LLM05: Improper Output Handling (Moderate)

LLM outputs get passed to downstream systems without validation. The model generates a URL, the application fetches it. The model generates SQL, the application runs it. If that output contains XSS, SSRF payloads, or command injection, the downstream system is compromised.

Pipelock catches some of this at the network boundary.

Limitation: Pipelock scans content entering the agent and blocks dangerous outbound requests. It doesn’t control what the agent does with clean content after scanning. That’s the application’s job.


LLM06: Excessive Agency (Strong)

The model has more permissions, autonomy, or tool access than the task requires. Combined with prompt injection, excessive agency turns a content-reading agent into one that can delete databases, send emails, or transfer money.

This is Pipelock’s second core strength after injection detection.


LLM07: System Prompt Leakage (Moderate)

The system prompt gets exposed through crafted queries, revealing internal instructions, API keys, or business logic embedded in the prompt.

Pipelock catches system prompt exfiltration through network traffic.

Limitation: Pipelock catches prompts being sent over the network. It can’t prevent the model from revealing prompt content in its conversational output. That requires model-level controls or application-level output filtering.
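One way to picture network-side prompt-leak detection is shingle matching: fingerprint the known system prompt with overlapping windows and flag any outbound body that reproduces a long enough run of it. The shingle size and normalization here are assumptions:

```python
def leaks_prompt(outbound: str, system_prompt: str, shingle: int = 40) -> bool:
    """True if the outbound payload contains any contiguous run of the
    system prompt at least `shingle` characters long."""
    # Normalize whitespace so reflowed text still matches.
    text = " ".join(system_prompt.split())
    body = " ".join(outbound.split())
    return any(
        text[i:i + shingle] in body
        for i in range(max(1, len(text) - shingle + 1))
    )
```

Matching contiguous runs rather than the whole prompt means partial leaks trip the check too, at the cost of a tunable false-positive rate.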


LLM08: Vector and Embedding Weaknesses (Out of Scope)

Attackers manipulate vector embeddings in RAG pipelines, injecting malicious content through similarity search or poisoning the knowledge base.

This is an application-layer concern; Pipelock operates at the network transport layer. Vector database internals, embedding generation, and retrieval ranking sit between the application and its data store, not between the agent and the internet.


LLM09: Misinformation (Out of Scope)

The model generates false or fabricated information.

Evaluating truthfulness requires semantic analysis at the model or application layer. A network proxy scans for security threats (injection, exfiltration, SSRF), not factual accuracy.


LLM10: Unbounded Consumption (Partial)

The application allows uncontrolled resource consumption: excessive API calls, token usage, or data transfer that can enable denial of service.

Pipelock covers the network side of this threat.

Not covered: token usage, compute time, or memory consumption at the model or application layer.
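A per-domain byte budget of the kind mentioned above can be sketched as sliding-window accounting. Window length, limits, and the class API are illustrative, not Pipelock defaults:

```python
import time
from typing import Optional

class ByteBudget:
    """Cap outbound bytes per domain within a sliding time window."""

    def __init__(self, limit_bytes: int, window_s: float = 3600.0):
        self.limit = limit_bytes
        self.window = window_s
        # domain -> list of (timestamp, bytes sent)
        self.events: dict[str, list[tuple[float, int]]] = {}

    def allow(self, domain: str, size: int, now: Optional[float] = None) -> bool:
        """Return True and record the send if it fits the budget."""
        now = time.monotonic() if now is None else now
        # Drop events that have aged out of the window.
        log = [(t, n) for (t, n) in self.events.get(domain, [])
               if now - t < self.window]
        if sum(n for _, n in log) + size > self.limit:
            self.events[domain] = log
            return False
        log.append((now, size))
        self.events[domain] = log
        return True
```

A global budget is the same structure keyed on a single bucket instead of per domain.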


The three OWASP frameworks

There are three separate OWASP frameworks for AI security. Pipelock has coverage mappings for all of them:

  1. Top 10 for LLM Applications (2025): this page. Model and application risks. 7/10 covered.
  2. Top 10 for Agentic Applications (ASI01-ASI10): agent-specific risks like tool misuse, rogue agents, and inter-agent attacks. 10/10 covered (3 strong, 3 moderate, 4 partial).
  3. Agentic AI Threats and Mitigations (T1-T15): the broadest framework, 15 threats. 12/15 covered (7 strong, 2 moderate, 3 partial).

Pipelock maps strongest against the agentic frameworks because it’s an infrastructure tool built for agent security. The LLM Top 10 includes model-level concerns (training data, embeddings, hallucination) that sit at a different layer.

Further reading