Attack breakdowns, CVE analysis, runtime defense strategies, and agent firewall implementation. All research is backed by working code in Pipelock.
- March 25, 2026
What Happens When Your AI Agent Makes an HTTP Request
You gave your AI agent your secrets and network access. Three things can go wrong, and none of them look like traditional security problems.
- March 11, 2026
One request looks clean. Five requests leak your AWS key.
Per-request DLP scans each request in isolation. An agent that splits a secret across five requests gets five clean scans and a successful exfiltration. Cross-request detection fixes that.
- March 8, 2026
We built a test corpus for AI agent egress security tools
72 attack cases across 8 categories. Secret exfiltration, prompt injection, MCP tool poisoning, chain detection. Any security tool can run against it. No vendor lock-in.
- March 6, 2026
Your agent leaks secrets in POST bodies, not just URLs
URL scanning catches secrets in hostnames and query strings. But agents also make POST requests. Secrets in JSON bodies, form fields, multipart uploads, and HTTP headers bypass URL-level DLP entirely.
- March 5, 2026
Guardrails deleted, now what?
OBLITERATUS and similar tools remove safety guardrails from open-weight models using weight ablation. When the model won't refuse, your only defense is the network layer.
- March 5, 2026
Your MCP server's tool descriptions are an attack surface
MCP tool descriptions go straight into your agent's context window. A malicious server hides instructions in them. Your agent reads them and obeys. Here's the attack, three variants, and what catches it at the network layer.
- March 3, 2026
CVE-2026-25253: WebSocket Hijacking in OpenClaw AI Agents
A CVSS 8.8 vulnerability in OpenClaw lets attackers hijack agent sessions via cross-site WebSocket. The attack chain, what each step does, and how to add defense-in-depth.
- March 3, 2026
Your AI agent leaks API keys through DNS queries
Most DLP tools scan HTTP bodies. Your secrets leak before that, in the DNS lookup. Here's the attack, the proof, and why scan ordering matters.
- February 24, 2026
Every protocol your agent speaks, scanned
AI agents talk over HTTP, MCP, and WebSocket. Each protocol has its own attack surface. Here's what can go wrong on each one.
- February 22, 2026
Your Agent Just Leaked Your AWS Keys: The Attack and Fix
A prompt injection tells your coding agent to exfiltrate credentials via HTTP. No malware. Here's the attack, the output, and the config that stops it.
- February 21, 2026
What is an agent firewall?
AI agents make HTTP requests, call tools, and handle credentials. An agent firewall scans traffic in both directions before anything gets through.
- February 14, 2026
EU AI Act Runtime Security: What You Need Before August
The EU AI Act's high-risk requirements take effect August 2, 2026. The compliance standard won't land until Q4. Here's what to build now if you're running AI agents.
- February 13, 2026
The First AI Agent Espionage Campaign: What Defenses Matter
Anthropic disclosed GTG-1002, the first AI agent espionage campaign. A state actor jailbroke Claude Code for autonomous hacking. What happened and which defenses work.
- February 11, 2026
What's next for Pipelock: the v0.2 roadmap
GitHub Actions, MCP input scanning, smart DLP, and what Pipelock Pro will look like.
- February 10, 2026
Securing Claude Code with Pipelock
A practical guide to wrapping Claude Code's MCP servers with Pipelock for runtime prompt injection and credential leak protection.
- February 9, 2026
283 ClawHub Skills Are Leaking Your Secrets
Snyk found 283 ClawHub skills leaking API keys through the LLM context window. Static scanning can't catch runtime exfiltration. Here's what can.
- February 8, 2026
Lateral movement in multi-agent LLM systems
When one compromised agent can pivot to others through shared context, MCP servers, or tool delegation, a single injection compromises the entire mesh.