- March 25, 2026
What Happens When Your AI Agent Makes an HTTP Request
You gave your AI agent your secrets and network access. Three things can go wrong, and none of them look like traditional security problems.
- March 11, 2026
One request looks clean. Five requests leak your AWS key.
Per-request DLP scans each request in isolation. An agent that splits a secret across five requests gets five clean scans and a successful exfiltration. Cross-request detection fixes that.
- March 5, 2026
Guardrails deleted. Now what?
OBLITERATUS and similar tools remove safety guardrails from open-weight models using weight ablation. When the model won't refuse, your only defense is the network layer.
- February 24, 2026
Every protocol your agent speaks, scanned
AI agents talk over HTTP, MCP, and WebSocket. Each protocol has its own attack surface. Here's what can go wrong on each one.
- February 22, 2026
Your Agent Just Leaked Your AWS Keys: The Attack and Fix
A prompt injection tells your coding agent to exfiltrate credentials via HTTP. No malware. Here's the attack, the output, and the config that stops it.
- February 21, 2026
What is an agent firewall?
AI agents make HTTP requests, call tools, and handle credentials. An agent firewall scans traffic in both directions before anything gets through.
- February 11, 2026
What's next for Pipelock: the v0.2 roadmap
GitHub Actions, MCP input scanning, smart DLP, and what Pipelock Pro will look like.
- February 10, 2026
Securing Claude Code with Pipelock
A practical guide to wrapping Claude Code's MCP servers with Pipelock for runtime protection against prompt injection and credential leaks.
- February 9, 2026
283 ClawHub Skills Are Leaking Your Secrets
Snyk found 283 ClawHub skills leaking API keys through the LLM context window. Static scanning can't catch runtime exfiltration. Here's what can.