PipeLab

MCP

  • March 25, 2026

    What Happens When Your AI Agent Makes an HTTP Request

    You gave your AI agent your secrets and network access. Three things can go wrong, and none of them look like traditional security problems.

  • March 8, 2026

    We built a test corpus for AI agent egress security tools

72 attack cases across 8 categories: secret exfiltration, prompt injection, MCP tool poisoning, chain detection, and more. Any security tool can run against it. No vendor lock-in.

  • February 24, 2026

    Every protocol your agent speaks, scanned

    AI agents talk over HTTP, MCP, and WebSocket. Each protocol has its own attack surface. Here's what can go wrong on each one.

  • February 10, 2026

    Securing Claude Code with Pipelock

A practical guide to wrapping Claude Code's MCP servers with Pipelock for runtime protection against prompt injection and credential leaks.

  • February 9, 2026

    283 ClawHub Skills Are Leaking Your Secrets

    Snyk found 283 ClawHub skills leaking API keys through the LLM context window. Static scanning can't catch runtime exfiltration. Here's what can.

Security tools for AI agents.