The EU AI Act’s high-risk requirements take effect August 2, 2026. Articles 9, 12, 14, and 15 require risk management, audit logging, human oversight, and cybersecurity controls for AI systems.

Most compliance tools focus on model governance (training data, bias, documentation). None of them cover what happens when your AI agent makes HTTP requests, calls MCP tools, or tries to exfiltrate secrets at runtime.

Pipelock fills that gap. It sits between your AI agent and the network. MCP proxy mode scans tool arguments and responses bidirectionally. Fetch proxy mode scans fetched response content and URL parameters. Forward proxy mode (CONNECT) filters by hostname, with optional TLS interception for full body inspection. The controls it enforces map directly to EU AI Act articles.
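Mechanically, hostname filtering in forward-proxy mode rides on the HTTP CONNECT handshake: the client announces the destination before any bytes flow, so the proxy can decide without decrypting anything. A minimal sketch of the general technique (not Pipelock's code), including subdomain matching against a blocklist:

```python
def connect_target(request_line: str) -> tuple[str, int]:
    """Parse the authority from an HTTP CONNECT request line,
    e.g. 'CONNECT api.example.com:443 HTTP/1.1'."""
    method, authority, _version = request_line.split(" ")
    if method != "CONNECT":
        raise ValueError("not a CONNECT request")
    host, _, port = authority.rpartition(":")
    return host, int(port)

def allowed(host: str, blocklist: set[str]) -> bool:
    """Block exact matches and any subdomain of a blocked domain."""
    host = host.lower().rstrip(".")
    return not any(host == d or host.endswith("." + d) for d in blocklist)
```

Anything beyond the hostname (paths, bodies, headers) stays opaque at this layer, which is why full body inspection requires TLS interception.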

What Gets Enforced August 2

ArticleRequirementPenalty
Art. 9Risk management system (continuous, iterative)Up to EUR 15M or 3% turnover
Art. 12Automatic event logging for traceabilityUp to EUR 15M or 3% turnover
Art. 14Human oversight with override and stop capabilityUp to EUR 15M or 3% turnover
Art. 15Cybersecurity, resilience, fail-safe mechanismsUp to EUR 15M or 3% turnover
Art. 26Deployer monitoring and 6-month log retentionUp to EUR 15M or 3% turnover

Industry estimates put conformity assessment at 8-14 months. If you haven’t started planning, now is the time.

How Pipelock Maps to Each Article

Article 9: Risk Management

Article 9 requires identifying, analyzing, and mitigating risks through design and continuous monitoring.

Pipelock’s architecture is the mitigation: capability separation. The agent holds secrets but has no network access. The proxy has network access but no secrets. This eliminates network-based credential exfiltration by design, not by policy.
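At the process level, one way to approximate this separation yourself is to launch the network-facing proxy with a scrubbed environment, so credentials present in the agent's environment never reach the component that can talk to the network. A hypothetical sketch (the `ALLOWED_ENV` allowlist is illustrative, not part of Pipelock):

```python
import os

# Hypothetical allowlist: only what a proxy process plausibly needs.
ALLOWED_ENV = {"PATH", "HOME", "LANG"}

def scrubbed_env() -> dict:
    """Environment for the proxy process: secrets do not pass through."""
    return {k: v for k, v in os.environ.items() if k in ALLOWED_ENV}
```

Pass the result as the `env` argument to `subprocess.Popen` (or your process manager's equivalent) when starting the proxy.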

On top of that, the 11-layer scanner pipeline classifies risks at runtime:

| Scanner Layer | Risk Category |
|---|---|
| Domain blocklist | Known malicious destinations |
| DLP (46 credential patterns) | Secret exfiltration |
| SSRF protection | Internal infrastructure probing |
| Rate limiting | Abuse and resource exhaustion |
| Entropy analysis | Encoded or obfuscated data exfiltration |
| MCP scanning | Tool poisoning and injection |
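Entropy analysis, for instance, rests on a simple observation: random or encoded bytes (base64 blobs, hex dumps, ciphertext) carry more bits per character than prose. A sketch of the general technique, with an illustrative threshold (Pipelock's actual thresholds are configurable):

```python
import math
from collections import Counter

def shannon_entropy(data: str) -> float:
    """Bits per character: ~0 for repeated text, near 6 for random base64."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encoded(payload: str, threshold: float = 4.5, min_len: int = 32) -> bool:
    """Flag payloads long and dense enough to suggest encoded exfiltration."""
    return len(payload) >= min_len and shannon_entropy(payload) > threshold
```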

Every scan decision is logged. Every threshold is configurable. Hot-reload (fsnotify + SIGHUP) lets you update policies without restarting.
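The SIGHUP half of hot-reload follows a standard Unix pattern: the signal handler only sets a flag, and the main loop re-reads the config at the next safe point. A generic sketch of that pattern (not Pipelock's implementation):

```python
import signal
import threading

reload_requested = threading.Event()

def _on_sighup(signum, frame):
    # Do no real work in the handler; just flag the reload so the
    # main loop can re-read the config between requests.
    reload_requested.set()

signal.signal(signal.SIGHUP, _on_sighup)
```

Against a running instance, `kill -HUP "$(pgrep pipelock)"` triggers the same reload path.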

Article 12: Record-Keeping

Article 12 requires automatic event logging that identifies risk situations and supports post-market monitoring.

Pipelock logs every request as structured JSON. Event shapes vary by type:

```json
{
  "event": "blocked",
  "method": "GET",
  "url": "https://api.example.com/data",
  "scanner": "dlp",
  "reason": "dlp_aws_access_key",
  "client_ip": "127.0.0.1",
  "request_id": "abc123"
}
```

Blocked events include scanner and reason. Allowed events add status code, size, and duration. Forward logs to your SIEM via webhook or syslog (both built in). Prometheus metrics (/metrics) and a ready-to-import Grafana dashboard provide real-time monitoring.
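Because the log is structured JSONL, audit queries reduce to a few lines. A sketch, assuming one JSON event per line with the fields shown above:

```python
import json
from collections import Counter

def blocked_by_scanner(log_lines) -> Counter:
    """Tally blocked events per scanner from a JSONL audit log."""
    counts = Counter()
    for line in log_lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("event") == "blocked":
            counts[event.get("scanner", "unknown")] += 1
    return counts
```

Feed it an open file handle (`blocked_by_scanner(open("pipelock.log"))`) or any iterable of lines.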

In multi-agent deployments, run separate pipelock instances per agent with distinct log files for per-agent traceability.

Article 14: Human Oversight

Article 14 requires the ability to understand system operation, detect anomalies, override outputs, and intervene with a “stop button.”

Pipelock provides four mechanisms:

  1. HITL approval (action: ask): flagged requests pause the agent and present a terminal prompt. The operator approves, denies, or strips sensitive content. Timeout defaults to deny (fail-closed).

  2. Kill switch: four independent activation sources (config file, API endpoint, SIGUSR1 signal, sentinel file). Any one active blocks all traffic. The API runs on a dedicated port so agents cannot self-deactivate.

  3. Config modes: audit (log only), balanced (default), strict (aggressive blocking). These map directly to different risk tolerances per Art. 14(3).

  4. Prometheus + Grafana: real-time visibility into what agents are doing, which requests are blocked, and why.
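The fail-closed HITL semantics reduce to: wait for a verdict with a deadline, and treat silence or anything unrecognized as a denial. A minimal sketch of that logic (a queue stands in for the terminal prompt):

```python
import queue

def hitl_decision(answers: queue.Queue, timeout: float = 30.0) -> str:
    """Wait for an operator verdict; on timeout or bad input, fail closed."""
    try:
        verdict = answers.get(timeout=timeout)
    except queue.Empty:
        return "deny"  # no answer within the window means no
    return verdict if verdict in {"approve", "deny", "strip"} else "deny"
```

The important property is that every non-success path (timeout, typo, crash of the prompting channel) converges on "deny".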

Article 15: Cybersecurity and Resilience

Article 15 requires protection against unauthorized alteration, adversarial attacks, confidentiality breaches, and fail-safe behavior.

| Art. 15 Requirement | Pipelock Control |
|---|---|
| Adversarial examples (Art. 15(5)) | Content scanning with NFKC normalization, zero-width stripping, case-insensitive matching |
| Confidentiality attacks (Art. 15(5)) | DLP scanning (46 credential patterns), env leak detection (raw + base64 + hex), entropy analysis |
| Data poisoning (Art. 15(5)) | File integrity monitoring (SHA256 manifests), Ed25519 signing, response injection scanning |
| Unauthorized alteration (Art. 15(5)) | Capability separation prevents agent manipulation into exfiltrating data |
| Fail-safe mechanisms (Art. 15(4)) | Fail-closed architecture: scan error, HITL timeout, parse failure, DNS error all default to block |
| Resilience to faults (Art. 15(4)) | DNS rebinding protection, IPv4-mapped IPv6 normalization, TLS interception with cert cache |
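The normalization steps in the first row are standard Unicode hygiene: compatibility-fold, strip zero-width characters, case-fold, then match. A sketch of that canonicalization (the zero-width set shown is illustrative, not exhaustive):

```python
import unicodedata

# A few common zero-width / invisible characters used to split keywords.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff", "\u2060"}

def canonicalize(text: str) -> str:
    """Normalize before pattern matching so lookalike tricks don't evade it."""
    text = unicodedata.normalize("NFKC", text)       # fold fullwidth etc.
    text = "".join(ch for ch in text if ch not in ZERO_WIDTH)
    return text.casefold()                           # case-insensitive match
```

Fullwidth "ＡＷＳ" plus a zero-width space canonicalizes to plain "aws", so a pattern written once matches the obfuscated variants too.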

Article 26: Deployer Obligations

Article 26 requires deployers to monitor AI operation and retain logs for at least 6 months.

Pipelock provides monitoring infrastructure (Prometheus, /health for K8s liveness probes, structured logs) but does not enforce retention periods. Whether logs are kept for 6 months depends on your log infrastructure.
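If you manage retention yourself rather than in a SIEM, a pruning job that deletes nothing younger than the six-month minimum is enough to avoid under-retaining. A hypothetical sketch (the file naming and the 183-day window are assumptions, not Pipelock behavior):

```python
import time
from pathlib import Path

RETENTION_DAYS = 183  # a little over the Art. 26 six-month minimum

def prune_old_logs(log_dir, retention_days: int = RETENTION_DAYS):
    """Remove rotated log files whose mtime falls outside the window."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for path in Path(log_dir).glob("*.log*"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return sorted(removed)
```

Run it from cron or a systemd timer; anything newer than the window is never touched.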

What Pipelock Does NOT Cover

Pipelock is a runtime network security layer. It does not cover model governance: training data quality, bias testing, and technical documentation are out of scope, as is the organizational side of conformity assessment.

Pipelock is one component of a compliance stack. Use it alongside model governance tools (Credo AI, Holistic AI) and organizational processes.

NIST AI RMF Crosswalk

Pipelock’s controls also map to the NIST AI Risk Management Framework 1.0:

| NIST Function | Pipelock Features | EU AI Act Cross-Reference |
|---|---|---|
| GOVERN | Capability separation, fail-closed design, per-instance isolation | Art. 9, 14 |
| MAP | 11-layer risk classification, config presets for risk tolerance | Art. 9 |
| MEASURE | Prometheus metrics, structured audit logs, alerting | Art. 12, 15 |
| MANAGE | HITL override, kill switch, hot-reload, MCP scanning | Art. 14, 15 |

The full mapping with subcategory-level detail is in the EU AI Act Compliance Mapping document.

Enforcement Timeline

| Date | Milestone |
|---|---|
| August 1, 2024 | EU AI Act enters into force |
| February 2, 2025 | Prohibited AI practices (Art. 5) take effect |
| August 2, 2025 | General-purpose AI model obligations (Art. 51-55) take effect |
| August 2, 2026 | High-risk AI system requirements take effect (Art. 9, 12-15, 26) |
| August 2, 2027 | Extended transition for safety-component AI |

Penalties: up to EUR 35M or 7% global turnover (prohibited practices), EUR 15M or 3% (high-risk violations), EUR 7.5M or 1% (misleading information). SME and startup fines are capped at the lower of percentage or absolute amount.

Get Started

```bash
# Install
brew install luckyPipewrench/tap/pipelock

# Generate a config
pipelock generate config --preset balanced > pipelock.yaml

# Run with the generated config
pipelock run --config pipelock.yaml
```

Set HTTPS_PROXY=http://127.0.0.1:8888 on your agent. Every connection is now logged and filtered. Enable TLS interception for full HTTPS body scanning.
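Pointing an agent at the proxy is plain environment configuration; Python's stdlib and common clients like requests and httpx read these variables by default. A sketch (port 8888 matches the example above; adjust to your config):

```python
import os
import urllib.request

# Route the agent's HTTP(S) traffic through the local proxy.
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:8888"
os.environ["HTTP_PROXY"] = "http://127.0.0.1:8888"

# The stdlib resolves proxies from the environment.
proxies = urllib.request.getproxies()
```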

For the full compliance mapping with NIST crosswalk, see the source document on GitHub.