Using Pipelock with OpenAI Agents SDK#
Pipelock wraps MCP servers used by OpenAI Agents as a stdio proxy, scanning
every request and response for credential leaks, prompt injection, and tool
description poisoning. This guide covers MCPServerStdio integration and
Docker Compose deployment.
Quick Start#
# 1. Install pipelock
go install github.com/luckyPipewrench/pipelock/cmd/pipelock@latest
# 2. Generate a config (or copy a preset)
pipelock generate config --preset balanced > pipelock.yaml
# 3. Verify
pipelock version
import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServerStdio
async def main():
async with MCPServerStdio(
name="Pipelock Filesystem",
params={
"command": "pipelock",
"args": [
"mcp", "proxy",
"--config", "pipelock.yaml",
"--",
"npx", "-y", "@modelcontextprotocol/server-filesystem", "/workspace"
],
},
) as server:
agent = Agent(
name="Research Assistant",
instructions="You help users research information using available tools.",
mcp_servers=[server],
)
result = await Runner.run(agent, "List files in the workspace")
print(result.final_output)
asyncio.run(main())
That's it. Pipelock intercepts all MCP traffic between the agent and the
filesystem server, scanning in both directions.
How It Works#
OpenAI Agent <--> pipelock mcp proxy <--> MCP Server
(client) (scan both ways) (subprocess)
Pipelock scans three things:
- Outbound requests. Catches credentials leaking through tool arguments
(API keys, tokens, private key material). - Inbound responses. Catches prompt injection in tool results.
- Tool descriptions. Catches poisoned tool definitions and mid-session
rug-pull changes.
Integration Patterns#
Pattern A: Single MCP Server#
The simplest case: one MCP server wrapped with Pipelock.
import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServerStdio
async def main():
async with MCPServerStdio(
name="Pipelock Filesystem",
params={
"command": "pipelock",
"args": [
"mcp", "proxy",
"--config", "pipelock.yaml",
"--",
"npx", "-y", "@modelcontextprotocol/server-filesystem", "/workspace"
],
},
) as server:
agent = Agent(
name="File Analyst",
instructions="You analyze files in the workspace.",
mcp_servers=[server],
)
result = await Runner.run(agent, "List all files in the workspace")
print(result.final_output)
asyncio.run(main())
Pattern B: Multiple MCP Servers#
Note: Parenthesized context managers (
async with (...)) require Python
3.10+.
Wrap each server independently. If one returns a poisoned response, Pipelock
blocks it without affecting the others:
import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServerStdio
async def main():
async with (
MCPServerStdio(name="Pipelock Filesystem", params={
"command": "pipelock",
"args": ["mcp", "proxy", "--config", "pipelock.yaml", "--",
"npx", "-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
}) as filesystem,
MCPServerStdio(name="Pipelock SQLite", params={
"command": "pipelock",
"args": ["mcp", "proxy", "--config", "pipelock.yaml", "--",
"python", "-m", "mcp_server_sqlite", "--db", "/data/app.db"],
}) as database,
MCPServerStdio(name="Pipelock GitHub", params={
"command": "pipelock",
"args": ["mcp", "proxy", "--config", "pipelock.yaml", "--",
"npx", "-y", "@modelcontextprotocol/server-github"],
}) as github,
):
agent = Agent(
name="Dev Assistant",
instructions="You help with code, data, and project management.",
mcp_servers=[filesystem, database, github],
)
result = await Runner.run(agent, "List project issues")
print(result.final_output)
asyncio.run(main())
Pattern C: Mixed Transports#
Wrap stdio servers with Pipelock. Remote servers connect directly and are not
covered by the stdio proxy:
import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServerStdio, MCPServerStreamableHttp
async def main():
async with (
MCPServerStdio(name="Pipelock Filesystem", params={
"command": "pipelock",
"args": ["mcp", "proxy", "--config", "pipelock.yaml", "--",
"npx", "-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
}) as local,
# Remote server: NOT scanned by pipelock
# Pipelock's MCP proxy only wraps stdio servers.
MCPServerStreamableHttp(name="Remote API", params={
"url": "https://api.example.com/mcp",
}) as remote,
):
agent = Agent(
name="Hybrid Agent",
instructions="You use local and remote tools.",
mcp_servers=[local, remote],
)
result = await Runner.run(agent, "Use local and remote tools")
print(result.final_output)
asyncio.run(main())
Note:
MCPServerSseis deprecated. UseMCPServerStreamableHttpfor new
remote MCP connections.
Note: Pipelock's MCP proxy only wraps stdio-based servers. Remote HTTP/SSE
MCP connections go directly to the remote endpoint and bypass Pipelock. For
outbound HTTP traffic from your agent code (API calls, web fetches), route those
through pipelock run as a fetch proxy. See the
HTTP fetch proxy section below.
Tip: The SDK supports strict JSON schema validation on MCP tools, which
pairs well with Pipelock. The SDK validates schema structure while Pipelock
validates content:
from agents import Agent
from agents.agent import MCPConfig
agent = Agent(
...,
mcp_config=MCPConfig(convert_schemas_to_strict=True),
)
Pattern D: Multi-Agent Handoffs#
When using the SDK's handoff feature, each agent can have its own set of
Pipelock-wrapped MCP servers with different configs:
import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServerStdio
async def main():
async with (
MCPServerStdio(name="Pipelock Filesystem (warn)", params={
"command": "pipelock",
"args": ["mcp", "proxy", "--config", "pipelock-warn.yaml", "--",
"npx", "-y", "@modelcontextprotocol/server-filesystem", "/data"],
}) as research_fs,
MCPServerStdio(name="Pipelock Filesystem (strict)", params={
"command": "pipelock",
"args": ["mcp", "proxy", "--config", "pipelock-strict.yaml", "--",
"npx", "-y", "@modelcontextprotocol/server-filesystem", "/output"],
}) as writer_fs,
):
writer = Agent(
name="Writer",
instructions="You write reports based on research.",
mcp_servers=[writer_fs],
)
researcher = Agent(
name="Researcher",
instructions="You research topics using web and file tools.",
mcp_servers=[research_fs],
handoffs=[writer],
)
result = await Runner.run(researcher, "Research and write a report on workspace contents")
print(result.final_output)
asyncio.run(main())
Docker Compose#
Network-isolated deployment where the agent container has no direct internet
access:
networks:
pipelock-internal:
internal: true
driver: bridge
pipelock-external:
driver: bridge
services:
pipelock:
# Pin to a specific version for production (e.g., ghcr.io/luckypipewrench/pipelock:2.1.2)
image: ghcr.io/luckypipewrench/pipelock:latest
networks:
- pipelock-internal
- pipelock-external
command: ["run", "--listen", "0.0.0.0:8888", "--config", "/config/pipelock.yaml"]
volumes:
- ./pipelock.yaml:/config/pipelock.yaml:ro
healthcheck:
test: ["/pipelock", "healthcheck"]
interval: 10s
timeout: 3s
start_period: 5s
retries: 3
openai-agent:
build: .
networks:
- pipelock-internal
environment:
- OPENAI_API_KEY=${OPENAI_API_KEY}
- PIPELOCK_FETCH_URL=http://pipelock:8888/fetch
depends_on:
pipelock:
condition: service_healthy
The agent container can only reach the pipelock service. All HTTP traffic goes
through the fetch proxy. MCP servers running as subprocesses inside the agent
container are wrapped with pipelock mcp proxy as shown above.
You can also generate this template with:
pipelock generate docker-compose --agent generic
HTTP Fetch Proxy#
For scanning HTTP traffic from OpenAI agents (web fetches, API calls), run
Pipelock as a fetch proxy:
pipelock run --config pipelock.yaml
Configure your agent to route HTTP requests through http://localhost:8888/fetch:
import requests
def fetch_through_pipelock(url: str) -> str:
resp = requests.get(
"http://localhost:8888/fetch",
params={"url": url},
headers={"X-Pipelock-Agent": "openai-research"},
timeout=30,
)
resp.raise_for_status()
data = resp.json()
if data.get("blocked"):
raise RuntimeError(f"Pipelock blocked request: {data.get('block_reason')}")
return data.get("content", "")
TLS Interception#
When using pipelock as an HTTP forward proxy (HTTPS_PROXY), CONNECT tunnels
are opaque by default: pipelock only sees the hostname, not the request body or
response content. Enabling TLS interception closes this gap by performing a MITM
on HTTPS connections, giving you full DLP on request bodies and response
injection detection through CONNECT tunnels.
To enable it:
- Generate a CA and enable TLS interception (see the TLS Interception Guide)
- Trust the CA in your Python environment:
export SSL_CERT_FILE=~/.pipelock/ca.pem
# Or for requests/httpx specifically:
export REQUESTS_CA_BUNDLE=~/.pipelock/ca.pem
MCP proxy mode (stdio wrapping) does not require TLS interception. It scans
traffic in both directions without certificates.
Choosing a Config#
| Config | Action | Best For |
|---|---|---|
balanced | warn (default) | Recommended starting point (--preset balanced) |
strict | block (default) | High-security, production (--preset strict) |
generic-agent.yaml | warn (default) | Agent-specific tuning (copy from configs/) |
claude-code.yaml | block (default) | Unattended coding agents (copy from configs/) |
Start with balanced to log detections without blocking. Review the logs,
tune thresholds, then switch to strict for production.
Troubleshooting#
MCP server not starting#
Verify the command works without Pipelock first:
npx -y @modelcontextprotocol/server-filesystem /tmp
Then wrap it:
pipelock mcp proxy -- npx -y @modelcontextprotocol/server-filesystem /tmp
Seeing Pipelock output#
Pipelock logs to stderr. To see real-time output during development:
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | \
pipelock mcp proxy --config pipelock.yaml -- npx -y @modelcontextprotocol/server-filesystem /tmp
False positives#
Switch to warn mode to see what's being flagged without blocking:
response_scanning:
action: warn
mcp_input_scanning:
action: warn
mcp_tool_scanning:
action: warn
Review stderr output, then tighten thresholds.
Config file not found#
Use absolute paths if relative paths don't resolve:
MCPServerStdio(
name="Pipelock Filesystem",
params={
"command": "pipelock",
"args": ["mcp", "proxy", "--config", "/etc/pipelock/config.yaml", "--",
"npx", "-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
},
)