Control Zero is in early beta. We ship daily. Feedback: team@controlzero.ai

Control Zero for Developers

Govern your AI app at runtime.

A transparent gateway proxy that evaluates every request. SDKs for Python, Node.js, and Go that enforce policies in-process. The same governance layer, whether you ship a weekend project or a fleet of autonomous agents.

$ pip install controlzero
$ npm install @controlzero/sdk
$ go get controlzero.ai/sdk/go

No account required. Works in local-only mode.

// 001

Gateway Proxy

Your AI agent already talks to an LLM through an API base URL. The gateway sits between your agent and the provider. Change one environment variable. Every request passes through your policies before reaching the model. Every response is evaluated before it reaches your application.

No SDK. No code changes. No new dependencies. The gateway is protocol-compatible with OpenAI and Anthropic APIs. Your existing client libraries, LangChain pipelines, and MCP tool calls work without modification.

PII detection on inbound prompts. Model access control on outbound requests. Cost cap enforcement on token usage. Full audit trail on every decision. All from a single config change.

One line of config to integrate. Zero code changes required.
# Before
ANTHROPIC_API_URL=https://api.anthropic.com
# After
ANTHROPIC_API_URL=https://gateway.controlzero.ai
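The swap works because most client libraries build request URLs from an environment-supplied base. A minimal sketch of that pattern (the `endpoint` helper and path are illustrative, not part of any SDK):

```python
import os

# Clients that read their base URL from the environment pick up the
# gateway automatically -- no code change, just the env var swap above.
def endpoint(path: str) -> str:
    base = os.environ.get("ANTHROPIC_API_URL", "https://api.anthropic.com")
    return base.rstrip("/") + path

os.environ["ANTHROPIC_API_URL"] = "https://gateway.controlzero.ai"
print(endpoint("/v1/messages"))
# https://gateway.controlzero.ai/v1/messages
```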
AI Agent (Claude, GPT, or any LLM agent) → API call → Control Zero Gateway: pre-flight guard (model / cost / PII), compiled policy engine, interceptor (allow / block) → forward → LLM Provider (Anthropic, OpenAI, Ollama). Every decision lands in an immutable, queryable audit trail.

// 002

SDK Integration

When you need per-tool governance, secret injection, or local policy evaluation, wrap your AI calls with the SDK. Three languages. Three lines of code. Works without an API key in local-only mode. Point it at a local policy file and everything runs on your machine.

from controlzero import ControlZero

# Cloud mode: policies sync from the dashboard
cz = ControlZero(api_key="cz_live_...")

# Local-only mode: no account, no network calls
cz = ControlZero(policy_path="./policies.json")

await cz.initialize()

# Every tool call is governed
result = await cz.call_tool(
    "github", "list_issues",
    {"repo": "acme/app"}
)
# {"decision": "allow", "tool": "github.list_issues"}
Your Application (Python, Node.js, or Go) → cz.call_tool() → Control Zero SDK: policy check (compiled engine evaluates rules in-process), secret injection (API keys and credentials injected at runtime, no hardcoded secrets), audit logging (background batch upload, every decision recorded with full context) → tool executes (github.list_issues) → result (decision: allow).

// 003

What Gets Enforced

PII Detection

Detect and mask sensitive data in prompts before they reach the model. Names, emails, credit cards, and custom patterns. Applied on every request, logged on every match.
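To make the masking step concrete, here is an illustrative sketch of prompt-side PII masking, not the Control Zero engine itself: each pattern is replaced with a typed placeholder before the prompt is forwarded, and every match is reported so it can be logged.

```python
import re

# Illustrative PII patterns; a real engine would use far more robust detectors.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(prompt: str) -> tuple[str, list[str]]:
    matches = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            matches.append(label)               # log every match type
            prompt = pattern.sub(f"<{label}>", prompt)
    return prompt, matches

masked, found = mask_pii("Contact jane@acme.com to update card 4111 1111 1111 1111")
# masked == "Contact <email> to update card <credit_card>"
```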

Model Access Control

Allow or deny specific LLMs per project, environment, or role. Prevent cost surprises from expensive models. Enforce model boundaries across your entire fleet.
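A sketch of what a per-project, per-environment allowlist check looks like (project names and model sets are hypothetical, not Control Zero configuration):

```python
# Each (project, environment) pair maps to the models its agents may invoke.
ALLOWED_MODELS = {
    ("acme-app", "production"): {"gpt-4"},
    ("acme-app", "development"): {"gpt-4", "gpt-4o-mini", "ollama/llama3"},
}

def model_allowed(project: str, env: str, model: str) -> bool:
    # Deny by default: unknown project/environment pairs get no models.
    return model in ALLOWED_MODELS.get((project, env), set())

model_allowed("acme-app", "production", "gpt-4")          # True
model_allowed("acme-app", "production", "ollama/llama3")  # False
```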

Cost Caps

Set per-request, daily, and monthly token and cost limits. Alerts when you approach thresholds. Hard stops when you hit them. No more runaway spending.
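The alert-then-hard-stop behavior can be sketched as a small budget tracker (a simplified illustration, assuming a daily token cap with an 80% alert threshold; not the actual engine):

```python
class TokenBudget:
    def __init__(self, daily_cap: int, alert_at: float = 0.8):
        self.daily_cap = daily_cap
        self.alert_at = alert_at
        self.used = 0

    def record(self, tokens: int) -> str:
        if self.used + tokens > self.daily_cap:
            return "block"                          # hard stop: cap exceeded
        self.used += tokens
        if self.used >= self.daily_cap * self.alert_at:
            return "alert"                          # approaching the threshold
        return "allow"

budget = TokenBudget(daily_cap=10_000)
budget.record(7_000)   # "allow"
budget.record(2_000)   # "alert"  (9,000 of 10,000 used)
budget.record(5_000)   # "block"  (would exceed the cap)
```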

Full Audit Trail

Every decision recorded with timestamp, policy matched, agent identity, and full context. Searchable, exportable, and compliance-ready from day one.
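A sketch of what one such record might look like as an append-only JSON line (the field names here are assumptions for illustration, not the actual Control Zero schema):

```python
import json
import time

def audit_record(decision: str, policy: str, agent: str, **context) -> str:
    # One JSON line per decision: timestamp, matched policy, agent, context.
    return json.dumps({
        "ts": time.time(),
        "decision": decision,
        "policy": policy,
        "agent": agent,
        "context": context,
    })

line = audit_record("allow", "production-gpt4-only",
                    "agent-42", tool="github.list_issues")
```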

// 004

How Policies Work

Write rules as JSON. Policies are cryptographically signed and evaluated locally by a compiled engine. Manage them in the dashboard, via API, or in a local file.

{
  "name": "production-gpt4-only",
  "effect": "allow",
  "actions": ["llm:invoke"],
  "resources": ["model:gpt-4"],
  "conditions": {
    "max_tokens": 4096,
    "roles": ["senior-engineer"],
    "pii_mask": ["email", "credit_card"]
  }
}
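To show how a rule like the one above might be applied, here is a simplified evaluator. The condition semantics (and the omission of `pii_mask`, which is a transformation rather than an allow/deny check) are assumptions for illustration, not the engine's actual logic:

```python
policy = {
    "name": "production-gpt4-only",
    "effect": "allow",
    "actions": ["llm:invoke"],
    "resources": ["model:gpt-4"],
    "conditions": {"max_tokens": 4096, "roles": ["senior-engineer"]},
}

def evaluate(policy, action, resource, tokens, role):
    # A policy only applies if both action and resource match.
    if action not in policy["actions"] or resource not in policy["resources"]:
        return "no_match"
    cond = policy["conditions"]
    if tokens > cond["max_tokens"] or role not in cond["roles"]:
        return "deny"
    return policy["effect"]

evaluate(policy, "llm:invoke", "model:gpt-4", 2048, "senior-engineer")  # "allow"
evaluate(policy, "llm:invoke", "model:gpt-4", 8192, "senior-engineer")  # "deny"
```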
  • Model allowlists: Restrict which LLMs are available per project or environment.
  • Token and cost limits: Cap spending per request, per day, or per month.
  • Role-based access: Different permissions for different roles and teams.
  • PII masking rules: Mask sensitive data before it reaches the model.
  • Five enforcement modes: Block, warn, audit, shadow, or allow. Start soft, tighten later.
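The five enforcement modes differ in what happens when a request violates policy. A sketch of that dispatch (the outcome shape is an assumption for illustration; comments mark where modes that share an outcome here would differ in practice):

```python
def enforce(mode: str, violation: bool) -> dict:
    # No violation: every mode lets the request proceed and records the decision.
    if not violation:
        return {"proceed": True, "logged": True}
    return {
        "block":  {"proceed": False, "logged": True},
        "warn":   {"proceed": True,  "logged": True},   # proceed, surface a warning
        "audit":  {"proceed": True,  "logged": True},   # proceed, record only
        "shadow": {"proceed": True,  "logged": True},   # evaluate silently for tuning
        "allow":  {"proceed": True,  "logged": False},
    }[mode]

enforce("block", violation=True)   # {"proceed": False, "logged": True}
```

Starting in shadow or audit mode and switching to block once the logs look right is the "start soft, tighten later" path.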

Policy Evaluation Flow

Incoming Request → Model allowed? (no → blocked: unauthorized model) → Cost within budget? (no → blocked: budget exceeded) → PII detected? (yes → masked or blocked) → Forward to LLM → Tool call in response? (no → skip to result) → Policy allows tool? (no → tool call denied) → Response returned. Every decision is audit-logged to an immutable store.
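The flow above is an ordered chain of checks that short-circuits on the first denial. A sketch of that shape (the check functions and request fields are placeholders, not the real pipeline):

```python
def evaluate_request(request: dict, checks: list) -> dict:
    # Run each check in order; stop at the first one that fails.
    for name, check in checks:
        ok, reason = check(request)
        if not ok:
            return {"decision": "block", "stage": name, "reason": reason}
    return {"decision": "allow"}

checks = [
    ("model", lambda r: (r["model"] == "gpt-4", "unauthorized model")),
    ("cost",  lambda r: (r["tokens"] <= 4096, "budget exceeded")),
    ("pii",   lambda r: ("@" not in r["prompt"], "PII found")),
]

evaluate_request({"model": "gpt-4", "tokens": 100, "prompt": "hi"}, checks)
# {"decision": "allow"}
```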

Get started in 60 seconds.

Install the SDK or point at the gateway. Your first policy is live immediately. Free tier. No credit card. Works without an account.