Control Zero for Developers
Govern your AI app at runtime.
A transparent gateway proxy that evaluates every request. SDKs for Python, Node.js, and Go that enforce policies in-process. The same governance layer, whether you ship a weekend project or a fleet of autonomous agents.
$ pip install controlzero
$ npm install @controlzero/sdk
$ go get controlzero.ai/sdk/go
No account required. Works in local-only mode.
// 001
Gateway Proxy
Your AI agent already talks to an LLM through an API base URL. The gateway sits between your agent and the provider. Change one environment variable. Every request passes through your policies before reaching the model. Every response is evaluated before it reaches your application.
No SDK. No code changes. No new dependencies. The gateway is protocol-compatible with OpenAI and Anthropic APIs. Your existing client libraries, LangChain pipelines, and MCP tool calls work without modification.
PII detection on inbound prompts. Model access control on outbound requests. Cost cap enforcement on token usage. Full audit trail on every decision. All from a single config change.
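That single config change can be sketched in a few lines. The gateway URL below is a placeholder, not a real endpoint; the point is that the official OpenAI client reads its base URL from the environment, so existing application code is untouched.

```python
import os

# Placeholder address; substitute your own gateway deployment's URL.
os.environ["OPENAI_BASE_URL"] = "https://gateway.example.internal/v1"

# The official OpenAI client library picks up OPENAI_BASE_URL from the
# environment, so every request now routes through the gateway before
# reaching the provider; no client code changes are needed.
print(os.environ["OPENAI_BASE_URL"])
```

The same pattern applies to Anthropic-compatible clients: override the base URL, and the gateway sits transparently in the request path.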
// 002
SDK Integration
When you need per-tool governance, secret injection, or local policy evaluation, wrap your AI calls with the SDK. Three languages. Three lines of code. Works without an API key in local-only mode. Point it at a local policy file and everything runs on your machine.
from controlzero import ControlZero
# Cloud mode: policies sync from the dashboard
cz = ControlZero(api_key="cz_live_...")
# Local-only mode: no account, no network calls
cz = ControlZero(policy_path="./policies.json")
await cz.initialize()
# Every tool call is governed
result = await cz.call_tool(
"github", "list_issues",
{"repo": "acme/app"}
)
# {"decision": "allow", "tool": "github.list_issues"}// 003
What Gets Enforced
PII Detection
Detect and mask sensitive data in prompts before they reach the model. Names, emails, credit cards, and custom patterns. Applied on every request, logged on every match.
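A rough sketch of what prompt-side masking looks like. This is not the product's actual detector: the patterns and the `[MASKED:*]` replacement format are illustrative only.

```python
import re

# Illustrative patterns; a real detector would use far more robust rules.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(prompt: str) -> str:
    # Replace each match with a labeled mask before the prompt leaves
    # the application boundary.
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[MASKED:{label}]", prompt)
    return prompt

print(mask_pii("Contact alice@acme.com about card 4111 1111 1111 1111"))
```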
Model Access Control
Allow or deny specific LLMs per project, environment, or role. Prevent cost surprises from expensive models. Enforce model boundaries across your entire fleet.
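The model-boundary idea reduces to a per-environment allowlist check. The environment names and model IDs below are made up for illustration, not built-in configuration.

```python
# Hypothetical per-environment model allowlists.
ALLOWED_MODELS = {
    "production": {"gpt-4"},
    "development": {"gpt-4", "gpt-3.5-turbo", "claude-3-haiku"},
}

def model_allowed(environment: str, model: str) -> bool:
    # Unknown environments get an empty allowlist, i.e. deny by default.
    return model in ALLOWED_MODELS.get(environment, set())
```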
Cost Caps
Set per-request, daily, and monthly token and cost limits. Alerts when you approach thresholds. Hard stops when you hit them. No more runaway spending.
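A hard stop on spending can be sketched as a running counter that rejects any request that would cross the limit. The class and exception names here are invented for the sketch, not the SDK's real accounting API.

```python
class BudgetExceeded(Exception):
    """Raised when a request would push spend past the cap (illustrative)."""

class CostCap:
    def __init__(self, daily_limit_usd: float):
        self.daily_limit_usd = daily_limit_usd
        self.spent_usd = 0.0

    def charge(self, tokens: int, usd_per_1k_tokens: float) -> None:
        # Reject before spending: a hard stop, not an after-the-fact alert.
        cost = tokens / 1000 * usd_per_1k_tokens
        if self.spent_usd + cost > self.daily_limit_usd:
            raise BudgetExceeded(
                f"request would exceed ${self.daily_limit_usd:.2f} daily cap"
            )
        self.spent_usd += cost

cap = CostCap(daily_limit_usd=1.00)
cap.charge(tokens=20_000, usd_per_1k_tokens=0.03)  # within budget
```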
Full Audit Trail
Every decision recorded with timestamp, policy matched, agent identity, and full context. Searchable, exportable, and compliance-ready from day one.
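One audit record might look like the JSON line below. The field names are illustrative, not the product's actual export schema.

```python
import json
from datetime import datetime, timezone

def audit_record(decision: str, policy: str, agent: str, context: dict) -> str:
    # One JSON line per decision: timestamp, matched policy, agent
    # identity, and full request context.
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "policy": policy,
        "agent": agent,
        "context": context,
    })

line = audit_record("allow", "production-gpt4-only", "ci-bot", {"model": "gpt-4"})
```

Newline-delimited JSON like this is trivially searchable and exportable to any log pipeline.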
// 004
How Policies Work
Define rules as JSON, in the dashboard, via API, or in a local file. Policies are cryptographically signed and evaluated locally by a compiled engine.
{
"name": "production-gpt4-only",
"effect": "allow",
"actions": ["llm:invoke"],
"resources": ["model:gpt-4"],
"conditions": {
"max_tokens": 4096,
"roles": ["senior-engineer"],
"pii_mask": ["email", "credit_card"]
}
}
- Model allowlists: Restrict which LLMs are available per project or environment.
- Token and cost limits: Cap spending per request, per day, or per month.
- Role-based access: Different permissions for different roles and teams.
- PII masking rules: Mask sensitive data before it reaches the model.
- Five enforcement modes: Block, warn, audit, shadow, or allow. Start soft, tighten later.
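The matching semantics of a policy like the one above can be sketched with a toy evaluator. The real engine is a compiled, signature-verified component, and this sketch checks only a subset of the conditions (it ignores pii_mask) and assumes deny-by-default when nothing matches.

```python
# The policy from the example above, minus the pii_mask condition.
policy = {
    "name": "production-gpt4-only",
    "effect": "allow",
    "actions": ["llm:invoke"],
    "resources": ["model:gpt-4"],
    "conditions": {"max_tokens": 4096, "roles": ["senior-engineer"]},
}

def evaluate(policy: dict, request: dict) -> str:
    # A request matches when action, resource, and every condition hold;
    # otherwise fall through to deny (an assumed default).
    cond = policy["conditions"]
    matches = (
        request["action"] in policy["actions"]
        and request["resource"] in policy["resources"]
        and request["tokens"] <= cond["max_tokens"]
        and request["role"] in cond["roles"]
    )
    return policy["effect"] if matches else "deny"

decision = evaluate(policy, {
    "action": "llm:invoke", "resource": "model:gpt-4",
    "tokens": 2048, "role": "senior-engineer",
})
```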
Policy Evaluation Flow
Get started in 60 seconds.
Install the SDK or point at the gateway. Your first policy is live immediately. Free tier. No credit card. Works without an account.