Control Zero is in early beta. We ship daily. Feedback: team@controlzero.ai

How It Works

Three steps. Full governance.

Install the SDK or point at the gateway. Define your policies. Every request evaluated, every decision logged.

// 001

Install. Define. Enforce.

01

Install

Install the SDK or point your API base URL at the gateway. One package, one line of config. No infrastructure changes. Works with OpenAI, Anthropic, Google, LangChain, CrewAI, AutoGen, MCP, and more.

02

Define Policies

Set rules from the dashboard, via API, or in a local JSON file. Model allowlists, cost caps, tool restrictions, PII filters. Policies are cryptographically signed and cached locally.
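The "signed and cached locally" step can be sketched roughly as follows. This is an illustration only, using an HMAC signature as a stand-in; Control Zero's actual signing scheme, key handling, and payload layout are not shown here, and every name below is hypothetical.

```python
import hashlib
import hmac
import json

# Stand-in signing key; a real deployment would use the project's key material.
SIGNING_KEY = b"hypothetical-shared-key"

def verify_and_cache(signed_doc: dict, cache: dict) -> bool:
    """Verify a signed policy payload before admitting it to the local cache."""
    payload = json.dumps(signed_doc["policy"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signed_doc["signature"]):
        return False  # tampered or corrupted policy: refuse to cache it
    cache[signed_doc["policy"]["name"]] = signed_doc["policy"]
    return True

# Sign a policy, then confirm it round-trips into the cache.
policy = {"name": "restrict-to-gpt4", "resources": ["model:gpt-4"]}
signed = {
    "policy": policy,
    "signature": hmac.new(
        SIGNING_KEY, json.dumps(policy, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest(),
}
cache = {}
verify_and_cache(signed, cache)   # valid signature: policy enters the cache
signed["signature"] = "00" * 32
verify_and_cache(signed, cache)   # forged signature: rejected
```

Signature checking at cache time means a compromised sync channel cannot silently swap in a weaker policy.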

03

Enforce and Audit

Every request evaluated in real time. Four enforcement modes: allow, block, warn, or shadow. Full audit trail on every decision. Searchable, exportable, compliance-ready.
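A rough sketch of how the enforcement modes (allow, block, warn, shadow) might differ in behavior; the function name and dispatch logic here are illustrative, not the SDK's real API:

```python
def enforce(decision: str, send_request):
    """Dispatch a policy decision to an enforcement mode (illustrative only)."""
    if decision == "allow":
        return send_request()                 # pass through untouched
    if decision == "block":
        raise PermissionError("request blocked by policy")
    if decision == "warn":
        print("policy warning: request flagged but allowed")
        return send_request()                 # allowed, flagged in the audit trail
    if decision == "shadow":
        # Shadow mode: record what *would* have been blocked, then let it through.
        print("shadow: violation recorded, request allowed")
        return send_request()
    raise ValueError(f"unknown enforcement mode: {decision!r}")

result = enforce("warn", lambda: "ok")
```

Shadow mode is the usual way to dry-run a new policy against production traffic before switching it to block.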

// 002

Code Examples

Wrap your existing client. Everything else stays the same.

Python

python
# 1. Install: pip install controlzero
import openai
import controlzero
from controlzero.integrations.openai import wrap_openai

# 2. Initialize with your project key
cz = controlzero.init(api_key="cz_live_...")

# 3. Wrap your client; governance is automatic
client = wrap_openai(openai.OpenAI(), cz)

# Use as normal. Every call is governed + audited.
prompt = "Your prompt here"
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

Node.js

javascript
// 1. Install: npm install @controlzero/sdk
import { ControlZero } from '@controlzero/sdk';
import OpenAI from 'openai';

// 2. Initialize with your project key
const cz = new ControlZero({ apiKey: 'cz_live_...' });

// 3. Wrap your client; governance is automatic
const client = cz.wrapOpenAI(new OpenAI());

// Use as normal. Every call is governed + audited.
const prompt = 'Your prompt here';
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: prompt }],
});

// 003

Policies from the Dashboard

Define policies in the Control Zero dashboard or via the API. No code changes required. Policies are cryptographically signed and synced to the SDK in real time.

  • Model allowlists and blocklists
  • Per-request token and cost limits
  • Tool and resource restrictions
  • Role-based and time-based conditions

Example Policy

json
{
  "name": "restrict-to-gpt4",
  "effect": "allow",
  "actions": ["llm:invoke"],
  "resources": ["model:gpt-4", "model:gpt-4-turbo"],
  "enforcement": "block"
}
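Read as: allow the `llm:invoke` action only for the listed models, and block everything else. A minimal local evaluation of that policy might look like this; the matching logic is a sketch, not Control Zero's actual policy engine:

```python
POLICY = {
    "name": "restrict-to-gpt4",
    "effect": "allow",
    "actions": ["llm:invoke"],
    "resources": ["model:gpt-4", "model:gpt-4-turbo"],
    "enforcement": "block",
}

def evaluate(policy: dict, action: str, resource: str) -> str:
    """Return the policy's effect on a match, else its enforcement mode."""
    if action in policy["actions"] and resource in policy["resources"]:
        return policy["effect"]
    return policy["enforcement"]

print(evaluate(POLICY, "llm:invoke", "model:gpt-4"))    # allow
print(evaluate(POLICY, "llm:invoke", "model:gpt-3.5"))  # block
```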

// 004

Request Flow

Every request follows this path. Most steps evaluate locally, with no per-call network round trip.

01

Your App

Agent makes an LLM call through the wrapped client.

02

SDK Policy Check

Policies evaluated locally from cache. No network call for valid requests.

03

Provider Call

Request goes directly to the LLM provider. Data never touches our servers.

04

Audit Log

Decision and metadata logged asynchronously. Zero impact on response time.
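The four steps above can be sketched end to end. The provider call is stubbed out, the audit sink is an in-memory list, and every name is illustrative; the real SDK's internals are not shown here.

```python
import queue
import threading

audit_log: list = []                           # stand-in for durable audit storage
audit_queue: "queue.Queue[dict]" = queue.Queue()

def audit_worker():
    """Drain the audit queue off the request path (step 4)."""
    while True:
        record = audit_queue.get()
        audit_log.append(record)               # production would ship this remotely
        audit_queue.task_done()

threading.Thread(target=audit_worker, daemon=True).start()

ALLOWED_MODELS = {"gpt-4", "gpt-4-turbo"}      # stand-in for the cached, signed policy

def governed_call(model: str, prompt: str) -> str:
    # Step 2: evaluate the cached policy locally, with no network round trip.
    decision = "allow" if model in ALLOWED_MODELS else "block"
    # Step 4: enqueue the decision; the request never waits on the audit write.
    audit_queue.put({"model": model, "decision": decision})
    if decision == "block":
        raise PermissionError(f"model {model!r} blocked by policy")
    # Step 3: call the provider directly (stubbed here).
    return f"response from {model}"

print(governed_call("gpt-4", "hello"))
audit_queue.join()
```

Because the policy check reads from the local cache and the audit write is queued, the only synchronous network hop is the provider call itself.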

Ready to get started?

Free tier available. No credit card required. Up and running in under 5 minutes.