How It Works
Three steps. Full governance.
Install the SDK or point at the gateway. Define your policies. Every request evaluated, every decision logged.
// 001
Install. Define. Enforce.
Install
Install the SDK or point your API base URL at the gateway. One package, one line of config. No infrastructure changes. Works with OpenAI, Anthropic, Google, LangChain, CrewAI, AutoGen, MCP, and more.
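For the gateway path, pointing an existing client at Control Zero can be as small as one environment variable. The gateway URL below is a placeholder for illustration, not a real endpoint; the OpenAI Python SDK reads `OPENAI_BASE_URL` automatically when it is set.

```python
import os

# Hypothetical gateway URL for illustration; use the one from your
# Control Zero dashboard. The OpenAI Python SDK picks up
# OPENAI_BASE_URL on client construction, so no application code
# changes are needed.
os.environ["OPENAI_BASE_URL"] = "https://gateway.controlzero.example/v1"
```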
Define Policies
Set rules from the dashboard, via API, or in a local JSON file. Model allowlists, cost caps, tool restrictions, PII filters. Policies are cryptographically signed and cached locally.
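The local-JSON option could look like the sketch below. The file name and the `limits` field are illustrative assumptions; only the `name`/`effect`/`actions`/`enforcement` shape mirrors the example policy later on this page.

```python
import json
import pathlib

# Illustrative local policy file. Field names beyond those shown in
# the Example Policy on this page (e.g. "limits") are assumptions.
policies = [{
    "name": "cap-request-cost",
    "effect": "allow",
    "actions": ["llm:invoke"],
    "limits": {"max_usd_per_request": 0.50},
    "enforcement": "block",
}]

path = pathlib.Path("controlzero.policies.json")
path.write_text(json.dumps(policies, indent=2))
```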
Enforce and Audit
Every request is evaluated in real time: allow, block, warn, or shadow. Full audit trail on every decision. Searchable, exportable, compliance-ready.
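The modes differ only in what happens when a request violates policy. A minimal sketch of plausible semantics; the function, exception type, and exact behavior here are assumptions for illustration, not the SDK's actual API.

```python
import logging

class PolicyViolation(Exception):
    """Illustrative exception type; not the SDK's actual API."""

def enforce(mode: str, violates: bool) -> bool:
    """Return True if the request should proceed to the provider."""
    if not violates or mode == "allow":
        return True
    if mode == "block":
        raise PolicyViolation("request denied by policy")
    if mode == "warn":
        logging.warning("policy violation; request allowed through")
        return True
    if mode == "shadow":
        # Evaluate and record only; never affects live traffic.
        return True
    raise ValueError(f"unknown enforcement mode: {mode!r}")
```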
// 002
Code Examples
Wrap your existing client. Everything else stays the same.
Python
# 1. Install: pip install controlzero
import controlzero
from controlzero.integrations.openai import wrap_openai
import openai

# 2. Initialize with your project key
cz = controlzero.init(api_key="cz_live_...")

# 3. Wrap your client; governance is automatic
client = wrap_openai(openai.OpenAI(), cz)

# Use as normal. Every call is governed + audited.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}]
)

Node.js
// 1. Install: npm install @controlzero/sdk
import { ControlZero } from '@controlzero/sdk';
import OpenAI from 'openai';

// 2. Initialize with your project key
const cz = new ControlZero({ apiKey: 'cz_live_...' });

// 3. Wrap your client; governance is automatic
const client = cz.wrapOpenAI(new OpenAI());

// Use as normal. Every call is governed + audited.
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: prompt }],
});

// 003
Policies from the Dashboard
Define policies in the Control Zero dashboard or via the API. No code changes required. Policies are cryptographically signed and synced to the SDK in real time.
- Model allowlists and blocklists
- Per-request token and cost limits
- Tool and resource restrictions
- Role-based and time-based conditions
Example Policy
{ "name": "restrict-to-gpt4", "effect": "allow", "actions": ["llm:invoke"], "resources": ["model:gpt-4", "model:gpt-4-turbo"], "enforcement": "block"}// 004
Request Flow
Every request follows this path. Most steps evaluate locally, with no per-call network round trip.
Your App
Agent makes an LLM call through the wrapped client.
SDK Policy Check
Policies evaluated locally from cache. No network call for valid requests.
Provider Call
Request goes directly to the LLM provider. Data never touches our servers.
Audit Log
Decision and metadata logged asynchronously. Zero impact on response time.
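The four steps above can be sketched end to end. The policy shape, provider stub, and background audit queue here are illustrative assumptions, not the SDK internals; the point is the ordering: local evaluation, asynchronous logging, then a direct provider call.

```python
import queue
import threading

# Background audit queue: decisions are drained off the request path,
# so logging never blocks a call (step 4).
audit_queue: "queue.Queue" = queue.Queue()

def _audit_worker() -> None:
    while True:
        record = audit_queue.get()
        if record is None:
            break
        # Ship record to the audit store here.
        audit_queue.task_done()

threading.Thread(target=_audit_worker, daemon=True).start()

def governed_call(request: dict, cached_policies: list, provider) -> dict:
    # Steps 1-2: evaluate locally against cached policies; no network
    # round trip on the request path.
    decision = "allow"
    for p in cached_policies:
        if (request["action"] in p["actions"]
                and request["resource"] not in p["resources"]):
            decision = p["enforcement"]
    # Step 4: log asynchronously, whatever the decision.
    audit_queue.put({"request": request, "decision": decision})
    if decision == "block":
        raise PermissionError("blocked by policy")
    # Step 3: call the provider directly; request data goes straight
    # to the LLM provider.
    return provider(request)
```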
Ready to get started?
Free tier available. No credit card required. Up and running in under 5 minutes.