
Cut your cost per token
Smart routing sends simple prompts to cheap models, hard ones to powerful ones. One API, 13+ providers, automatic fallbacks. Your AI stack, finally optimized.
Open source. MIT Licensed. No credit card required.






Works with every major provider
How it works
Your app talks to OpenTracy. OpenTracy talks to every LLM provider.

Everything you need to manage LLMs
From routing to evaluation to distillation. One platform.

One API, Every Model
Send requests to OpenAI, Anthropic, Google, Mistral, and 9 more through a single endpoint.

Real-Time Traces
Every request logged with full input, output, cost, and latency. Query millions of traces instantly.

Cost Tracking
Automatic per-token pricing for 70+ models. See exactly where your money goes.
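Per-token pricing boils down to simple arithmetic: tokens in and out, multiplied by each model's rate. A minimal sketch of the idea, using hypothetical per-million-token prices (real rates vary by provider and change over time):

```python
# Sketch of per-token cost accounting. Prices here are illustrative
# placeholders, not actual provider rates.
PRICES_PER_MILLION = {
    "openai/gpt-4o-mini": {"input": 0.15, "output": 0.60},
    "anthropic/claude-3-haiku": {"input": 0.25, "output": 1.25},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request for a known model."""
    p = PRICES_PER_MILLION[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

print(f"${request_cost('openai/gpt-4o-mini', 1200, 300):.6f}")  # $0.000360
```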

Smart Routing
Route simple prompts to cheap models, complex ones to powerful ones. Automatic fallbacks.
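To make the routing idea concrete, here is a toy classifier that sends short, simple prompts to a cheap model and escalates the rest. OpenTracy's real heuristics are more involved; the model names, keywords, and length threshold below are examples only:

```python
# Toy illustration of prompt-complexity routing. Model names, keyword
# hints, and the length threshold are hypothetical examples.
CHEAP_MODEL = "openai/gpt-4o-mini"
POWERFUL_MODEL = "anthropic/claude-3-opus"

HARD_HINTS = ("prove", "refactor", "analyze", "step by step")

def pick_model(prompt: str) -> str:
    """Route long or reasoning-heavy prompts to the powerful model."""
    text = prompt.lower()
    if len(text) > 500 or any(hint in text for hint in HARD_HINTS):
        return POWERFUL_MODEL
    return CHEAP_MODEL

print(pick_model("Translate 'hello' to French"))    # cheap model
print(pick_model("Prove this function terminates"))  # powerful model
```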

Quality Monitoring
AI agents scan traces for hallucinations and quality drops. Get alerts before users notice.

Model Distillation
Train smaller, faster, cheaper models from your production data. Own your models.
Simple by design
If you've used the OpenAI SDK, you already know OpenTracy.
import opentracy as ot

# Call any model in one line
response = ot.completion(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    fallbacks=["anthropic/claude-3-haiku"]
)

print(response.choices[0].message.content)
print(f"Cost: ${response._cost:.6f}")

OpenAI-compatible
Same format you already use. Change one line to start.
Automatic fallbacks
If a provider goes down, OpenTracy switches to your backup.
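The pattern behind fallbacks is simple: try each model in order and move on when a provider errors. A self-contained sketch with the provider call stubbed out (OpenTracy's internals may differ):

```python
# Sketch of the fallback pattern. The provider call is a stub that
# simulates an outage on the primary model.
class ProviderDown(Exception):
    pass

def call_model(model: str, prompt: str) -> str:
    # Stub: pretend the primary provider is having an outage.
    if model == "openai/gpt-4o-mini":
        raise ProviderDown(model)
    return f"{model}: ok"

def complete_with_fallbacks(prompt: str, models: list[str]) -> str:
    last_error = None
    for model in models:
        try:
            return call_model(model, prompt)
        except ProviderDown as err:
            last_error = err  # record the failure, try the next model
    raise RuntimeError("all providers failed") from last_error

print(complete_with_fallbacks(
    "Hello!", ["openai/gpt-4o-mini", "anthropic/claude-3-haiku"]
))
```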
Cost on every response
Every response includes exact cost. No more guessing.
Full streaming
Streaming works across all providers, including SSE translation for Anthropic.
Simple, predictable pricing
Start free. Scale when you need to. No surprises.
Free
For developers exploring LLM routing and observability.
- ✓ Up to 10,000 requests/month
- ✓ 3 distillation runs/month
- ✓ All 13+ providers
- ✓ Full trace logging
- ✓ Community support
Starter
For teams running LLMs in production.
- ✓ Unlimited requests
- ✓ Unlimited distillation
- ✓ Advanced evaluations
- ✓ AI quality scanning
- ✓ Priority support
- ✓ Team collaboration
- ✓ Self-host option
- ✓ Custom routing rules
Enterprise
For organizations that want a turnkey AI infrastructure with hands-on guidance.
- ✓ Everything in Starter
- ✓ 24/7 dedicated support
- ✓ Full setup done for you
- ✓ Implementation consulting
- ✓ Dedicated onboarding
- ✓ Custom API architecture review
- ✓ VPC deployment
- ✓ SSO / SAML
- ✓ Audit logs
- ✓ Custom SLAs
- ✓ On-premise option
- ✓ BYOK encryption
Trusted by AI teams
From startups to enterprise, teams use OpenTracy to simplify their LLM stack.
“OpenTracy let us consolidate 4 different LLM integrations into a single API. Our team ships features 3x faster now.”
“The cost tracking alone saved us $2,400/month. We had no idea how much we were overspending on GPT-4 calls.”
“Switched from a custom routing layer to OpenTracy in one afternoon. The fallback system caught two provider outages in the first week.”
Join the community
Open source, open development. Build with us.

Open source. Self-host or cloud.
Run on your own infrastructure with full control, or use our managed cloud. MIT licensed, no vendor lock-in.
Free tier available. No credit card required.