LiteLLM
Unified API proxy for 100+ LLM providers — one interface, any model
⭐ 16,000 stars
LLM APIs & Inference
Free (open-source), hosted proxy available
About LiteLLM
LiteLLM provides a unified, OpenAI-compatible interface for calling 100+ LLM APIs, including OpenAI, Anthropic, Cohere, Replicate, local models, and more. It also ships a proxy server that handles authentication, load balancing, fallbacks, and spend tracking across all providers.
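The library's core entry point, litellm.completion, accepts the OpenAI chat-completions call shape for every backend. A minimal sketch, assuming provider API keys are set in the environment; the model names are illustrative:

```python
from litellm import completion

# One call shape for every provider; the model string selects the backend.
# Assumes OPENAI_API_KEY / ANTHROPIC_API_KEY are set in the environment.
messages = [{"role": "user", "content": "Summarize LiteLLM in one sentence."}]

# OpenAI
resp = completion(model="gpt-4o", messages=messages)

# Anthropic: same call, different model string (illustrative model name)
resp = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

# Responses come back in OpenAI format regardless of provider.
print(resp.choices[0].message.content)
```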
Features
✦Unified API for 100+ LLM providers
✦Load balancing across multiple API keys/providers
✦Automatic fallbacks when providers fail (see the Router sketch after this list)
✦Spend tracking and budget alerts per team/project
✦Rate limiting and retry logic built-in
✦OpenAI SDK compatible — zero code changes
✦Self-hostable proxy server
✦Supports streaming, function calling, vision
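As a sketch of the load-balancing and fallback features above, the snippet below uses LiteLLM's Router; the deployments, API keys, and fallback chain are hypothetical placeholders:

```python
from litellm import Router

router = Router(
    model_list=[
        # Two deployments registered under the same alias, so requests are
        # load-balanced between them. Keys are hypothetical placeholders.
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "gpt-4o", "api_key": "sk-key-1"},
        },
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "gpt-4o", "api_key": "sk-key-2"},
        },
        # A different provider registered as a fallback target.
        {
            "model_name": "claude-backup",
            "litellm_params": {"model": "anthropic/claude-3-5-sonnet-20240620"},
        },
    ],
    # If every "gpt-4o" deployment fails, retry the request on "claude-backup".
    fallbacks=[{"gpt-4o": ["claude-backup"]}],
)

response = router.completion(
    model="gpt-4o",  # clients address the alias, not a specific deployment
    messages=[{"role": "user", "content": "Hello"}],
)
```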
Pros & Cons
Pros
+ One API for 100+ providers
+ Built-in load balancing and fallbacks
+ Spend tracking and rate limiting
+ OpenAI SDK compatible
Cons
− Adds a proxy layer (slight latency)
− Complex config for advanced features (a minimal proxy setup is sketched below)
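Because the proxy speaks the OpenAI wire protocol, existing OpenAI SDK code only needs its base URL repointed at a running LiteLLM proxy (started with, for example, litellm --config config.yaml). A minimal sketch, assuming a local proxy on the default port 4000; the virtual key is a placeholder:

```python
from openai import OpenAI

# Point the stock OpenAI client at the self-hosted LiteLLM proxy.
# The base URL and key below are assumptions for a local deployment.
client = OpenAI(
    base_url="http://localhost:4000",
    api_key="sk-litellm-virtual-key",  # proxy-issued virtual key (placeholder)
)

# From here on, this is unchanged OpenAI SDK usage; the proxy routes the
# request to whichever backend its config maps "gpt-4o" to.
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```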
Platforms
Linux · macOS · Docker
Similar Tools
Hugging Face
The AI community platform with 500K+ models and datasets
Free + Pro $9/mo + Enterprise

Fireworks AI
Fast and efficient LLM inference platform
Pay-per-use

Together AI
Fast inference and fine-tuning for open-source models
Pay-per-use

OpenRouter
Unified API for 200+ AI models from all providers
Pay-per-use (varies by model)