Built-in providers
The AI Gateway includes built-in support for these providers:

| Provider | Description |
|---|---|
| OpenAI | GPT models, o1/o3 reasoning models |
| Anthropic | Claude models |
| Google | Gemini models |
| DeepSeek | DeepSeek Chat and Reasoner models |
| OpenRouter | Multi-provider routing service |
| Hyperbolic | Open-source model hosting |
| InceptionLabs | Diffusion-based language models |
| Inference.net | Distributed inference network |
Operating modes
Passthrough mode (default)
When no providers are configured, the gateway operates in passthrough mode:

- All built-in providers are available
- Your SDK’s API key is forwarded directly to the provider
- Failover across models and providers is still enabled
- Metrics and observability are still enabled
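In passthrough mode the gateway never holds your credentials; each request carries your own provider API key, which the gateway forwards as-is. A minimal sketch of such a request, assuming a hypothetical gateway base URL and an OpenAI-compatible chat completions payload shape:

```python
# Sketch of a passthrough-mode request. GATEWAY_URL and the payload
# shape are assumptions for illustration, not the gateway's actual API.
import json
import urllib.request

GATEWAY_URL = "https://gateway.example.com/v1/chat/completions"  # hypothetical

def build_passthrough_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build the HTTP request; the gateway forwards `api_key` to the provider."""
    body = json.dumps({
        "model": model,  # e.g. "openai:gpt-4o" to pin a provider explicitly
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # your key, not the gateway's
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_passthrough_request("sk-...", "openai:gpt-4o", "Hello")
```

Because the key travels with the request, switching providers is just a matter of changing the model prefix and the key you send.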
Managed mode
When you explicitly configure providers, you gain additional control:

- Specify which providers are allowed
- Manage API keys centrally
- Add custom metadata for tracking
- Configure custom or self-hosted providers
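To make the managed-mode controls above concrete, here is a hypothetical configuration as a plain Python dict; every field name is an assumption for illustration, not the gateway's actual schema. It centralizes API keys, attaches tracking metadata, and registers a self-hosted OpenAI-compatible endpoint alongside a built-in provider:

```python
# Illustrative managed-mode configuration; field names are assumptions.
import os

gateway_config = {
    "providers": {
        "openai": {
            "api_key": os.environ.get("OPENAI_API_KEY", ""),  # key managed centrally
            "metadata": {"team": "search", "cost_center": "ml-infra"},  # custom tracking
        },
        # A self-hosted, OpenAI-compatible endpoint added as a custom provider.
        "my-vllm": {
            "base_url": "http://localhost:8000/v1",
            "api_key": os.environ.get("VLLM_API_KEY", ""),
        },
    },
}

# Only the providers listed in the config are allowed to serve requests.
allowed_providers = sorted(gateway_config["providers"])
```

Listing providers explicitly is what switches the gateway out of passthrough mode: anything not in the config is no longer reachable.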
Custom providers
You can add any OpenAI-compatible endpoint as a custom provider.

Provider selection
When a request arrives, the gateway determines which provider to use, in order:

1. Explicit provider - If the model name includes a provider prefix (for example, `openai:gpt-4o`), that provider is used
2. Catalog lookup - Otherwise, the gateway looks up the model in its catalog to find the default provider
3. Selection strategy - If configured, model selection strategies can override the default
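The selection order above can be sketched as a small function. The catalog contents and the strategy hook are toy stand-ins here; the gateway's real catalog and strategy mechanism are internal:

```python
# Toy implementation of the three-step provider selection order.
from typing import Callable, Optional

CATALOG = {  # model name -> default provider (illustrative data only)
    "gpt-4o": "openai",
    "claude-sonnet-4": "anthropic",
    "deepseek-chat": "deepseek",
}

def select_provider(model: str, strategy: Optional[Callable[[str, str], str]] = None) -> str:
    # 1. An explicit provider prefix (e.g. "openai:gpt-4o") always wins.
    if ":" in model:
        provider, _, _ = model.partition(":")
        return provider
    # 2. Otherwise, look up the model's default provider in the catalog.
    provider = CATALOG.get(model)
    if provider is None:
        raise KeyError(f"unknown model: {model}")
    # 3. A configured selection strategy may override the catalog default.
    return strategy(model, provider) if strategy else provider
```

Note that a strategy only overrides the catalog default; a request that names a provider explicitly is routed there regardless.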
Next steps
- Models - Understanding model naming and request formats
- API Keys - How authentication works
- Configuring Providers - Detailed configuration guide