Prerequisite: You need an AI Gateway endpoint before continuing. Create one using the dashboard quickstart or follow the manual setup guide.
This guide shows how to use the OpenAI SDK with a custom `baseURL` so that every request routes through your gateway, giving you automatic failover, key rotation, and observability.
Installation
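The examples in this guide use the official OpenAI Python SDK (any OpenAI-compatible SDK works the same way). Install it with pip:

```bash
pip install openai
```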
Basic usage
Point the SDK at your AI Gateway endpoint:
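A minimal sketch with the Python SDK, where the constructor argument is `base_url`; the gateway URL and API key below are placeholders for the values from your dashboard:

```python
from openai import OpenAI

# Point the client at your AI Gateway instead of api.openai.com.
# The URL and key below are placeholders -- use the values from your dashboard.
client = OpenAI(
    base_url="https://gateway.example.com/v1",
    api_key="YOUR_GATEWAY_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```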
Streaming
The AI Gateway fully supports streaming responses:
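Pass `stream=True` exactly as you would against the OpenAI API; this sketch reuses the client from the basic example:

```python
# stream=True yields chunks as the model generates tokens.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about gateways."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; content may be empty on some chunks.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```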
Using different providers
Route to different providers using model prefixes:
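A sketch assuming provider-prefixed model names such as `openai/...` and `anthropic/...`; the exact prefixes depend on your gateway configuration, so check its model list:

```python
# Hypothetical provider-prefixed model names -- the prefix format
# is defined by your gateway configuration.
openai_response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello from OpenAI"}],
)

anthropic_response = client.chat.completions.create(
    model="anthropic/claude-3-5-sonnet",
    messages=[{"role": "user", "content": "Hello from Claude"}],
)

print(openai_response.choices[0].message.content)
print(anthropic_response.choices[0].message.content)
```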
Automatic model selection
Let the gateway choose the best model:
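A sketch assuming the gateway exposes a routing alias, shown here as a hypothetical `auto` model name; substitute whatever alias your gateway defines for automatic selection:

```python
# "auto" is a hypothetical routing alias -- replace it with the alias
# your gateway defines for automatic model selection.
response = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Summarize this paragraph..."}],
)
print(response.model)  # The model the gateway actually routed to.
```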
Embeddings
Generate embeddings through the gateway:
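For example, using the standard embeddings endpoint through the client configured above (the model name is illustrative):

```python
embedding = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog",
)
print(len(embedding.data[0].embedding))  # Vector dimension
```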
Function calling
Tool/function calling works exactly as documented by OpenAI:
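For example, with an illustrative `get_weather` tool definition (the tool itself is hypothetical; the request shape is the standard OpenAI one):

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
```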
Async usage
Use async clients for better performance:
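A sketch using `AsyncOpenAI` with the same placeholder gateway URL and key as above, issuing several requests concurrently:

```python
import asyncio

from openai import AsyncOpenAI

async_client = AsyncOpenAI(
    base_url="https://gateway.example.com/v1",  # placeholder gateway URL
    api_key="YOUR_GATEWAY_API_KEY",
)

async def main() -> None:
    # Fire several requests concurrently instead of serially.
    tasks = [
        async_client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": f"Give me fact #{i} about gateways"}],
        )
        for i in range(3)
    ]
    responses = await asyncio.gather(*tasks)
    for r in responses:
        print(r.choices[0].message.content)

asyncio.run(main())
```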
Error handling
The gateway handles many errors automatically through failover. For errors that reach your app:
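A sketch of the exception types raised by the OpenAI Python SDK; which errors actually surface depends on your failover configuration:

```python
import openai

try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
except openai.RateLimitError:
    # Every available provider is currently rate limited; back off and retry.
    print("Rate limited -- retry with backoff")
except openai.APIConnectionError:
    # Could not reach the gateway at all (network or DNS issue).
    print("Gateway unreachable")
except openai.APIStatusError as e:
    # A non-2xx error the gateway could not recover from via failover.
    print(f"Request failed: {e.status_code} {e.message}")
```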
Supported endpoints
The AI Gateway supports these OpenAI API endpoints:

| Endpoint | Description |
|---|---|
| /v1/chat/completions | Chat completions (GPT-4, Claude, etc.) |
| /v1/completions | Legacy completions |
| /v1/embeddings | Text embeddings |
| /v1/models | List available models |
Next steps
- Model Selection Strategies - Configure routing logic
- Configuring Providers - Set up providers and keys
- Multi-Provider Failover - Failover examples