Prerequisite: You need an AI Gateway endpoint before continuing. Create one using the dashboard quickstart or follow the manual setup guide.
The AI Gateway works with any OpenAI-compatible SDK or HTTP client. If your language or framework isn’t covered in the dedicated guides, you can still use the gateway.

Requirements

For any SDK or client to work with the AI Gateway, it must:
  1. Support the OpenAI API format
  2. Allow customizing the base URL / endpoint
  3. Send a Bearer token in the Authorization header of each request

cURL

For testing or shell scripts:
curl https://your-ai-subdomain.ngrok.app/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

HTTP request format

Any HTTP client can call the AI Gateway using the OpenAI API format:
POST /v1/chat/completions HTTP/1.1
Host: your-ai-subdomain.ngrok.app
Content-Type: application/json
Authorization: Bearer your-api-key

{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ]
}
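The same request can be built with any language's standard HTTP library. Here is a minimal Python sketch using only the standard library; the subdomain and API key are placeholders to replace with your own values:

```python
import json
import urllib.request

# Placeholder values -- substitute your gateway endpoint and API key.
BASE_URL = "https://your-ai-subdomain.ngrok.app"
API_KEY = "your-api-key"

def build_chat_request(messages, model="gpt-4o"):
    """Build an OpenAI-format chat completion request for the gateway."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

# Sending it (requires a live gateway):
# with urllib.request.urlopen(build_chat_request(
#         [{"role": "user", "content": "Hello!"}])) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```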

Using provider prefixes

Regardless of the client, you can route requests to a specific provider by prefixing the model name:
# OpenAI
model: "openai:gpt-4o"

# Anthropic
model: "anthropic:claude-3-5-sonnet-latest"

# Self-hosted
model: "ollama:llama3.2"

# Auto-select based on your strategy
model: "ngrok/auto"
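The prefix is just part of the model string in the request body; nothing else about the request changes. A short Python sketch (payload construction only, using the prefixed model names above):

```python
import json

def chat_payload(content, model):
    """Build an OpenAI-format chat completion body for a prefixed model."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    })

# The request shape is identical for every provider; only "model" changes.
openai_body = chat_payload("Hello!", "openai:gpt-4o")
anthropic_body = chat_payload("Hello!", "anthropic:claude-3-5-sonnet-latest")
auto_body = chat_payload("Hello!", "ngrok/auto")
```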

Supported endpoints

The AI Gateway supports these OpenAI-compatible endpoints:
Endpoint               Description
/v1/chat/completions   Chat completions (GPT-4, Claude, etc.)
/v1/completions        Legacy completions
/v1/embeddings         Text embeddings
/v1/models             List available models
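For example, /v1/models is a plain GET that carries the same Authorization header as the other endpoints. A stdlib Python sketch, with the endpoint and key as placeholders:

```python
import json
import urllib.request

BASE_URL = "https://your-ai-subdomain.ngrok.app"  # placeholder
API_KEY = "your-api-key"                          # placeholder

def build_list_models_request():
    """Build a GET request for the gateway's model-listing endpoint."""
    return urllib.request.Request(
        f"{BASE_URL}/v1/models",
        headers={"Authorization": f"Bearer {API_KEY}"},
        method="GET",
    )

# Sending it (requires a live gateway); the response follows the
# OpenAI list format, with models under the "data" key:
# with urllib.request.urlopen(build_list_models_request()) as resp:
#     print([m["id"] for m in json.load(resp)["data"]])
```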

Next steps