The AI Gateway provides two flags to restrict which providers and models clients can use. This is useful for cost control, compliance, and ensuring applications only use approved AI services.
## Restricting providers
### `only_allow_configured_providers`
When set to true, only providers explicitly listed in your configuration are allowed. Requests to other providers are rejected.
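As a sketch, assuming a YAML configuration where providers are listed under a `providers` key (the field names here are illustrative, not necessarily the gateway's exact schema):

```yaml
# Hypothetical configuration -- field names are illustrative.
only_allow_configured_providers: true

providers:
  openai:
    api_key: ${OPENAI_API_KEY}
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
```

With only OpenAI and Anthropic configured: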
- ✅ Requests to OpenAI models work
- ✅ Requests to Anthropic models work
- ❌ Requests to Google, DeepSeek, or other providers are rejected
### Disabling specific providers

You can also disable individual providers while keeping others enabled.

## Restricting models
### `only_allow_configured_models`
When set to true, only models explicitly listed in your provider configurations are allowed. Requests for other models are rejected.
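A sketch under the same assumed YAML schema, with allowed models listed per provider (field and model names are illustrative):

```yaml
# Hypothetical configuration -- field names are illustrative.
only_allow_configured_models: true

providers:
  openai:
    models:
      - gpt-4o
      - gpt-4o-mini
```

With only `gpt-4o` and `gpt-4o-mini` listed: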
- ✅ Requests for `gpt-4o` work
- ✅ Requests for `gpt-4o-mini` work
- ❌ Requests for `gpt-3.5-turbo` or other models are rejected
### Disabling specific models

Disable individual models without removing them from your configuration.

## Combined restrictions
For maximum control, combine both flags:

- Only OpenAI and Anthropic providers
- Only three specific models across those providers
- All other requests are rejected with clear error messages
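Continuing the illustrative YAML schema from above, a combined configuration might look like the following (the model names are examples, not an endorsement of a specific list):

```yaml
# Hypothetical configuration -- field and model names are illustrative.
only_allow_configured_providers: true
only_allow_configured_models: true

providers:
  openai:
    models:
      - gpt-4o
      - gpt-4o-mini
  anthropic:
    models:
      - claude-sonnet-4
```

Here, requests for any provider other than OpenAI or Anthropic, or for any model other than the three listed, are rejected.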
## Use cases
### Cost control
Limit access to expensive models by listing only approved models in your configuration.

### Compliance
Ensure only approved providers are used by enabling `only_allow_configured_providers`.

### Development versus production
Development environments can allow all models, while production restricts clients to the configured list.

### Team-specific access
Run separate gateway endpoints with different restrictions for different teams.

## Error messages
When a request is rejected, the gateway returns a clear error indicating what was blocked, for example that the requested provider is not allowed.

## Next steps
- Configuring Providers - Full provider setup
- FAQ - Common error troubleshooting