Error codes reference
ERR_NGROK_3800
Message: The request could not be proxied to the AI provider successfully

HTTP Status: 502 Bad Gateway

Causes:
- Network connectivity issues between ngrok and the provider
- Provider endpoint is down or unreachable
- DNS resolution failure for provider URL

Solutions:
- Check provider status pages (for example, OpenAI Status)
- Verify the custom provider `base_url` is correct and accessible (see the sketch after this list)
- Configure failover to alternative providers
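When a custom provider is the one failing, a quick reachability check against its `base_url` from outside the gateway can help separate a provider outage or DNS problem from a gateway misconfiguration. A minimal sketch, assuming the placeholder URL below stands in for your provider's `base_url`:

```python
import requests

# Hypothetical custom provider base_url -- substitute the value from your
# gateway's provider configuration.
base_url = "https://llm.example.internal/v1"

try:
    # Any HTTP response at all (even 401 or 404) proves DNS resolution and
    # network connectivity are fine; a connection error points at the
    # provider endpoint being down or unreachable.
    resp = requests.get(base_url, timeout=10)
    print(f"Reachable: HTTP {resp.status_code}")
except requests.exceptions.RequestException as exc:
    print(f"Unreachable: {exc}")
```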
ERR_NGROK_3801
Message: The request’s body could not be parsed:

HTTP Status: 400 Bad Request

Causes:
- Request body is not valid JSON
- `model` field is not a string
- `models` field is not an array
- `models` array contains non-string values

Solutions:
- Validate your request JSON before sending (see the sketch after this list)
- Ensure the `model` field is a string
- If using a `models` array, ensure all entries are strings
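For reference, a request body that satisfies these rules might look like the sketch below. The model names are only examples, and whether you send `model` or `models` depends on how you want the gateway to route:

```python
import json

# `model` must be a single string.
single_model_body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}

# If you send a `models` array instead, every entry must be a string.
multi_model_body = {
    "models": ["gpt-4o", "claude-3-5-sonnet-20241022"],
    "messages": [{"role": "user", "content": "Hello"}],
}

# json.dumps raises TypeError on unserializable values, so round-tripping the
# body is a cheap client-side validation step before sending the request.
for body in (single_model_body, multi_model_body):
    json.loads(json.dumps(body))
print("both bodies are valid JSON")
```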
ERR_NGROK_3802
Message: No API key was provided in the request or in the provider configuration

HTTP Status: 400 Bad Request

Causes:
- No `Authorization` header in the client request
- No API keys configured for the provider in the gateway
- API key selection strategy filtered out all keys

Solutions:
- Include an API key in your request: `Authorization: Bearer sk-xxx` (see the sketch after this list)
- Configure API keys in your gateway’s provider configuration
- Review your `api_key_selection` strategy if configured
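A minimal client-side sketch of sending the key in the `Authorization` header. The gateway URL and path are placeholders, and the key is read from an environment variable rather than hard-coded:

```python
import os

import requests

# Placeholder endpoint -- replace with your gateway's URL.
GATEWAY_URL = "https://your-gateway.example.ngrok.app/v1/chat/completions"

resp = requests.post(
    GATEWAY_URL,
    # The provider API key travels in a standard Bearer Authorization header.
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
print(resp.status_code)
```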
ERR_NGROK_3803
Message: Model selection strategy failed:

HTTP Status: 424 Failed Dependency

Causes:
- CEL expression syntax error in `model_selection.strategy`
- Runtime error evaluating the strategy expression
- Reference to an undefined variable in the strategy

Solutions:
- Check variable names and ensure they are available and correctly referenced
- Check for typos in variable names (for example, `ai.models`, not `ai.model`)
- Review the CEL Functions Reference for available functions
- Validate your CEL expressions using the CEL playground
ERR_NGROK_3804
Message: Unable to route request - no models matched both your gateway configuration and client request

HTTP Status: 422 Unprocessable Entity

Causes:
- Typo in the model name (for example, `gpt-4a` instead of `gpt-4o`)
- Model not in the Model Catalog and no provider prefix
- `only_allow_configured_models: true` and the model is not in the provider config
- `only_allow_configured_providers: true` and the provider is not configured

Solutions:
- Check your spelling. Common models include `gpt-4o`, `gpt-4`, and `claude-3-5-sonnet-20241022`
- For unknown models, prefix with `provider:` (for example, `openai:custom-model`; see the sketch after this list)
- Review your gateway’s `providers` configuration
- Check whether the restriction flags `only_allow_configured_models` or `only_allow_configured_providers` are excluding the model you are trying to use
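If the model you want is not in the Model Catalog, the provider prefix goes directly in the model name, as in this sketch (placeholder URL, and `custom-model` stands in for your model's real name):

```python
import requests

# Placeholder endpoint -- replace with your gateway's URL.
GATEWAY_URL = "https://your-gateway.example.ngrok.app/v1/chat/completions"

body = {
    # The "openai:" prefix tells the gateway which provider serves a model
    # name it does not recognize from the Model Catalog.
    "model": "openai:custom-model",
    "messages": [{"role": "user", "content": "Hello"}],
}

resp = requests.post(GATEWAY_URL, json=body, timeout=30)
print(resp.status_code)
```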
ERR_NGROK_3805
Message: All model selection strategy expressions resulted in an empty set after filtering by configured providers. Check your model selection strategy and provider configuration.

HTTP Status: 422 Unprocessable Entity

Causes:
- Every strategy expression returned an empty set, produced an error, or returned models that are not in the catalog or do not match the client’s requested models. The gateway tries each strategy in order, failing immediately when it encounters an error or an empty result, or after exhausting all of them.

Solutions:
- Double-check your model selection strategies and ensure they return models that exist in the catalog or in your gateway configuration
- Double-check whether you have enabled `only_allow_configured_models` or `only_allow_configured_providers`
- If clients specify models in requests, ensure your model selection strategies will return those models
ERR_NGROK_3806
Message: Model selection strategy expression '' returned an invalid type: expected AIModel or []AIModel, got

HTTP Status: 422 Unprocessable Entity

Causes:
- Strategy expression returned a string, number, or other value instead of a model object or list of model objects

Solutions:
- Use `ai.models`, `ai.models.randomize()`, `ai.models.random()`, or `ai.models.filter()`, which return model objects
- Don’t return raw strings like `"gpt-4o"` or numbers like `123` from your strategy expressions
ERR_NGROK_3807
Message: All AI providers failed to respond successfully. The request could not be completed.

HTTP Status: 424 Failed Dependency

Causes:
- All configured providers returned errors
- All API keys exhausted (rate limits, invalid keys)
- Network issues to all providers
- Provider-specific errors (invalid model, authentication failures)

Solutions:
- Check the error details for specific provider failures (errors are listed in the response; see the sketch after this list)
- Verify API keys are valid and have available quota
- Add more providers or API keys for better failover
- Use debugging to inspect individual attempt errors
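Since the per-provider errors are listed in the response body, dumping that body on a 424 is often the fastest way to see which provider or key failed. A sketch with a placeholder URL:

```python
import requests

# Placeholder endpoint -- replace with your gateway's URL.
GATEWAY_URL = "https://your-gateway.example.ngrok.app/v1/chat/completions"

resp = requests.post(
    GATEWAY_URL,
    json={"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]},
    timeout=60,
)

if resp.status_code == 424:
    # Per the solutions above, the individual provider failures are listed in
    # the response body -- print it to see which attempt failed and why.
    print(resp.text)
```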
ERR_NGROK_3808
Message: API key selection strategy failed:

HTTP Status: 424 Failed Dependency

Causes:
- No API key sent in the request and no API keys configured for the provider (only for official providers like OpenAI)
- All `api_key_selection` strategies returned no keys or had errors
- Strategy expression returned an invalid type instead of a key or list of keys

Solutions:
- Include an API key in your request: `Authorization: Bearer sk-xxx`
- Configure API keys in your gateway’s provider configuration
- Check the error message for details about which strategy failed
ERR_NGROK_3809
Message: Unsupported proto for AI Gateway action:

HTTP Status: 424 Failed Dependency

Causes:
- AI Gateway action used on a non-HTTP endpoint
- Endpoint configured for the TCP or TLS protocol

Solutions:
- AI Gateway only supports HTTP/HTTPS endpoints
- Check your endpoint configuration and ensure it uses the HTTP protocol
ERR_NGROK_3810
Message: AI gateway total timeout of was exceeded.

HTTP Status: 504 Gateway Timeout

Causes:
- All failover attempts took longer than `total_timeout`
- Providers responding slowly
- Too many failover candidates with slow responses

Solutions:
- Increase `total_timeout` if appropriate for your use case
- Reduce `per_request_timeout` to fail faster on slow providers
- Use model selection strategies to prioritize faster providers
- Reduce the number of failover candidates (see the budget sketch after this list)
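As a rough sanity check on your timeout budget (the field names come from this page, but the numbers and the worst-case formula are illustrative assumptions, not the gateway's exact accounting):

```python
# Illustrative values only -- substitute the settings from your own gateway
# configuration.
per_request_timeout_s = 30
failover_candidates = 4
total_timeout_s = 60

# Worst case: every candidate is tried and each runs to its per-request limit.
worst_case_s = per_request_timeout_s * failover_candidates
if worst_case_s > total_timeout_s:
    print(
        f"Worst case ~{worst_case_s}s exceeds total_timeout of {total_timeout_s}s; "
        "later failover candidates may never be attempted."
    )
```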
ERR_NGROK_3811
Message: Request input exceeds maximum token limit. Input tokens: , Max allowed:

HTTP Status: 413 Payload Too Large

Causes:
- Prompt and context exceed the `max_input_tokens` setting
- Very long conversation history
- Large embedded content in messages

Solutions:
- Reduce prompt length or conversation history
- Increase `max_input_tokens` in the gateway configuration
- Implement client-side token counting or compression before sending (see the sketch after this list)
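A minimal client-side token-counting sketch using the tiktoken library. The limit is a placeholder for your gateway's `max_input_tokens` value, and the count is approximate: the right encoding depends on the model, and chat formatting adds a few tokens per message.

```python
import tiktoken

# Placeholder -- substitute your gateway's max_input_tokens setting.
MAX_INPUT_TOKENS = 8192

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize our conversation so far."},
]

# cl100k_base is the tokenizer family used by many OpenAI chat models; pick
# the encoding that matches your target model for a closer estimate.
enc = tiktoken.get_encoding("cl100k_base")
input_tokens = sum(len(enc.encode(m["content"])) for m in messages)

if input_tokens > MAX_INPUT_TOKENS:
    # Trim or summarize the oldest history before sending to the gateway.
    print(f"Too long: {input_tokens} tokens exceeds the limit of {MAX_INPUT_TOKENS}")
else:
    print(f"OK: {input_tokens} input tokens")
```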