Error codes reference
ERR_NGROK_3800
Message
Could not reach the AI provider. The upstream request failed.
HTTP Status
502 Bad Gateway
Causes
- Network connectivity issues between ngrok and the provider
- Provider endpoint is down or unreachable
- DNS resolution failure for the provider URL
Solutions
- Check provider status pages (for example, OpenAI Status)
- Verify that a custom provider `base_url` is correct and accessible (a quick reachability check is sketched below)
- Configure failover to alternative providers
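If the provider looks healthy from its status page, it can still help to confirm from your own network that the provider's `base_url` resolves and answers. A minimal Python sketch (the URL shown is a placeholder; substitute your provider's `base_url`):

```python
import socket
import urllib.error
import urllib.request
from urllib.parse import urlparse

base_url = "https://api.openai.com/v1"  # placeholder: use your provider's base_url

host = urlparse(base_url).hostname
try:
    socket.getaddrinfo(host, 443)                # DNS resolution
    urllib.request.urlopen(base_url, timeout=5)  # TCP/TLS + HTTP round trip
    print(f"{base_url} is reachable")
except urllib.error.HTTPError as exc:
    # The server answered; an HTTP error status still means the host is reachable.
    print(f"{base_url} is reachable (responded with HTTP {exc.code})")
except OSError as exc:
    # Covers DNS failures (socket.gaierror) and connection/TLS errors (URLError).
    print(f"Could not reach {base_url}: {exc}")
```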
ERR_NGROK_3801
Message
Invalid request body: ERR
HTTP Status
400 Bad Request
Causes
- Request body is not valid JSON
- `model` field is not a string
- `models` field is not an array
- `models` array contains non-string values
Solutions
- Validate your request JSON before sending (see the sketch below)
- Ensure the `model` field is a string
- If using a `models` array, ensure all entries are strings
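A small amount of client-side validation catches these shape errors before the request ever reaches the gateway. A sketch of the checks described above (illustrative only, not the gateway's own validation code):

```python
import json

def validate_body(raw: str) -> list[str]:
    """Return a list of problems with a request body, empty if it looks valid."""
    try:
        body = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]

    errors = []
    if "model" in body and not isinstance(body["model"], str):
        errors.append("`model` must be a string")
    if "models" in body:
        if not isinstance(body["models"], list):
            errors.append("`models` must be an array")
        elif not all(isinstance(m, str) for m in body["models"]):
            errors.append("`models` must contain only strings")
    return errors

print(validate_body('{"model": 123}'))           # ['`model` must be a string']
print(validate_body('{"models": ["gpt-4o"]}'))   # []
```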
ERR_NGROK_3802
Message
No API key found. Check your SDK / AI Gateway configuration, or add API keys to your AI Gateway.
HTTP Status
400 Bad Request
Causes
- No `Authorization` header in the client request
- No API keys configured for the provider in the gateway
- API key selection strategy filtered out all keys
Solutions
- Include an API key in your request: `Authorization: Bearer sk-xxx` (see the sketch below)
- Configure API keys in your gateway’s provider configuration
- Review your `api_key_selection` strategy if configured
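For clients that call the gateway directly over HTTP, the key travels in the `Authorization` header. A minimal Python sketch, assuming an OpenAI-style chat completions path; the gateway URL, path, and environment variable name are placeholders for your own setup:

```python
import json
import os
import urllib.request

GATEWAY_URL = "https://your-gateway.ngrok.app/v1/chat/completions"  # placeholder

req = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps({
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode(),
    headers={
        "Content-Type": "application/json",
        # The provider API key travels in the Authorization header.
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    },
)
print(urllib.request.urlopen(req).read().decode())
```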
ERR_NGROK_3803
Message
Model selection failed: ERR
HTTP Status
422 Unprocessable Entity
Causes
- CEL expression syntax error in `model_selection.strategy`
- Runtime error evaluating the strategy expression
- Reference to an undefined variable in the strategy
Solutions
- Check variable names and ensure they are available and correctly referenced
- Check for typos in variable names (for example, `ai.models`, not `ai.model`)
- Review the CEL Functions Reference for available functions
- Validate your CEL expressions using the CEL playground
ERR_NGROK_3804
Message
No matching models found: ERR
HTTP Status
422 Unprocessable Entity
Causes
- Typo in the model name (for example, `gpt-4a` instead of `gpt-4o`)
- Model not in the Model Catalog and no provider prefix
- `only_allow_configured_models: true` and model not in provider config
- `only_allow_configured_providers: true` and provider not configured
Solutions
- Check your spelling. Common models include `gpt-4o`, `gpt-4`, and `claude-3-5-sonnet-20241022`
- For unknown models, prefix with `provider:` (for example, `openai:custom-model`), as in the sketch below
- Review your gateway’s `providers` configuration
- Check whether the restriction flags `only_allow_configured_models` or `only_allow_configured_providers` are excluding the model you are trying to use
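For a model outside the catalog, the provider prefix goes directly in the `model` field of the request body. A short sketch (the custom model name is a placeholder):

```python
# Requesting a model that is not in the Model Catalog by giving it an
# explicit provider prefix; "custom-model" is a placeholder name.
payload = {
    "model": "openai:custom-model",  # provider prefix tells the gateway where to route
    "messages": [{"role": "user", "content": "Hello"}],
}
```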
ERR_NGROK_3805
Message
Model selection returned no results: ERR
HTTP Status
422 Unprocessable Entity
Causes
- A strategy returned an empty result, produced an error, or returned models that are not in the catalog or do not match the client’s requested models. The gateway tries each strategy in order, failing immediately if one produces an error or an empty set, or after exhausting all of them.
Solutions
- Double-check your model selection strategies and ensure they return models that exist in the catalog or in your gateway configuration
- Double-check whether you have enabled `only_allow_configured_models` or `only_allow_configured_providers`
- If clients specify models in requests, ensure your model selection strategies will return those models
ERR_NGROK_3806
Message
Expression ‘EXPRESSION’ must return an AIModel or []AIModel, got ACTUAL_TYPE. See https://ngrok.com/docs/ai-gateway/guides/troubleshooting#err-ngrok-3806 for more information.
HTTP Status
422 Unprocessable Entity
Causes
- Strategy expression returned a string, number, or other value instead of a model object or list of model objects
Solutions
- Use `ai.models`, `ai.models.randomize()`, `ai.models.random()`, or `ai.models.filter()`, which return model objects
- Don’t return raw strings like `"gpt-4o"` or numbers like `123` from your strategy expressions
ERR_NGROK_3807
Message
All providers failed: ERR
HTTP Status
503 Service Unavailable
Causes
- All configured providers returned errors
- All API keys exhausted (rate limits, invalid keys)
- Network issues to all providers
- Provider-specific errors (invalid model, authentication failures)
Solutions
- Check the error details for specific provider failures (errors are listed in the response); the sketch below shows one way to surface them
- Verify API keys are valid and have available quota
- Add more providers or API keys for better failover
- Use debugging to inspect individual attempt errors
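Because the individual provider errors are listed in the response, it helps to log the error body rather than only the status code. A rough Python sketch, assuming the gateway returns a JSON error body; the URL is a placeholder:

```python
import json
import urllib.error
import urllib.request

GATEWAY_URL = "https://your-gateway.ngrok.app/v1/chat/completions"  # placeholder

req = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps({"model": "gpt-4o",
                     "messages": [{"role": "user", "content": "Hello"}]}).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    print(urllib.request.urlopen(req).read().decode())
except urllib.error.HTTPError as exc:
    body = exc.read().decode()
    print(f"HTTP {exc.code}")
    try:
        # Pretty-print so per-provider failure details are easy to read.
        print(json.dumps(json.loads(body), indent=2))
    except json.JSONDecodeError:
        print(body)
```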
ERR_NGROK_3808
Message
API key selection strategy failed: ERR
HTTP Status
422 Unprocessable Entity
Causes
- No API key sent in the request and no API keys configured for the provider (only for official providers like OpenAI)
- All `api_key_selection` strategies returned no keys or had errors
- Strategy expression returned an invalid type instead of a key or list of keys
Solutions
- Include an API key in your request: `Authorization: Bearer sk-xxx`
- Configure API keys in your gateway’s provider configuration
- Check the error message for details about which strategy failed
ERR_NGROK_3809
Message
AI Gateway action can only be used on HTTP endpoints. This endpoint is using PROTOCOL. See https://ngrok.com/docs/ai-gateway/guides/creating-endpoints for help creating an endpoint.
HTTP Status
422 Unprocessable Entity
Causes
- AI Gateway action used on a non-HTTP endpoint
- Endpoint configured for the TCP or TLS protocol
- AI Gateway only supports HTTP/HTTPS endpoints
Solutions
- Check your endpoint configuration and ensure it uses the HTTP protocol
ERR_NGROK_3810
Message
Request timed out after TIMEOUT. Try a shorter prompt or increase your timeout. See https://ngrok.com/docs/ai-gateway/guides/troubleshooting#err-ngrok-3810 for more information.
HTTP Status
504 Gateway Timeout
Causes
- All failover attempts took longer than `total_timeout`
- Providers responding slowly
- Too many failover candidates with slow responses
Solutions
- Increase `total_timeout` if appropriate for your use case
- Reduce `per_request_timeout` to fail faster on slow providers
- Use model selection strategies to prioritize faster providers
- Reduce the number of failover candidates
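When diagnosing timeouts, it can help to measure how long requests actually take from the client and to treat a 504 distinctly from other failures. A rough Python sketch (the gateway URL is a placeholder, and the client-side timeout value is an arbitrary example):

```python
import json
import time
import urllib.error
import urllib.request

GATEWAY_URL = "https://your-gateway.ngrok.app/v1/chat/completions"  # placeholder

req = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps({"model": "gpt-4o",
                     "messages": [{"role": "user", "content": "Hello"}]}).encode(),
    headers={"Content-Type": "application/json"},
)
start = time.monotonic()
try:
    urllib.request.urlopen(req, timeout=120)  # client-side timeout, example value
    print(f"completed in {time.monotonic() - start:.1f}s")
except urllib.error.HTTPError as exc:
    elapsed = time.monotonic() - start
    if exc.code == 504:
        # Gateway gave up: shorten the prompt or raise total_timeout.
        print(f"gateway timed out after {elapsed:.1f}s")
    else:
        print(f"HTTP {exc.code} after {elapsed:.1f}s")
```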
ERR_NGROK_3811
Message
Input too large: INPUT_TOKENS tokens (max MAX_ALLOWED). Shorten your prompt. See https://ngrok.com/docs/ai-gateway/guides/troubleshooting#err-ngrok-3811 for more information.
HTTP Status
413 Payload Too Large
Causes
- Prompt and context exceed the `max_input_tokens` setting
- Very long conversation history
- Large embedded content in messages
Solutions
- Reduce prompt length or conversation history
- Increase `max_input_tokens` in the gateway configuration
- Implement client-side token counting or compression before sending (see the sketch below)
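Exact token counts depend on each model’s tokenizer, but a rough client-side estimate is usually enough to catch oversized requests before they are sent. A sketch using a ~4 characters-per-token heuristic and a hypothetical 8,000-token budget (neither is a gateway default):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    return len(text) // 4 + 1

def trim_history(messages: list[dict], budget: int = 8000) -> list[dict]:
    """Drop the oldest non-system turns until the estimated total fits the budget."""
    trimmed = list(messages)
    while len(trimmed) > 2 and sum(estimate_tokens(m["content"]) for m in trimmed) > budget:
        trimmed.pop(1)  # keep the system message at index 0 and the newest turns
    return trimmed

messages = [{"role": "system", "content": "You are helpful."}] + [
    {"role": "user", "content": "question " * 500} for _ in range(20)
]
print(len(messages), "->", len(trim_history(messages)))
```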