
Put your APIs online the easy and composable way
We’ve built the most composable API gateway—approachable, reusable, scalable, and endlessly mix-and-match-able. Let me introduce you to our cast of characters.
First, you have Cloud Endpoints.
Cloud endpoints are persistent and globally available on the ngrok network, like a serverless function. They accept traffic on a URL like https://api.your-company.com, take action on requests entirely within our network, and, in the case of an API gateway, route to other endpoints.
Speaking of which: internal Agent Endpoints.
These are created by the ngrok agent—could be on the CLI, Kubernetes Operator, or SDK—alongside a secure tunnel. Unlike a public agent endpoint, which can accept traffic from the public internet, an internal agent endpoint runs on a URL like https://your-api.internal, which means it can only accept traffic from another endpoint owned by your ngrok account—in this case, your cloud endpoint.
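For instance, here is a sketch of how you might define an internal agent endpoint in front of a local API on port 8080 using the agent's config file. This assumes the v3-style config format with an `endpoints` section; the URL, name, and port are placeholders, so check the agent docs for the exact schema your version supports:

```yaml
# Sketch of an ngrok agent config that creates an internal agent endpoint
# in front of a local API listening on port 8080 (placeholder values).
version: 3
agent:
  authtoken: ${NGROK_AUTHTOKEN}
endpoints:
  - name: your-api
    url: https://your-api.internal
    upstream:
      url: 8080
```

The rough CLI equivalent would be `ngrok http 8080 --url https://your-api.internal`.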
Finally, Traffic Policy brings everyone together.
It’s a configuration language that lets you orchestrate requests as they pass through any of your endpoints. Each part of Traffic Policy can be wired together in exactly the right way to make your API services secure, flexible, and highly available.
We love all parts of Traffic Policy, from variables to macros, but for now, we have our eyes set on one part: the forward-internal action. This action forwards traffic from one endpoint to another, connecting your cloud and internal agent endpoints and giving you complete control of when your API service is available to the open internet.

Let me explain the steps.
- A user (human or machine) fires off a request to https://api.your-company.com.
- Your cloud endpoint accepts traffic and applies the forward-internal action, which forwards everything to https://your-api.internal.
- Your internal agent endpoint receives traffic through the secure tunnel and forwards it to your upstream API service.
- Your service does a little magic with 0s and 1s and returns a response that passes back through the secure tunnel and endpoints.
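Expressed as Traffic Policy on the cloud endpoint, the routing half of those steps is a single rule. This is a minimal sketch using the placeholder URLs from above:

```yaml
# Cloud endpoint Traffic Policy: hand every HTTP request to the
# internal agent endpoint over the secure tunnel.
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://your-api.internal
```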
With these three characters speaking this way, you can wire up ngrok’s API gateway in a few minutes and get traffic orchestration functionality that would take you days or weeks with other providers.
But this architecture is also just the beginning of what you can compose together.
This composable shape works no matter how many upstream API services you have. Add an internal agent endpoint for each upstream service, plus a new forward-internal action based on how you want to route traffic between them.
You can route by path (e.g. https://api.your-company.com/foo vs. https://api.your-company.com/bar) to different upstreams, with a 404 catch-all for all other paths.
```yaml
on_http_request:
  - expressions:
      - req.url.path.startsWith('/foo')
    actions:
      - type: forward-internal
        config:
          url: https://foo.internal
  - expressions:
      - req.url.path.startsWith('/bar')
    actions:
      - type: forward-internal
        config:
          url: https://bar.internal
  - actions:
      - type: deny
        config:
          status_code: 404
```
Or maybe you use a subdomain model.
```yaml
on_http_request:
  - expressions:
      - req.host.startsWith('foo')
    actions:
      - type: forward-internal
        config:
          url: https://foo.internal
  - expressions:
      - req.host.startsWith('bar')
    actions:
      - type: forward-internal
        config:
          url: https://bar.internal
  - actions:
      - type: deny
        config:
          status_code: 404
```
As you extend your API gateway to more upstream API services, you’ll eventually want to stop adding a new forward-internal action for each—enter CEL interpolation, which lets you manage requests based on their attributes. If you’re routing by subdomain, you can automatically ensure that any request to https://foo.your-company.com forwards to your API service on https://foo.internal, and so on, without having to manually add new rules.
```yaml
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://${req.host.split('.your-company.com')[0]}.internal
  - actions:
      - type: deny
        config:
          status_code: 404
```
This is just one example of how ngrok tamps down the DevOps burden, not just at implementation time but across the entire lifespan of your API gateway.
The biggest advantage of cloud endpoints, aside from being always-on, is that they let you set unified policies across all your API services no matter how far you scale out.
If you’re a DevOps or infrastructure engineer, that’s music to your ears, and some of the lowest-hanging fruit to manage centrally include:
- Layers of authentication and security with JWT validation, IP restrictions, or mutual TLS.
- Geoblocking based on the country code or region traffic originates from.
- Rate limiting for fairness and abuse prevention.
- Safely integrating your services with external platforms by verifying webhooks.
- Adding headers to enrich requests to your upstream services or add detail to your logs.
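As a sketch of what one of these centralized policies could look like on the cloud endpoint, here is a rule that rate limits, stamps a header, and then forwards. The action config fields shown (capacity, rate, bucket_key, and the header name) are illustrative, so check the Traffic Policy action reference for the exact schema:

```yaml
on_http_request:
  - actions:
      # Illustrative rate limit: 100 requests per minute per client IP.
      - type: rate-limit
        config:
          name: fair-use
          algorithm: sliding_window
          capacity: 100
          rate: 60s
          bucket_key:
            - conn.client_ip
      # Enrich every request with a header before routing to upstreams.
      - type: add-headers
        config:
          headers:
            x-gateway: ngrok
      - type: forward-internal
        config:
          url: https://your-api.internal
```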
As I mentioned earlier, Traffic Policy runs on any of your endpoints.
While certain policies run centrally, there are others you’d want to attach to only a single API service. With ngrok’s API gateway, you attach those Traffic Policy rules to the internal agent endpoint associated with that service, which is perfect for letting API developers self-service specific API gateway behavior without undermining the centralized policies.
What policies would you compose? How about:
- URL rewrites and redirects to handle changes to your API’s spec or to transparently handle major version changes (e.g. /v1/… to /v2/…).
- A circuit breaker to reject requests if your API service lands in an error state.
- Customized error pages for specific endpoints.
- Nuanced rate limiting based on whether a user is authenticated, includes a specific header in their request (e.g. X-tier: platinum), or hits a specific route.
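On the internal agent endpoint side, a service team’s self-service policy might look something like this sketch, which rewrites legacy v1 paths to the v2 implementation and swaps raw server errors for a friendlier response. The url-rewrite and custom-response config fields are illustrative assumptions, not a copy of the exact schema:

```yaml
on_http_request:
  - expressions:
      - req.url.path.startsWith('/v1/')
    actions:
      # Transparently serve legacy /v1 requests from the /v2 implementation.
      - type: url-rewrite
        config:
          from: /v1/
          to: /v2/
on_http_response:
  - expressions:
      - res.status_code >= 500
    actions:
      # Replace raw 5xx responses with a customized error page.
      - type: custom-response
        config:
          status_code: 503
          body: "Temporarily unavailable, please retry."
```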

With composability, you have so many ways to mix-and-match ngrok’s components to build the perfect API gateway for you and your team. This composable shape sets up a foundation that’s both easy to operate and effortless to expand.
Create an account to get started—you can create this kind of stack on any plan.
If you’d like to learn more about the composable API gateway:
- Check out our Traffic Policy docs.
- Read about internal and cloud endpoints on our blog.
And if you’re ready to jump straight in:
- Follow our end-to-end API gateway tutorial.
- Or use endpoint pooling for the easiest path to load balancing.
If you still have questions about production-ready API gateway patterns and advanced usage—or want to share some powerful setups that are already working for you—join us on the next monthly Office Hours livestream! I’ll be there, alongside other ngrok experts and our community, to answer your questions live and demo some of our newest API gateway features.
Excited to see what you compose next!