
How to write policies with patterns that scale
A year ago, when I first started writing Traffic Policy rules around jobs like blocking traffic from specific countries or routing to internal endpoints, they felt relatively simple. A few actions in sequence, maybe an expression or two... nothing that required translating particularly complex logic into YAML.
But Traffic Policy has changed a lot since then, with more actions and possibilities. Folks are putting the pieces together in really interesting ways to:
- Ship faster by offloading complex business logic (like auth or security) to their gateway instead of building it themselves.
- Standardize how traffic gets moved and managed across multiple environments and APIs.
- Solve ingress problems we hadn't even thought of yet.
Building out our examples gallery has kept me on my toes, and reinforced the three patterns of good policies: chaining, grouping, and catch-alls.
But first, let's traipse through some essential terms, with a minimal skeleton after the list to show how they fit together:
- Phases are moments in the request lifecycle where ngrok can inspect, process, and manage traffic. There are three: on_tcp_connect, on_http_request, and on_http_response.
- Rules define how you want ngrok to manage your traffic.
- Expressions create conditions for when you do and don't want to run certain rules.
- Actions do the heavy lifting to manipulate, route, authenticate, and (insert more good verbs here) your traffic.
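If it helps to see how those pieces nest, here's a minimal skeleton of a single rule; the path, header name, and value are placeholders, not anything you'd ship as-is:
on_http_request:              # phase
  - expressions:              # conditions that decide whether this rule runs
      - "req.url.path.startsWith('/api')"
    actions:                  # what to do when the expressions match
      - type: add-headers
        config:
          headers:
            x-example: hello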
Pattern 1: Understand how rules chain and terminate
First and foremost, ngrok executes rules based on phase: on_tcp_connect, on_http_request, and then on_http_response. That means all the rules you've added to on_tcp_connect will have finished up by the time anything happens in on_http_request.
Fun fact: If you're running TLS or TCP endpoints, you only have to worry about on_tcp_connect!
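For example, a policy attached to a TCP or TLS endpoint might be nothing more than a single on_tcp_connect rule. This is just a sketch; the restrict-ips action and the CIDR below are placeholders for whatever you actually want to enforce at the connection level:
on_tcp_connect:
  - actions:
      - type: restrict-ips
        config:
          allow:
            - 203.0.113.0/24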
Within each phase, ngrok executes rules from top to bottom, which means you're in full control of the order of operations.
Together, this means ngrok runs all rules in on_tcp_connect, then all rules in on_http_request, and so on. There are two reasons why ngrok won't execute a rule:
- You've added an expression and it evaluates to false.
- An earlier rule ran an action that short-circuits the phase and prevents any later rules from running.
There are four actions that short-circuit your policy: deny, restrict-ips, forward-internal, and custom-response. If you put these actions into your rules in the wrong order, or without an expression that specifies they should only run in certain situations, you'll immediately terminate execution of your rules in ways you probably don't want.
Let's have some fun with ASCII. I've broken an example policy into boxes that show how Traffic Policy splits your YAML into expressions to evaluate and actions to run.
# Request arrives on the `/api/` path
┌──────────────────────────────────────────────────────────────────────┐
│ # ngrok executes this phase first, but it's empty │
│ on_tcp_connect: [] │
└──────────────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────────────┐
│ # Second phase to execute │
│ on_http_request: │
│┌────────────────────────────────────────────────────────────────────┐│
││# First rule to execute ││
││# There's no expression, so it always runs ││
││ - actions: ││
││ - type: add-headers ││
││ config: ││
││ headers: ││
││ x-client-country: ${conn.client_ip.geo.location.country} ││
│└────────────────────────────────────────────────────────────────────┘│
│┌────────────────────────────────────────────────────────────────────┐│
││# Second rule to execute ││
││# If the path of the request starts with `/api`, the expression ││
││# evaluates to true, runs the action, and terminates the phase ││
││ - expressions: ││
││ - "req.url.path.startsWith('/api')" ││
││ actions: ││
││ - type: forward-internal ││
││ config: ││
││ url: https://api.internal ││
│└────────────────────────────────────────────────────────────────────┘│
│┌────────────────────────────────────────────────────────────────────┐│
││# Third rule to execute ││
││# There's no expression, so it always runs, but only if the second ││
││# rule didn't terminate the policy ││
││ - actions: ││
││ - type: custom-response ││
││ config: ││
││ status_code: 404 ││
││ body: '{"message":"Service unavailable"}' ││
│└────────────────────────────────────────────────────────────────────┘│
└──────────────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────────────┐
│ # Final phase │
│ on_http_response: │
│┌────────────────────────────────────────────────────────────────────┐│
││# Last rule to execute ││
││ - actions: ││
││ - type: compress-response ││
││ config: ││
││ algorithms: ││
││ - gzip ││
││ - br ││
││ - deflate ││
││ - compress ││
│└────────────────────────────────────────────────────────────────────┘│
└──────────────────────────────────────────────────────────────────────┘
Pattern 2: Group related actions wherever possible
Each rule can have multiple actions grouped within it. Grouping doesn't necessarily change the top-down order of execution, but it can be very helpful in making your policy more DRY, predictable, and readable for others.
First, group actions under a single rule whenever possible.
on_http_request:
- actions:
- type: add-headers
config:
headers:
x-client-geo: ${conn.client_ip.geo.location.country_code}
- type: rate-limit
config:
name: Only allow 30 requests per minute
algorithm: sliding_window
capacity: 30
rate: 60s
bucket_key:
- ${conn.client_ip}
- type: forward-internal
config:
url: https://foo.internal
This policy adds a header, applies a rate limit, and then forwards the request to your upstream service. Because you want all three to run, in order, on every request, there's no need to repeat yourself with multiple sets of rules.
Second, group your actions under a single expression. For example, don't do this:
on_http_request:
- expressions:
- "req.url.path == '/foo'"
actions:
- type: oauth
config:
provider: google
- expressions:
- "req.url.path == '/foo'"
actions:
- type: forward-internal
config:
url: https://foo.internal
This policy first filters for traffic on the /foo path and applies OAuth authentication. Then it filters again for the same traffic and forwards it to an internal service.
Instead, collapse them into the same expression to make sure you can easily understand the path a given request should take based on its qualities.
on_http_request:
- expressions:
- "req.url.path == '/foo'"
actions:
- type: oauth
config:
provider: google
- type: forward-internal
config:
url: https://foo.internal
Whether you need complex expression logic, like applying an IP restriction OR JWT validation, or just want a straight shot of actions to transform traffic, grouping simplifies the "story" and makes robust policies easier to build and debug.
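As a quick illustration of that kind of OR logic, here's a minimal sketch that gates two hypothetical path prefixes behind the same pair of actions; the paths and the internal URL are placeholders:
on_http_request:
  - expressions:
      - "req.url.path.startsWith('/admin') || req.url.path.startsWith('/internal')"
    actions:
      - type: oauth
        config:
          provider: google
      - type: forward-internal
        config:
          url: https://admin.internal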
Pattern 3: Add failovers and catch-alls
The most popular way to use Traffic Policy right now is to forward traffic to internal agent endpoints and on to upstream services, i.e., some variant of the "front door" pattern. You already saw an example of this when I was talking about order and terminating actions.
When you set up your routing this way, you should cover your bases in case something happens to your ngrok agent or services, which is where this pattern works wonders.
Failovers
The forward-internal action has an on_error: continue configuration, which lets you define a failover action to execute if the forward doesn't work for any reason. Failing over to the custom-response action to deliver a custom error page or JSON response makes for a better user experience.
on_http_request:
- actions:
- type: forward-internal
config:
url: "https://foo.internal"
on_error: continue
- type: custom-response
config:
body: "Not found"
status_code: 404
You could also fail over to the log action to make sure you pipe all the relevant data to your observability platform, then fail over to a backup internal agent endpoint with another forward-internal.
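Here's a rough sketch of that chain. Both forward-internal actions set on_error: continue so the policy can keep falling through, and the log metadata and backup URL are just examples you'd swap for your own:
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: "https://foo.internal"
        on_error: continue
      - type: log
        config:
          metadata:
            message: "primary internal endpoint unreachable"
      - type: forward-internal
        config:
          url: "https://foo-backup.internal"
        on_error: continue
      - type: custom-response
        config:
          body: "Not found"
          status_code: 404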
Failover helps your gateway act in specific circumstances: when resource X fails, do Y. But what if your users do something unexpected, like Z?
Catch-all rules
Catch-all rules help you respond to every request in a meaningful way—either by delivering helpful information for the end user or giving you the confidence you've got every possible edge case handled.
Let's look at an example API gateway with two paths that route to two internal services like so:
- /api/auth/ → https://auth.internal → upstream authentication service
- /api/accounts/ → https://accounts.internal → upstream account database/service
The basic version uses two expressions to filter for requests to those paths and execute the forward-internal action.
on_http_request:
- expressions:
- "req.url.path.startsWith('/api/auth')"
actions:
- type: forward-internal
config:
url: "https://auth.internal"
- expressions:
- "req.url.path.startsWith('/api/accounts')"
actions:
- type: forward-internal
config:
url: "https://accounts.internal"
What happens if someone requests the /api/ path? Or /foo/bar/? Whether they're making an honest mistake or trying to probe your API surface for exploits, you can handle it with a catch-all custom-response action that ngrok executes for all requests that don't match either expression.
on_http_request:
- expressions:
- "req.url.path.startsWith('/api/auth')"
actions:
- type: forward-internal
config:
url: "https://auth.internal"
on_error: continue
- expressions:
- "req.url.path.startsWith('/api/accounts')"
actions:
- type: forward-internal
config:
url: "https://accounts.internal"
on_error: continue
- actions:
- type: custom-response
config:
body: '{"message": "Not found or unavailable."}'
headers:
content-type: "application/json"
status_code: 404
I also threw in failover with on_error: continue for good measure—now this policy handles both random requests and internal outages.
Bonus tips for robust policies
A bunch of ngrokkers put their heads together for some quick ways to get even more from Traffic Policy:
- If you use YAML, you can add whitespace between phase rules and add comments to document the logic you've built for better readability/maintainability. Clearly, I like said whitespace, but the choice is yours.
- If you want to deny requests en masse (like for AI bots or blocklists), it's better to do that in on_tcp_connect—it drops the connection before any TLS or HTTP processing happens.
- If you don't want to use full-on auth (even Basic Auth), you can "mock" an API key by creating an expression that filters for only traffic that contains an Authorization: Bearer header with a secret string: "req.headers['authorization'][0] == 'Bearer hunter2'". There's a minimal sketch of this pattern right after this list.
- If any variable feels too long or hard to remember, create your own with set-vars.
- You might see a name field in some example rules, but it's optional—use it if it helps with readability, or skip it if it feels like clutter.
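Here's what that mock API key tip can look like in practice, using only actions you've already seen in this post; the internal URL and the hunter2 secret are placeholders:
on_http_request:
  # Only forward requests that present the shared secret
  - expressions:
      - "req.headers['authorization'][0] == 'Bearer hunter2'"
    actions:
      - type: forward-internal
        config:
          url: https://foo.internal
  # Catch-all for everything else
  - actions:
      - type: custom-response
        config:
          status_code: 401
          body: '{"message": "Unauthorized"}'
          headers:
            content-type: "application/json"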
Traffic Policy needs your wisdom
To help illustrate even more ways you can write powerful Traffic Policy with these three patterns, I've been working hard to flesh out a gallery of examples based on specific jobs you've dealt with before and might come across again. Think:
- Route to globally distributed endpoints based on geography
- Deploy custom error pages
- Securely expose your n8n workflows
- Validate requests against an internal identity service
- Offload analytics to a secondary service
If you have an example of your own, I'd love to read about it and share—send in your contribution for a shot at glory... and some famous ngrok swag.
For the rest, find me and other ngrokkers hiding out on Discord, where we're always, truly always, willing to talk Traffic Policy.