Migrating ngrok Edges to Cloud Endpoints
This guide walks you through manually migrating ngrok Edge configurations (HTTPS, TLS, TCP) to Cloud Endpoints using the ngrok API.
📋 Prerequisites
- An ngrok account with an ngrok API key.
- The ngrok API documentation.
- Tools like `curl` or Postman (or ngrok's own API libraries).
- An understanding of your existing edge configurations.
- YAML formatting skills for creating traffic policies.
- A text editor to prepare YAML payloads.
✅ What You'll Be Migrating
| Edge Type | Replaced By | Policy Phases Used |
|---|---|---|
| HTTPS | Cloud Endpoint + Agent Endpoint | `on_http_request`, `on_http_response` |
| TLS/TCP | Cloud Endpoint + Agent Endpoint | `on_tcp_connect` |
⚙️ Step 1 — Set Up Environment
Make sure you have:
- `NGROK_API_TOKEN` (your personal or organization token).
- API Base URL: `https://api.ngrok.com`.
Example Header for all API requests:
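A minimal sketch: the ngrok API authenticates with a Bearer token and requires the `Ngrok-Version: 2` header on every call.

```bash
# Every ngrok API request carries these two headers
curl \
  -H "Authorization: Bearer $NGROK_API_TOKEN" \
  -H "Ngrok-Version: 2" \
  https://api.ngrok.com/endpoints
```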
📋 Step 2 — List Your Edges
Use the ngrok API to list all existing edges:
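A sketch using `curl` against the HTTPS edges endpoint:

```bash
# List HTTPS edges (repeat with /edges/tls and /edges/tcp)
curl \
  -H "Authorization: Bearer $NGROK_API_TOKEN" \
  -H "Ngrok-Version: 2" \
  https://api.ngrok.com/edges/https
```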
If a response includes a non-empty `next_page_uri`, you’ll want to follow it until it is `null`:
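A sketch of that loop, assuming the HTTPS list response exposes its items under `https_edges` and that `jq` is available:

```bash
# Follow next_page_uri until it comes back null
NEXT="https://api.ngrok.com/edges/https"
while [ "$NEXT" != "null" ]; do
  PAGE=$(curl -s \
    -H "Authorization: Bearer $NGROK_API_TOKEN" \
    -H "Ngrok-Version: 2" \
    "$NEXT")
  echo "$PAGE" | jq -r '.https_edges[].id'
  NEXT=$(echo "$PAGE" | jq -r '.next_page_uri')
done
```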
Just make sure to keep your API rate limits in mind! Repeat this process for:
- `/edges/tls`
- `/edges/tcp`
🧠 Step 3 — Determine the Target URL
Each edge has one or more `hostports`. Use those to define a cloud endpoint `url`:
| Edge Type | URL Format |
|---|---|
| HTTPS | `https://yourdomain.com` |
| TLS | `tls://yourdomain.com:443` |
| TCP | `tcp://yourdomain.com:12345` |
Note: you’ll have to create a cloud endpoint for each `hostport` attached to an edge if you want multiple `hostports` to service the same traffic flow.
🛠️ Step 4 — Create a Traffic Policy
Each cloud endpoint will have a YAML traffic policy.
Here is the base structure of a Traffic Policy:
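A sketch of the base structure, where each phase holds a list of rules and each rule has optional `expressions` plus its `actions`:

```yaml
on_http_request:
  - expressions:
      - "<CEL expression>"
    actions:
      - type: <action-name>
        config: {}
on_http_response:
  - actions:
      - type: <action-name>
        config: {}
on_tcp_connect:
  - actions:
      - type: <action-name>
        config: {}
```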
Use only the relevant phases for the edge type.
🔁 Step 5 — Migrate Routes (HTTPS Only)
HTTPS edges have routes, and we will need to convert each route’s match into a CEL expression based on its `match_type`, for use on every rule that is defined.
Here are some example CEL expressions for routes based on the `match_type`:
| Match Type | CEL Expression |
|---|---|
| `exact_path` | `req.url.path == "/foo"` |
| `path_prefix` | `req.url.path.startsWith("/api")` |
Create these for later use in each `expressions:` block in your policy rules.
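For example, a hypothetical rule scoped to a `path_prefix` route might look like this:

```yaml
on_http_request:
  - expressions:
      - 'req.url.path.startsWith("/api")'
    actions:
      - type: add-headers
        config:
          headers:
            x-route: api
```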
🔧 Step 6 — Convert Modules to Actions
Each module on the edge or its routes maps to one or more policy actions:
| Module | Actions |
|---|---|
| `oauth` | `oauth` + `set-vars` + `custom-response` for restriction |
| `oidc` | `oidc` with optional `auth_id` + session durations |
| `ip_restrictions` | `restrict-ips` |
| `request_headers` | `add-headers`, `remove-headers` |
| `response_headers` | `add-headers`, `remove-headers` |
| `circuit_breaker` | `circuit-breaker` |
| `webhook_verification` | `verify-webhook` |
| `compression` | `compress-response` |
| `user_agent_filter` | `set-vars` + `deny` (based on user-agent match) |
| `websocket_tcp_converter` | ⚠️ Not supported |
You’ll want to translate each module’s configuration into the corresponding action’s configuration. Add these actions under the correct phase (with matching route expressions when the endpoint is HTTP or HTTPS).
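For example, an `ip_restrictions` module that allows a single CIDR could become a `restrict-ips` action (a sketch; confirm the exact config fields against the action reference):

```yaml
on_http_request:
  - actions:
      - type: restrict-ips
        config:
          enforce: true
          allow:
            - 203.0.113.0/24
```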
🎯 Step 7 — Convert Backends
Your edge’s backend defines where traffic goes. The corresponding actions should be placed after all the other actions you defined above, at the end of the phase.
Tunnel Group Backends
If the edge uses a `tunnel_group` backend (identified by labels):

- Construct an internal domain for each label (e.g., `service=app` → `service-app.internal`).
- Forward traffic to each internal domain using the `forward-internal` action, falling through from one `forward-internal` action to the next.
- Run the agent or create a cloud endpoint with that internal domain to receive traffic.
Example Policy (HTTPS):
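A sketch of such a policy, assuming `on_error: continue` is set on the `forward-internal` config so the fallback response runs when the internal endpoint is unreachable:

```yaml
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://service-app.internal
          on_error: continue
      - type: custom-response
        config:
          status_code: 503
          headers:
            content-type: text/plain
          body: "service-app is offline. Restart the agent serving https://service-app.internal."
```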
Here we are defining `forward-internal` to forward to `https://service-app.internal`, and when an error occurs (like the endpoint being offline) we fall back to some custom text (by leveraging `on_error: continue` and `custom-response`) describing how to get back online (this is optional).
Running internal agent endpoint via the CLI:
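One way to do this with the ngrok CLI (exact flags may differ by agent version):

```bash
# Start an internal agent endpoint fronting a local service on port 80
ngrok http 80 --url https://service-app.internal
```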
Here, we have started an agent endpoint with the URL `https://service-app.internal`, which points to a service running locally on the machine on port 80.
TCP/TLS Backends
These are generally tunnel group backends and should follow the same rules as above. However, note that you should use the `on_tcp_connect` phase here instead:
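A sketch for a TCP edge, assuming the tunnel group maps to a hypothetical internal TCP endpoint at `tcp://service-app.internal:12345`:

```yaml
on_tcp_connect:
  - actions:
      - type: forward-internal
        config:
          url: tcp://service-app.internal:12345
```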
Additionally, you cannot use the `custom-response` action for an HTTP fallback.
HTTP Response Backends
Use `custom-response` to serve static content:
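A sketch that returns a static page:

```yaml
on_http_request:
  - actions:
      - type: custom-response
        config:
          status_code: 200
          headers:
            content-type: text/html
          body: "<html><body><h1>Hello from your migrated endpoint</h1></body></html>"
```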
Failover Backends
- Loop through the list of failover backends.
- For each backend type, apply the same rules as above in sequence, but on each tunnel group use `on_error: continue` to fall through to the next group or HTTP Response backend, as sketched below.
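A sketch of a failover chain across two hypothetical tunnel groups with an HTTP Response fallback at the end:

```yaml
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://primary.internal
          on_error: continue
      - type: forward-internal
        config:
          url: https://secondary.internal
          on_error: continue
      - type: custom-response
        config:
          status_code: 503
          body: "All backends are currently offline."
```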
Weighted Backends → Not yet supported
These are not officially supported today in traffic policies.
You can pseudo-replicate this functionality today using multiple internal endpoints and the `rand` macro to randomly send traffic:
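A sketch of a rough 70/30 split across two hypothetical internal endpoints; the exact name of the random macro (shown here as `rand.double()`) is an assumption to verify against the CEL reference:

```yaml
on_http_request:
  # ~70% of requests match the expression and go to service-a
  - expressions:
      - "rand.double() <= 0.7"
    actions:
      - type: forward-internal
        config:
          url: https://service-a.internal
  # everything else falls through to service-b
  - actions:
      - type: forward-internal
        config:
          url: https://service-b.internal
```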
Since our CEL environment doesn’t support reduce or fold, you’ll have to use an external script to determine the cumulative weights, chance, and lookups:
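A hypothetical helper script that turns a list of `name=weight` pairs into the cumulative thresholds used in the expressions above:

```bash
#!/usr/bin/env bash
# Usage: ./weights.sh service-a=70 service-b=20 service-c=10
total=0
for pair in "$@"; do
  total=$(( total + ${pair#*=} ))
done

cumulative=0
for pair in "$@"; do
  name=${pair%%=*}
  weight=${pair#*=}
  cumulative=$(( cumulative + weight ))
  # Each endpoint matches when the random draw falls at or below its cumulative share.
  threshold=$(awk "BEGIN { printf \"%.4f\", $cumulative / $total }")
  echo "$name: rand.double() <= $threshold"
done
```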
✏️ Step 8 — Create the Cloud Endpoint
Each edge `hostport` should become a separate cloud endpoint, with its configuration represented as JSON and the `traffic_policy` embedded as stringified YAML.
Endpoint configuration object
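A sketch of that object, assuming the Create Endpoint API accepts `type`, `url`, and a stringified `traffic_policy`:

```json
{
  "type": "cloud",
  "url": "https://yourdomain.com",
  "description": "Migrated from HTTPS edge",
  "traffic_policy": "on_http_request:\n  - actions:\n      - type: forward-internal\n        config:\n          url: https://service-app.internal\n"
}
```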
⚠️ Ensure all newlines (`\n`) and indentation are preserved when stringifying YAML into JSON. It’s best to use a library or third-party tool to stringify your YAML and then generate the resulting cloud endpoint JSON object.
Submit to:
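For example, assuming the JSON above is saved as `endpoint.json`:

```bash
curl -X POST https://api.ngrok.com/endpoints \
  -H "Authorization: Bearer $NGROK_API_TOKEN" \
  -H "Ngrok-Version: 2" \
  -H "Content-Type: application/json" \
  -d @endpoint.json
```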
🧹 Step 9 — Clean Up the Old Edge
Once you have validated that your edge has been migrated and works, you can clean up the edge from your account:
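For example, deleting an HTTPS edge by its ID (a sketch; substitute the edge ID you captured in Step 2):

```bash
curl -X DELETE \
  -H "Authorization: Bearer $NGROK_API_TOKEN" \
  -H "Ngrok-Version: 2" \
  "https://api.ngrok.com/edges/https/$EDGE_ID"
```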
Repeat for all HTTPS, TLS, and TCP edges.
✅ Final Checklist
| Step | Done? |
|---|---|
| Listed all edges and their hostports | |
| Converted modules to policy actions | |
| Converted backends to `forward-internal` or `custom-response` | |
| Created necessary internal endpoints | |
| Created cloud endpoints | |
| Deleted the old edge | |