Introducing Cloud Endpoints

Have you ever wanted an ngrok endpoint that doesn’t go offline when you get disconnected from the internet?

Today, we are excited to introduce Cloud Endpoints: persistent, always-on ngrok endpoints that are managed centrally via the dashboard and API. They let you route and respond to traffic even if your upstream services are offline. Cloud endpoints use ngrok’s new Traffic Policy system to configure traffic handling just like the Agent Endpoints (aka tunnels) that you're familiar with.

Cloud endpoints solve a number of problems for ngrok developers. Let’s take a closer look:

  • Always on — Because they are not tied to the lifecycle of an agent process and live persistently until deleted, cloud endpoints let you handle traffic even if your upstream service goes offline. They’re frequently used to render a custom error page or fail over to another service.
  • Centrally managed — We see customers choose cloud endpoints to be the "front door" where they standardize how to handle, authenticate, and route traffic to their apps and APIs. This allows you to create architectures where you treat the agent endpoints (aka tunnels) as ‘dumb pipes’ by moving the smarts to the centrally-managed cloud endpoints.
  • Traffic Policy configuration — Cloud endpoints use the exact same Traffic Policy configuration as agent endpoints so that you can transition between them with just a simple copy/paste. You only need to learn a single configuration language because you can use it with every endpoint.
  • API automation — The Endpoints API resource can be used to automate management of your cloud endpoints. You can automate cloud endpoint creation via the ngrok API, API client libraries, and the Kubernetes Operator CRD (see the sketch just after this list).
  • Replacement for Edges — Cloud endpoints replace and deprecate Edges, and they are more flexible and easier to work with. See the guide on how to migrate from Edges to cloud endpoints. Edges will continue to work while we transition everyone off of them.
  • Fully integrated — Cloud endpoints are a first-class feature of the ngrok platform, which means they plug into the rest of the platform just like agent endpoints do.
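
For Kubernetes users, here is a rough sketch of what a CloudEndpoint manifest for the Operator can look like. The apiVersion and spec field names below are our best guess at the Operator’s CRD schema, so double-check the Operator documentation before copying:

# Illustrative manifest only: the apiVersion and spec field names are
# assumptions; consult the ngrok Kubernetes Operator docs before use.
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: CloudEndpoint
metadata:
  name: hello-cloud-endpoint
spec:
  url: https://inconshreveable.ngrok.app
  trafficPolicy:
    policy:
      on_http_request:
        - actions:
            - type: custom-response
              config:
                status_code: 200
                content: hello world from my new cloud endpoint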

Cloud endpoints are available today to users on our free and pay-as-you-go plans. You can read the cloud endpoints documentation to get into the nitty-gritty details about how they work.

How to create a cloud endpoint 

You can create a cloud endpoint on the ngrok dashboard or via the API. For the example below, we’re going to use the API via the ngrok agent CLI (you may need to run ngrok update first!).

Creating a cloud endpoint is a single API call where you specify the endpoint’s URL and its Traffic Policy:

ngrok api endpoints create \
  --api-key {YOUR_API_KEY} \
  --type cloud \
  --url https://inconshreveable.ngrok.app \
  --traffic-policy '{"on_http_request":[{"actions":[{"type":"custom-response","config":{"status_code":200,"content":"hello world from my new cloud endpoint"}}]}]}'

Now let’s try it out:

$ curl https://inconshreveable.ngrok.app
> hello world from my new cloud endpoint

Easy. You’ve got a cloud endpoint online serving requests!
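
Because the endpoint lives until you delete it, you can also iterate on its Traffic Policy in place rather than recreating anything. Here’s a rough sketch; it assumes the update command accepts the same --traffic-policy flag as create, and ep_abc123 stands in for the ID returned when you created the endpoint:

# Find your endpoint's ID (it starts with ep_), then update its policy.
ngrok api endpoints list --api-key {YOUR_API_KEY}

# Assumption: update mirrors create's --traffic-policy flag.
ngrok api endpoints update ep_abc123 \
  --api-key {YOUR_API_KEY} \
  --traffic-policy '{"on_http_request":[{"actions":[{"type":"custom-response","config":{"status_code":200,"content":"hello again, now with an updated policy"}}]}]}'

Now that we know how to create a cloud endpoint, let’s take a deeper look into what you’ll use them for.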

Cloud endpoints as your 'front door'

Combining cloud endpoints with agent endpoints gives developers full autonomy over when and where their services become accessible.

For instance, an Ops team can create a public cloud endpoint, such as api.example.com, and configure OAuth to validate and authorize client requests before they reach your internal service. Meanwhile, developers can keep building critical functionality, such as pricing, on an agent endpoint with an internal binding like api-pricing.example.internal. When ready, Ops can enable public API access via api.example.com/pricing and route it to api-pricing.example.internal using the forward-internal action.

When client requests hit api.example.com/pricing, ngrok forwards them to the agent endpoint (api-pricing.example.internal). This setup empowers developers to manage service delivery through the ngrok API gateway, eliminating the usual friction of filing tickets with Ops.
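
For reference, the developer side is a single agent command. Here’s a rough sketch assuming the pricing service listens locally on port 8080; endpoints with URLs ending in .internal are reachable only via the forward-internal action, never directly from the public internet:

# Assumption: the pricing service runs on localhost:8080. Depending on
# your agent version, you may also need to pass --binding internal.
ngrok http 8080 --url https://api-pricing.example.internal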

Here is the Traffic Policy snippet that makes this possible:

on_http_request:
  - actions:
      - type: oauth
        config:
          provider:
            google:
              client_id: YOUR_GOOGLE_CLIENT_ID
              client_secret: YOUR_GOOGLE_CLIENT_SECRET
  - name: Route /pricing/* to internal endpoint
    expressions:
      - req.url.path.startsWith('/pricing')
    actions:
      - type: forward-internal
        config:
          url: https://api-pricing.example.internal
  - name: Route all other traffic to production
    actions:
      - type: forward-internal
        config:
          url: https://api.example.com.internal

To dig deeper into how to set up the routing that makes Ops control and developer self-service possible, check out a few of our resources.

Show a custom error page if your traffic comes from blocklisted IPs

In this real-world example (one we actually use at ngrok), a cloud endpoint identifies anonymous traffic with IP Intelligence and routes it to a specific error page designed just for those visitors.

on_http_request:
  - expressions: ["'proxy.anonymous' in conn.client_ip.categories"]
    actions:
      - type: custom-response
        config:
          status_code: 403
          content: "<!doctype html><html><body>We do not allow access from anonymous proxies. (ERR_NGROK_22000)</body></html>"
          headers:
            content-type: text/html

Another nice element at play here is that you don’t have to host a separate service or webpage for your error messages; you can simply use ngrok’s Traffic Policy to serve up static content. (More examples here.)

Having issues with anonymous traffic to your app? Give this example a try.

Show a custom error page if your app is offline

We all have bad days. Help your end users understand what’s going on if your service goes down by:

  1. Creating a cloud endpoint with a Traffic Policy that includes a forward-internal action pointing at your agent endpoint.
  2. Configuring the Traffic Policy to fall through to a custom-response action when the forward fails.

The Traffic Policy example will look like this:

on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://your-agent-endpoint.internal
          on_error: continue
      - type: custom-response
        config:
          status_code: 503
          content: |
            <!DOCTYPE html>
            <html>
            <body>
              <h1>Service Temporarily Unavailable</h1>
              <p>We apologize, but our service is currently offline. Please try again later.</p>
            </body>
            </html>
          headers:
            content-type: text/html

Again, you don’t have to host a separate service or webpage for your error messages: just use ngrok’s Traffic Policy to serve up static content, and make it your own with HTML.

ngrok.com runs on ngrok cloud endpoints

At ngrok, we dogfood everything we ship to customers. We’ve already been using cloud endpoints ourselves and have found all sorts of uses for them. You’re even accessing one right now!

The https://ngrok.com site is itself a cloud endpoint with a chain of Traffic Policy rules that filter and act on requests as they hit our network. Among other things, we block Tor traffic using the custom error page shown just above, add redirects, and route traffic to multiple external services, like our blog, docs, and downloads page.
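
As an illustration of those redirects, a rule like the following sketch (the paths here are made up) sends a retired path to its new home with the redirect action:

on_http_request:
  - expressions:
      - req.url.path == '/product/old-page'
    actions:
      - type: redirect
        config:
          to: https://ngrok.com/new-page
          status_code: 301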

And here’s how we forward ngrok.com/downloads to a Vercel app with the upcoming forward-external Traffic Policy action:

on_http_request:
  - expressions:
      - req.url.path.startsWith('/downloads') ||
        req.url.path.startsWith('/__manifest')
    actions:
      - type: forward-external
        config:
          url: https://<NGROK_DOWNLOADS_DEPLOY>.vercel.app

Stay tuned for a full breakdown of the entire ngrok.com dogfooding story in an upcoming post from our infra team!

Replacement for ngrok Edges

Cloud endpoints may feel familiar if you’ve used Edges before. They replace and deprecate Edges with a primitive that is both simpler and more flexible. They are powered by our expressive Traffic Policy engine that was built with modern traffic routing needs in mind. Cloud endpoints improve on Edges with:

  • Simplified object model: You don’t have to grapple with Tunnel Groups, Backends, Modules, Edge Routes, or labeled tunnels. Everything is now an endpoint with an associated Traffic Policy. Traffic management becomes more intuitive, reducing the learning curve.
  • Simplified traffic routing: With Traffic Policy, cloud endpoints enable you to route traffic not just by path but also by headers, subdomains, and more. This added flexibility gives you greater control over how traffic flows to your services.
  • Fewer API calls: Setting up a cloud endpoint requires just a single API call, unlike Edges, which involve multiple calls for tunnel groups, edge routes, and modules. This reduces complexity and minimizes the risk of failures during automation.

Want to get off Edges? See the guide on how to migrate from Edges to cloud endpoints. There is no planned end-of-life date for Edges yet; when one is set, we’ll announce it separately with plenty of time to make the transition, along with automated tooling to help you migrate.

Wrapping up

To close, we’re pretty pumped about cloud endpoints and the flexibility they bring to managing your traffic. So excited, in fact, that we’re already using them ourselves. Stay tuned for more in-depth guides on how you can use cloud endpoints in your own workflows. Until then, peace.

Nijiko Yonskai
Niji is a Principal Product Manager who helps shape the ngrok user experience. Previously product at Kong and Postman.