
Forwarding Traffic to and Load Balancing Internal Endpoints with Cloud Endpoints

warning

Cloud endpoints are currently in private beta. The functionality listed below is not yet publicly available.

This guide provides an example of using an ngrok Cloud Endpoint to route traffic between multiple internal endpoints.

Core Concepts

  • Cloud endpoints are centrally managed endpoints in the cloud that can be used to route traffic to agent endpoints.
  • Internal endpoints are endpoints that are not publicly accessible; they are only reachable from within your account, for example from cloud endpoints via the forward-internal action.
  • Load balancing improves application performance by distributing incoming traffic across multiple servers, reducing the load on any single server and leading to faster response times for user-facing applications.

Prerequisites

To follow this guide, you will need a local computer with ngrok installed. You can download ngrok from https://ngrok.com/download.

Whether you follow along using the ngrok CLI or curl, you will also need:

  • An ngrok API key, set as an environment variable named NGROK_API_KEY.
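You can export the key in your shell so the commands below can use it. A minimal example (replace the placeholder with the API key from your ngrok dashboard):

export NGROK_API_KEY=<your-api-key>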

Step 1 — Create an Internal Endpoint

Let's start by creating an internal endpoint, initiated from the agent. The endpoints in this guide use HTTP, but you can also use TCP or TLS.

ngrok http 80 \
--url https://your-domain-name.internal

After running the above command, you should see an internal endpoint with an "online" status for your domain. This endpoint will route traffic to the app running on your local port 80. You can run any HTTP server on this port locally, and the endpoint will route traffic to that server internally.
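If nothing is listening on port 80 yet, one quick option is Python's built-in HTTP server. This is just an example test server, not a requirement of the guide, and binding to port 80 usually requires elevated privileges:

# serve the current directory over HTTP on port 80 (for testing only)
sudo python3 -m http.server 80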

Step 2 — Create a Domain

Next, let's create the domain that the cloud endpoint will reside on. For the purpose of this guide, we will reserve an ngrok.app domain.

ngrok api reserved-domains create \
--domain ${NGROK_SUBDOMAIN}.ngrok.app
  • Replace ${NGROK_SUBDOMAIN} with the subdomain you'd like to use for this guide, or set it as an environment variable.
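If you're following along with curl instead of the ngrok CLI, a roughly equivalent API call is sketched below. It assumes NGROK_API_KEY and NGROK_SUBDOMAIN are set in your shell; check the ngrok API reference for the authoritative request format:

curl -X POST https://api.ngrok.com/reserved_domains \
-H "Authorization: Bearer ${NGROK_API_KEY}" \
-H "Ngrok-Version: 2" \
-H "Content-Type: application/json" \
-d "{\"domain\": \"${NGROK_SUBDOMAIN}.ngrok.app\"}"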

After running, you should see the following:

200 OK
{
  "id": "rd_2MT5Bqt0UzU0mFQ0zr8m1UQWCfm",
  ...
}

Once the domain is reserved, you can move on to the next step.

Step 3 — Create a Traffic Policy for your Cloud Endpoint

Unlike agent endpoints, for which traffic policies are optional, cloud endpoints require a traffic policy so they know how to handle incoming traffic. This guide showcases one of the simplest and most common use cases: forwarding traffic to another endpoint (the internal endpoint we created in Step 1) via the forward-internal action.

---
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://your-domain-name.internal
          binding: internal

Step 4 — Create an ngrok Cloud Endpoint

Now we can create an ngrok Cloud Endpoint that points to our newly created internal endpoint. Cloud Endpoints can be created via the API or the ngrok dashboard.

We use the domain reserved in Step 2 as the --url value, the traffic policy from Step 3 as the --traffic-policy value, and set the binding to public so anyone can access our cloud endpoint.

ngrok api endpoints create \
--api-key YOUR_NGROK_API_KEY \
--bindings public \
--description "sample description" \
--metadata sample-metadata \
--url https://${NGROK_SUBDOMAIN}.ngrok.app \
--traffic-policy '{"on_http_request":[{"actions":[{"type":"forward-internal","config":{"url":"https://your-domain-name.internal","binding":"internal"}}]}]}'
  • Replace ${NGROK_SUBDOMAIN} with the value used in Step 2, and YOUR_NGROK_API_KEY with your ngrok API key.
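If you're using curl, a roughly equivalent request is sketched below. The body fields mirror the CLI flags above (a cloud endpoint with a public binding and the traffic policy passed as a string); double-check the field names against the ngrok API reference, and replace YOUR_SUBDOMAIN with the subdomain from Step 2:

curl -X POST https://api.ngrok.com/endpoints \
-H "Authorization: Bearer ${NGROK_API_KEY}" \
-H "Ngrok-Version: 2" \
-H "Content-Type: application/json" \
-d @- <<'EOF'
{
  "type": "cloud",
  "url": "https://YOUR_SUBDOMAIN.ngrok.app",
  "bindings": ["public"],
  "description": "sample description",
  "metadata": "sample-metadata",
  "traffic_policy": "{\"on_http_request\":[{\"actions\":[{\"type\":\"forward-internal\",\"config\":{\"url\":\"https://your-domain-name.internal\",\"binding\":\"internal\"}}]}]}"
}
EOF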

Step 5 — Load Balancing with Endpoint Pools

Load balancing is simple with ngrok Endpoint Pools. Start another agent with the same internal endpoint URL, and traffic will automatically be load balanced between the two agents in the pool!

ngrok http 80 \
--url https://your-domain-name.internal
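To confirm that requests are being distributed, send a few requests to the public URL and watch each agent's output (or serve distinct content from each backend). A quick check, assuming NGROK_SUBDOMAIN is still set in your shell:

# send several requests to the cloud endpoint; with two agents in the
# pool, they should be spread across both backends
for i in 1 2 3 4 5; do
  curl -s https://${NGROK_SUBDOMAIN}.ngrok.app > /dev/null && echo "request $i served"
done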

Conclusion

And we're done! You've successfully created a cloud endpoint that can be accessed at the URL you reserved: ${NGROK_SUBDOMAIN}.ngrok.app. The cloud endpoint consults its traffic policy and forwards traffic to our internal endpoint, which in turn routes requests to the local port you exposed in Step 1.