Deploy a globally-distributed API gateway with DigitalOcean and ngrok

The big cloud providers like AWS and GCP offer hundreds of features and integrated platforms… and charge you dearly. But what if you don’t need all their bells and whistles? In that case, you might get better results with a provider like DigitalOcean (or Linode), which focuses more heavily on simplicity and pricing aimed at early-stage cloud users.

You don’t have to look far to find stories of startups deploying resilient services off a single Droplet, a few GB of RAM, and a few hours of their time.

That strategy keeps your cloud budget low, but what happens when your usage starts to grow? What if you realize you need to serve API consumers with fast latency in not just one region, but three or more?

While you can use DigitalOcean’s native load balancers, they are limited to VMs in the same data center. DigitalOcean also recently released a new beta version of their global load balancer, but that vendor-locks you to their services—exactly why you avoided AWS or GCP in the first place.

Here is where ngrok excels: Our always-on Global Server Load Balancer (GSLB) can route traffic to any number of Droplets running your API service and the ngrok agent, regardless of where they are—and where your customers are. Traffic comes into the GSLB through one of ngrok’s many points of presence (PoPs) around the globe and is routed instantly to whichever Droplet offers the lowest latency.

Not only do you get speed and resiliency, but also extra features you can’t access with DigitalOcean’s new offering.

How does ngrok’s global load balancing ensure API gateway availability and high performance?

For example, you might deploy your API service into three Droplets located in DigitalOcean data centers in San Francisco, Berlin, and Sydney.

In the graphic below, the API consumer is located in Berlin, so ngrok would route traffic first to its Europe-based PoP eu-fra-1, then onto your Droplet based in Berlin.

Because the next API consumer is based in San Francisco, ngrok routes their traffic to ngrok’s California-based PoP, then to your San Francisco-based Droplet.

In both cases, your users benefit from minimal latency and caching.

In addition to increasing performance, ngrok’s GSLB also improves resilience. As mentioned before, ngrok has dozens of PoPs deployed around the globe, which ensures that in the case of any network interruption, ngrok will route user traffic to the next-closest reachable alternative. All this geo-aware rerouting happens without your intervention, ensuring maximum availability for your applications with none of the complex networking tasks.

Elevate Global Load Balancing for your APIs (and apps) with ngrok

What else can ngrok’s GSLB and API gateway do that other load balancers can’t?

Traffic policy and management

ngrok has long been far more than a tunneling service. With our Traffic Policy module, you can enable must-have API gateway features, like JWT validation and rate limiting, at the GSLB level instead of directly next to your API service in DigitalOcean.

Traffic Policy gives you a familiar developer experience for configuration, starting with YAML or JSON files. From there, you write expressions using the Common Expression Language (CEL), which look similar to what you already build in C, JavaScript, or Python. We’ve supplemented the built-in CEL variables with more than 100 others related to the nature of requests and responses to help you build flexible policies that match your API gateway requirements directly.
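As a sketch, a Traffic Policy document that rate-limits API routes per client IP might look like the following. The action and field names follow ngrok’s Traffic Policy documentation at the time of writing, and the specific values—the `/api/` path, the 100-request capacity, the 60-second window—are placeholders to adapt to your own service:

```yaml
on_http_request:
  # A CEL expression scopes the policy to API routes only.
  - expressions:
      - "req.url.path.startsWith('/api/')"
    actions:
      - type: rate-limit
        config:
          name: per-client-api-limit
          algorithm: sliding_window
          capacity: 100          # requests allowed per window
          rate: 60s              # window length
          bucket_key:
            - conn.client_ip     # one rate-limit bucket per client IP
```

Because this policy executes on ngrok’s network, a client that exceeds the limit is rejected at the nearest PoP, not at your Droplet.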

While you configure policies next to your ngrok agents, all compilation and execution happens on ngrok’s GSLB, meaning that all traffic that does not meet your requirements, like unauthorized requests, never reaches your API services running on DigitalOcean.

Multi-cloud support

Another powerful use case for ngrok’s GSLB over the alternatives is multi-cloud enablement.

Instead of provisioning three Droplets in three regions, you could deploy two with DigitalOcean and a third with AWS. ngrok then automatically distributes requests among deployments for additional resiliency or to test your cloud options with production traffic and workloads.
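To make weighted distribution concrete, here is a minimal Python sketch of the concept: requests are assigned to deployments with probability proportional to their weights. The deployment names and weights are hypothetical, and this illustrates the idea rather than ngrok’s actual routing logic:

```python
import random

# Hypothetical weights: two DigitalOcean Droplets take most of the
# traffic, while an AWS instance receives a 20% share -- e.g. to test
# a second cloud with real production traffic.
DEPLOYMENTS = {
    "do-sfo": 40,   # DigitalOcean, San Francisco
    "do-fra": 40,   # DigitalOcean, Frankfurt
    "aws-usw": 20,  # AWS, Oregon (canary)
}

def pick_deployment(rng: random.Random) -> str:
    """Choose a deployment with probability proportional to its weight."""
    names = list(DEPLOYMENTS)
    weights = [DEPLOYMENTS[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Simulate 10,000 incoming requests and count where each one lands.
rng = random.Random(42)
counts = {name: 0 for name in DEPLOYMENTS}
for _ in range(10_000):
    counts[pick_deployment(rng)] += 1
```

Over many requests, roughly 20% of traffic lands on the AWS deployment, letting you compare clouds under real load before committing.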

Environment independence

Finally, ngrok is truly environmentally agnostic. This same load-balancing orchestration happens no matter how you deploy ngrok to your Droplets or other cloud VMs, whether that’s the CLI agent, through one of our SDKs, or via the Kubernetes Operator. You can start with VMs in one cloud+region, then add on a Kubernetes cluster in a second cloud+region, and your end users won’t know a thing.

Developer-friendly 

At this point, you might be thinking, “Doesn't it take a ton of networking know-how to set all this up?” First, ngrok’s GSLB is enabled for all endpoints automatically, whether you’re using a single ngrok agent or a dozen. When you add a branded domain to ngrok, we also create and manage TLS certificates.

Beyond that, using ngrok doesn’t require you to modify your application or API service. You just run the ngrok agent on each of your DigitalOcean Droplets and we handle the rest of the networking and security complexity.
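For example, each Droplet’s agent can be driven by a small configuration file. This is a sketch against the v3 agent config format; the domain and port are placeholders for your own reserved domain and the port your API service listens on:

```yaml
# ngrok agent config on each Droplet (e.g. ~/.config/ngrok/ngrok.yml)
version: 3
agent:
  authtoken: ${NGROK_AUTHTOKEN}    # your authtoken from the ngrok dashboard
endpoints:
  - name: api
    url: https://api.example.com   # the branded domain you added to ngrok
    upstream:
      url: 8080                    # local port your API service listens on
```

Starting the agent with this config on each Droplet brings it into the load-balanced pool; running it as a system service keeps it up across reboots.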

Start load balancing at global scale 

Ready to try load-balancing with your DigitalOcean workloads and an API gateway? Check out our comprehensive integration guide for details on installing the agent and using the ngrok API to establish weighted tunnels.

By the end of that guide, you’ll have provisioned three Droplets, deployed an example service, and established an equally-weighted ngrok endpoint to handle all your traffic with minimal latency and disaster-proof resiliency.

Eager to jump into ngrok agents and endpoints, but wish you had a smoother ramp-up to your first production-grade tunnels? Reserve your spot in the next edition of ngrok’s Office Hours, where our DevEd and Product teams demo common solutions, explore endpoints and tunnels together, and answer your burning questions live.
