API gateway gallery: Drop-in API policy management examples

May 30, 2024 | 10 min read
Joel Hans

API developers deserve API gateways that are flexible enough to operate in whichever way gets them to production fastest. Simple enough for them to understand the current state without having to loop in a peer in operations. Programmable enough to quickly make the necessary changes to protect their APIs' reliability, performance, and developer experience for the consumer downstream.

While other API gateway providers seem to operate under the assumption that this combination is impossible to achieve, ngrok starts with an API gateway that’s truly developer-defined.

Let’s take a closer look at the difference between ngrok’s API toolkit and the entrenched (aka expensive) coterie of deployed and cloud API gateways—but if you’re already onboard and just want to see what policy management magic you can get up to in a few minutes, feel free to skip down to the gallery.

Does your API gateway make policy management accessible to developers?

Unfortunately, if you’re using any of the most popular deployed or cloud API gateways, the answer ranges from a pained “not particularly” to a flat-out “that’s impossible.”

Most deployed API gateways come with missing signs, potholes, and guardrails that are a little too ambitious, not just keeping you on the road, but forcing you into a lane that keeps getting smaller. As a developer, trying to enact change on these API gateways is expensive, cumbersome, and slow, because they:

  • Force you to create more than one deployment to cover multiple regions, which means you’re actually maintaining two or more separate gateways to have a global presence.
  • Ask you to pay extra for policy plugins you consider essential, like advanced authentication or request/response modification.
  • Require weeks or months of coordination with operations teams to spin up.
  • Often rely on tools and languages you’re not familiar with, like XML (yikes) and CSharpScript (double yikes), or force you to install entire ecosystems of tools (Make, Docker, Go, plus “special” images) just to write a basic custom policy for your API.

On the other hand, cloud-based API gateways are often easier to deploy and simpler to use than their deployed counterparts, but they’re far more limited in features and lock you into specific environments. You can’t add all the policies you’d like or go multi-cloud without once again begging your operations peers for help that might take them days of work and weeks of waiting for sign-offs from networking and security stakeholders you barely know.

How ngrok lets you quickly add API policy and traffic management

A truly developer-defined API gateway allows you to flexibly deploy and configure in ways that best serve your API consumer. With ngrok’s API gateway, you can:

  1. Deploy the ngrok agent in whichever way best gets your API to your consumers quickly and reliably, including directly on a Linux/macOS/Windows system, within any Kubernetes cluster, or directly within your Go, JavaScript, Python or Rust app with one of our SDKs.
  2. Configure your API gateway either (or both) at the agent level or as an Edge in the ngrok dashboard.
  3. Run all policy and traffic management workloads on the ngrok Cloud Edge at a Point of Presence (PoP) closest to your API consumer, for a consistent and consistently fast global presence.

With ngrok, you enable ingress at the runtime level (and can even configure it there, too) but also decouple its operation. Your API is then portable across all possible environments, letting you freely test it locally, in a CI/CD environment, or on multiple cloud providers with identical behavior and results for your API consumer.

At the heart of this flexibility is our new Traffic Policy module, which provides a flexible, programmable, and uniform approach to managing API requests and responses across all the ways you use ngrok. This module lets you securely connect your APIs, whether they’re in local testing environments or production deployments, using a single configuration, with support for essential security and availability policies like JWT authentication and rate limiting.

Unlike both traditional deployed API gateways and their newer cloud alternatives, ngrok’s developer-defined option is feature-rich, works everywhere you do, and lets you self-serve your way to production without the operational headaches, red tape, or explosive costs.

Today’s testbed: a simple Go-based API using the ngrok SDK

Sometimes you just don’t want to distribute another binary or manage a separate process to start accepting traffic on your new API—that’s the entire pain point of deployed API gateways, after all. When you embed the ngrok agent directly into your app using one of our SDKs, you can build business logic and ingress at the same time, and in the same repository, using all your favorite tools.

If you’d like to see this lifecycle in action, you can quickly deploy an API using Go and the ngrok SDK from your local workstation. The API is supremely simple—it doesn’t have a database or even fully-functioning CRUD—but it will adequately show the flexibility and programmability of the ngrok API gateway.

If you’d like to just start shaping traffic already, you can skip directly to the examples.

Start by setting up a basic Go project with the packages you’ll need.
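A minimal setup looks something like this; the module path is just a placeholder, and golang.ngrok.com/ngrok is the ngrok Go SDK.

```bash
# Create a new Go module for the example API (the module path is arbitrary)
mkdir legendary-api && cd legendary-api
go mod init example.com/legendary-api

# Add the ngrok Go SDK
go get golang.ngrok.com/ngrok
```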

Create main.go and paste the following Go code into it.
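A minimal sketch of that file might look like the following: a tiny in-memory “database” of legendary creatures behind one handler, served over an ngrok labeled tunnel. The NGROK_EDGE environment variable and the creature fields are illustrative choices for this walkthrough, not requirements of the SDK.

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"net/http"
	"os"

	"golang.ngrok.com/ngrok"
	"golang.ngrok.com/ngrok/config"
)

// creature is a single entry in our deliberately tiny in-memory "database".
type creature struct {
	Name        string `json:"name"`
	Description string `json:"description,omitempty"`
}

var creatures []creature

func creaturesHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	if r.Method == http.MethodPost {
		var c creature
		if err := json.NewDecoder(r.Body).Decode(&c); err != nil {
			http.Error(w, `{"error":"invalid JSON"}`, http.StatusBadRequest)
			return
		}
		creatures = append(creatures, c)
		w.WriteHeader(http.StatusCreated)
	}
	// Echo back the current "database" for GET and successful POST requests.
	json.NewEncoder(w).Encode(creatures)
}

func main() {
	ctx := context.Background()

	// Attach this app to an ngrok Edge via a labeled tunnel. NGROK_EDGE holds
	// the edghts_ label you'll copy from the dashboard in a moment, and
	// NGROK_AUTHTOKEN is read automatically by WithAuthtokenFromEnv.
	tun, err := ngrok.Listen(ctx,
		config.LabeledTunnel(config.WithLabel("edge", os.Getenv("NGROK_EDGE"))),
		ngrok.WithAuthtokenFromEnv(),
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("tunnel attached to your Edge, waiting for traffic...")

	http.HandleFunc("/", creaturesHandler)
	log.Fatal(http.Serve(tun, nil))
}
```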

Create an ngrok domain, which we’ll refer to as <YOUR-NGROK-DOMAIN> from here on out.

Next, you want to manage your API gateway with an Edge. Head over to Edges -> New Edge -> Attach a domain I already have, and choose the domain you just created. You can now configure your ngrok agent to attach a new tunnel to that Edge. Look just under the name of your Edge to see a label string that begins with edghts_ and copy it.

Paste the Edge label, followed by your ngrok authtoken, into the command below:
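Assuming the sketch above, which reads both values from environment variables, that command looks something like:

```bash
# Substitute the edghts_ label you copied and your own authtoken
NGROK_EDGE=edghts_<YOUR-EDGE-LABEL> NGROK_AUTHTOKEN=<YOUR-NGROK-AUTHTOKEN> go run main.go
```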

If you refresh your ngrok dashboard, you’ll see that you have a tunnel running.

Now you can create a POST request to your new API:
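For example, using curl (the request body can be whatever your favorite legendary creature happens to be):

```bash
curl -X POST https://<YOUR-NGROK-DOMAIN>/ \
  -H "Content-Type: application/json" \
  -d '{"name": "Kraken", "description": "Sea monster of Scandinavian legend"}'
```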

The response indicates you added a new legendary creature successfully, and could continue expanding your “database” as desired:
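With the sketch above, the response echoes back the full creature list, along these lines:

```json
[
  {
    "name": "Kraken",
    "description": "Sea monster of Scandinavian legend"
  }
]
```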

Drop-in API policy management examples with ngrok

As mentioned earlier, you can configure your ngrok API gateway in two ways:

  1. At your ngrok Edge using a web-based editor.
    You gain a few advantages when you establish API policy management at the Edge level. First, you can apply any of the YAML-based drop-in policies shown below regardless of how you’re using ngrok, which means you don’t need to learn and apply multiple syntaxes and patterns. Second, applying policy at the ngrok Edge won’t interrupt the lifecycle of your upstream server. Finally, you can attach multiple ngrok agents to a single Edge—for example, if you’re deploying your API from multiple regions—to manage and apply policies across all of them consistently and instantly.
  2. Directly with the ngrok agent as an ngrok endpoint.
    You can configure the agent itself—such as via the Go SDK, agent CLI, and beyond—to store your API gateway configurations as close to your business logic as possible. This lets you more tightly version-control your policies and makes your deployments declarative and repeatable.

No matter how you decide to apply your API policies, just remember they are evaluated at runtime and in sequential order, so place your highest-priority policies at the top. Only policies without expressions, or those with expressions that return true, are executed.

The drop-in API policy management examples below use the first option: on the Edge and using YAML. You can edit an existing Edge by opening the Traffic Policy module. Click Edit Traffic Policy and paste in any drop-in policy below or mix-and-match actions based on what you need from your API gateway or what provides the best experience for your consumers. When you’re done, click Save at the top-right of the ngrok dashboard to apply your new API traffic policy instantly.

Template #1: Add JWT authentication and key-based rate limiting

This drop-in policy is the de facto standard of all API gateways. It denies access to your API for anyone who hasn't properly authenticated their machine-to-machine requests with JSON Web Tokens (JWTs) and restricts their usage to reasonable limits. This prevents an accidental distributed denial-of-service (DDoS) attack on your upstream service and helps control your costs.

For this policy to work, you must have defined your API with an identity provider like Auth0, which issues JWTs on your behalf for ngrok to validate with every subsequent request.
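Here's a rough sketch of what that policy can look like in the Traffic Policy editor. The issuer, audience, and JWKS URLs are placeholders for your identity provider's values, and the exact jwt-validation config fields can vary between Traffic Policy versions, so double-check the docs before pasting.

```yaml
inbound:
  - name: "Validate JWTs issued by your identity provider"
    actions:
      - type: "jwt-validation"
        config:
          issuer:
            allow_list:
              # Placeholder: your identity provider's issuer URL
              - value: "https://<YOUR-IDP-TENANT>/"
          audience:
            allow_list:
              # Placeholder: the audience you registered for this API
              - value: "https://api.example.com"
          http:
            tokens:
              - type: "jwt"
                method: "header"
                name: "Authorization"
                prefix: "Bearer "
          jws:
            allowed_algorithms:
              - "RS256"
            keys:
              sources:
                additional_jkus:
                  - "https://<YOUR-IDP-TENANT>/.well-known/jwks.json"
  - name: "Rate limit consumers by API key"
    actions:
      - type: "rate-limit"
        config:
          name: "Per-key limit"
          algorithm: "sliding_window"
          capacity: 100
          rate: "60s"
          # Illustrative bucket key: an API key header; bucketing on the
          # client IP is another common choice
          bucket_key: ["req.Headers['x-api-key']"]
```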

Template #2: Rate limit API consumers based on authentication status

If you have a public API, you may want to let consumers try it out, albeit with strong restrictions, but also allow those who have signed up for your service and received their authentication token to access it more freely.

In the example below, ngrok applies two tiers of rate limiting: 10 requests/minute for unauthenticated users and 100 requests/minute for users who present a JWT in the appropriate Authorization request header.
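A sketch of that two-tier setup follows; the header-presence expressions and variable names are illustrative, and in practice you'd pair this with the JWT validation from Template #1 so the Authorization header is actually verified.

```yaml
inbound:
  - name: "Authenticated consumers: 100 requests per minute"
    expressions:
      - "'authorization' in req.Headers"
    actions:
      - type: "rate-limit"
        config:
          name: "Authenticated tier"
          algorithm: "sliding_window"
          capacity: 100
          rate: "60s"
          bucket_key: ["req.Headers['authorization']"]
  - name: "Anonymous consumers: 10 requests per minute"
    expressions:
      - "!('authorization' in req.Headers)"
    actions:
      - type: "rate-limit"
        config:
          name: "Anonymous tier"
          algorithm: "sliding_window"
          capacity: 10
          rate: "60s"
          bucket_key: ["conn.ClientIP"]
```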

Template #3: Rate limit API consumers based on pricing tiers

This policy enforces four tiers of rate limiting—free, bronze, silver, and gold—based on the headers present in API requests, or the lack thereof.
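One way to sketch this is with a tier header and one rule per tier, plus a catch-all for consumers who send no tier at all; the header name and the CEL map syntax here are illustrative.

```yaml
inbound:
  - name: "Gold tier: 1000 requests per minute"
    expressions:
      - "'gold' in req.Headers['tier']"
    actions:
      - type: "rate-limit"
        config:
          name: "Gold"
          algorithm: "sliding_window"
          capacity: 1000
          rate: "60s"
          bucket_key: ["conn.ClientIP"]
  - name: "Silver tier: 500 requests per minute"
    expressions:
      - "'silver' in req.Headers['tier']"
    actions:
      - type: "rate-limit"
        config:
          name: "Silver"
          algorithm: "sliding_window"
          capacity: 500
          rate: "60s"
          bucket_key: ["conn.ClientIP"]
  - name: "Bronze tier: 100 requests per minute"
    expressions:
      - "'bronze' in req.Headers['tier']"
    actions:
      - type: "rate-limit"
        config:
          name: "Bronze"
          algorithm: "sliding_window"
          capacity: 100
          rate: "60s"
          bucket_key: ["conn.ClientIP"]
  - name: "Free tier: 10 requests per minute for everyone else"
    expressions:
      # Catch-all for requests that send no tier header at all
      - "!('tier' in req.Headers)"
    actions:
      - type: "rate-limit"
        config:
          name: "Free"
          algorithm: "sliding_window"
          capacity: 10
          rate: "60s"
          bucket_key: ["conn.ClientIP"]
```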

You would then need to instruct your API consumers to use the appropriate header based on their pricing tier, ideally through your developer documentation.

Looking for a quick way to test your new drop-in rate limiting policies? This loop prints out the response status code from curl, showing you exactly when good 200 status codes become 429, indicating Too Many Requests.
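Something like this works from any shell with curl installed:

```bash
# Hammer the endpoint and print only the HTTP status code of each response
for i in $(seq 1 50); do
  curl -s -o /dev/null -w "%{http_code}\n" https://<YOUR-NGROK-DOMAIN>/
done
```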

Template #4: Block traffic from specific countries

Sometimes, you must refuse traffic from specific countries due to internal policy or sanctions applied by the country from which you operate. With the conn.Geo.CountryCode connection variable, ngrok's API gateway lets you send a custom response, with a status code and content, to deliver as much context as you want (or are required) to provide with the rejected request.

Replace <COUNTRY-01> and <COUNTRY-02> with standard ISO country codes, or add more countries as needed.
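A sketch of the policy, using HTTP 451 (Unavailable For Legal Reasons) as the illustrative status code:

```yaml
inbound:
  - name: "Block traffic from specific countries"
    expressions:
      - "conn.Geo.CountryCode in ['<COUNTRY-01>', '<COUNTRY-02>']"
    actions:
      - type: "custom-response"
        config:
          status_code: 451
          content: "Sorry, we are unable to serve requests from your region."
```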

Template #5: Maintain and deprecate API versions

As you continue improving your API, whether to add features or fix security flaws, you’ll eventually want to migrate consumers to newer versions. If your developer documentation instructs consumers to use an X-Api-Version header with their requests, you can quickly increment the supported version and deny requests to others.

This example also demonstrates how your custom responses can be formatted as JSON.
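Here's a sketch that rejects anything other than version 2; the header lookup syntax and the version value are illustrative.

```yaml
inbound:
  - name: "Reject requests without the supported API version"
    expressions:
      - "!('2' in req.Headers['x-api-version'])"
    actions:
      - type: "custom-response"
        config:
          status_code: 400
          headers:
            content-type: "application/json"
          content: "{\"error\": \"Unsupported API version. Set the X-Api-Version header to '2'.\"}"
```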

Template #6: Manipulate headers on inbound requests

When you manipulate headers on requests, you can provide your upstream service with more context and detail to perform custom business logic. If your API returns prices on goods for sale, for example, your upstream service could localize prices using the API consumer’s country code.
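Here's a sketch using the add-headers action; the is-ngrok value is an arbitrary string, and interpolating conn.Geo.CountryCode into a header value assumes the ${} interpolation syntax described in the Traffic Policy docs.

```yaml
inbound:
  - name: "Add context headers for the upstream service"
    actions:
      - type: "add-headers"
        config:
          headers:
            # Arbitrary static string the upstream can check for
            is-ngrok: "1"
            # Country code of the API consumer, for things like price localization
            country: "${conn.Geo.CountryCode}"
```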

Your headers can use arbitrary strings, like the is-ngrok header in the example above, or any request or connection variable.

Template #7: Add compression to your responses

If your upstream service can't compress responses or you would like ngrok to do the work, you can compress all responses using the gzip, deflate, br, or compress algorithms.
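A minimal sketch; list only the algorithms you want ngrok to negotiate with clients.

```yaml
outbound:
  - name: "Compress API responses"
    actions:
      - type: "compress-response"
        config:
          algorithms:
            - "gzip"
            - "br"
            - "deflate"
            - "compress"
```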

Template #8: Enforce the TLS version of requests

ngrok’s API gateway lets you quickly add checks to requests to ensure they meet your internal security requirements and send an informative error message if not.
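For example, here's a sketch that rejects anything older than TLS 1.3; the conn.TLS.Version variable and the string comparison are assumptions to verify against the Traffic Policy docs.

```yaml
inbound:
  - name: "Require TLS 1.3 or newer"
    expressions:
      - "conn.TLS.Version < '1.3'"
    actions:
      - type: "custom-response"
        config:
          status_code: 401
          content: "Requests must use TLS 1.3 or newer."
```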

Template #9: Log unsuccessful events to your observability platform

This API policy logs every unsuccessful request to ngrok's eventing system by checking for responses with status codes below 200 or at or above 300, letting you observe the effectiveness of any API traffic policy in real time.
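A sketch of that outbound rule; whether res.StatusCode compares as a string or an integer depends on the Traffic Policy version, so adjust the expression to match the docs, and the metadata keys are arbitrary labels of your choosing.

```yaml
outbound:
  - name: "Log unsuccessful responses"
    expressions:
      - "res.StatusCode < '200' || res.StatusCode >= '300'"
    actions:
      - type: "log"
        config:
          metadata:
            message: "Unsuccessful response from the upstream API"
            source: "api-gateway-edge"
```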

Template #10: Limit the size of requests (POST/PUT)

If your API accepts new documents or updates to existing ones via user input, you could be at risk of excessively large requests—either accidental or malicious in origin—that create performance bottlenecks in your upstream server or excessive costs due to higher resource usage.
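Here's a sketch that caps POST and PUT bodies at roughly 1 MB; the req.Method and req.ContentLength variables are assumptions to verify against the Traffic Policy docs.

```yaml
inbound:
  - name: "Reject oversized POST/PUT requests"
    expressions:
      - "req.Method in ['POST', 'PUT'] && req.ContentLength > 1000000"
    actions:
      - type: "custom-response"
        config:
          status_code: 413
          content: "Request bodies must be smaller than 1 MB."
```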

What’s next?

Get started with the ngrok API gateway by signing up for ngrok and checking out the Traffic Policy engine on your first Edge. Once your ngrok agent is running, you can use these drop-in API policy management examples and start shaping the security and availability of your endpoints in a few minutes.

Don’t be afraid to experiment with API policies! Feel free to mix and match the examples provided, add in additional actions we haven’t covered, and even try your hand at custom logic using the Common Expression Language (CEL) expressions at your disposal. When you apply API policies directly on the ngrok dashboard, we’ll validate your syntax and suggest improvements to ensure your upstream service is always accessible.

We’re also building a Rule Gallery in our documentation for common-to-unconventional use cases for API policy management. If you extend one of the drop-in templates or create your own, send a pull request to the ngrok-docs repository or drop a message in the ngrok community Slack so we can see what you’ve built.

Joel Hans
Joel Hans helps open source and cloud native startups generate commitment through messaging and content, with clients like CNCF, Devtron, Zuplo, and others. Learn more about his writing at commitcopy.com.