From nginx to ngrok: Dogfooding our own website with Traffic Policy

When you write software, at some point you need to make it accessible to other networks for it to provide value to others. ngrok’s mission is to make this as simple as possible for our users, and we're no exception to that need ourselves.

In addition to our primary tunneling service, we also had things we needed to put on the internet, like our Python-based website, our Go REST API, and so on. In the beginning, we did what many of us have done before as infrastructure engineers: We spun up an nginx proxy and wrote some configurations to route to each of our services based on hostnames. After getting it working in Kubernetes, provisioning some certificates, and configuring DNS, we were online!

It definitely took a bit more than “one line to online” though. 😊

As the company and product matured, we started making more applications and services that we needed to put online, like dashboards, APIs receiving webhooks, internal tooling, etc. As this happened, we began to ask the question, “Could we create ingress to these services using ngrok?” After a quick POC, we realized that yes, we could, and with relative ease.

Thus, the concept of dogfooding took root at ngrok.

Our dogfooding origin story

Dogfooding—if you haven’t heard of it—is the practice of using your own product internally. The term’s origin is a bit murky: some say it came from an Alpo spokesperson who claimed they fed the product to their own dog, while others credit the president of Kal Kan Pet Food, who allegedly ate it during shareholder meetings. Microsoft later helped popularize the concept by encouraging teams to use their software in development. 

Today, while the term may still feel niche, many companies embrace dogfooding to build empathy and tighten feedback loops. At ngrok, it’s a core tenet of how we shape our product—and thankfully, we’re in software, not pet food.

Using ngrok worked very well for all the new things we were creating. It was simple to spin up a new application or service and provide ingress to it with ngrok. Since this worked so well, we wanted to make this our primary and only way of hosting our applications and services, and we turned our eyes to our nginx proxy that had been serving traffic for our website and API since the beginning. 

Our API was easy to transition, since we were only using nginx to proxy api.ngrok.com to our API pods. However, when we looked at the configurations for ngrok.com, we realized we were going to have some issues.

As many of us have experienced before, software tends to just grow in complexity over time. What had started as a simple “route ngrok.com to the Python web app pods” had exploded in complexity as we onboarded new teams and got new requirements.

  • Our documentation pages were rewritten as a Docusaurus-built static site hosted in S3, with nginx proxying requests to /docs through to the bucket.
  • Our marketing team began redesigning some of our website pages and creating blog posts in the Webflow CMS, which again meant nginx needed to route various unique paths to it.
  • Website paths, of course, changed over time as well, so we added various redirects and path rewrites.
  • Over time, it also became home to a growing list of one-off redirects—like quick links to our Discord community at https://ngrok.com/discord.
  • Legacy subdomains like download.ngrok.com and docs.ngrok.com had been added as well, along with redirects that respected specific paths like http://blog.ngrok.com/posts/nginx-ngrok-dogfooding.

We quickly realized we couldn't tackle these requirements one at a time: because we would have to change DNS for the whole domain at once, ngrok needed to handle all of them from day one. At the time—around two years ago—ngrok couldn’t yet support all of these use cases. We could provide ingress, some basic path-based routing, and various other features like OAuth protection, but we couldn’t do basic things like a URL rewrite or redirect.

You may be asking, “But wait, can’t you just use ngrok to provide ingress to nginx?” Yes, technically, we could have. But to really embody the spirit of dogfooding, you shouldn't sidestep your product's gaps and fill them with something else. We took the stance that the ngrok product should fill these gaps, put the idea on hold, and went back to the drawing board.

Dogfooding with a more capable API gateway

Fast forward to the present, and with the introduction of Endpoints and Traffic Policy, ngrok has transformed. What was once simple ingress to an application, with the capability to add features like authentication and IP allowlists, has now become a full API gateway solution.

At the heart of this evolution is Traffic Policy: an expressive configuration format that lets you manipulate traffic with fine-grained control. Think nginx configuration, but for the ngrok cloud, plus all of the ngrok goodies you're used to, like easy authentication and webhook verification, built with the simplicity and focus on user experience that sit at the heart of ngrok's core values.
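
For a taste of the format before we dig in, here's a minimal, purely illustrative rule (not one of our real policies): each rule pairs optional CEL expressions with a list of actions to run when they match.

# Illustrative example: hide a hypothetical internal path
on_http_request:
  - name: Hide an internal path
    expressions:
      - req.url.path.startsWith('/internal')
    actions:
      - type: custom-response
        config:
          status_code: 404
          content: not found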

With Traffic Policy and Endpoints in hand, we set out to retire our nginx proxy once and for all. But before we could do that, we had to make sure ngrok could take over everything nginx had been responsible for. It wasn’t just about routing traffic—we needed to match all the little behaviors and edge cases that had accumulated over time.

Here’s what our nginx setup was handling:

  • TLS termination using manually managed certificates
  • Access logs piped from nginx pods into Datadog
  • Custom response headers on non-production sites to discourage indexing
  • A growing list of redirects and regex-based path rewrites
  • Proxying requests to external services like our docs site and CMS

These weren’t just nice-to-haves—they were blockers for switching over. We went through them one by one and asked: “Can we do this with ngrok now?”

SSL certs and TLS termination

Out of the gate, TLS termination was one area where ngrok already had us covered. With nginx, we had to manually manage TLS certificates for every domain—creating Kubernetes secrets for each one, referencing them in Ingress objects, and ensuring the nginx ingress controller could mount and serve them properly. It worked, but it wasn’t exactly elegant.

With ngrok, this all just… goes away. When we register a domain in our dogfood ngrok account and add the appropriate DNS records to prove ownership, ngrok automatically provisions and renews certificates for us. TLS termination happens at the ngrok edge, and everything is handled behind the scenes. No more worrying about expiring certs or fiddling with secrets.

You can read more about how ngrok manages branded domains and dive into TLS-specific details.
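
Since we manage everything else declaratively, it's worth noting the domain itself can live in Kubernetes too. Here's a minimal sketch, assuming the ngrok-operator's Domain CRD—the group and version may differ in your install, so check your cluster's CRDs:

# Sketch: reserve a branded domain via the operator (CRD shape assumed)
apiVersion: ingress.k8s.ngrok.com/v1alpha1
kind: Domain
metadata:
  name: ngrok-com
spec:
  domain: ngrok.com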

Logs

With nginx, getting logs into Datadog was straightforward—we already had a Datadog agent running in our cluster, and it slurped up stdout logs from the nginx pods. It wasn’t glamorous, but it worked.

When we moved to ngrok, the part of the infrastructure responsible for handling inbound traffic shifted from our Kubernetes cluster to ngrok’s cloud infrastructure. Instead of logging requests from a pod inside our cluster, we now needed a way to observe requests handled by ngrok’s global points of presence.

Thankfully, ngrok already supported this out of the box. Every request that flows through an ngrok Endpoint generates a Traffic Event: structured logs that include all the usual details you'd expect in a proxy access log—source IP, method, headers, latency, response status, and more.

We configured our dogfood account with an Event Destination to forward these logs to Datadog. There was no special setup required, no sidecars to deploy, and no extra wiring to get the logs flowing.

As a bonus, ngrok also offers Traffic Inspector: a real-time stream of traffic events that you can filter by path, status code, method, or endpoint. It’s not a full replacement for a log aggregation system, but it’s a surprisingly handy tool for debugging issues or validating config changes.

Custom headers for non-production sites

One small but important behavior we had to carry over from nginx was setting custom response headers on some of our non-production environments. While these environments aren’t security-sensitive, we still prefer they not show up in search engine indexes.

With nginx, we had a global configuration that added headers like X-Robots-Tag: noindex to discourage crawlers from indexing these sites, handled with a shared ConfigMap:

# ConfigMap with headers
kind: ConfigMap
apiVersion: v1
metadata:
  name: no-index-headers
  namespace: ingress-nginx
data:
  X-Robots-Tag: noindex
# Main nginx config pointing to it
add-headers: "ingress-nginx/no-index-headers"
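
With ngrok, the equivalent lives in each non-production endpoint's Traffic Policy instead. Here's a minimal sketch using the add-headers action on the response phase (the rule name is our own choice; the header is the same one nginx set):

# Sketch: set X-Robots-Tag on responses from non-production endpoints
on_http_response:
  - name: Discourage search indexing
    actions:
      - type: add-headers
        config:
          headers:
            X-Robots-Tag: noindex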

While it’s a bit more manual than a global config, it has the advantage of being clear and portable—every endpoint declares its own behavior, and it’s obvious what’s being applied where. We’ll take that tradeoff any day.

Redirects

Path-based redirects were one of the gaps that had blocked us from fully replacing our nginx setup; with the launch of Traffic Policy, ngrok finally supported them.

Previously, nginx handled redirects using Kubernetes Ingress annotations. For example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog-redirect
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: https://ngrok.com/blog-post/$2
spec:
  rules:
    - host: blog.ngrok.com
      http:
        paths:
          - path: /posts(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80

This redirected URLs under /posts/... to the new blog-post path, using $2 to reference the capture group holding the tail of the path.

With Traffic Policy, redirects are now native and straightforward:

on_http_request:
  - name: Redirect old blog URLs
    expressions:
      - req.url.path.startsWith('/posts')
    actions:
      - type: redirect
        config:
          from: https://blog.dev-ngrok.com/posts(/|$)(.*)
          to: https://dev-ngrok.com/blog-post/$2

These rules use CEL (Common Expression Language) expressions to match on request paths, while the from and to patterns are regular expressions with capture groups, giving us full control over redirect behavior based on precise rules.

This was one of the last missing pieces—once redirect support landed, ngrok could handle it just as well as nginx did, and we no longer had to rely on Kubernetes annotations.

Forwarding traffic

Forwarding requests to external services—like our static docs site on S3—was the last major gap before fully migrating off nginx.

Before we switched over, here’s how we handled /docs paths in nginx:

  • We hosted a Docusaurus-built static site on S3.
  • nginx stripped the /docs prefix and forwarded the request to the S3 bucket.

Our Kubernetes Ingress required several annotations to rewrite the path, set the upstream host, enable TLS, and ensure secure proxy headers. Being unfamiliar with nginx configurations, I recall trying various combinations of annotations and snippets until it finally worked—so I shipped it.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: doc-site
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    nginx.ingress.kubernetes.io/upstream-vhost: "docs-s3.ngrok.com"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_name docs-s3.ngrok.com;
      proxy_ssl_server_name on;
spec:
  rules:
    - host: ngrok.com
      http:
        paths:
          - path: /docs(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 443
  tls:
    - hosts:
        - ngrok.com

There may have been a simpler way to do this—or some of these annotations may not have even been required—but this became the boilerplate that got copied as needed.

With Traffic Policy and the forward-external action (currently in developer preview; request access yourself and we'll let you know when it's ready for public use!), the configuration is much more straightforward:

on_http_request:
  - name: docs
    expressions:
      - req.url.path.startsWith('/docs')
    actions:
      - type: url-rewrite
        config:
          from: /docs(/|$)(.*)
          to: /$2
      - type: forward-external
        config:
          url: https://docs-s3.ngrok.com

Here’s what’s happening:

  • Strip the /docs prefix via url-rewrite, so Docusaurus sees requests at /.
  • Proxy the request to our S3-hosted docs site using forward-external.

Comparing the two side by side, the Traffic Policy approach is far more straightforward—no juggling annotations, TLS flags, or header overrides. This clean simplicity was a welcome relief and a strong indicator that ngrok had truly grown into a real API gateway.

Setup using the ngrok Kubernetes Operator

With all our requirements checked off, we were ready to begin the final migration—and as with the rest of our infrastructure, we reached for the ngrok-operator.

We’d already been using the operator for other services with backing pods, but the ngrok.com configuration was a bit different. There were no application pods to route to—just a collection of redirects and proxy rules to external systems like S3 or our CMS. So we created CloudEndpoint CRDs; here are a few examples:

apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: CloudEndpoint
metadata:
  name: docs-subdomain-redirect
spec:
  url: https://docs.ngrok.com
  trafficPolicy:
    policy:
      on_http_request:
        - name: docs-subdomain-redirect
          actions:
            - type: redirect
              config:
                to: https://ngrok.com/docs
---
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: CloudEndpoint
metadata:
  name: blog-subdomain-redirect
spec:
  url: https://blog.ngrok.com
  trafficPolicy:
    policy:
      on_http_request:
        - name: posts-redirect
          expressions:
            - req.url.path.startsWith('/posts')
          actions:
            - type: redirect
              config:
                from: https://blog.ngrok.com/posts(/|$)(.*)
                to: https://ngrok.com/blog-post/$2
        - name: author-redirect
          expressions:
            - req.url.path.startsWith('/author')
          actions:
            - type: redirect
              config:
                from: https://blog.ngrok.com/author(/|$)(.*)
                to: https://ngrok.com/blog-author/$2
        - name: root-redirect
          expressions:
            - req.url.path.startsWith('/')
          actions:
            - type: redirect
              config:
                to: https://ngrok.com/blog

Each of these resources lives in our infrastructure repository as code, just like our other Kubernetes CRDs. They’re version-controlled, reviewed, and rolled out through the same deployment pipelines.

Once everything was defined, all we had to do was flip the DNS from our cluster’s nginx ingress to the ngrok-managed endpoints. Just like that, we were live.

What we loved about dogfooding ngrok

With nginx, if we had any cluster or pod issues, we’d have a service disruption to our site.

Now we use the operator to drive the ngrok API via Kubernetes CRDs, but we don't actually rely on it to keep traffic moving to these endpoints. They're all ngrok Cloud Endpoints that execute solely on the ngrok network—we could delete the operator pods from our cluster entirely and our site would be fine.

This also means none of this traffic actually reaches our cluster anymore. Instead of routing through multiple layers of AWS networking to reach a pod running in Oregon for a simple redirect or custom response, our cloud endpoint can respond from the closest ngrok PoP. This cuts out network traffic to our cluster, removes the dependency on our pods being healthy, and is faster depending on where you're hitting ngrok.com from.

With nginx, we only ran it in our single Control Plane cluster in Oregon, which hosts the shared services our dataplanes use. This meant a request from Australia had to traverse the open internet to Oregon before it could be proxied by nginx to our docs site in S3, fronted by CloudFront. By using ngrok, we automatically gain the benefits of the ngrok Global Load Balancer and its datacenters across the globe. Now requests from Australia hit our Australian datacenter and get routed to a nearby CloudFront PoP. Much closer and faster!

We also noticed that other teams could self-service changes to this setup more easily. For example, the frontend team created a new standalone static app for our Downloads page that they wanted to drive traffic to and set up redirects for. Since they didn’t know nginx configurations, they would've likely logged a ticket for our team to update it for them—but since it's now a Traffic Policy they understand, they were able to self-service the rules and iterate on changes on their own with confidence.
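
As an illustration of the kind of rule they could add themselves (the path and target URL here are hypothetical, not their actual change), sending a path to a new app is just one more entry in the policy:

# Hypothetical example: path and target URL are illustrative
on_http_request:
  - name: Send downloads traffic to the new static app
    expressions:
      - req.url.path.startsWith('/downloads')
    actions:
      - type: redirect
        config:
          to: https://download.ngrok.com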

What we're looking to dogfood next

Just like how evolving your product never ends, dogfooding never ends—it’s a constant process, like a snake eating itself.

Throughout this migration, we got to provide feedback on new features while they were still being designed, catch bugs before they hit customers, and help shape the user experience of ngrok’s Traffic Policy system as it matured. And that cycle continues: we’re already eyeing the next set of features we want to try out in production.

Every customer-facing service at ngrok now runs on ngrok—our website, our dashboard, our API, our internal admin tools. Even our CI, build system, and dev environments are wired through the platform. That means we know immediately if something goes wrong, and we get daily feedback from every corner of the company on how things could be better.

The migration may be complete, but the dogfooding never stops.

Alex Bezek
Alex is an Infrastructure Engineer at ngrok helping to manage our internal developer platform. He loves all things cloud native & is obsessed with Kubernetes!