Tired of slow Kubernetes dev loops? Try local projection with bindings

As someone whose development work has more or less dwindled to simple demo apps and API services that I only need to update every few months, even I hate the remarkably slow standard-issue dev loop in Kubernetes.

You know the drill. Save your code -> commit and push changes to your repo -> wait for the CI pipeline to build and publish new images -> wait for the cluster to pull in new versions of those images -> wait for the pods to restart -> finally start to check how your changes affect your development or staging environment.

And on the flip side, I know enough infrastructure and platform engineers to empathize with their pain in trying to support their peers with better developer environments and "golden path" tooling.

Enter the idea of Kubernetes-bound endpoints—just one part of our recent push into complete and composable support for Kubernetes ingress. As an app or API developer, these bindings reduce the Kubernetes dev loop to zero, letting you instantly see how your service behaves in remote clusters. If you're on the infra/DevOps/platform side, you'll soon have a new tool for enabling self-service without having to balloon your infrastructure with complex networking tools.

What might that handshake look like?

Infra/platform engineers: Enable Kubernetes bindings

The Operator runs in three pods: ngrok-operator-manager, ngrok-operator-agent, and ngrok-operator-bindings-forwarder. By default, it doesn't affect any existing Ingress or Gateway API implementations you might already have, so you can safely install it alongside existing solutions.

I hope that if you're reading this, you're already on board with what we're doing for ingress into your Kubernetes services and want to bring those over to the ngrok way, but if not, you can stick to just bindings for now.

Our K8s docs have full installation instructions, but here's the gist. First, set up environment variables for your authtoken and API key, both of which you can access in the dashboard:

export NGROK_AUTHTOKEN=<YOUR_NGROK_AUTHTOKEN>
export NGROK_API_KEY=<YOUR_API_KEY>
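
If you haven't already added ngrok's Helm repository, do that first. A quick sketch, assuming the chart's usual home at charts.ngrok.com:

helm repo add ngrok https://charts.ngrok.com
helm repo update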

Install the Operator itself with Helm:

helm install ngrok-operator ngrok/ngrok-operator \
  --namespace ngrok-operator \
  --create-namespace \
  --set bindings.enabled=true \
  --set credentials.apiKey=$NGROK_API_KEY \
  --set credentials.authtoken=$NGROK_AUTHTOKEN
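
Once the install finishes, you can verify that the three pods mentioned earlier are running before you move on (the exact pod names will include generated suffixes):

kubectl get pods -n ngrok-operator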

If you're already using the Operator, update your Helm installation to support bindings:

helm upgrade ngrok-operator -n ngrok-operator ngrok/ngrok-operator --reuse-values \
  --set bindings.enabled=true

You can also get more granular with exactly which Kubernetes-bound endpoints you allow into your cluster with the endpointSelectors value, which accepts a CEL expression using any of our endpoint variables. For example, the following setting allows only endpoints created at a URL like http://foo.staging—just make sure that you've created a staging namespace in your cluster to match.

helm upgrade ngrok-operator -n ngrok-operator ngrok/ngrok-operator --reuse-values \
  --set "bindings.endpointSelectors={\"endpoint.host.endsWith('.staging')\"}"

At this point, you can give developers their permission slip to create Kubernetes-bound endpoints!

App/API developers: Project local services to Kubernetes with abandon

You're developing a service locally and want to try it on a staging cluster not 15 minutes from now, not after the slow Kubernetes dev loop, but now-now.

Assuming your service is listening on port 12345, you can create a Kubernetes-bound endpoint with the URL of your choosing, or based on the endpoint selectors your friends over at infra have (with kindness) restricted you to.

ngrok http 12345 --url http://<YOUR_SERVICE>.staging --binding kubernetes
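
Because the Operator turns each Kubernetes-bound endpoint into a Service inside the cluster (more on that below), you can confirm the projection landed before testing it, assuming the staging namespace from earlier:

kubectl get services -n staging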

If you're curious to see how it works, you can test out the binding straightaway by running a temporary curl image:

kubectl run -i --tty --rm debug --restart=Never --image=appropriate/curl -- /bin/sh

Then send a request to your Kubernetes-bound endpoint:

curl http://<YOUR_SERVICE>.staging

You'll see a response inside the pod, and the request will show up in your ngrok agent!

If other K8s services on the remote cluster are already set up to "talk" to a service at http://foo.staging, you should start seeing those requests flow to your locally hosted service, too. More likely than not, though, you'll need to reconfigure another pod or two to point traffic at your Kubernetes-bound endpoint.

Here's a setup we've already tested internally. Let's say your cluster already runs a service bar with a FOO_URL environment variable that specifies where it should send its requests. You can:

  1. Start a Kubernetes-bound endpoint with ngrok http 12345 --url http://<YOUR_SERVICE>.staging --binding kubernetes.
  2. Edit the ConfigMap for bar to change that environment variable to http://<YOUR_SERVICE>.staging (see the sketch after this list).
  3. Restart the bar pods with kubectl rollout restart deployment bar.
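
Here's what steps 2 and 3 might look like in practice. This is only a sketch with hypothetical names, assuming bar reads FOO_URL from a ConfigMap called bar-config:

kubectl patch configmap bar-config --type merge \
  -p '{"data":{"FOO_URL":"http://<YOUR_SERVICE>.staging"}}'
kubectl rollout restart deployment bar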

As bar sends requests to FOO_URL, you'll see them show up in your agent CLI. When you're done developing locally, you can change FOO_URL back to its default, perform one more rollout, and close down the ngrok agent on your local machine to bring the staging cluster back to its original state.

We imagine a future where the ergonomics around projecting into a microservices environment are even better, like a true "intercept" akin to Telepresence, but we think this is a very powerful alternative already.

How does this 'magical' projection work?

In ngrok, bindings specify where an endpoint is accessible from. A Kubernetes binding means it's only accessible inside a cluster where you've already installed the ngrok Kubernetes Operator with your account's credentials. That also means it's completely inaccessible from the public internet, letting you do this projection without port forwarding or otherwise exposing either your local system or remote cluster to the public internet.

When the Operator detects a new Kubernetes-bound endpoint in your account, it creates a BoundEndpoint custom resource, which in turn becomes a Service resource. That Service can then reach, and be reached by, any other Service already running in your cluster. That allows for a lot more than the projection of local services. With bindings, you can:

  • Project an existing Kubernetes Service into a second, remote Kubernetes cluster, enabling cluster-to-cluster communication.
  • Use our Traffic Policy engine in a "Service to Service" network to manipulate east-west traffic with rate limiting, header manipulation, and more, like a service mesh.
  • Sync clusters between multiple environments, like on-premises and cloud clusters.
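
If you want to peek behind the curtain, you can inspect both halves of that chain after creating a binding. A sketch, assuming the CRD's plural name is boundendpoints and that your endpoint landed in the staging namespace:

kubectl get boundendpoints -A
kubectl describe service <YOUR_SERVICE> -n staging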

I'm looking forward to experimenting with and writing all about how you can implement those more advanced, 301/401-level ideas, but for now, just remember that bindings let you define a consistent and secure way to project services into or between Kubernetes clusters that you control.

As a DevOps/infra/platform engineer, you reduce your reliance on VPNs and no longer demand that developers run fully-fledged Kubernetes workloads on their laptops. And as an app/API developer, you can instantly test your next service against a production-like environment without rebuilds, re-deploys, or waiting around for a CI pipeline to finally wrap up.

Just like the rest of our Kubernetes networking support, bindings help you work on Kubernetes faster while also collapsing a ton of networking complexity into just the Operator or CLI.

What will you project next?

Whether you’re a platform engineer tired of wrangling dev clusters or a developer tired of waiting for CI to catch up, Kubernetes bindings give you a fast, flexible path to building inside your real environment—without deploying to it.

Ready to get started?

Your CI pipeline can wait. Your dev loop doesn’t have to—and your infra team won’t have to carry the weight alone.

Joel Hans
Joel Hans is a Senior Developer Educator. Away from blog posts and demo apps, you might find him mountain biking, writing fiction, or digging holes in his yard.