Aug 29, 2024
ngrok is happy to announce that we’ve added support for Kubernetes Load Balancer services to our Kubernetes Operator to streamline connectivity to your applications. This addition unifies the process of getting TCP and TLS connectivity to services running in your Kubernetes clusters.
Prior to this feature, you could connect to services in Kubernetes using ngrok’s TCPEdge and TLSEdge custom resources (CRs). However, these custom resources were a bit cumbersome to consume, prone to misconfiguration, and tedious to template out. Support for Kubernetes Load Balancer services provides additional benefits over using ngrok’s custom resources, as discussed below.
This post uses an example telnet server — ngrok-ascii — built from the Docker image ngroksamples/ngrok-ascii to demonstrate how ngrok supports TCP and TLS connections. This server runs on port 9090 and returns a sequence of bytes, some of which are ASCII control characters and ANSI terminal escape codes, that spell ngrok whenever a client connects. Like so:
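If you want to try the server before deploying it, you can run the same image locally — a sketch, assuming Docker is installed and reusing the image and `serve` argument from the Deployment manifest below:

```shell
# Start the demo server locally on port 9090 (detached).
docker run --rm -d -p 9090:9090 --name ngrok-ascii \
  jonstacks/ngrok-ascii:latest serve 9090

# Connect; the server replies with the ASCII/ANSI sequence described above.
telnet localhost 9090

# Clean up when done.
docker stop ngrok-ascii
```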
To run the server in Kubernetes, you can apply a manifest like the one below that defines Service and Deployment resources.
```yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ngrok-ascii
  namespace: default
  labels:
    app.kubernetes.io/name: ngrok-ascii
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ngrok-ascii
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ngrok-ascii
    spec:
      containers:
        - name: ngrok-ascii
          image: jonstacks/ngrok-ascii:latest
          imagePullPolicy: Always
          args: ["serve", "9090"]
          env:
            # Filter out health checks that come from the kubelet
            - name: FILTER_LOGS_ON_HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          ports:
            - containerPort: 9090
              name: telnet
          livenessProbe:
            tcpSocket:
              port: 9090
            initialDelaySeconds: 2 # Default 0
            periodSeconds: 60 # Default 10
            timeoutSeconds: 2 # Default 1
            successThreshold: 1 # Default 1
            failureThreshold: 5 # Default 3
          resources:
            limits:
              cpu: 100m
              memory: 64Mi
            requests:
              cpu: 100m
              memory: 64Mi
---
kind: Service
apiVersion: v1
metadata:
  name: ngrok-ascii
  namespace: default
  labels:
    app.kubernetes.io/name: ngrok-ascii
spec:
  selector:
    app.kubernetes.io/name: ngrok-ascii
  ports:
    - name: telnet
      port: 23
      targetPort: telnet
```

The server is now running in the cluster and other applications can access it via the ngrok-ascii service, but how do you expose it outside the cluster and to the internet?
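Applying and sanity-checking the manifest might look like this — the filename `ngrok-ascii.yaml` is arbitrary, and the busybox pod is just one way to check in-cluster connectivity:

```shell
# Apply the Deployment and Service from the manifest above.
kubectl apply -f ngrok-ascii.yaml

# Wait for the pod to become ready.
kubectl rollout status deployment/ngrok-ascii

# Optionally verify in-cluster connectivity with a throwaway busybox pod,
# which ships a small telnet applet.
kubectl run -it --rm telnet-test --image=busybox --restart=Never -- \
  telnet ngrok-ascii.default.svc.cluster.local 23
```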
You can add ingress to your Kubernetes services using the ingress controller feature of ngrok’s Kubernetes Operator. Previously, you would have to create the following two resources to connect the ngrok-ascii service to the internet with ngrok:
```yaml
kind: TCPEdge
apiVersion: ingress.k8s.ngrok.com/v1alpha1
metadata:
  name: ngrok-ascii-edge
  namespace: default
spec:
  ipRestriction:
    policies:
      - ipp_2KZtV8hrTPdf0Q0lS4KCDGosGXl
  backend:
    labels:
      k8s.ngrok.com/namespace: default
      k8s.ngrok.com/service: ngrok-ascii
      k8s.ngrok.com/port: "23"
---
kind: Tunnel
apiVersion: ingress.k8s.ngrok.com/v1alpha1
metadata:
  name: ngrok-ascii-tunnel
  namespace: default
spec:
  forwardsTo: ngrok-ascii.default.svc.cluster.local:23
  labels:
    k8s.ngrok.com/namespace: default
    k8s.ngrok.com/service: ngrok-ascii
    k8s.ngrok.com/port: "23"
```

The first thing to note is that the Tunnel resource’s labels must match the TCPEdge resource’s backend labels so the TCPEdge can select the matching tunnel. Second, `forwardsTo` must point at the ngrok-ascii service’s in-cluster address and port, or traffic won’t reach your application.
In addition to creating a core Kubernetes Service, you need to create two comparatively complex custom resource objects, with many opportunities for misconfiguration. The following section discusses how the new LoadBalancer service simplifies this process.
ngrok now offers a Kubernetes-native method for getting L4 traffic into your cluster with our new LoadBalancer class. Load Balancer services are implementation-specific: they provision an external load balancer (AWS NLB, Google Cloud Load Balancer, etc.) and communicate the external IP or hostname back to the service. To use ngrok’s Kubernetes Load Balancer controller, create a Service resource with `type: LoadBalancer` and `loadBalancerClass: ngrok` using a manifest like the one below.
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: ngrok-ascii
  namespace: default
  labels:
    app.kubernetes.io/name: ngrok-ascii
spec:
  allocateLoadBalancerNodePorts: false
  loadBalancerClass: ngrok
  ports:
    - name: telnet
      port: 23
      protocol: TCP
      targetPort: telnet
  selector:
    app.kubernetes.io/name: ngrok-ascii
  type: LoadBalancer
```

To switch from using ngrok’s TCPEdge and TLSEdge custom resources to using the new load balancer service, change the service’s manifest as follows:
- Add the `type: LoadBalancer` field to the Service definition to designate it as a Load Balancer service.
- Add `loadBalancerClass: ngrok` to the Service definition. The ngrok service controller watches for services with `loadBalancerClass: ngrok`, automatically creates the necessary TCPEdge/TLSEdge, Domain, and Tunnel resources for you, and manages their lifecycle.
- Add `allocateLoadBalancerNodePorts: false` to the Service definition. As discussed in detail in a later section, the ngrok LoadBalancer class doesn’t require you to allocate node ports.

Now, running `kubectl get services -o yaml ngrok-ascii` displays something like the following in the status field:
```yaml
status:
  loadBalancer:
    ingress:
      - hostname: 5.tcp.ngrok.io
        ports:
          - port: 24114
            protocol: TCP
```

And that's it!
You can now access the ngrok-ascii service at 5.tcp.ngrok.io:24114!
In this example, you would run `telnet 5.tcp.ngrok.io 24114`:
And just like that, the TCP service is available on the internet.
If you don’t want to use a random port for your services and would like something easier to remember, you can simply specify the domain with an annotation such as k8s.ngrok.com/domain: ascii.ngrok.io.
ngrok will provision a valid certificate for the service and encrypt traffic between the client and ngrok while serving the application on port 443.
To use TLS, modify the ngrok-ascii service as indicated below:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ngrok-ascii
  namespace: default
  labels:
    app.kubernetes.io/name: ngrok-ascii
  annotations:
    k8s.ngrok.com/domain: ascii.ngrok.io # <--- Use a TLS Edge
spec:
  allocateLoadBalancerNodePorts: false
  loadBalancerClass: ngrok
  ports:
    - name: telnet
      port: 23
      protocol: TCP
      targetPort: telnet
  selector:
    app.kubernetes.io/name: ngrok-ascii
  type: LoadBalancer
```

This service is now accessible by running the following command: `openssl s_client -connect ascii.ngrok.io:443`
Let's say you have external-dns running in your cluster, configured to manage DNS for mydomain.com.
Prior to this release of ngrok’s Kubernetes Operator, you could achieve connectivity to your TCP and TLS services running in Kubernetes, but external-dns didn’t know how to communicate with the resulting custom resource objects.
Providing this connectivity through the standard Kubernetes LoadBalancer Service resource allows integration with external-dns.
You can quickly provide access to the myapp service at myapp.mydomain.com by adding the following annotations to the Service definition for myapp:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
  labels:
    app.kubernetes.io/name: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.mydomain.com
    k8s.ngrok.com/domain: myapp.mydomain.com
spec: …
```

Now, if you check the status of the myapp service, you'll see the following in the status field:
```yaml
status:
  loadBalancer:
    ingress:
      - hostname: 2r93fef65h7ku1vtu.4raw8yu7nq6zsudp4.ngrok-cname.com
        ports:
          - port: 443
            protocol: TCP
```

Within a few minutes, external-dns will create a CNAME record for myapp.mydomain.com pointing to 2r93fef65h7ku1vtu.4raw8yu7nq6zsudp4.ngrok-cname.com, and you can access your myapp service at myapp.mydomain.com on port 443.
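Once external-dns has done its work, you can verify the record from any machine — a sketch; the CNAME target will differ for your tunnel:

```shell
# Confirm the CNAME created by external-dns.
dig +short CNAME myapp.mydomain.com

# Then connect over TLS, as in the earlier openssl example.
openssl s_client -connect myapp.mydomain.com:443
```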
Usually, when creating a LoadBalancer service, Kubernetes allocates a port on each node that forwards traffic to healthy endpoints via kube-proxy.
This is because the provisioned load balancer sits outside the cluster and forwards traffic to the node port.
If pod IPs are routable from outside the cluster, you can set allocateLoadBalancerNodePorts to false.
ngrok works differently: the operator creates an outbound tunnel and receives traffic back over that same connection. The provisioned load balancer that forwards traffic into your cluster therefore works the same in any environment, cloud or on-prem. And since the forwarding happens from inside the cluster, you can apply even more restrictive firewall and security group rules to your Kubernetes nodes: you get connectivity to your applications while allowing only outbound traffic. Thus, there is no need to allocate node ports.
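To make that outbound-only posture concrete, here is a minimal NetworkPolicy sketch (not from the ngrok docs) that denies all other ingress to the demo pods while still letting the operator forward tunnel traffic to them. The operator namespace name `ngrok-operator` is an assumption; adjust the selector to match your install.

```yaml
# Sketch: allow ingress to the ngrok-ascii pods only from the (assumed)
# ngrok-operator namespace, which terminates the tunnel in-cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ngrok-ascii-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ngrok-ascii
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ngrok-operator  # assumed namespace
      ports:
        - protocol: TCP
          port: 9090  # the container port, not the Service port
```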
In addition to ngrok’s Kubernetes Operator providing ingress-as-a-service, the release of support for the LoadBalancer service means your services work the same locally as they do in production across cloud providers and on-prem clusters.
To learn more about ngrok’s Kubernetes Operator, check out these other posts on our blog.
Sign up, try ngrok for free today, and chat with us in our Community Repo if you have questions or feedback.