In this guide, you’ll launch a new cluster with Azure Kubernetes Service (AKS) and a demo app. You’ll then add the ngrok Kubernetes Operator to route public traffic directly to your demo app through an encrypted, feature-rich tunnel for a complete proof of concept.
By the end, you’ll have learned enough to deploy your next production-ready Kubernetes app with AKS, with the ngrok Kubernetes Operator giving you access to additional features, like observability and resiliency, with no extra configuration complexity.
Here is what you’ll be building with:
- The ngrok Kubernetes Operator: ngrok’s official controller for adding secure public ingress and middleware execution to your Kubernetes apps with ngrok’s cloud service. With ngrok, you can manage and secure traffic to your apps at every stage of the development lifecycle while also benefiting from simpler configuration, built-in security, and edge acceleration.
- Azure Kubernetes Service (AKS): A managed Kubernetes environment from Microsoft. AKS simplifies the deployment, health monitoring, and maintenance of cloud native applications, whether you deploy them in Azure, in on-premises data centers, or at the edge. With 40 regions, you should be able to deploy a cluster close to your customers.
What you’ll need
- An Azure account with permissions to create new Kubernetes clusters.
- An ngrok account.
- kubectl and Helm 3.0.0+ installed on your local workstation.
- The ngrok Kubernetes Operator installed on your cluster.
- A reserved domain, which you can get in the ngrok dashboard or with the ngrok API.
  - You can choose an ngrok subdomain or bring your own custom branded domain, like https://api.example.com.
  - We’ll refer to this domain as <NGROK_DOMAIN>.
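If you haven’t installed the ngrok Kubernetes Operator yet, the Helm install typically looks like the sketch below. The chart name, namespace, and credential value paths are assumptions based on ngrok’s current chart; check the Operator installation docs for the authoritative values.

```shell
# Add ngrok's Helm repository and install the Operator.
# NGROK_API_KEY and NGROK_AUTHTOKEN come from your ngrok dashboard.
helm repo add ngrok https://charts.ngrok.com
helm repo update

helm install ngrok-operator ngrok/ngrok-operator \
  --namespace ngrok-operator \
  --create-namespace \
  --set credentials.apiKey=$NGROK_API_KEY \
  --set credentials.authtoken=$NGROK_AUTHTOKEN
```

Once the Operator pods are running, it watches for Ingress resources with the ngrok ingress class, which you’ll create later in this guide.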
Create your cluster in AKS
Start by creating a new managed Kubernetes cluster in AKS. If you already have one, you can skip to Add ngrok’s Kubernetes ingress to your demo app.
- Go to the Kubernetes services section in your Azure console and click Create→Create a Kubernetes cluster.
- Configure your new cluster with the wizard. The default options are generally safe bets, but there are a few you might want to look at depending on your requirements and budget:
- Cluster preset configuration: You can choose a production-ready deployment, a dev/test deployment, and others.
- Region: The data center where AKS will deploy your cluster—pick a region geographically close to your primary customers and/or your organization.
- AKS pricing tier: The Free tier works well for clusters with fewer than 10 nodes, and you can always upgrade to a paid tier after deployment.
- Click Review + create and wait for Azure to validate your configuration. If you see a Validation failed warning, check the errors; they’re likely related to quota limits. When validation passes, click Create. Grab a cup of coffee; deployment takes a while.
- When AKS completes the deployment, click Go to deployment, then Connect, which will show you options for connecting to your new cluster with kubectl. Follow the instructions to use the Cloud shell or Azure CLI, then double-check that AKS has successfully deployed your cluster’s underlying services:
kubectl get deployments --all-namespaces=true
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
calico-system calico-kube-controllers 1/1 1 1 5m
calico-system calico-typha 1/1 1 1 5m
kube-system ama-metrics 1/1 1 1 5m
kube-system ama-metrics-ksm 1/1 1 1 5m
kube-system coredns 2/2 2 2 5m
kube-system coredns-autoscaler 1/1 1 1 5m
kube-system konnectivity-agent 2/2 2 2 5m
kube-system metrics-server 2/2 2 2 5m
tigera-operator tigera-operator 1/1 1 1 5m
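If you’re using the Azure CLI locally rather than the Cloud shell, the connection step generally looks like this; the resource group and cluster names below are placeholders for your own:

```shell
# Fetch kubeconfig credentials for the new cluster and merge them
# into your local kubeconfig so kubectl points at AKS.
az aks get-credentials --resource-group my-resource-group --name my-aks-cluster

# Confirm kubectl is talking to the right cluster.
kubectl config current-context
kubectl get nodes
```
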
Deploy a demo microservices app
To showcase how this integration works, you’ll deploy the AKS Store app, which uses a microservices architecture: a frontend UI connects to API-like services, which pass data to a RabbitMQ backend. To showcase the features of AKS, you’ll deploy this demo app directly in the Azure Portal.
If you prefer the CLI, save the YAML below to a .yaml file on your local workstation and deploy with kubectl apply -f ....
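For example, assuming you saved the manifest as aks-store.yaml (the filename is arbitrary):

```shell
# Apply the demo app manifest and watch the pods come up.
kubectl apply -f aks-store.yaml
kubectl get pods --watch
```
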
- Click Create→Apply a YAML.
- Copy and paste the YAML below into the editor.
apiVersion: apps/v1
kind: Deployment
metadata:
name: rabbitmq
spec:
replicas: 1
selector:
matchLabels:
app: rabbitmq
template:
metadata:
labels:
app: rabbitmq
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: rabbitmq
image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine
ports:
- containerPort: 5672
name: rabbitmq-amqp
- containerPort: 15672
name: rabbitmq-http
env:
- name: RABBITMQ_DEFAULT_USER
value: "username"
- name: RABBITMQ_DEFAULT_PASS
value: "password"
resources:
requests:
cpu: 10m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
volumeMounts:
- name: rabbitmq-enabled-plugins
mountPath: /etc/rabbitmq/enabled_plugins
subPath: enabled_plugins
volumes:
- name: rabbitmq-enabled-plugins
configMap:
name: rabbitmq-enabled-plugins
items:
- key: rabbitmq_enabled_plugins
path: enabled_plugins
---
apiVersion: v1
data:
rabbitmq_enabled_plugins: |
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0].
kind: ConfigMap
metadata:
name: rabbitmq-enabled-plugins
---
apiVersion: v1
kind: Service
metadata:
name: rabbitmq
spec:
selector:
app: rabbitmq
ports:
- name: rabbitmq-amqp
port: 5672
targetPort: 5672
- name: rabbitmq-http
port: 15672
targetPort: 15672
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: order-service
spec:
replicas: 1
selector:
matchLabels:
app: order-service
template:
metadata:
labels:
app: order-service
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: order-service
image: ghcr.io/azure-samples/aks-store-demo/order-service:latest
ports:
- containerPort: 3000
env:
- name: ORDER_QUEUE_HOSTNAME
value: "rabbitmq"
- name: ORDER_QUEUE_PORT
value: "5672"
- name: ORDER_QUEUE_USERNAME
value: "username"
- name: ORDER_QUEUE_PASSWORD
value: "password"
- name: ORDER_QUEUE_NAME
value: "orders"
- name: FASTIFY_ADDRESS
value: "0.0.0.0"
resources:
requests:
cpu: 1m
memory: 50Mi
limits:
cpu: 75m
memory: 128Mi
initContainers:
- name: wait-for-rabbitmq
image: busybox
command:
[
"sh",
"-c",
"until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;",
]
resources:
requests:
cpu: 1m
memory: 50Mi
limits:
cpu: 75m
memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
name: order-service
spec:
type: ClusterIP
ports:
- name: http
port: 3000
targetPort: 3000
selector:
app: order-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: product-service
spec:
replicas: 1
selector:
matchLabels:
app: product-service
template:
metadata:
labels:
app: product-service
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: product-service
image: ghcr.io/azure-samples/aks-store-demo/product-service:latest
ports:
- containerPort: 3002
resources:
requests:
cpu: 1m
memory: 1Mi
limits:
cpu: 1m
memory: 7Mi
---
apiVersion: v1
kind: Service
metadata:
name: product-service
spec:
type: ClusterIP
ports:
- name: http
port: 3002
targetPort: 3002
selector:
app: product-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: store-front
spec:
replicas: 1
selector:
matchLabels:
app: store-front
template:
metadata:
labels:
app: store-front
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: store-front
image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
ports:
- containerPort: 8080
name: store-front
env:
- name: VUE_APP_ORDER_SERVICE_URL
value: "http://order-service:3000/"
- name: VUE_APP_PRODUCT_SERVICE_URL
value: "http://product-service:3002/"
resources:
requests:
cpu: 1m
memory: 200Mi
limits:
cpu: 1000m
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: store-front
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: store-front
type: LoadBalancer
- Click Add to deploy the demo app. To double-check that the services deployed successfully, click Workloads in the Azure Portal and look for store-front, rabbitmq, product-service, and order-service in the default namespace. If you prefer the CLI, you can run kubectl get pods for the same information.
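If you’d rather wait for readiness programmatically than eyeball pod status, kubectl wait can block until each deployment reports its replicas available (the deployment names match the manifest above):

```shell
# Block until each demo app deployment reports all replicas available,
# failing if any takes longer than two minutes.
kubectl wait --for=condition=Available deployment/rabbitmq --timeout=120s
kubectl wait --for=condition=Available deployment/order-service --timeout=120s
kubectl wait --for=condition=Available deployment/product-service --timeout=120s
kubectl wait --for=condition=Available deployment/store-front --timeout=120s
```
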
Add ngrok’s Kubernetes ingress to your demo app
Next, you’ll configure and deploy the ngrok Kubernetes Operator to expose your demo app to the public internet through the ngrok cloud service.
- In the Azure Portal, click Create→Apply a YAML.
- Copy and paste the YAML below into the editor. This manifest defines how the ngrok Kubernetes Operator should route traffic arriving on your reserved domain to the store-front service on port 80, which you deployed in the previous step.
Make sure you replace the NGROK_DOMAIN placeholder in the host field of the YAML below with the domain you reserved earlier.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: store-ingress
spec:
  ingressClassName: ngrok
  rules:
    - host: <NGROK_DOMAIN>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: store-front
                port:
                  number: 80
- Click Add to deploy the ingress configuration.
You can check on the status of the ingress deployment in the Azure Portal at Services and ingresses→Ingresses. You should see the store-ingress name and your ngrok subdomain. If you need to edit your ingress configuration in the future, click the ingress item and then the YAML tab.
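From the CLI, you can confirm the Ingress was accepted and inspect the events the ngrok Operator emits while reconciling it (assuming the default namespace):

```shell
# Show the ingress and its assigned host.
kubectl get ingress store-ingress

# Inspect events and backend wiring for troubleshooting.
kubectl describe ingress store-ingress
```
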
- Navigate to your reserved domain, https://<NGROK_DOMAIN>, in your browser to see the demo app in action. Behind the scenes, ngrok’s cloud service routed your request into the ngrok Kubernetes Operator, which then passed it to the store-front service.
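You can also verify the public endpoint from the CLI; replace <NGROK_DOMAIN> with your reserved domain:

```shell
# Request the store's front page through ngrok's edge
# and print only the HTTP status line.
curl -sI https://<NGROK_DOMAIN> | head -n 1
```

A 200 OK status line here means ngrok’s cloud service is reaching the store-front service inside your cluster.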
Add OAuth authentication to your demo app
Now that your demo app is publicly accessible through the ngrok cloud service, you can quickly layer on additional capabilities, like authentication, without configuring and deploying complex infrastructure. Let’s see how that works for restricting access to individual Google accounts or any Google account under a specific domain name.
With our Traffic Policy system and the oauth
action, ngrok manages OAuth protection
entirely at the ngrok cloud service, which means you don’t need to add any
additional services to your cluster, or alter routes, to ensure ngrok’s edge
authenticates and authorizes all requests before allowing ingress and access to
your endpoint.
To enable the oauth action, you’ll create a new NgrokTrafficPolicy custom
resource and apply it to your entire Ingress with an annotation. You can also
apply the policy to just a specific backend or as the default backend for an
Ingress—see our doc on using the Operator with
Ingresses.
- Edit your existing ingress YAML with the following. Note the new annotations field and the NgrokTrafficPolicy CR.
...
---
# Configuration for ngrok's Kubernetes Operator
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: store-ingress
  namespace: default
  annotations:
    k8s.ngrok.com/traffic-policy: oauth
spec:
  ingressClassName: ngrok
  rules:
    - host: <NGROK_DOMAIN>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: store-front
                port:
                  number: 80
---
# Traffic Policy configuration for OAuth
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: NgrokTrafficPolicy
metadata:
name: oauth
namespace: default
spec:
policy:
on_http_request:
- type: oauth
config:
provider: google
- When you open your demo app again, you’ll be asked to log in via Google. That’s a start, but what if you want to authenticate only yourself or your colleagues?
- You can use expressions and CEL interpolation to reject OAuth logins from email addresses that don’t end in @example.com. Update the NgrokTrafficPolicy portion of your manifest after changing example.com to your domain.
# Traffic Policy configuration for OAuth
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: NgrokTrafficPolicy
metadata:
name: oauth
namespace: default
spec:
policy:
on_http_request:
- type: oauth
config:
provider: google
- expressions:
- "!actions.ngrok.oauth.identity.email.endsWith('@example.com')"
actions:
- type: custom-response
config:
body: Hey, no auth for you ${actions.ngrok.oauth.identity.name}!
status_code: 400
- Check out your deployed app once again. If you log in with an email that doesn’t match your domain, ngrok rejects your request.
What’s next?
You’ve now used the open source ngrok Kubernetes Operator to add public ingress to a demo app on a cluster managed in AKS without having to worry about complex Kubernetes networking configurations. Because ngrok abstracts ingress and middleware execution to its cloud service, you can follow a similar process to route public traffic to your next production-ready app.
For next steps, explore our Kubernetes docs for more details on
how the Operator works, different ways you can integrate ngrok with an existing
production cluster, or use more advanced features like
bindings or endpoint
pooling.