Distribute Traffic Between Canary Deployments
Canary deployments are a trusted method for rolling out new features or major versions of your services. By controlling who accesses the new version, either through the random distribution of traffic or a specific header attached to requests from certain customers, you can see exactly if and where something breaks before releasing it to all your users.
With ngrok, you can handle canary deployments by:
- Choosing between random distribution across all your customers and applying a specific header to some customers' requests to opt them in to a new version, i.e. a feature flag.
- Applying different policies to each version depending on how they behave.
- Testing a migration from one environment to another, like on-premises VMs to a cloud-based Kubernetes cluster.
1. Create endpoints for your services
Start internal Agent Endpoints for each of your services on the systems where they run, replacing $PORT with the port each service listens on.
You can also use one of our SDKs or the Kubernetes Operator.
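As a sketch, the agent commands might look like the following. The internal URLs https://service.internal and https://service-canary.internal are example names used throughout this guide; substitute your own ports and internal domains.

```bash
# Internal Agent Endpoint for the stable version of your service
ngrok http $PORT --url https://service.internal

# Internal Agent Endpoint for the canary version (run wherever the canary is deployed)
ngrok http $PORT --url https://service-canary.internal
```

Internal endpoints are only reachable from other ngrok endpoints in your account, which is what lets the Cloud Endpoint below route traffic to them.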
2. Reserve a domain
Navigate to the Domains section of the ngrok dashboard and click New + to reserve a free static domain like https://your-service.ngrok.app, or a custom domain you already own.
We'll refer to this domain as $NGROK_DOMAIN from here on out.
3. Create a Cloud Endpoint
Navigate to the Endpoints section of the ngrok dashboard, then click New + and Cloud Endpoint.
In the URL field, enter the domain you just reserved to finish creating your Cloud Endpoint.
4. Add canary routing with Traffic Policy
While still viewing your new Cloud Endpoint in the dashboard, copy and paste one of the two policies below into the editor, depending on whether you want to distribute traffic randomly or by a header like x-canary.
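Minimal sketches of the two variants are below, assuming the example internal endpoints https://service.internal and https://service-canary.internal from step 1. The exact CEL expression for reading a request header may differ; consult the Traffic Policy reference for the canonical syntax.

```yaml
# Variant 1: random distribution
on_http_request:
  # Route roughly 20% of requests to the canary
  - expressions:
      - rand.double() <= 0.2
    actions:
      - type: forward-internal
        config:
          url: https://service-canary.internal
  # Everything else falls through to the stable version
  - actions:
      - type: forward-internal
        config:
          url: https://service.internal
```

```yaml
# Variant 2: header-based opt-in (feature flag)
on_http_request:
  # Route requests carrying "x-canary: true" to the canary
  - expressions:
      - "'true' in req.headers['x-canary']"
    actions:
      - type: forward-internal
        config:
          url: https://service-canary.internal
  # All other requests go to the stable version
  - actions:
      - type: forward-internal
        config:
          url: https://service.internal
```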
What's happening here?
The random-distribution policy first generates a random double between 0 and 1. If that double is less than or equal to 0.2, it routes the request to your canary deployment, effectively sending 20% of incoming traffic to the canary. The policy then routes the remaining 80% of traffic to your stable version.
5. Try out your endpoints
Visit the domain you reserved, either in the browser or in the terminal using a tool like curl.
If you chose random distribution, you should see each service responding at roughly the percentage you defined.
If you chose to distribute by header, add x-canary: true to your requests to verify ngrok is routing traffic to your canary.
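For example, using curl with the $NGROK_DOMAIN placeholder from step 2:

```bash
# Random distribution: repeat this and watch which version responds
curl https://$NGROK_DOMAIN

# Header-based distribution: opt this request in to the canary
curl -H "x-canary: true" https://$NGROK_DOMAIN
```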
6. Tweak the distribution to your canary
If you opted for random distribution, increase the volume of traffic forwarded to your canary at regular intervals as you gain confidence in its stability. For example, to split traffic evenly between your two versions:
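A sketch of an even split, which simply raises the threshold in the random-distribution policy from 0.2 to 0.5:

```yaml
on_http_request:
  # Route roughly 50% of requests to the canary
  - expressions:
      - rand.double() <= 0.5
    actions:
      - type: forward-internal
        config:
          url: https://service-canary.internal
  # The other half goes to the stable version
  - actions:
      - type: forward-internal
        config:
          url: https://service.internal
```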
7. Cut over to the new version and clean up
When you promote your canary deployment to stable, you can edit your policy to route traffic to the new version and close down the ngrok agent forwarding to https://service.internal.
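A minimal sketch of the cut-over policy, still assuming the example internal URL from earlier steps:

```yaml
on_http_request:
  # All traffic now goes to the promoted (formerly canary) version
  - actions:
      - type: forward-internal
        config:
          url: https://service-canary.internal
```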
If you want to maintain https://service.internal as the canonical name for the stable production deployment, you can:
- Start another agent forwarding to your new version at this canonical internal URL.
- Edit your policy to route to it.
- Shut down the agent forwarding to https://service-canary.internal.
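The first of those steps might look like the following, again assuming the example port and internal URL from step 1:

```bash
# Forward the promoted version at the canonical internal URL
ngrok http $PORT --url https://service.internal
```

Once this agent is up, edit the policy's forward-internal action back to https://service.internal and stop the old canary agent.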
What's next?
- View your Traffic Inspector to see how your canary behaves with production traffic, which may signal that you should pause or roll back the deployment entirely.
- Learn how to implement blue-green deployments if you prefer faster rollouts at the expense of maintaining two production environments.
- Explore other common gateway setups, like multiplexing to many services or shipping a custom "maintenance mode" during an outage or planned downtime.
- Start automating your deployment strategies with the ngrok API or our Kubernetes Operator.