Deploying ngrok in Production
Shipping software is hard. Doing it consistently and predictably is even harder.
In contrast, when we write our first lines of code, life is easy. We have few constraints, the architecture is imagined but doesn’t yet exist, and all those little details have yet to bite us.
As you move from development to production, your mindset shifts from the simple goal of “make it work” to the larger goal of “make it manageable.” Basic connectivity isn’t enough: we need to know who started the tunnel, what it’s connected to, when it goes online or offline, and what traffic is traversing it. We also need to know that when we restart a tunnel, the configuration will be consistent and exactly what we set. With those needs in mind, let’s consider the steps to move ngrok from a testing tool on one development machine to a useful and powerful part of your production infrastructure.
We think of this in three parts:
- inside your organization or from your local service to the ngrok cloud
- outside your organization or from the external clients to the ngrok cloud
- and the overall control plane which controls and monitors everything
ngrok inside your organization
The scariest thing for most IT/Security teams is unknown software running on the network, accidentally or maliciously providing a backdoor into internal systems. That said, if we assume most employees aren’t malicious and are simply trying to do their jobs, shadow IT more likely points to a gap in tools or capabilities that your coworkers are filling themselves.
This is how most teams start using ngrok. While IT/Security could blindly block ngrok from your network, your teams will just find another, less secure option and work harder to hide it from you next time. Instead, we recommend adding observability and management to your ngrok Agents.
First, we recommend IT/Security block all *.ngrok.io traffic and use our Custom Ingress to create their own named ngrok entry points such as ingress.example.com. This eliminates personal ngrok accounts from your network while allowing centralized policy management across all connections.
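As a sketch, an agent configuration pointed at a dedicated ingress domain might look like the following. The domain ingress.example.com and the token placeholder are assumptions, and `server_addr` is the version 2 agent config option; newer agent versions may name it differently.

```yaml
# ~/.config/ngrok/ngrok.yml -- illustrative sketch, not a drop-in config
version: "2"
authtoken: <token-issued-from-your-central-account>
# Route agent traffic through your own ingress point instead of *.ngrok.io
server_addr: ingress.example.com:443
tunnels:
  internal-api:
    proto: http
    addr: 8080
```

Because the token and ingress address come from the centrally managed account, any agent started with a personal account simply fails to connect.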
Once you have centralized policies, you’ll realize every project has a slightly different set of requirements. While you could defer this configuration to the local agents, using our ngrok Terraform Provider to plug directly into your CI/CD systems is a better, more scalable, and auditable approach. Now your tunnels are just another part of your configuration.
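For example, a pipeline might declare its tunnel prerequisites alongside the rest of its infrastructure. This is a hedged sketch: the resource names and attributes below are illustrative and should be checked against the current ngrok Terraform Provider documentation.

```hcl
terraform {
  required_providers {
    ngrok = {
      source = "ngrok/ngrok"
    }
  }
}

# Reserve a stable, predictable domain for this service's tunnel so every
# deploy comes back online at the same URL. The provider also exposes
# resources for credentials, certificates, and more.
resource "ngrok_reserved_domain" "api" {
  name = "api.example.com"
}
```

Because the configuration lives in version control, every change to a tunnel is reviewed, applied, and auditable like any other infrastructure change.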
Finally, the ngrok platform implements Event Subscriptions to integrate events into your SIEM for near-real-time monitoring across the entire platform. This gives your security operations team a simple way to know who is running which tunnels, what they’re connected to, and when policies change. You gain observability into events as they happen instead of discovering policy issues days or months later.
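As a rough sketch, an Event Subscription can be created through the ngrok API. The event source name and destination ID below are placeholders; consult the API reference for the exact event types your account exposes.

```shell
# Illustrative only: subscribe to tunnel lifecycle events and forward
# them to a previously created event destination (e.g. your SIEM's
# Kinesis or CloudWatch integration).
curl -X POST https://api.ngrok.com/event_subscriptions \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Ngrok-Version: 2" \
  -H "Content-Type: application/json" \
  -d '{
        "description": "tunnel lifecycle events to the SIEM",
        "sources": [{"type": "tunnel_session_start.v0"}],
        "destination_ids": ["<event-destination-id>"]
      }'
```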
ngrok outside your organization
Outside your organization, IT/Security has zero control over the individual clients but can set the terms of who connects and how.
First, we recommend IT/Security use their own domain - such as tunnel.company.com - to standardize URLs across external systems and apply their own TLS certificate to the traffic. This creates consistent, predictable naming and makes end-to-end encryption the default.
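With the domain reserved in your ngrok dashboard and the DNS CNAME in place, starting a tunnel on it is a one-liner. Here tunnel.company.com stands in for your own reserved domain:

```shell
# Serve local port 8080 on your own branded, TLS-terminated domain
ngrok http 8080 --domain tunnel.company.com
```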
Next, we can apply the bluntest - but sometimes most effective - tool: IP Restrictions. Configured well, IP Restrictions ensure only the expected IPs can access your tunnels. This isn’t as useful in dynamic environments or with a large user base, but it’s still available if and when you need it.
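As a sketch, recent v3 agents expose CIDR-based allow/deny flags; the example ranges are placeholders, and the exact flag names can be confirmed with `ngrok http --help`:

```shell
# Only accept traffic from the corporate VPN's egress range (example CIDR)
ngrok http 8080 --allow-cidr 203.0.113.0/24

# Or explicitly block a known-bad range
ngrok http 8080 --deny-cidr 198.51.100.0/24
```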
Finally, we can limit who can connect by adding OAuth 2.0, OpenID Connect (OIDC), or Mutual TLS at the edge. With OAuth and OIDC, you can use an identity provider such as Google or GitHub to limit access to only the people who should have it. Alternatively, with Mutual TLS, you build a trusted connection with a machine or person by requiring proof that they possess the expected certificate.
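For illustration, both checks can be enabled directly from the agent. The workspace domain and CA file below are placeholders, and the mutual TLS flag name should be verified against your agent version:

```shell
# Require a Google login from one workspace before traffic reaches you
ngrok http 8080 --oauth google --oauth-allow-domain example.com

# Or require clients to present a certificate signed by your own CA
ngrok http 8080 --mutual-tls-cas ca.crt
```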
ngrok control plane
As you secure and monitor both the internal and external ngrok aspects, the requirements you thought were complete and expansive will show gaps. You’ll find places where the real, production usage shows different patterns than you expected. You’ll find novel attack attempts that you didn’t foresee. To address those quickly and flexibly, we turn to the ngrok control plane and Dashboard.
Just as we can use IP Restrictions on ngrok tunnels, we can expand that out to any portion of ngrok - the Agent, tunnel access, the Dashboard, and even the API - to limit access to known good IP ranges. Once again, this gives you fine-grained control over the people and systems who can connect to various parts of ngrok.
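Sketching this against the ngrok API: you define a reusable IP policy, attach CIDR rules to it, then enforce it on a surface such as the Dashboard. The endpoint paths, field names, and example CIDR here are best-effort placeholders worth double-checking against the API reference:

```shell
# 1. Create a policy and note the returned policy ID
curl -X POST https://api.ngrok.com/ip_policies \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Ngrok-Version: 2" -H "Content-Type: application/json" \
  -d '{"description": "corp egress ranges"}'

# 2. Add an allow rule for a known-good range
curl -X POST https://api.ngrok.com/ip_policy_rules \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Ngrok-Version: 2" -H "Content-Type: application/json" \
  -d '{"ip_policy_id": "<policy-id>", "cidr": "203.0.113.0/24", "action": "allow"}'

# 3. Enforce the policy on the Dashboard
curl -X POST https://api.ngrok.com/ip_restrictions \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Ngrok-Version: 2" -H "Content-Type: application/json" \
  -d '{"type": "dashboard", "enforced": true, "ip_policy_ids": ["<policy-id>"]}'
```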
More importantly, the control plane itself is wholly programmable. While the Terraform provider is the most obvious aspect, we also have a command line interface (CLI), API, SDKs, and more on the drawing board. This allows you to test configurations quickly with the CLI, set them dynamically with the SDKs, lock them with the Terraform provider, and interrogate them via API. You can integrate ngrok into any or every process that you need.
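For instance, recent agents bundle an `ngrok api` subcommand for quick interrogation from a terminal. The resource names below are illustrative and vary by version, so check `ngrok api --help` for what your agent supports:

```shell
# List live tunnel sessions and reserved domains without leaving the shell
ngrok api tunnels list
ngrok api reserved-domains list
```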
ngrok Best Practices
Overall, our perspective is that we can’t know your exact requirements, desired security posture, or the details of your use cases. What we can do is create a set of building blocks or primitives so you can configure and build the security policies your team and organization requires. Further, by plugging into the larger ecosystem of IDPs, SIEMs, and open protocols, ngrok fits seamlessly into your existing infrastructure and processes.
From concept to connection, ngrok provides a secure, scalable tunnel to share your system with the world or just the single person you need.
Shipping software is hard; securing it shouldn’t be.
Join our Slack to learn more about ngrok and chat about what you're building.