
How I build secure (and VPN-less!) industrial IoT connectivity
As a customer engineer, I get to discuss and help improve networks in places I’d never otherwise go, like smart factories. The problem is, once these folks show me what their networks look like, all I see is a mess of complex firewall access, dated VPN troubleshooting, and exposed private networks.
After getting to help a few of these companies out, saving them countless headaches as they look to gain access to remote services, I thought I’d share the story of how they're now using ngrok.
Here’s the scenario: A network of smart factories is coming online, filled with factories that contain IoT-connected machines, telemetry sensors, and a real-time monitoring dashboard. The goal? Secure, controlled remote access to critical factory systems without exposing them to the public internet or relying on complex VPN setups.
Three key services need secure access in each factory:
- A telemetry API, which streams real-time machine data (always accessible).
- A sensor database for storing telemetry logs (always accessible).
- A factory web dashboard used by technicians for maintenance (only accessible on demand).
And the challenges are varied and complicated on their own:
- Factory networks block inbound connections (no public IPs).
- Technicians need temporary access to the dashboard without leaving it exposed.
- API and database must remain permanently accessible from the company’s cloud.
- Access to the dashboard must be authenticated via Azure AD.
- The solution must scale to support multiple factories without requiring complex VPNs or multiple agents per site.
ngrok is a universal gateway, which means it allows you to create secure ingress to any app, IoT device, or service without spending hours learning arcane networking technologies. When you deploy a single ngrok agent per factory, you can control access to all your systems without exposing any machines to the public internet. That's a big win for security and reliability.
Let's look at an architecture diagram:

Why only one ngrok agent per factory?
Traditionally, you might assume that every device inside the factory needs its own ngrok agent, but I've found this isn't necessary. A single ngrok agent is installed on a network-accessible machine inside the factory, and it:
- Acts as a central gateway (jumpbox) that can reach any machine on the local network, eliminating the need for multiple agents.
- Creates Internal Endpoints so that each API, database, and dashboard is securely exposed inside ngrok, never publicly visible.
- Uses Cloud Endpoints for controlled access, where external cloud apps can access only what they need, and the dashboard is only started when requested.
- Runs as a background service configured to automatically start on boot, restart after crashes, and log events.
- Dynamically manages tunnels, as the agent API can start and stop tunnels as needed.
This setup minimizes security risks, simplifies deployment, and ensures continuous uptime for mission-critical services.
Understanding cloud and internal endpoints
Before I show you how to build up this architecture for yourself, let me tell you about the components that make it possible.
An internal endpoint makes a service inside the factory network reachable within ngrok without being publicly exposed. Internal endpoints can:
- Only receive traffic from cloud endpoints or internal services that explicitly route traffic to them.
- Not be accessed directly from the internet.
- Be used for telemetry APIs, databases, and dashboards.
Here’s an example: The factory’s telemetry API runs on a local server (192.168.1.100:8080). Instead of exposing it publicly, you can create an internal endpoint:
version: 3
agent:
  authtoken: YOUR_NGROK_AUTHTOKEN
endpoints:
  - name: example
    url: https://api.internal
    upstream:
      url: http://192.168.1.100:8080
Now this API is only accessible inside ngrok’s private network.
A cloud endpoint is a permanent, externally accessible entry point into the factory network. Cloud endpoints:
- Are managed centrally via the ngrok API or dashboard.
- Are always on, not tied to the lifecycle of the agent.
- Do not forward traffic to the agent by default; they must be configured to route traffic to internal endpoints.
- Are used for exposing services to external cloud apps securely.
For example, the factory’s telemetry API is accessible via https://factory.example.com/api, but instead of exposing the API directly, a cloud endpoint forwards traffic to its internal endpoint:
on_http_request:
  - expressions:
      - req.url.path.startsWith("/api")
    actions:
      - type: forward-internal
        config:
          url: https://api.internal
Still curious about why this "shape" of cloud and internal endpoints is perfect for getting APIs online securely and easily? We have an explainer for that.
Define internal endpoints in ngrok.yaml
After installing the ngrok agent, define all required internal endpoints inside the ngrok configuration file, which is at /etc/ngrok.yml on Linux or C:\ngrok\ngrok.yml on Windows.
version: 3
agent:
  authtoken: YOUR_NGROK_AUTHTOKEN
endpoints:
  - name: telemetry-api
    url: https://telemetry-api.internal
    upstream:
      url: http://192.168.1.100:8080 # API inside factory network
  - name: sensor-db
    url: tcp://database.internal:5432
    upstream:
      url: tcp://192.168.1.101:5432 # PostgreSQL database
  - name: factory-dashboard
    url: https://dashboard.internal
    upstream:
      url: http://192.168.1.102:3001 # Web dashboard
  - name: agent-api
    url: https://agent-api.internal
    upstream:
      url: http://localhost:4040 # Expose Agent API internally
Install ngrok as a background service
Now, install and start the service.
ngrok service install --config /etc/ngrok.yml
ngrok service start
This will start all tunnels defined in the configuration file, ensure ngrok runs persistently in the background, and integrate with native OS service tooling.
Reserve a TCP address for your TCP-based cloud endpoint
When you reserve a TCP address, you can create a TCP cloud endpoint that binds to that address. Reserved TCP addresses are available on ngrok’s pay-as-you-go plan.
curl -X POST \
-H "Authorization: Bearer <NGROK_API_KEY>" \
-H "Content-Type: application/json" \
-H "Ngrok-Version: 2" \
-d '{"description":"SQLdb Address", "region":"us"}' \
https://api.ngrok.com/reserved_addrs
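If you script factory provisioning, you'll want to capture the address that ngrok assigns from the API response. Here's a minimal Python sketch; the response shape and the sample values (like 1.tcp.ngrok.io:12345) are illustrative assumptions, so verify field names against the ngrok API reference:

```python
import json

# Hypothetical response from POST /reserved_addrs -- the field names follow
# ngrok's API conventions, but treat the exact shape as an assumption.
response_body = '''
{
  "id": "ra_2abc123",
  "addr": "1.tcp.ngrok.io:12345",
  "region": "us",
  "description": "SQLdb Address"
}
'''

reserved = json.loads(response_body)

# The "addr" field is what the TCP cloud endpoint binds to later, so store
# it alongside the factory's other provisioning metadata.
tcp_url = f"tcp://{reserved['addr']}"
print(tcp_url)  # tcp://1.tcp.ngrok.io:12345
```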
Reserve a custom wildcard domain
Creating a custom wildcard domain will allow you to create endpoints and receive traffic on any subdomain of your domain. Wildcard domains are available on ngrok’s pay-as-you-go plan once you verify your account with support. It can be helpful to create a separate subdomain for each factory you wish to connect to.
curl \
-X POST \
-H "Authorization: Bearer <NGROK_API_KEY>" \
-H "Content-Type: application/json" \
-H "Ngrok-Version: 2" \
-d '{"domain":"*.api.acme.com","region":"us"}' \
https://api.ngrok.com/reserved_domains
Create a bot user and authtoken for the factory
Create a bot user so that you can create an agent authtoken independent of any user account. A bot user represents a service account and allows each customer network to have its own authtoken. If one authtoken is compromised, only that customer network is affected rather than all of them.
curl \
-X POST \
-H "Authorization: Bearer <NGROK_API_KEY>" \
-H "Content-Type: application/json" \
-H "Ngrok-Version: 2" \
-d '{"name":"bot user for factory 1"}' \
https://api.ngrok.com/bot_users
Navigate to the Authtokens section of your ngrok dashboard and create an authtoken for that bot user and assign it an ACL binding, so that it can only create endpoints bound to the reserved wildcard domain for that factory’s telemetry API.
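You can also create this scoped authtoken programmatically via the credentials API. Here's a Python sketch that builds the request body; the `acl` rule syntax ("bind:&lt;domain&gt;") and field names follow ngrok's API conventions, but verify them against the API reference before relying on this:

```python
import json

def build_authtoken_request(factory: str, bot_user_id: str) -> str:
    """Build the JSON body for POST https://api.ngrok.com/credentials.

    The `acl` field restricts which endpoints this authtoken may bind,
    so the agent in one factory cannot create endpoints on another
    factory's subdomain. The rule syntax here is an assumption drawn
    from ngrok's ACL conventions.
    """
    payload = {
        "description": f"authtoken for {factory}",
        "owner_id": bot_user_id,  # bot user ID returned by POST /bot_users
        "acl": [f"bind:{factory}.api.acme.com"],
    }
    return json.dumps(payload)

body = build_authtoken_request("factory1", "bot_2abc123")
print(body)
```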

Create cloud endpoints for always-on API and database access
Since the telemetry API and database must be always accessible, create permanent cloud endpoints that route traffic to their internal endpoints.
Create an HTTPS cloud endpoint for the API using the ngrok platform API.
curl -X POST \
-H "Authorization: Bearer <NGROK_API_KEY>" \
-H "Content-Type: application/json" \
-H "Ngrok-Version: 2" \
-d '{
  "type": "cloud",
  "url": "https://factory1.api.acme.com",
  "traffic_policy": {
    "on_http_request": [
      {
        "expressions": ["req.url.path.startsWith(\"/api\")"],
        "actions": [{ "type": "forward-internal", "config": { "url": "https://telemetry-api.internal" } }]
      }
    ]
  }
}' \
https://api.ngrok.com/endpoints
Now factory1.api.acme.com/api permanently forwards traffic to telemetry-api.internal inside the factory.
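A cloud app consuming this always-on endpoint needs nothing ngrok-specific: a plain HTTPS request to the cloud endpoint's /api path is enough. Here's a minimal Python sketch; the /machines/... route is a hypothetical path used only for illustration:

```python
from urllib import request

def telemetry_request(factory: str, path: str) -> request.Request:
    """Build a GET request against a factory's telemetry cloud endpoint.

    factory1.api.acme.com is a subdomain of the reserved wildcard domain
    from earlier; the /api prefix is what the cloud endpoint's traffic
    policy matches on before forwarding to the internal endpoint.
    """
    url = f"https://{factory}.api.acme.com/api/{path.lstrip('/')}"
    return request.Request(url, headers={"Accept": "application/json"})

req = telemetry_request("factory1", "/machines/7/metrics")
print(req.full_url)  # https://factory1.api.acme.com/api/machines/7/metrics
# To actually fetch data (requires the endpoint to be live):
#   with request.urlopen(req) as resp: print(resp.read())
```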
Next, you need to create a TCP cloud endpoint for the sensor database because it's on a TCP-based internal endpoint.
curl -X POST \
-H "Authorization: Bearer <NGROK_API_KEY>" \
-H "Content-Type: application/json" \
-H "Ngrok-Version: 2" \
-d '{
  "url": "reserved TCP address",
  "type": "cloud",
  "bindings": ["public"],
  "traffic_policy": "on_tcp_connect:\n  - actions:\n      - type: restrict-ips\n        config:\n          enforce: true\n          allow:\n            - 203.0.113.0/24\n          deny:\n            - 192.0.2.0/24\n      - type: forward-internal\n        config:\n          url: \"tcp://database.internal:5432\""
}' \
https://api.ngrok.com/endpoints
Now, your reserved TCP address forwards TCP connections to the factory database, allows access only from 203.0.113.0/24, and denies traffic from 192.0.2.0/24.
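Cloud apps then connect to the database through the reserved address with a standard PostgreSQL client, since ngrok forwards the raw TCP stream unmodified. A small Python sketch that assembles the connection string; the host, port, user, and database names here are hypothetical placeholders:

```python
def postgres_dsn(reserved_addr: str, user: str, db: str) -> str:
    """Build a PostgreSQL DSN pointing at the reserved TCP address.

    reserved_addr is the "addr" value returned when you reserved the
    TCP address (e.g. 1.tcp.ngrok.io:12345 -- a hypothetical value).
    """
    host, port = reserved_addr.rsplit(":", 1)
    return f"postgresql://{user}@{host}:{port}/{db}"

dsn = postgres_dsn("1.tcp.ngrok.io:12345", "telemetry", "sensors")
print(dsn)  # postgresql://telemetry@1.tcp.ngrok.io:12345/sensors
# Any libpq-based client accepts this DSN, e.g. psycopg.connect(dsn).
```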
Enable on-demand web dashboard access
Since the web dashboard should only be online when needed, use ngrok’s agent API to dynamically start and stop tunnels.
As a technician, you can start the tunnel by running:
curl -X POST \
-H "Content-Type: application/json" \
-d '{
  "name": "dashboard",
  "proto": "http",
  "addr": "3001",
  "domain": "app.factory.com"
}' \
https://agent.example.com/api/tunnels
Now, app.factory.com is live, but only for this session.
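When the maintenance session is over, the same agent API tears the tunnel down again. Here's a Python sketch, assuming the agent API is reachable at the same agent.example.com host (a placeholder) and follows the agent API's DELETE /api/tunnels/&lt;name&gt; route:

```python
from urllib import request

# Placeholder host for the agent API exposed earlier in the config.
AGENT_API = "https://agent.example.com/api"

def stop_tunnel(name: str) -> request.Request:
    # The ngrok agent API removes a running tunnel via
    # DELETE /api/tunnels/<name>, taking the dashboard back offline.
    return request.Request(f"{AGENT_API}/tunnels/{name}", method="DELETE")

req = stop_tunnel("dashboard")
print(req.method, req.full_url)
# To execute for real: request.urlopen(req) -- requires network access
# to the agent API, so it is not run here.
```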
Secure API access with Google OAuth and mTLS using Traffic Policy
Navigate to your newly created telemetry API cloud endpoint in the endpoints tab on the dashboard, and apply a Traffic Policy. Make sure you have a certificate on hand, or generate one using the instructions in our terminate-tls docs.
on_tcp_connect:
  - actions:
      - type: terminate-tls
        config:
          mutual_tls_certificate_authorities:
            - |-
              -----BEGIN CERTIFICATE-----
              ... certificate ...
              -----END CERTIFICATE-----
on_http_request:
  - actions:
      - type: oauth
        config:
          provider: google
      - type: forward-internal
        config:
          url: https://telemetry-api.internal
Now, only Google-authenticated users can access the telemetry API, and the connection is secured with mTLS.
Welcome to your secure, connected, and automated factory
You officially made it! You have now integrated a system that allows you to seamlessly and securely access any and all remote services within your enterprise. Let’s recap what you've built:
- One ngrok agent per factory and no need for multiple installs.
- Always-online API and database, securely available via cloud endpoints.
- A web dashboard that spins up on-demand with authentication via Azure AD and mTLS security.
- An Agent API that dynamically manages tunnels with automatic provisioning and deprovisioning.
- ngrok runs as a background service, which means it's reliable and will always auto-restart.
Have questions or need a boost from a customer engineer like myself to get you started? Our device gateway tutorial has some helpful generic advice, but you can always email me directly to get some more information on adapting this architecture and request flow to your factories and services.
For everything else, contact our team.
And don’t forget to sign up for your free ngrok account!