Secure Remote Access with ngrok for SSH and RDP
By creating secure TCP endpoints, ngrok gives technicians, engineers, and IT admins easy, centralized access to maintain and update remote devices and services. This guide shows you how to streamline remote access over SSH or RDP without compromising security.
Examples where this is applicable include secure SSH into remote IoT devices or connecting to a Windows RDP server in a remote network. This guide demonstrates the setup for a single remote network with one edge gateway (edge gateway 1); it scales to many networks, with or without a central gateway per network.
Architectural Reference
Why only one ngrok agent per remote network?
You might assume that every service/device inside the network needs its own ngrok agent, but this isn't necessary. Instead, a single ngrok agent is installed on a network-accessible server inside the remote network, and it:
- Acts as a central gateway that can reach any service on the local network, eliminating the need for multiple agents.
- Creates Internal Endpoints so that each server/device is securely exposed inside ngrok, never publicly visible.
- Uses Cloud Endpoints for controlled, granular access. External cloud apps can access only what they need.
- Runs as a background service configured to automatically start on boot, restart after crashes, and log events.
This setup minimizes security risks, simplifies deployment, and ensures continuous uptime for mission-critical connections within your remote devices and servers.
What you'll need
- An ngrok account. If you don't have one, sign up.
- An ngrok agent, configured on your edge gateway or remote device/service. See the getting started guide for instructions on how to install the ngrok agent.
- An ngrok API Key. You'll need an account first.
1. Create a bot user and authtoken to enable isolated agent management
Create a bot user so that you can create an agent authtoken independent of any user account. A bot user represents a service account and allows each server to have its own authtoken. If one authtoken is compromised, only that agent is affected rather than all of them.
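The call below is a minimal sketch using ngrok's API, assuming your API key is exported as `NGROK_API_KEY`; the bot user name `edge-gateway-1-bot` is just an example:

```bash
# Create a bot user (service account) to own the edge gateway agent's authtoken
curl -X POST https://api.ngrok.com/bot_users \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Ngrok-Version: 2" \
  -d '{"name": "edge-gateway-1-bot"}'
```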
Create an authtoken assigned to this specific bot user. Apply an ACL rule to ensure this bot user can only create internal endpoints for edge gateway 1.
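A sketch of that request, assuming the credentials API accepts the bot user's ID in `owner_id` (replace the placeholder with the ID returned above) and an ACL pattern that only permits binding internal endpoints under an `edge-gateway-1` naming convention of your choosing:

```bash
# Create an authtoken owned by the bot user, scoped by an ACL rule
# (owner_id and the ACL pattern are assumptions; adjust to your account's conventions)
curl -X POST https://api.ngrok.com/credentials \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Ngrok-Version: 2" \
  -d '{
        "description": "edge gateway 1 agent",
        "owner_id": "bot_REPLACE_WITH_BOT_USER_ID",
        "acl": ["bind:*.edge-gateway-1.internal"]
      }'
```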
2. Define internal endpoints in ngrok.yml for privatized device access
An internal endpoint enables a service inside the remote network to be reachable within ngrok without being publicly exposed. Internal endpoints can:
- Only receive traffic from cloud endpoints or internal services that explicitly route traffic to them.
- Not be accessed directly from the internet.
- Be used for SSH, RDP, telemetry APIs, databases, and dashboards.
After installing the ngrok agent, define all required internal endpoints inside the ngrok configuration file. For example, you might keep the configuration file at /path/to/ngrok/ngrok.yml and the executable at /path/to/ngrok/ngrok.
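A possible configuration (agent config version 3) with placeholder internal URLs, local IPs, and ports; swap in the addresses of your own devices:

```yaml
# /path/to/ngrok/ngrok.yml (example values)
version: 3
agent:
  authtoken: <bot-user-authtoken>

endpoints:
  # SSH on an IoT device reachable from the edge gateway
  - name: device-1-ssh
    url: tcp://device-1-ssh.edge-gateway-1.internal:22
    upstream:
      url: tcp://192.168.1.10:22
  # RDP on a Windows server in the same network
  - name: rdp-server
    url: tcp://rdp-server.edge-gateway-1.internal:3389
    upstream:
      url: tcp://192.168.1.20:3389
```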
Note: If there is no edge gateway present in your network, you must configure an agent on each device with an internal endpoint for that device.
3. Reserve a TCP address for each device/server
Reserving a TCP address is required for creating a cloud endpoint, which you'll do in a later step. A reserved TCP address is held specifically for your ngrok account. Reserve one for each device connected to the edge gateway as well as for the RDP server.
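A sketch of the reservation call; the description and region are examples:

```bash
# Reserve a TCP address held by your account
# (note the returned address, e.g. 1.tcp.ngrok.io:12345, for the next step)
curl -X POST https://api.ngrok.com/reserved_addrs \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Ngrok-Version: 2" \
  -d '{"description": "device 1 SSH", "region": "us"}'
```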
Repeat the above command for each device/server you need remote access to.
4. Create your TCP cloud endpoint and attach a Traffic Policy
A cloud endpoint is a permanent, externally accessible entry point into the network that:
- Is managed centrally via the ngrok API or dashboard.
- Is always on and not tied to the lifecycle of the agent; its configuration can be modified at the cloud level via the dashboard/API, with no need to change the config on the device itself.
- Does not forward traffic to the agent by default; it must be configured to route traffic to internal endpoints.
- Is used for exposing services to external cloud apps securely.
The curl command below uses ngrok's platform API to create a cloud endpoint and attach a traffic policy to it. ngrok's Traffic Policy gives you a flexible, composable way to route and manipulate traffic. In this case, the command creates the endpoint and attaches a forward-internal action, which forwards traffic from the cloud endpoint to the internal endpoint. Repeat this for each device connected to the edge gateway where the agent sits.
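A sketch of that call, assuming the reserved address from step 3 (tcp://1.tcp.ngrok.io:12345 here) and the internal endpoint URL defined in step 2; the traffic policy is passed as a JSON-encoded string:

```bash
# Create a TCP cloud endpoint on the reserved address and forward it to the internal endpoint
curl -X POST https://api.ngrok.com/endpoints \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Ngrok-Version: 2" \
  -d '{
        "type": "cloud",
        "url": "tcp://1.tcp.ngrok.io:12345",
        "traffic_policy": "{\"on_tcp_connect\":[{\"actions\":[{\"type\":\"forward-internal\",\"config\":{\"url\":\"tcp://device-1-ssh.edge-gateway-1.internal:22\"}}]}]}"
      }'
```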
5. Secure your cloud endpoint with IP Restrictions
Navigate to your newly created cloud endpoint in the Endpoints tab of your ngrok dashboard and apply a restrict-ips traffic policy action to enable a source IP allowlist. With IP restrictions, you control exactly who and what can use the endpoint, blocking port scanners and other malicious actors. You can add this action directly to the cloud endpoint's YAML configuration. A properly formatted config for this action is shown below:
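One way to format it, using an example allowlist CIDR and the internal endpoint URL from earlier as placeholders:

```yaml
# Traffic policy for the cloud endpoint: allowlist source IPs, then forward internally
on_tcp_connect:
  - actions:
      - type: restrict-ips
        config:
          enforce: true
          allow:
            - 203.0.113.0/24
      - type: forward-internal
        config:
          url: tcp://device-1-ssh.edge-gateway-1.internal:22
```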
You can also create the TCP endpoint and attach its traffic policy with the forward-internal and restrict-ips actions all in one API call:
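For example, with the same placeholder values as above:

```bash
# Create the cloud endpoint with restrict-ips and forward-internal in a single call
curl -X POST https://api.ngrok.com/endpoints \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Ngrok-Version: 2" \
  -d '{
        "type": "cloud",
        "url": "tcp://1.tcp.ngrok.io:12345",
        "traffic_policy": "{\"on_tcp_connect\":[{\"actions\":[{\"type\":\"restrict-ips\",\"config\":{\"enforce\":true,\"allow\":[\"203.0.113.0/24\"]}},{\"type\":\"forward-internal\",\"config\":{\"url\":\"tcp://device-1-ssh.edge-gateway-1.internal:22\"}}]}]}"
      }'
```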
6. Use your ngrok TCP endpoints with your SSH/RDP Clients
Now that you have created and secured your TCP cloud endpoints and they forward to the correct upstream devices, you can plug these TCP endpoints into your existing SSH/RDP client setups to test your remote connectivity.
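For example, assuming tcp://1.tcp.ngrok.io:12345 forwards to a device's SSH daemon and tcp://2.tcp.ngrok.io:23456 to the RDP server (both placeholders), point your clients at the ngrok address instead of the device's private IP:

```bash
# SSH through the TCP cloud endpoint
ssh -p 12345 admin@1.tcp.ngrok.io

# RDP from a Windows client (any RDP client pointed at the same address also works)
mstsc /v:2.tcp.ngrok.io:23456
```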
Additional ngrok Features
Enable endpoint pooling
You can use endpoint pooling with multiple internal agent endpoints to achieve redundancy and high availability for services inside your network. If desired, install a second agent within the remote network as a failover in case the primary agent goes offline.
Configure each agent endpoint to use the same ngrok internal URL; this automatically forms an endpoint pool. Incoming traffic to the pooled URL is distributed among all healthy endpoints in the pool. If one endpoint goes offline, traffic is seamlessly routed to the remaining endpoints, ensuring redundancy and failover.
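A sketch of the failover agent's configuration, assuming the same internal URL as the primary agent and the pooling_enabled flag of the version 3 agent config:

```yaml
# Second (failover) agent's ngrok.yml: binding the same internal URL forms a pool
version: 3
agent:
  authtoken: <second-bot-user-authtoken>

endpoints:
  - name: device-1-ssh-failover
    url: tcp://device-1-ssh.edge-gateway-1.internal:22
    pooling_enabled: true
    upstream:
      url: tcp://192.168.1.10:22
```

Enable pooling on the primary agent's matching endpoint as well so both bindings join the same pool.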
Create a custom connect URL
This provides a white-labeling capability so that your ngrok agents connect to connect.example.com instead of the default connection hostname (connect.ngrok-agent.com). Dedicated IPs unique to your account, which your agents connect to, are also available. This removes the risk of rogue agents in your network trying to call home and adds an additional layer of security by making your ngrok connectivity specific to your organization. Custom connect URLs are available with ngrok's pay-as-you-go plan as an additional paid feature.
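A sketch using the agent ingresses API, where connect.example.com stands in for a domain you control:

```bash
# Register a custom connect URL for your agents
curl -X POST https://api.ngrok.com/agent_ingresses \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Ngrok-Version: 2" \
  -d '{"description": "edge gateway agents", "domain": "connect.example.com"}'
```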
Once you have created the custom connect URL, add this section to your agent configuration file to specify it:
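A sketch of the relevant section, assuming a version 3 agent config and port 443:

```yaml
# Agent configuration: connect through the custom connect URL
agent:
  connect_url: connect.example.com:443
```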
Install ngrok as a background service
Now, install and start the service:
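With the config path used earlier in this guide:

```bash
# Install ngrok as a native OS service and start it
ngrok service install --config /path/to/ngrok/ngrok.yml
ngrok service start
```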
In most cases, installing ngrok as a service requires administrator privileges.
This will start all tunnels defined in the configuration file, ensure ngrok runs persistently in the background, and integrate with native OS service tooling.
Recap
You officially made it! You have now built a system that allows you to seamlessly and securely access any and all devices/servers within your remote network. Let's recap what you've built:
- One ngrok agent per remote network and no need for multiple installs.
- Always-online devices and servers, securely available via cloud and internal endpoints.
- Granular access with a composable traffic policy, offering refined and robust security measures for your endpoints.
- Not sure how to explain ngrok to your end users? Check out this guide which details ngrok's standards on security, trust, compliance, and privacy.