The Problem
If you’ve ever tinkered with self-hosting web projects and services, you’re probably already familiar with one of the major web servers like Apache or nginx. You’ve likely also hosted other non-HTTPS services such as SSH, whether for headless access to a Linux machine or, as I like to do, for tunneling your browser traffic through a home Linux server via a SOCKS proxy. This is especially useful when traveling and using open or unsecured WiFi networks, such as at hotels or coffee shops, to prevent your connection from being monitored surreptitiously. Generally, tunneling like that works well, and you can run your HTTPS traffic on the standard port 443 and SSH on 22 (or better yet some high-numbered non-standard port). However, you may occasionally run into a network where outbound ports are blocked, making it difficult or impossible to reach your SSH server. Ports like 80 and 443 are usually left open for outbound traffic to avoid breaking normal HTTP/HTTPS browsing, and we can use this fact to get around the blocked-port problem.
The web server nginx has a mechanism by which we can use a single open port, in this case 443 (HTTPS), to accept both incoming HTTPS and SSH connections, so that we can access our remote machine even on networks that block other ports, while still hosting other HTTPS servers and services on the same machine. Continue reading to find out how.
Prerequisites
I will assume that you are running a suitable platform, such as my personal favorite Arch Linux, and can find and install a recent version of nginx onto your system.
Secondly, I will assume you’ve also done the necessary firewall configuration and port forwarding to allow nginx to accept and handle HTTPS or SSH requests coming in to your public IP on port 443. Once again, if this is unfamiliar, there are online resources that explain it, but I imagine if you’re reading this, you already know how to do those things.
For this tutorial, I’m assuming all nginx configuration is being done in /etc/nginx/nginx.conf, though I am aware there are other, more modern ways to configure nginx via /etc/nginx/conf.d/*.conf files and such. Feel free to accomplish your configuration however you prefer. The main points should be portable.
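If you do use the split layout, the only structural difference is that your main file pulls the per-site files in with an include directive. Here is a minimal sketch of that arrangement; the paths are the conventional defaults and may differ on your distribution:

http {
    # Pull in per-site configuration files kept under conf.d
    include /etc/nginx/conf.d/*.conf;
}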
If you’re ready, let’s jump straight to the nginx configuration details.
Using the Nginx Stream Directive to Detect the Incoming Connection Type
Nginx has a stream directive, which can be used to accept incoming TCP connections and provides features such as proxying to SSH servers, load balancing HTTPS servers, and various other SSL/TLS-related capabilities. In our case, we are going to use it in conjunction with the $ssl_preread_protocol variable to determine whether the incoming connection is HTTPS or something else (SSH, in our case, is the assumption).
Nginx recently wrote a blog post detailing how to multiplex SSH and SSL/TLS traffic on the same port, much like I’m describing here. I found it while trying to solve my original issue of getting around blocked ports on certain public networks, which prevented me from accessing my home server’s SSH service. The configuration they mention in the blog is like so:
stream {
    upstream ssh {
        server 192.0.2.1:22;
    }

    upstream web {
        server 192.0.2.2:443;
    }

    map $ssl_preread_protocol $upstream {
        default ssh;
        "TLSv1.2" web;
    }

    # SSH and SSL on the same port
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
When I attempted to use their example by adding it to my existing nginx.conf, which contains several other server blocks listening on port 443, I found that nginx would no longer start, because the stream directive was already binding to port 443, and the subsequent server blocks within the main http block were not allowed to re-bind to that port. (Obviously, in my case, I changed the IPs from 192.0.2.1 and 192.0.2.2 to my own machine’s IPs and ports.)
Normally, you can have multiple server blocks within an http block, where listen identifies the port to listen on, and server_name can be used to direct the HTTPS request for a particular domain to whatever internal IP you choose. Those server blocks can all listen on the same port, like 443, and only the one that matches the other criteria will handle the request. However, when you have a stream block at the top of the config and it is binding to 443, you cannot subsequently have server blocks within your http block bound to 443 as well. So, you either have to handle all your servers within the stream block, or use a clever trick to avoid having to move the rest of your config into the stream block…
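For reference, the normal multi-site pattern described above looks roughly like this minimal sketch, with two hypothetical domain names and placeholder certificate paths and backend ports:

http {
    server {
        listen 443 ssl;
        server_name app.example.com;             # hypothetical domain
        ssl_certificate     /etc/nginx/certs/app/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/app/privkey.pem;
        location / {
            proxy_pass http://127.0.0.1:8080/;   # placeholder backend
        }
    }

    server {
        listen 443 ssl;
        server_name blog.example.com;            # hypothetical domain
        ssl_certificate     /etc/nginx/certs/blog/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/blog/privkey.pem;
        location / {
            proxy_pass http://127.0.0.1:8081/;   # placeholder backend
        }
    }
}

Nginx picks whichever server block’s server_name matches the requested hostname, which is exactly what stops working once a stream block has already claimed port 443.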
The Configuration
Here is a snippet of what I ended up with in my nginx.conf file:
user http http;
worker_processes 4;

load_module /usr/lib/nginx/modules/ngx_stream_module.so;

events {
    worker_connections 10000;
}

stream {
    upstream ssh {
        server 127.0.0.1:6789;
    }

    upstream web {
        server 127.0.0.1:444;
    }

    map $ssl_preread_protocol $upstream {
        default ssh;
        "TLSv1.2" web;
        "TLSv1.3" web;
    }

    # SSH and SSL on the same port
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
The lines near the top simply configure this particular machine’s nginx instance to spawn 4 worker processes as the “http” user, each allowing up to 10,000 connections. There is no need to add these to your config or replace what you already have. The line load_module /usr/lib/nginx/modules/ngx_stream_module.so; pulls in the module for stream processing, because the version of nginx I am using does not have it compiled in. You can try with or without it, depending on how your nginx was built.
Next, you’ll notice the stream block, which contains two different upstream blocks. These are used to name the different servers to which traffic can be directed. In my case, both the SSH and web server(s) are on the same machine running nginx, so I used localhost (127.0.0.1) as the upstream server, with port 6789 (for demonstrative purposes) for the SSH server and port 444 for the web server. Why 444, you may ask? More on this later.
The next thing you’ll notice is the map block keyed on the $ssl_preread_protocol variable. Within it are several options, which the map block will attempt to match against the value of $ssl_preread_protocol. As described in the nginx blog post linked above, this variable contains a string identifying the SSL/TLS version being used by the incoming connection. In my case, I only match against “TLSv1.2” and “TLSv1.3” because I restrict HTTPS connections to those protocols and nothing older, but you can add entries for older versions such as TLS 1.0 if you so choose. If one of the TLS version strings matches, the $upstream variable will be set to web. If not, it falls back to the default of ssh.
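As a purely illustrative example, a more permissive map might look like the sketch below; the exact strings nginx reports for legacy protocol versions in $ssl_preread_protocol are an assumption here, so verify them against the documentation for your nginx version before relying on them:

map $ssl_preread_protocol $upstream {
    default    ssh;   # no TLS hello detected (or an unrecognized version): assume SSH
    "TLSv1"    web;   # assumed string for TLS 1.0; legacy clients only if you need them
    "TLSv1.1"  web;   # assumed string for TLS 1.1
    "TLSv1.2"  web;
    "TLSv1.3"  web;
}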
Later, in the server block, the $upstream variable, now mapped to either ssh or web, is used by proxy_pass to redirect the incoming connection to either the SSH server/port or the web one.
Now, back to the question of “why 444?” Doing this allowed me to make a simple modification to the rest of the http->server->listen directives, changing the 443 to 444, and avoid having to migrate or rewrite all of those server directives within the new stream block.
For example, one of my many preexisting http->server directives looked like this:
http {
    server {
        listen 444 ssl;
        ssl_certificate     /etc/nginx/certs/wildcard/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/wildcard/privkey.pem;
        server_name testing.thewalr.us;

        location / {
            proxy_pass http://127.0.0.1:54321/;
            proxy_set_header Host $host;
        }
    }
}
Previously, the listen 444 ssl; line had read listen 443 ssl;, and it stopped working when the new stream directive was added. Now, with a simple change to another previously-unused internal port, we can effectively redirect incoming non-SSH SSL/TLS connections to that new port and let the same nginx instance get a second crack at directing that traffic!
Effectively, what this configuration does is this:
- Accept incoming connections on the public interface on port 443, monitored by the stream directive.
- If SSL/TLS protocol versions are NOT present in the stream, proxy the connection to our server:port running SSH on the LAN side of our network.
- If SSL/TLS protocol versions ARE present in the stream, proxy the connection back to ourselves/nginx, but on port 444.
- Finally, because our nginx config’s http block and its associated server blocks are listening on port 444 later in the same config, the proxy pass to port 444 will be handled “recursively” by the same nginx instance (see the combined sketch below).
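Putting the two halves together, the overall shape of the resulting nginx.conf is roughly the following sketch, reusing the illustrative ports and paths from above:

load_module /usr/lib/nginx/modules/ngx_stream_module.so;

events {
    worker_connections 10000;
}

# Front door: everything arrives on 443 and is sorted by protocol
stream {
    upstream ssh { server 127.0.0.1:6789; }   # local SSH daemon (illustrative port)
    upstream web { server 127.0.0.1:444; }    # hand HTTPS back to this same nginx

    map $ssl_preread_protocol $upstream {
        default ssh;
        "TLSv1.2" web;
        "TLSv1.3" web;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}

# Second pass: the ordinary virtual hosts, now bound to 444 instead of 443
http {
    server {
        listen 444 ssl;
        server_name testing.thewalr.us;
        ssl_certificate     /etc/nginx/certs/wildcard/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/wildcard/privkey.pem;
        location / {
            proxy_pass http://127.0.0.1:54321/;
            proxy_set_header Host $host;
        }
    }
    # ...any additional server blocks, all listening on 444...
}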
This effectively solved my problem of needing to run SSH and HTTPS on the same port, 443, to get around port blocks on public networks, where proxying my browser through my SSH server is most needed for security purposes. And, because I only made a simple port change, all I had to do was search and replace instances of 443 in my existing list of server directives with 444 (or some other suitable port of your choosing), minimizing the number of changes needed. In the future, if I decide to no longer bother with this setup, I could simply proxy all requests from the stream directive to port 444, bypassing the map step, without any other changes being necessary further down the config file.
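For reference, that simplified version of the stream block would look roughly like this: the map and ssl_preread lines go away, and everything arriving on 443 is forwarded straight to the local HTTPS listener on 444.

stream {
    # No protocol detection: every connection on 443 goes to the local HTTPS listener
    server {
        listen 443;
        proxy_pass 127.0.0.1:444;
    }
}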
Please comment below if this helped you, or if you have any suggestions for improvements!