Where I left off last time, I had just finished building my server and setting it up for access through ssh. If all I wanted to do was host content over my local network, this would have been enough. However, I wanted to host content on the open internet, for anyone who cared enough to want to view it: after all, that’s how you’re able to read this post right now. So I set out to create a website.
What is a Website, Anyway?
Nowadays websites can be complex pieces of software, but at their core, the purpose of a website is to host content. But how does that content actually get to your device? The answer is the HyperText Transfer Protocol, or HTTP for short. All websites use this protocol (or its more secure cousin HTTPS) to serve their content. If you ever wondered what the `http://` or `https://` stood for in URLs, now you know. For now, let's see what happens when you visit the home page of this website:

As you can see, when your browser goes to my homepage, it sends an HTTP GET request to my domain, asking for the content at `/`. The response to this request includes the HTML for the homepage, so that the browser can display it.
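To make that exchange concrete, here is a toy version of it in Python — a stand-in, not the actual server this site runs on. A throwaway HTTP server answers a GET for `/` with a scrap of HTML, and a client plays the role of the browser:

```python
import http.server
import threading
import urllib.request

class HomepageHandler(http.server.BaseHTTPRequestHandler):
    """Serve a tiny hard-coded homepage for any GET request."""

    def do_GET(self):
        body = b"<html><body>Hello from my homepage!</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 asks the OS for any free port.
server = http.server.HTTPServer(("127.0.0.1", 0), HomepageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The browser's job in miniature: GET / and read back the HTML.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    status = resp.status
    html = resp.read().decode()
server.shutdown()

print(status, html)
```

The real thing involves DNS lookups, TLS handshakes, and far more headers, but the request/response shape is exactly this.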
So in order to host a website, I needed something that could respond to HTTP requests.
The Reverse Proxy
The good news is, there are people much smarter than me who have made this process easy. To handle HTTP requests, I installed a reverse proxy on my machine, specifically nginx (pronounced “engine x”). All I needed to do was create a config file. Well, create is a bit of an overstatement; really I just used templates provided by other tutorials. Regardless, the file I used is below, with some comments explaining what does what.
```nginx
server {
    # Listen for HTTP requests (not secure).
    listen 80;
    listen [::]:80;

    server_name admoore.xyz www.admoore.xyz;

    # Redirect HTTP requests to use HTTPS.
    return 301 https://$host$request_uri;
}

server {
    # Listen for HTTPS requests.
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name admoore.xyz www.admoore.xyz;

    # SSL certificate shenanigans.
    ssl_certificate /etc/letsencrypt/live/admoore.xyz/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/admoore.xyz/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    add_header Strict-Transport-Security "max-age=31536000" always;
    ssl_trusted_certificate /etc/letsencrypt/live/admoore.xyz/chain.pem;
    ssl_stapling on;
    ssl_stapling_verify on;

    # Point to the location of the content I want to serve.
    root /srv/www/admoore.xyz/public/;

    # The file to serve when / is requested.
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

(Note that `server_name` takes a space-separated list of names, not a comma-separated one — nginx would treat `admoore.xyz,` as one literal name.)
I chose `/srv/` as the location of the content I wanted to serve, because it felt fitting. You can, however, choose any path you'd like, as long as the nginx service has read access to those files. With the config file written, I simply needed to put it in `/etc/nginx/conf.d` and start the service with `sudo systemctl start nginx`.
DNS and Port Forwarding
So I had a server that could respond to HTTP requests, but how could I let the world know where to send requests for my website? The answer is the Domain Name System (DNS). DNS is the protocol that turns domain names (e.g. admoore.xyz) into IP addresses. The routers that make up the backbone of the internet only deal with IP addresses, so being able to turn domains into IP addresses is required in order to access websites. You can try making DNS requests yourself using the `nslookup` command to see what IP address various websites are using at the moment. It looks like the IP address for Google as I am writing this is `142.250.80.110`.
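You can also do the same lookup programmatically. A minimal sketch in Python, going through the OS resolver (it queries `localhost` so it works even without a network; swap in any real domain to query actual DNS):

```python
import socket

# The same resolution nslookup performs: hostname in, IPv4 address out.
# "localhost" resolves locally via /etc/hosts rather than a DNS server,
# but the call is identical for a real domain like "admoore.xyz".
ip = socket.gethostbyname("localhost")
print(ip)
```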
For some extra privacy, I decided to use Cloudflare as my DNS provider. Cloudflare is oriented more towards enterprise users, but they offer a free plan that includes proxied DNS for HTTP and HTTPS traffic. The DNS records for this website point at Cloudflare's servers, which forward requests on to my machine — so when someone looks up this domain, they get an intermediary's IP address instead of my actual one.
The last thing I needed to do was update my router's port forwarding settings. By default, routers won't know what to do with incoming traffic. Adding a few rules tells the router to send incoming HTTP traffic to my server, so that it can handle it. The rules I added are below. In human terms, these tell my router to take all incoming traffic on the ports used for HTTP and HTTPS (80 and 443, respectively), and send it to `192.168.1.4`, which is the local IP address of my server.
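To illustrate what a forwarding rule accomplishes — conceptually only; a real router rewrites packet addresses at the network layer rather than proxying connections — here is a toy relay in Python. A "front" socket stands in for the router's public side, and it hands one request and one reply back and forth to a pretend server listening on another port:

```python
import socket
import threading

def echo_once(server_sock):
    """Pretend web server: accept one connection, echo one message back."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

def forward_once(front_sock, backend_port):
    """A forwarding rule in miniature: relay one request and one
    reply between the outside client and the inside server."""
    conn, _ = front_sock.accept()
    with conn, socket.create_connection(("127.0.0.1", backend_port)) as back:
        back.sendall(conn.recv(1024))   # outside -> inside
        conn.sendall(back.recv(1024))   # inside -> outside

backend = socket.create_server(("127.0.0.1", 0))  # the server on the LAN
front = socket.create_server(("127.0.0.1", 0))    # the router's public side

threading.Thread(target=echo_once, args=(backend,), daemon=True).start()
threading.Thread(target=forward_once,
                 args=(front, backend.getsockname()[1]), daemon=True).start()

# An outside client only ever talks to the "router"; the relay
# delivers its bytes to the server and carries the answer back.
with socket.create_connection(("127.0.0.1", front.getsockname()[1])) as client:
    client.sendall(b"GET / (pretend)")
    reply = client.recv(1024)

print(reply.decode())
```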
Conclusion
And that’s all! With that, my web server was complete. Overall it was not as hard as I thought it would be, but it did give me a greater understanding and appreciation of all the work behind the scenes that makes the internet exist.