How I made and deployed my personal site

Posted on Jul 14, 2025

Before matriculating, I hosted a simpler personal site on this domain – in fact, you can still view it here! If you happen to be more technically inclined, feel free to critique the code powering my site.

To be honest, I lacked a clear intention when I first built the site. I largely created one because everyone around me was creating one, and I wanted an excuse to practice my frontend development skills with React. Of course, it was not designed to be extensible – adding technical articles and random ideas was painfully (and unnecessarily) tedious.

This time, it’s different. I know what I want – I want a clean, fuss-free and minimal space where I can easily document and share whatever I’m exploring.

Finding a static blog generator

I think this part was relatively straightforward. Hugo is one of the most popular open-source static site generators, and it boasts robust community support, detailed documentation and an extensive collection of themes.

To get started, you first have to install hugo (duh). If you are averse to Homebrew / prefer to build from source, you can also check out Hugo’s official installation guide.

$ brew install hugo

After selecting a theme, it was fairly easy to get everything set up for local development. Hot-reloading (i.e. automatic regeneration of the site whenever a file changes) was a nice-to-have 😊

$ hugo new site findbenn 
$ cd findbenn
$ git init && git submodule add ${THEME_LINK} ${THEME_DIRECTORY}
$ echo "theme = ${THEME_NAME}" >> hugo.toml

To spin up a local instance available on port 1313, simply run the following from the project root directory.

$ hugo server
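
As a rough sketch of the day-to-day workflow (the post path below is just an example), new content is scaffolded with hugo new, and a plain hugo build generates the final static files:

$ hugo new posts/my-first-post.md   # scaffold a post (it starts as a draft)
$ hugo --minify                     # build the site into the public/ directory

The contents of public/ are what we will eventually copy onto the server.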

Renting Linode

Linode offers cheap Linux-based (e.g. Ubuntu LTS) servers that you can rent to host whatever you want. I opted for the smallest (and cheapest) plan – Nanode 1GB for $5/month. This brings the total bill to $60/year 😱 I’m all ears if someone happens to know cheaper cloud providers. One day – if I have bandwidth in the future – I’ll consider setting up my own homelab!

Preparing our Linode Server

After your compute instance has been provisioned, you can ssh into your server. The IP address should be reflected in your Linode dashboard.

I find it helpful to always update any packages already installed on the Ubuntu server.

apt update && apt dist-upgrade

First, we have to give our server a meaningful hostname. Since I intend to use findbenn.com, I set it as the hostname in /etc/hostname. We then map the loopback address 127.0.1.1 to findbenn.com in /etc/hosts.

vim /etc/hostname
vim /etc/hosts

Add this line to /etc/hosts

127.0.1.1 findbenn.com
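
For reference, here is roughly what the two files might look like afterwards – the default localhost entries stay as they are:

# /etc/hostname
findbenn.com

# /etc/hosts (relevant lines)
127.0.0.1 localhost
127.0.1.1 findbenn.com

The new hostname takes effect after a reboot, or immediately via sudo hostnamectl set-hostname findbenn.com.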

Instead of using root with unlimited privileges, I created a separate user with sudo access – and it’s surprisingly easy. I’ve always thought that user and permissions management is hard to do right.

adduser benn
adduser benn sudo # add to sudo group
reboot

And now, we can use ssh benn@IP instead :-)
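
If you have an SSH key pair on your local machine, this is also a good moment to copy your public key over so future logins skip the password prompt (a quick sketch, run from your local terminal):

ssh-copy-id benn@IP   # appends your public key to ~/.ssh/authorized_keys on the server
ssh benn@IP           # should now log in without asking for a password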

Apache2 configuration

To configure our HTTP server, we first need to install Apache2. It also comes with some utility scripts (as we will see later) that simplify our setup.

sudo apt install apache2 apache2-doc apache2-utils
systemctl status apache2

Alternatively, you can also check from the Linode dashboard whether the server is up.

At this point, there are two folders of interest to us. The first is /etc/apache2/sites-available and the second is /etc/apache2/sites-enabled. As the name implies (and from this post), virtual sites listed under /etc/apache2/sites-enabled are the ones actually being served by Apache.

So usually, the workflow is to create a virtual site in /etc/apache2/sites-available and, when it’s ready, either copy it over via cp or create a symbolic link via ln -s. And because this is done so often, there’s actually a pair of commands dedicated to enabling/disabling virtual sites! Introducing a2ensite and a2dissite. An easy way to remember is to break the name apart – for example a2dissite is a2 (short for Apache2), dis (short for disable) and site.
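
You can see this for yourself – the entries under sites-enabled are just symbolic links pointing back into sites-available:

ls -l /etc/apache2/sites-enabled
# 000-default.conf -> ../sites-available/000-default.conf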

In the sites-available directory, 000-default.conf is the default configuration file that’s responsible for serving the Apache2 Ubuntu Default Page. It’s the kind that you usually see if the server admins mess something up!

Let’s disable it since we want to serve our own virtual site!

ls /etc/apache2/sites-available
sudo a2dissite 000-default.conf
sudo systemctl reload apache2

We can create our own virtual site config findbenn.com.conf like so:

<VirtualHost *:80>
    ServerAdmin benn.tan@u.nus.edu
    ServerName  findbenn.com
    ServerAlias findbenn.com

    # Index File and Document Root
    DirectoryIndex index.html index.php
    DocumentRoot /var/www/html/findbenn.com/public_html

    # Log file locations
    LogLevel warn
    ErrorLog  /var/www/html/findbenn.com/log/error.log
    CustomLog /var/www/html/findbenn.com/log/access.log combined

    RewriteEngine on
    RewriteCond %{SERVER_NAME} =findbenn.com
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

Essentially, this tells Apache2 where our site lives and where to write its output. For instance, any errors are logged to /var/www/html/findbenn.com/log/error.log. We’ll create these directories in a bit :-)

VirtualHosts are also very cool because they essentially allow us to host multiple domains on a singular host!
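
Once the config file is saved, remember to enable it (the counterpart of the a2dissite we ran earlier) and reload Apache. One caveat: the RewriteEngine directives above depend on Apache’s rewrite module, so if the config test later complains about an invalid RewriteEngine command, enable that module too:

sudo a2ensite findbenn.com.conf
sudo a2enmod rewrite              # required by the RewriteEngine/RewriteRule directives
sudo systemctl reload apache2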

There were numerous instances when I forgot to open the above file with sudo, preventing me from saving the write-protected file. A useful hack I came across was this Vim mapping:

cmap w!! w !sudo tee > /dev/null %

In short, we are invoking the shell command sudo tee in place of Vim’s internal write mechanism. We are piping the contents of the buffer to tee, which writes to standard output and to the file provided. Not needing the standard output, we redirect it to /dev/null (i.e. a black hole).

Another pro tip: if at any point you suspect that your Apache2 virtual site is not running (e.g. perhaps due to a typo in the config file), you can validate the configuration like so.

cd /etc/apache2
apache2ctl configtest
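
If the configuration parses cleanly, the output should end with:

Syntax OK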

When running the above command, I also encountered this error.

Firewall

We’ll also need to allow HTTP (port 80) and HTTPS (port 443) via ufw. It stands for uncomplicated firewall (and it really is). It’s intended as a simpler abstraction over iptables. You can read more about it here.

The OpenSSH profile allows inbound port 22 (so that we can ssh and sftp into our Linode server), while the Apache Full profile permits inbound ports 80 and 443 so that a user can connect to our site via HTTP and/or HTTPS. You can verify this by running sudo ufw app info PROFILE.

sudo ufw app list
sudo ufw allow 'OpenSSH'
sudo ufw allow 'Apache Full'
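
If ufw isn’t active yet, you’ll also need to switch it on and double-check the rules – just make sure OpenSSH is allowed first, or you risk locking yourself out of the server:

sudo ufw enable
sudo ufw status verbose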

Serving static assets

After configuring our Apache2 virtual site and firewall, we can finally get into the meat of our website – our public, static assets! Our HTML files will be served from /var/www/html/findbenn.com/public_html.

cd /var/www/html
sudo mkdir findbenn.com
cd findbenn.com
sudo mkdir public_html
sudo mkdir log
sudo mkdir backups

You can either use rsync or FileZilla (i.e. SFTP) to synchronize the generated static files between your development machine and the remote server.
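
For instance, a minimal rsync sketch – assuming Hugo’s build output lives in public/ and that you’ve handed ownership of the site directory to your user so it can write there:

# on the server (one-off): let benn write to the web root
sudo chown -R benn:benn /var/www/html/findbenn.com

# on your local machine, from the Hugo project root
hugo --minify
rsync -avz --delete public/ benn@IP:/var/www/html/findbenn.com/public_html/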

At this point, the site should be available via HTTP. Now let’s configure it so that it supports HTTPS.

Extending Support for SSL

We first install netstat as part of net-tools. This utility will come in handy for listing all network-related processes on our server.

sudo apt install net-tools
netstat -tulpn | grep apache

Right now, we’ll see that our Apache2 server is only listening on port 80. We first need to enable the ssl module in Apache2. It’s available but not enabled. Yes, Apache2 uses this same pattern for both sites and modules! You can verify this via:

cd /etc/apache2
ls -l mods-available | grep ssl # reflected here
ls -l mods-enabled | grep ssl   # but not shown here

And just like a2ensite, there’s also something similar but for modules instead!

sudo a2enmod ssl
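
After enabling the module, restart Apache and confirm it was actually loaded (apache2ctl -M lists the active modules):

sudo systemctl restart apache2
sudo apache2ctl -M | grep ssl   # should print ssl_module (shared)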

To simplify our SSL setup, we’ll be using certbot on our Apache server. Internally, it obtains a TLS certificate for our server from Let’s Encrypt.

The first time you run certbot, it will ask you for an email address, to agree to the terms of service, and whether or not you want to receive their newsletter.

sudo apt install certbot python3-certbot-apache
sudo certbot --apache -d findbenn.com

If the certificate is issued successfully, you should see:

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/findbenn.com/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/findbenn.com/privkey.pem
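
A nice bonus is that renewal is automated – the Ubuntu certbot package typically installs a systemd timer for it. You can sanity-check the renewal setup with a dry run:

sudo certbot renew --dry-run
systemctl list-timers | grep certbot   # the scheduled renewal timer, if present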

To verify, you can inspect the certificate like so:

sudo openssl x509 -text -noout -in /etc/letsencrypt/live/findbenn.com/fullchain.pem

And finally to test whether the SSL setup is functioning, you can run this:

netstat -tulpn | grep apache

If everything is working correctly, you will finally see that Apache2 is listening on ports 80 and 443.

tcp6       0      0 :::443                  :::*                    LISTEN      57878/apache2       
tcp6       0      0 :::80                   :::*                    LISTEN      57878/apache2   

Alternatively, you can also run this command from your local terminal.

curl --insecure -vvI https://www.example.com 2>&1 | awk 'BEGIN { cert=0 } /^\* SSL connection/ { cert=1 } /^\*/ { if (cert) print }'

A normal, functioning TLS/SSL connection output should look like this:

* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
*  subject: C=US; ST=California; L=Los Angeles; O=Internet Corporation for Assigned Names and Numbers; CN=*.example.com
*  start date: Jan 15 00:00:00 2025 GMT
*  expire date: Jan 15 23:59:59 2026 GMT
*  issuer: C=US; O=DigiCert Inc; CN=DigiCert Global G3 TLS ECC SHA384 2020 CA1
*  SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://www.example.com/
* [HTTP/2] [1] [:method: HEAD]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: www.example.com]
* [HTTP/2] [1] [:path: /]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* Request completely sent off
* Connection #0 to host www.example.com left intact

fin

Voilà, your site should finally be up! It was certainly tempting to rely on managed platforms like Vercel or Render, but I think I’ve gained a deeper appreciation of the work behind setting up a server that serves static files. I also feel like I’ve barely scratched the tip of a never-ending iceberg.