This website is hosted on a VPS from Vultr. It is cheap at $6 a month and comes with 1 GB of RAM, 1 vCPU, a 25 GB disk, and 2000 GB of bandwidth, which is adequate for serving a statically generated site. In this post I will detail how I set the VPS up, and share how I created a deployment pipeline that uses two GitLab runners, one on my home network and one on the VPS, to deploy the website. The site is served using NGINX, a common server that is widely used for load balancing and for Ingress in Kubernetes. Cloudflare provides my DNS and protection from DDoS and the like, for free. I use Let's Encrypt for a free SSL certificate, which certbot automatically renews every 90 days. Cloudflare does have its own certificates in front of it, though, which are provided by Google.
Post-provisioning steps
After I have provisioned a Linux server through a VPS provider, there are some post-deploy steps I take to lock it down. First I configure SSH to disallow root login and to only allow login using asymmetric keys, and I install fail2ban to limit the number of login failures allowed before a firewall rule is added to block the offending IP for a certain amount of time. You can also change the port that SSH listens on; this will cut down a lot of the bot traffic you will see trying to brute-force SSH with default usernames and passwords, since bots target the default port, but fail2ban will also take care of this by rate limiting them. Using non-standard usernames, setting a strong password, and forcing key-based authentication will prevent this too. I typically don't change the port, as it is security through obscurity, and anyone who has used nmap knows it is trivial to discover the open ports on a machine and the services running on them. If you really want to lock things down, some providers offer VPNs which can be used to keep sensitive services such as SSH behind them. You could also configure the firewall to block all traffic to SSH except traffic originating from certain IPs, such as your home network, but most ISPs provide dynamic IPs so there is a chance of locking yourself out.
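As a rough sketch (assuming an Ubuntu server with OpenSSH and ufw), the hardening above boils down to a few standard sshd_config directives plus a fail2ban install:

```bash
# Relevant directives in /etc/ssh/sshd_config (standard OpenSSH options):
#   PermitRootLogin no
#   PasswordAuthentication no
#   PubkeyAuthentication yes
# After editing, restart the daemon:
sudo systemctl restart ssh

# Install fail2ban; its default jail already rate-limits SSH login failures.
sudo apt update && sudo apt install -y fail2ban
sudo systemctl enable --now fail2ban

# Optional, if ufw is enabled with a default deny policy: allow SSH only
# from a known address (203.0.113.10 is a placeholder). Risky if your home
# ISP hands out dynamic IPs.
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
```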
Setting up NGINX
The base Ubuntu repositories come with a recent version of NGINX; you can also use the repositories provided by the NGINX project, which are kept more up to date. In my case I used the base repositories. You can install NGINX by running:
```bash
sudo apt update
sudo apt install nginx
```
For added security you can even install the headers-more filter module, which allows you to set the Server response header to any string you want:
```bash
# Install the headers-more dynamic module from the Ubuntu repositories.
# You can then set, e.g., `more_set_headers "Server: hidden";` in a
# server block to override the Server response header.
sudo apt install libnginx-mod-http-headers-more-filter
```
Configuring a basic NGINX server block
NGINX supports vHosts, which look at the request's Host header to determine which site to serve. In my case I have one set up. To do this, run:
```bash
# Create and open the server block file in your editor of choice
sudo nano /etc/nginx/conf.d/default.conf
```
Add the following block to the newly created file:
Filename: /etc/nginx/conf.d/default.conf
```nginx
# A minimal server block sketch; example.com and the web root paths are
# placeholders, substitute your own domain and directories.
server {
    listen 80;
    listen [::]:80;

    server_name example.com www.example.com;

    root /var/www/example.com/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```
Customize the above file to your liking; in my case I have a subdirectory in /var/www that holds the html folder, and the site is named appropriately in the filename and in the server block. Next, to load the changes, run:
```bash
# Test the configuration, then reload NGINX to pick up the changes
sudo nginx -t && sudo systemctl reload nginx
```
Create an index.html in the location specified by the root definition in the above server block and add this text:
Filename: /var/www/html/index.html
```html
<!-- Any simple placeholder works here; this just confirms that NGINX
     is serving files from your web root. -->
<h1>Hello from NGINX</h1>
```
Now you should see the above page when browsing to your website.
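To sanity-check from the command line first (with example.com standing in for your domain), curl should return the page with a 200 status:

```bash
# Fetch the headers and body to confirm NGINX is serving the page
curl -i http://example.com/
```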
Setting up Let’s Encrypt
To use the free SSL certificates provided by Let's Encrypt, we need to configure certbot, which will automatically provision the certificate when run and renew it every 90 days, providing a nice experience for users. Note that if you need extended validation and the like, Let's Encrypt does not provide that and a paid option will be needed, but for a basic blog Let's Encrypt should suffice. Below we will install certbot with snap. My server runs Ubuntu, which comes with snap preinstalled; if your distribution does not, you can find installation instructions here. You can also install certbot with pip, Python's package manager. Below are the steps to install:
```bash
# Install certbot via snap, expose it on the PATH, then let it obtain
# a certificate and update the NGINX configuration automatically
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot --nginx
```
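The snap package also sets up a timer that handles the 90-day renewals for you; you can confirm renewal will succeed using certbot's built-in dry run:

```bash
# Simulates a renewal against the Let's Encrypt staging environment
sudo certbot renew --dry-run
```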
So now your site should be reachable via HTTPS. Next we will cover setting up a deployment pipeline in GitLab that uses rsync to deploy the site.
Configuring the GitLab pipeline
In your project's directory, create a file named `.gitlab-ci.yml`. Paste the below contents into the file and change it to meet your needs; for example, if your main branch is called master, you can change `only` to master. The below YAML assumes you have `.gitlab-ci.yml` in the root and your Hugo site in the site directory. The script uses the `hugomods/hugo:debian-node` image, a community-maintained Docker image that can be used for development; in this case it is used to build the static files for my site. I use Tailwind CSS for this site and use the `npm install` command to install the tailwindcss and @tailwindcss/cli packages that are defined in the `package.json` file in the Hugo site directory.
Filename: .gitlab-ci.yml
```yaml
# A sketch reconstructed from the description above; job names, paths,
# and the exact hugo flags are assumptions, adjust to your repository.
stages:
  - build
  - deploy

build-site:
  stage: build
  image: hugomods/hugo:debian-node
  tags:
    - build
  script:
    - cd site
    - npm install
    - hugo --minify --baseURL "${BASE_URL}"
  artifacts:
    paths:
      - site/public
  only:
    - main

deploy-site:
  stage: deploy
  tags:
    - deploy
  script:
    # The deploy runner uses the shell executor on the VPS, so rsync
    # copies the built files straight into the NGINX web root.
    - rsync -av --delete site/public/ "${SITE_LOCATION}"
  only:
    - main
```
The above pipeline uses two agents, one with the tag "build" and one with the tag "deploy". In my case the build agent runs on my home network, and the deploy agent runs on the VPS that hosts my site. I initially had it set up to use the same agent for both steps, with a deploy account on the VPS that had rbash set as its shell and a private key used for login, but I decided to switch to a gitlab-runner with the shell executor on my VPS that just runs the rsync command to copy the files from the artifact directory to the NGINX web root for my site. This sped up the build/deploy process; it now takes under a minute to build and deploy any changes that are merged or pushed to the main branch of my private repository. I set up two pipeline variables in my project: `${BASE_URL}`, which holds my site's base URL, and `${SITE_LOCATION}`, which holds the location of my vHost's web root directory for rsync to copy the files to.
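For reference, registering the deploy runner on the VPS looks roughly like the following; the URL is a placeholder, and the token prompt expects the value from your project's CI/CD runner settings. Choose the shell executor and give the runner the "deploy" tag when prompted.

```bash
# Install the gitlab-runner package first (see GitLab's docs), then
# register it against your GitLab instance or gitlab.com
sudo gitlab-runner register \
  --url "https://gitlab.com/" \
  --executor "shell" \
  --description "vps-deploy"
```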
Conclusion
Well, that is the end of this post. I hope it helps someone looking for a way to deploy a Hugo site to a VPS using GitLab and Docker.