I’ve been thinking about starting a blog for a while now. The main motivation is that I want a written record of things I am passionate about. As part of my job, I go through a lot of documentation, which takes up hours of my time. It is a requirement for anyone working in this fast-changing and dynamic industry. In most cases, all those hours of reading documentation boil down to being able to understand what you need to do, making a plan, and implementing that plan to accomplish your goal.

This blog is meant to distill those many hours into simple, concise, and well-explained steps that will allow me to reproduce them if I ever need to. If it helps you, the reader, then I consider that a huge bonus, and I thank you for even taking the time to go through my humble little blog.

The ramp-up

We have come a long way from deploying websites on shared hosting servers, where one physical server installed with Plesk or cPanel used to host up to 500 individual websites. These days everything is serverless this, Docker that. While I do embrace this new trend, and I use it extensively, every now and then I like to get my hands dirty. I like being able to tweak every aspect of a deployment. These kinds of skills are perishable; if not exercised, in time you forget. Besides, manually deploying your own software can be rewarding.

So, the decision to start a blog was made. All that was needed was a blogging platform and servers to deploy it on.

So which platform to choose? If you do a quick Google search, chances are that the first result that pops up on your screen is WordPress. It is the most widely used platform on the internet to date. It powers a huge chunk of the websites we visit, whether we realize it or not. However, WordPress has stopped being just a simple blogging platform and has evolved into a full-fledged CMS with a wide range of use cases. That is fine for the most part, and it did contribute to the wide adoption rate it currently enjoys, but I needed something that was just a simple blogging platform.

Enter Ghost. Ghost is a no-nonsense, simple blogging platform. I have always been more comfortable using tools that do not require a GUI and allow me to simply type what I want, rather than click through an interface. When I read that I could simply use markdown to add content, I was sold. Also, the fact that it is advertised as being lightweight and fast is a huge plus. I encourage you to read more about this lovely platform on their web page.

The next thing to decide was where to host it. My initial thought was Digital Ocean. I already had a couple of droplets there, mainly to host a ZNC IRC bouncer and to occasionally drop a file I wanted to share. The problem is that if you need anything more than a 1 vCPU instance with 512 MB of memory and 20 GB of disk space, it tends to get quite expensive.

For example, a decently powered instance with 2 vCPUs and 4 GB of memory costs around $20. If you add a 100 GB disk, that is an extra $10. So in total, I would be paying $30 each month. That is almost as much as a bare metal server that you can rent from Hetzner. The one that I am currently running on has a Core i7-4770 CPU with hyper-threading enabled. In terms of vCPUs, that means roughly 8 (1 core = 2 vCPUs). It also has 2 x 2 TB HDDs in a RAID1 configuration and 32 GB of memory. Total cost of this server: ~$39 (€31).

Needless to say, I went with the Hetzner server. Almost the same cost, but with much more room to grow.

I have to say that Digital Ocean is great. The interface is amazing, the services on offer are solid, and you can pretty much be autonomous when it comes to troubleshooting. So if you care about ease of use, I would definitely recommend Digital Ocean.

The setup

When requesting a server from Hetzner, you have the option to select your operating system of choice. I opted for Ubuntu 16.04. The setup process is automated, so after you request the server, it takes around 10-15 minutes before you receive an email with login information.

This is where the fun starts.

I like the idea of containers and isolating each application in its own container. Docker comes to most people's minds when you mention containers, and for good reason. It is one of the most portable and easiest ways to deploy applications, but as I mentioned at the beginning of this post, I like to get my hands dirty. I still used containers to deploy everything, but instead of application containers I opted for operating system containers (there is an important distinction there). So for this exercise, everything was containerized using LXD containers.

If you are curious about setting up LXD, have a look at the getting started page. It describes in detail how to set up LXD on various distros, and how to initialize it. I won’t cover setting up LXD here.

In the following sections we will:

  • Deploy nginx as a reverse proxy on the bare metal node itself
  • Deploy MariaDB in its own container
  • Deploy Ghost in its own container

The components fit together simply: nginx on the bare metal host acts as the reverse proxy, forwarding requests to the Ghost container, which in turn connects to the MariaDB container.

Not really rocket science, right?

Installing MariaDB

First, let’s launch a new container for our database:

# This will launch a new container called
# mysql using the default Ubuntu image
# At the time of this writing, the default ubuntu
# image is 16.04
lxc launch ubuntu: mysql

Wait for the container to start, and fetch the IP:

$ lxc list
| NAME  |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
| mysql | RUNNING | <mysql-ip> (eth0)     |      | PERSISTENT | 0         |

Enter the container:

$ lxc exec mysql bash

And install MariaDB:

apt-get update && apt-get -y dist-upgrade
apt-get install -y mariadb-server-10.0

Edit /etc/mysql/mariadb.conf.d/50-server.cnf and bind the server to listen on all interfaces:

bind-address            =

Then restart MariaDB:

systemctl restart mysql

At this point the blog does not have many visitors, so I will not worry about tuning MariaDB for high traffic; for the rest of the configuration options, the defaults are fine.

While we are here, we might as well create a database and user for our blog:

MariaDB [(none)]> create database ghost;
MariaDB [(none)]> grant all on ghost.* to 'ghost'@'10.200.200.%' identified by 'SuperSecretPassword';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Needless to say, the password needs to be replaced with something more secure.
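One quick way to generate a strong random password is with openssl (a sketch; openssl ships with stock Ubuntu, but any password generator will do):

```shell
# 24 random bytes, base64-encoded: yields a 32-character password
openssl rand -base64 24
```

Paste the output into the GRANT statement above in place of SuperSecretPassword.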

Installing Ghost

Back on the hardware host, we need to create another container for our blog:

lxc launch ubuntu: ghost

Wait for the container to fully start and get an IP address:

$ lxc list
| NAME  |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
| ghost | RUNNING | <ghost-ip> (eth0)     |      | PERSISTENT | 0         |
| mysql | RUNNING | <mysql-ip> (eth0)     |      | PERSISTENT | 0         |

Enter the container:

lxc exec ghost bash

First, let’s validate that we can actually connect to the MariaDB server from this container. To do that, we need to install the MariaDB client:

apt-get update
apt-get install -y mariadb-client-core-10.0

Now try to connect from the ghost container to the mysql container:

mysql -h <mysql-ip> -u ghost -D ghost -pSuperSecretPassword

If you’ve done everything right, you should get a MariaDB prompt. If not, retrace your steps, and make sure you correctly bound MariaDB to and granted the ghost user permission to connect from your Ghost container.

According to the official Ghost documentation, Ghost requires version 6.x of Node.js. I installed it from NodeSource:

apt-get update && apt-get -y dist-upgrade
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
apt-get install -y nodejs

The easiest way to install Ghost is via ghost-cli. Install the CLI now; we will need it later on:

npm i -g ghost-cli

Add a new user under which we will run Ghost:

useradd -s /bin/bash -m ghost
usermod -aG sudo ghost
echo 'ghost ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/91-ghost

Create a folder in which we will install Ghost:

mkdir -p /var/www/ghost
chown ghost:ghost /var/www/ghost
chmod 755 /var/www/ghost

Log in as the ghost user:

su - ghost

Navigate to the folder where Ghost will be installed:

cd /var/www/ghost

And run the install command. It will complain about not being able to find the nginx package or a local database. That is perfectly fine, as we will be setting those up manually:

ghost install
✔ Checking system Node.js version
✔ Checking current folder permissions
System checks failed with message: 'Missing package(s): nginx'
Some features of Ghost-CLI may not work without additional configuration.
For local installs we recommend using `ghost install local` instead.
? Continue anyway? Yes
ℹ Checking operating system compatibility [skipped]
Local MySQL install not found. You can ignore this if you are using a remote MySQL host.
Alternatively you could:
a) install MySQL locally
b) run `ghost install --db=sqlite3` to use sqlite
c) run `ghost install local` to get a development install using sqlite3.
? Continue anyway? Yes
ℹ Checking for a MySQL installation [skipped]
✔ Checking for latest Ghost version
✔ Setting up install directory
✔ Downloading and installing Ghost v1.21.5
✔ Finishing install process
? Enter your blog URL: https://samfira.com
? Enter your MySQL hostname:
? Enter your MySQL username: ghost
? Enter your MySQL password: [hidden]
? Enter your Ghost database name: ghost
✔ Configuring Ghost
✔ Setting up instance
Running sudo command: chown -R ghost:ghost /var/www/ghost/content
✔ Setting up "ghost" system user
? Do you wish to set up "ghost" mysql user? No
ℹ Setting up "ghost" mysql user [skipped]
? Do you wish to set up Nginx? No
ℹ Setting up Nginx [skipped]
Task ssl depends on the 'nginx' stage, which was skipped.
ℹ Setting up SSL [skipped]
? Do you wish to set up Systemd? Yes
✔ Creating systemd service file at /var/www/ghost/system/files/ghost_samfira-com.service
Running sudo command: ln -sf /var/www/ghost/system/files/ghost_samfira-com.service /lib/systemd/system/ghost_samfira-com.service
Running sudo command: systemctl daemon-reload
✔ Setting up Systemd
✔ Running database migrations
? Do you want to start Ghost? Yes
✔ Checking current folder permissions
✔ Validating config
✔ Checking folder permissions
✔ Checking file permissions
Running sudo command: systemctl start ghost_samfira-com
✔ Starting Ghost
Running sudo command: systemctl enable ghost_samfira-com --quiet
✔ Starting Ghost
You can access your blog at https://samfira.com

Ghost uses direct mail by default
To set up an alternative email method read our docs at https://docs.ghost.org/docs/mail-config

Let’s check if Ghost started:

sudo netstat -tupnl|grep node
tcp        0      0*               LISTEN      2707/node

Excellent, we have Ghost set up and running on port 2368. However, it is listening on, and we need to configure it to listen on all interfaces. Our reverse proxy is not installed on the same machine, so this app server needs to be accessible from the hardware host, where nginx will be installed.

Edit /var/www/ghost/config.production.json and change the host setting to the following value:

"host": ""
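For context, the relevant parts of config.production.json look roughly like this (an illustrative sketch; the actual values will match what you entered during ghost install, and <mysql-ip> stands in for your MariaDB container's address):

```json
{
  "url": "https://samfira.com",
  "server": {
    "port": 2368,
    "host": ""
  },
  "database": {
    "client": "mysql",
    "connection": {
      "host": "<mysql-ip>",
      "user": "ghost",
      "password": "SuperSecretPassword",
      "database": "ghost"
    }
  }
}
```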

Now restart Ghost. A systemd service was created as part of the ghost install command. The name of the service is of the form ghost_example-com.service. In my case, I used samfira.com as the domain name, so the service is called ghost_samfira-com.service. If you look closely at the output from ghost-cli, you will see a line like this:

✔ Creating systemd service file at /var/www/ghost/system/files/ghost_samfira-com.service

Let’s restart it:

sudo systemctl restart ghost_samfira-com.service

Check again:

sudo netstat -tupnl|grep node
tcp        0      0*               LISTEN      2825/node

Installing nginx

Back on the hardware host.

The version of nginx that comes with Ubuntu 16.04 should be adequate:

sudo apt-get install nginx-full

Obtain a certificate

Obtaining a certificate used to be quite a process, which meant most people opted to enable HTTPS on their websites only if absolutely necessary.

Nowadays there is Let’s Encrypt, an open certificate authority that aims to create a more secure web for us all by giving users the ability to automatically generate certificates for their servers, free of charge. So there is really no excuse not to enable SSL anymore.

Let’s Encrypt provides a really nice tool called certbot, which allows you to set up SSL and renew any installed certificates in just a few simple steps. The certificates expire every 3 months, but you can automatically renew them using a simple cron job.
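The renewal job can be as simple as the following entry (a sketch; the file name and schedule are my own choices, and newer certbot packages may already install a renewal cron job or systemd timer for you):

```crontab
# /etc/cron.d/certbot-renew (hypothetical file name)
# Run twice a day; `certbot renew` only renews certificates that are
# close to expiry, so the extra runs are cheap no-ops.
17 5,17 * * * root /usr/bin/certbot renew --quiet
```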

Installing certbot

Add the PPA:

sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update

Install certbot:

sudo apt-get install python-certbot-nginx

Before running certbot, make sure you create an A record for the domain or subdomain you will be using for your blog, and point it to the IP address of your server. Now, to request a new certificate and configure nginx to use it, run the following command:

sudo certbot --nginx

Certbot will scan through all your configured virtual hosts, find the one for your domain, and enable SSL for it. If it cannot find one, it will automatically configure the default virtual host for you. In my case that was more than enough; I am not planning on hosting any other website on this server anytime soon.

You can opt to allow certbot to create the necessary server configuration to redirect non-HTTPS traffic to your HTTPS virtual host, and I highly recommend you do that.

Now that we have our web server set up and SSL is enabled, let’s move on.

Configuring nginx

Nginx provides a nice module called upstream, which allows you to create groups of servers that can be referenced by the proxy_pass directive.

In my case, there is only one container servicing this blog (for now), and I could simply use the IP address of the Ghost container as an argument for proxy_pass. However, I prefer to set up an upstream section for it now. Later, I can snapshot my container and replicate it to as many servers as I want, enabling me to scale by adding the IP addresses of those containers to the upstream definition. The upstream module also allows you to service individual backend servers without worrying about downtime: if you take one offline, nginx is smart enough to automatically route traffic to the remaining servers in the group.

By default all servers added to an upstream block are load balanced using the round-robin method. This is configurable of course.
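As a sketch of what that scaled-out future could look like, here is a hypothetical upstream block with several backends (the IPs, weights, and backup flag are made up for illustration):

```nginx
upstream ghost_backend {
    # round-robin is the default; weight biases the distribution
    server weight=5;
    server weight=1;
    # a backup server only receives traffic when the others are down
    server backup;
}
```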

To make it easier to manage upstream definitions, I created a separate folder that mirrors how configuration files containing virtual hosts are organized.

Edit /etc/nginx/nginx.conf and at the bottom of that file you should have something similar to:

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

Add another line, so that the config now looks like this:

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
# Upstream configuration files
include /etc/nginx/upstream-enabled/*;

Create the upstream config folders:

# We can store any upstream definitions here
mkdir /etc/nginx/upstream-available

# Only enable the ones we need
mkdir /etc/nginx/upstream-enabled

And reload nginx:

sudo systemctl reload nginx

Putting it all together

So we have our database set up, we have Ghost set up and connected to the database. We’ve also configured nginx to include config files from an upstream definition folder. Time to configure nginx to pass requests to our deployment of Ghost.

Create a new file /etc/nginx/upstream-available/ghost.conf that will contain the upstream servers for our ghost blog:

upstream ghost_backend {
    server <ghost-ip>:2368 weight=5;
}

The IP address used here should be the address of the container running Ghost. You can run lxc list to fetch the IP.

Now we need to enable this upstream, so we create a symbolic link from the upstream-available to the upstream-enabled folder:

ln -s /etc/nginx/upstream-available/ghost.conf \
      /etc/nginx/upstream-enabled/ghost.conf

Good! Now we need to edit our virtual host and pass traffic to the newly defined upstream. I used the default virtual host, so in my case, I need to edit /etc/nginx/sites-enabled/default. This is the full contents of that config:

server {
        root /var/www/html;
        index index.html index.htm;
        server_name samfira.com; # managed by Certbot

        location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header HOST $http_host;
                proxy_set_header X-NginX-Proxy true;

                # Proxy pass to the backend we defined earlier
                proxy_pass http://ghost_backend;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_redirect off;
        }

        listen [::]:443 ssl ipv6only=on; # managed by Certbot
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/samfira.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/samfira.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
        # Redirect non https traffic to https
        if ($host = samfira.com) {
                return 301 https://$host$request_uri;
        } # managed by Certbot

        listen 80;
        listen [::]:80;
        server_name samfira.com;
        return 404; # managed by Certbot
}
The interesting bit is:

# Proxy pass to the backend we defined earlier
proxy_pass http://ghost_backend;

This tells nginx to proxy requests to the backend we defined. If you don’t want to use an upstream here, feel free to replace

proxy_pass http://ghost_backend;

with the address of the Ghost container directly, for example proxy_pass http://<ghost-ip>:2368;.
Now reload or restart nginx:

sudo systemctl restart nginx

And you are done! Enjoy!