You don't have a backup until you have restored it. Until you actually test that your backups work, you can't know whether they will be of any use in the future. In a real production environment you would use CI/CD for the backup system too, but in this post we will talk about creating simple backups for the Ghost CMS with the help of Ansible.

Deployment process

First, let's talk about how I deploy this blog. I use docker-compose, so I can forget about installing or maintaining dependencies. The docker-compose.yml looks like this:

version: '3.1'

services:

  ghost:
    image: ghost:2
    restart: always
    volumes:
      - ./ghost/content:/var/lib/ghost/content
    ports:
      - 0.0.0.0:80:2368
    environment:
      url: "http://dev.libreneitor.com"

The ghost Docker image uses SQLite3 by default and stores the database in /var/lib/ghost/content/data/ghost.db. Therefore, simply copying that file gives us a backup of the database.

But what we really want is to back up everything, not just the database. To do that, we only need to back up the whole content folder. And that's it! So, to make our lives easier, I decided to mount the host's ghost/content folder directly as the container's content folder, as you can see in the volumes section above.
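To see what the Ansible script further down will automate, this is roughly what a one-off manual backup looks like on the server. The archive name is just an example; the /home/ubuntu path is where my docker-compose.yml lives.

# stop the stack so nothing writes to the SQLite file while we copy it
cd /home/ubuntu
docker-compose down

# archive the whole content folder: database, images and themes
tar -czvf /tmp/ghost-manual-backup.tar.gz ghost

# bring the blog back up
docker-compose up -d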

Ansible script

In this context the steps necessary to make the backup are:

  • We ensure that the AWS CLI is already set up with all the necessary permissions on our local machine.
  • We install Python on the remote server so that Ansible can run its modules.
  • We stop Docker so that we don't copy data in an inconsistent state.
  • We compress and download the ghost folder.
  • We start Docker again.
  • And finally, we upload the new backup to AWS S3.
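The script targets a blog-servers group, so it assumes an inventory file along these lines (the host name and user here are placeholders for your own server):

[blog-servers]
blog1 ansible_host=dev.libreneitor.com ansible_user=ubuntu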

This is the Ansible script for the previously described steps:

- name: check access to s3
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: aws s3 ls
      shell: aws s3 ls s3://your-backup-bucket/

- name: ensure python is installed in remote
  hosts: blog-servers
  gather_facts: no
  tasks:
    - name: check Python
      raw: test -e /usr/bin/python
      changed_when: false
      failed_when: false
      register: check_python
    - name: install Python
      raw: sudo apt-get update && sudo apt-get install -y python
      when: check_python.rc != 0


- name: backup blog
  hosts: blog-servers
  gather_facts: no
  tasks:
    - name: docker-compose down
      shell: /snap/bin/docker-compose down
      args:
        chdir: /home/ubuntu
    - name: outfile name
      shell: echo "ghost_{{inventory_hostname}}_$(date '+%F-%T').tar.gz"
      register: backup_file_name
    - name: compress files
      shell: tar -czvf  /tmp/{{backup_file_name.stdout}} ghost
      args:
        chdir: /home/ubuntu/
    - name: docker-compose up
      shell: /snap/bin/docker-compose up -d
      args:
        chdir: /home/ubuntu
    - name: copy backup to local machine
      block:
        - name: ensure backups folder exists
          connection: local
          file:
            path: ./backups
            state: directory
        - name: copy backup file to localhost
          fetch:
            src: /tmp/{{backup_file_name.stdout}}
            dest: ./backups/{{backup_file_name.stdout}}
            flat: true
    - name: delete temporary files
      shell: rm /tmp/ghost_{{inventory_hostname}}_*


- name: upload backup
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: aws s3 sync
      shell: aws s3 sync backups s3://your-backup-bucket/blog/
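Assuming the playbook is saved as backup.yml next to that inventory file, the whole backup runs with a single command:

ansible-playbook -i inventory backup.yml

Once that works, scheduling it from cron or a CI job is trivial.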

Restore

With the previous Ansible script, we created .tar.gz files with all the Ghost files necessary to restore our site, and we uploaded them to S3. But how do we know that this actually works? Well:

  • We download the latest S3 backup.
  • Then, we restore all the data into a local test Ghost container.

We can run the original docker-compose.yml locally, with all the data decompressed in the same folder. Then, we can simply check that the site works.
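As a sketch, assuming the bucket layout from the script above (the backup file name is a placeholder you would replace with the newest one from the listing):

# list the available backups and pick the newest one
aws s3 ls s3://your-backup-bucket/blog/

# download and unpack the chosen file next to a copy of docker-compose.yml
BACKUP=ghost_blog1_2019-03-01-12:00:00.tar.gz
aws s3 cp s3://your-backup-bucket/blog/$BACKUP .
tar -xzvf $BACKUP

# start a local Ghost against the restored content folder
docker-compose up -d

If port 80 is already taken on your machine, change the port mapping in the compose file first; after that, opening the site in the browser is all the verification we need.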

Obviously, this is a very minimalistic approach, but I think it is good enough for this type of website. After all, plenty of big companies don't even automate or test their backup restoration process.