For the past six years I have been tracking my family's predictions for reality tv.1 The first version was paper and pencil (with some phone calls and emails thrown in). Once my family decided it was fun, I built a website to keep track. Fantasy Reality has been my playground for new technologies. I have written versions in PHP, Node.js, Rails, and Sinatra.2

Since there is always a new technology stack, deployments are a nightmare. I usually dedicate a weekend to futzing around with a VPS on Digital Ocean before the new tv season. This chaotic process works, but it is stressful and time-consuming. So this fall, instead of creating a new version3, I focused on deployment.

The goal was to find a generic deployment method that supports whatever programming language / technology stack I choose to use. I experimented with Chef and Salt4 before settling on Docker. Docker was inspired by the shipping industry, where everything is packed into a standard-size container.5 The container is an elegant concept that provides a clear boundary of responsibility. Inside the container an application can do whatever the developer wants, while from the outside all containers function the same and can be deployed, linked, and scheduled independently of their internal mechanics.
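
This boundary is easiest to see from the command line: containers with completely different stacks inside are started and stopped with exactly the same commands (the image names here are hypothetical).

docker run --name=app1 --detach=true example/rails-app
docker run --name=app2 --detach=true example/golang-app
docker stop app1 app2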

Architecture

Now that the introduction is out of the way, let's take a look at the architecture for Fantasy Reality.

[Diagram: Architecture Overview]

I am using a VPS at Digital Ocean6 as the base infrastructure. Since there are only a handful of users, I am able to get away with a single 512 MB instance. Fantasy Reality uses three Docker containers: a proxy, an application server, and a data container.

Proxy Server

The role of the proxy server is to send traffic from the publicly accessible port 80 to an available application server and to filter out malformed requests. Since each application server is deployed on a random port, the proxy is also responsible for dynamically regenerating its configuration and restarting itself whenever a new application server is deployed.

Jason Wilder created an excellent tool, nginx-proxy, for this exact job and wrote a blog post explaining how it works. tl;dr: it listens to the Docker API for new containers and spits out an nginx config with the proper upstream locations defined. This is the tool I am currently using for Fantasy Reality, but I left the diagram ambiguous just in case there is an unforeseen issue with nginx-proxy.
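
The generated config looks roughly like the sketch below. The upstream address is whatever internal IP and port Docker assigned to the application container, and the real output includes more proxy headers, so treat this as illustrative rather than nginx-proxy's exact output.

upstream fantasyreality.org {
    server 172.17.0.3:9292;
}

server {
    listen 80;
    server_name fantasyreality.org;
    location / {
        proxy_pass http://fantasyreality.org;
    }
}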

Application Server

The application server encapsulates Ruby, Rails, the Fantasy Reality code repository, and its gems. This is a wonderful change from juggling RVM and gemsets. The built Docker image is available on Docker Hub. The Dockerfile used to generate the image is available in my git repository, and a snapshot is included below.

# DOCKER-VERSION 1.0.0

FROM rampantmonkey/base-ubuntu:14.04
MAINTAINER Casey Robinson <casey@rampantmonkey.com>

# Install build tools and runtime dependencies
RUN apt-get update -y
RUN apt-get install -y -q build-essential sqlite3 libsqlite3-dev
RUN apt-get install -y -q ruby2.0 ruby2.0-dev git nodejs

# Make ruby2.0 the default ruby and gem binaries
RUN ln -fs /usr/bin/ruby2.0 /usr/bin/ruby
RUN ln -fs /usr/bin/gem2.0 /usr/bin/gem

RUN gem install bundler

ENV RAILS_ENV production
ENV RACK_ENV production

# Copy the application into the image and install its gems
ADD . /var/fr
RUN cd /var/fr && bundle install

EXPOSE 9292

WORKDIR /var/fr

ENTRYPOINT ["bundle", "exec", "rackup"]

Database Volume

The database volume is a container that encapsulates a directory tree and exposes it to other containers. This level of indirection allows the application server container to stay portable and ignore backups and file system intricacies. One trick that I learned is how to back up and restore the database. The goal is to build a container that acts as a bridge between the database volume and the local file system: create a new container with --volumes-from the database volume and a second volume, -v /tmp/bridge:/bkup, that maps to a local directory. Here are the scripts I use for this process.7

deploy@fantasyreality:~# cat backup.sh
#!/bin/bash
set -e
name="$(date +%Y%m%d_%H%M%S).sqlite3"
docker run --volumes-from=frdata --volume="/tmp/bridge:/bkup" --rm=true --tty=false rampantmonkey/base-ubuntu cp /var/db/production.sqlite3 /bkup/"$name"
deploy@fantasyreality:~# cat restore.sh
#!/bin/bash
set -e
name="$1"
docker run --volumes-from=frdata --volume="/tmp/bridge:/bkup" --rm=true --tty=false rampantmonkey/base-ubuntu cp "/bkup/$name" /var/db/production.sqlite3
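
A round trip looks something like this (the timestamped filename is just an example of what backup.sh generates):

deploy@fantasyreality:~# ./backup.sh
deploy@fantasyreality:~# ls /tmp/bridge
20141004_031500.sqlite3
deploy@fantasyreality:~# ./restore.sh 20141004_031500.sqlite3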

Startup

Now that we have all of the pieces, we need a method for reliably starting all of the containers. The orchestration of Docker containers is an unsolved problem with many attempted solutions. Fortunately Fantasy Reality only requires one machine, so we can use a simple bash script.

deploy@fantasyreality:~# cat spinup.sh
#!/bin/bash
set -e

# proxy: watches the Docker socket and rewrites its nginx config as containers come and go
docker run --name=proxy --detach=true --publish=80:80 --volume=/var/run/docker.sock:/tmp/docker.sock --tty=false jwilder/nginx-proxy
# data container: runs `true` and exits immediately, but its /var/db volume persists for --volumes-from
docker run --name=frdata --volume=/var/db --detach=true rampantmonkey/base-ubuntu true
# application server: VIRTUAL_HOST tells nginx-proxy which requests to route here
docker run --volumes-from=frdata --detach=true --env=VIRTUAL_HOST=fantasyreality.org --publish-all=true --tty=false rampantmonkey/fantasyreality

Update

Docker containers are built from read-only file system layers (images in Docker parlance). Thus, to deploy an update we have to start a new container from the updated image and destroy the old one. Starting the replacement before stopping the old container also means nginx-proxy can switch over with essentially no downtime.
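
If you are curious what those layers look like, docker history lists them, one per instruction in the Dockerfile (plus the layers of the base image):

deploy@fantasyreality:~# docker history rampantmonkey/fantasyreality

The following script performs the swap for Fantasy Reality.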

deploy@fantasyreality:~# cat update.sh
#!/bin/bash
set -e

docker pull rampantmonkey/fantasyreality
# find the running application container before starting its replacement
old_container_id=$(docker ps | grep 9292/tcp | head -n 1 | awk '{print $1}')
docker run --volumes-from=frdata --detach=true --env=VIRTUAL_HOST=fantasyreality.org --publish-all=true --tty=false rampantmonkey/fantasyreality
# stop and remove the old container once the new one is up
if [ -n "$old_container_id" ]; then
  docker stop "$old_container_id"
  docker rm "$old_container_id"
fi

Future improvements

This setup works better than anticipated. Still, there are a few improvements I would like to implement.

  • automatic container orchestration (beyond my rudimentary shell scripts)
  • backup database dump to different system
  • stats (including resource usage) tracking
  • log aggregation

  1. Mostly The Amazing Race and Survivor. 

  2. Rails is the current implementation, but be on the lookout for a golang implementation. 

  3. I did upgrade from Rails 3 to Rails 4.1 and added support for touch devices.

  4. Chef is out, but I am still considering Salt for container orchestration (that should be a whole other blog post).

  5. The Box 

  6. My referral link for Digital Ocean if you are so inclined. 

  7. Since I am using SQLite the backup process is just a cp, but a similar approach could work with pg_dump or mysqldump.