
Putting it All Together: Triton, Consul, Nginx and CloudFlare

In earlier posts I discussed dynamically updating Nginx and leveraging that to update CloudFlare. Today I would like to tie it all together.

Auxiliary Code:

  1. ContainerBuddy
  2. Consul
  3. CloudFlare
  4. Manta
  5. Joyent sdc
  6. Docker Compose

Containers

  1. Nginx (from node:argon with nginx installed)
  2. Consul (from progrium/consul:latest)
  3. App containers, one per app.

You will notice that there are no database containers. I am using MongoDB in my applications, but that is hosted externally at MongoLab.

ContainerBuddy is at the heart of this architecture, and we can use it essentially as a “black box” for coordination. That is a good thing because, although I can read Go, I cannot write it (that is on my bucket list, though).

ContainerBuddy uses a Docker Compose file as its source for coordination.

consul:
    image: progrium/consul:latest
    command: >
      -server
      -bootstrap-expect 1
      -ui-dir /ui
    mem_limit: 128m
    ports:
    - 53
    - 8300
    - 8301
    - 8302
    - 8400
    - 8500
    - 8600
    restart: always


nginx:
    image: donniev/projects:nginx.testmanta
    mem_limit: 128m
    ports:
    - 80
    links:
    - consul:consul
    - hexo:hexo
    - socketserver:socketserver
    - crypto:crypto
    - ninja:ninja
    - notify:notify
    restart: always
    environment:
    - CONTAINERBUDDY=file:///opt/containerbuddy/nginx.json
    command: >
      /opt/containerbuddy/containerbuddy
      nginx -g "daemon off;"

socketserver:
    image: donniev/projects:socketserver.test
    mem_limit: 128m
    ports:
    - 2200
    links:
    - consul:consul
    restart: always
    command: >
      /opt/containerbuddy/containerbuddy
      -config file:///opt/containerbuddy/socketserver.json
      /usr/local/bin/node /application/index.js

crypto:
    image: donniev/cryptogram:test
    mem_limit: 128m
    ports:
    - 8081
    links:
    - consul:consul
    - socketserver:socketserver
    restart: always
    command: >
      /opt/containerbuddy/containerbuddy
      -config file:///opt/containerbuddy/crypto.json
      /usr/local/bin/node /application/bin/www.js

ninja:
    image: donniev/lendingclubninja:test
    mem_limit: 128m
    ports:
    - 3500
    links:
    - consul:consul
    restart: always
    command: >
      /opt/containerbuddy/containerbuddy
      -config file:///opt/containerbuddy/ninja.json
      /usr/local/bin/node /application/bin/www.js

Notice that the entry point for all containers is the containerbuddy executable. Each container also has the command you would use if you were running it with Docker outside of ContainerBuddy (e.g. /usr/local/bin/node /application/bin/www.js), and each container has a config file that ContainerBuddy uses to register services with Consul.

{
  "consul": "consul:8500",
  "onStart": "/opt/containerbuddy/reload-nginx.sh",
  "services": [
    {
      "name": "nginx",
      "port": 80,
      "interfaces": ["eth0"],
      "health": "/usr/bin/curl --fail -s -o /dev/null http://localhost/health.txt",
      "tags": ["joyent.nginx.dailycryptogram.com"],
      "poll": 10,
      "ttl": 25
    }
  ],
  "backends": [
    {
      "name": "crypto",
      "poll": 7,
      "onChange": "/opt/containerbuddy/reload-nginx.sh"
    },{
      "name": "hexo",
      "poll": 7,
      "onChange": "/opt/containerbuddy/reload-nginx.sh"
    },{
      "name": "notify",
      "poll": 7,
      "onChange": "/opt/containerbuddy/reload-nginx.sh"
    },{
      "name": "socketserver",
      "poll": 7,
      "onChange": "/opt/containerbuddy/reload-nginx.sh"
    },{
      "name": "ninja",
      "poll": 7,
      "onChange": "/opt/containerbuddy/reload-nginx.sh"
    }
  ]
}

For nginx, the config file instructs ContainerBuddy to reload nginx whenever there is a change in the associated app containers. We leverage that by updating the DNS entries in reload-nginx.sh (well, we call a node program from there which does the update). Notice there is also a health attribute, which instructs Consul to poll the service for health status.
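For reference, a minimal reload-nginx.sh might look like the following. This is only a sketch under assumptions: the template path, the name of the node DNS helper (updatedns.js), and the use of consul-template are inferred from the description above, not copied from the real script.

```shell
#!/bin/bash
# Hypothetical sketch of reload-nginx.sh. Assumes consul-template is
# installed in the nginx container and that the template was stored in
# Consul's KV store by the start script.

CONSUL=${CONSUL:-consul:8500}
CTMPL=/tmp/default.ctmpl
CONF=/etc/nginx/conf.d/default.conf

# fetch the stored template and render it against the live service catalog
if command -v consul-template >/dev/null 2>&1; then
    curl -s "http://${CONSUL}/v1/kv/nginx/template?raw" > "$CTMPL"
    consul-template -consul "$CONSUL" -once -template "${CTMPL}:${CONF}"
fi

# hypothetical node helper that pushes the new upstream ips to CloudFlare
if [ -f /opt/containerbuddy/updatedns.js ]; then
    node /opt/containerbuddy/updatedns.js
fi

# pick up the rendered config without dropping connections
nginx -s reload 2>/dev/null || STATUS="nginx not running"
STATUS=${STATUS:-"reload signalled"}
```

The guards (`command -v`, `-f`) just keep the sketch from failing outright when run outside the container; the real script can assume its own environment.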

The other config files are simpler because they are not dependent on backend changes:

{
  "consul": "consul:8500",
  "onStart": "/opt/containerbuddy/reload-hexo.sh",
  "services": [
    {
      "name": "hexo",
      "port": 4000,
      "tags":["joyent.blog.vawter.com"],
      "health": "/usr/bin/curl --fail -s -o /dev/null  http://localhost:4000/",
      "poll": 10,
      "ttl": 25
    }
  ],
  "backends": []
}

Starting Consul and nginx

We have a bash script that starts Consul and nginx. The following is based on the ContainerBuddy example start script. The major differences are that we do not start the individual applications and that we start nginx with the --no-deps flag. If you leave out that flag, all your apps will be launched.

#!/bin/bash

COMPOSE_CFG=
PREFIX=example

while getopts "f:p:" optchar; do
    case "${optchar}" in
        f) COMPOSE_CFG=" -f ${OPTARG}" ;;
        p) PREFIX=${OPTARG} ;;
    esac
done
shift $(expr $OPTIND - 1 )

COMPOSE="docker-compose -p ${PREFIX}${COMPOSE_CFG:-}"
CONFIG_FILE=${COMPOSE_CFG:-docker-compose.yml}

echo "Starting example application"
echo "project prefix:      $PREFIX"
echo "docker-compose file: $CONFIG_FILE"

echo 'Pulling latest container versions'
${COMPOSE} pull consul
${COMPOSE} pull nginx

echo 'Starting Consul.'
${COMPOSE} up -d consul

# get network info from consul and poll it for liveness
if [ -z "${COMPOSE_CFG}" ]; then
    CONSUL_IP=$(sdc-listmachines --name ${PREFIX}_consul_1 | json -a ips.1)
else
    CONSUL_IP=${CONSUL_IP:-$(docker-machine ip default)}
fi

echo "Writing template values to Consul at ${CONSUL_IP}"
while :
do
    # we'll sometimes get an HTTP500 here if consul hasn't completed
    # its leader election on boot yet, so poll till we get a good response.
    sleep 1
    curl --fail  -X PUT --data-binary @./nginx/default.ctmpl \
         http://${CONSUL_IP}:8500/v1/kv/nginx/template && break
    echo -ne .
done

echo
echo 'Copying consul ip'
#open http://${CONSUL_IP}:8500/ui
echo "${CONSUL_IP}" > /tmp/consulip

echo 'Starting Nginx'
${COMPOSE} up -d --no-deps nginx

The script pulls the latest Consul and nginx images, launches Consul, waits for it to go live, copies the consul-template for nginx to Consul as a key-value pair, writes the Consul IP to a temp file for later use, and launches nginx.

Starting individual apps

#!/bin/bash

COMPOSE_CFG=
PREFIX=example

while getopts "f:p:s:" optchar; do
    case "${optchar}" in
        f) COMPOSE_CFG=" -f ${OPTARG}" ;;
        p) PREFIX=${OPTARG} ;;
        s) SERVICE=${OPTARG} ;;
    esac
done
shift $(expr $OPTIND - 1 )

COMPOSE="docker-compose -p ${PREFIX}${COMPOSE_CFG:-}"
CONFIG_FILE=${COMPOSE_CFG:-docker-compose.yml}
echo $COMPOSE
echo 'Pulling latest container versions'
${COMPOSE} pull $SERVICE

CONSUL_IP=`cat /tmp/consulip`
CONSUL_IP=${CONSUL_IP:-localhost}

echo "$SERVICE"
${COMPOSE} up -d $SERVICE

This just pulls the latest image for the service and launches it with docker-compose. Note that we get the Consul IP from the temp file we created in start.sh.
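Put together, a typical session with the two scripts might look like this (the prefix and service names match the compose file above; the existence checks are only there so the sketch degrades gracefully outside the project directory):

```shell
#!/bin/bash
# Hypothetical end-to-end session using the two start scripts above.

PREFIX=test

# bring up consul and nginx first (start.sh writes /tmp/consulip)
if [ -x ./start.sh ]; then
    ./start.sh -p "$PREFIX"
fi

# then start each application service individually
for svc in hexo crypto notify ninja socketserver; do
    if [ -x ./startservice.sh ]; then
        ./startservice.sh -p "$PREFIX" -s "$svc"
    fi
done
```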

Controlling and Monitoring

We have a little bash script that wraps all the commands into a command line menu:

#!/usr/bin/env bash
source /Volumes/DataDrive/donnievbitbucket/docker_scripts/setuptriton.sh triton sw
PREFIX=$1
PREFIX=${PREFIX:-test}
echo "Using prefix: $PREFIX"
function doit(){
    OPTIONS="DSR Start Status tail-nginx open-consul hexo crypto notify ninja dockerlogs socketserver QUIT"
    OPTIONS2="consul nginx hexo crypto notify ninja socketserver QUIT"
    select opt in $OPTIONS; do
        if [ "$opt" = "QUIT" ]; then
            echo Done
            exit
        elif [ "$opt" = "Start" ]; then
            ./start.sh -p $PREFIX
            clear
            echo "Consul and nginx started"
            doit
        elif [ "$opt" = "open-consul" ]; then
            CONSUL_IP=`cat /tmp/consulip`
            CONSUL_IP=${CONSUL_IP:-localhost}
            open http://${CONSUL_IP}:8500/ui
            clear
            echo "Consul console opened"
            doit
        elif [ "$opt" = "Status" ]; then
            echo "services running"
            /usr/local/bin/docker ps | awk '{print $NF}' | sort
            doit
        elif [ "$opt" = "DSR" ]; then
            /Volumes/DataDrive/donnievbitbucket/docker_scripts/cleanjoyent.sh
            clear
            echo "Containers stopped and removed"
            doit
        elif [ "$opt" = "dockerlogs" ]; then
            select opt2 in $OPTIONS2; do
                if [ "$opt2" = "QUIT" ]; then
                    echo Done
                    clear
                    doit
                else
                    echo "docker logs for ${PREFIX}_${opt2}_1"
                    /usr/local/bin/docker logs ${PREFIX}_${opt2}_1
                    doit
                fi
            done
        elif [ "$opt" = "tail-nginx" ]; then
            /usr/local/bin/docker exec ${PREFIX}_nginx_1 tail -f /var/log/nginx/access.log
            echo "Nginx tail finished"
            doit
        else
            ./startservice.sh -p $PREFIX -s $opt
            clear
            echo "$opt started"
            doit
        fi
    done
}
doit

It produces a menu:

1) DSR		   4) tail-nginx     7) crypto	      10) dockerlogs
2) Start	   5) open-consul    8) notify	      11) socketserver
3) Status	   6) hexo	     9) ninja	      12) QUIT

DSR stops and removes all the containers, which is helpful in development.
Selecting an individual service (e.g. hexo) launches it.
Selecting dockerlogs shows the logs for an individual service.
tail-nginx tails the nginx access log.
Status shows the running services.
open-consul opens the Consul dashboard.

Scaling, stopping, and restarting are not included. When you stop a service, ContainerBuddy does not deregister it from Consul.
The Consul API allows you to deregister a service, but I have not yet implemented that.
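As a sketch of what that would look like, the deregistration could be a single call to Consul's agent API. The service id ("crypto") and the helper function are assumptions for illustration, not part of the scripts above:

```shell
#!/bin/bash
# Sketch: manually deregister a stopped service from Consul's agent API.
# Not part of the original scripts.

# build the deregister endpoint exposed by the Consul agent
deregister_url() {
    local consul_ip=$1 service_id=$2
    echo "http://${consul_ip}:8500/v1/agent/service/deregister/${service_id}"
}

# reuse the consul ip that start.sh stashed in /tmp/consulip
CONSUL_IP=$( [ -f /tmp/consulip ] && cat /tmp/consulip || echo localhost )

# the service id is whatever containerbuddy registered, e.g. "crypto":
# curl -X PUT "$(deregister_url "$CONSUL_IP" crypto)"
echo "would deregister via: $(deregister_url "$CONSUL_IP" crypto)"
```

Hooking this into the DSR menu option would keep the Consul catalog from accumulating dead services during development.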

Summary

Leveraging ContainerBuddy, Nginx, and Consul allows you to launch applications on Joyent Triton running on bare metal. Adding the CloudFlare API allows you to dynamically update the DNS records, and adding node to the equation allows you to update DNS records for multiple domains. This architecture avoids the VM layer required by other providers such as AWS.
