
Down the Docker Rabbit Hole

I have just begun to explore Docker with the goal of using it on AWS to house some small applications. There is probably nothing profound here, but I thought I would document the steps that worked, along with some comments about missteps I made along the way.

## Creating a base image

The first thing I did was create an image based on the official Docker Node image. I wanted this image to set up the environment I use for all my applications: namely, to install nvm and some global packages I always want available. I wasn’t fluent in Dockerfile syntax yet, so I just pulled down the node base image, ran a container from it, shelled into the container, added/installed what I needed, and then committed the container as a new image:

docker pull node:argon
docker run -it node:argon /bin/bash

.... do the installs
exit
docker commit -m "first attempt" -a "Don Vawter" xxxxxx donniev/testimage

It took me a few tries to get the syntax correct. When you start your container, the prompt contains the id of the container, which is what you need in place of xxxxxx. For the repository you prefix it with your Docker Hub username, but you don’t include the registry host itself, because docker uses Docker Hub as the default.
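If you have already exited the container and lost that prompt, the id is easy to recover:

# list all containers, including stopped ones; the CONTAINER ID column is what goes in place of xxxxxx
docker ps -a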

That process worked fine but obviously cannot be automated, so the next step was to accomplish what I had done manually using a Dockerfile.
One of the first problems I had was making nvm active after it was installed. Manually you just do this by sourcing the nvm.sh file, but adding a RUN command in the Dockerfile to do that didn’t work. When I tried to run nvm … later in the file I would always get “nvm not found”. The reason is that each RUN command executes in its own shell (and its own layer), so anything sourced in one RUN is gone by the next. The trick is to do everything that depends on nvm in a single RUN command, not in separate ones. Here is the Dockerfile that actually works:

FROM node:argon
RUN apt-get update
RUN apt-get install -y less
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.29.0/install.sh | bash
ENV NVM_DIR=/root/.nvm
ENV SHIPPABLE_NODE_VERSION=v4.2.2
RUN . $HOME/.nvm/nvm.sh && nvm install $SHIPPABLE_NODE_VERSION \
    && nvm alias default $SHIPPABLE_NODE_VERSION && nvm use default \
    && npm install gulp babel jasmine mocha serial-jasmine serial-mocha aws-test-worker -g

Notice that setting the default node version in nvm and doing the npm install all occur in the same RUN command. All the global packages I install are publicly available, so there is no need to provide any npm credentials. Individual applications use private npm modules (and may come from different npm users), so that will be handled in the individual application images.
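For reference, those credentials end up in a .npmrc file that each application image copies in (we will see the ADD .npmrc step below); it is typically just the registry auth token line, with the token obviously a placeholder here:

//registry.npmjs.org/:_authToken=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx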

The image is of little use by itself, but it can serve as the basis for the individual applications, as we will see next. First, though, we need to build the image from the Dockerfile and push it to Docker Hub so we can use it in environments other than our local development machine:

docker build -t donniev/nodebase:1.0 .
docker push donniev/nodebase:1.0

## Creating an application image
The next step is to use our base image to create an image for an application. Since we need files in the application, we bite the bullet and start learning the Dockerfile syntax properly. We put the Dockerfile in the root of our application so it has the correct build context. In principle we should not bake many environment variables into the image but should provide them when we run a container; this lets us reuse the same image in development, staging, production, etc. In some cases, however, there are variables the application uses regardless of environment, so we go ahead and associate those with the image.

# Our base image
FROM donniev/nodebase:1.4
# We use babel in the application and don't need the original source,
# just the files generated by babel, which we keep in "dist"
COPY dist /application/
# We will put our application in /application in the container
WORKDIR /application
# We use private modules so we need our npm credentials
ADD .npmrc .
# Activate nvm so that npm install runs on the correct node version
RUN . $HOME/.nvm/nvm.sh && nvm use default && npm install
# The application runs internally on 8081
EXPOSE 8081
# It is an express app which is started by calling www.js
ENTRYPOINT ["/bin/bash", "-c", "node /application/bin/www.js"]
ENV AWS_SECRET_ACCESS_KEY=xxxxx
ENV AWS_REGION=xxxx
ENV AWS_ACCESS_KEY_ID=xxxx
# These really should be associated with a container, not an image,
# but in this case all environments use the same values
ENV BUCKET=xxxx
ENV CREDSFILE=xxxx
ENV APPCONFIGFILE=xxxx
ENV COMMONCONFIGFILE=xxxx
RUN cd $NVM_DIR/versions/node/$SHIPPABLE_NODE_VERSION/lib/node_modules/aws-test-worker && . $HOME/.nvm/nvm.sh && nvm use default && npm run createviewer

We can then build the new image and test it locally before publishing it

docker build .
docker run -d -p 80:8081 xxxxx
curl http://localhost

Again, xxxxx is the image id docker gives us after the build.
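This local run is also the natural place to supply the environment-specific values we deliberately left out of the image, using -e; something like this (the variable names and values here are just illustrative):

docker run -d -p 80:8081 -e "NODE_ENV=development" -e "BUCKET=my-dev-bucket" xxxxx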
Once we are satisfied we can tag the image and push it to our repo:

docker build -t donniev/cryptogram:v1 .
docker push donniev/cryptogram:v1

## Logging
We really don’t want to log to files inside the container because they are not accessible outside the container and will disappear when the container goes away. This has a couple of implications:

  1. The paths to your log files shouldn’t be hard coded in the application
  2. The path to which you log must, of course, be visible to the container.

Let’s handle the first case. All my applications call a configure method which, among other things, initializes the loggers. That method is abstracted into a private module which all the applications use; all an application does is provide its context.

Here is a snippet from that code. Notice that the logRoot is retrieved from nconf which we use to store all configuration data.

// Map the configured streams, rewriting any file paths so they live under logRoot
var streams = options.streams.map((stream) => {
	if (stream.stream === 'process.stdout') {
		stream.stream = process.stdout;
	}
	if (stream.path) {
		if (nconf.get("logRoot")) {
			let lr = nconf.get("logRoot");
			lr = lr.endsWith("/") ? lr : lr + "/";
			if (stream.path.startsWith("/")) {
				stream.path = stream.path.substring(1);
			}
			stream.path = lr + stream.path;
		}
		// make sure the directory exists and the log file is touched
		enSureExists(stream.path);
	}
	return stream;
});

We provide logRoot as an environment variable when we run the container. If the application is running outside of docker it will be a local directory. If we are running in a docker container it will be a directory on the docker host. The rest of the code just makes sure the path exists and touches the log files so we don’t get a file-not-found error when we start an application for the first time.
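The post doesn’t show enSureExists itself, but a minimal sketch of that kind of helper, assuming it just creates any missing directories and touches the file, could look like this (illustrative only, not the actual private module):

var fs = require('fs');
var path = require('path');

// Create any missing directories in the log path, then "touch" the file itself
function enSureExists(filePath) {
	var dir = path.dirname(filePath);
	var current = path.isAbsolute(dir) ? path.sep : '';
	dir.split(path.sep).forEach(function (segment) {
		if (!segment) { return; }   // skip the empty leading segment of absolute paths
		current = path.join(current, segment);
		if (!fs.existsSync(current)) {
			fs.mkdirSync(current);
		}
	});
	// open in append mode and close immediately so the file exists before the logger uses it
	fs.closeSync(fs.openSync(filePath, 'a'));
}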

Ok, so how do we make the directory on the docker host visible to our application? That is conveniently done with switches on the docker run command:

docker run -d -e "logRoot=/mylogs" -v "/mylogs:/mylogs"   -p 80:8081 --restart always donniev/cryptogram:latest

The -e sets the environment variable logRoot to /mylogs, so that is what the application sees.
The -v maps /mylogs on the docker host to /mylogs in the container: the first entry is the host directory, the second the container directory. There is no reason for them to be the same.
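For example, the host side could just as easily be a different directory (the host path here is made up):

docker run -d -e "logRoot=/mylogs" -v "/var/log/cryptogram:/mylogs" -p 80:8081 --restart always donniev/cryptogram:latest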

## Running on AWS
It is very easy to run our application on AWS in a few steps (a sketch of the commands follows the list):

  1. Create an EC2 instance
  2. Install Docker on the instance
  3. Pull our application with docker pull donniev/cryptogram:v1 (or whatever tag we have published)
  4. Issue the above docker run command. The -d tells docker to run in detached mode and the --restart always tells docker to restart it if the container quits.
  5. Hit the application on the public ip of your EC2 instance.
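On an Amazon Linux instance, for example, the whole sequence looks roughly like this (the install commands and the docker group setup vary by distribution):

# install and start docker (Amazon Linux; other distributions differ)
sudo yum install -y docker
sudo service docker start
# let ec2-user run docker without sudo (log out and back in afterwards)
sudo usermod -a -G docker ec2-user

# pull the published image and run it
docker pull donniev/cryptogram:v1
docker run -d -e "logRoot=/mylogs" -v "/mylogs:/mylogs" -p 80:8081 --restart always donniev/cryptogram:v1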

## Next steps
More than likely you will want to run multiple instances of your application and/or multiple applications on your docker host, so you will need to set up nginx as a proxy or load balancer and have it route requests to your applications rather than exposing each application directly on the public ip. Setting that up is likely to be another post.
