A consul, a vault and a docker walk into a bar.

When you develop a non-trivial application, you often need to split it into multiple components. I will try to avoid the term micro-service to avoid any religious war here but, at the bare minimum, you often need to access a database, reach some external services or maybe use some cloud-based services, like S3, to store files.

In that kind of scenario, you quickly face two boring problems:

* how to access those services: for example, what are the host and port of your MySQL database?
* where to store your credentials: in the case of MySQL, where do you store your username and password?

One common way to solve both problems is to store this information in a configuration file. That works, but:

* the same information ends up duplicated in the configuration of every component
* any change of host, port or password requires you to edit files and redeploy
* your credentials end up stored in plain text on disk

HashiCorp, an SF-based company, not only has a cool name but also offers two open-source tools to address those problems:

* Consul, which provides service discovery and a distributed key/value store
* Vault, which stores and manages your secrets

To test those tools, you can either run them locally on your machine or, as all the cool kids do these days, install them in a Docker container.

I guess I am cool then because I plan to show you how to do just that.

As a side note, because both Consul and Vault are written in Go, they don't require any installation procedure per se: those tools come as a single binary file that you only need to put in a directory in your PATH.

In this article, I will be using docker and docker-compose to set up three containers:

* a Consul server
* a Vault server, which uses Consul as its storage backend
* a bash.test container, preconfigured to interact with Consul and Vault

A prerequisite for this article is that you have a basic knowledge of docker and docker-compose.

Consul container

Things are easy here, as there is already an official Docker image for Consul on Docker Hub.

The only specific (but not required) configuration I did was to use different external ports for Consul, in order to avoid conflicts if I were also running another Consul instance on my host.

I could have let Docker pick the external ports for me, but it would then have been a little less convenient to test: I would have had to query Docker to find the actual port numbers in use.
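
For example (a hypothetical check, once the consul.server container defined below is running), recovering a dynamically assigned port would look like this:

docker port consul.server   # lists the container's port mappings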

The docker-compose configuration for Consul is:

consul:
  container_name: consul.server
  command: agent -server -bind 0.0.0.0 -client 0.0.0.0 -bootstrap-expect=1
  image: consul:latest
  volumes:
    - ./etc/consul.server/config:/consul/config
  ports:
    - "9300:9300"
    - "9500:9500"
    - "9600:9600/udp"

The config file that we pass to Consul through a Docker volume is config.json and contains:

{
  "datacenter": "dc1",
  "log_level": "DEBUG",
  "server": true,
  "ui" : true,
  "ports": {
    "dns": 9600,
    "http": 9500,
    "https": -1,
    "serf_lan": 9301,
    "serf_wan": 9302,
    "server": 9300
  }
}

If you need more information about those configuration parameters, the Consul agent configuration documentation describes them in detail.

With this setup, the Consul server will be reachable from the host over HTTP on port 9500.
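
For a quick smoke test (a hypothetical check, assuming the container is up), you can hit the Consul HTTP API from the host:

curl http://localhost:9500/v1/agent/members

This should return a JSON array describing the members of the cluster. The web UI, enabled by the "ui": true setting above, is served on the same port at http://localhost:9500/ui.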

Vault container

Vault also has an official Docker image available on Docker Hub.

The docker-compose configuration section for Vault is:

vault:
  container_name: vault.server
  image: vault
  ports:
    - "9200:8200"
  volumes:
    - ./etc/vault.server/config:/mnt/vault/config
    - ./etc/vault.server/data:/mnt/vault/data
    - ./etc/vault.server/logs:/mnt/vault/logs
  cap_add:
    - IPC_LOCK
  environment:
    - VAULT_LOCAL_CONFIG={"backend":{"consul":{"address":"${LOCAL_IP}:9500","advertise_addr":"http://${LOCAL_IP}", "path":"vault/"}},"listener":{"tcp":{"address":"0.0.0.0:8200","tls_disable":1}}}
  command: server

I know. The VAULT_LOCAL_CONFIG is a bit messy.

In a more readable form, this variable would read like this:

{
  "backend":{
    "consul":{
      "address":"${LOCAL_IP}:9500",
      "advertise_addr":"http://${LOCAL_IP}",
      "path":"vault/"
    }
  },
  "listener":{
    "tcp":{
      "address":"0.0.0.0:8200",
      "tls_disable":1
    }
  }
}

Better, right?

Note:

Based on a GitHub issue, I had to add the advertise_addr parameter to the environment variable, even though this is not really documented.

Without that parameter, I would get this error when starting the Vault server:

Error detecting redirect address: Get http://192.168.0.16:9500/v1/agent/self: EOF
Error initializing core: missing redirect address

Let's start docker!

With those containers defined, you can start them all with docker-compose.
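
A minimal sketch, assuming your docker-compose.yml sits in the current directory; the compose file references ${LOCAL_IP}, so that variable must hold your host's IP address:

export LOCAL_IP=192.168.0.16   # replace with your actual host IP
docker-compose up -d           # start all the containers in the background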

You can ignore the bash.test container for now.

A few docker commands

There are a few docker commands which are helpful to get an idea of what’s going on.
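
For instance (standard docker CLI commands, nothing specific to this setup):

docker ps                    # list the running containers and their port mappings
docker logs consul.server    # see what the consul agent is doing
docker logs vault.server     # see what the vault server is doing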

Both the Consul and Vault containers are based on the alpine image: a minimal Docker image based on Alpine Linux.

It is so minimal that there is no bash, but there is an ash shell interpreter that you can connect to in order to explore the container:

docker exec -it vault.server ash

There is not much you can really do from that shell, which is why we created a…

Bash test container

A more convenient way to explore Consul and Vault is to create a Docker container with a full bash, preconfigured to access both servers.

To do that, we can write a simple Dockerfile:

FROM ubuntu:16.04
MAINTAINER Pierre Carion <pcarion@gmail.com>
ENV VAULT_VERSION 0.7.0
ENV CONSUL_VERSION 0.8.1

RUN apt-get update \
  && apt-get install -y \
  build-essential \
  git \
  curl \
  wget \
  vim \
  net-tools \
  iputils-ping \
  dnsutils \
  zip \
  unzip \
  && wget -O /tmp/vault.zip "https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip" \
  && unzip -d /bin /tmp/vault.zip \
  && chmod 755 /bin/vault \
  && rm /tmp/vault.zip \
  && wget -O /tmp/consul.zip "https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip" \
  && unzip -d /bin /tmp/consul.zip \
  && chmod 755 /bin/consul \
  && rm /tmp/consul.zip \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/*

VOLUME "/mnt/data"
CMD ["/bin/bash"]

Long file, but no rocket science in there:

* we start from a standard ubuntu:16.04 image
* we install a few convenience tools (curl, wget, vim, dig, ping…)
* we download the vault and consul binaries from the HashiCorp releases site, unzip them into /bin and make them executable
* we declare a /mnt/data volume to exchange files with the host

The command to build the docker image is:

docker build -t bash.test ./docker_images/bash.test

The last step is to add this container to our docker-compose.yml file.

bash_test:
  container_name: bash.test
  image: bash.test
  environment:
    - CONSUL_HTTP_ADDR=${LOCAL_IP}:9500
    - VAULT_ADDR=http://${LOCAL_IP}:9200
  volumes:
    - ./etc/bash.test/data:/mnt/data
  command: tail -f /dev/null

Pretty straightforward too:

* the CONSUL_HTTP_ADDR and VAULT_ADDR environment variables tell the consul and vault command-line tools where to find their servers
* the tail -f /dev/null command does nothing useful, but it keeps the container alive so that we can attach a shell to it later

Once we restart all our containers with docker-compose, we can then attach a bash to our bash.test container by doing this:

docker exec -it bash.test bash

In that shell, you can then verify that you have access to consul and vault:
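
Something like this (a sketch; both CLIs pick up the address of their server from the environment variables we set in docker-compose.yml):

consul members   # should list the consul server node
vault status     # fails with an error as long as the vault is not initialized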

The access to Consul seems to be working, but the error from Vault… is also a good sign.

When a Vault server has just started, it is in an uninitialized state, as described in the Vault documentation.

The good news is that we now have a bash shell from which to complete that initialization.

Initializing and unsealing the Vault

The first step is to initialize the vault.

This step will give you the keys to unseal the vault and the root token to access it from a client.
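
A sketch using the vault 0.7.x CLI from this article (newer Vault versions use vault operator init instead):

vault init   # prints 5 unseal keys and an Initial Root Token; save them somewhere safe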

Time to unseal:
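
Again with the 0.7.x CLI (vault operator unseal in newer versions); by default, 3 of the 5 unseal keys are required:

vault unseal   # prompts for one unseal key; repeat with different keys until Sealed: false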

Pretty anticlimactic, right?

The only piece of information that should give you joy is: Sealed: false.

For our test, we will be using the initial root token provided during the vault initialization BUT that’s not the proper way to use that token. We’ll see in another article how to properly use tokens in applications.

In order for the client to work, you can set the token in an environment variable:

export VAULT_TOKEN=2ca82ba1-840d-908f-e089-1cd539cb9ace

We can now write to and read from our new secret store:
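
For example (a sketch using the generic secret backend that Vault mounts at secret/ by default):

vault write secret/hello value=world   # store a secret
vault read secret/hello                # read it back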

More to come

In this article, we have only explained how to set up Consul and Vault inside Docker containers and verified that the setup was working properly.

In a real-life solution, things have to be automated a bit more; we'll see in another article how to manage tokens and how to access that private data in a real application.

If you have any questions, feel free to email me at pcarion@gmail.com.