Docker overlay networking without swarm

Posted by Mark Hornsby on 10/10/2016

Docker multi-host networking, provided by the overlay driver, was introduced in Docker 1.9. This feature allows you to virtually network multiple Docker host machines together, and thus allows the containers on them to communicate with each other directly over secure network tunnels.

The Virtual eXtensible LAN (VXLAN) technology that underpins this feature is now officially documented in RFC 7348, which explains how it works.

There are good resources available that show you how to use overlay networking as part of a Docker Swarm cluster. This is a great feature, but what if you want to set it up without enabling all the features of Swarm? I struggled to get this working using Docker 1.12.1, so I thought I'd document the process here in the hope that others find it useful.

Prerequisites

You will need the following installed:

  1. Docker Engine
  2. Docker Machine
  3. VirtualBox

All of the above are installed with Docker Toolbox. If you are using Docker for Mac or Docker for Windows and haven't previously used Docker Toolbox then you may need to install VirtualBox directly.
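A quick way to check everything is in place (your version numbers will differ from mine):

docker --version
docker-machine --version
VBoxManage --version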

Setting up your virtual machines

To start you will need 3 virtual machines to run as separate Docker hosts (one of these will be our key-value store, running Consul, and the other two will be our virtually networked Docker hosts).

We will create our 3 virtual machines as node-0, node-1 and node-2.

At this point it might be sensible to open 3 separate terminals to avoid logging in and out of virtual machines and containers repeatedly.

First, create node-0 (our key-value store machine, which will run Consul):

docker-machine create -d virtualbox node-0

Next, in your second terminal, our first docker host to be networked:

docker-machine create -d virtualbox node-1

Finally, in your third terminal, our second docker host:

docker-machine create -d virtualbox node-2
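As an aside, if you prefer to create all three VMs from one terminal and split out afterwards, a simple loop works too:

for node in node-0 node-1 node-2; do
  docker-machine create -d virtualbox "$node"
done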

Now, back in the first terminal, check that the three VMs were created successfully:

docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   -        virtualbox   Stopped                                       Unknown   
node-0    -        virtualbox   Running   tcp://192.168.99.103:2376           v1.12.1   
node-1    -        virtualbox   Running   tcp://192.168.99.104:2376           v1.12.1   
node-2    -        virtualbox   Running   tcp://192.168.99.105:2376           v1.12.1   

Note the IP address assigned to node-0 (192.168.99.103 for me, YMMV) - we will need this later when we set up our Docker hosts.
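Rather than copying it out of the table, you can also ask docker-machine for it directly:

NODE0_IP=$(docker-machine ip node-0)
echo "$NODE0_IP"

Bear in mind this variable lives in your host shell; inside the VMs you will still need to type the address out.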

Now connect to each VM, e.g. in your first terminal:

docker-machine ssh node-0

Repeat for nodes 1 and 2 in your second and third terminals.

Setting up our key-value store

I have chosen to use Consul. I don't have previous experience with it, but have used ZooKeeper before; it is also possible to use etcd. For the purposes of this tutorial I suggest you stick with Consul, but feel free to come back and try the others once you have the system up and running.

In your first terminal you should now be inside the node-0 virtual machine. Run the following docker command to start your Consul container:

docker run -d -p 8500:8500 -h consul --name consul consul:v0.7.0 agent -server -bootstrap -client=0.0.0.0

Notice that we have told Docker to expose port 8500 and we have told Consul to bind its client listeners to 0.0.0.0 (by default it only listens on 127.0.0.1).

You can ensure your Consul is up and running by executing the following in the first terminal window:

curl -G http://localhost:8500/v1/agent/self

This will return the configuration of the agent in JSON format.
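Consul's status endpoints give a terser check; for example, this should print the address of the Raft leader:

curl http://localhost:8500/v1/status/leader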

Cluster your Docker Hosts

We will now configure node-1 and node-2 to utilise this key-value store for clustered networking.

In the second terminal window (node-1) we need to discover which Linux interface is connected to our VirtualBox host-only network. You can use ifconfig or ip addr show to determine this. Pick the one whose address matches the output for node-1 from docker-machine ls above.

ip addr show eth1
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:be:27:3f brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.104/24 brd 192.168.99.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:febe:273f/64 scope link
       valid_lft forever preferred_lft forever

In my case the interface is eth1, again YMMV.
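If you would rather not eyeball the output, this one-liner sketch prints the name of the interface holding a 192.168.99.x address (assuming the default VirtualBox host-only range):

ip -o -4 addr show | awk '/192\.168\.99\./ {print $2}'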

We need to tell Docker to use this interface to advertise itself, and to use the Consul instance at the node-0 address we discovered with docker-machine ls:

sudo vi /var/lib/boot2docker/profile

Your boot2docker profile will probably already expose the Docker remote API over TCP on port 2376; this is set up by docker-machine (DOCKER_HOST='-H tcp://0.0.0.0:2376').

In the EXTRA_ARGS block add the following line, substituting in your node-0 ip address and the interface that node-1 is advertising on:

-H unix:///var/run/docker.sock --cluster-store=consul://<NODE-0-PRIVATE-IP>:8500/network --cluster-advertise=<VBOX-IFACE>:2376

Your completed file should look something like this:

EXTRA_ARGS='
-H unix:///var/run/docker.sock --cluster-store=consul://192.168.99.103:8500/network --cluster-advertise=eth1:2376
--label provider=virtualbox

'
CACERT=/var/lib/boot2docker/ca.pem
DOCKER_HOST='-H tcp://0.0.0.0:2376'
DOCKER_STORAGE=aufs
DOCKER_TLS=auto
SERVERKEY=/var/lib/boot2docker/server-key.pem
SERVERCERT=/var/lib/boot2docker/server.pem

Now save the file and restart the docker service:

sudo /etc/init.d/docker restart

Once it has restarted you can confirm that Docker is using the correct cluster advertisement address:

docker info

...
Cluster Store: consul://192.168.99.103:8500/network
Cluster Advertise: 192.168.99.104:2376
...

Now repeat the same set of instructions for node-2.
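From your host machine you can then verify both nodes in one go:

for node in node-1 node-2; do
  echo "== $node =="
  docker-machine ssh "$node" "docker info 2>/dev/null | grep Cluster"
done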

Creating the overlay network

At this point you have two Docker hosts that are coordinated via the key-value store. The next step is to create our overlay network. On node-1 (your second terminal) execute:

docker network create -d overlay --subnet=192.168.3.0/24 overlay_test

This step creates the overlay network and will configure all docker hosts connected to the key-value store to have this same network. We can verify this by listing the networks on node-1 and node-2:

docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
f18fd875bb99        bridge              bridge              local               
feb14db634f0        host                host                local               
f60c65256e1c        none                null                local               
49c8ce243d03        overlay_test        overlay             global     

docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
521b01ed0ae8        bridge              bridge              local               
7ebe03946374        host                host                local               
ffcb1fba5cf3        none                null                local               
49c8ce243d03        overlay_test        overlay             global              

In addition to the default bridge, host and none networks we can see our overlay_test network. Not only is it visible from node-1 where we created it, but also on node-2 - welcome to Docker multi-host networking!
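For a more detailed view, including the subnet we chose and (once containers join) their addresses, you can inspect the network from either node:

docker network inspect overlay_test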

Now let's take advantage of our overlay. On node-1 we will run a web server to host a test page in a container which we will call web (don't forget to connect it to the overlay_test network):

docker run -itd --net=overlay_test --name=web nginx
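You can check that web has picked up an address in our 192.168.3.0/24 subnet, e.g. with a Go template:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web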

Now on node-2, remembering that this is a separate Docker host to node-1, let's start a client container and retrieve the default web page from nginx:

docker run -itd --net=overlay_test --name=client ubuntu:xenial

docker exec -it client /bin/bash

apt-get update
apt-get install -y wget
wget -O- http://web

This should return the nginx default web page. If so, we have successfully connected two Docker containers running on different Docker hosts \o/
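If you want to repeat the check without installing anything, busybox ships with a built-in wget, so a throwaway container on node-2 works as a one-liner:

docker run --rm --net=overlay_test busybox wget -qO- http://web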