Sticky Session With Docker Swarm (CE) on CentOS 7

Updated on January 11, 2019

Introduction

Docker Swarm turns your individual servers into a cluster of computers, facilitating scaling, high availability and load balancing. The Swarm load balancer implements a round-robin load-balancing strategy, and this can interfere with the correct functioning of (legacy) stateful applications which require some form of sticky sessions to allow a high-availability setup with multiple instances. Docker Enterprise Edition supports Layer-7 sticky sessions, but in this guide we will focus on the free (CE) version of Docker. To implement sticky sessions we'll use Traefik.

Prerequisites

  • At least two freshly deployed and updated CentOS 7 instances in the same subnet with private networking enabled
  • Docker CE installed on these instances
  • The instances should be part of the same Swarm and should be able to communicate with each other over the private network
  • Prior knowledge of Docker and Docker Swarm
  • A non-admin user with sudo rights (optional, but it is strongly advised not to use the root user)

In this tutorial we'll be using two Vultr instances with private IP addresses 192.168.0.100 and 192.168.0.101; both are Docker Swarm manager nodes (which is not ideal for production, but sufficient for this tutorial).
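Before continuing, you can verify that the Swarm is healthy. On either node, list the Swarm members (the IDs and hostnames shown will of course depend on your own setup):

sudo docker node ls

Both instances should be listed with a STATUS of Ready, and the MANAGER STATUS column should show Leader for one node and Reachable for the other.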

Whoami

This tutorial uses the jwilder/whoami Docker image as a demo application. This simple container responds to a REST call with the name of the responding container, making it very easy to test whether the sticky sessions are working. This image is only used for demo purposes and should be replaced by your own application's image. The whoami-service is configured as follows:

sudo docker network create whoaminet -d overlay
sudo docker service create --name whoami-service --mode global --network whoaminet --publish "80:8000" jwilder/whoami
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --reload
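Because the service was created with --mode global, Docker starts one whoami container on every node in the Swarm. You can verify this before testing:

sudo docker service ps whoami-service

This should list one running task per node.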

If we subsequently curl the whoami REST endpoint at http://192.168.0.100/, we can see the round-robin load balancing of Docker Swarm at work.

curl http://192.168.0.100
I'm a6a8c9294fc3
curl http://192.168.0.100
I'm ae9d1763b4ad
curl http://192.168.0.100
I'm a6a8c9294fc3
curl http://192.168.0.100
I'm ae9d1763b4ad
curl http://192.168.0.100
I'm a6a8c9294fc3

There is no point in testing this with modern browsers like Chrome or Firefox, because they are designed to keep connections alive (open), and the Docker Swarm load balancer will only switch to the other container upon each new connection. If you want to test this with a browser, you would have to wait at least 30 seconds for the connection to close before refreshing again.
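You can, however, observe the same keep-alive behavior with curl: when it is given multiple URLs for the same host in a single invocation, curl reuses the open connection, so both responses should come from the same container:

curl http://192.168.0.100 http://192.168.0.100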

Setting up Traefik

Traefik natively supports Docker Swarm: it can detect and register or de-register containers on the fly, and it communicates with your application over the internal overlay network. Traefik needs some information about your application before it can start handling requests for it. This information is provided to Traefik by adding labels to your Swarm service.

sudo docker service update \
    --label-add "traefik.docker.network=whoaminet" \
    --label-add "traefik.port=8000" \
    --label-add "traefik.frontend.rule=PathPrefix:/" \
    --label-add "traefik.backend.loadbalancer.stickiness=true" \
    whoami-service

The list below describes what each label means:

  • traefik.docker.network: The Docker overlay network over which Traefik will communicate with your service
  • traefik.port: The port on which your service is listening (this is the internally exposed port, not the published port)
  • traefik.frontend.rule: PathPrefix:/ binds the context root / to this service
  • traefik.backend.loadbalancer.stickiness: Enables sticky sessions for this service
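You can check that the labels were applied correctly by inspecting the service (the --format template below simply extracts the labels from the service specification):

sudo docker service inspect whoami-service --format '{{ json .Spec.Labels }}'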

Now that the whoami-service has been configured with the required labels, we can add the Traefik service to the swarm:

sudo docker service create --name traefik \
    -p8080:80 \
    -p9090:8080 \
    --mount type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock \
    --mode=global \
    --constraint 'node.role == manager' \
    --network whoaminet \
    traefik \
    --docker --docker.swarmmode --docker.watch --web --loglevel=DEBUG

This command does quite a number of things at once. The list below will explain in more detail:

  • --name traefik: Our new Docker service's name is traefik
  • -p8080:80: We publish Traefik's port 80 to port 8080 (port 80 is already in use by our whoami-service)
  • -p9090:8080: We publish Traefik's own web interface to port 9090
  • --mount ...: We mount the Docker Socket into the container so that Traefik can access the host's Docker runtime
  • --mode=global: We want Traefik containers on each manager node for high-availability reasons
  • --constraint 'node.role == manager': We only want Traefik to run on manager nodes because worker nodes can't provide Traefik with the info it needs. For example, docker service ls on a worker node doesn't work, so Traefik wouldn't even be able to discover what services are running
  • --network whoaminet: Connects Traefik to the same network as our whoami-service, otherwise they can't connect. We previously told Traefik to connect to our service over this network with the traefik.docker.network label
  • traefik: Tells Docker to use the latest Traefik Docker image for this service
  • --docker --docker.swarmmode --docker.watch --web --loglevel=DEBUG: Command line arguments passed directly to Traefik to allow it to run in Docker swarm mode (--loglevel=DEBUG is optional here but interesting during setup and for this tutorial)
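After the service has been created, you can verify that one Traefik task is running on each manager node:

sudo docker service ps traefik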

All that is left to do is open up the necessary ports in the CentOS firewall:

sudo firewall-cmd --zone=public --add-port=8080/tcp --permanent
sudo firewall-cmd --zone=public --add-port=9090/tcp --permanent
sudo firewall-cmd --reload
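If you want to double-check the firewall configuration, list the open ports in the public zone; the output should now include 80/tcp, 8080/tcp and 9090/tcp:

sudo firewall-cmd --zone=public --list-ports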

How it works

As soon as Traefik starts up, you can see in its logs that it discovers the two whoami containers. It also outputs the name of the cookie it will use to handle the sticky session.
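You can follow these logs from any manager node (the docker service logs command is available in recent Docker CE releases):

sudo docker service logs -f traefik

The debug output will look similar to this: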

time="2018-11-25T13:17:30Z" level=debug msg="Configuration received from provider docker: {\"backends\":{\"backend-whoami-service\":{\"servers\":{\"server-whoami-service-1-a179b2e38a607b1127e5537c2e614b05\":{\"url\":\"http://10.0.0.5:8000\",\"weight\":1},\"server-whoami-service-2-df8a622478a5a709fcb23c50e689b5b6\":{\"url\":\"http://10.0.0.4:8000\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\",\"stickiness\":{}}}},\"frontends\":{\"frontend-PathPrefix-0\":{\"entryPoints\":[\"http\"],\"backend\":\"backend-whoami-service\",\"routes\":{\"route-frontend-PathPrefix-0\":{\"rule\":\"PathPrefix:/\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null}}}"
time="2018-11-25T13:17:30Z" level=debug msg="Wiring frontend frontend-PathPrefix-0 to entryPoint http"
time="2018-11-25T13:17:30Z" level=debug msg="Creating backend backend-whoami-service"
time="2018-11-25T13:17:30Z" level=debug msg="Adding TLSClientHeaders middleware for frontend frontend-PathPrefix-0"
time="2018-11-25T13:17:30Z" level=debug msg="Creating load-balancer wrr"
time="2018-11-25T13:17:30Z" level=debug msg="Sticky session with cookie _a49bc"
time="2018-11-25T13:17:30Z" level=debug msg="Creating server server-whoami-service-1-a179b2e38a607b1127e5537c2e614b05 at http://10.0.0.5:8000 with weight 1"
time="2018-11-25T13:17:30Z" level=debug msg="Creating server server-whoami-service-2-df8a622478a5a709fcb23c50e689b5b6 at http://10.0.0.4:8000 with weight 1"
time="2018-11-25T13:17:30Z" level=debug msg="Creating route route-frontend-PathPrefix-0 PathPrefix:/"
time="2018-11-25T13:17:30Z" level=info msg="Server configuration reloaded on :80"
time="2018-11-25T13:17:30Z" level=info msg="Server configuration reloaded on :8080"

If we curl http://192.168.0.100:8080, we can see that a new cookie, _a49bc, has been set:

curl -v http://192.168.0.100:8080
* About to connect() to 192.168.0.100 port 8080 (#0)
*   Trying 192.168.0.100...
* Connected to 192.168.0.100 (192.168.0.100) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 192.168.0.100:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 17
< Content-Type: text/plain; charset=utf-8
< Date: Sun, 25 Nov 2018 13:18:40 GMT
< Set-Cookie: _a49bc=http://10.0.0.5:8000; Path=/
<
I'm a6a8c9294fc3
* Connection #0 to host 192.168.0.100 left intact

If, on subsequent calls, we send this cookie to Traefik, we will always be forwarded to the same container:

curl http://192.168.0.100:8080 --cookie "_a49bc=http://10.0.0.5:8000"
I'm a6a8c9294fc3
curl http://192.168.0.100:8080 --cookie "_a49bc=http://10.0.0.5:8000"
I'm a6a8c9294fc3
curl http://192.168.0.100:8080 --cookie "_a49bc=http://10.0.0.5:8000"
I'm a6a8c9294fc3
curl http://192.168.0.100:8080 --cookie "_a49bc=http://10.0.0.5:8000"
I'm a6a8c9294fc3
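Instead of passing the cookie manually, you can let curl manage it with a cookie jar: -c stores the cookies from the first response and -b sends them on subsequent requests (the /tmp/whoami-cookies.txt path is just an example):

curl -c /tmp/whoami-cookies.txt http://192.168.0.100:8080
curl -b /tmp/whoami-cookies.txt http://192.168.0.100:8080

Both requests should return the name of the same container.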

The cookie contains nothing but the internal (overlay) IP address of the container to which Traefik should send the request. If you change the cookie value to http://10.0.0.4:8000, the request will be forwarded to the other container. If the cookie is never re-sent to Traefik, the sticky session will not work, and requests will simply be balanced among the application's containers (by Traefik) and among the Traefik containers (by the Swarm load balancer).

That's all that is needed to set up Layer-7 sticky sessions with Docker CE on CentOS 7.