Using Docker in Jenkins

January 19, 2017

Docker can be used nicely within your CI process to build and test your application in a truly isolated environment. GitLab CI integrates this capability natively as soon as you have configured it properly. Unfortunately, Jenkins does not offer this functionality out of the box. There are some plugins available that try to provide Docker-related functionality, but I was not able to set them up properly and I needed an immediate solution. Hence, in the following I demonstrate how to set this up by hand.


When using Docker it makes a lot of sense to run Jenkins in Docker as well. This leads to a situation where Jenkins itself must be able to create further containers during the build. To solve that, you can either talk to the Docker daemon via its remote API (which must be enabled on the host beforehand) or you can mount the docker.sock from the host into the container. The latter is what we are going to use. But first of all, you have to extend the Jenkins image so that the Docker client tools are available.

This is the Dockerfile:

FROM jenkins
USER root

# Install the Docker client so that Jenkins can talk to the host's daemon.
# The key and apt repository are the official Docker ones for Debian Jessie,
# which the jenkins base image builds on.
RUN apt-get update \
  && apt-get install -y apt-transport-https ca-certificates gnupg2 \
  && apt-key adv \
            --keyserver hkp://ha.pool.sks-keyservers.net:80 \
            --recv-keys 58118E89F3A912897C070ADBF76221572C52609D \
  && echo "deb https://apt.dockerproject.org/repo debian-jessie main" > /etc/apt/sources.list.d/docker.list \
  && apt-get update \
  && apt-get install -y docker-engine

USER jenkins
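
If you want to verify the image before wiring it into Compose, you can build and run it by hand; the my-jenkins tag is only an example:

# Build the extended image and check that the Docker client is present
docker build -t my-jenkins .
docker run --rm --entrypoint docker my-jenkins --version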

Then the following docker-compose.yml uses the previous Dockerfile, maps the container's port 8080 to port 8001 on the host and mounts the docker.sock into the container. The default Jenkins image runs as the user jenkins, but this user does not have access to the docker.sock. Therefore, we need to change the user to root. You also need a volume that can be shared between the Jenkins container and the container that is used to build your application.

version: '2'
services:
  jenkins:
    build: .
    restart: always
    user: root
    ports:
     - "8001:8080"
    volumes:
     - /var/jenkins:/var/jenkins_home
     - /var/run/docker.sock:/var/run/docker.sock

Please note that it is not possible to mount a local directory from the Jenkins container into the on-demand container. Although the docker command is available locally in the Jenkins environment, in the end it is a "remote control" for the Docker daemon that is running on the host. Hence, you only have access to the directories and volumes that exist on the host when you start a new container. This is the reason why we use a dedicated volume which we can reference when we create the on-demand container, so we only need little knowledge of the host system.
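
To make the difference concrete, here is a minimal sketch of the pitfall and the workaround (alpine is just a convenient example image, and ci_jenkins_1 is the container name Compose will assign below):

# Does NOT work as expected: the daemon resolves the path on the *host*,
# where /var/jenkins_home does not exist, so an empty directory is created
docker run --rm -v /var/jenkins_home:/data alpine ls /data

# Works: reuse the volumes of the running Jenkins container, so the
# Jenkins home appears under the same path as inside Jenkins
docker run --rm --volumes-from ci_jenkins_1 alpine ls /var/jenkins_home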

Copy the Dockerfile and the docker-compose.yml into one directory on the Docker host and run:

docker-compose up -d

Now you should have a running Jenkins container. docker ps will show you the name of the container (e.g. ci_jenkins_1), which we need in the next step.
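
To double-check that the socket mount works, you can run the Docker client inside the Jenkins container; it should list the containers of the host daemon, including Jenkins itself:

docker exec ci_jenkins_1 docker ps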

The final step is to create a new Docker container within the Jenkins build on-demand. A possible Jenkinsfile can contain something like this:

stage('Build') {
    // Start a throw-away JDK container that shares the Jenkins volumes
    // and run the Gradle build in the current workspace
    sh "docker run -i --rm --volumes-from ci_jenkins_1" +
        " --workdir ${env.WORKSPACE}" +
        " openjdk:8-jdk" +
        " ./gradlew build test"

    archiveArtifacts artifacts: '**/myApp*.jar', fingerprint: true
}

The --rm flag ensures that the container is deleted after the process has exited. Otherwise, you will collect a lot of unused containers, which you can find with docker ps -a. Since the volume is mounted at the same path as in the Jenkins container, you can change into this directory with --workdir, run the build etc. the same way as within Jenkins, and after the container has exited you can archive the created artifact as usual.
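
Should you forget the --rm flag at some point, the leftovers can be removed in one go; note that this removes all exited containers on the host, so use it with care:

# Remove every container in the "exited" state
docker rm $(docker ps -aq -f status=exited)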