How to Build Docker Images in a GitLab CI Pipeline


A common use case for CI pipelines is building the Docker images you'll use to deploy your application. GitLab CI is a great choice for this because it offers a built-in pull-through image proxy, which means faster pipelines, and a built-in registry to store your built images.

In this guide, we'll show you how to set up Docker builds that use both of these features. The steps you need to follow vary slightly depending on the GitLab Runner executor you'll use for your pipeline. We'll cover the Shell and Docker executors below.

Building with the Shell Executor

If you're using the shell executor, make sure Docker is installed on the machine that hosts your Runner. The executor works by running regular shell commands that invoke the docker binary on the Runner's host.
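A quick way to confirm the host is ready is to run the Docker CLI as the user the Runner executes jobs under (with a package-based install this is typically gitlab-runner, so the usermod line below is an assumption about your setup):

# Check the Docker CLI and daemon are reachable on the Runner host
docker --version
docker info

# If jobs hit "permission denied" on the Docker socket, grant the Runner's
# user access to it (gitlab-runner is the typical default user)
sudo usermod -aG docker gitlab-runner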

Head to the Git repository for the project you want to build images for. Create a .gitlab-ci.yml file at the root of the repository. This file defines the GitLab CI pipeline that will run when you push changes to your project.

Add the following content to the file:

stages:
  - build

docker_build:
  stage: build
  script:
    - docker build -t example.com/example-image:latest .
    - docker push example.com/example-image:latest
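The build assumes a Dockerfile exists at the root of the repository. If you don't have one yet, a minimal sketch could look like this (the nginx base image and the public/ directory are placeholders for whatever your project actually ships):

# Hypothetical Dockerfile, for illustration only
FROM nginx:alpine

# Copy the repository's static content into the image
COPY ./public /usr/share/nginx/html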

This simple setup is enough to demonstrate the basics of pipeline-powered image builds. GitLab automatically clones your Git repository into the build environment, so docker build will use your project's Dockerfile and have the repository's contents available as the build context.

Once the build completes, you can docker push the image to your registry. Otherwise it would only be available to the local Docker installation that ran the build. If you're using a private registry, run docker login first to supply the appropriate authentication details:

script:
  - docker login -u $DOCKER_REGISTRY_USER -p $DOCKER_REGISTRY_PASSWORD

Set the values for the two credential variables by heading to Settings > CI/CD > Variables in the GitLab web UI. Click the blue "Add variable" button to create a new variable and assign a value. GitLab will make these variables available in the shell environment used to run your job.
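Putting these pieces together, a complete shell-executor pipeline for a private registry might look roughly like this; example.com and the two credential variables are placeholders for your own values:

stages:
  - build

docker_build:
  stage: build
  script:
    # Authenticate with the registry using the CI/CD variables defined above
    - docker login -u $DOCKER_REGISTRY_USER -p $DOCKER_REGISTRY_PASSWORD example.com
    - docker build -t example.com/example-image:latest .
    - docker push example.com/example-image:latest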

Building with the Docker Executor

GitLab Runner's Docker executor is commonly used to give each job a completely clean environment. The job runs in an isolated container, so the docker binary on the Runner host is inaccessible.

The Docker executor gives you two possible strategies for building your image: either use Docker-in-Docker, or bind the host's Docker socket into the job's build environment. In either case you use the official docker container image as your job image, which makes the docker command available in your CI script.

Docker in Docker

Using Docker-in-Docker (DinD) to build your images gives you a fully isolated environment for each job. The Docker process that performs the build becomes a child of the container that GitLab Runner creates on the host to run the CI job.

You must register your Docker executor Runner with privileged mode enabled to use DinD. Add the --docker-privileged flag when registering your Runner:

sudo gitlab-runner register -n \
  --url https://example.com \
  --registration-token $GITLAB_REGISTRATION_TOKEN \
  --executor docker \
  --description "Docker Runner" \
  --docker-image "docker:20.10" \
  --docker-volumes "/certs/client" \
  --docker-privileged
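After registration, the relevant section of the Runner's config file (usually /etc/gitlab-runner/config.toml) should end up looking roughly like this; the exact values will reflect what you passed during registration:

[[runners]]
  name = "Docker Runner"
  url = "https://example.com"
  executor = "docker"
  [runners.docker]
    image = "docker:20.10"
    privileged = true
    volumes = ["/certs/client", "/cache"]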

In your CI pipeline, add the docker:dind image as a service. This makes Docker available as a separate container linked to the job's container. Your job can then use the docker command to build images against the Docker instance running in the docker:dind container.

services:
  - docker:dind

docker_build:
  stage: build
  image: docker:latest
  script:
    - docker build -t example-image:latest .
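Because the Runner was registered with a /certs/client volume, modern Docker releases expect TLS to be negotiated between the job and the docker:dind service. A common pattern from GitLab's documentation, sketched here on the assumption you're running Docker 19.03 or newer, is to set DOCKER_TLS_CERTDIR so the certificates land in that shared volume:

docker_build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  variables:
    # Ask the dind service to generate TLS certificates in the shared volume
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker build -t example-image:latest .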

Using DinD gives you fully isolated builds that can't impact each other or your host. The major drawback is more complicated caching behavior: each job gets a brand new environment where previously built layers won't be accessible. You can partially address this by pulling the previous version of your image before building, then using the --cache-from build flag to make the pulled image's layers available as a cache source:

docker_build:
  stage: build
  image: docker:latest
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:latest .
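For the cache to help the next pipeline, the freshly built image has to make it back into the registry, so a push usually follows the build (after a docker login, covered in the registry section below):

docker_build:
  stage: build
  image: docker:latest
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:latest .
    # Push the new image so future jobs can reuse its layers as a cache source
    - docker push $CI_REGISTRY_IMAGE:latest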

Socket Bind Mounts

Mounting your host's Docker socket into your job's environment is another option when using the Docker executor. This gives you transparent caching and eliminates the need to add the docker:dind service to your CI configuration.

To set this up, register your Runner with a --docker-volumes flag that binds the host's Docker socket to /var/run/docker.sock inside job containers:

sudo gitlab-runner register -n \
  --url https://example.com \
  --registration-token $GITLAB_REGISTRATION_TOKEN \
  --executor docker \
  --description "Docker Runner" \
  --docker-image "docker:20.10" \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock

Now jobs that run with the docker image can use the docker binary as usual. Operations will actually occur on your host machine, so containers created by the job become siblings of the job's container instead of children.

This is effectively similar to using the shell executor with your host's Docker installation. Images will reside on the host, enabling seamless use of docker build layer caching.
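With the socket mounted, the job definition looks like the DinD one minus the services block; a minimal sketch:

docker_build:
  stage: build
  image: docker:20.10
  script:
    # This talks to the host's Docker daemon through the mounted socket,
    # so layers already cached on the host are reused automatically
    - docker build -t example-image:latest .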

While this approach can mean better performance, less configuration, and none of the limitations of DinD, it comes with its own unique issues. The most significant are the security implications: jobs can run arbitrary Docker commands on your Runner host, so a malicious project on your GitLab instance could run docker run -it malicious-image:latest or docker rm -f $(docker ps -aq) with devastating consequences.

GitLab also warns that socket binding can cause problems when jobs run concurrently. This happens if you rely on containers being created with specific names: if two instances of a job run in parallel, the second will fail because the container name will already exist on your host.
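If your script does create named containers, one way to sidestep the collision, sketched here with GitLab's built-in $CI_JOB_ID variable and a hypothetical test-db container, is to make the name unique per job:

script:
  # Suffix the container name with the job ID so parallel jobs don't clash
  - docker run -d --name test-db-$CI_JOB_ID postgres:15
  # ...run your tests against the container here...
  - docker rm -f test-db-$CI_JOB_ID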

You should consider using DinD instead if either of these issues is likely to be troublesome. While DinD is generally no longer recommended, it can make more sense for public-facing GitLab instances that run concurrent CI jobs.

Pushing Images to GitLab's Registry

GitLab projects can include an integrated container registry that you can use to store your images. You can view the registry's contents by heading to Packages & Registries > Container Registry in your project's sidebar. If you don't see this link, enable the registry by going to Settings > General > Visibility, project features, permissions and activating the "Container Registry" toggle.

GitLab automatically sets environment variables in your CI jobs that let you reference your project's container registry. Adjust the script section to log in to the registry and push your image:

script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker build -t $CI_REGISTRY_IMAGE:latest .
  - docker push $CI_REGISTRY_IMAGE:latest

GitLab generates a secure set of credentials for each of your CI jobs. The $CI_JOB_TOKEN environment variable contains an access token the job can use to connect to the registry as the gitlab-ci-token user. The registry server URL is available as $CI_REGISTRY.

The final variable, $CI_REGISTRY_IMAGE, provides the full path to your project’s container registry. It is a suitable base for your image tags. You can extend this variable to create subrepositories, such as $CI_REGISTRY_IMAGE/production/api:latest.
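As an example of that pattern, here's a sketch that publishes to a hypothetical production/api sub-repository, tagging each image with the commit SHA (via the built-in $CI_COMMIT_SHORT_SHA variable) as well as latest:

script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker build -t $CI_REGISTRY_IMAGE/production/api:$CI_COMMIT_SHORT_SHA .
  - docker tag $CI_REGISTRY_IMAGE/production/api:$CI_COMMIT_SHORT_SHA $CI_REGISTRY_IMAGE/production/api:latest
  - docker push $CI_REGISTRY_IMAGE/production/api:$CI_COMMIT_SHORT_SHA
  - docker push $CI_REGISTRY_IMAGE/production/api:latest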

Other Docker clients can pull images from the registry by authenticating with an access token. You can generate one from your project's Settings > Access Tokens screen. Add the read_registry scope, then use the displayed credentials to docker login to your project's registry.
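From another machine, pulling then works like any other registry login; the registry host, image path, and token variable below are placeholders for your own instance:

# Log in with your GitLab username and the access token as the password
docker login -u your-username -p $GITLAB_ACCESS_TOKEN registry.example.com

# Pull an image from the project's registry
docker pull registry.example.com/my-group/my-project:latest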

Using GitLab's Dependency Proxy

GitLab's dependency proxy provides a caching layer for the upstream images you pull from Docker Hub. It helps you stay within Docker Hub's rate limits by only pulling image content when it has actually changed. It will also improve the performance of your builds.

The dependency proxy is enabled at the GitLab group level by heading to Settings > Packages & Registries > Dependency Proxy. Once it's enabled, prefix image references in your .gitlab-ci.yml file with $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX to pull them through the proxy:

docker_build:
  stage: build
  image: $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/docker:latest
  services:
    - name: $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/docker:dind
      alias: docker

That's all there is to it! GitLab Runner automatically authenticates with the dependency proxy registry, so there's no need to supply your credentials manually.

GitLab will now cache your images, giving you improved performance as well as resilience against network outages. Note that the services definition also had to be adjusted: environment variables don't work with the inline form used earlier, so the full image name must be specified, along with a command alias to reference it in your script section.

Although we've now set up the proxy for the images used directly by our job stages, more work is needed to add support for the base image in the Dockerfile being built. A regular FROM statement like this won't go through the proxy:

FROM ubuntu:latest

To add this final piece, use Docker build arguments to make the dependency proxy URL available when the Dockerfile is built:

ARG GITLAB_DEPENDENCY_PROXY
FROM ${GITLAB_DEPENDENCY_PROXY}/ubuntu:latest

Then change your docker build command to set the value of the variable:

script:
  - >
    docker build
    --build-arg GITLAB_DEPENDENCY_PROXY=${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}
    -t example-image:latest .

Now your base image will also be pulled through the dependency proxy.
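Putting the dependency proxy pieces together, a complete job could look roughly like this sketch, using the same placeholder image name as before:

docker_build:
  stage: build
  image: $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/docker:latest
  services:
    - name: $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/docker:dind
      alias: docker
  script:
    - >
      docker build
      --build-arg GITLAB_DEPENDENCY_PROXY=${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}
      -t example-image:latest .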

Summary

Docker image builds integrate easily into your GitLab CI pipelines. After the initial Runner setup, docker build and docker push commands in your job's script are all you need to create an image from the Dockerfile in your repository. GitLab's built-in container registry gives you private storage for your project's images.

Beyond basic builds, it's worth integrating with GitLab's dependency proxy to accelerate performance and avoid hitting Docker Hub's rate limits. You should also review the security of your installation by evaluating whether the method you've chosen lets untrusted projects run commands on your Runner host. Although it comes with its own issues, Docker-in-Docker is the safest approach when your GitLab instance is publicly accessible or used by a large user base.
