Any option to create services with pipeline scope/lifetime

Docker Service with Pipeline Lifetime

In our containerization pipelines, we typically have several stages, from building a Docker image through testing to deployment. For this purpose we use shell and Docker executors on a single host that bind-mount /var/run/docker.sock. This is both convenient and dangerous.
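For reference, the relevant part of such a runner configuration looks roughly like this (a sketch; the runner name and default image are illustrative, only the socket mount is the point):

```toml
# /etc/gitlab-runner/config.toml (illustrative excerpt)
[[runners]]
  name = "docker-on-host"   # hypothetical runner name
  executor = "docker"
  [runners.docker]
    image = "docker:27"     # hypothetical default job image
    # Bind-mounting the host's Docker socket lets every job talk to the
    # host daemon directly -- fast and convenient, but all jobs share
    # one daemon, its image store, and its credentials.
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
```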

Pro

  • We use the local image store of the host's container daemon to share new images between jobs: once built, they are available to later test stages straight from the Docker cache.
  • We can first build a helper image with all required tools and reuse it in later jobs for the Docker executor.
  • Pipelines are quite fast

Contra

  • This approach can create concurrency issues if the same image names are used (e.g. for helper images from different branches) or when jobs rely on the latest tag.
  • It violates the rule of isolated build steps: if one job invokes docker login with a CI job token to push an image, it can log out another job that wants to publish, because the Docker CLI does not seem to support per-command credentials.
  • The local image store of the container daemon on the host fills up unpredictably and cannot be cleaned up automatically.
  • When it is cleaned up manually, other pipelines that are not really self-contained suddenly show up, because they silently depended on cached images.
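One partial mitigation we have considered for the docker login race (not a fix for the shared daemon itself) is to point each job at a job-private credential store via the DOCKER_CONFIG environment variable, which the Docker CLI honors. A sketch, with hypothetical job and stage names:

```yaml
# .gitlab-ci.yml (sketch; "push-image" and "publish" are hypothetical names)
push-image:
  stage: publish
  script:
    # Give this job its own Docker config directory so that its
    # docker login / logout cannot clobber the shared ~/.docker/config.json
    # used by concurrent jobs on the same host.
    - export DOCKER_CONFIG="$CI_PROJECT_DIR/.docker"
    - mkdir -p "$DOCKER_CONFIG"
    - echo "$CI_JOB_TOKEN" | docker login -u gitlab-ci-token --password-stdin "$CI_REGISTRY"
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

This isolates credentials per job but does nothing about the shared image store or the concurrency issues around image names.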

Recently, I created a new runner following the Docker-in-Docker tutorial. While it works nicely, there are some drawbacks:

  • All build steps spend a considerable amount of time in "Waiting for services to be up and running (timeout 30 seconds)...", which adds up to several minutes over a pipeline.
  • Re-using images requires extra effort to pass them between build steps (docker save -> archive as artifact -> docker load for each image).
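The image hand-over between Docker-in-Docker jobs currently looks roughly like this (a sketch; the image name myapp and the test command are hypothetical):

```yaml
# .gitlab-ci.yml (sketch): passing an image between DinD jobs via artifacts
build:
  stage: build
  script:
    - docker build -t "myapp:$CI_COMMIT_SHORT_SHA" .    # hypothetical image name
    # Export the image to a tarball so it survives the job's ephemeral daemon.
    - docker save "myapp:$CI_COMMIT_SHORT_SHA" -o image.tar
  artifacts:
    paths: [image.tar]
    expire_in: 1 hour

test:
  stage: test
  script:
    # Re-import the image into this job's fresh Docker daemon.
    - docker load -i image.tar
    - docker run --rm "myapp:$CI_COMMIT_SHORT_SHA" ./run-tests.sh   # hypothetical test entry point
```

With a pipeline-scoped daemon, the save/artifact/load round trip would disappear, because the image would already sit in the shared daemon's cache.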

The ideal solution to this problem would be somewhere in between: a Docker service that starts when the pipeline begins and is destroyed when the pipeline ends. Based on my research, I am fairly certain that this is not currently possible. I even explored the option of a custom executor, but I couldn’t find a way to detect the beginning and end of a pipeline from the hook scripts. I am also aware that this conflicts with the concept that jobs from a single pipeline can be distributed across several executors/runners, but the jobs could be restricted to a single runner via tags. If I remember correctly, something similar is mentioned in the custom executor docs.

I am addressing this question to the community in the hopes that someone has found a solution that I am not aware of, or that this topic gains enough feedback for a feature request on the GitLab Runner project.

Versions

  • Self-managed
  • Self-hosted Runners