How to install third-party software on runners and make it available to pipelines/jobs?

I have GitLab and two registered runners installed as Docker containers. The runners are essentially Docker-in-Docker because they use Docker as the executor. I need to make some third-party software available to runner pipelines/jobs. Specifically, I need to install the Intel C++ compiler, but I'm not sure where I should do it. Should I install the compiler on the runners themselves, or should it go in the runner helper image? The compiler is massive, so I can't realistically include it in the developers' container images. I just want to make it available. Thanks.

  • Self-managed GitLab
  • GitLab (Hint: /help): 15.4.2ee
  • Runner (Hint: /admin/runners): ubuntu-v15.2.2
  • What troubleshooting steps have you already taken? Can you link to any docs or other resources so we know where you have been? The closest thing I can find is installing the software in the runner helper container image (number 4 in this link: Advanced configuration | GitLab). Am I looking in the right place?

Hey there :slight_smile:

If you're using the Docker executor, you define your jobs in the .gitlab-ci.yml file, including which Docker image each job uses. That image definition essentially defines your "job environment", so to say, because your job runs in a container based on that image. Only what is inside that container is accessible and usable to your job; nothing installed directly on the runner host is.
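For example, a minimal job in .gitlab-ci.yml (the job name and script are just placeholders):

lint:
  image: python:3.10-alpine
  script:
    # Everything this script uses must already exist inside the image
    - python my_script.py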

Now, sometimes this image will be enough on its own (e.g. if you just need to run a little Python script, you can use python:3.10-alpine or similar, as above). But sometimes the image is not enough, because you need some additional things inside it. There are two ways to solve this problem:

Option A: Install the additional software in the job definition itself; you can put it in before_script to separate it logically from the rest of the script (see the sketch below). This makes sense if you only have a few things to add on top of the base image and they don't take long to install. However, if you have a lot to configure and it takes a long time, Option B is probably smarter.
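A minimal sketch of Option A, assuming a Debian-based image where the extra tooling is available via apt (the package names are illustrative, not specific to the Intel compiler):

build:
  image: gcc:12
  before_script:
    # Install the few extra packages the job needs on top of the base image
    - apt-get update && apt-get install -y --no-install-recommends cmake ninja-build
  script:
    - cmake -S . -B build -G Ninja
    - cmake --build build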

Option B: Write a Dockerfile (in the same or a separate repo, depending on what suits your use case) that defines your build environment. You can use CI to automatically build and version this image and push it to a registry (GitLab, Docker Hub, etc.), and then use it as the image in your original project's jobs. Example:
Let's say ProjectB stores your environment definition and ProjectA is your main project; this is how the CI/CD in ProjectA would look:

stages:
  - build

build:
  stage: build
  image: my-registry.com/path/to/my/ProjectB/image-name:tag
  script:
    - echo "Here we compile in our custom docker image :) "
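For completeness, here is a minimal sketch of what ProjectB could contain. The base image and packages are placeholders, not a recipe for the Intel compiler specifically. First the Dockerfile:

FROM ubuntu:22.04
# Bake the heavy toolchain into the image once, instead of installing it in every job
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential cmake \
    && rm -rf /var/lib/apt/lists/*

And a job in ProjectB's .gitlab-ci.yml that builds the image and pushes it to the project's GitLab container registry, assuming the Docker executor with a docker-in-docker service and using GitLab's predefined CI_REGISTRY* variables:

build-image:
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"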

I often use this approach, because some of my environments (including some C++ toolchains) require quite a lot of setup, and it's cleaner and easier to maintain as a Docker image. It also pins your environment, so nothing can update automatically and silently break your build.

Now, you've mentioned your C++ compiler is massive. I'd still try this approach, as it provides (IMO) the cleanest way to maintain and execute your jobs. However, if you think it's not doable, you can use a different executor, e.g. SSH. Then you could have a dedicated Ubuntu VM with the compiler installed directly on it, and the runner would SSH into it and execute your script. With this approach, though, there is no automatic cleanup and parallelism becomes a problem, so you'd have to solve those yourself.
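If you go the SSH route, the runner's config.toml would look roughly like this; the host, user, token, and key path are all placeholders (see the GitLab Runner docs for the full set of [runners.ssh] options):

[[runners]]
  name = "ssh-compiler-runner"
  url = "https://gitlab.example.com/"
  token = "REDACTED"
  executor = "ssh"
  [runners.ssh]
    # The VM below is hypothetical; it would have the compiler preinstalled
    host = "compiler-vm.example.com"
    port = "22"
    user = "ci"
    identity_file = "/home/gitlab-runner/.ssh/id_rsa"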

Hope this helps!

Thank you, Paula. I'll give Option B a shot and see what happens. If I come up with a solution, I'll post it here.
