How to install prerequisites that are used by all later stages?

Problem to solve

I’m trying to use GitLab CI to test some Kubernetes CLI utilities like kubectl, helm and others. To download this software, I want to install curl first, but also perhaps things like jq and git. However, if I install curl in the .pre stage it is not available in later steps, and the build steps fail.

Steps to reproduce

Here is my .gitlab-ci.yml file:

# This pipeline will test CLI tools kubectl, helm, rke & others

# Versions of software
variables:
  helm_version:    "v3.14.4"
  kubectl_version: "v1.27.14"

default:
  image: ubuntu:20.04

install-prerequisites:
  stage: .pre
  script:
    - apt -qq update
    - apt -q -y install curl

build-kubectl:
  stage: build
  script:
    - curl -LO https://dl.k8s.io/release/${kubectl_version}/bin/linux/amd64/kubectl
    - curl -LO https://dl.k8s.io/${kubectl_version}/bin/linux/amd64/kubectl.sha256
    - chmod +x kubectl
    - echo "$(<kubectl.sha256)  kubectl" | sha256sum --check | grep OK

build-helm:
  stage: build
  script:
    - curl -O https://get.helm.sh/helm-${helm_version}-linux-amd64.tar.gz
    - curl -O https://get.helm.sh/helm-${helm_version}-linux-amd64.tar.gz.sha256sum
    - echo "$(< helm-${helm_version}-linux-amd64.tar.gz.sha256sum)" | sha256sum --check | grep OK

Both build-kubectl and build-helm fail with curl: command not found. For brevity, I’ll only include the output from build-helm here; build-kubectl fails the same way.

....
Running with gitlab-runner 16.11.0
  on <redacted>
Resolving secrets
Preparing the "docker" executor
Preparing environment
Getting source from Git repository
Executing "step_script" stage of the job script
Using docker image sha256:2abc4dfd83182546da40dfae3e391db0810ad4a130fb5a887c6ceb3df8e1d310 for ubuntu:20.04 with digest ubuntu@sha256:874aca52f79ae5f8258faff03e10ce99ae836f6e7d2df6ecd3da5c1cad3a912b ...
$ curl -O https://get.helm.sh/helm-${helm_version}-linux-amd64.tar.gz
/usr/bin/bash: line 146: curl: command not found
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1

Versions

  • Self-managed
  • GitLab: 16.11.1
  • GitLab Runner (self-hosted): 16.11.0

Hi,

That won’t work: .pre is just a stage whose jobs run before all other stages. Job environments (with the docker executor) are completely isolated, so whatever you install in one job is no longer there in any other job.

IMO, you have two options:

  1. Build your own image and use it as the job environment.
    I do this when I need a stable environment that I use multiple times, or even across projects. You can (in this project, or in another) write a Dockerfile that contains everything you need, and use GitLab CI/CD to automatically build and publish that image. Then reference it in the image section of your jobs in this project (see the Dockerfile sketch further below). This approach can also save time, since the packages aren’t freshly installed on every run. The trade-off is that you have to update the packages and do basic maintenance yourself. If you really only need curl on top of ubuntu, this is overkill.

  2. Use YAML anchors to inject the repetitive part of the script (installing packages).
    This basically means you put the package installation in before_script and reuse it in any job that needs it. It would look something like this (untested, use as inspiration):

    # Hidden "template" job that only carries the shared before_script
    .install-prerequisites: &install_prerequisites
      before_script:
        - apt -qq update
        - apt -q -y install curl

    build-kubectl:
      stage: build
      # Merge the anchor so curl is installed before this job's script runs
      <<: *install_prerequisites
      script:
        - curl .....
    

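For option 1, a rough, untested sketch of what the Dockerfile could look like, just layering the tools from your question on top of ubuntu:

# Dockerfile
FROM ubuntu:20.04
# Tools the pipeline jobs need; curl is the minimum, jq and git were also mentioned
RUN apt-get -qq update \
 && apt-get -q -y install curl jq git \
 && rm -rf /var/lib/apt/lists/*

Once the image is built and pushed somewhere the runner can pull from (for example the project’s container registry), the jobs reference it instead of stock ubuntu. The registry path below is only a placeholder:

default:
  image: registry.gitlab.example.com/mygroup/myproject/ci-tools:latest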
Hope this helps 🙂


Thank you for the clear answer and this convenient shortcut! I’ll give that a try. I didn’t want to have to maintain an image just for a handful of packages; that would be overkill.

I’m surprised to learn that GitLab can’t cache things like /usr/bin between layers. I’ll try to learn more about that.

Jobs that use an image will always spawn a new container, controlled by GitLab Runner.

If you want to persist the installation of specific packages or tools, it is recommended to create a custom container image, push it to the GitLab container registry, and use the image in the CI/CD jobs. More in Pipeline efficiency | GitLab
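As an untested sketch of how that can look in one pipeline: a job builds the Dockerfile in the repository and pushes it using Docker-in-Docker, and later jobs use the pushed image. The ci-tools image name and the docker image versions are placeholders, and the runner must be configured to allow Docker-in-Docker (privileged mode):

build-image:
  stage: .pre
  image: docker:24.0
  services:
    - docker:24.0-dind
  script:
    # Log in to the project's container registry using predefined CI/CD variables
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/ci-tools:latest" .
    - docker push "$CI_REGISTRY_IMAGE/ci-tools:latest"

build-kubectl:
  stage: build
  # Use the freshly pushed image, so curl & co. are already installed
  image: $CI_REGISTRY_IMAGE/ci-tools:latest
  script:
    - curl -LO https://dl.k8s.io/release/${kubectl_version}/bin/linux/amd64/kubectl

$CI_REGISTRY, $CI_REGISTRY_USER, $CI_REGISTRY_PASSWORD and $CI_REGISTRY_IMAGE are predefined CI/CD variables, so nothing extra has to be configured for the login.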

I just want to point out that CI/CD caching is supported, but it would be rather awkward to use for caching the results of apt or apk install invocations. It becomes a lot more useful when you can simply cache a whole directory, like when installing Ruby gems or NPM packages.
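For comparison, a directory cache like that is easy to express. An untested sketch for npm, following the usual pattern of keeping npm’s cache inside the project directory:

build-frontend:
  stage: build
  image: node:20
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - .npm/
  script:
    # Point npm's download cache at a path inside the project so GitLab can cache it
    - npm ci --cache .npm --prefer-offline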


Right: that’s one thing I looked at. GitLab will not cache the /usr directory, and copying a package plus its 14 prerequisite packages to a cacheable location is awkward.
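If I understand apt’s options correctly, the closest equivalent would be pointing apt’s archive cache at a directory inside the project, so at least the downloaded .deb files get cached; the packages are still installed on every run, which is exactly the awkward part. Untested sketch, with the .apt-cache path and the job name as arbitrary placeholders:

variables:
  APT_CACHE_DIR: "$CI_PROJECT_DIR/.apt-cache"

cache:
  paths:
    - .apt-cache/

install-and-test:
  stage: build
  before_script:
    # apt refuses to use an archives dir without a partial/ subdirectory
    - mkdir -p "$APT_CACHE_DIR/partial"
    - apt -qq update
    # Reuse previously downloaded .deb files from the cached directory
    - apt -o Dir::Cache::Archives="$APT_CACHE_DIR" -q -y install curl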

I think you’re saying that I should do the following in my CI pipeline:

  1. Use a job to build an image & push it to the registry
  2. In subsequent jobs, pull it from the registry and run my tests.

Can you clarify? The document suggested above doesn’t describe how to use a custom image as part of a workflow & I haven’t had luck searching on my own. Can you point me to some documentation that describes this?

Edit: I found the following blog post about building a build image first (a meta-build-image). Is that what you are describing? Getting [meta] with GitLab CI/CD: Building build images

Michi is suggesting the same approach as I did in my post (option 1).

And yes, technically the approach is the one described in the blog post you linked.