Ensuring that required packages are not constantly reinstalled

These installation steps are re-run at the start of every pipeline. How can I avoid installing the packages every time?

before_script:
    - apt-get update --yes
    - apt-get install --yes build-essential
    - apt-get install --yes mesa-common-dev
    - apt-get install --yes cppcheck
    - apt-get install --yes cmake
    - apt-get install --yes g++
    - apt-get -y install ninja-build
    - apt-get -y install gcc-arm-none-eabi
    - apt-get -y install gcc

Move them somewhere appropriate. Why did you put them in before_script if they are not supposed to run every time that step is executed?

(Also, be consistent: either use '--yes' or '-y' everywhere, and always place it in the same position relative to 'install'.)
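
For example, a unified version of the commands above might look like this (same packages, just consistent flags):

before_script:
    - apt-get update --yes
    # Use the long --yes form consistently, placed after the sub-command
    - apt-get install --yes build-essential mesa-common-dev cppcheck cmake g++ ninja-build gcc-arm-none-eabi gcc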

It looks like this installs some build dependencies for the project you're working on. The better solution is probably to set something up so that you build in a clean (virtual) machine. You then need to run these installations every time, but you avoid the risk of someone tampering with e.g. g++ on the machine you use to build (see Reflections on Trusting Trust).

If you're building a Debian package, it would be better to parse them from the build dependencies listed in debian/control; a quicker solution would be to just list the packages there and note in your documentation that those packages need to be installed.
If you're not building a Debian package, the solution is probably similar.
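
For instance, if the repository already contains a debian/control file and the image ships apt 1.1 or newer, something along these lines could replace the hard-coded package list (untested sketch; "./" points apt at the local source tree):

before_script:
    - apt-get update --yes
    # Install whatever debian/control lists under Build-Depends
    - apt-get build-dep --yes ./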

I understand what you mean. To be clear, these packages are needed in every pipeline, but installing them every time wastes time. My goal was to find out whether GitLab provides a facility to keep these packages and bootstrap steps cached somewhere. I think it makes sense to use Docker for these things.

Maybe you are looking for caching? apt-get is more awkward to cache than some other package managers, but this is how I use it in CI:

cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
        - .apt/

...

before_script:
    # Configure apt-get to keep its lists and downloaded packages inside the cached .apt/ directory
    - export APT_DIR=.apt
    - export APT_STATE_LISTS=$APT_DIR/lists
    - export APT_CACHE_ARCHIVES=$APT_DIR/archives
    - mkdir -p $APT_STATE_LISTS/partial $APT_CACHE_ARCHIVES/partial
    # Update apt-get lists and install new packages, pointing apt at the cached directories
    - apt-get update -yqq -o dir::state::lists="$APT_STATE_LISTS"
    - apt-get install --no-install-recommends -y -o dir::state::lists="$APT_STATE_LISTS" -o dir::cache::archives="$APT_CACHE_ARCHIVES" build-essential
    ...

Another way of reducing installation time and bandwidth is to use or create container builder images which come pre-installed with the toolchain. I often use the official gcc images as a base and then install additional software on top.

Builder Images for C++

The builder image is rebuilt on-demand (or with a CI/CD schedule on a regular basis).

On GitLab.com SaaS, including the Docker template builds the image and tags the latest version in the container registry, from where it can be referenced in the .gitlab-ci.yml job configuration.

include:
    - template: Docker.gitlab-ci.yml

An example exercise is in

A builder image based on your request

In your example, the before_script apt-get commands need to be moved into the Dockerfile like this:

# https://hub.docker.com/_/gcc
FROM gcc:11.2

ENV DEBIAN_FRONTEND=noninteractive

# Install the additional build tools on top of the gcc base image,
# then remove the apt lists to keep the image small
RUN apt-get update && \
    apt-get install -y cmake build-essential mesa-common-dev cppcheck g++ ninja-build gcc-arm-none-eabi gcc && \
    rm -rf /var/lib/apt/lists/*

CMD ["cmake"]

Note that the FROM instruction pins the GCC version to 11.2. That way you can ensure that you build with a specific compiler version, and only bump the version when you are confident about moving. This allows for more reproducible builds; with :latest, an image update can break your builds and leave you searching for hours before you discover that it was the compiler version, not the code, that changed since last week.

When the image is spawned as a container, the script section runs its commands and overrides CMD. You can use the container for local dev environments too. More tips on Docker image optimization are in Pipeline efficiency | GitLab.
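
For local use, something along these lines should work once the image has been pushed (the registry path and tag are placeholders for your own builder image):

# Mount the current source tree and open a shell inside the builder image
docker run --rm -it -v "$PWD":/src -w /src registry.gitlab.com/namespace/project/builder:latest bash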

In your .gitlab-ci.yml file, you'll add the following job. On self-managed GitLab, you'll need Docker-in-Docker (DinD) to build the container images. Use Docker to build Docker images | GitLab

include:
  - template: 'Docker.gitlab-ci.yml'

# Change Docker build to manual non-blocking
docker-build:
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
      when: manual 
      allow_failure: true
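
If you also want the image rebuilt on a regular basis, an extra rule for pipeline schedules could be added to the same job, roughly like this (sketch; it assumes a pipeline schedule is configured under CI/CD > Schedules):

  rules:
    # ... the manual rule from above ...
    # Hypothetical extra rule: rebuild automatically when the pipeline is started by a schedule
    - if: '$CI_PIPELINE_SOURCE == "schedule"'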

Manually trigger the pipeline job to build and push the builder image.

Navigate to Packages & Registries > Container Registry to copy the container image path.

In your build jobs, you'll add an image: registry.gitlab.com/namespace/project line, or change it globally in .gitlab-ci.yml. Note: On self-managed GitLab, the container registry must be enabled, and the registry path needs to be changed accordingly. GitLab Container Registry | GitLab
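
For example, a build job using the builder image could look roughly like this (the image path, tag and the CMake/Ninja invocation are placeholders for your project):

build:
  image: registry.gitlab.com/namespace/project/builder:latest
  script:
    # Configure and build with the toolchain baked into the builder image
    - cmake -S . -B build -G Ninja
    - cmake --build build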

Conclusion

Builder images are a great way to reduce traffic and CI/CD time. There are a few things to keep in mind though:

  • Consider creating a dedicated project and group namespace for all builder images, with documentation on their purpose, security, update frequency, etc.
  • Consider adding container scanning to the images being used in the CI/CD pipelines (see the sketch after this list). Container Scanning | GitLab
  • Document and schedule maintenance for updating the builder images. Security flaws and regressions can creep in through static, never-updated builder images.
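
A minimal way to get started with scanning is to include the corresponding template next to the Docker template in the builder image project, for example (sketch; check the linked documentation for required variables, especially on self-managed instances):

include:
  - template: 'Docker.gitlab-ci.yml'
  # Scans the built container image for known vulnerabilities
  - template: 'Security/Container-Scanning.gitlab-ci.yml'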