Set the order of the build stages?

I’m new to GitLab CI and I’m trying to build my first pipeline. I’ll borrow an example from the “Keyword reference for the .gitlab-ci.yml file” page in the GitLab docs.

In the following pipeline, can GitLab run the build jobs in a particular order? That is, can I make “job 1” run before “job 2”?

image: ubuntu:focal

stages:
  - build
  - test
  - deploy

job 0:
  stage: .pre
  script: make something useful before build stage

job 1:
  stage: build
  script: make build dependencies

job 2:
  stage: build
  script: make build artifacts

Hi @slasiewski

Do all of the jobs have to be in the same build stage? Stages are executed sequentially, so the most obvious way to do what you want is to put the jobs in different stages:

image: ubuntu:focal

stages:
  - pre
  - build-deps
  - build
  - test
  - deploy

job0:
  stage: pre
  script: make something useful before build stage

job1:
  stage: build-deps
  script: make build dependencies

job2:
  stage: build
  script: make build artifacts

But there are other options. For example, if you are on GitLab SaaS or a self-managed GitLab version >= 14.2, you can use the needs keyword to run job2 after job1 even though they are in the same stage:

image: ubuntu:focal

stages:
  - pre
  - build
  - test
  - deploy

job0:
  stage: pre
  script: make something useful before build stage

job1:
  stage: build
  needs: []
  script: make build dependencies

job2:
  stage: build
  needs: [job1]
  script: make build artifacts

Thanks. We’re still on GitLab 13.x, so I’ll need to try the first method. However, it still isn’t working for me. Let’s say that I have the following in .gitlab-ci.yml:

image: ubuntu:focal

stages:
  - pre
  - build-deps
  - build
  - test
  - deploy

job0:
  stage: pre
  script:
    - apt -qq update
    - apt -qq -y install git

job1:
  stage: build-deps
  script: git clone https://github.com/octocat/Hello-World.git

job2:
  stage: build
  script: cp README README2

What I am expecting to happen is for:

  1. In job0, apt will install git
  2. In Job1, git will clone a repo
  3. In Job2, I simulate a simple build command by copying 1 file.

But this fails, because in job1, the job cannot find git even though it was installed in the previous job. I’m confused about why this would be.

What happens is:

  1. In job0, apt does install git:
Running with gitlab-runner 13.3.1 (738bbe5a)
  on docker runner Zf5ntxSi
Preparing the "docker" executor
00:02
Using Docker executor with image ubuntu:focal ...
Pulling docker image ubuntu:focal ...
Using docker image sha256:ba6acccedd2923aee4c2acc6a23780b14ed4b8a5fa4e14e252a23b846df9b6c1 for ubuntu:focal ...
Preparing environment
00:01
Running on runner-zf5ntxsi-project-943-concurrent-0 via 99842ed86d20...
Getting source from Git repository
00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/stefanl/citest/.git/
Checking out 49db3ca4 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:15
$ apt -qq update
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
All packages are up to date.
$ apt -qq -y install git
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
The following additional packages will be installed:
...
Setting up git (1:2.25.1-1ubuntu3.2) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
Processing triggers for ca-certificates (20210119~20.04.2) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Job succeeded
  2. In job1, the job fails because it cannot find git:
Running with gitlab-runner 13.3.1 (738bbe5a)
  on docker runner Zf5ntxSi
Preparing the "docker" executor
00:03
Using Docker executor with image ubuntu:focal ...
Pulling docker image ubuntu:focal ...
Using docker image sha256:ba6acccedd2923aee4c2acc6a23780b14ed4b8a5fa4e14e252a23b846df9b6c1 for ubuntu:focal ...
Preparing environment
00:00
Running on runner-zf5ntxsi-project-943-concurrent-0 via 99842ed86d20...
Getting source from Git repository
00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/stefanl/citest/.git/
Checking out 49db3ca4 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:01
$ git clone https://github.com/octocat/Hello-World.git
/usr/bin/bash: line 104: git: command not found
ERROR: Job failed: exit code 1

Right, so the reason you can’t find Git in the second job is that you aren’t saving the installed files as artifacts.

HOWEVER! This is a very un-idiomatic use of pipelines, and I’m not sure whether that’s because your question was just an example, or something else.

Normally, if you are using something like a shell runner, we would expect the GitLab runner to be installed on a server that already has Git. The alternative is to use a Docker image in your pipeline jobs and ensure that the image has the relevant dependencies.
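That second alternative, baking the dependencies into your own image, could look something like this (the package list here is just an example; pick whatever your jobs actually need):

```dockerfile
# Hypothetical base image for CI jobs, with the tools pre-installed
# so each job doesn't have to apt-get install them on every run.
FROM ubuntu:focal

RUN apt-get update \
    && apt-get install -y --no-install-recommends git curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*
```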

Each job in your pipeline will (automatically) run like this:

  1. Clone your repo with Git
  2. cd to the root of your cloned repo
  3. Download any relevant artifacts from earlier pipeline stages
  4. Download any cache that was previously saved
  5. Run all the Bash commands in the before_script section of the YAML definition of your job
  6. Run all the Bash commands in the script section of the YAML definition of your job
  7. Run all the Bash commands in the after_script section of the YAML definition of your job
  8. Send any generated artifacts back to the GitLab instance
  9. Store any cached data from this job
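
Those steps map onto keywords in the job definition. A minimal sketch (the job name, commands, and paths are placeholders):

```yaml
build-job:
  stage: build
  # Runs before the main commands
  before_script:
    - echo "preparing"
  # The main commands of the job
  script:
    - make build
  # Cleanup commands that run even if the script fails
  after_script:
    - echo "cleaning up"
  # Files uploaded to GitLab and passed to jobs in later stages
  artifacts:
    paths:
      - build/
  # Downloaded before the job runs and saved again afterwards
  cache:
    paths:
      - .cache/
```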

So, we would not generally expect there to be any Git commands in your .gitlab-ci.yml file, unless you needed to do something slightly unusual, like dealing with a Git submodule, cloning a second repo, etc.

In your example, I see you are using ubuntu:focal as your base image. Each pipeline job will start off inside a fresh ubuntu:focal container, which is why you need artifacts to pass files between stages.

If you really need to install Git, I would personally do something like this:

image: ubuntu:focal

stages:
  - build-deps
  - build

job1:
  stage: build-deps
  before_script:
    - apt -qq update
    - apt -qq -y install git
  script: 
    - git clone https://github.com/octocat/Hello-World.git
  artifacts:
    paths:
      - Hello-World

job2:
  stage: build
  script:
    - cd Hello-World
    - cp README README2

Thanks for the input so far. This works differently than I thought.

We can’t use GitLab runners with the shell executor due to security concerns, and the same goes for the SSH executor. Therefore, I’m trying the Docker method.

My previous example was oversimplified, so here’s a more accurate one. My real goal is to create a simple test environment that uses our pre-existing BATS test suite to validate our Kubernetes clusters. I was hoping to have a simple container with kubectl and enough tooling for the BATS tests. Therefore, what I need to do is this:

  1. Start in the gitlab project that contains the BATS test suite.
  2. Use git submodule to grab the submodules for this repo. This requires that git be installed first.
  3. Use curl to install and verify kubectl using the standard installation method. This requires that curl be installed first.
  4. Run a sanity test on BATS to make sure it works, borrowing the example from the bats-core/bats-core repository on GitHub. This requires that bc and dc be installed first.
  5. Use a shell script to run the actual validation suite against every cluster. This requires that ruby and jq be installed for every job.

Since each job starts in a fresh container, it looks like I need to store files as artifacts to share them among the stages. Also, I need to run apt update and apt install in each job so the correct utilities are installed. I don’t

image: ubuntu:focal

variables:
  K8S_VERSION: 'v1.20.7'

stages:
  - build-deps
  - build
  - test
  - deploy

install-bats:
  stage: build-deps
  before_script:
    - apt -qq update
    - apt -qq -y install git
  script:
    - echo "### Install submodules for BATS"
    - git submodule update --init --recursive
  artifacts:
    paths:
      - libs/bats-assert/
      - libs/bats-support/

install-kubectl:
  stage: build-deps
  before_script:
    - apt -qq update
    - apt -qq -y install curl
  script:
    - echo "### Install kubectl"
    - curl --silent --show-error -LO https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubectl && chmod +x kubectl
    - curl --silent --show-error -LO https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubectl.sha256
  artifacts:
    paths:
      - kubectl
      - kubectl.sha256

test-kubectl:
  stage: test
  script:
    - echo "$(<kubectl.sha256)  kubectl" | sha256sum --check

test-bats:
  stage: test
  before_script:
    - apt -qq update
    - apt -qq -y install bc dc
  script:
    - ./bats-test.sh --validate-bats

# Kubectl is at `.` which needs to be in the PATH
validate-test-cluster1:
  stage: deploy
  when: manual
  before_script:
    - apt -qq update
    - apt -qq -y install ruby jq
  script:
    - export PATH=$PATH:.
    - ./validate-cluster.sh test1

validate-test-cluster2:
  stage: deploy
  when: manual
  before_script:
    - apt -qq update
    - apt -qq -y install ruby jq
  script:
    - export PATH=$PATH:.
    - ./validate-cluster.sh test2

validate-test-cluster3:
  stage: deploy
  when: manual
  before_script:
    - apt -qq update
    - apt -qq -y install ruby jq
  script:
    - export PATH=$PATH:.
    - ./validate-cluster.sh test3

I think maybe you missed part of the last sentence there?

If you have the same dependencies for each job, you might be better off building your own Docker image with those dependencies in, then pushing it to the container registry in your project, and using that instead of a standard Ubuntu image.
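
As a sketch, assuming Docker-in-Docker is available on your runners, a job that builds such an image and pushes it to your project’s registry could look like this (the ci-base image name and run-tests.sh script are placeholders; the CI_REGISTRY_* variables are predefined by GitLab):

```yaml
build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # Log in to this project's container registry using the
    # predefined CI credentials, then build and push the image
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/ci-base:latest" .
    - docker push "$CI_REGISTRY_IMAGE/ci-base:latest"

test-job:
  stage: test
  # Later jobs run inside the freshly built image,
  # so the dependencies are already installed
  image: $CI_REGISTRY_IMAGE/ci-base:latest
  script:
    - ./run-tests.sh
```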

This blog post shows how you might go about using the container registry.
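
Also, for your submodule step: the runner can fetch submodules itself during the “Getting source” phase if you set the documented GIT_SUBMODULE_STRATEGY variable, so you don’t necessarily need git installed inside the job for that:

```yaml
variables:
  GIT_SUBMODULE_STRATEGY: recursive
```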