Stages versus Parallel (Matrix)

I am managing the pipeline of a monorepo containing a design system. This means there is a lot of git activity, with corresponding pipelines and jobs. A while ago we split our single test job into separate jobs (lint, yarn test, yarn types:check, etc.), because this gave us a clean overview of job status directly from the MR, and it improved the speed of the pipeline: 5 small parallel jobs run faster than 1 large slow job.

However, lately we have been running low on GitLab Runners. Or, put better: more and more projects in our organisation demand CI/CD. This causes our jobs to often sit waiting for several minutes, sometimes up to 15 to 20 minutes, before they get picked up by a runner.

Now, I was wondering if there is a functional difference under the hood between using stages and using the parallel + matrix keywords to run parallel jobs, since the documentation for the latter clearly states:

> Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently.
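For reference, the runner-side knob that requirement refers to is the `concurrent` setting (and the per-runner `limit`) in the runner's `config.toml`; the values below are illustrative:

```toml
# /etc/gitlab-runner/config.toml (illustrative values)
concurrent = 4        # global cap: how many jobs this runner process may run at once

[[runners]]
  name = "docker-runner"   # name is hypothetical
  executor = "docker"
  limit = 4                # per-runner cap on concurrent jobs
```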

With the matrix solution I indeed see that the same runner is being re-used, but I am curious whether the same applies when simply using stages. This is because stages feel more intuitive:

```yaml
danger ci:
  extends: .test
  script:
    - yarn danger ci --failOnErrors

flow checking:
  extends: .test
  script:
    - yarn run flow

linting:  # job name reconstructed
  extends: .test
  script:
    - yarn run lint
```

than doing:

```yaml
checks:  # job name reconstructed
  extends: .test
  script:
    - 'eval "$JOB_CMD"'
  parallel:
    matrix:
      - JOB_CMD:
          - 'yarn run packages:check'
          - 'yarn danger ci --failOnErrors'
          - 'yarn run lint'
          - 'yarn run types:check'
          - 'yarn run test:coverage'
  artifacts:
    reports:
      junit: coverage/junit.xml
    paths:
      - coverage
```

The matrix approach also falsely looks for the coverage artifacts in jobs where they are not relevant.
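One way to avoid that false coverage lookup is to keep the artifacts on a dedicated coverage job only, so the other checks declare no artifacts at all (a sketch; the job name is hypothetical):

```yaml
coverage:
  extends: .test
  script:
    - yarn run test:coverage
  # only this job declares the coverage artifacts, so the other
  # checks no longer try to collect files that do not exist for them
  artifacts:
    reports:
      junit: coverage/junit.xml
    paths:
      - coverage
```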

Ideally we would want to spin up a single Docker instance and run the 5 jobs in parallel on that one instance. But as far as I have read, that does not seem possible with GitLab CI. Please correct me if I am wrong.
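A possible workaround, not a GitLab feature, is to run the checks concurrently inside a single job's shell with plain job control, so only one container is spun up. This is a minimal sketch; the yarn commands from above are stubbed with `echo` so it runs standalone (in a real job you would `eval "$cmd"` instead):

```shell
#!/bin/bash
# Run several checks concurrently in one shell and propagate failures.
set -u
fail=0
pids=()

for cmd in "yarn run lint" "yarn run types:check" "yarn run test:coverage"; do
  echo "starting: $cmd" &   # stub; in CI replace with: eval "$cmd" &
  pids+=($!)
done

# Wait for each background check and record any non-zero exit code,
# so one failing check fails the whole job instead of being lost.
for pid in "${pids[@]}"; do
  wait "$pid" || fail=1
done

if [ "$fail" -eq 0 ]; then echo "all checks passed"; fi
```

The trade-off is that the checks share one container's CPU and memory, and their log output is interleaved, which is exactly what the split jobs currently avoid.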

- What version are you on? Are you using self-managed or GitLab.com?
  - GitLab: 14.7.2, self-managed
  - Runner: AWS