In a CI pipeline, can I author procedural code to determine if a particular job will run?

Problem to solve

In CI pipelines, it could be valuable to execute a procedure that determines which other jobs to execute.

Is this possible in GitLab CI?

I am willing to use any procedural language to drive this logic, though bash or sh would presumably align best with convention; YAML itself is obviously not a procedural language.

I have been trying to do this by setting named values: when the logic decides that certain jobs should run, it passes named values to those jobs. So far I have not succeeded with this approach. Values passed through e.g. dotenv reports never seem able to influence the environment variables that drive the job rules logic.

There is a limitation on any values calculated in a procedure, as described in the GitLab CI/CD variables documentation:

These variables cannot be used as CI/CD variables to configure a pipeline (for example with the rules keyword), but they can be used in job scripts.

So this suggests that nothing emanating from a job can determine whether another job runs or not. Is that correct?

So far I can only find complaints in the forums that procedural job control is impossible, and I have not succeeded with any of the complex workarounds that people claim will allow a procedure authored in a pipeline (e.g. a script) to determine whether some other CI job should run in that pipeline or in a child pipeline.

The only thing that looks promising so far is dynamic (generated) child pipelines, but these are hard to inspect and debug, and in my view it is irresponsible to hand them over to a team to operate without being able to debug them.
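For reference, the dynamic approach I mean is the documented generated-child-pipeline pattern, roughly as sketched below (the generator script name is hypothetical):

```yaml
# generate a pipeline definition on the fly, then run it as a child pipeline
generate-pipeline:
  stage: build
  script:
    # generate_pipeline.sh is a hypothetical script that decides which jobs to emit
    - ./generate_pipeline.sh > generated-pipeline.yml
  artifacts:
    paths:
      - generated-pipeline.yml

run-generated:
  stage: deploy
  trigger:
    include:
      - artifact: generated-pipeline.yml
        job: generate-pipeline
```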

I am happy to launch a child pipeline if that is the only way to reliably calculate and pass a value that can be used to conditionally run a job (i.e. all values need to be known before the pipeline is launched).

Steps to reproduce

I have tried dotenv reports. Although these values can be passed to jobs' scripts fairly reliably, they cannot be evaluated as part of job rules. They are also unreliable in that it seems impossible to establish predictable behaviour when they are unset. Every scenario I have established so far requires all jobs to run unconditionally; I can never conditionally skip a job.
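To illustrate the attempt (all names here are illustrative, not from my real pipeline): the dotenv value does show up in a later job's script, but that job's rules: are evaluated when the pipeline is created, before anything has run, so they never see it.

```yaml
compute:
  stage: build
  script:
    # decide.sh is a hypothetical script that calculates the value
    - echo "MYVALUE=$(./decide.sh)" >> build.env
  artifacts:
    reports:
      dotenv: build.env

consume:
  stage: test
  needs: [compute]
  script:
    - echo "MYVALUE is $MYVALUE"   # works - dotenv variables are injected at runtime
  rules:
    # evaluated at pipeline creation, before compute has run,
    # so $MYVALUE is unset here and this job is simply never added
    - if: '$MYVALUE == "go"'
```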

Configuration

It’s not straightforward to provide a configuration, since I don’t know how this should be done; it seems impossible, which is surprising given what GitLab CI is sold for. Hence the question.

Versions

Please select whether options apply, and add the version information.

  • Self-managed
  • GitLab.com SaaS
  • Dedicated
  • Self-hosted Runners


  • GitLab v17.1.3-ee

The eventual answer here seems to be…

  • job rules in the same pipeline file can’t consume named values from dotenv reports (rules are evaluated when the pipeline is created, before any job has run)

  • job rules in a child pipeline CAN consume dotenv values, passed into the child’s execution via variables: on the trigger job

  • write a parent job that ALWAYS produces a dotenv report artifact containing the same set of named variables

  • every named variable you plan to use must always be set in that job (e.g. set it to None for the no-op case)

  • don’t add any rules that limit when that job runs; it should always set the variables

  • downstream jobs fail in strange ways if the consumed variables are absent (e.g. the job that writes them is skipped, or they were never assigned)

  • pass the variables when you call the child trigger:include job (e.g. MYVALUE: $MYVALUE)

In this way, CHILD jobs can be included or omitted according to logic you write in a script in the parent pipeline, by reading $MYVALUE in job rules: in the child pipeline, as sketched below.
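A minimal sketch of the parent side, with hypothetical names (decide.sh, MYVALUE, child-pipeline.yml):

```yaml
# .gitlab-ci.yml (parent)
decide:
  stage: build
  # no rules: this job always runs and always assigns MYVALUE,
  # writing "None" in the no-op case
  script:
    - echo "MYVALUE=$(./decide.sh)" >> decide.env
  artifacts:
    reports:
      dotenv: decide.env

trigger-child:
  stage: deploy
  needs:
    - job: decide
      artifacts: true        # make the dotenv value available to this trigger job
  variables:
    MYVALUE: $MYVALUE        # forward the computed value to the child pipeline
  trigger:
    include: child-pipeline.yml
```

And the child side, where rules: can finally evaluate the value because it is known before the child pipeline is created:

```yaml
# child-pipeline.yml
conditional-job:
  script:
    - echo "running because MYVALUE=$MYVALUE"
  rules:
    - if: '$MYVALUE != "None"'   # the job is omitted entirely when the parent decided "None"
```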

So it’s only through child pipelines that procedural logic can determine whether a job will run. Single-level pipelines cannot do this.