Non-redundant pipelines are auto-cancelled


Since some of our CI jobs take quite a long time to finish, I want to use `rules` to run them only when necessary (based on changed files). Example configuration:

build-job1:
  stage: build
  interruptible: false
  script:
    - sleep 60
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" || $CI_PIPELINE_SOURCE == "push"
      changes:
        - "dir1/*"

build-job2:
  stage: build
  interruptible: true
  script:
    - sleep 30
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" || $CI_PIPELINE_SOURCE == "push"
      changes:
        - "dir2/*"

I also want GitLab to cancel redundant, still-running pipelines after a new push, so the "Auto-cancel redundant pipelines" project setting is enabled.

What do I see?

  1. Push a change to dir1 → build-job1 is triggered
  2. Push a change to dir2 immediately afterwards → the first pipeline is cancelled and a new one containing only(!) build-job2 is started
  3. The pipeline eventually becomes “OK”.

As a result, build-job1 never finishes, and I could miss a build error!

What do I expect?

I expected the first pipeline to be cancelled only if all of its jobs are also contained in the new pipeline; otherwise, it should continue normally. In a merge request, merging should only be allowed once all jobs triggered by the changes are "OK" (not only those triggered by the last push).
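For illustration, the expectation above can be phrased as a simple set-containment check. This is a hypothetical helper, not GitLab's actual cancellation logic; job names stand in for the real job definitions:

```python
def safe_to_auto_cancel(old_pipeline_jobs, new_pipeline_jobs):
    """Return True only if every job of the old pipeline also runs
    in the new pipeline, i.e. no work is lost by cancelling it."""
    return set(old_pipeline_jobs) <= set(new_pipeline_jobs)

# Scenario from above: the second push only triggers build-job2,
# so cancelling the first pipeline would lose build-job1's result.
print(safe_to_auto_cancel({"build-job1"}, {"build-job2"}))  # False
```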

My questions

Do I see a bug or a feature? :slight_smile:

How can I achieve my expectation? I do not want to disable auto-cancellation for these long-running jobs. As an alternative to not cancelling the first pipeline, it would also be acceptable to "extend" the new pipeline with the cancelled jobs from the first one, but that seems more complicated to me.
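As a rough sketch of that "extend the new pipeline" idea: compare the job lists of both pipelines (as fetched, for instance, from the jobs API) and re-run only the cancelled jobs that the new pipeline does not cover. The helper name is hypothetical; actually re-running a job would use something like `POST /projects/:id/jobs/:job_id/retry`:

```python
def jobs_to_reinstate(cancelled_pipeline_jobs, new_pipeline_jobs):
    """Jobs that were cancelled in the old pipeline and are NOT part of
    the new pipeline. These are the ones whose results would be lost,
    so they would have to be re-run (e.g. via the jobs retry endpoint)."""
    return sorted(set(cancelled_pipeline_jobs) - set(new_pipeline_jobs))

# Scenario from above: build-job1 was cancelled and is missing
# from the new pipeline, so it is the one to re-run.
print(jobs_to_reinstate(["build-job1"], ["build-job2"]))  # ['build-job1']
```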

I’m on a self-managed GitLab instance, version 14.3.3-ee.

Best regards,
