What you want to do should certainly be possible. I think you probably want something like this:
stages:
  - ...
  - B
  - C
  - D
  - ...

...

jobC:
  stage: C
  rules:
    - if: ...
      when: on_success
      allow_failure: true
  ...

jobDsucceed:
  stage: D
  rules:
    - if: ...
      when: on_success
  ...

jobDfailed:
  stage: D
  when: on_failure
  ...
It probably doesn’t help that the when keyword seems to take an on_failure value when it’s used on its own, but not when it is part of a rule. I’m not sure how this inconsistency in the YAML reference came about!
Thank you for your help @snim2
That is not exactly what I need; my original post was unclear, so I have edited it.
In fact, for jobs B and C, I want the pipeline to fail if either of them fails. Likewise, if B fails, C should not run; the failure job should run directly instead.
With your idea I end up with:
stages:
  - B
  - C
  - failed

B:
  stage: B
  allow_failure: true
  script:
    - echo "Job B"

C:
  stage: C
  needs:
    - job: B
  when: on_success
  allow_failure: true
  script:
    - echo "Job C"

failed:
  stage: failed
  needs:
    - job: B
    - job: C
  script:
    - echo "fail job"
  when: on_failure
The problem is that, because B is allowed to fail, the pipeline still runs C, which ends up successful, so the failure job never runs.
Hmm. I don’t think there’s any way to say “if this job or that job fails, then fail the whole pipeline”, so I think you are going to end up with some sort of kludge.
The way I would approach this is to have jobs B and C upload some sort of artifact, and then have a job D which needs B and C, checks their artifacts, and only passes if the artifacts show that B and C also passed.
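A sketch of that artifact-based approach as a GitLab CI fragment; the stage names, status file names, and the actual work commands in B and C are illustrative assumptions, not something from this thread:

B:
  stage: B
  allow_failure: true
  script:
    # Record the real result in a file instead of failing the job,
    # so the pipeline keeps going either way
    - (echo "Job B" && echo pass > b_status) || echo fail > b_status
  artifacts:
    paths:
      - b_status

C:
  stage: C
  allow_failure: true
  script:
    - (echo "Job C" && echo pass > c_status) || echo fail > c_status
  artifacts:
    paths:
      - c_status

D:
  stage: D
  needs:
    - job: B
    - job: C
  script:
    # grep -qx exits non-zero unless the file content is exactly "pass",
    # which fails D and therefore the whole pipeline
    - grep -qx pass b_status
    - grep -qx pass c_status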
Your other option might be to just combine B and C into a single job, but I imagine you want to avoid that for the obvious reasons.
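If combining them is acceptable, the simplest version is a single job whose script runs both steps in order; the shell stops at the first failing command, so a failure in the B step skips the C step and fails the pipeline (again only a sketch, with placeholder commands):

BC:
  stage: test
  script:
    - echo "Job B"  # if this command fails, the next line never runs
    - echo "Job C"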