We are using child pipelines for almost everything, and I would expect that a job containing the trigger keyword does not block the pipeline when it is allowed to fail, even with strategy: depend.
Our current setup:
```yaml
stages:
  - create pipeline
  - triggers

create_pipeline:
  stage: create pipeline
  image: my_image
  script:
    - ./generate_pipeline.py
  artifacts:
    paths:
      - path/to/generated/child_1.yml
      - path/to/generated/child_2.yml

pipeline:
  stage: triggers
  rules:
    ...
  trigger:
    include:
      - artifact: path/to/generated/child_1.yml
        job: create_pipeline
    strategy: depend
  allow_failure: false

testflight:
  stage: triggers
  rules:
    ...
  trigger:
    include:
      - artifact: path/to/generated/child_2.yml
        job: create_pipeline
    strategy: depend
  allow_failure: true
```
Now I would expect that if the testflight child pipeline fails, say on build, the whole child pipeline fails, since within the scope of the child pipeline the testflight_build job is not allowed to fail.
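For illustration, the generated child_2.yml might look roughly like this (a hypothetical sketch; the job name testflight_build comes from the failure scenario above, the script line is invented):

```yaml
# Hypothetical shape of the generated child_2.yml:
# testflight_build is NOT allowed to fail inside the child pipeline,
# so its failure fails the child pipeline as a whole.
stages:
  - build

testflight_build:
  stage: build
  script:
    - ./build_testflight.sh  # placeholder for the real build step
  allow_failure: false       # the default, shown here for emphasis
```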
However, since the trigger job testflight in the parent pipeline is set to allow_failure: true, I would also expect it not to block the parent pipeline if the child pipeline fails.
Now I think the issue is that because the trigger job testflight in the parent pipeline has a trigger and strategy: depend, it is no longer treated as a normal job, and hence allow_failure: true does not take effect.
What would be a solution here? We want to know whether the child pipeline is running, successful, or failed, but it seems that strategy: depend prevents us from setting allow_failure: true.
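One direction I have considered (a sketch, not a verified solution): drop strategy: depend so the trigger job no longer mirrors the child status, and instead add a final job that reads each child pipeline's status via the "list pipeline bridges" endpoint (GET /projects/:id/pipelines/:pipeline_id/bridges). CI_API_V4_URL, CI_PROJECT_ID and CI_PIPELINE_ID are GitLab's predefined CI variables; API_READ_TOKEN is a hypothetical CI/CD variable holding an access token with API read scope, since the job token may not be allowed to call this endpoint.

```python
#!/usr/bin/env python3
"""Sketch: report child pipeline statuses without blocking the parent."""
import json
import os
import urllib.request


def summarize(bridges):
    """Map each bridge (trigger) job name to its downstream pipeline status."""
    return {
        b["name"]: (b.get("downstream_pipeline") or {}).get("status", "unknown")
        for b in bridges
    }


def fetch_bridges():
    """Call the pipeline bridges endpoint for the current parent pipeline."""
    url = (
        f"{os.environ['CI_API_V4_URL']}/projects/{os.environ['CI_PROJECT_ID']}"
        f"/pipelines/{os.environ['CI_PIPELINE_ID']}/bridges"
    )
    # API_READ_TOKEN is an assumed CI/CD variable, not a predefined one.
    req = urllib.request.Request(
        url, headers={"PRIVATE-TOKEN": os.environ["API_READ_TOKEN"]}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    summary = summarize(fetch_bridges())
    print(json.dumps(summary, indent=2))
    # Fail this job only for children that must block the parent,
    # e.g. `pipeline` blocks, `testflight` does not.
    if summary.get("pipeline") == "failed":
        raise SystemExit(1)
```

This keeps the child statuses visible in the parent pipeline (via the report job's log and exit code) while letting testflight failures pass through without blocking.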