Start job if manual job is successful

I’m struggling to set up a pipeline that only starts the second job if the first job was started manually and returned success. I searched online but all I can find are other people’s failed attempts to do this, one of which was a post in this forum.
(I think my issue is slightly different, but if it is a duplicate please let me know.)

My setup has three stages: build, deploy, and reset_vm. For simplicity, let’s assume each stage has only one job. The build and reset_vm jobs can be started manually, while the deploy job should start automatically after the build job ends successfully.

If I keep allow_failure: true (which is the default for manual jobs), the deploy job is triggered even if there’s an error in the build job. I wish there were an “optional” attribute for manual jobs that would allow them to be skipped but would still cause the pipeline to fail if they throw an error. I found an issue on GitLab raised four years ago about this, but it has not been resolved.
ChatGPT told me to add “status: success” after “needs: Build”, but that seems to be incorrect syntax.
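For reference, the keys a `needs` entry accepts are documented (e.g. `job`, `artifacts`, `optional`); `status` is not among them, which would explain the syntax error. A sketch of what a valid entry can look like:

```yaml
needs:
  - job: Build
    artifacts: true   # also download Build's artifacts
    optional: true    # tolerate Build being absent from the pipeline
```

As far as I can tell, none of these keys lets you require that the needed job *succeeded*, which is the behavior I’m after.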

If I set allow_failure: false for my manual jobs, the pipeline stays in a waiting status until someone manually triggers all manual jobs that other jobs depend on. Also, I can’t start the reset_vm job independently of the manual build job in this scenario.

Can someone help me figure out how to set up my pipeline correctly? Or tell me if I have any misconceptions about these GitLab constructs? Thank you in advance.

To illustrate, this YAML file contains both versions in one pipeline:

stages:
  - build
  - deploy
  - reset_vm

Build A (allow_failure true):
  script:
    - echo "Starting build with error"
    - jlkajs dlkjas
  when: manual
  stage: build

Build B (allow_failure false):
  script:
    - echo "Starting build with error"
    - jlkajs dlkjas
  when: manual
  stage: build
  allow_failure: false

Deploy A:
  script:
    - echo "Starting deploy"
  stage: deploy
  needs:
    - job: Build A (allow_failure true)
      artifacts: true

Deploy B:
  script:
    - echo "Starting deploy"
  stage: deploy
  needs:
    - job: Build B (allow_failure false)
      artifacts: true

Reset VM:
  script:
    - echo "Starting reset_vm"
  when: manual
  stage: reset_vm
  

This is the pipeline after manually starting only the Build A job:


In the pipeline overview, the status stays “in progress” forever:

PS: I did this on gitlab.com in GitLab Enterprise Edition 16.0.0-pre

Hi Sebastian,

I think ChatGPT is at least partially right, in my experience.

Normally, all jobs in a stage run at the same time; you separate jobs into stages so the stages run one after another, continuing on success or failure as you define.

By using ‘needs:’ on the ‘deploy’ job, you tell GitLab to build a directed acyclic graph (DAG) that determines the execution order.
Since each ‘deploy’ job depends on its build job, GitLab can run the two build jobs in parallel, and then, for each of A and B, the respective deploy job can follow as soon as its build is finished.

In your case, you’ve made the build stage a manual stage, but that’s fine. The DAG should still trigger the deploy job once its build job is successful.
https://docs.gitlab.com/ee/ci/directed_acyclic_graph/
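A minimal sketch of such a DAG setup (job names are just placeholders):

```yaml
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - echo "building"

deploy:
  stage: deploy
  needs:
    - build            # deploy is scheduled as soon as build finishes
  script:
    - echo "deploying"
```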

HTH!

Thank you for your reply.
You are correct that the deploy job starts as soon as the build job is finished, and that is exactly the problem: it starts whenever the build finishes, regardless of whether it succeeded or failed.
You described how I can make a stage depend on another stage’s success. However, I do not see a way to make a job depend on another (manual) job’s success, except by making the manual job non-optional (which leads to the other problems described above).
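The only workaround I can think of so far is this untested sketch: have the build job write a marker file as its last step and publish it as an artifact, then have the deploy job fail unless the marker is present. The file name `build_ok` is just something I made up:

```yaml
Build A (allow_failure true):
  stage: build
  when: manual
  script:
    - echo "Starting build"
    - touch build_ok          # only reached if all previous commands succeeded
  artifacts:
    paths:
      - build_ok              # uploaded only on success (the default)

Deploy A:
  stage: deploy
  needs:
    - job: Build A (allow_failure true)
      artifacts: true
  script:
    - test -f build_ok        # fail the deploy if the build did not succeed
    - echo "Starting deploy"
```

That still doesn’t make the pipeline itself fail when the build fails, but at least the deploy wouldn’t run on top of a broken build.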