I have three different stages, and all of them should run in the same container, because there are many customizations and installations that cannot easily be reproduced in or carried over to a new container.
1. Install the application and run the main logic (the deploy command).
2. Undeploy command.
3. Clean up.
Whether Stage 1 fails or succeeds, Stage 2 should still execute. Stage 3 should execute only if Stage 1 or Stage 2 fails.
Since there is no option to use the same container for three different stages, I also tried combining all of them into a single stage. But once the Stage 1 portion fails, it does not proceed to the next steps.
If I set `allow_failure: true`, the job no longer fails even when an essential portion fails.
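For the ordering requirement alone (not the shared-container requirement), here is a minimal `.gitlab-ci.yml` sketch with hypothetical job and script names: `when: always` makes the undeploy job run whether the deploy job failed or succeeded, and `when: on_failure` makes the cleanup job run only when a job in an earlier stage failed.

```yaml
stages:
  - deploy
  - undeploy
  - cleanup

deploy_job:
  stage: deploy
  script:
    - ./install.sh               # install the application
    - ./deploy.sh                # run the main logic (deploy command)

undeploy_job:
  stage: undeploy
  when: always                   # runs whether deploy_job failed or succeeded
  script:
    - ./undeploy.sh

cleanup_job:
  stage: cleanup
  when: on_failure               # runs only when a job in an earlier stage failed
  script:
    - ./cleanup.sh
```

Note that each of these jobs still starts its own container, which is exactly the limitation discussed below.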
Is it really the same container that you want to use or just the same image? If it’s really the same container, do you know which files you need to keep during all three stages of the pipeline?
Let me elaborate:
stage 1 (first container): builds the product RPM file and shares it with stage 2 as an artifact.
stage 2 (second container): installation and configuration.
stage 3 (second container): product testing. Just sharing artifacts won't suffice; it requires many configurations and installations in multiple locations. The stage also needs to be separate, so that developers can distinguish exactly where it failed.
stage 4 (second container): if the product tests fail, cleanup should happen.
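Purely to illustrate that shape, here is a sketch with hypothetical image, job, and script names; the RPM is passed from the build job to the later jobs via `artifacts`:

```yaml
stages:
  - build
  - install
  - test
  - cleanup

build_rpm:
  stage: build
  image: builder-image:latest      # hypothetical first-container image
  script:
    - ./build_rpm.sh
  artifacts:
    paths:
      - output/*.rpm               # downloaded by later-stage jobs by default

install_and_configure:
  stage: install
  image: product-image:latest      # hypothetical second-container image
  script:
    - ./install_and_configure.sh

product_test:
  stage: test
  image: product-image:latest      # same image, but a brand-new container
  script:
    - ./run_tests.sh

cleanup:
  stage: cleanup
  image: product-image:latest
  when: on_failure                 # runs only if a job in an earlier stage failed
  script:
    - ./cleanup.sh
```

Jobs in later stages download the artifacts of earlier jobs by default, but as noted in the thread, every job still runs in a freshly created container.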
This sounds like a case for `needs`. From the GitLab docs: "In GitLab 14.1 and later you can refer to jobs in the same stage as the job you are configuring. This feature is enabled on GitLab.com and ready for production use. On self-managed GitLab 14.2 and later this feature is available by default."
But all of these stages will create a new container for each job. I need stage 2 and stage 3 to use the same container (not just the same image), and the cleanup should run only when the install_and_test stage fails. Can you tell me how to do that as well?
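One workaround, sketched below with hypothetical script names: with the Docker executor every job starts a fresh container, so two jobs cannot literally share one. Merging the install/configure and test steps into a single job keeps them in the same container, and as far as I know `after_script` runs in that same container even when `script` fails, so `CI_JOB_STATUS` (available in `after_script` with GitLab Runner 13.5 and later, if I recall correctly) can gate a failure-only cleanup. The trade-off is that developers lose the separate stage that shows exactly where it failed.

```yaml
install_and_test:
  stage: test
  image: product-image:latest          # hypothetical second-container image
  script:
    - ./install_and_configure.sh       # former stage 2 work
    - ./run_tests.sh                   # former stage 3 work, same container
  after_script:
    # after_script also runs when script fails; CI_JOB_STATUS
    # (success/failed/canceled) restricts the cleanup to failures only
    - if [ "$CI_JOB_STATUS" = "failed" ]; then ./cleanup.sh; fi
```

Alternatively, a separate cleanup job with `when: on_failure` keeps the stages distinct in the UI, but it runs in a new container and has to redo whatever setup the cleanup needs.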
I have the same problem; the `needs` clause doesn't solve it: jobs from the same stage are still executed in different containers.
Has anyone addressed the situation in any way?