Hi, I'm new to GitLab and still trying to figure things out. We have a project, and I can't quite wrap my head around how we should handle the CI/CD portion. At the end of the day, we want to be able to build a container, push it to Docker Hub, and then deploy to an on-premise k3s cluster. The build portion seems pretty straightforward using shared runners, but I am not sure how to handle the deploy portion, since the shared runners don't have access to our on-prem system. I was thinking of installing a runner locally that would be able to do the deploy, but what I am not totally clear on is whether this can be broken into two processes, where the build is handled by the shared runner and a chained step then deploys using the locally installed runner. Is this possible? Is this even the best way to tackle this?
Regards.
Yes, this is possible. Each of the “steps” in a pipeline, which GitLab calls a “job”, can be executed by a different runner. You can tag both the job and the runner with matching labels.
Add your local runner, as you were planning, and configure it with a tag. See https://docs.gitlab.com/ee/ci/runners/#using-tags for details on adding tags to runners. You probably don't want to allow this runner to run untagged jobs (such as your build step), but maybe you don't care about that aspect.
Then, in your .gitlab-ci.yml, add the same tag to the deployment job, per https://docs.gitlab.com/ee/ci/yaml/README.html#tags.
The shared runners will continue picking up the build job, but will stay away from the deployment job (because they don’t have the correct tag to execute it). The on-prem runner will stay away from the build job (because it only runs jobs with a specific tag), but will handle the deployment job.
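For illustration, a minimal sketch of what the tagged deployment job could look like (the tag name on-prem-runner is just a placeholder; use whatever tag you registered your runner with):

deploy:
  tags:
    - on-prem-runner   # only runners registered with this tag will pick up this job
  script:
    - ...              # your deployment commands against the k3s cluster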
Hi Divido,
Yes, this totally makes sense. The one thing I am still not certain about is making one step dependent on another: I would only want my deployment step to kick off after the build step. Everything I've seen so far indicates that the steps run concurrently rather than sequentially. Or am I misunderstanding that?
Regards.
Sean,
Yes, you can make the jobs dependent as well. You are correct that the default behavior is to run everything in parallel. The easiest way (IMO) to add this dependency is to create two different “stages”. The documentation on this is here: https://docs.gitlab.com/ee/ci/yaml/README.html#stage
Basically, we start by defining a number of stages, such as:
stages:
  - build
  - deploy
Then, define the stage that each job belongs to:
containerize:
  stage: 'build'
  script:
    - ...

deploy:
  stage: 'deploy'
  tags: ['on-prem-runner']
  script:
    - ...
This will cause the pipeline visualization to put the jobs in different columns. Each job waits until all columns to its left are complete; within a column, everything runs in parallel. There is a lot of nuance here: your runners have to have enough concurrency to run everything, you can define “needs” statements to build more complicated dependency graphs (see the sketch below), and there are always child pipelines. These are great features to learn about as your pipeline gets more and more complex, but for your basic two-job, serial pipeline, stages are the best way to go.
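To show what that would look like, here is a sketch of a “needs” relationship, reusing the job names from the example above (it is optional for a simple two-stage pipeline, since the stage ordering already serializes the jobs):

deploy:
  stage: 'deploy'
  tags: ['on-prem-runner']
  needs: ['containerize']   # start as soon as containerize succeeds, without waiting for the whole build stage
  script:
    - ...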
By the way, if you don't assign a job to a stage, it is placed in the “test” stage by default. That's why everything runs in parallel by default: everything is in the same stage.
Also, the stage names and job names are completely up to you. Obviously, you can't use reserved keywords (you can't name a job “stages”, for instance); but otherwise you can name them anything that makes sense to you.