We are a small gamedev studio that uses GitLab CI/CD to have regular builds available.
For most of the time we only had a single custom shell runner set up that ran every pipeline.
I then started my own runner, so we now have 2 runners available.
But now we have the issue that jobs get distributed between the machines, which means that build results aren’t available anymore.
I know I could share the artifacts, but that would totally kill our pipeline duration, as it would mostly be busy with up- and downloading the artifacts.
My question now is: Is there any way to make a runner “stick” to a pipeline, or, vice versa, have each pipeline be executed by a single runner only?
I know that tags are supposed to solve this problem, however both of my runners are equal. I just want the pipeline to stick to a single runner. I tried using the CI/CD variables that are available from GitLab 14.1 (Keyword reference for the `.gitlab-ci.yml` file | GitLab) and dynamically setting the tag to $CI_RUNNER_ID, but that is evaluated too early, when the runner ID is not yet set.
The easy way to do this is to add a tag to your runner (you can do this via Settings → CI/CD → Runners, then edit your specific runner and add the tag).
Then, in your CI config, you can add the new tag to the jobs you want to run on that runner. For example, if you tagged the runner my-runner-tag (the tag name here is just an example), a job would look like:

    build-job:
      tags:
        - my-runner-tag
      script:
        - echo HELLO
You can add as many tags as you like to both runners and jobs to control where your code is executed.
I tried fiddling around with the runner tags, sadly with no success.
Currently we have two equal runners… let’s call them R1 and R2.
We currently only have 1 pipeline configuration (let’s call that P1), which consists of:
- preparation (downloading the project and preparing env vars)
- build (compiling and packaging the game)
- deliver (uploading the build to our file storage)
I want every instance of P1 to get executed by either R1 or R2, but without the runners sharing the different stages and jobs, since the output of e.g. the build step is needed for the deliver step.
So a single tag for the runners that is hardcoded into the job config would prevent one of those runners from ever doing any jobs.
Are you using artifacts to pass the output of the build stage to the deliver stage?
As I mentioned in the first post, I know I could do that. However, up- and downloading 5 GB of data (the build size) would easily increase our total pipeline time to 300%. That’s why I wanted to see if there’s a workaround for that.
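For reference, the artifacts-based hand-off under discussion would look something like this (a sketch; the job names come from the pipeline described above, but the paths and scripts are assumptions):

```yaml
build:
  stage: build
  script:
    - ./build.sh            # placeholder for the actual build commands
  artifacts:
    paths:
      - Build/              # the ~5 GB build output, uploaded when the job finishes
    expire_in: 1 day

deliver:
  stage: deliver
  script:
    - ./upload.sh           # placeholder; GitLab downloads Build/ before this runs
```

Because the jobs can land on different machines, the deliver job has to download the full artifact first, which is where the estimated 300% pipeline-time increase comes from.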
The runners are executed in a shell environment on our dev machines.
That gave us the benefit of having persistence between the “build” and “testing” stages, so that the testing stage had the files available.
If I now start to share the big compiled project between those steps, I might as well shut down the second runner again, since it would take us so long…
What I tried is:
I gave my runners their ID as a tag: one tag for R1 and one for R2.
I then tried to use CI_RUNNER_ID as a variable tag (as mentioned in my first post).
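The attempted config would presumably look something like this (a reconstruction, since the original snippet is not shown; job and stage names are taken from the pipeline described earlier, the scripts are placeholders):

```yaml
# Attempt: pin every job to whichever runner picks up the pipeline
# by using the predefined CI_RUNNER_ID variable as a tag.
build:
  stage: build
  tags:
    - "$CI_RUNNER_ID"   # evaluated at pipeline creation, before any runner is assigned
  script:
    - ./build.sh        # placeholder

deliver:
  stage: deliver
  tags:
    - "$CI_RUNNER_ID"   # same problem: the variable is still empty here
  script:
    - ./upload.sh       # placeholder
```

As the thread goes on to confirm, $CI_RUNNER_ID is only set once a runner has taken a job, so at the time the tags are resolved its value is empty.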
However, the variable doesn’t get expanded in the way I hoped.
That’s what happened in the deliver job, which tells me that the value of the variable wasn’t available at that point in the pipeline.
Right, I see your problem, and AFAIK you can’t use variables in tags.
I’m not sure that there’s a straightforward way around this, but I’m wondering whether you can mis-use rules and/or dependencies to do something like:
    rules:
      - if: '$CI_RUNNER_TAGS =~ /^runner2/'

    rules:
      - if: '$CI_RUNNER_TAGS =~ /^runner2/'
dependencies probably isn’t strictly necessary, and I haven’t tested any of this at all. I have also made the assumption that you have not used other tags on your runners (in which case the regexps here won’t make any sense).
However, this is the sort of thing I’d try next, maybe in a throwaway repo.
I’d be interested to see if other people have different workarounds for this, but it is something that comes up relatively frequently.
According to this doc entry you can use variables in tags.
However, what also doesn’t work is setting a global variable to $CI_RUNNER_ID at the beginning. And you can’t update global variables from the script scope. That’s what I tried as well.
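For clarity, the global-variable variant described here would be roughly the following (a sketch; the variable and job names are illustrative):

```yaml
# Attempt: route the runner ID through a global variable.
variables:
  TARGET_RUNNER: "$CI_RUNNER_ID"   # empty at pipeline creation time

deliver:
  tags:
    - "$TARGET_RUNNER"             # resolves to an empty string, so no runner matches
  script:
    - echo deliver
```

Global `variables:` are evaluated when the pipeline is created, before any runner is involved, and a job’s `script:` runs too late to change them for tag matching.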
But thanks for your suggestion I will try that.
Ah, that’s interesting. I think there are some long-running issues open about cleaning up how variables work; there are lots of corner cases like yours.
When I first tried it, no stage started, since the preparation stage required the tag runner2, which no runner could fulfill, since all tags have to be matched.
I then removed the tag requirement.
Then the preparation stage got executed, but no other stage triggered. Probably because the rule was evaluated too early?
I’m giving it a shot with the workflow syntax to maybe set a variable beforehand…
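The workflow-based attempt mentioned above would presumably be along these lines (a sketch; the variable name is illustrative, and since $CI_RUNNER_ID is still unset when workflow rules are evaluated, this is expected to hit the same timing problem):

```yaml
# Attempt: set a variable for the whole pipeline via workflow rules
# (workflow:rules:variables is available from GitLab 13.11).
workflow:
  rules:
    - variables:
        TARGET_RUNNER: "$CI_RUNNER_ID"   # workflow rules run before any runner is assigned

deliver:
  tags:
    - "$TARGET_RUNNER"
  script:
    - echo deliver
```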
Hi @Gitoza, have you managed to solve this issue? Thanks
Hi, we would very much be interested in a way to have 1 runner (we also have 2) handle a whole pipeline, instead of switching between jobs. I agree with @Gitoza that sharing artifacts with a shared cache would lower the performance.
Is there any update on this issue?
I tried to use a shared file system between the runners.
But the performance is still degraded!
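One way to set up a shared file system between shell runners (an assumption about what was tried; `builds_dir` is the relevant setting in each machine’s GitLab Runner config.toml) is to point both runners’ build directories at the same mount:

```toml
# /etc/gitlab-runner/config.toml on each machine (sketch; paths are illustrative)
[[runners]]
  name = "R1"
  executor = "shell"
  builds_dir = "/mnt/shared/builds"   # the same network mount on both machines
```

Note that a network mount is usually slower than local disk for compile-heavy workloads, which may explain the degraded performance.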