I have configured one shared runner on a Linux machine, using shell as the executor for this runner. I have two different projects (billrun and netplus), and each project contains its respective branches.
Each project has its own CI file in its respective branch, configured so that a pipeline runs whenever a commit lands on that branch. Both CI configurations use the same runner, so when commits happen in both projects, the two pipelines run on the single machine at the same time and clash with each other.
I want to configure the runner or the CI so that only a single pipeline runs at a time; if another pipeline wants to use the same runner, it should be queued and wait until the first pipeline finishes.
Can anyone please help me overcome this issue?
It is not possible to limit a runner to a single pipeline. The closest you can get is to limit the runner to a single job at a time by setting `concurrent` to 1, but that won't stop it from picking up jobs from other pipelines.
If you insist on using what you have, your only option is to set `concurrent` to 1 and put all the steps of each pipeline into a single job.
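A minimal `.gitlab-ci.yml` sketch of that single-job idea (the script commands are placeholders, assuming a simple build/test/deploy flow):

```yaml
# One job per pipeline, so the runner finishes everything before taking new work
build_test_deploy:
  script:
    - ./build.sh       # placeholder: your build step
    - ./run_tests.sh   # placeholder: your test step
    - ./deploy.sh      # placeholder: your deploy step
```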
Keep in mind that `concurrent` in `config.toml` is a global cap for the whole runner host, while the per-runner `limit` only caps how many jobs that particular runner executes in parallel; if `concurrent` is higher, other runners registered on the same host can still pick up jobs to execute.
And if a pipeline has multiple jobs, there is no way to make sure that one runner executes all of those jobs without picking up jobs from other pipelines in between.
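For reference, a minimal `config.toml` sketch of that setup (the name, URL, and token are placeholders, not taken from the posts above):

```toml
# /etc/gitlab-runner/config.toml
concurrent = 1                          # global cap: at most one job at a time on this host

[[runners]]
  name = "shared-shell-runner"          # placeholder name
  url = "https://gitlab.example.com/"   # placeholder GitLab URL
  token = "REDACTED"                    # placeholder runner token
  executor = "shell"
  limit = 1                             # per-runner cap, in case more runners are registered later
```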
Does the runner fail in some way due to a resource constraint?
What is the actual problem (i.e. what fails?) that you are trying to avoid?
However, setting the unknown problem aside:
You could create a second runner and configure each project to use only its own dedicated runner.
You could also use tags on your jobs to force them to run only on the correct runner(s).
I think either of those approaches might achieve what you want and let you control where the jobs run; see the sketch below.
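A hedged sketch of the tag-based approach, assuming you register one runner per project and give each a matching tag (the tag names and commands here are made up):

```yaml
# billrun/.gitlab-ci.yml
build:
  tags:
    - billrun-runner   # assumed tag assigned to the runner dedicated to billrun
  script:
    - ./build.sh       # placeholder build command
```

The netplus project would use the same structure with its own tag (e.g. `netplus-runner`), so each pipeline only queues on its dedicated runner.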
Pardon me, I'm currently doing work for NASA, and it would be convenient if you could pin a runner to a single job. GitHub does this… the idea is an ephemeral runner (1 job, 1 EC2 instance). We're doing this for cost savings, so we don't keep the lights on all the time to run k8s.
You can have ephemeral EC2 instances with the docker+machine executor.
Also, if you are running on AWS EKS, there are several ways to scale it down to (almost) zero when there are no CI jobs (assuming it is an EKS cluster dedicated to GitLab CI jobs).
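For the docker+machine route, a hedged `config.toml` sketch (the URL, token, region, and instance type are assumptions, not a tested setup):

```toml
concurrent = 10

[[runners]]
  name = "autoscale-ec2"                 # placeholder name
  url = "https://gitlab.example.com/"    # placeholder GitLab URL
  token = "REDACTED"                     # placeholder runner token
  executor = "docker+machine"
  [runners.docker]
    image = "alpine:latest"              # default image for jobs
  [runners.machine]
    IdleCount = 0                        # keep no idle machines; only pay while jobs run
    IdleTime = 600                       # seconds before an idle machine is removed
    MaxBuilds = 1                        # recycle the machine after one job (1 job, 1 EC2)
    MachineDriver = "amazonec2"
    MachineName = "gitlab-ci-%s"
    MachineOptions = [
      "amazonec2-region=us-east-1",      # assumed region
      "amazonec2-instance-type=m5.large" # assumed instance type
    ]
```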