Execute the whole pipeline, or at least one stage, on the same runner

Hi everyone, I have a problem with GitLab runners. In our company we have eight GitLab runners on four different computers (two runners per computer).
Compilation of our application is divided into three steps:

  1. Compilation (it’s a big app; compilation takes 1.5 h),
  2. Testing of common modules,
  3. Publishing to a network share for developers.

In .gitlab-ci.yml you can see that steps 1 and 3 are handled by a GitLab runner:
1 - batch fastDE
3 - batch switch
(Step 2, testing, is still done manually by a developer.)
Steps 1-3 have to be done on the same computer, because the first step produces the exe files in a local directory and (after testing) switch copies them to the network share.

And finally, my problem: I noticed that even when I use the `needs` keyword, jobs, even within the same stage, are not executed by the same runner.
I searched for a long time, but without success. Is there any way to do that?

Of course, I know that I can use tags, but that doesn’t exactly match my process.
My process is: the GitLab runner that started the pipeline should process it to the end.

  • GitLab: self-managed - 15.7.1
  • Runners: 15.6.0

.gitlab-ci.yml file:

variables:
  GIT_STRATEGY: none
  ADDITIONAL_PARAMETERS:
    description: "For pass /UNI or /ANSI. If you choose nothing, default encoding for selected area will be prepared."
    options:
      - ""
      - "/UNI"
      - "/ANSI"
    value: ""
stages:
  - build
build:
  rules:
    - when: manual
  stage: build
  script:
    - if not exist "L:" net use "L:" \\serv-poz.domain.com\sys
    - if not exist "Z:" net use "Z:" \\serv-poz.domain.com\Src_area
    - call fastDE WX %GITLAB_USER_LOGIN% %ADDITIONAL_PARAMETERS%
switch:
  rules:
    - when: manual
  stage: build
  needs:
    - build
  script:
    - call switch

config.toml file:

concurrent = 1
check_interval = 0
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "runner_lib1"
  url = "http://gitlab.domain.com"
  id = 5
  token = "REDACTED"
  token_obtained_at = 2022-11-23T08:32:23Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "shell"
  shell = "cmd"
  [runners.custom_build_dir]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]

I would also like to know if there is a solution to this. My use case is a bit different, but it also requires that a pipeline be executed on a single runner/worker.

The closest thing I could find is `resource_group`, but from my understanding this is limited to jobs, not pipelines in general.
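For reference, `resource_group` serializes jobs that share the same resource key (even across pipelines), but it does not pin them to a particular runner. A minimal sketch, with a hypothetical job name and script:

```yaml
# Only one job with resource_group "production" runs at a time; others queue.
# Note: this does NOT choose which runner executes the job.
deploy:
  stage: deploy
  resource_group: production
  script:
    - ./deploy.sh   # hypothetical script
```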


Runner tags can be a possible solution; we discussed a similar scenario, including a solution, in Build on a runner but publish on an other? - #2 by dnsmichi.

Hi Michael, thank you for your answer.
I’ve read the topic you posted. What I understood from it is that your goal was to make it possible to share artifacts between runners. Unfortunately, it’s not very helpful for me, because the artifacts for my program take almost 2 GB, and I would also have to completely change my switch batch to change the source of the files shared with developers.
Do you have any other ideas?

One of the ideas was to exchange artifacts and ensure that jobs are “pinned” to a specific runner with a registered tag. I interpreted the sentence below as “pin jobs 1 and 3 to the same dedicated runner”.

gitlab-runner register ... --tag-list  dedicated-runner-for-compile-network-share-tag 

which then gets assigned to the jobs.

build:
  tags: [dedicated-runner-for-compile-network-share-tag]
  ...

switch: 
  tags: [dedicated-runner-for-compile-network-share-tag]
  ...

Not sure though if that works, because I also read

In our company we have eight gitlab runners on four different computers ( two runner per computer ).

I think that the 2 runners per computer are used for load sharing, performance, etc. - is this a requirement, or is it possible to assign 1 runner per computer and add as many computers as needed? If yes, job tags with a runner could work.

If not, an additional idea would be creating a local CI/CD cache on the runner host and dumping the executables, reusable files, etc. there. See Caching in GitLab CI/CD | GitLab, combined with the tags idea.

PS: I have edited the initial post and redacted the runner token and domain. I suggest rotating the potentially compromised token.

It’s close to my goal, but I think I can explain it better.
See the list of my runners:


And the changes in my .gitlab-ci.yml file:

As you can see, I’ve added a TAG variable and a default/tags section.
Because of that, in the run-pipeline view:

Now I can achieve my goal: one pipeline is processed by one runner, but only in manual mode.
I can choose which runner (computer) the pipeline will be processed on. But I don’t want to do that :smiley:
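Since the screenshots are missing, here is a rough reconstruction of the kind of change described: a pipeline-level TAG variable with predefined options, plus a `default: tags:` section so every job in the pipeline runs on a runner carrying that tag. The tag values are hypothetical, based on the runner names in the thread:

```yaml
variables:
  TAG:
    description: "Runner tag: pins the whole pipeline to one computer"
    options:
      - "runner_lib1"   # hypothetical tag names
      - "runner_lib2"
    value: "runner_lib1"

default:
  tags:
    - $TAG   # every job inherits this tag
```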

Each of my eight runners is able to process the build and switch jobs; the only limitation is that both have to be done by one runner, no matter which one.

The runners are totally independent of each other; they were set up only to make it possible to start many compilations (different versions of the product) in parallel. Usually we have at most 3 build processes running at the same time.

I don’t understand that yet, but if you have no other ideas, I’ll try to read up on it and understand it.

I think that caching is actually the solution for your problem. Think of it as shared storage between jobs that are executed on the same machine. It does not matter which runner picks up the job, as long as the jobs remain on the same machine (and use the same runner tag, for example).

Something like this, with a global cache definition for the output directory where the exe or package is saved by the build job and later used by the switch job.

cache: 
  key: $CI_COMMIT_REF_SLUG
  paths: 
    - output-directory/ 

build:
  script:
    - echo $CI_RUNNER_ID 
    - echo "Building something into an exe" 
    - mkdir output-directory/ 
    - echo "Simulating a build" >> output-directory/1.exe 

switch: 
  script:
    - echo $CI_RUNNER_ID 
    - echo "Running a script that depends on the built exe. Simulation will show that the exe file from the build job is available. " 
    - ls -la output-directory/* 
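Since the runners in this thread use `shell = "cmd"`, the same sketch with Windows commands might look like this (the paths and echoed contents are illustrative only):

```yaml
cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - output-directory/

build:
  script:
    - echo %CI_RUNNER_ID%
    - if not exist "output-directory" mkdir output-directory
    - echo Simulating a build > output-directory\1.exe

switch:
  script:
    - echo %CI_RUNNER_ID%
    - dir output-directory
```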

Actually, it looks like this could resolve my problem, and I appreciate your effort!

But do you think it would be possible to create an issue for GitLab and add this functionality (some keyword in the .yml file)?
Why am I asking: I think that in our company we could make the changes in our environments to use the cache, but it would be very hard, and actually we would prefer to adapt the runners to the environment, not the reverse.
I’ve also seen that others were asking about this feature, so maybe you could think about it?

@MLI I think the use case would be covered by this request, but feel free to add a comment with more details and give it a :+1:

-James H, GitLab Product Manager, Verify:Pipeline Execution
