Kubernetes Runners: How to specify that all jobs run on targeted nodes

I am running GitLab Omnibus version 13.12.15.
Deployed to AWS.
My runners are deployed to a Kubernetes cluster (EKS) using Helm.
Right now I have a pool of servers labeled pool-runner, and the runners are configured to run on those instances.
What I would like to do is create a pool that is labeled pool-jobs or something like that and have all my jobs run on that pool.
This way the actual runners can run on small hosts that don’t scale and the jobs can run in a worker pool that autoscales based on CPU usage.

I don’t see anything in the values.yml file to set this.

You probably need to configure a node selector for your jobs; see The Kubernetes executor for GitLab Runner | GitLab
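For job pods, that node selector lives in the executor's TOML config rather than the chart's top-level values. A minimal sketch (the `pool` label and `jobs` value are placeholders for this example; check the docs linked above for the exact nesting in your runner version):

```toml
# Node selector for the pods that run CI jobs (not the runner manager itself).
# Job pods will only be scheduled on nodes carrying the label pool=jobs.
[runners.kubernetes]
  [runners.kubernetes.node_selector]
    "pool" = "jobs"
```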


I have this setting:

          "pool" = "runner"

This is what causes the RUNNERS to run on the hosts with that label.

How would it get the JOBS to run on a different pool?


I think I may have found the answer to my question.
There is a setting like this farther down in the values.yml file:

nodeSelector: {
  pool: runner
}
I believe that one controls where the runners run.

and then this one controls where the jobs run:

          "pool" = "jobs"

I am going to test this.
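To summarize my understanding of the two settings side by side, roughly like this in the runner chart's values.yml (a sketch; the exact keys depend on the chart version, and `pool: jobs` is just my label):

```yaml
# Chart-level nodeSelector: pins the runner manager pods themselves
# to the small, non-scaling hosts.
nodeSelector:
  pool: runner

# Executor config embedded in the values: pins the job pods
# to the autoscaling worker pool.
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        [runners.kubernetes.node_selector]
          "pool" = "jobs"
```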

Okay, this seems to work. However, all the jobs seem to be running on the same host in the “jobs” pool. How can I spread them out so that all hosts get used?

I have this setting:

          antiAffinity: soft

which keeps the runners from all landing on the same host. But how do I do that for the jobs?

I see these instructions: The Kubernetes executor for GitLab Runner | GitLab
However, I obviously would not use all of those settings, and I am unclear which one would give the jobs a soft anti-affinity.
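If I am reading those docs right, the job-pod equivalent of `antiAffinity: soft` would be a "preferred" pod anti-affinity under `[runners.kubernetes.affinity]`. A sketch only, assuming a runner version that supports pod anti-affinity (it was added later than node affinity), and with placeholder label key/values; verify the exact TOML key names against the executor docs before using:

```toml
[runners.kubernetes.affinity]
  [runners.kubernetes.affinity.pod_anti_affinity]
    # "preferred" = soft: the scheduler tries to spread job pods across
    # hosts, but will still co-locate them if no other node is available.
    [[runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution]]
      weight = 100
      [runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term]
        # Spread across distinct hosts.
        topology_key = "kubernetes.io/hostname"
        [runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term.label_selector]
          [[runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term.label_selector.match_expressions]]
            key = "app"            # placeholder: a label your job pods carry
            operator = "In"
            values = ["ci-job"]    # placeholder value
```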