Kubernetes integration: why does Kubernetes deploy all runner jobs on the same node?

Hi,

I have one GitLab Runner, which I installed manually in Kubernetes via the Helm chart. It is registered in GitLab and can handle 10 jobs in parallel.

My Kubernetes cluster consists of 3 nodes on which workloads can be scheduled.
Kubernetes picked node 2 for the pod that contains the runner.
When the runner picks up jobs from GitLab, ALL of them are scheduled on node 1. If there are multiple jobs, they all run in parallel pods, all on node 1.

So during build jobs, node 1 has quite a high CPU load while the other nodes are idle.
Currently I have not set any resources (limits and requests) for the runner jobs.
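
For reference, this is roughly where those would be set in the runner's config.toml (with the Helm chart, I believe this goes into values.yaml under runners.config); a minimal sketch, the values below are just placeholders:

```toml
# Kubernetes executor settings for job pods (sketch; values are placeholders)
[[runners]]
  [runners.kubernetes]
    # Requests tell the scheduler how much each job pod needs,
    # which gives it a basis for spreading pods across nodes.
    cpu_request    = "500m"
    memory_request = "512Mi"
    # Limits cap what a single job pod may consume.
    cpu_limit    = "1"
    memory_limit = "1Gi"
```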

Any ideas what I can do to distribute the jobs across multiple k8s nodes?

Thanks, Andreas

Did you ever get an answer to this?

Curious too…

Same problem on my side.

I have the same problem.
The job pods are deployed with a nodeName parameter set, but I've never defined it in the gitlab-runner configuration (via the Helm chart's values.yaml in my case).
Any ideas how to spread the jobs across multiple nodes? It would make the pipeline much faster.
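
For what it's worth, the only node-pinning knob I can find in the Kubernetes executor config is node_selector. I haven't set anything like the sketch below (the label is hypothetical), so I assume the nodeName on the job pods is just whatever the scheduler assigned:

```toml
# Sketch of how job pods *could* be pinned to specific nodes (hypothetical label).
# If nothing like this is configured, node placement is purely the scheduler's choice.
[[runners]]
  [runners.kubernetes]
    [runners.kubernetes.node_selector]
      "kubernetes.io/hostname" = "node-1"   # hypothetical; remove or widen to let jobs spread
```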