Kubernetes integration: why does Kubernetes deploy all runner jobs on the same node?


I have one GitLab runner which I installed manually in Kubernetes via the Helm chart. It is registered in GitLab and can handle 10 jobs in parallel.

My Kubernetes cluster consists of 3 nodes on which workload can be scheduled.
Kubernetes picked node 2 for the pod that contains the runner itself.
When the runner picks up jobs from GitLab, ALL of these jobs are scheduled on node 1. If there are multiple jobs, they all run in parallel pods, all on node 1.

So during build jobs, node 1 has quite a high CPU load while the other nodes are idling.
Currently I have not set any resources (limits and requests) for runner jobs.
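For reference, this is roughly what I would add to the runner's `config.toml` (passed through the Helm chart's `runners.config` value) if setting requests is the right direction — the actual numbers here are just placeholders, not values I have tested:

```toml
# Sketch of a [runners.kubernetes] section with resource requests,
# so the scheduler has sizing information for each job pod.
# The cpu/memory values below are example placeholders.
[[runners]]
  [runners.kubernetes]
    cpu_request = "500m"
    memory_request = "512Mi"
    cpu_limit = "1"
    memory_limit = "1Gi"
```

My understanding is that without requests the scheduler sees each job pod as costing nothing, so it has no reason to spread them out — but I am not sure if that alone explains why they all land on node 1.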

Any ideas how I can distribute the jobs across multiple k8s nodes?

Thanks, Andreas