Kubernetes Scaled Pods not showing on the UI

I recently upgraded to v16.
GitLab Runner is installed with Helm.

Also, I am using HPA for my cluster, so when job count > x then add one more gitlab-runner pod (min 1 runner, max 5 runners).

I then registered the runner with a runnerToken instead of a registrationToken, and the runner was registered successfully. Fine.

I am now wondering: when the cluster scales, why doesn’t the UI show the dynamically added runner instances? It seems a bit confusing to me. I haven’t worked with K8s + gitlab-runner for long, so I am not sure whether this was the same behaviour in versions prior to 16.

When inspecting the pipelines, I see this output:

Running with gitlab-runner 16.1.0 (1233baeff) on gitlab-runner-123-456-aefc0 **lpxz3jfvu8**, system ID: r_Qbu398jdY6q

lpxz3jfvu8 seems to be the same across all pipelines, while the system ID and the runner-pod name always change (which makes sense to me, as it confirms that the pipelines run on the newly scaled/added gitlab-runner pods).

lpxz3jfvu8 is the authentication token, I guess.

I think in the old versions, where I used the registrationToken, a new authentication token was generated from the registrationToken whenever a new pod was added via HPA.

So I am curious whether this is the expected behaviour.

It would therefore be interesting to know whether it is generally “safe” to use the same authentication token across multiple runners (on different hosts/VMs).

Thanks in advance

I think you misunderstood the architecture of GitLab Runner on Kubernetes. There are two kinds of Pods for GitLab Runner on k8s:

  • gitlab runner controller pods usually called gitlab-runner-something-something
  • gitlab runner job pods usually called runner-something-project-id-concurrent-something

Controller pods:

  • fetch jobs from the GitLab server
  • create/start Job Pods
  • do NOT execute jobs
  • usually you need just 1 or 2 for HA
  • there is no need to scale these (with HPA or otherwise)

Job Pods:

  • short-lived Pods executing the jobs; they live only to execute a single job and then terminate
  • created by the controller Pod
  • the maximum number of Pods depends on the concurrent setting of your controller Pod
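As a sketch of where that cap is configured (values and namespace are assumptions for illustration, not taken from this thread), the concurrent setting lives in the controller’s values.yaml for the gitlab-runner Helm chart:

```yaml
# values.yaml for the gitlab-runner Helm chart (sketch; values assumed)
concurrent: 20   # one controller Pod may run up to 20 job Pods at the same time
runners:
  config: |
    [[runners]]
      executor = "kubernetes"
      [runners.kubernetes]
        namespace = "gitlab-runner"   # namespace assumed
```

With this in place a single controller Pod already fans out to many job Pods, which is why scaling controllers is rarely needed for throughput.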

First of all, thanks for your quick response.

I know the difference between the gitlab-runner-something-something and runner-something-project-id-concurrent-something.

I use HPA to scale the controller pods gitlab-runner-something-something.

My values.yaml contains, for example, these lines:

concurrent: 20

hpa:
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: External
      external:
        metricName: gitlab_runner_jobs
        targetAverageValue: 4

As an example, right after provisioning the cluster, I have one controller pod running, let’s say

gitlab-runner-something-something-1
Then, when too many jobs are running, HPA adds more controller pods:

gitlab-runner-something-something-2, gitlab-runner-something-something-3,… until 5

So I wonder whether the behaviour explained above is expected, then.

Btw, I want the scaling especially to be able to distribute the jobs across multiple nodes, to be failsafe.

I am not using the new registration method just yet; I am letting it mature a bit 🙂
But I think the reason why you don’t see additional runners in the UI, and why they all share the same name, is the new registration method.

With the old registration method, each controller Pod registered itself as a new Runner using the registrationToken. With the new method, each Runner has a unique runnerToken, and since all your controller Pods present the same token to GitLab, they look like a single Runner.
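In Helm-chart terms, the difference is roughly which value you set (a sketch; the token values are placeholders, not real tokens):

```yaml
# Old method: each controller Pod registers itself on startup,
# so every Pod gets its own authentication token and its own entry in the UI
runnerRegistrationToken: "..."   # placeholder

# New method: the Runner is created in the UI/API first,
# and all controller Pods share its pre-created token,
# so they show up as a single Runner in the UI
runnerToken: "..."               # placeholder
```

That matches what you observed: the same short token (lpxz3jfvu8) in every job log, while the pod name and system ID vary per controller Pod.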

Btw, a single controller Pod is capable of handling 100+ parallel jobs (depending on their size), and the number and placement of controller Pods has no effect on which nodes the Jobs are scheduled on.
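If the goal is to steer where job Pods land, that is configured on the Kubernetes executor rather than by scaling controllers; a minimal sketch (the label key and value are assumptions for illustration):

```yaml
runners:
  config: |
    [[runners]]
      executor = "kubernetes"
      [runners.kubernetes]
        # schedule job Pods only onto nodes carrying this label (label assumed)
        [runners.kubernetes.node_selector]
          "node-pool" = "ci-jobs"
```

Beyond node_selector, the executor also supports affinity rules, so the Kubernetes scheduler can spread job Pods across nodes regardless of how many controller Pods exist.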