I recently upgraded to GitLab v16.
GitLab Runner is installed with Helm.
I am also using an HPA for my cluster, so when the job count exceeds a threshold, one more gitlab-runner pod is added (min 1 runner, max 5 runners); a simplified sketch is below.
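For context, the HPA looks roughly like this (names are simplified, and in my setup the job-count metric comes from an external metrics adapter, so treat the metric block as a placeholder):

```yaml
# Simplified sketch: scale the gitlab-runner deployment between 1 and 5
# pods based on a job-count metric. Metric name/target are placeholders
# for whatever your external metrics adapter exposes.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: gitlab-runner
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gitlab-runner
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: External
      external:
        metric:
          name: gitlab_runner_pending_jobs   # placeholder metric name
        target:
          type: AverageValue
          averageValue: "2"
```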
I then registered the runner with runnerToken instead of registrationToken, and the runner got registered. Fine.
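The relevant excerpt from my values.yaml for the gitlab-runner chart looks roughly like this (token made up):

```yaml
# Excerpt from values.yaml for the gitlab-runner Helm chart.
# With GitLab 16 I switched from runnerRegistrationToken to runnerToken.
gitlabUrl: https://gitlab.example.com/
runnerToken: "glrt-xxxxxxxxxxxxxxxxxxxx"
# previously:
# runnerRegistrationToken: "xxxxxxxxxxxxxxxxxxxx"
```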
I am now wondering: when the cluster scales, why doesn't the UI show me the dynamically added runner instances? It seems a bit confusing to me. I haven't worked with K8s + gitlab-runner for very long, so I am not sure whether this was the same behaviour in versions below 16.
When inspecting the pipelines, I see this output:
Running with gitlab-runner 16.1.0 (1233baeff) on gitlab-runner-123-456-aefc0 **lpxz3jfvu8**, system ID: r_Qbu398jdY6q
lpxz3jfvu8 seems to be the same across all pipelines, while the system ID and runner pod name always change (which makes sense to me, as it confirms that the pipelines run on the newly scaled/added gitlab-runner pods).
lpxz3jfvu8 is the authentication token, I guess.
I think in the old versions, where I used the registrationToken, a new authentication token was generated from the registrationToken whenever a new pod was added via HPA.
So I am curious: is this the expected behaviour?
It would therefore be interesting to know whether it is generally “safe” to use the same authenticationToken across multiple runners (on different hosts/VMs).
I am not using the new registration method just yet; I am letting it mature a bit.
But I think the reason you don't see additional runners in the UI, and why the name stays the same, is the new registration method.
With the old registration method, each controller pod registered itself as a new runner using the registrationToken. With the new method, each runner has a unique runnerToken, and since all the controller pods authenticate to GitLab with the same token, they appear as a single runner.
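To illustrate with the chart values (tokens made up):

```yaml
# Old method: the chart holds a registration token; each controller pod
# runs `gitlab-runner register` at startup and receives its OWN
# authentication token, so every pod shows up as a separate runner.
runnerRegistrationToken: "GR1348941xxxxxxxxxxxxxxxx"

# New method: the chart holds an already-created authentication token;
# every controller pod authenticates with the SAME token, so GitLab
# shows them as one runner (distinguished only by the system ID).
runnerToken: "glrt-xxxxxxxxxxxxxxxxxxxx"
```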
Btw, a single controller pod is capable of handling 100+ parallel jobs (depending on their size), and the number or placement of controller pods has no effect on which nodes the jobs are scheduled on.
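For example, if you want a single controller pod to handle more parallel jobs, that is the `concurrent` value in the chart (100 is just an illustrative number):

```yaml
# Chart value controlling how many jobs one controller pod runs in
# parallel; each job still gets its own job pod scheduled by Kubernetes.
concurrent: 100
```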