Hi GitLab community,
I am seeing very odd behaviour in the CI pipeline that deploys my runners.
A bit of background first: I am running GitLab CE on a vSphere-hosted VM. I installed a single runner on the GitLab server itself to service the project that deploys the runner servers. This runner is tag-locked so that the project uses no runner other than the local one.
I set up a CI pipeline with three stages: terraform, ansible and clean. The first stage runs Terraform to create the VMs. These are then provisioned by the second stage using Ansible, and it is here that I am running into an issue.
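For reference, a minimal sketch of the pipeline (the tag name, inventory path and playbook name are illustrative placeholders, not my exact file):

```yaml
stages:
  - terraform
  - ansible
  - clean

terraform:
  stage: terraform
  tags: [local-runner]          # hypothetical tag locking jobs to the local runner
  script:
    - terraform init
    - terraform apply -auto-approve

ansible:
  stage: ansible
  tags: [local-runner]
  script:
    - ansible-playbook -i inventory provision-runner.yml

clean:
  stage: clean
  tags: [local-runner]
  when: always
  script:
    - terraform destroy -auto-approve
```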
My Ansible playbook customises the gitlab-runner host VM, installs the runner software, registers the runners (shell executor) and then customises config.toml, followed by a `gitlab-runner restart` to re-read config.toml. This is where the problem occurs. The playbook runs fine when executed manually, but when the local gitlab-runner runs it I get "session terminated, killing shell… …killed." in the pipeline output right after the restart step. Looking at the logs on the new gitlab-runner host, however, Ansible is still running and completes successfully.
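The tail end of the playbook looks roughly like this (module choices, the URL and variable names are illustrative; the restart task is the point after which the pipeline job dies):

```yaml
- name: Register the shell runner
  command: >
    gitlab-runner register --non-interactive
    --url "{{ gitlab_url }}"
    --registration-token "{{ runner_token }}"
    --executor shell
    --description "deployed-runner"

- name: Customise config.toml (example tweak, illustrative)
  lineinfile:
    path: /etc/gitlab-runner/config.toml
    regexp: '^concurrent ='
    line: 'concurrent = 4'

- name: Restart gitlab-runner to re-read config.toml
  command: gitlab-runner restart   # pipeline output shows "session terminated" right after this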
It almost looks as though restarting the runner service on the new gitlab-runner host is somehow affecting the local runner on the GitLab host.
Any suggestions?