Problem to solve
I have a pipeline that does a Packer build for a RHEL VMware machine. The runner resides in an on-prem Kubernetes cluster with MetalLB as the load balancer.
The Packer build job starts, but because of the way RHEL is installed, the instance that Packer boots needs to connect back to the GitLab runner pod, on a port from a range I specify, to fetch the kickstart file (ks.cfg).
Since the ephemeral gitlab-runner job pod is not exposed outside the cluster, this step fails.
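For context, my builder configuration looks roughly like this (values are placeholders; the point is that Packer's built-in HTTP server, which serves ks.cfg from inside the job pod, listens on a port from the configured range):

```hcl
# Sketch of the relevant part of the vmware-iso source block.
# http_directory contains ks.cfg; pinning min/max to one value
# makes the port predictable for any external exposure.
source "vmware-iso" "rhel" {
  http_directory = "http"
  http_port_min  = 8600
  http_port_max  = 8600
  boot_command = [
    "<up><tab> inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter>"
  ]
}
```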
Is there any way to expose a job pod with a Kubernetes Service of type NodePort or LoadBalancer?
I looked through the Kubernetes executor docs, but couldn’t find anything that seems to fit.
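What I have in mind is something like the following, assuming I could attach a known label to job pods via `[runners.kubernetes.pod_labels]` in the runner's config.toml (the label name, Service name, and port below are placeholders, not something the docs describe for this use case):

```yaml
# Hypothetical Service selecting the runner job pod by a custom label,
# exposed through MetalLB so the Packer-built VM can reach ks.cfg.
apiVersion: v1
kind: Service
metadata:
  name: packer-http
spec:
  type: LoadBalancer        # handled by MetalLB; NodePort would also work
  selector:
    packer-http: "true"     # label added via [runners.kubernetes.pod_labels]
  ports:
    - port: 8600            # matches the pinned Packer HTTP port
      targetPort: 8600
```

Even if labeling works, I'm not sure how I would create and tear down such a Service per job, which is why I'm asking whether the executor supports anything like this natively.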
Versions
- Self-managed
- GitLab.com SaaS
- Dedicated
- Self-hosted Runners
GitLab 17.3.5
Runner installed with Helm chart "gitlab-runner" version 0.69.0 (17.4.0)