After updating our GitLab Runner and GitLab instance to 16.5, we noticed a new intermittent failure when CI pipelines start. I'm having trouble figuring out whether this is gitlab-runner related or GitLab instance related.
Pipeline Output from failed job:

```
Getting source from Git repository
Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/en/developers/navigator/.git/
Created fresh repository.
fatal: protocol error: bad pack header
```
We are self-hosting GitLab EE on RHEL. Our GitLab Kubernetes runner is running in Azure AKS on Kubernetes 1.26.6.
Generally, one or two retries will get past the failed step.
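As a stopgap, something like the retry configuration below would probably paper over it, though I'd rather understand the root cause. This is only a sketch: the job name, the variable value, and the `when:` failure classes are guesses on my part, and I haven't confirmed which class this fetch error actually maps to.

```yaml
# Sketch only, not our real job definition.
variables:
  # GET_SOURCES_ATTEMPTS retries the "Getting source from Git repository"
  # step itself; "3" is an arbitrary value picked for illustration.
  GET_SOURCES_ATTEMPTS: "3"

build-job:                       # placeholder job name
  retry:
    max: 2
    when:
      - runner_system_failure    # assumption: the bad pack header surfaces as a system failure
      - unknown_failure
  script:
    - echo "real build steps omitted"
```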
Versions:

- GitLab Enterprise Edition v16.5.1-ee
- GitLab Kubernetes Runner 16.5.0
Our GitLab Kubernetes Runner is deployed via the recommended GitLab Agent setup. I've noticed that since upgrading to 16.5.0, the runner re-registers itself every few days on its own, and sometimes leaves older artifacts around.
The relevant .gitlab-ci.yml job looks roughly like the sketch below. I've minimized the script section, since the job fails before getting there, and the names are placeholders:
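```yaml
# Placeholder sketch - real job/image names and script are omitted.
build-navigator:                                          # hypothetical job name
  stage: build
  image: registry.example.com/dotnet-sdk-custom:latest    # our minimally modified dotnet-sdk image (placeholder path)
  variables:
    GIT_DEPTH: "20"    # matches "git depth set to 20" in the job log (may just be the project default)
  script:
    - echo "build steps minimized - the job fails during 'Getting source' before this runs"
```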
The Dockerfile for the image we are using is only minimally modified from a dotnet-sdk base image; it is roughly along these lines (the base tag and extra packages shown are illustrative, not our exact file):
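```dockerfile
# Illustrative shape only - the base tag and extra packages are placeholders.
FROM mcr.microsoft.com/dotnet/sdk:7.0

# A couple of extra build tools layered on top of the stock SDK image
RUN apt-get update \
    && apt-get install -y --no-install-recommends git jq \
    && rm -rf /var/lib/apt/lists/*
```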
Any advice on troubleshooting steps would be greatly appreciated. My gut tells me this is something GitLab instance related, not GitLab Runner specific. However, I'm not aware of any updates to our RHEL GitLab instance other than the auto-update applied on Oct 23rd. Generally, when this error starts occurring in our pipelines, I also notice slowness navigating around our GitLab instance and even get a few 500 errors.
Thanks!
Mike