Problems with concurrent Docker builds

Hi there,

I’m running a GitLab runner with the Docker executor on Linux (I deployed the runner itself in Docker, mounting the Docker socket into the runner container, following the official instructions), and I have a problem when two concurrent builds use that runner. For example, I’m getting:

Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/myUsername/myProject/.git/
fatal: shallow file has changed since we read it
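For context, deploying the runner in Docker with the socket mounted typically looks something like this (the paths and image tag are the usual defaults from the GitLab docs; adjust to your setup):

```shell
# Run the GitLab runner itself in a container, sharing the host's
# Docker socket so jobs can start sibling containers on the host.
docker run -d --name gitlab-runner --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest
```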

…because /builds is mounted into the Docker container, so changes made by one job are visible to the other job and corrupting its checkout. I expected each Docker container to have its own copy of the repository, not for the jobs to clobber each other like this. How can I configure my runners to avoid this problem?

Thanks in advance.


It looks like this change was made here: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1520, but I’m not using any Docker “services”; I just have normal jobs with a few steps, that’s it…


I’ve tried using CI_CONCURRENT_ID in a custom clone path to work around this issue (which, in my mind, should not happen: I accept that making my build scripts concurrency-aware is my responsibility, but this is a problem in the clone step, which I don’t have real control over). However, $CI_CONCURRENT_ID is always 0, so nothing has really changed…
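For anyone attempting the same route, the workaround I tried looks roughly like this in `.gitlab-ci.yml` (a sketch, not a confirmed fix: `GIT_CLONE_PATH` must fall somewhere under the runner’s builds directory, and the runner has to allow custom build directories via `[runners.custom_build_dir]` in its `config.toml` for the variable to take effect):

```yaml
# .gitlab-ci.yml (sketch): give each concurrent job its own clone path,
# keyed on CI_CONCURRENT_ID so parallel jobs don't share a checkout.
variables:
  GIT_CLONE_PATH: $CI_BUILDS_DIR/$CI_CONCURRENT_ID/$CI_PROJECT_NAME
```

If the runner rejects or ignores the variable, check that `enabled = true` is set under `[runners.custom_build_dir]` in the runner’s `config.toml`.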


Hi @arcnor. Did you manage to find a fix for this issue? I’ve had various problems over the last day with concurrent Docker builds on a runner using the Docker executor on Linux, and one of them is exactly what you’ve described here.


My only “fix” was to remove concurrency, unfortunately. After reading many issues, I think this may even be by design? I haven’t found the time to look for a better solution, but if you find one, please share: I’ve spent too much time on this already 🙂.
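For reference, “removing concurrency” here just means capping the runner at one job at a time in its `config.toml`; a minimal sketch:

```toml
# config.toml (sketch): run at most one job at a time.
concurrent = 1   # global cap across all runners defined in this file

[[runners]]
  limit = 1      # per-runner cap on concurrent jobs
```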
