Attach volumes to multiple pods / jobs

We connected our hosted GitLab instance to our own Kubernetes cluster. Running CI pipeline jobs in the cluster works fine in simple cases, but many of our build jobs need more disk space than is available on the nodes themselves. Since we run on AWS, we would like to mount EBS-backed volumes into the build containers.
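For context, the relevant part of our runner configuration looks roughly like this (runner name, namespace, claim name, and mount path are illustrative, not our exact values). The PVC is referenced by name, so every concurrent job pod tries to mount the same claim:

```toml
[[runners]]
  name = "kubernetes-runner"
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab"
    # Mount an existing PersistentVolumeClaim into each build pod
    [[runners.kubernetes.volumes.pvc]]
      name = "build-volume"
      mount_path = "/builds"
```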

We could only get this working for a single job at a time. If we start several jobs, they all try to mount the same volume, which is not possible: one pipeline gets the volume and the others fail with:

Job failed (system failure): prepare environment: timed out waiting for pod to start.

This happens because the pod cannot mount a volume that is already in use. I tried to understand the Kubernetes executor source, but I'm not a Go expert and could not figure out exactly how the pod is started.
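For reference, the claim behind this is an ordinary EBS-backed PVC along these lines (name, storage class, and size are illustrative). As far as I understand, EBS volumes only support the `ReadWriteOnce` access mode, which is presumably why a second pod cannot attach it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: build-volume
spec:
  accessModes:
    - ReadWriteOnce   # EBS supports only single-node attachment
  storageClassName: gp2
  resources:
    requests:
      storage: 100Gi
```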

I assume plenty of people run multiple jobs in K8s and also need volumes, so I'm wondering what I'm missing. How can I run multiple jobs in K8s, with each pod having its own volume attached?