Help understanding runner performance with concurrent jobs

Without delving into too much detail, I was wondering if someone could shed some light on why run time goes up when jobs are run concurrently.

I have a self-hosted GitLab Runner (version 14.7.0) running on Ubuntu Linux.

The pipeline I am running consists of two jobs, each doing a Node-based application build; both use the same runner, which has a Docker executor. When run serially as stages, the run times were:

Pipeline 1 (43:23)

  • Job 1 (26:43)
  • Job 2 (16:39)

Realizing that there were no dependencies between them, I added `needs: []` to Job 2 so that it would start at the same time as Job 1 and run concurrently.
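For reference, the change was just the one line in `.gitlab-ci.yml`. A sketch of the setup (job names, stage names, and scripts here are placeholders, not the actual project's):

```yaml
stages:
  - build-1
  - build-2

job1:
  stage: build-1
  script:
    - npm ci
    - npm run build

job2:
  stage: build-2
  needs: []   # ignore stage ordering; start alongside job1
  script:
    - npm ci
    - npm run build
```

With an empty `needs` list, Job 2 no longer waits for the earlier stage to finish, so both jobs get picked up at the same time.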

The result was surprising: rather than decreasing, the pipeline's run time increased.

Pipeline 2 (48:17)

  • Job 1 (48:17)
  • Job 2 (43:49)

I expected Job 2 to run a little longer because it was no longer taking advantage of cached node_modules, but I don’t really understand why the total went up so much. Looking at the timings of specific tasks in the log output, things seem to be taking nearly twice as long.
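On the cache point: if `node_modules` is shared between jobs via GitLab's `cache:` keyword, a Job 2 that starts immediately can no longer pick up the cache Job 1 just populated. A minimal sketch of that kind of config, assuming a lockfile-keyed cache (the key and paths are guesses, not taken from the actual project):

```yaml
# Hypothetical cache config; the real project may differ.
cache:
  key:
    files:
      - package-lock.json   # same lockfile -> same cache key
  paths:
    - node_modules/
```

When the jobs ran serially, Job 2's `npm ci`/`npm install` could hit this cache; running concurrently, both jobs do the full install.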

Is the runner the bottleneck? Is there some sort of extra overhead or thrashing going on because the runner is servicing multiple jobs? I thought that since the executors run in Docker containers there wouldn’t be any resource contention.
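One thing worth checking is the runner's own concurrency settings in `/etc/gitlab-runner/config.toml`: the global `concurrent` value and the per-runner `limit` cap how many jobs the host accepts at once. A sketch, with placeholder values:

```toml
# Global cap on jobs this runner process executes simultaneously
concurrent = 2

[[runners]]
  name = "docker-runner"   # placeholder name
  executor = "docker"
  limit = 2                # per-runner cap
  [runners.docker]
    image = "node:16"      # placeholder default image
```

Containers isolate filesystems and process namespaces, but all containers on one host still share the same CPUs, memory, and disk, so contention is entirely possible.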

Hard to say without knowing what is built and how it is configured. A reproducible public project could help :slight_smile:

Blind guesses:

  • I/O is blocking the jobs when they hit node_modules, etc. - especially if the storage is not an SSD but a virtual shared disk, or has IOPS limits
  • The CPU is overloaded, or the vCPU cores are slow. The hardware sizing underneath would be interesting to know.
  • You are hitting host OS limits with Docker
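To narrow those guesses down, a few quick checks could be run on the runner host while both jobs are executing. A sketch, assuming a standard Ubuntu host (`iostat` comes from the `sysstat` package and may need installing):

```shell
nproc            # number of vCPUs the host sees
free -m          # memory in MB; watch for swap usage under load
uptime           # load average vs. vCPU count

# Disk utilization, 3 samples at 5-second intervals (skipped if sysstat is absent)
if command -v iostat >/dev/null 2>&1; then
  iostat -x 5 3
else
  echo "iostat not found; install sysstat"
fi
```

A load average well above the vCPU count points at CPU contention; high `%util` or long await times in `iostat` point at the disk.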

Thanks, I figured there would not be an easy answer and was fishing for ideas. Getting something publicly reproducible would take far more time than I currently have to devote to it.

I can try to get some machine specs. I know it runs on a virtual Linux server with probably 8GB of memory, sufficient disk storage, and ample CPU resources. It is dedicated to GitLab runners and has a low workload, currently serving only one project. I can also say for certain that while the pipelines I mentioned were running, there was no other runner activity at all.