Without delving into too much detail, I was wondering if someone could shed some light on why run time goes up when running jobs concurrently.
I have a self-hosted GitLab Runner (version 14.7.0) running on Ubuntu Linux.
The pipeline I am running consists of two jobs, each doing a Node-based application build, both of which use the same runner with a Docker executor. When the jobs ran serially as separate stages, the runtimes were as follows:
Pipeline 1 (43:23)
- Job 1 (26:43)
- Job 2 (16:39)
Realizing that there were no dependencies between them, I added needs: [] to Job 2 so that it would start at the same time as Job 1 and run concurrently.
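Simplified, the relevant part of the .gitlab-ci.yml now looks roughly like this (job names, image, and scripts are placeholders rather than my exact config):

```yaml
stages:
  - build-1
  - build-2

job1:
  stage: build-1
  image: node:16
  script:
    - npm ci
    - npm run build

job2:
  stage: build-2
  image: node:16
  # previously this job waited for the build-1 stage to finish;
  # an empty needs list lets it start immediately, concurrently with job1
  needs: []
  script:
    - npm ci
    - npm run build
```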
The result was kind of surprising: rather than decreasing, the total run time for the pipeline increased.
Pipeline 2 (48:17)
- Job 1 (48:17)
- Job 2 (43:49)
I expected Job 2 to run a little longer because it was no longer taking advantage of cached node_modules, but I don't really understand why the overall time went up so much. Looking at the timings of specific tasks in the log output, things seem to be taking nearly twice as long.
Is the runner the bottleneck? Is there some sort of extra overhead or thrashing going on because the runner is servicing multiple jobs? I thought that since the jobs were running in separate Docker containers there wouldn't be any resource contention happening.
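In case it matters, the runner is registered along these lines in config.toml (values here are illustrative, not my exact file; the point is that concurrent is above 1, which is what lets the single runner pick up both jobs at once):

```toml
# /etc/gitlab-runner/config.toml (illustrative values)
concurrent = 2            # the runner may run two jobs at the same time

[[runners]]
  name = "build-runner"   # placeholder name
  url = "https://gitlab.example.com/"
  executor = "docker"
  [runners.docker]
    image = "node:16"     # default job image, placeholder
```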