For scaling an internal (self-hosted) GitLab CI, I seem to have several options to set up runners. What we have now is: a VM template containing one runner that can run 5 concurrent jobs. When CI is busy (it really is predictable when we'll need more or fewer CI jobs), we just add new VMs (which is really fast, no problem at all).
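For reference, our current per-VM setup boils down to a `config.toml` roughly like this (name, URL, and token are placeholders, not our real values):

```toml
# One gitlab-runner process per VM, one registered runner,
# at most 5 jobs at a time on this VM.
concurrent = 5

[[runners]]
  name = "ci-vm-template"               # illustrative name
  url = "https://gitlab.example.com/"   # placeholder
  token = "RUNNER_TOKEN"                # placeholder
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
```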
However, there seem to be several options to 'scale' GitLab CI jobs:
- Add more VMs, as explained above
- Register multiple runners inside one VM
- Set `concurrent` higher in the runner config
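To make the last two options concrete, here is how I understand them in `config.toml` terms (names, URL, and tokens are placeholders): option 2 means several `[[runners]]` sections in the same file, while option 3 means raising the global `concurrent` value, which caps the total number of jobs across all runners registered in that file.

```toml
# Global cap: total concurrent jobs across ALL runners below.
# Raising this value is "option 3".
concurrent = 10

# "Option 2": several [[runners]] sections in the same config.toml.
[[runners]]
  name = "runner-a"                     # illustrative name
  url = "https://gitlab.example.com/"   # placeholder
  token = "TOKEN_A"                     # placeholder
  executor = "docker"
  limit = 5                             # optional per-runner cap
  [runners.docker]
    image = "alpine:latest"

[[runners]]
  name = "runner-b"
  url = "https://gitlab.example.com/"
  token = "TOKEN_B"
  executor = "docker"
  limit = 5
  [runners.docker]
    image = "alpine:latest"
```

Note that, as I read the docs, adding `[[runners]]` sections without raising `concurrent` does not add capacity, since `concurrent` stays the hard ceiling for the whole VM.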
I fail to see the advantages/disadvantages of the different possibilities, especially between options 2 and 3. For both, updating the config and restarting the runner service would do fine. Of course, each VM reserves some resources of its own (we manage them internally as well, so no autoscaling on AWS or the like), and in both cases multiple concurrent jobs share the same VM's CPU/RAM. I really fail to see why one would choose one over the other, except when the runners need completely different configs (in which case I fail to see why you would do that). Config management of the VMs is not an issue either: we use Ansible, so updating the configs on several VMs is fairly fast.
I do get that the three approaches work on different levels: a VM contains a runner process, which contains runners, which are capped by the `concurrent` setting. I'm more wondering about how to set the right values. For example, if I know I need about 10-50 concurrent jobs as a base load, but at predictable peak moments it goes up to 100-500 concurrent jobs, should one add many small VMs, register more runners, or increase the `concurrent` value?
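To make the sizing question concrete, the arithmetic I have in mind (all numbers illustrative): total capacity = (number of VMs) × (`concurrent` per VM), so the base and peak could be covered like this:

```toml
# Illustrative sizing, assuming identical VMs cloned from one template:
#   base:  10 VMs x concurrent = 5   ->  50 jobs
#   peak: 100 VMs x concurrent = 5   -> 500 jobs
# ...or keep 10 VMs and raise concurrent to 50 per VM,
# but only if one VM's CPU/RAM can actually carry 50 jobs at once.
concurrent = 5
```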
So, well… what are the main advantages/disadvantages of the three approaches? What setup do you use, why, and what has your experience been?