Hi,
I still don’t really understand why you would build such a complex architecture. But I can tell you what I did (~40 developers, ~15 active projects; they push roughly every 5-10 minutes, and each push triggers a new pipeline run).
I have 1 GitLab server instance, running in a container on an Ubuntu VM.
I have 4 identical GitLab Runners, each running on its own Ubuntu VM (12 CPU, 16 GB RAM - you can also start smaller and see over time what your needs are). They are installed from the Ubuntu packages (not Docker). They are all configured as Docker executors, share the same tag, and can each run up to 4 parallel jobs. This also means Docker is installed next to the GitLab Runner on each of those VMs.
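For reference, a trimmed `config.toml` sketch for one of those runner VMs could look roughly like this (the name, URL, token and default image are placeholders, not my actual values):

```toml
# /etc/gitlab-runner/config.toml (illustrative values only)
concurrent = 4                         # up to 4 parallel jobs on this runner VM

[[runners]]
  name = "runner-01"                   # hypothetical name
  url = "https://gitlab.example.com/"  # hypothetical GitLab URL
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"            # default image if a job doesn't set one
    privileged = true                  # needed for the dind approach mentioned below
```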
This works pretty well: we can run up to 16 (4x4) jobs in parallel, and it’s very flexible, since every job is executed in a Docker container with its own image → meaning I always define my build/test environments using Docker images and I never need to change the runners. For building Docker images, I use the dind approach described in Use Docker to build Docker images | GitLab
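As an illustration, jobs on such runners could look roughly like this (the `docker` tag, image versions and registry variables are assumptions about a typical setup, not a copy of my pipelines):

```yaml
# .gitlab-ci.yml sketch: each job picks its own image on the shared runners
test:
  tags: [docker]              # the shared tag all four runners advertise
  image: node:20              # per-job build/test environment
  script:
    - npm ci
    - npm test

build-image:
  tags: [docker]
  image: docker:27
  services:
    - docker:27-dind          # dind approach from the GitLab docs
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```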
If you need to execute something on another machine - e.g. let’s say you need to deploy a container - then you can use SSH to connect from your job’s container to the target VM. There are plenty of examples of that, some of mine (plus a small sketch after the links):
- Question on Deploy phase of gitlab-ci.yaml - #2 by paula.kokic
- How to use docker-compose.yml file in GitLab Runner server? - #3 by paula.kokic
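As a rough sketch (the host, user, compose file path and `$DEPLOY_SSH_KEY` CI/CD variable are hypothetical; adapt to your environment):

```yaml
# Deploy by SSH-ing from the job's container into the target VM
deploy:
  tags: [docker]
  image: alpine:3.20
  before_script:
    - apk add --no-cache openssh-client
    - eval "$(ssh-agent -s)"
    - echo "$DEPLOY_SSH_KEY" | tr -d '\r' | ssh-add -   # private key from a CI/CD variable
  script:
    - ssh -o StrictHostKeyChecking=no deploy@target-vm.example.com "docker compose -f /srv/app/docker-compose.yml up -d"
```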
Hope this helps!