Thank you for the update, I’m following that issue too. I tried to get an idea of how much CI time libvirt is using by temporarily making my fork private and running the CI jobs. A single CI run appeared to consume about 450 minutes, and that was with many jobs skipped because their cache was already primed. So even if we were approved for the Open Source Program, I fear we would easily hit even the larger 50,000 minute limit: at ~450 minutes per run, that allowance covers only about 110 full pipelines a month, which our rate of contribution and current scale of testing would soon exceed (ignoring that we want to add even more CI). I’ve not checked their single-run CI time, but I expect the QEMU project will consume even more per run, and their plans call for more CI jobs too.
I completely understand why GitLab needs to put some limits on usage, as it isn’t sustainable for a business to provide unlimited free compute resources forever. So I expect that at some point we’re going to have to at least partially fund it ourselves with custom runners, even if we join the Open Source Program to get the extra CI minutes allowance.
If going down the custom runner route though, we would need to figure out an improved way to deal with the forking workflow wrt custom runners. We can’t expect contributors to set up custom runners, and AFAICT, a project’s runners are inaccessible to CI jobs run in forks right now. We need a way for a project maintainer to explicitly trigger CI jobs on a new or updated merge request, such that the jobs run on the main project’s runners, not the fork’s. If GitLab’s integrated CI features can’t cope with that, then it looks like we’ll have to consider using webhooks to trigger external CI instead (e.g. https://gitlab.com/ayufan/merge-requests-triggers), reporting results back as merge request comments or labels or approvals in some manner; a rough sketch of that approach is below. It would be a shame to lose integration with the GitLab CI infra though, as that’s one of my favourite features of GitLab.
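To make the webhook idea concrete, here’s a minimal sketch in Python (using Flask, the documented pipeline trigger API, and the merge request notes API) of what such a bridge might look like. It is only illustrative: the MAIN_PROJECT_ID and token environment variables are placeholders I’ve invented, and a real deployment would gate the trigger on an explicit maintainer action rather than firing on every update:

import os
import requests
from flask import Flask, request

GITLAB_API = "https://gitlab.com/api/v4"

# Placeholders for the sketch: the main project's numeric id, a pipeline
# trigger token (Settings -> CI/CD -> Pipeline triggers), and an API token
# able to comment on merge requests.
MAIN_PROJECT_ID = os.environ["MAIN_PROJECT_ID"]
TRIGGER_TOKEN = os.environ["PIPELINE_TRIGGER_TOKEN"]
API_TOKEN = os.environ["GITLAB_API_TOKEN"]

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def merge_request_event():
    event = request.get_json()
    if event.get("object_kind") != "merge_request":
        return "ignored", 200
    attrs = event["object_attributes"]
    # In reality we'd gate this on an explicit maintainer action (a comment
    # command or label), not on every open/update of a merge request.
    if attrs.get("action") not in ("open", "update"):
        return "ignored", 200

    # Start a pipeline in the *main* project, so it runs on the main
    # project's runners, passing the fork's branch details through as CI
    # variables for .gitlab-ci.yml to fetch and test.
    resp = requests.post(
        f"{GITLAB_API}/projects/{MAIN_PROJECT_ID}/trigger/pipeline",
        data={
            "token": TRIGGER_TOKEN,
            "ref": "master",
            "variables[MR_IID]": attrs["iid"],
            "variables[MR_SOURCE_BRANCH]": attrs["source_branch"],
            "variables[MR_SOURCE_PROJECT]": attrs["source_project_id"],
        },
    )
    resp.raise_for_status()
    pipeline_url = resp.json()["web_url"]

    # Report back on the merge request as a comment; labels or approvals
    # would use other API endpoints in the same fashion.
    requests.post(
        f"{GITLAB_API}/projects/{MAIN_PROJECT_ID}"
        f"/merge_requests/{attrs['iid']}/notes",
        headers={"PRIVATE-TOKEN": API_TOKEN},
        data={"body": f"CI triggered on project runners: {pipeline_url}"},
    ).raise_for_status()
    return "ok", 200

Reporting via comments is the crudest channel; pushing commit statuses (POST /projects/:id/statuses/:sha) against the MR’s head commit would get closer to the native pipeline display, but either way we’d be re-implementing integration that GitLab CI currently gives us for free.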