CI/CD minute limits will remain unchanged for members of our GitLab for Open Source […] programs.
Fine, nothing to worry about, I thought. Then in September I saw that the usage in my project had gone up significantly, but it was still 8872/50000 minutes (185000 in shared runner duration). Still well within the limits, so I assumed I’d be fine in October too.
Now we’re only at the 10th of the month and my usage is already 32324 CI/CD minutes, yet only 64640 (i.e. 2×) in shared runner duration. At this rate it seems I’ll be hitting the limit quite soon.
What is going on? Something here doesn’t add up. I’ve changed some jobs to use “saas-linux-medium-amd64” runners, but from my understanding that counts at most twice as much as the default “small” runners, and those jobs are less than half of the jobs and time (GitLab introduces new machine types for GitLab SaaS Linux Runners).
Thanks for the post. There is additional documentation about how cost factors are applied to projects. You can see the CI/CD and runner usage by project within your namespace, which should provide additional detail about which projects are consuming minutes. Based on the numbers provided, it appears the projects are in the Open Source program and using a cost factor of 0.5 as documented (64,640 minutes of runner duration × cost factor 0.5 = 32,320 CI/CD minutes).
The Medium instances use a cost factor of 2, so every 1 minute of runner duration = 2 minutes of CI/CD minute quota.
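To put that conversion in one place, here is a minimal sketch in Python using the numbers from this thread. The factor value is the documented one for Open Source program projects; how the medium-runner factor of 2 combines with the 0.5 is deliberately left out, since that is exactly the question raised below.

```python
# Sketch of the conversion from raw runner duration to CI/CD minute quota.
# The 0.5 factor is the documented Open Source program value; for medium
# runners the documented factor is 2, but whether it replaces or multiplies
# the 0.5 is not settled in this thread, so it is not applied here.

OSS_COST_FACTOR = 0.5  # Open Source program projects on shared runners

def quota_minutes(runner_minutes: float, cost_factor: float = OSS_COST_FACTOR) -> float:
    """CI/CD quota minutes consumed for a given raw runner duration."""
    return runner_minutes * cost_factor

# The October figure from this thread: 64,640 runner minutes at factor 0.5.
print(quota_minutes(64_640))  # 32320.0, matching the reported ~32,324
```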
Thanks for your reply. I have seen those documents, but that still does not explain the huge difference I’m seeing between October and September. The actual usage in our project hasn’t changed that much, so it looks like something in the accounting has. And it gets worse if we compare with previous months.
To me, it looks like the sentence about the “CI/CD minute limits will remain unchanged” is misleading. Sure, the limits don’t change, but what is considered a “CI/CD minute” changes drastically.
One additional question though: is the cost factor of 2 for medium instances applied instead of the 0.5, or in addition to it (i.e. multiplied by the 0.5)?
CI/CD minutes usage since Oct 01, 2022
35838 / 50000 minutes
The pattern over the last few months is:

| | Jun | Jul | Aug | Sep | Oct |
|---|---|---|---|---|---|
| CI/CD minutes | 579 | 426 | 379 | 8872 | 35963 |
| Shared runner duration | 76884 | 53340 | 47356 | 185155 | 71918 |
Clearly the October numbers match the 0.5 factor, but the earlier months don’t, hence my point that something else has changed: it looks like the factor went from 0.008 to 0.5.
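For reference, this is roughly how I backed out the implied factor per month, just the reported CI/CD minutes divided by the shared runner duration, with the values copied from the table above:

```python
# Implied cost factor per month = reported CI/CD minutes / shared runner duration.
usage = {
    "Jun": (579, 76_884),
    "Jul": (426, 53_340),
    "Aug": (379, 47_356),
    "Sep": (8_872, 185_155),
    "Oct": (35_963, 71_918),
}

for month, (minutes, runner_duration) in usage.items():
    print(f"{month}: {minutes / runner_duration:.3f}")

# Jun-Aug come out around 0.008 and October at 0.500; September lands in between.
```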
I understand the reasoning for the 0.008 factor for public forks… but that means it may be advantageous for us to create a dummy public fork just to run the CI on it, which seems silly.
It looks like the project was getting the public project cost factor until that factor was changed to 1 on October 1st, at which point the cost factor of 0.5 for OSS projects applied instead.
Depending on your workflow, public forks may make the most sense. If you are looking to reduce the overall pipeline time and the minutes used, there are some ideas in the documentation as well.
The thing is, an OSS project gets effectively 50,000 / 0.5 = 100,000 minutes, and a fork of the same project gets 2,000 / 0.008 = 250,000 minutes. It looks like it’s better to run CI on the fork.
Now comes the “abuse”: Would it be possible to create N forks, and set up the CI in the main project such that it simply triggers a pipeline on one of the forks (ideally selecting the one with the most remaining quota)?
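Just on the mechanics (leaving aside whether it would be acceptable use): triggering a pipeline on a fork from the main project could be done through GitLab’s pipeline trigger API. A hypothetical sketch, with made-up project IDs and trigger tokens, and without the “most remaining quota” selection, which would need an extra API lookup:

```python
import requests

# Hypothetical pool of public forks; the project IDs and trigger tokens are placeholders.
FORKS = [
    {"project_id": 11111111, "trigger_token": "glptt-fork-one"},
    {"project_id": 22222222, "trigger_token": "glptt-fork-two"},
]

def trigger_pipeline(fork: dict, ref: str = "main") -> dict:
    """Start a pipeline on the given fork via the pipeline trigger API."""
    url = f"https://gitlab.com/api/v4/projects/{fork['project_id']}/trigger/pipeline"
    response = requests.post(url, data={"token": fork["trigger_token"], "ref": ref})
    response.raise_for_status()
    return response.json()

# Example: fire a pipeline on the first fork in the pool and show its ID.
print(trigger_pipeline(FORKS[0])["id"])
```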