Job queue times

Hi!

My organization has recently purchased GitLab Premium.
We’re on the SaaS version, straight up gitlab.com.
But we have our own private gitlab-runner instances, 3 of them.
My problem is that queue times for CI jobs are really high while the runners are idle.
We’d used the free tier before and thought that maybe it would get better when we purchased Premium, but there has been no improvement since we made the move.
Jobs that take our runners 30 seconds to complete are often stuck for 4 to 5 minutes before they’re even scheduled.
Sometimes it gets really extreme: one of my own jobs was scheduled to run only after a whopping 45 minutes. While our runners were idle, mind you.

I’ve searched this forum and found the thread CI Queue times, which has been rather disheartening.

Is there something that we or GitLab can do about this?

I believe this is a similar issue to mine: GitLab CI jobs queued on retry

Did you find a solution, or anything that helped?

Hi @avollmerhaus @mjagielloTMPL

We have the same issue, any progress?

Hi!

I have opened a ticket with the GitLab support for this case.
What we did so far in /etc/gitlab-runner/config.toml:

  • raised the concurrent setting from 1 to 3
  • set check_interval from undefined to 1
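
As a rough reference, here is what our /etc/gitlab-runner/config.toml looked like after those two changes. Treat the [[runners]] section as a placeholder: the name, token and executor below are made up and will differ on your side.

    concurrent = 3
    check_interval = 1

    [[runners]]
      name = "example-runner"        # placeholder name
      url = "https://gitlab.com/"
      token = "REDACTED"             # your registration token
      executor = "docker"            # whatever executor you actually use
      [runners.docker]
        image = "alpine:latest"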

This immediately drove the queue times down from up to 5 minutes to mostly seconds.
GitLab support suspected that my runners were not really idle when the high queue times occurred, and that the low concurrent setting was to blame for the runners not accepting more jobs.
I’m not convinced that this was the case, as I had often monitored my runners very closely and am sure that they were idle when the high queue times occurred.
Based on the concurrent theory, we removed the check_interval setting so it would fall back to its default and raised concurrent even further to 6.
Suspecting that the increased check rate also played a role, I unregistered some obsolete projects that no longer needed runners, so we went from 6 registered executors per runner to only 2.
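
For completeness, the global part of config.toml now looks roughly like this (a sketch only; the exact default polling interval is whatever GitLab Runner ships with). After unregistering, the obsolete [[runners]] blocks are gone from the file as well.

    concurrent = 6
    # check_interval removed, so the runner falls back to its default polling interval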
At the moment it’s looking good, but the change is too fresh to make a judgement yet. I’ll let the pipelines run for some days and report back.

Hi, we have the same issue. Is there any update from your side?

Yes!
Queue times have stayed down, and since the problem was gone I had to stop investigating in favour of other, more pressing issues.
So I’m not sure what the real cause was, but maybe try applying some of the steps that I took.
Feel free to report back here if you find the time.