Jobs hanging and running out of minutes

Problem to solve

I’ve just received an email informing me that I had run out of CI/CD minutes. Since that shouldn’t be possible in only three days, I checked the only project that had run pipelines in those days, and it had a pipeline with hanging jobs. The job logs ended with the “Job succeeded” message, but the jobs were still shown as running.

Steps to reproduce

I can’t reproduce it; it just happened that once and now I’m out of minutes. There were no changes to .gitlab-ci.yml.

Versions

Helpful resources

Not sure if it’s related, since the message has been deleted and the topic closed, but this other topic opened less than an hour ago seems related:

I guess it’s related to the active incident: https://status.gitlab.com/

3 Likes

Same problem here; it seems that jobs are hanging due to a global problem => 2024-09-03: The sidekiq_queueing SLI of the sidekiq service on shard urgent-cpu-bound has an apdex violating SLO (#18489) · Issues · GitLab.com / GitLab Infrastructure Team / Production · GitLab
And it may have consumed all the remaining time while hanging… I hope not.

1 Like

Same here, I hope our compute minutes can be restored.

2 Likes

Looks like the incident has been resolved => https://status.gitlab.com/

But our compute minutes are gone :cry:

Isn’t there any hope of restoring them? Jobs have the default 10-minute timeout, so I can understand losing those 10 minutes, but anything beyond that (or the configured timeout) is not our fault…
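
For reference, a job-level timeout can be set in .gitlab-ci.yml so a stuck job is cancelled after a fixed limit instead of running until the project-wide timeout. A minimal sketch, assuming a typical config (the job name and script are illustrative):

```yaml
# Hypothetical job; the `timeout` keyword caps how long this single job may run
# before the runner cancels it, which limits how many minutes a hung job can consume.
build-job:
  stage: build
  script:
    - echo "Building…"
  timeout: 10 minutes   # cancelled after 10 minutes instead of the project default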

I have a free plan, so no access to support…
I just bought 1000 minutes (10€) because I can’t wait for the compute minutes to reset.

1 Like

I’ve just received this email from the GitLab team, and they have restored our minutes! :partying_face:

[screenshot of the email]

1 Like

I suddenly received the email today. It says 556% used (6723 / 1216 units), but I was still able to run CI yesterday. Though I’m not really sure it’s related to this thread.

Exactly the same thing! Something supposedly swallowed 6900 minutes today (a 700% increase).

1 Like

This is happening to me at work. The job succeeds but doesn’t move to the succeeded status, which drains minutes and prevents the following steps of the pipeline from executing in a timely manner.

Hi folks,

There were some other threads here on the same topic, but if you’re experiencing this issue today and didn’t get an email, it’s likely related to:

We’re still working out the details on getting everyone sorted in terms of CI minutes, similar to what happened last week. If you’re blocked or are concerned, feel free to reach out to our support team via a ticket. But I just wanted to confirm that we’re aware and working on getting everyone’s minutes in order.

1 Like

Same problem!
We are experiencing an issue where pipeline jobs remain stuck in a running state even though all tasks are successfully completed. This causes an abnormal consumption of CI minutes.

Attached is a chart showing abnormal pipeline usage in September.