I’ve just received an email informing me that I had run out of CI/CD minutes. Since that should not be possible in only three days, I checked the only project that has run pipelines during that time, and it had a pipeline with hanging jobs. The job logs showed the “Job succeeded” message, but the jobs were still running.
Steps to reproduce
Can’t reproduce; it just happened that once and now I’m out of minutes. No changes to .gitlab-ci.yml.
Is there no hope of restoring them? Jobs have the default 10-minute timeout; I can understand those 10 minutes being used, but anything beyond that (or beyond the configured timeout) is not our fault…
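For reference, a minimal sketch of how that timeout can be made explicit in .gitlab-ci.yml, assuming the 10-minute limit mentioned above; the job name and script are placeholders, not taken from the affected project.

```yaml
# Sketch: cap job runtime explicitly so a hung job cannot burn minutes
# past the intended limit (assumes the 10-minute timeout discussed above).
default:
  timeout: 10 minutes      # applies to every job unless overridden

build:
  stage: build
  script:
    - echo "placeholder build step"
  timeout: 10 minutes      # job-level cap; cannot exceed the project-level timeout
```

Of course, if the job is already reported as succeeded but the runner keeps it in a running state, this setting only limits how long the leak can last per job rather than preventing it.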
I suddenly received the email today. It says 556% used (6723 / 1216 units), but I was still able to run CI yesterday. Though I’m not really sure it’s related to this thread.
This is happening to me at work. The job succeeds but never moves to the succeeded status, which drains minutes and blocks the following stages of the pipeline from executing in a timely manner.
We’re still working out the details of getting everyone’s CI minutes sorted, similar to what happened last week. If you’re blocked or concerned, feel free to reach out to our support team via a ticket. I just wanted to confirm that we’re aware of the issue and working on getting everyone’s minutes in order.
Same problem!
We are experiencing an issue where pipeline jobs remain stuck in a running state even though all tasks are successfully completed. This causes an abnormal consumption of CI minutes.
Attached is a chart showing abnormal pipeline usage in September.