High disk quota usage despite artifacts being deleted

Environment: gitlab.com
Account: Free

I currently have a project that produces artifacts with an `expire_in: 1h` limit. I recently noticed that my disk quota was almost full, with almost 7 GB attributed to artifacts. This is unexpected, as my artifacts are only around 10 MB.

I manually checked each CI job, and none of them have a downloadable artifact. That is, there is no download button next to the job in the job list, and the few jobs I looked at closely said that the artifact had expired.

At this point, I am guessing one of two things:

  1. The disk quota counter counts something else as artifacts too, and maybe I’m just not looking in the right place.
  2. There is a bug somewhere in GitLab where expired artifacts do not reduce the disk quota.

Does anyone have any ideas about this? I have temporarily stopped producing artifacts, and the disk usage has not changed (as expected).

Thank you.

Yes, I’m experiencing the same thing, also on gitlab.com with a Free account.

On my private repository, each commit produces an artifact of around 50 MB, set to expire after 30 minutes. However, even after the artifacts expire, the artifact storage usage never decreases; it only goes up.

Currently it’s almost at 500 MB. I know that’s still very far from GitLab’s 10 GB limit, but I hope they fix this issue before I reach it.

@ivan_achlaqullah If you don’t mind me asking, what exactly did you set your expire_in value to?

I set mine to 1h, and after looking at the documentation, it looks like that may not be a valid value; perhaps I should have written 1 hrs. I’m not sure whether GitLab accepted it as valid input. The jobs show my artifacts as deleted, but I’m wondering if there is some odd issue where the actual artifact blob is not deleted because GitLab doesn’t know what 1h means. I’m curious whether it will be deleted after the default 30 days.

If you set it to a valid value (such as 30 mins), then this is probably a GitLab bug. If not, it may just be that GitLab doesn’t understand our expire_in values and is defaulting to 30 days.
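For reference, this is roughly what a valid configuration should look like. A minimal `.gitlab-ci.yml` sketch (the job name, script, and paths are placeholders I made up), using one of the duration formats the documentation lists as valid:

```yaml
# Hypothetical job — only the artifacts section matters here.
build:
  script:
    - make dist          # placeholder build command
  artifacts:
    paths:
      - dist/
    # The docs show durations such as "42" (seconds), "3 mins 4 sec",
    # "2 hrs 20 min", "1 day", "1 week" as accepted expire_in values.
    expire_in: 30 min
```

If GitLab silently fell back to a default because it couldn’t parse a value like `1h`, that would explain artifacts living far longer than intended.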

Dear taysoftware,
Something similar is happening to me. I’m seeing huge storage use (over 9 GB) despite a small repo size (15 MB). I still don’t know how to check what is eating up all that space. I tried deleting most of the artifacts, job logs, and pipelines; the storage used is still over 4 GB. A single artifact is 60 MB, and expire_in is set to 1 day in my case. How can I tell whether that remaining ~3.9 GB is attributed to artifacts or not?

@p-czigany Yeah, that is hard to find. Disk quota is tracked per project, but it is reported under your user/org settings. From your project page, do the following:

  1. Click user avatar in upper right
  2. Click Settings
  3. In the left column, click Usage Quotas (at the bottom for me)
  4. This screen defaults to your CI minutes. For storage, click the Storage tab near the top.
  5. Click the arrow to expand your project. You should see where all your data is.

Unfortunately, that is all you can see (as far as I know). You can’t get direct links to the data (like a list of all your artifacts), which makes the problem frustrating to diagnose.
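If clicking around the UI isn’t enough, the GitLab REST API can at least enumerate jobs and their stored artifact sizes. A rough shell sketch (the project ID is a placeholder, and the `artifacts_file` field layout is my assumption from the Jobs API docs; without a token it just prints the request it would make):

```shell
#!/bin/sh
# Hypothetical sketch: list per-job artifact sizes via the GitLab Jobs API.
# PROJECT_ID and GITLAB_TOKEN are placeholders you must supply yourself.
PROJECT_ID="${PROJECT_ID:-12345}"
URL="https://gitlab.com/api/v4/projects/${PROJECT_ID}/jobs?per_page=100"

if [ -z "$GITLAB_TOKEN" ]; then
  # Dry run: show the request we would make instead of executing it.
  echo "curl --header 'PRIVATE-TOKEN: <your-token>' '$URL'"
else
  # jq extracts each job id and its artifact archive size in bytes.
  curl --silent --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "$URL" \
    | jq -r '.[] | select(.artifacts_file != null) | "\(.id)\t\(.artifacts_file.size)"'
fi
```

Summing that second column and comparing it against the Usage Quotas number should at least tell you whether the space really is job artifacts.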

I’m using expire_in: 30 min (without the s), which according to the documentation should be a valid value, since the examples show both mins and min.

Maybe I’ve found the issue. I just reread the documentation and noticed something:

> Note: For artifacts created in GitLab 13.1 and later, the latest artifact for a ref is always kept, regardless of the expiry time.

Then I read the linked proposal.

According to that proposal, the most recently created artifact for a ref is never deleted, even if expire_in is defined. I don’t know why GitLab thinks this is a good idea.

But I’m still not sure whether this is actually the cause, or an actual bug inside GitLab itself.
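If the keep-latest behaviour is the cause, newer GitLab versions expose a per-project toggle for it (in the UI under Settings > CI/CD > Artifacts, and, as far as I know, as a `keep_latest_artifact` attribute on the Projects API). A dry-run shell sketch with a placeholder project ID:

```shell
#!/bin/sh
# Hypothetical sketch: disable "keep latest artifact" for one project.
# Availability of the keep_latest_artifact attribute depends on your
# GitLab version; PROJECT_ID and the token are placeholders.
PROJECT_ID="${PROJECT_ID:-12345}"
URL="https://gitlab.com/api/v4/projects/${PROJECT_ID}"

# Printed as a dry run; replace <your-token> and run the command yourself.
echo "curl --request PUT --header 'PRIVATE-TOKEN: <your-token>' \
  --form 'keep_latest_artifact=false' '$URL'"
```

After flipping it, the previously kept "latest" artifacts should become eligible for expiry, though I haven’t verified how quickly the quota updates.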

I have the same problem. I tried to contact support but only got a generic response that didn’t help.
If you are the owner of the project, you can check the space allocated to artifacts; see the attached screenshot.

I have set the project’s artifact expiration to 2 hours and have also deleted all pipelines, but storage only goes up, never down. What is the point of keeping artifacts when you can neither access them nor delete them?
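For what it’s worth, the Jobs API does seem to allow deleting a single job’s artifacts, which might help reclaim space for jobs that aren’t the kept "latest" one. A dry-run sketch (all IDs are placeholders, and I haven’t verified that this frees the quota immediately):

```shell
#!/bin/sh
# Hypothetical sketch: delete the artifacts of one job via the Jobs API.
# PROJECT_ID and JOB_ID are placeholders; printed rather than executed.
PROJECT_ID="${PROJECT_ID:-12345}"
JOB_ID="${JOB_ID:-678}"
URL="https://gitlab.com/api/v4/projects/${PROJECT_ID}/jobs/${JOB_ID}/artifacts"

echo "curl --request DELETE --header 'PRIVATE-TOKEN: <your-token>' '$URL'"
```

Looping that over the job IDs from the Jobs API would be a crude bulk cleanup, if the quota actually goes down afterwards.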

It seems to me like a product-management decision: either you pay to get full support, or you self-host.

The funny thing is that I don’t need artifacts at all. I just need to transfer some files between jobs, and this is the only reliable way. :slight_smile:
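For that use case, a short-lived artifact is how I’d sketch it too. Something like this minimal `.gitlab-ci.yml` (job names, stages, and paths are placeholders), where the downstream job receives the files automatically because artifacts from earlier stages are downloaded by default:

```yaml
# Hypothetical two-job pipeline: "build" hands files to "test" via artifacts,
# with a short expiry since the files are only needed within the pipeline.
stages:
  - build
  - test

build:
  stage: build
  script:
    - make dist          # placeholder build command
  artifacts:
    paths:
      - dist/
    expire_in: 30 min

test:
  stage: test
  script:
    - ls dist/           # artifacts from "build" are available here
```

The catch discussed above still applies: if the instance always keeps the latest artifact per ref, even these short-lived transfers can accumulate against the quota.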