Reduce storage usage on a repository

One of my repositories on GitLab.com shows that it is using 9 GB of “storage” on the ‘Project Overview’ page. It says Files: 311 MB, Storage: 9 GB.

I am not sure where these 9 GB come from! I have done a lot of CI testing on this project, which also created a lot of artifacts. But I deleted the pipelines using the REST API:

curl --header "PRIVATE-TOKEN: $TOKEN" --request "DELETE" "https://gitlab.com/api/v4/projects/$PROJECT/pipelines/$PIPELINE"
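
To get rid of all of them, I looped over the pipelines API like this (a rough sketch; it assumes jq is installed and that one page of 100 covers all pipelines, so you may have to re-run it until the list is empty):

for PIPELINE in $(curl --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/api/v4/projects/$PROJECT/pipelines?per_page=100" | jq -r '.[].id'); do
  # delete the pipeline together with its jobs and their artifacts
  curl --header "PRIVATE-TOKEN: $TOKEN" --request "DELETE" "https://gitlab.com/api/v4/projects/$PROJECT/pipelines/$PIPELINE"
done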

After deleting them and waiting a day, the 9 GB are still showing.

After that I ran “Housekeeping” under Settings > General, but that didn’t change the 9 GB of storage either. At this point I don’t know what to do, and I fear that I soon won’t be able to push to my repository because of the 10 GB storage limit per repository.
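
For what it’s worth, the same housekeeping run can also be triggered through the API (this should be equivalent to the button in Settings > General):

# trigger a housekeeping run for the project
curl --header "PRIVATE-TOKEN: $TOKEN" --request "POST" "https://gitlab.com/api/v4/projects/$PROJECT/housekeeping"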

I also exported the project to check the Git LFS storage, but the exported ZIP is only about 300 MB. I know the export does not contain the CI data, so I think GitLab somehow did not pick up that my pipelines were deleted. Can somebody tell me how I can force a cleanup of all CI data?

Cheers,

Robin


Hello, Robin, I’m having the same problem. In the meantime, have you found a solution?
Peter

Unfortunately no solution, but an emergency :rotating_light: strategy which I still have to test.

You could export the project from GitLab, which gives you a ZIP with all settings but not the artifacts. Then move your original repo to another URL/name and import a new GitLab project using that ZIP. That should get rid of the artifacts; from then on, either always set “expire_in” for artifacts, or don’t use them until this is fixed.
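
Via the API, the whole dance would look roughly like this (an untested sketch; “my-project” and the archive name are placeholders):

# 1. schedule an export of the project (CI artifacts are not included)
curl --header "PRIVATE-TOKEN: $TOKEN" --request "POST" "https://gitlab.com/api/v4/projects/$PROJECT/export"

# 2. poll the export status until it is "finished", then download the archive
curl --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/api/v4/projects/$PROJECT/export"
curl --header "PRIVATE-TOKEN: $TOKEN" --output export.tar.gz "https://gitlab.com/api/v4/projects/$PROJECT/export/download"

# 3. after renaming/moving the original project, import the archive as a fresh project
curl --header "PRIVATE-TOKEN: $TOKEN" --request "POST" --form "path=my-project" --form "file=@export.tar.gz" "https://gitlab.com/api/v4/projects/import"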

Thanks for your quick response. Which artifacts can be given an ‘expires’ value? Do you mean the images in the Container Registry, or artifacts from the CI/CD processes?
Regards
Peter

Hi Robin,
thanks for the hint. I have found the corresponding documentation and will test it:
https://docs.gitlab.com/ee/ci/yaml/#artifactsexpire_in
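
For reference, a minimal .gitlab-ci.yml snippet using that keyword should look something like this (the job name and paths are just examples):

build:
  script:
    - make build
  artifacts:
    paths:
      - build/
    # artifacts should be deleted automatically one week after the job finishes
    expire_in: 1 week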
Greetings
Peter


I performed some tests. Despite ‘expire_in’, the project storage is not released (at least according to the UI)…
Apart from exporting and re-importing the project, there seems to be no way to free the storage.
Greetings
Peter

Ugh. That is bad news. If “expire_in” does not work, this is a major bug :bug: indeed. Not sure how to escalate it higher.

Thank you for checking!

@robinryf Same results here, even after deleting pipelines. The GitLab issue tracker has tens of thousands of issues. Has anyone found a relevant issue so we can track this? I’ve done some searching but can’t seem to find an issue that precisely describes this problem. I too would like to escalate, or at least make sure GitLab is aware of the issue.

Any update on how to solve this issue? CI/CD keeps consuming storage on the repo: the actual size of all the files in my repo is 5.3 MB, but the storage consumed by CI is 18 GB! @ushandelucca, got some news?

In my project I changed the pipeline so that I no longer need artifacts. Since then my storage usage has not grown any further, but the whole build is now done in a single job (see the sketch below).
The workaround with exporting, deleting, and importing the project seems to be the only solution at the moment.
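
For anyone interested, the idea is simply to merge the stages into one job so that nothing has to be handed over as an artifact; something along these lines (a sketch, the script lines are placeholders):

build_and_test:
  # one combined job: nothing is declared as an artifact,
  # so no artifact storage should be consumed
  script:
    - make build
    - make test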


Thanks a lot for your quick answer; it’s kind of a sad workaround we have to use to mitigate this issue. Thanks again, @ushandelucca

Here are a few relevant issues I found:

I’m also having the same challenge.

Every pipeline job I run takes about 50 MB of storage. Even with “expire_in” set, this space is still not reclaimed after the expiration.

My repository size is just 2.9 MB, but storage is over 850 MB at the moment and jumps by an additional 50 MB every time I run the pipeline.
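
As a stopgap I’m considering deleting the leftover artifacts directly through the jobs API (an untested sketch; it assumes jq is installed, at most 100 jobs per page, and the “delete job artifacts” endpoint available in newer GitLab versions):

for JOB in $(curl --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/api/v4/projects/$PROJECT/jobs?per_page=100" | jq -r '.[].id'); do
  # delete the artifacts (but not the log) of a single job
  curl --header "PRIVATE-TOKEN: $TOKEN" --request "DELETE" "https://gitlab.com/api/v4/projects/$PROJECT/jobs/$JOB/artifacts"
done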

There seem to be more open issues related to this.


Has somebody tried what happens when you go over the 10 GB limit on GitLab.com? I think once you go over it, you can’t push to the repository anymore. Does this restriction use the “real” size, or the wrong size shown in the repository information?


I’m having the same problem. I deleted all jobs and pipelines, but job_artifacts_size keeps reporting more than 7 GB, and every pipeline takes more space that is not reclaimed after expiration (I have expire_in set to 1 day). I don’t know what to do.
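
In case others want to compare numbers, I’m reading job_artifacts_size from the project statistics in the API:

# show the storage statistics GitLab has recorded for the project
curl --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/api/v4/projects/$PROJECT?statistics=true" | jq '.statistics'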

There’s a somewhat old (two-year) issue reporting that the artifacts size statistics calculation was, and still is, broken. That seems to be the root of these storage size quirks.

Any updates?