I’m using gitlab.com with shared runners and I have a repo with a Files size of 27 MB and some CI/CD pipelines. The thing is that my Storage Size is now 7.4 GB and it’s increasing very fast.
I checked the stats of the repo through the API and got this:
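For context, I’m pulling the statistics with roughly this call, using python-requests (the project ID and token below are placeholders, not my real values):

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
PROJECT_ID = 12345  # placeholder: your numeric project ID
HEADERS = {"PRIVATE-TOKEN": "<personal-access-token>"}  # placeholder token

# GET /projects/:id?statistics=true includes a "statistics" object
# with repository_size, job_artifacts_size, storage_size, etc.
resp = requests.get(f"{GITLAB}/projects/{PROJECT_ID}",
                    params={"statistics": "true"},
                    headers=HEADERS)
resp.raise_for_status()
print(resp.json()["statistics"])
```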
So I proceeded to delete all the jobs through the API, but as expected it reported that there were 0 jobs, so instead I deleted almost all the pipelines. From more than 200 pipelines I now have only 10, but the storage size is still 7.4 GB and the stats keep reporting the same: “job_artifacts_size”: 7868808783.
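The pipeline cleanup I did was along these lines (same kind of placeholders):

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
PROJECT_ID = 12345  # placeholder
HEADERS = {"PRIVATE-TOKEN": "<personal-access-token>"}  # placeholder

# GET /projects/:id/pipelines lists pipelines, newest first (paginated).
pipelines = requests.get(f"{GITLAB}/projects/{PROJECT_ID}/pipelines",
                         params={"per_page": 100},
                         headers=HEADERS).json()

# Keep the 10 most recent pipelines and delete the rest.
for p in pipelines[10:]:
    # DELETE /projects/:id/pipelines/:pipeline_id removes the pipeline and its jobs.
    requests.delete(f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{p['id']}",
                    headers=HEADERS)
```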
I don’t know what else to try. How am I supposed to free the job artifacts size reported by the statistics if the API says that I have 0 jobs?
Can someone guide me in the right direction? Thanks
Thanks, but I already tried to do a cleanup via the API with no luck. I guess I did something wrong, and after deleting the pipelines I can’t retrieve the job IDs to call /erase on the jobs.
The thing is that I deleted all the pipelines after getting 0 jobs from the URL /project_id/jobs, and now I can’t get the IDs, if the jobs even still exist.
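For reference, what I was trying to do was roughly this (placeholders again); the jobs list just comes back empty:

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
PROJECT_ID = 12345  # placeholder
HEADERS = {"PRIVATE-TOKEN": "<personal-access-token>"}  # placeholder

# GET /projects/:id/jobs lists the project's jobs (paginated);
# in my case this comes back as an empty list.
jobs = requests.get(f"{GITLAB}/projects/{PROJECT_ID}/jobs",
                    params={"per_page": 100},
                    headers=HEADERS).json()

for job in jobs:
    # POST /projects/:id/jobs/:job_id/erase removes the job's artifacts and log.
    requests.post(f"{GITLAB}/projects/{PROJECT_ID}/jobs/{job['id']}/erase",
                  headers=HEADERS)
```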
Hello,
from my experience on our self-hosted GitLab:

- expiring artifacts works, but it won’t expire the logs (even though those are stored and exposed in the API as artifacts)
- erasing the jobs, then deleting the pipeline works (I have a script for that; a sketch is below)
- because of this bug https://gitlab.com/gitlab-org/gitlab/-/issues/224151, deleting the pipeline without first erasing its jobs creates dangling job artifacts on disk. Admins can delete the dangling artifacts, but I don’t know how to fix the quotas afterwards.
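A minimal sketch of that erase-then-delete loop, assuming python-requests and placeholder values for the instance URL, project ID, and token (which needs the api scope):

```python
import requests

GITLAB = "https://gitlab.example.com/api/v4"  # placeholder: your instance URL
PROJECT_ID = 12345                            # placeholder project ID
HEADERS = {"PRIVATE-TOKEN": "<personal-access-token>"}  # placeholder token

# GET /projects/:id/pipelines (paginated, newest first).
pipelines = requests.get(f"{GITLAB}/projects/{PROJECT_ID}/pipelines",
                         params={"per_page": 100},
                         headers=HEADERS).json()

for p in pipelines:
    # First erase every job of the pipeline so its artifacts and log
    # are actually removed from disk.
    jobs = requests.get(
        f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{p['id']}/jobs",
        headers=HEADERS).json()
    for job in jobs:
        requests.post(f"{GITLAB}/projects/{PROJECT_ID}/jobs/{job['id']}/erase",
                      headers=HEADERS)
    # Only then delete the pipeline itself.
    requests.delete(f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{p['id']}",
                    headers=HEADERS)
```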