We’ve run into disk space issues on our GitLab server, and investigation turned up a lot of old artifacts for some projects (since I prioritised cleaning out a few projects first, to bring the server back to a saner state, I haven’t yet determined whether it affects all projects). Looking at the job IDs for the jobs that produced some of those artifacts, I found that they come from a job that has
```yaml
artifacts:
  expire_in: 2 days
```
in its .gitlab-ci.yml, so expiry had been thought of. Still, I have a script running that traverses this project’s jobs and removes artifacts from all old builds; it’s been running for a couple of hours (I didn’t note the start time) and has so far freed up more than 200 GiB of disk space.
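For reference, the cleanup script does roughly the following. This is a minimal sketch against the GitLab REST API (`GET /projects/:id/jobs` plus `DELETE /projects/:id/jobs/:job_id/artifacts`); the instance URL, project ID, token, and the 2-day cutoff are placeholders for our setup, and a real run would want error handling and some throttling:

```python
# Sketch: bulk-delete artifacts from old finished jobs via the GitLab REST API.
# GITLAB_URL, PROJECT_ID and TOKEN are placeholders -- fill in your own values.
import json
import urllib.request
from datetime import datetime, timedelta, timezone

GITLAB_URL = "https://gitlab.example.com"  # assumption: your instance URL
PROJECT_ID = 42                            # assumption: numeric project ID
TOKEN = "glpat-..."                        # assumption: token with `api` scope


def is_expired(job, max_age_days=2, now=None):
    """True if the job has an artifacts archive and finished > max_age_days ago."""
    if not job.get("artifacts_file"):
        return False
    finished = job.get("finished_at")
    if not finished:
        return False
    finished_dt = datetime.fromisoformat(finished.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return now - finished_dt > timedelta(days=max_age_days)


def delete_old_artifacts():
    """Page through the project's jobs and delete artifacts of expired ones."""
    page = 1
    while True:
        req = urllib.request.Request(
            f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/jobs"
            f"?per_page=100&page={page}",
            headers={"PRIVATE-TOKEN": TOKEN},
        )
        jobs = json.load(urllib.request.urlopen(req))
        if not jobs:
            break
        for job in jobs:
            if is_expired(job):
                del_req = urllib.request.Request(
                    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}"
                    f"/jobs/{job['id']}/artifacts",
                    headers={"PRIVATE-TOKEN": TOKEN},
                    method="DELETE",
                )
                urllib.request.urlopen(del_req)
        page += 1
```

The expiry check is kept separate from the API calls so the cutoff logic can be tuned (or tested) without touching a live instance.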
Has anyone else seen something similar?
As far as I’m aware we haven’t changed anything in GitLab’s configuration. (We are planning to move artifacts to our S3-compatible storage to get more space, but that’s mostly an argument for stability, i.e. not changing anything related to artifacts yet.) The only relevant thing I’ve noticed in the release announcements for new versions of GitLab is a change that tries not to delete artifacts from the most recent pipeline, which sounds pretty sane and is not like what I’m seeing.