Controlling how much disk space is used for saving old builds

We have a runner (using the “shell” executor) that uses more disk space than we feel it should. Looking at it, I can see that there are still builds lying around for users who were blocked 9 months ago (because they left the company). That’s really not needed.

I could easily add some commands to delete builds when blocking users, but there are probably also old builds lying around for active users (in fact, I can see builds lying around for myself, for projects I haven’t touched in 10 months).

Can I somehow make old builds be deleted after, say, 90 days in which the project hasn’t been touched? Or control in some other way how much disk space a runner uses?

I do not know how your runner is started; we use systemd.

Maybe you could restart the runner once per day and use a pre-start script to delete all old builds?
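As a sketch of that idea under systemd, a drop-in could run the cleanup before each (re)start. The builds path and the 90-day threshold here are assumptions; check `builds_dir` in the runner’s `config.toml` first:

```ini
# /etc/systemd/system/gitlab-runner.service.d/cleanup.conf
# Assumption: the shell executor keeps builds under /home/gitlab-runner/builds.
[Service]
# Remove project build directories untouched for 90+ days before each start;
# -mindepth/-maxdepth 4 assumes the builds/<token>/<id>/<namespace>/<project> layout.
ExecStartPre=/usr/bin/find /home/gitlab-runner/builds -mindepth 4 -maxdepth 4 -type d -mtime +90 -exec rm -rf {} +
```

After a `systemctl daemon-reload`, a daily `systemctl restart gitlab-runner` (from cron or a systemd timer) would then trigger the cleanup.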

Or run find from a cron job and delete all stale directories?
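A minimal sketch of the cron variant, assuming the shell executor’s usual directory layout and a hypothetical builds path (adjust `BUILDS_DIR` to the `builds_dir` from the runner’s `config.toml`):

```shell
#!/bin/sh
# Sketch: delete build directories untouched for a given number of days.
cleanup_builds() {
    builds_dir=$1
    days=${2:-90}
    [ -d "$builds_dir" ] || return 0
    # -mindepth/-maxdepth 4 assumes the shell executor's
    # builds/<token>/<concurrent-id>/<namespace>/<project> layout.
    find "$builds_dir" -mindepth 4 -maxdepth 4 -type d \
        -mtime "+$days" -exec rm -rf {} +
}

# Assumption: default shell-executor home; override via BUILDS_DIR.
cleanup_builds "${BUILDS_DIR:-/home/gitlab-runner/builds}" 90
```

Saved as, e.g., the hypothetical `/usr/local/sbin/cleanup-builds.sh`, a crontab entry like `15 3 * * * /usr/local/sbin/cleanup-builds.sh` would run it nightly.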


I didn’t set up this GitLab instance; I took it over from a former colleague when he left. Yesterday I found out that the configuration of our runners isn’t controlled by Chef, which we use for (almost) everything else. I kind of suspected that, as I hadn’t found anything in Chef writing that configuration, but I hoped that was just because it matched the defaults; what I found yesterday shows that it doesn’t :frowning:

I think it was just started once. It has been upgraded since, but apparently that process doesn’t clean anything up.

For the past couple of hours I’ve been thinking about writing a script to clean up, and how to do it without breaking anything (e.g. deleting half of a repo that someone or something thinks is still there). If anyone has something that works, I’d be happy to receive a copy.
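For the “don’t break anything” worry, one cautious approach is to dry-run first: list the candidates, inspect them while no jobs are running, and only then delete. A minimal sketch, with the same layout and threshold assumptions as the cron suggestion above:

```shell
#!/bin/sh
# Sketch: print stale project build directories instead of deleting them.
# The depth assumes the shell executor's
# builds/<token>/<concurrent-id>/<namespace>/<project> layout (an assumption);
# the 90-day default is arbitrary.
list_stale() {
    find "${1:?usage: list_stale <builds-dir> [days]}" \
        -mindepth 4 -maxdepth 4 -type d -mtime "+${2:-90}"
}
```

Run `list_stale /home/gitlab-runner/builds 90`, check the output by hand, and only once it looks right pipe it into deletion, e.g. `list_stale /home/gitlab-runner/builds 90 | xargs -r rm -rf --` (GNU xargs’ `-r` skips the run when the list is empty).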