GitLab backup function exhausting drive space (shouldn't!)

Hey guys!

I was testing the built-in GitLab backup function yesterday and it ended up crashing my server and leaving my filesystem broken for a while. What happened was that the variable data partition (VGRP1-VAR, the Debian LVM name) ran full, and for some reason ESXi decided to shut down the VM.

Now, I have about 140 GB worth of repos in my current GitLab setup, and the partition mentioned is 931 GB in size.
How does this happen? How much data does GitLab create when it runs a backup? It shouldn't be a problem with the other VMs hosted on my server, since I always allocate drive space with the "Thick Provision, Eager Zeroed" option, which allocates the space up front precisely so that this kind of thing can't happen.


When a backup runs, the following things happen:

1. It dumps the DB, builds, uploads, .yml file, and repositories (the bulk of the data) into /var/opt/gitlab/backups (Omnibus) or /tmp/backups/ (source install).
2. It then compresses that data into a single backup file (a .tar archive).
3. It deletes the dumped folders from step 1.

During step 2, the dumped files and the tar file exist on disk at the same time, which means roughly double the data on the disk; that's where the disk-fill problem comes from.
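To guard against that peak, a minimal pre-check sketch (not official GitLab tooling) is to verify that roughly 2x your GitLab data size is free on the staging filesystem before starting a backup. The function name and paths below are my own examples; the defaults assume an Omnibus install.

```shell
# Sketch: refuse to back up unless ~2x the data size is free, since the
# dump and the tar coexist on disk until the cleanup step.
check_backup_space() {
  # $1 = GitLab data directory, $2 = filesystem the backup is staged on
  data_kb=$(du -sk "$1" | awk '{print $1}')
  free_kb=$(df -Pk "$2" | awk 'NR==2 {print $4}')
  if [ "$free_kb" -lt $((data_kb * 2)) ]; then
    echo "Not enough space: need ~$((data_kb * 2)) KB free, have ${free_kb} KB"
    return 1
  fi
  echo "OK: ${free_kb} KB free covers the ~$((data_kb * 2)) KB peak"
}

# e.g. on Omnibus:
# check_backup_space /var/opt/gitlab /var/opt/gitlab/backups
```

The 2x factor is conservative, since the tar is usually somewhat smaller than the raw dump.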

You can watch the disk utilisation during the backup to see this happen.
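A rough sketch for watching that in practice: sample the staging filesystem while gitlab-rake runs. The helper name is my own; the path is the Omnibus default, so adjust it for a source install.

```shell
# Print a one-line used/free snapshot of the filesystem a path lives on.
backup_usage() {
  df -Pk "$1" | awk 'NR==2 {printf "used=%sK free=%sK\n", $3, $4}'
}

# e.g. sample every 10 seconds during a backup:
# while sleep 10; do backup_usage /var/opt/gitlab/backups; done
```

You should see usage climb through the dump, spike during the tar step, then drop back once the dumped folders are deleted.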

Alternatively, you can run a rake backup that skips selected components (Omnibus), e.g.:
sudo gitlab-rake gitlab:backup:create SKIP=repositories,db,uploads
(note: no spaces after the commas, or the shell will split the SKIP list into separate arguments)


Fortunately, I figured out my problem. (GitLab did end up creating a faulty archive, but) the real issue was that the VM was running in snapshot mode with most of its drive space thick provisioned. I hadn't realized that the delta VMDK file stacks on top of the already allocated main VMDK, so the backup made the delta disk grow beyond the space available on the ESXi host itself. Cleaning up and merging the snapshots fixed my problem.
