How do automated backups work? Mine have run rampant

I’m running sameersbn/gitlab:9.1.4 on a docker 1.12.6 virtual machine. The server ran into thermal issues and shut down, and it repeated this cycle while I was troubleshooting why it stopped in the first place. I think the repeated hard stops had a negative effect on the Gitlab container.

Once I got the system functioning, I found Gitlab in a restart loop as described in #1079. I found the backup drive full, and proceeded to remove the extraneous backups created that day. For good measure, I killed the Gitlab container using docker-compose, and started a fresh container. That was all yesterday.

Today, the Gitlab container is in another restart loop and the backup drive is full of backups timestamped every minute. I’ve cleared the backups again, then restarted the services. Gitlab is running again.

Within the hour, the backups have triggered again, and are running every minute, but I cannot find what’s initiating this.

The docker-compose.yml is configured to back up daily:


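Something like this, paraphrased from memory (GITLAB_BACKUP_SCHEDULE and GITLAB_BACKUP_TIME are my best recollection of the sameersbn image's variable names; the exact values in my compose file may differ):

```yaml
# paraphrased from memory -- the relevant backup settings for the
# gitlab service, assuming the sameersbn image's variable names
gitlab:
  environment:
    - GITLAB_BACKUP_SCHEDULE=daily
    - GITLAB_BACKUP_TIME=01:00
```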
/var/spool/cron/crontab/git concurs:

# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/cron.git installed on Tue Jun  6 20:28:26 2017)
# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
00 01 * * * /bin/bash -l -c 'cd /home/git/gitlab && bundle exec rake gitlab:backup:create SKIP= RAILS_ENV=production'

I’ve tried commenting out that single entry in the crontab and then killing the cron task, but docker-compose logs showed the gitlab service respawning cron, and the backups continue to run every minute.

Is something else managing the backups and I’m looking in the wrong place?

I’m new to Rails; the logs don’t mention the backups running, or any problems with Sidekiq jobs. Is there some rake command I need to run to bump up the verbosity of the logs?


Alright, after poking at the containers, breaking things, and fixing them, the rampant backups have ceased.

The only oddity that stands out in this affair is that the unicorn process was stuck in a restart loop. I killed it, set the log level to :debug in config/environments/production.rb (or wherever that file lives), restarted the container, and saw that a stale pid file was causing unicorn’s restart loop.
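For anyone else chasing this, the verbosity bump was a one-line change (this is the stock Rails setting; the file's location inside the container may differ from a vanilla Rails app):

```ruby
# config/environments/production.rb -- path may differ inside the container
Rails.application.configure do
  # production defaults to :info; :debug surfaces far more detail in the logs
  config.log_level = :debug
end
```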

I stopped unicorn, removed the stale pid file, found a similar stale pid file for sidekiq, stopped that process, and removed its pid file, too. After dealing with some further database shenanigans from a self-inflicted moment that killed the postgres process without a proper shutdown, everything started up fine, with no rampant backups.
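The stale-pid check boils down to something like this sketch (the real files were under the gitlab install, e.g. /home/git/gitlab/tmp/pids/unicorn.pid; here a short-lived background job stands in for a process that has died):

```shell
# Demonstrate detecting and removing a stale pid file.
PIDFILE=$(mktemp)

( : ) &             # spawn a short-lived background job
DEAD_PID=$!
wait "$DEAD_PID"    # make sure it has exited, so its pid is stale
echo "$DEAD_PID" > "$PIDFILE"

# kill -0 sends no signal; it only checks whether the pid can be signalled
if ! kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
  echo "stale pid file, removing it"
  rm -f "$PIDFILE"
fi
```

If the pid in the file no longer maps to a live process, the file is removed; with a live unicorn or sidekiq you would stop the process first, as described above.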

It doesn’t make sense to me that unicorn would cause this behavior; maybe sidekiq was quietly stuck in an event loop, too, and it was the real cause. Either way, problem solved.