Migrate Gitlab from a dead hacked server in FTP mode

Hi there,

The Debian development server of the company where I work has been hacked. The person who was in charge of it left a month ago, and I discovered that the server had not been updated at all (Debian 9, GitLab 13). We only have access to the server in rescue mode (i.e. via a Linux live CD). I would like to recover our old GitLab, which was installed by hand (GitLab 13.6.1-ce.0). My idea was to install a new GitLab in Docker with the old data.

I’ve been at it all day and I can’t get anywhere. I see that all the official docs talk about using GitLab commands (backup, restore, etc.), which I can’t do since the server is down. Is it possible to migrate the data just by copying/pasting files?

Hi, in short, no, it’s not really possible to do it just by copying and pasting (I have tried that myself). You have the database, which is important, and the repository data lives under /var/opt/gitlab/git-data/repositories. It might even be using hashed storage, which means the directories don’t carry the project names or even recognisable filenames. I can’t remember which GitLab version started hashing the directories in this location, so you would have to check.
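
For reference, if hashed storage is in use, the on-disk path is derived from the SHA-256 of the project’s numeric ID rather than its name, so without the database there is no easy way to tell which directory belongs to which project. Roughly, as a sketch (project ID 1 used purely as an example):

# SHA-256 of the numeric project ID gives the directory name:
echo -n 1 | sha256sum
# 6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b
# so the repository would sit at something like:
# /var/opt/gitlab/git-data/repositories/@hashed/6b/86/6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b.git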

If computers in the office already have all the repo data locally, then you can just push it to the new server once that is installed; to be honest it might be easier that way. Trying to recover what you have in its current state will be near impossible, and it’s not even certain that copying /opt/gitlab or /var/opt/gitlab wouldn’t also restore whatever was hacked in the first place, meaning your new server might end up being compromised as well (difficult to say without knowing in what sense the server was hacked or compromised).
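
For example, from a developer machine that already has a clone, re-pointing the remote at the new server and pushing everything would look roughly like this (server URL and project path are just placeholders):

cd myproject
# point the existing clone at the freshly installed server (placeholder URL)
git remote set-url origin git@gitlab.example.com:mygroup/myproject.git
git push origin --all    # push all local branches
git push origin --tags   # and the tags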

Also, when restoring from a GitLab backup you can only restore to the same type of installation: Omnibus restores to Omnibus, Docker to Docker, source install to source install. They cannot be restored across these install types. The version number must also be exactly the same: if the backup is from 13.6.1, then it must be restored to 13.6.1 before upgrading further.
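
So in your case, on a Debian-based Omnibus install you would pin the exact package version the backup was taken from, or pull the matching Docker tag, something like:

# Omnibus on Debian: install the exact version the backup came from
apt-get install gitlab-ce=13.6.1-ce.0
# or, for a Docker install, pull the matching image tag instead:
docker pull gitlab/gitlab-ce:13.6.1-ce.0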

Check what you have got under /var/opt/gitlab/backups and find the latest file there. Copy this, along with /etc/gitlab/gitlab.rb (or wherever the config lives if it’s a source install). Also make sure to grab the gitlab-secrets.json file, as this is needed too. You can then use these to restore the server, but, like I mentioned, it would have to be to the same version number and install type.
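
From the rescue environment that would be something along these lines, assuming the old disk is mounted under /mnt (adapt the mount point and destination to your setup):

ls -lt /mnt/var/opt/gitlab/backups/            # the newest *_gitlab_backup.tar is the one you want
cp /mnt/var/opt/gitlab/backups/<latest>_gitlab_backup.tar /safe/location/
cp /mnt/etc/gitlab/gitlab.rb /safe/location/
cp /mnt/etc/gitlab/gitlab-secrets.json /safe/location/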


Hi iwalker. Thank you for your answer.

I was afraid “impossible” would be the answer. To tell you the truth, I went through the different stages you describe, and it was indeed when I hit the hashed-storage wall that I decided to stop insisting and open this topic.

Well, we’ll try to restart the hacked server just long enough to run the backup procedure. In the future, we’re going to tighten our security policies and set up a cron job to make daily backups of GitLab.

Here is a script I use for backups:

#!/bin/bash
#
# Script to backup gitlab-ce

# Variables
PKGNAME=gitlab
PKGVER=$(dpkg -l | grep -i gitlab | awk '{print $3}')
GITLABCONFDIR=/etc/gitlab
GITLABBACKUPS=/var/opt/gitlab/backups
BACKUPDIR=/backups/gitlab
BACKUPDATE=$(date '+%F')
DAYS=7

# Cleanup old backups older than DAYS and wait a minute
# Cleanup is for /backups/gitlab directory
find ${BACKUPDIR} -type f -ctime +${DAYS} -exec rm -f {} \;
sleep 60

# Remove existing gitlab backups
rm -f ${GITLABBACKUPS}/*

# Backup Gitlab
sudo -u git -H gitlab-rake gitlab:backup:create
tar cvjpf ${BACKUPDIR}/${PKGNAME}-${PKGVER}-data-${BACKUPDATE}.tar.bz2 ${GITLABBACKUPS}/*.tar

# Backup Gitlab config
tar cvjpf ${BACKUPDIR}/${PKGNAME}-${PKGVER}-config-${BACKUPDATE}.tar.bz2 ${GITLABCONFDIR}

At the end of the script you can add something to copy the backup to another location, via ssh/rsync or whatever. Make sure that BACKUPDIR exists, since this is the directory where the backups will be stored, so either create this directory or change it as needed. In my script it’s /backups/gitlab, but you can set this to whatever you want.
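
For example, an rsync line like this at the end of the script would push the archives to another host (hostname and path are just placeholders):

rsync -av "${BACKUPDIR}/" backup@backuphost.example.com:/srv/backups/gitlab/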

The DAYS variable is set to keep backups for 7 days; set this as needed. Mine actually keeps them for 3 days.

If you are not using Debian/Ubuntu, then the command used for PKGVER would need to be adapted, for example using rpm/yum/dnf, otherwise it won’t pull out the GitLab version number (see the example after the file list below). The script will then create backup files that look like this:

gitlab-14.4.1-ce.0-config-2021-11-02.tar.bz2
gitlab-14.4.1-ce.0-config-2021-11-03.tar.bz2
gitlab-14.4.1-ce.0-config-2021-11-04.tar.bz2
gitlab-14.4.1-ce.0-data-2021-11-02.tar.bz2
gitlab-14.4.1-ce.0-data-2021-11-03.tar.bz2
gitlab-14.4.1-ce.0-data-2021-11-04.tar.bz2
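
As mentioned above, on an RPM-based system the PKGVER line could be replaced with something like this (assuming the package is installed as gitlab-ce):

# query the installed package version directly from the RPM database
PKGVER=$(rpm -q --queryformat '%{VERSION}-%{RELEASE}' gitlab-ce)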

The script is run from cron as the root user, so in /etc/cron.d/backups I have this:

0 0	* * *	root	/root/scripts/backup-gitlab.sh > /dev/null 2>&1

You can replace /dev/null with a log file if you want to see the output from the backup script.
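
For example, something like this (the log file path is just a placeholder):

0 0	* * *	root	/root/scripts/backup-gitlab.sh >> /var/log/gitlab-backup.log 2>&1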


hey, thanks a lot!

(we’re using Debian)

OK, so now I’m trying to avoid restarting the server at all by just chrooting into it from the rescue environment.
The backup command fails because the PostgreSQL server is not running:

Dumping PostgreSQL database gitlabhq_production ... pg_dump: [archiver (db)] connection to database "gitlabhq_production" failed: could not connect to server: Connection refused

So: can I just start the PostgreSQL server in the chrooted env? (If yes, how? Since it’s GitLab’s internal PostgreSQL, I’m not sure how to proceed…)
Or: is it just hopeless, and should I abandon the idea of making the backup without fully launching GitLab?

You can use:

gitlab-ctl start postgresql

and see if that will help with the backup. That way only the postgres service is running and not the entire GitLab stack. I’m not sure if other services are required to be running, but for a dump of the DB, postgres must be running.


sadly:

fail: postgresql: runsv not running

Just in case, in chroot environment try:

systemctl start gitlab-runsvdir

and then try starting postgres again. If that doesn’t help, then it doesn’t look like it’s going to work. How are you chrooting exactly? Have you made sure to mount /proc, /dev, etc. inside the chroot environment, or not done this?

mount -t proc /proc /path/to/my/chroot/proc
mount --rbind /dev /path/to/my/chroot/dev
mount --rbind /sys /path/to/my/chroot/sys

All of that should be done before running the chroot /path/to/my/chroot /bin/bash command. Also, once in the chroot, you can do:

source /etc/profile

to load the environment. It’s not strictly necessary, but it can help with certain environment settings.

root@rescue:/# systemctl start gitlab-runsvdir
Running in chroot, ignoring request.

:sweat_smile:

The gitlab-runsvdir systemd unit has this inside it, so in theory you could also run it manually:

/opt/gitlab/embedded/bin/runsvdir-start &

I added the ampersand on the end so that it would run in the background. That will at least get around the systemd complaint about running in chroot, and then you can try starting postgres as per previous command.
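
Putting it all together, the whole sequence inside the chroot would roughly be (a sketch only; if the repositories step of the backup complains, gitaly may need starting as well):

/opt/gitlab/embedded/bin/runsvdir-start &   # start the runit supervisor in the background
gitlab-ctl start postgresql                 # bring up only the bundled PostgreSQL
# gitlab-ctl start gitaly                   # possibly needed for the repositories part of the backup
gitlab-rake gitlab:backup:create            # writes the backup under /var/opt/gitlab/backups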

Otherwise I’m out of ideas of how else you could do this without starting the server normally.


Thank you so much, it’s working now.
I’ll write a post to summarize it all.
