Hi all.
We have GitLab 13.12.10 EE running on 3 EC2 nodes in AWS. We run a regular backup with
gitlab-backup create CRON=1 STRATEGY=copy BACKUP=${BACKUP_TIMESTAMP}
With STRATEGY=copy the backup first creates a tarball locally and then moves it to an S3 bucket. The problem is that the whole backup now takes too long (almost 6 h) and the local disk where the copy is made keeps filling up, so we have to resize the boot disk quite often.
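Roughly, the cron job looks like this (the bucket name and the upload step are simplified placeholders, our actual script differs slightly):

```bash
#!/bin/bash
# Nightly GitLab backup: create the tarball locally, then push it to S3.
set -euo pipefail

BACKUP_TIMESTAMP=$(date +%Y%m%d_%H%M)

# Creates the tarball under /var/opt/gitlab/backups (STRATEGY=copy duplicates
# the data locally first, which is what eats the disk space).
/usr/bin/gitlab-backup create CRON=1 STRATEGY=copy BACKUP="${BACKUP_TIMESTAMP}"

# Upload the finished tarball(s) to the S3 bucket and free the local disk.
aws s3 cp /var/opt/gitlab/backups/ "s3://my-gitlab-backups/" \
  --recursive --exclude "*" --include "*_gitlab_backup.tar"
rm -f /var/opt/gitlab/backups/*_gitlab_backup.tar
```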
Is there another backup strategy or a better way to do this?
Cheers
Hi all. Any thoughts? Cheers
/Alf
Since GitLab backup dumps to /var/opt/gitlab/backups, that is the place to address the problem. As your data grows, so does the local space needed to create the backup, which is exactly what you are running into. Technically you could add a second disk to the VM and mount it at /var/opt/gitlab/backups.
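A rough sketch of that approach, assuming you attach an EBS volume first (the device name /dev/xvdf and the xfs filesystem are just examples, check how the volume actually shows up on your instance):

```bash
# Format the new volume (example device name).
sudo mkfs.xfs /dev/xvdf

# Mount it where GitLab writes its backups.
sudo mkdir -p /var/opt/gitlab/backups
sudo mount /dev/xvdf /var/opt/gitlab/backups

# GitLab expects the backup directory to be owned by the git user.
sudo chown git:git /var/opt/gitlab/backups
sudo chmod 0700 /var/opt/gitlab/backups

# Make the mount persistent across reboots.
echo '/dev/xvdf /var/opt/gitlab/backups xfs defaults,nofail 0 2' | sudo tee -a /etc/fstab
```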
Alternatively, you could mount the S3 bucket at /var/opt/gitlab/backups instead and back up directly to the S3 bucket. This article explains how to mount an S3 bucket under Linux: How to Mount Amazon S3 as a Filesystem in Linux, Windows, and macOS
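If you go that route, with s3fs-fuse it would look roughly like this (the bucket name is a placeholder; on EC2 you can use the instance's IAM role for credentials instead of a key file):

```bash
# Install s3fs-fuse (package name on Ubuntu/Debian; use yum/dnf on RHEL-based AMIs).
sudo apt-get install -y s3fs

# Mount the bucket where GitLab writes its backups, using the EC2 instance's
# IAM role for credentials. "my-gitlab-backups" is just an example bucket name.
sudo s3fs my-gitlab-backups /var/opt/gitlab/backups \
  -o iam_role=auto -o allow_other -o uid=$(id -u git) -o gid=$(id -g git)
```

Keep in mind that s3fs is a FUSE filesystem, so writing the tarball goes straight over the network, but at least it takes the local disk out of the equation.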
Maybe that would work for you without needing to expand the local disk or add a second disk to the VM. Whether it fits your environment is something you would have to judge.
Thanks Ian. I will give it a try and let you know.
I didn’t know you could mount an S3 bucket on Linux.
Cheers