I recently switched my organization’s Git hosting from Bitbucket to GitLab. I have to say I couldn’t be happier with GitLab.
My concern, now that all of my code is hosted on a single server, is what to do in the case of a catastrophic failure. It’s hosted on DigitalOcean, and they do weekly backups, which covers things at the whole-system level.
I have GitLab installed via Omnibus and have configured it to offload backups daily (via cron) to Amazon S3. Then, on S3, I currently have it transition backups to Infrequent Access storage after 30 days, then to Glacier after 60 days. I think this may be overkill. Right now my total backup is 15 MB, but it could get very pricey to store once it’s a GB or more.
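For reference, the tiering I described is set up as an S3 lifecycle rule along these lines (the rule ID and prefix are placeholders; adjust to match your bucket layout):

```json
{
  "Rules": [
    {
      "ID": "gitlab-backup-tiering",
      "Filter": { "Prefix": "gitlab-backups/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 60, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

This is the shape the AWS CLI expects for `aws s3api put-bucket-lifecycle-configuration`; I haven’t added an `Expiration` action, which is part of what I’m asking about below.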
As of now, my standard policy is to never delete repositories once I’m done working on a project, because I may come back to one even years later. I’d like to continue that.
How do other people handle backups? My logical problem is that yesterday’s backup contains the same data as the backup from two days ago, and I can’t see a case where I’d want to restore from anything other than the latest one.
What are some best practices here?
Thanks for your help!