Hello, the company I work for is trying to update their GitLab server, which was installed using the Linux packages (Omnibus) method, to the latest version, 16.8.2. I have looked into the upgrade process extensively and worked out what the upgrade path would look like, noting down any version-specific upgrade instructions and changes that need to be considered. My question is this: since the company uses the server heavily on weekdays, the only window that works is a weekend upgrade. Because it is used so frequently, we cannot afford long downtime, so I was wondering if we could upgrade the server in small chunks spanning multiple weekends, and whether that would be a bad approach or if it's better to do it all in one weekend. I know there is a way to update without downtime, but the company is not interested in following that approach.
Hi,
Yes, you can stretch the update out as much as you want, since you have to stop between steps anyway and wait for all background migrations to finish (which might take a while). You didn't mention your starting version, but if you follow the official documentation and/or refer to some examples here on the forum, you should know what to do.
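For reference, here's how the pending background migrations can be checked on an Omnibus install before moving to the next step - a minimal sketch, assuming a reasonably recent version (roughly 14.x or later; adjust for your actual starting version):

```bash
# Show pending vs. finished batched background migrations
sudo gitlab-rake gitlab:background_migrations:status

# The same information is available in the UI under
# Admin Area > Monitoring > Background Migrations
```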
Good luck!
In my experience, when updating a GitLab server to a new version like 16.8.2, it's really important to plan carefully to avoid long downtime and keep everything running smoothly. If you decide to do the update bit by bit over a few weekends, remember that your server will be in a sort of half-updated state for longer, which could cause some small problems or inconsistencies until the update is completely finished. Also, make sure to back up all your data and settings before you start. That way, if something doesn't go as planned, you can restore everything back to how it was.
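As a rough sketch of what that backup step could look like on an Omnibus install (the destination path below is just a placeholder - use wherever you normally keep backups):

```bash
# Back up application data (repositories, database, uploads, ...)
sudo gitlab-backup create

# Configuration and secrets are NOT included in the backup above,
# so copy them separately
sudo cp /etc/gitlab/gitlab.rb /etc/gitlab/gitlab-secrets.json /secure/backup/location/
```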
Why in a half-updated state for longer? I disagree with this statement. Once an update has finished, the background migrations need to finish. In some cases they happen very quickly, in others they take longer, but either way the server is still accessible and functional, most likely with lower performance until the migrations finish due to disk I/O. Whether you do the updates quickly one after the other, or take your time doing one or two per day, doesn't really matter.
Problems come when people update to the next release, don't wait for background migrations to finish, and then start the next upgrade. As long as the instructions in the Gitlab documentation are followed, problems will be minimised.
I once did every single point release update from 12.9.3 to either a 13.x or 14.x release because I was unsure of the upgrade path. That made for a total of 140 updates; I did 70 upgrades per day, allowed the background migrations to finish before starting the next one, and everything was fine. After the first batch of 70 was done, the server was still accessible and online; it wasn't in a half-updated state. I don't see how that is even possible unless something goes wrong, at which point you should be looking at either restoring from a backup or trying to fix whatever the problem is.
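For anyone stepping through releases like that on an Omnibus install, each step of the path can be pinned explicitly. A sketch for Debian/Ubuntu with the EE package - swap in gitlab-ce or your distribution's package manager as needed:

```bash
# List the versions available from the GitLab package repository
sudo apt-cache madison gitlab-ee

# Install one specific step of the upgrade path
sudo apt-get install gitlab-ee=16.8.2-ee.0

# Then wait for the background migrations to finish before the next step
```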
Please do feel free to explain that statement and what situation you class as a half-updated state, though. I'm curious, as I've never seen such a situation in about 7 years of using Gitlab.
If downtime is sufficiently critical to you, you might want to look into the so-called zero-downtime upgrade procedure (if you only have one GitLab server, it still experiences downtime). You'll still need to check that background migrations are done before continuing, and that probably becomes a little more important if you go this route. Since you're already thinking about minimizing the impact on your users, it might not be worth the trouble now, but perhaps going forward?
Like @iwalker, I don't think the state your server will be in when it only has some of the upgrades applied should be called “half-updated”; it works perfectly.
Hey there
Perhaps this page helps you to find a step-by-step roadmap of the incremental update process:
I would also agree with the shared opinions here that there should not be a problem with “interim” upgrades.
Perhaps my problem is that I wait for background migrations to finish. But for me it really takes a lot of time, and I have to confess that I give up at some point.
You can keep GitLab running normally while it's performing background migrations (it's not downtime). On large GitLab instances (with lots of data) it can take a while. I don't think there is any problem with it.
Once they are finished, you can schedule the next update.
In general, my advice would be to update/maintain GitLab regularly (e.g. once a month if time allows). This way you make sure you're not too far behind the latest release and you don't need much time to upgrade.
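If it helps to build that routine, a quick way to see what you're currently running before planning the next round (a sketch for an Omnibus install):

```bash
# Installed GitLab version and environment details
sudo gitlab-rake gitlab:env:info

# Or just the package version
sudo head -1 /opt/gitlab/version-manifest.txt
```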
Totally agree with Paula on this one. My instance goes through pretty much every point release, as I update every time I see a new release. That has a few advantages: first, a smaller number of migrations to apply in one go; second, less chance of something going wrong, such as missing a step on the upgrade path or upgrading before the migrations have finished when there are loads of them to complete. I had one server at a client that took 8+ hours to finish migrations, somewhere around the 15.4.x release, when following the upgrade path to bring it up to date.
I think I need to create a schedule for myself for these updates, so they won't take so much of my time.
Following the upgrade path will cut down the number of upgrades, rather than doing every single point release. However, you still cannot speed up the background migrations - they have to finish, and how long they take depends on a lot of factors: how many repos are on the Gitlab server, how much CPU/RAM is available, etc.
That said, if you keep your installation up to date regularly, like I do whenever a new release is made, you will effectively only have a few minutes of downtime with Gitlab while the services restart. And the minimal set of background migrations will finish relatively quickly compared to upgrading an old instance that requires following the whole upgrade path.
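For completeness, this is the quick health check I tend to run once the services come back up (a sketch; both commands are standard on Omnibus installs):

```bash
# Confirm all services are back up after the restart
sudo gitlab-ctl status

# Run GitLab's built-in self-check
sudo gitlab-rake gitlab:check SANITIZE=true
```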
Yeah, I'll try to do it your way and see how it goes for me. TBH, updates are the part I hate the most; they're really time-consuming for me. The good thing is that I found you guys and you shared your views, so I'll give your approach a try.