GitLab zero-downtime upgrade inquiry


The GitLab documentation at Upgrading GitLab | GitLab reads as follows:

Upgrading without downtime

Starting with GitLab 9.1.0 it’s possible to upgrade to a newer major, minor, or patch version of GitLab without having to take your GitLab instance offline. However, for this to work there are the following requirements: You can only upgrade 1 minor release at a time. So from 9.1 to 9.2, not to 9.3.

Is it still necessary to carry out all minor within-version upgrades on versions 12 and 13, please?

For example, to upgrade from 12.1 to 12.10, is it still the case that I need to upgrade from 12.1 to 12.2 to 12.3 to 12.4 to 12.5 to 12.6 to 12.7 to 12.8 to 12.9 to 12.10, please?
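For illustration, the one-minor-at-a-time rule implies nine hops here. A quick shell sketch (versions hard-coded purely for illustration) enumerates them:

```shell
# List the nine one-minor-at-a-time hops from 12.1 up to 12.10
for minor in $(seq 2 10); do
  echo "upgrade to 12.${minor}"
done
```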


Yes, if you want zero downtime, or near-zero downtime, then you must do every single minor release from 12.1 to 12.10, so 9 upgrades. Also make sure between each upgrade that no background migrations are running (it’s in the GitLab upgrade docs). If you continue an upgrade while background migrations are still occurring, you can expect problems or even a broken installation. The background migrations command must return zero before you can continue with the next upgrade.
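For anyone looking for the check referred to above, the GitLab upgrade docs give a command along these lines for Omnibus installs (run on the GitLab host; proceed only once it prints 0):

```shell
# Count remaining background migrations (must be 0 before the next hop)
sudo gitlab-rails runner -e production 'puts Gitlab::BackgroundMigration.remaining'
```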

I’ve been following the “without downtime” procedure for years. I’ve never gotten far enough behind that I needed to upgrade multiple minor versions at a time, but I have skipped patch releases (i.e. going from x.y.0 to x.y.2).

Note that it’s not always “zero downtime”. The procedure only partially achieves that. A lot of work is done in the background, but you still need to (I believe it’s the second-to-last step in the procedure) restart puma, which causes the web interface to stop responding (it just throws 502s) until everything is ready. I haven’t tried comparing the two methods, but I guess it is better than taking GitLab offline for the whole upgrade - it’s just not *zero* downtime (I’ve actually had colleagues complaining), so nowadays I mostly do the updates outside of business hours.
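For an Omnibus install, the restart step I mean looks roughly like this (assuming the service is named `puma`; older versions used `unicorn`):

```shell
# This is the step that causes the brief 502 window
sudo gitlab-ctl restart puma
sudo gitlab-ctl status puma   # wait until puma reports it is running again
```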


Given your advice and the documentation, I think we’ll simply do an offline, direct upgrade as per the CE update doc for 12.1.0 instead (I’d post the link but this forum seems to have an aversion to my doing so on this occasion). One outage will be easier to coordinate than 9 individual, much shorter outages. Fortunately, arranging an outage is an option for us.

Thank you both for this further help which we appreciate.


You’ll need to follow the upgrade path as here: Upgrading GitLab | GitLab

12.0.12 → 12.1.17 → 12.10.14 → 13.0.14 → 13.1.11 → latest 13.Y.Z

So if you are on 12.1.0, your next stop is 12.1.17 before jumping to 12.10.14 and upwards, and you will still need to make sure background migrations are finished before continuing the upgrade process:

The exact command is on the link above. I also generally apply updates as soon as they are available, so that we don’t end up too far behind, as that causes a lot more work later.
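On Debian/Ubuntu Omnibus installs, each hop on the path can be made explicit by pinning the package version (a sketch; the `-ce.0` suffix and the `gitlab-ce` vs `gitlab-ee` package name depend on your platform and edition):

```shell
# Step to the next version on the path, then check background migrations
sudo apt-get install gitlab-ce=12.1.17-ce.0
# ...confirm background migrations have reached 0, then continue:
sudo apt-get install gitlab-ce=12.10.14-ce.0
```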


Thanks, we are indeed following the upgrade path you mentioned and the official upgrade docs. I didn’t build the original servers, and there were some initial doubts about patching them automatically, so patching fell somewhat by the wayside. I fully intend to implement regular, automated patching once we get them upgraded to the latest version 13 release. We are testing all of this in a pre-prod environment, which I rebuilt first, and we are gathering notes and advice as we progress. It’s all going fine so far…

