As an update to my previous post: the package scripts rely on at least the postgresql service running so that a backup can be taken. I have simulated this on my test server and performed an upgrade. Below you will see that I checked the installed version (13.3.5) and then checked what was available (13.3.6). I stopped the services, as you did. However, I then started the postgresql and postgres-exporter services, re-checked the service status so you can see that only the postgres ones are active, and then ran the upgrade. In this instance it did take a backup of the postgres database. One thing to note, however, is that at the end of the upgrade not all the services were started, so I had to issue a gitlab-ctl restart
to get the remaining services running. The output is below (truncated, as the full upgrade output is long):
root@gitlab:~# dpkg -l | grep gitlab
ii gitlab-ce 13.3.5-ce.0 amd64 GitLab Community Edition (including NGINX, Postgres, Redis)
root@gitlab:~# aptitude show gitlab-ce
Package: gitlab-ce
Version: 13.3.6-ce.0
New: yes
State: installed (13.3.5-ce.0), upgrade available (13.3.6-ce.0)
Automatically installed: no
Priority: extra
Section: misc
Maintainer: GitLab, Inc. <support@gitlab.com>
Architecture: amd64
Uncompressed Size: 2,068 M
Depends: openssh-server
Conflicts: gitlab-ee, gitlab
Replaces: gitlab-ee, gitlab
Description: GitLab Community Edition (including NGINX, Postgres, Redis)
Homepage: https://about.gitlab.com/
root@gitlab:~# gitlab-ctl stop
ok: down: alertmanager: 0s, normally up
ok: down: gitaly: 0s, normally up
ok: down: gitlab-exporter: 1s, normally up
ok: down: gitlab-workhorse: 0s, normally up
ok: down: grafana: 0s, normally up
ok: down: logrotate: 0s, normally up
ok: down: nginx: 0s, normally up
ok: down: node-exporter: 1s, normally up
ok: down: postgres-exporter: 0s, normally up
ok: down: postgresql: 0s, normally up
ok: down: prometheus: 0s, normally up
ok: down: puma: 0s, normally up
ok: down: redis: 0s, normally up
ok: down: redis-exporter: 0s, normally up
ok: down: sidekiq: 1s, normally up
root@gitlab:~# gitlab-ctl start postgresql
ok: run: postgresql: (pid 1583) 0s
root@gitlab:~# gitlab-ctl start postgres-exporter
ok: run: postgres-exporter: (pid 1595) 0s
root@gitlab:~# gitlab-ctl status
down: alertmanager: 30s, normally up; run: log: (pid 708) 145s
down: gitaly: 29s, normally up; run: log: (pid 706) 145s
down: gitlab-exporter: 29s, normally up; run: log: (pid 713) 145s
down: gitlab-workhorse: 28s, normally up; run: log: (pid 704) 145s
down: grafana: 28s, normally up; run: log: (pid 701) 145s
down: logrotate: 27s, normally up; run: log: (pid 703) 145s
down: nginx: 27s, normally up; run: log: (pid 705) 145s
down: node-exporter: 27s, normally up; run: log: (pid 707) 145s
run: postgres-exporter: (pid 1595) 4s; run: log: (pid 709) 145s
run: postgresql: (pid 1583) 11s; run: log: (pid 714) 145s
down: prometheus: 25s, normally up; run: log: (pid 710) 145s
down: puma: 25s, normally up; run: log: (pid 711) 145s
down: redis: 24s, normally up; run: log: (pid 740) 144s
down: redis-exporter: 24s, normally up; run: log: (pid 712) 145s
down: sidekiq: 20s, normally up; run: log: (pid 702) 145s
root@gitlab:~# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
gitlab-ce
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 782 MB of archives.
After this operation, 6,144 B of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 https://packages.gitlab.com/gitlab/gitlab-ce/debian buster/main amd64 gitlab-ce amd64 13.3.6-ce.0 [782 MB]
Fetched 782 MB in 1min 9s (11.3 MB/s)
(Reading database ... 103121 files and directories currently installed.)
Preparing to unpack .../gitlab-ce_13.3.6-ce.0_amd64.deb ...
gitlab preinstall: Automatically backing up only the GitLab SQL database (excluding everything else!)
2020-09-17 17:44:10 +0200 -- Dumping database ...
Dumping PostgreSQL database gitlabhq_production ... [DONE]
2020-09-17 17:44:12 +0200 -- done
2020-09-17 17:44:12 +0200 -- Dumping repositories ...
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping uploads ...
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping builds ...
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping artifacts ...
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping pages ...
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping lfs objects ...
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping container registry images ...
2020-09-17 17:44:12 +0200 -- [DISABLED]
Creating backup archive: 1600357452_2020_09_17_13.3.5_gitlab_backup.tar ... done
Uploading backup archive to remote storage ... skipped
Deleting tmp directories ... done
done
Deleting old backups ... skipping
Warning: Your gitlab.rb and gitlab-secrets.json files contain sensitive data
and are not included in this backup. You will need these files to restore a backup.
Please back them up manually.
Backup task is done.
Running handlers:
Running handlers complete
Chef Infra Client finished, 13/772 resources updated in 28 seconds
gitlab Reconfigured!
Restarting previously running GitLab services
ok: run: postgres-exporter: (pid 10655) 0s
ok: run: postgresql: (pid 1583) 173s
_______ __ __ __
/ ____(_) /_/ / ____ _/ /_
/ / __/ / __/ / / __ `/ __ \
/ /_/ / / /_/ /___/ /_/ / /_/ /
\____/_/\__/_____/\__,_/_.___/
Upgrade complete! If your GitLab server is misbehaving try running
sudo gitlab-ctl restart
before anything else.
If you need to roll back to the previous version you can use the database
backup made during the upgrade (scroll up for the filename).
root@gitlab:~# gitlab-ctl status
down: alertmanager: 228s, normally up; run: log: (pid 708) 343s
run: gitaly: (pid 10681) 35s; run: log: (pid 706) 343s
down: gitlab-exporter: 227s, normally up; run: log: (pid 713) 343s
down: gitlab-workhorse: 226s, normally up; run: log: (pid 704) 343s
down: grafana: 226s, normally up; run: log: (pid 701) 343s
down: logrotate: 225s, normally up; run: log: (pid 703) 343s
down: nginx: 225s, normally up; run: log: (pid 705) 343s
down: node-exporter: 225s, normally up; run: log: (pid 707) 343s
run: postgres-exporter: (pid 10655) 36s; run: log: (pid 709) 343s
run: postgresql: (pid 1583) 209s; run: log: (pid 714) 343s
down: prometheus: 223s, normally up; run: log: (pid 710) 343s
down: puma: 223s, normally up; run: log: (pid 711) 343s
run: redis: (pid 10178) 66s; run: log: (pid 740) 342s
down: redis-exporter: 222s, normally up; run: log: (pid 712) 343s
down: sidekiq: 218s, normally up; run: log: (pid 702) 343s
The key bit there is:
Restarting previously running GitLab services
since it only restarted the services that were running prior to the upgrade. Hence, after doing that type of upgrade, I had to run gitlab-ctl restart myself to get the rest of them running.
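That behaviour can be pictured with a small sketch. This is hypothetical, not the actual omnibus package code: in the real scripts the list of previously running services would be derived from the pre-upgrade status, whereas here it is hard-coded to match my simulation above.

```shell
# Hypothetical sketch of "Restarting previously running GitLab services":
# only the services that were up before the upgrade get started again.
# In my simulation, only these two were running pre-upgrade:
previously_running="postgresql postgres-exporter"

for svc in $previously_running; do
  # the real scripts would invoke: gitlab-ctl start "$svc"
  echo "restarting previously running service: $svc"
done
```

Everything not in that list (nginx, puma, sidekiq, and so on) stays down, which is exactly what the post-upgrade status output shows.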
It is of course possible to code the package scripts to take care of such a scenario in the event someone does stop the services. For example:
root@gitlab:~# gitlab-ctl status postgresql
run: postgresql: (pid 11006) 386s; run: log: (pid 714) 794s
root@gitlab:~# gitlab-ctl status postgresql
down: postgresql: 10s, normally up; run: log: (pid 714) 828s
So the scripts could check whether postgres is running (run) and, if not (down), start it before commencing the upgrade, much like what I did above when simulating it.
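A minimal sketch of what such a check could look like (hypothetical, not actual GitLab code; it keys off the fact that gitlab-ctl status prints a line starting with "run:" or "down:", as shown above):

```shell
# Hypothetical pre-upgrade check. The status line is hard-coded here to the
# "down" example above; a real script would instead capture it with:
#   status_line="$(gitlab-ctl status postgresql)"
status_line="down: postgresql: 10s, normally up; run: log: (pid 714) 828s"

case "$status_line" in
  run:*)
    echo "postgresql already running; safe to take the backup" ;;
  down:*)
    echo "postgresql is down; would run: gitlab-ctl start postgresql" ;;
esac
```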
However, as per the GitLab docs, it is best just to leave GitLab running during the upgrade and let the package scripts stop what they need to.