pg_upgrade failure when upgrading CE 16.7.7 to 16.11.0

Problem to solve

After running an older version for a while and recently upgrading to 16.7.7 without any apparent issues, today I tried to upgrade to the latest gitlab-ce 16.11.0 (Omnibus package on Ubuntu), but the PostgreSQL upgrade step failed.

/var/log/gitlab/postgresql/current contains the following errors:

2024-04-19_05:14:29.59665 LOG:  database system was not properly shut down; automatic recovery in progress
2024-04-19_05:14:29.60398 LOG:  redo starts at 253/8BFAFE88
2024-04-19_05:14:29.62591 LOG:  invalid record length at 253/8BFCDB68: wanted 24, got 0
2024-04-19_05:14:29.62592 LOG:  redo done at 253/8BFCDAF0
2024-04-19_05:14:29.69751 LOG:  database system is ready to accept connections
2024-04-19_05:19:51.53218 ERROR:  relation "namespace_descendants" does not exist at character 491
2024-04-19_05:19:51.53220 STATEMENT:  SELECT a.attname, format_type(a.atttypid, a.atttypmod),
2024-04-19_05:19:51.53220              pg_get_expr(d.adbin, d.adrelid), a.attnotnull, a.atttypid, a.atttypmod,
2024-04-19_05:19:51.53220              c.collname, col_description(a.attrelid, a.attnum) AS comment,
2024-04-19_05:19:51.53221              attgenerated as attgenerated
2024-04-19_05:19:51.53221         FROM pg_attribute a
2024-04-19_05:19:51.53221         LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum
2024-04-19_05:19:51.53221         LEFT JOIN pg_type t ON a.atttypid = t.oid
2024-04-19_05:19:51.53222         LEFT JOIN pg_collation c ON a.attcollation = c.oid AND a.attcollation <> t.typcollation
2024-04-19_05:19:51.53222        WHERE a.attrelid = '"namespace_descendants"'::regclass
2024-04-19_05:19:51.53222          AND a.attnum > 0 AND NOT a.attisdropped
2024-04-19_05:19:51.53223        ORDER BY a.attnum
2024-04-19_05:19:51.53223
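
If I retry, I suppose I can check whether that table actually exists before the upgrade with something like this (assuming the bundled gitlab-psql wrapper passes the query through to psql as usual; to_regclass should return NULL if the relation is missing):

sudo gitlab-psql -c "SELECT to_regclass('public.namespace_descendants');"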

I’ve rolled back the server to the pre-upgrade snapshot, so I won’t be able to get any more logs from it. Is there something I can do to avoid this in a future attempt, or is this a bug?

Some earlier history:

2024-04-19_05:12:35.17262 received TERM from runit, sending INT instead to force quit connections
2024-04-19_05:12:35.20022 LOG:  received fast shutdown request
2024-04-19_05:12:35.20796 LOG:  aborting any active transactions
2024-04-19_05:12:35.21732 FATAL:  terminating connection due to administrator command
...
2024-04-19_05:12:35.22474 LOG:  background worker "logical replication launcher" (PID 1049356) exited with exit code 1
2024-04-19_05:12:35.22543 LOG:  shutting down
2024-04-19_05:12:35.26831 PANIC:  could not open file "/var/opt/gitlab/postgresql/data/global/pg_control": Operation not permitted
2024-04-19_05:12:38.30992 FATAL:  the database system is shutting down
...
2024-04-19_05:12:38.74882 LOG:  checkpointer process (PID 1049351) was terminated by signal 6: Aborted
2024-04-19_05:12:38.74885 LOG:  terminating any other active server processes
2024-04-19_05:12:38.79812 LOG:  abnormal database system shutdown
2024-04-19_05:12:38.85032 LOG:  database system is shut down
2024-04-19_05:13:16.98491 LOG:  starting PostgreSQL 13.14 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0, 64-bit
2024-04-19_05:13:16.99131 LOG:  listening on Unix socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"
2024-04-19_05:13:17.05162 LOG:  database system was interrupted; last known up at 2024-04-19 04:31:28 GMT
2024-04-19_05:13:17.80268 FATAL:  the database system is starting up
...

I suspect that the logs above are not the real problem, but something that happened after the PG upgrade to 14 failed and was rolled back. The error message at the time just said to “check logs” but didn’t say where, and the above is all I could find. (And I didn’t manage to capture the original error, since I tried gitlab-ctl tail and it flooded my scrollback.)
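
For a future attempt (note to self), tailing just the postgresql service and capturing it to a file should avoid the scrollback flood, e.g.:

# the output path is just an example
sudo gitlab-ctl tail postgresql | tee /tmp/postgresql-tail.log
# or read the same log afterwards without the live tail:
less /var/log/gitlab/postgresql/current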

OK, so after rolling back to 16.7.7 I tried running sudo gitlab-ctl pg-upgrade on its own (without a package upgrade), and this was the result (which is basically the same as during the original upgrade):

Checking for an omnibus managed postgresql: OK
Checking if postgresql['version'] is set: OK
Checking if we already upgraded: NOT OK
Checking for a newer version of PostgreSQL to install
Upgrading PostgreSQL to 14.10
Checking if disk for directory /var/opt/gitlab/postgresql/data has enough free space for PostgreSQL upgrade: OK
Checking if PostgreSQL bin files are symlinked to the expected location: OK
Waiting 30 seconds to ensure tasks complete before PostgreSQL upgrade.
See https://docs.gitlab.com/omnibus/settings/database.html#upgrade-packaged-postgresql-server for details
If you do not want to upgrade the PostgreSQL server at this time, enter Ctrl-C and see the documentation for details

Please hit Ctrl-C now if you want to cancel the operation.
Toggling deploy page:cp /opt/gitlab/embedded/service/gitlab-rails/public/deploy.html /opt/gitlab/embedded/service/gitlab-rails/public/index.html
Toggling deploy page: OK
Toggling services:ok: down: alertmanager: 1s, normally up
ok: down: gitaly: 1s, normally up
ok: down: gitlab-exporter: 1s, normally up
ok: down: gitlab-kas: 0s, normally up
ok: down: logrotate: 1s, normally up
ok: down: node-exporter: 0s, normally up
ok: down: postgres-exporter: 0s, normally up
ok: down: prometheus: 0s, normally up
ok: down: redis-exporter: 0s, normally up
ok: down: sidekiq: 0s, normally up
Toggling services: OK
Running stop on postgresql:timeout: run: postgresql: (pid 1049349) 608986s, want down
Running stop on postgresql: OK
Symlink correct version of binaries: OK
Creating temporary data directory: OK
Initializing the new database: OK
Upgrading the data:Error upgrading the data to version 14.10
STDOUT: Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok

The source cluster was not shut down cleanly.
Failure, exiting
STDERR:
Upgrading the data: NOT OK
== Fatal error ==
Error running pg_upgrade, please check logs
== Reverting ==
ok: down: postgresql: 23s, normally up
Symlink correct version of binaries: OK
ok: run: postgresql: (pid 2837483) 1s
== Reverted ==
== Reverted to 13.13. Please check output for what went wrong ==
Toggling deploy page:rm -f /opt/gitlab/embedded/service/gitlab-rails/public/index.html
Toggling deploy page: OK
Toggling services:ok: run: alertmanager: (pid 2837834) 0s
ok: run: gitaly: (pid 2837854) 1s
ok: run: gitlab-exporter: (pid 2837892) 0s
ok: run: gitlab-kas: (pid 2837907) 1s
ok: run: logrotate: (pid 2837936) 0s
ok: run: node-exporter: (pid 2837964) 1s
ok: run: postgres-exporter: (pid 2837971) 0s
ok: run: prometheus: (pid 2837982) 1s
ok: run: redis-exporter: (pid 2837995) 0s
ok: run: sidekiq: (pid 2838021) 1s
Toggling services: OK

The logs in postgresql/current were basically the same as above, but without the “relation does not exist” error; only the entries prior to that.

Also, maybe related:

$ ls -l /var/opt/gitlab/postgresql/data/global/pg_control
-rw------- 1 gitlab-psql gitlab-psql 8192 Apr 19 18:29 /var/opt/gitlab/postgresql/data/global/pg_control

This attempt failed into a nicer state (the instance was still operational afterwards, whereas in the original upgrade attempt it refused to launch postgres and stayed down), but I rolled it back anyway.
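
If this happens again I’ll also check whether something beyond plain ownership is blocking access to that file, e.g. an immutable attribute (just a guess on my part, not something I’ve confirmed):

sudo lsattr /var/opt/gitlab/postgresql/data/global/pg_control
sudo stat /var/opt/gitlab/postgresql/data/global/pg_control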

Well, I’m still not really sure what happened above, but I tried rebooting the server and then running gitlab-ctl stop before gitlab-ctl pg-upgrade, and this time it seemed to work OK. (I also needed to run gitlab-ctl start explicitly afterwards, because the upgrade restarted some services but not all of them.)
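
For reference, the sequence that worked for me was roughly:

sudo reboot
# after the machine is back up:
sudo gitlab-ctl stop
sudo gitlab-ctl pg-upgrade
sudo gitlab-ctl start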

I haven’t retried the upgrade to 16.11 yet, but that will happen soon.

Sadly, I ran into another database-related issue on the attempt to upgrade to 16.11:

Preparing to unpack .../gitlab-ce_16.11.2-ce.0_amd64.deb ...
gitlab preinstall: Automatically backing up only the GitLab SQL database (excluding everything else!)
2024-05-10 18:00:06 +1200 -- Dumping database ...
Dumping PostgreSQL database gitlabhq_production ... [DONE]
2024-05-10 18:00:34 +1200 -- Dumping database ... done
2024-05-10 18:00:34 +1200 -- Dumping repositories ... [SKIPPED]
2024-05-10 18:00:34 +1200 -- Deleting tar staging files ...
2024-05-10 18:00:34 +1200 -- Deleting backup and restore PID file ... done
rake aborted!
Errno::EPERM: Operation not permitted @ rb_sysopen - /opt/gitlab/embedded/service/gitlab-rails/log/backup_json.log
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/json_logger.rb:33:in `new'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/json_logger.rb:33:in `build'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/json_logger.rb:28:in `info'
/opt/gitlab/embedded/service/gitlab-rails/lib/backup/manager.rb:631:in `puts_time'
/opt/gitlab/embedded/service/gitlab-rails/lib/backup/manager.rb:360:in `cleanup'
/opt/gitlab/embedded/service/gitlab-rails/lib/backup/manager.rb:239:in `ensure in run_all_create_tasks'
/opt/gitlab/embedded/service/gitlab-rails/lib/backup/manager.rb:240:in `run_all_create_tasks'
/opt/gitlab/embedded/service/gitlab-rails/lib/backup/manager.rb:47:in `create'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/backup.rake:13:in `block in create_backup'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/backup.rake:62:in `lock_backup'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/backup.rake:10:in `create_backup'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/backup.rake:101:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:25:in `load'
/opt/gitlab/embedded/bin/bundle:25:in `<main>'

I’m running the apt install with sudo of course, and the log directory mentioned in the error is owned by git:root (with the log file itself owned by git:git), which seems reasonable to me.
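
For reference, the ownership I’m describing comes from checking the paths mentioned in the error:

ls -ld /opt/gitlab/embedded/service/gitlab-rails/log
ls -l /opt/gitlab/embedded/service/gitlab-rails/log/backup_json.log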

What happens if you run gitlab-ctl reconfigure before doing this?

I feel like that has checks in it (as I remember) to make sure files are owned properly.
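
So before retrying the upgrade, maybe try:

sudo gitlab-ctl reconfigure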

It does print some stuff about changing ownership and permissions, but not on any related file. The upgrade still fails in exactly the same way after this.

In the end I did manage to get it to work by deleting the file first:

sudo rm /opt/gitlab/embedded/service/gitlab-rails/log/backup_json.log

Oddly, after the upgrade succeeded I had a look in the log directory, and the file is in the same state as when it was failing, including being empty:

-rw-r--r-- 1 git git         0 Jun 14 19:30 backup_json.log

I hope that doesn’t mean that the next upgrade is going to fail as well, because that’s going to be a pain even if I do know how to recover from it now.
