Before upgrading from GitLab 15.11.4 to 16.x, I tried to upgrade the bundled (Omnibus-managed) PostgreSQL first, but I ran into the following issue:
sudo gitlab-ctl stop
ok: down: alertmanager: 0s, normally up
ok: down: gitaly: 0s, normally up
ok: down: gitlab-exporter: 0s, normally up
ok: down: gitlab-kas: 0s, normally up
ok: down: gitlab-workhorse: 1s, normally up
ok: down: grafana: 0s, normally up
ok: down: logrotate: 1s, normally up
ok: down: nginx: 0s, normally up
ok: down: node-exporter: 1s, normally up
ok: down: postgres-exporter: 0s, normally up
warning: postgresql: unable to open supervise/status: permission denied
ok: down: prometheus: 0s, normally up
ok: down: puma: 0s, normally up
ok: down: redis: 0s, normally up
ok: down: redis-exporter: 0s, normally up
ok: down: sidekiq: 1s, normally up
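The warning about supervise/status makes me suspect something is off with the runit control files for the postgresql service specifically, since every other service stopped cleanly. A rough diagnostic I plan to run, assuming the default Omnibus runit layout under /opt/gitlab/sv (that path is my assumption, it does not appear in the output above):

# Ownership/permissions of the runit supervise files for postgresql
sudo ls -ld /opt/gitlab/sv/postgresql/supervise
sudo ls -l /opt/gitlab/sv/postgresql/supervise/status

# Compare with a service that stopped without warnings, e.g. redis
sudo ls -ld /opt/gitlab/sv/redis/supervise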
sudo gitlab-ctl pg-upgrade
Checking for an omnibus managed postgresql: OK
Checking if postgresql['version'] is set: OK
Checking if we already upgraded: NOT OK
Checking for a newer version of PostgreSQL to install
Upgrading PostgreSQL to 13.8
Checking if disk for directory /var/opt/gitlab/postgresql/data has enough free space for PostgreSQL upgrade: OK
Checking if PostgreSQL bin files are symlinked to the expected location: OK
Waiting 30 seconds to ensure tasks complete before PostgreSQL upgrade.
See https://docs.gitlab.com/omnibus/settings/database.html#upgrade-packaged-postgresql-server for details
If you do not want to upgrade the PostgreSQL server at this time, enter Ctrl-C and see the documentation for details
Please hit Ctrl-C now if you want to cancel the operation.
Toggling deploy page:cp /opt/gitlab/embedded/service/gitlab-rails/public/deploy.html /opt/gitlab/embedded/service/gitlab-rails/public/index.html
Toggling deploy page: OK
Toggling services:ok: down: alertmanager: 143s, normally up
ok: down: gitaly: 142s, normally up
ok: down: gitlab-exporter: 141s, normally up
ok: down: gitlab-kas: 131s, normally up
ok: down: grafana: 130s, normally up
ok: down: logrotate: 130s, normally up
ok: down: node-exporter: 129s, normally up
ok: down: postgres-exporter: 128s, normally up
ok: down: prometheus: 125s, normally up
ok: down: redis-exporter: 122s, normally up
ok: down: sidekiq: 120s, normally up
Toggling services: OK
There was an error fetching locale and encoding information from the database
Please ensure the database is running and functional before running pg-upgrade
STDOUT:
STDERR: psql: error: FATAL: the database system is shutting down
== Fatal error ==
Please check error logs
== Reverting ==
ok: down: postgresql: 0s, normally up
Symlink correct version of binaries: OK
ok: run: postgresql: (pid 1009) 0s
== Reverted ==
== Reverted to 12.12. Please check output for what went wrong ==
Toggling deploy page:rm -f /opt/gitlab/embedded/service/gitlab-rails/public/index.html
Toggling deploy page: OK
Toggling services:ok: run: alertmanager: (pid 1016) 1s
ok: run: gitaly: (pid 1034) 0s
ok: run: gitlab-exporter: (pid 1053) 1s
ok: run: gitlab-kas: (pid 1061) 0s
ok: run: grafana: (pid 1073) 1s
ok: run: logrotate: (pid 1085) 0s
ok: run: node-exporter: (pid 1094) 0s
ok: run: postgres-exporter: (pid 1100) 1s
ok: run: prometheus: (pid 1147) 0s
ok: run: redis-exporter: (pid 1194) 1s
ok: run: sidekiq: (pid 1208) 0s
Toggling services: OK
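As far as I can tell, pg-upgrade failed while fetching the locale and encoding information because the database was already shutting down at that point. After the revert, with PostgreSQL 12.12 running again, I should be able to query the same information manually. A sketch of what I intend to check (gitlab-psql is the Omnibus wrapper; the SHOW statements are standard PostgreSQL):

# Confirm PostgreSQL is back up after the revert
sudo gitlab-ctl status postgresql

# Fetch the locale and encoding information that pg-upgrade needs
sudo gitlab-psql -c 'SHOW lc_collate;'
sudo gitlab-psql -c 'SHOW lc_ctype;'
sudo gitlab-psql -c 'SHOW server_encoding;'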
cat /var/log/gitlab/postgresql/current
2023-06-09_12:39:30.02934 ERROR: could not open file "base/16401/18299": Operation not permitted
2023-06-09_12:39:30.02941 CONTEXT: writing block 33 of relation base/16401/18299
2023-06-09_12:53:44.81974 ERROR: could not open file "base/16401/17975": Operation not permitted
2023-06-09_12:53:44.81977 STATEMENT: /*application:web,correlation_id:01H2G3GJZHZEHJNWBM8FBJFTK0,endpoint_id:Projects::MergeRequests::CreationsController#create,db_config_name:main*/ UPDATE "internal_ids" SET "last_value" = ("internal_ids"."last_value" + 1) WHERE "internal_ids"."project_id" = 214 AND "internal_ids"."usage" = 1 RETURNING "last_value"
2023-06-09_14:45:47.55956 received TERM from runit, sending INT instead to force quit connections
2023-06-09_14:45:47.56016 LOG: received fast shutdown request
2023-06-09_14:45:47.57336 LOG: aborting any active transactions
2023-06-09_14:45:47.57568 FATAL: terminating connection due to administrator command
2023-06-09_14:45:47.57568 FATAL: terminating connection due to administrator command
2023-06-09_14:45:47.57569 FATAL: terminating connection due to administrator command
...
2023-06-09_14:45:47.58470 FATAL: terminating connection due to administrator command
2023-06-09_14:45:47.59591 FATAL: terminating connection due to administrator command
2023-06-09_14:45:47.59591 FATAL: terminating connection due to administrator command
2023-06-09_14:45:47.59792 LOG: background worker "logical replication launcher" (PID 9092) exited with exit code 1
2023-06-09_14:45:47.67150 LOG: shutting down
2023-06-09_14:45:49.29575 FATAL: the database system is shutting down
2023-06-09_14:45:49.71531 PANIC: could not open file "/var/opt/gitlab/postgresql/data/global/pg_control": Operation not permitted
2023-06-09_14:45:50.15046 FATAL: the database system is shutting down
2023-06-09_14:45:50.15355 FATAL: the database system is shutting down
2023-06-09_14:45:50.15483 FATAL: the database system is shutting down
...
2023-06-09_14:45:55.29810 FATAL: the database system is shutting down
2023-06-09_14:45:55.41787 FATAL: the database system is shutting down
2023-06-09_14:47:55.68716 FATAL: the database system is shutting down
2023-06-09_14:47:55.70074 received TERM from runit, sending INT instead to force quit connections
2023-06-09_14:48:00.82731 LOG: checkpointer process (PID 9087) was terminated by signal 6: Aborted
2023-06-09_14:48:00.82732 LOG: terminating any other active server processes
2023-06-09_14:48:00.83048 LOG: abnormal database system shutdown
2023-06-09_14:48:01.20739 LOG: database system is shut down
2023-06-09_14:48:02.19339 LOG: starting PostgreSQL 12.12 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
2023-06-09_14:48:02.19697 LOG: listening on Unix socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"
2023-06-09_14:48:02.31640 LOG: database system was interrupted; last known up at 2023-06-09 14:44:40 GMT
2023-06-09_14:48:16.73086 LOG: database system was not properly shut down; automatic recovery in progress
2023-06-09_14:48:16.75679 LOG: redo starts at 13/D4826A08
2023-06-09_14:48:16.75711 LOG: invalid record length at 13/D482D010: wanted 24, got 0
2023-06-09_14:48:16.75711 LOG: redo done at 13/D482CFE8
2023-06-09_14:48:16.86784 LOG: database system is ready to accept connections
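The repeated "Operation not permitted" errors (on base/16401/... files and on global/pg_control) look to me like a filesystem or permission problem rather than something PostgreSQL itself is doing wrong, so before retrying I want to inspect the data directory. A diagnostic sketch, using the data path shown in the log:

# Ownership and permissions of the data directory and pg_control
sudo ls -ld /var/opt/gitlab/postgresql/data
sudo stat /var/opt/gitlab/postgresql/data/global/pg_control

# Filesystem type and mount options backing the data directory
# (a network mount or restrictive mount options could explain EPERM)
findmnt -T /var/opt/gitlab/postgresql/data
df -hT /var/opt/gitlab/postgresql/data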
Has anyone encountered this before? I'd appreciate any pointers on why PostgreSQL would get "Operation not permitted" when accessing its own data files during pg-upgrade.