pg_dump: could not connect to /var/opt/gitlab/postgresql/.s.PGSQL.5432

Hello forum,

today I tried to upgrade Omnibus GitLab:

  • gitlab-ce-13.3.2-ce.0.el7.x86_64 to
  • gitlab-ce-13.3.6-ce.0.el7.x86_64

I typed:

  • gitlab-ctl stop
  • yum upgrade

Then I got this error (full output below):

Dumping PostgreSQL database gitlabhq_production ... pg_dump: [archiver (db)] connection to database "gitlabhq_production" failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?

The update procedure:

does not say that my procedure (updating from a stopped instance) is wrong.
Should I write a bug report?
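For anyone hitting the same message: the error just means pg_dump could not find the PostgreSQL unix socket, because the service was stopped. A minimal sketch to check this before upgrading (the socket path is the default Omnibus location from the error message; `socket_ready` is a hypothetical helper, not part of GitLab):

```shell
#!/bin/sh
# Sketch: check whether the Omnibus PostgreSQL unix socket exists before
# upgrading. Adjust the path if your install uses another data directory.
socket_ready() {
  [ -S "$1" ]   # true only if the path exists and is a unix socket
}

if socket_ready /var/opt/gitlab/postgresql/.s.PGSQL.5432; then
  echo "postgresql socket present, pg_dump should be able to connect"
else
  echo "postgresql socket missing, start it with: gitlab-ctl start postgresql"
fi
```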

This is the complete text:

Transaction test succeeded
Running transaction
gitlab preinstall:
gitlab preinstall: This node does not appear to be running a database
gitlab preinstall: Skipping version check, if you think this is an error exit now
gitlab preinstall:
gitlab preinstall: Automatically backing up only the GitLab SQL database (excluding everything else!)
2020-09-17 12:41:31 +0200 -- Dumping database ...
Dumping PostgreSQL database gitlabhq_production ... pg_dump: [archiver (db)] connection to database "gitlabhq_production" failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
rake aborted!
Backup::Error: Backup failed
/opt/gitlab/embedded/service/gitlab-rails/lib/backup/database.rb:51:in `dump'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/backup.rake:127:in `block (4 levels) in <top (required)>'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/backup.rake:10:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Tasks: TOP => gitlab:backup:db:create
(See full trace by running task with --trace)
[FAILED]
gitlab preinstall:
gitlab preinstall: Database backup failed! If you want to skip this backup, run the following command and try again:
gitlab preinstall:
gitlab preinstall: sudo touch /etc/gitlab/skip-auto-backup
gitlab preinstall:
error: %pre(gitlab-ce-13.3.6-ce.0.el7.x86_64) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package gitlab-ce-13.3.6-ce.0.el7.x86_64

You don't need to stop GitLab when running the upgrade. The package scripts take care of it: they stop only the services that need to be stopped, and leave postgresql running so that the pre-upgrade backup can be taken.

From the documentation on the link you provided:

To update to a newer GitLab version, run:

  • For GitLab Community Edition:
# Debian/Ubuntu
sudo apt-get update
sudo apt-get install gitlab-ce

# Centos/RHEL
sudo yum install gitlab-ce

  • For GitLab Enterprise Edition:

# Debian/Ubuntu
sudo apt-get update
sudo apt-get install gitlab-ee

# Centos/RHEL
sudo yum install gitlab-ee

Nowhere is there a gitlab-ctl stop command prior to the upgrade. I was trying to find a link to something I read before that mentioned the upgrade process taking care of stopping/starting the services, but I haven't been able to find it. If I do, I will add it later.

Generally, though, the upgrade docs should use:

apt-get upgrade
yum update

which pull in the new version, rather than the install command. Although I expect it's done that way to update just GitLab, rather than potentially pulling in other operating-system updates along with apt-get upgrade or yum update.
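To make that concrete, here is a hedged sketch of per-package upgrade commands that touch only gitlab-ce (--only-upgrade for apt-get and a package argument to yum update are standard options; verify them against your distribution). `upgrade_cmd` is a hypothetical helper that just prints the command to run:

```shell
#!/bin/sh
# Sketch: upgrade only the GitLab package instead of the whole system.
# upgrade_cmd prints the command for the given package manager.
upgrade_cmd() {
  case "$1" in
    apt) echo "sudo apt-get install --only-upgrade gitlab-ce" ;;
    yum) echo "sudo yum update gitlab-ce" ;;
    *)   echo "unknown package manager: $1" >&2; return 1 ;;
  esac
}

upgrade_cmd apt
upgrade_cmd yum
```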

As an update to my previous post: the package scripts rely on at least the postgresql service running so that a backup can be taken. Below I have simulated this on my test server and done an upgrade.

You will see that I checked the installed version (13.3.5), then checked what was available (13.3.6). I stopped the services, like you did. However, I then started the postgresql and postgres-exporter services, re-checked the service status so you can see that just the postgres ones are active, and then did the upgrade. In this instance it took the backup of the postgres database.

One thing to note, however: at the end of the upgrade not all the services were started, so I had to issue a gitlab-ctl restart to get the remaining ones running. Below is the output (truncated, as the full upgrade output is long):

root@gitlab:~# dpkg -l | grep gitlab
ii  gitlab-ce                      13.3.5-ce.0                  amd64        GitLab Community Edition (including NGINX, Postgres, Redis)

root@gitlab:~# aptitude show gitlab-ce
Package: gitlab-ce                       
Version: 13.3.6-ce.0
New: yes
State: installed (13.3.5-ce.0), upgrade available (13.3.6-ce.0)
Automatically installed: no
Priority: extra
Section: misc
Maintainer: GitLab, Inc. <support@gitlab.com>
Architecture: amd64
Uncompressed Size: 2,068 M
Depends: openssh-server
Conflicts: gitlab-ee, gitlab
Replaces: gitlab-ee, gitlab
Description: GitLab Community Edition (including NGINX, Postgres, Redis)
 
Homepage: https://about.gitlab.com/


root@gitlab:~# gitlab-ctl stop
ok: down: alertmanager: 0s, normally up
ok: down: gitaly: 0s, normally up
ok: down: gitlab-exporter: 1s, normally up
ok: down: gitlab-workhorse: 0s, normally up
ok: down: grafana: 0s, normally up
ok: down: logrotate: 0s, normally up
ok: down: nginx: 0s, normally up
ok: down: node-exporter: 1s, normally up
ok: down: postgres-exporter: 0s, normally up
ok: down: postgresql: 0s, normally up
ok: down: prometheus: 0s, normally up
ok: down: puma: 0s, normally up
ok: down: redis: 0s, normally up
ok: down: redis-exporter: 0s, normally up
ok: down: sidekiq: 1s, normally up

root@gitlab:~# gitlab-ctl start postgresql
ok: run: postgresql: (pid 1583) 0s

root@gitlab:~# gitlab-ctl start postgres-exporter
ok: run: postgres-exporter: (pid 1595) 0s

root@gitlab:~# gitlab-ctl status
down: alertmanager: 30s, normally up; run: log: (pid 708) 145s
down: gitaly: 29s, normally up; run: log: (pid 706) 145s
down: gitlab-exporter: 29s, normally up; run: log: (pid 713) 145s
down: gitlab-workhorse: 28s, normally up; run: log: (pid 704) 145s
down: grafana: 28s, normally up; run: log: (pid 701) 145s
down: logrotate: 27s, normally up; run: log: (pid 703) 145s
down: nginx: 27s, normally up; run: log: (pid 705) 145s
down: node-exporter: 27s, normally up; run: log: (pid 707) 145s
run: postgres-exporter: (pid 1595) 4s; run: log: (pid 709) 145s
run: postgresql: (pid 1583) 11s; run: log: (pid 714) 145s
down: prometheus: 25s, normally up; run: log: (pid 710) 145s
down: puma: 25s, normally up; run: log: (pid 711) 145s
down: redis: 24s, normally up; run: log: (pid 740) 144s
down: redis-exporter: 24s, normally up; run: log: (pid 712) 145s
down: sidekiq: 20s, normally up; run: log: (pid 702) 145s

root@gitlab:~# apt-get upgrade
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  gitlab-ce
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 782 MB of archives.
After this operation, 6,144 B of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 https://packages.gitlab.com/gitlab/gitlab-ce/debian buster/main amd64 gitlab-ce amd64 13.3.6-ce.0 [782 MB]
Fetched 782 MB in 1min 9s (11.3 MB/s)                                                                                                            
(Reading database ... 103121 files and directories currently installed.)
Preparing to unpack .../gitlab-ce_13.3.6-ce.0_amd64.deb ...
gitlab preinstall: Automatically backing up only the GitLab SQL database (excluding everything else!)
2020-09-17 17:44:10 +0200 -- Dumping database ... 
Dumping PostgreSQL database gitlabhq_production ... [DONE]
2020-09-17 17:44:12 +0200 -- done
2020-09-17 17:44:12 +0200 -- Dumping repositories ...
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping uploads ... 
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping builds ... 
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping artifacts ... 
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping pages ... 
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping lfs objects ... 
2020-09-17 17:44:12 +0200 -- [SKIPPED]
2020-09-17 17:44:12 +0200 -- Dumping container registry images ... 
2020-09-17 17:44:12 +0200 -- [DISABLED]
Creating backup archive: 1600357452_2020_09_17_13.3.5_gitlab_backup.tar ... done
Uploading backup archive to remote storage  ... skipped
Deleting tmp directories ... done
done
Deleting old backups ... skipping
Warning: Your gitlab.rb and gitlab-secrets.json files contain sensitive data 
and are not included in this backup. You will need these files to restore a backup.
Please back them up manually.
Backup task is done.

Running handlers:
Running handlers complete
Chef Infra Client finished, 13/772 resources updated in 28 seconds
gitlab Reconfigured!
Restarting previously running GitLab services
ok: run: postgres-exporter: (pid 10655) 0s
ok: run: postgresql: (pid 1583) 173s

     _______ __  __          __
    / ____(_) /_/ /   ____ _/ /_
   / / __/ / __/ /   / __ `/ __ \
  / /_/ / / /_/ /___/ /_/ / /_/ /
  \____/_/\__/_____/\__,_/_.___/
  

Upgrade complete! If your GitLab server is misbehaving try running
  sudo gitlab-ctl restart
before anything else.
If you need to roll back to the previous version you can use the database
backup made during the upgrade (scroll up for the filename).

root@gitlab:~# gitlab-ctl status
down: alertmanager: 228s, normally up; run: log: (pid 708) 343s
run: gitaly: (pid 10681) 35s; run: log: (pid 706) 343s
down: gitlab-exporter: 227s, normally up; run: log: (pid 713) 343s
down: gitlab-workhorse: 226s, normally up; run: log: (pid 704) 343s
down: grafana: 226s, normally up; run: log: (pid 701) 343s
down: logrotate: 225s, normally up; run: log: (pid 703) 343s
down: nginx: 225s, normally up; run: log: (pid 705) 343s
down: node-exporter: 225s, normally up; run: log: (pid 707) 343s
run: postgres-exporter: (pid 10655) 36s; run: log: (pid 709) 343s
run: postgresql: (pid 1583) 209s; run: log: (pid 714) 343s
down: prometheus: 223s, normally up; run: log: (pid 710) 343s
down: puma: 223s, normally up; run: log: (pid 711) 343s
run: redis: (pid 10178) 66s; run: log: (pid 740) 342s
down: redis-exporter: 222s, normally up; run: log: (pid 712) 343s
down: sidekiq: 218s, normally up; run: log: (pid 702) 343s

The key bit there is:

Restarting previously running GitLab services

since it decided to restart only the services that were running prior to the upgrade. Hence, after doing that type of upgrade, I had to run gitlab-ctl restart myself to get the rest running.

It's possible, of course, to code the package scripts to take care of such a scenario in the event someone does stop everything. For example:

root@gitlab:~# gitlab-ctl status postgresql
run: postgresql: (pid 11006) 386s; run: log: (pid 714) 794s

root@gitlab:~# gitlab-ctl status postgresql
down: postgresql: 10s, normally up; run: log: (pid 714) 828s

So the scripts could check whether postgres is running (run); if not (down), start it, and then commence the upgrade, a bit like how I did it above when simulating it.
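A minimal sketch of that check (pg_state is a hypothetical helper, not part of gitlab-ctl; the status lines are taken from the output above):

```shell
#!/bin/sh
# Sketch: classify a gitlab-ctl status line by its first field ("run" or "down").
pg_state() {
  printf '%s\n' "$1" | cut -d: -f1
}

# On a real Omnibus host the package scripts could then do something like:
#   [ "$(pg_state "$(gitlab-ctl status postgresql)")" = "down" ] && \
#     gitlab-ctl start postgresql

pg_state "run: postgresql: (pid 11006) 386s; run: log: (pid 714) 794s"    # prints "run"
pg_state "down: postgresql: 10s, normally up; run: log: (pid 714) 828s"   # prints "down"
```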

However, as per the GitLab docs, it's best just to leave GitLab running and let the upgrade stop what it needs to.

Hello,

many thanks for your explanation and help; I was unsure how to do a correct update.
My reason for the initial gitlab-ctl stop was:

  • snapshots

GitLab runs here in a VM, and prior to every update we use the convenient snapshot method, just in case something goes horribly wrong. With a snapshot we can simply restore and don't need to worry about inconsistencies.

Taking a snapshot first and then doing the update with GitLab running leaves a small window for inconsistencies, in case we need to restore the snapshot. This small window can be avoided if users are unable to commit, which is why we ran gitlab-ctl stop and then hit the Backup::Error. We did the update by starting the postgres instance, and everything is fine now.

Thanks for your explanation! The only remaining question: does GitLab have some sort of maintenance mode (exactly for a snapshot-safe update)?
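For anyone else wanting a snapshot-safe update, the sequence that worked here, sketched out (CentOS host; substitute apt-get on Debian/Ubuntu):

```
gitlab-ctl stop               # block commits while the snapshot is taken
<take VM snapshot>
gitlab-ctl start postgresql   # the preinstall backup needs the database up
yum upgrade                   # pulls in the new gitlab-ce package
gitlab-ctl restart            # start any services left down afterwards
```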

best regards,

Martin

Hi Martin,

Yeah, I totally understand the reason for stopping it: to ensure a full backup and to keep anyone from connecting and potentially making changes during the snapshot.

I've checked the gitlab-ctl command and don't see anything related to a maintenance mode, nor anything within the admin panel. Someone did open an issue for it: https://gitlab.com/gitlab-org/gitlab-foss/-/issues/19739

but I have no idea whether the GitLab team was able to integrate it, since I've not found it anywhere yet.

Regards

Ian