DB Migration fails on latest update

Hi,

I’ve just tried updating our on-premise GitLab to the latest version on Debian 9, via an apt upgrade, which we’ve never had issues with before.

However, this time it appears that the gitlab::database_migrations step has failed. It looks like the error starts with

bash[migrate gitlab-rails database] (gitlab::database_migrations line 55) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'

Then there are a few ALTER TABLE commands renaming audit_events constraints, finally ending with

PG::ObjectNotInPrerequisiteState: ERROR:  sequence must have same owner as table it is linked to

ActiveRecord::StatementInvalid: PG::ObjectNotInPrerequisiteState: ERROR:  sequence must have same owner as table it is linked to

and

PG::ObjectNotInPrerequisiteState: ERROR:  sequence must have same owner as table it is linked to

Full stack/error here: “rake aborted! StandardError: An error has occurred, this and all later migratio…” - Pastebin.com

I’m able to gitlab-ctl start and it comes up; however, the healthcheck stays unhealthy, stating that migrations are pending. The version reports as up-to-date on 13.8.0.

Trying gitlab-ctl reconfigure or gitlab-rails db:migrate RAILS_ENV=production fails with the above.

Has anyone seen this/know how to fix this?
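
(For context, the pending migrations can be listed with the db:migrate:status task — a sketch; the grep just filters for the ones still marked down:)

sudo gitlab-rake db:migrate:status | grep -w down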

Phew, managed to fix this.

In my case the tables don’t all appear to be owned by the same user. As I’ve not looked at the DB before, I’ve no idea whether this has always been the case (I inherited this system).

As the failing script appeared to be ALTERing the audit_events table, which happened to be owned by a user other than the one GitLab connects as, I changed the owner of the table to the gitlab user.

Re-ran gitlab-ctl reconfigure, which failed again with the same error, so I also changed the owner of the sequence audit_events_id_seq to the gitlab user.

Re-ran gitlab-ctl reconfigure and this time the process completed OK and the healthcheck now passes.
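
For anyone wanting to check before changing anything, the owners can be seen from psql with the standard describe commands (a sketch; the object names come from the error above):

\dt audit_events
\ds audit_events_id_seq

Both listings include an Owner column, which should match the user GitLab connects as.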


Thanks, fixed for me too.

REASSIGN OWNED BY gitlab TO "gitlab-psql";

Existing tables were from a source installation, switched to RPM a while back.
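
For anyone on an Omnibus install wondering where to run that: the bundled psql can be reached with the gitlab-psql wrapper (a sketch — take a backup first, and adjust the role names to whatever your installation actually uses):

sudo gitlab-psql -d gitlabhq_production

REASSIGN OWNED BY gitlab TO "gitlab-psql";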

Hello,

I have the same problem upgrading my gitlab-ce, but I’m not familiar with managing database owners. Could you please explain in detail how to implement your solution?

Thank you very much,

In my case (remember to take backups)

I ssh’d onto our GitLab server then connected to the DB using the following

psql "host=127.0.0.1 sslmode=disable dbname=gitlab user={gitlab_username}"

Once connected I can run

\dt

That lists all the tables in the GitLab database along with their owners. In my case the tables were owned by one of two accounts.

I then changed the ownership of the audit_events table using

ALTER TABLE audit_events OWNER TO gitlab;

And then changed the sequence owner too

ALTER SEQUENCE audit_events_id_seq OWNER TO gitlab;

Then quit psql

\q

You should then be back at the regular SSH shell, where I ran

gitlab-ctl reconfigure

And that then fixed my issues. No idea if it will work for you…


Just a +1, I’m getting the same problem with pg 11 and an upgrade from 13.7 to 13.8.

Thanks!

I will try, but it seems I don’t have access to the psql command. I will try using gitlab-rails dbconsole instead.
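
For what it’s worth, gitlab-rails dbconsole should land at the same psql prompt, where the \dt and ALTER statements from the post above can be run (a sketch; I haven’t verified this on every version):

sudo gitlab-rails dbconsole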

I partly solved the problem by downgrading gitlab-ce, installing PostgreSQL 12, and reinstalling gitlab-ce up to 13.7.5, but 13.8.1 produces the same error as you describe.

Here’s another data point:
The GitLab instance that I’m having problems with was migrated from an Omnibus instance to Kubernetes; the Omnibus instance connects to the database as gitlab, while the k8s instance connects to its database as postgres.

Looking at the Omnibus instance with \d+, I can see that all tables and sequences are owned by gitlab, while the same command in the k8s database shows some tables/sequences owned by gitlab and others owned by postgres.
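
A catalog query along these lines (a sketch — adjust the schema list and the expected role for your setup) lists any tables or sequences whose owner differs from the application user:

SELECT n.nspname AS schema, c.relname, c.relkind, r.rolname AS owner
FROM pg_class c
JOIN pg_roles r ON r.oid = c.relowner
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname IN ('public', 'gitlab_partitions_dynamic', 'gitlab_partitions_static')
  AND c.relkind IN ('r', 'p', 'S')   -- ordinary tables, partitioned tables, sequences
  AND r.rolname <> 'gitlab';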

Just to confirm, I used SELECT usename, application_name, query_start FROM pg_stat_activity; to check which user GitLab connects as, and it’s postgres.

It seems that somewhere along the way the k8s installation started connecting as postgres so all new metadata objects ended up being owned by postgres.

I don’t think I made the choice to use the postgres user for the application; I just handed over a secret with all the passwords and expected GitLab to do the needful.

I don’t mind handing the postgres user over to GitLab, as the database instance only exists for GitLab to use, so I guess the solution is to change the owner of all the tables and sequences as part of the omnibus-to-k8s migration.
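
Something like the following should do that wholesale — a sketch only, assuming the target role is postgres and the standard GitLab schemas; take a backup and run it as a superuser. Note that REASSIGN OWNED BY only touches objects currently owned by the named role, whereas this loops over everything:

DO $$
DECLARE r record;
BEGIN
  FOR r IN
    SELECT schemaname, tablename AS name, 'TABLE' AS kind FROM pg_tables
    WHERE schemaname IN ('public', 'gitlab_partitions_dynamic', 'gitlab_partitions_static')
    UNION ALL
    SELECT schemaname, sequencename AS name, 'SEQUENCE' AS kind FROM pg_sequences
    WHERE schemaname IN ('public', 'gitlab_partitions_dynamic', 'gitlab_partitions_static')
  LOOP
    -- hand each table and sequence to the assumed target role
    EXECUTE format('ALTER %s %I.%I OWNER TO %I', r.kind, r.schemaname, r.name, 'postgres');
  END LOOP;
END $$;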

I just tried switching the ownership of everything in my k8s system to postgres with REASSIGN OWNED BY gitlab TO postgres and re-running the migrations, and now it fails differently:


ALTER TABLE "gitlab_partitions_dynamic"."audit_events_part_5fc467ac26_202103"
RENAME TO "audit_events_202103";

ALTER TABLE "gitlab_partitions_dynamic"."audit_events_part_5fc467ac26_202104"
RENAME CONSTRAINT "audit_events_part_5fc467ac26_202104_pkey" TO "audit_events_202104_pkey";

ALTER TABLE "gitlab_partitions_dynamic"."audit_events_part_5fc467ac26_202104"
RENAME TO "audit_events_202104";

ALTER TABLE "gitlab_partitions_dynamic"."audit_events_part_5fc467ac26_202105"
RENAME CONSTRAINT "audit_events_part_5fc467ac26_202105_pkey" TO "audit_events_202105_pkey";

ALTER TABLE "gitlab_partitions_dynamic"."audit_events_part_5fc467ac26_202105"
RENAME TO "audit_events_202105";

ALTER TABLE "gitlab_partitions_dynamic"."audit_events_part_5fc467ac26_202106"
RENAME CONSTRAINT "audit_events_part_5fc467ac26_202106_pkey" TO "audit_events_202106_pkey";

ALTER TABLE "gitlab_partitions_dynamic"."audit_events_part_5fc467ac26_202106"
RENAME TO "audit_events_202106";

ALTER TABLE "gitlab_partitions_dynamic"."audit_events_part_5fc467ac26_202107"
RENAME CONSTRAINT "audit_events_part_5fc467ac26_202107_pkey" TO "audit_events_202107_pkey";

ALTER TABLE "gitlab_partitions_dynamic"."audit_events_part_5fc467ac26_202107"
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:

PG::UndefinedObject: ERROR: constraint "audit_events_part_5fc467ac26_202107_pkey" for table "audit_events_part_5fc467ac26_202107" does not exist
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/postgresql/database_statements.rb:92:in `exec'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/postgresql/database_statements.rb:92:in `block (2 levels) in execute'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies/interlock.rb:48:in `block in permit_concurrent_loads'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/concurrency/share_lock.rb:187:in `yield_shares'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/dependencies/interlock.rb:47:in `permit_concurrent_loads'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/postgresql/database_statements.rb:91:in `block in execute'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/abstract_adapter.rb:722:in `block (2 levels) in log'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:26:in `block (2 levels) in synchronize'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `handle_interrupt'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `block in synchronize'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `handle_interrupt'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `synchronize'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/abstract_adapter.rb:721:in `block in log'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/notifications/instrumenter.rb:24:in `instrument'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/abstract_adapter.rb:712:in `log'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/postgresql/database_statements.rb:90:in `execute'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/marginalia-1.10.0/lib/marginalia.rb:71:in `execute_with_marginalia'
/srv/gitlab/lib/gitlab/database/partitioning/replace_table.rb:32:in `execute'
/srv/gitlab/lib/gitlab/database/partitioning/replace_table.rb:27:in `perform'
/srv/gitlab/lib/gitlab/database/partitioning_migration_helpers/table_management_helpers.rb:425:in `block in replace_table'
/srv/gitlab/lib/gitlab/database/with_lock_retries.rb:121:in `run_block'
/srv/gitlab/lib/gitlab/database/with_lock_retries.rb:130:in `block in run_block_with_transaction'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `block in transaction'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/abstract/transaction.rb:280:in `block in within_new_transaction'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:26:in `block (2 levels) in synchronize'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `handle_interrupt'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `block in synchronize'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `handle_interrupt'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `synchronize'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/abstract/transaction.rb:278:in `within_new_transaction'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `transaction'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/transactions.rb:212:in `transaction'
/srv/gitlab/lib/gitlab/database/with_lock_retries.rb:125:in `run_block_with_transaction'
/srv/gitlab/lib/gitlab/database/with_lock_retries.rb:95:in `run'
/srv/gitlab/lib/gitlab/database/migration_helpers.rb:394:in `with_lock_retries'
/srv/gitlab/lib/gitlab/database/partitioning_migration_helpers/table_management_helpers.rb:422:in `replace_table'
/srv/gitlab/lib/gitlab/database/partitioning_migration_helpers/table_management_helpers.rb:206:in `replace_with_partitioned_table'
/srv/gitlab/db/migrate/20201112215132_swap_partitioned_audit_events.rb:9:in `up'

Hmm, I just tried:

  • Migrating omnibus 12.10.14 to kubernetes
  • REASSIGN OWNED BY gitlab TO postgres
  • Upgrade 12.10.14 → 13.0.14
  • Upgrade 13.0.14 → 13.1.11
  • Upgrade 13.1.11 → 13.8.1

This all went OK until the post-deployment migrations, which ended in tears:

❯ klf job.batch/gitlab-migrations-v13.8.1-post-20210201132303
Begin parsing .erb files from /var/opt/gitlab/templates
Writing /srv/gitlab/config/database.yml
Writing /srv/gitlab/config/cable.yml
Writing /srv/gitlab/config/gitlab.yml
Writing /srv/gitlab/config/resque.yml
Copying other config files found in /var/opt/gitlab/templates
Attempting to run '/scripts/wait-for-deps /scripts/db-migrate' as a main process
Checking database migrations are up-to-date
Performing migrations (this will initialized if needed)
WARNING: Active Record does not support composite primary key.

audit_events has composite primary key. Composite primary key is ignored.
rake aborted!
StandardError: An error has occurred, all later migrations canceled:

the column: argument must be set to a column name to use for ordering rows
/srv/gitlab/app/models/concerns/each_batch.rb:52:in `each_batch'
/srv/gitlab/lib/gitlab/database/migrations/background_migration_helpers.rb:105:in `queue_background_migration_jobs_by_range_at_intervals'
/srv/gitlab/lib/gitlab/database/partitioning_migration_helpers/table_management_helpers.rb:383:in `enqueue_background_migration'
/srv/gitlab/lib/gitlab/database/partitioning_migration_helpers/table_management_helpers.rb:108:in `enqueue_partitioning_data_migration'
/srv/gitlab/db/post_migrate/20200722202318_backfill_partitioned_audit_events.rb:13:in `up'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:831:in `exec_migration'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:812:in `block (2 levels) in migrate'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:811:in `block in migrate'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:471:in `with_connection'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:810:in `migrate'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1002:in `migrate'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1310:in `block in execute_migration_in_transaction'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1363:in `ddl_transaction'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1309:in `execute_migration_in_transaction'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1281:in `block in migrate_without_lock'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1280:in `each'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1280:in `migrate_without_lock'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1229:in `block in migrate'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1382:in `with_advisory_lock'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1229:in `migrate'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1061:in `up'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/migration.rb:1036:in `migrate'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/tasks/database_tasks.rb:238:in `migrate'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/railties/databases.rake:86:in `block (3 levels) in '
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/railties/databases.rake:84:in `each'
/srv/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.4/lib/active_record/railties/databases.rake:84:in `block (2 levels) in '
/srv/gitlab/lib/tasks/gitlab/db.rake:59:in `block (3 levels) in '

:frowning: For me the DB migration also fails from version 13.7 to 13.8. I have it running in k8s with a Helm chart, and the upgrade keeps failing there. Luckily the rollback command saved my day for now!
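
(In case it helps: the rollback here is Helm’s — a sketch, assuming a release named gitlab in the gitlab namespace; check helm history first for the revision to roll back to:)

helm history gitlab -n gitlab
helm rollback gitlab <REVISION> -n gitlab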

Hi all! We also have an issue about the “sequence must have same owner as table it is linked to” error over in omnibus-gitlab#5957. The discussion there likely has some additional details, pointers to potential problems, and workarounds.

I’m working on upgrading a GitLab Omnibus installation (v12.3.0-ce) on a VM instance. Our goal is to ultimately upgrade to the latest version of GitLab (v13.x). According to the documentation, we need to first upgrade to GitLab version 12.10.14, and there must be no active background migrations.
Once the server is upgraded, we run sudo gitlab-ctl reconfigure to reconfigure the system. This process is fine until GitLab version 12.9.0-ce.0, at which point Sidekiq perpetually requeues the same background migration, which produces the errors in its logs (check here).
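
(For reference, these are the checks I believe the docs suggest for confirming that background migrations have drained before moving to the next stop on the upgrade path — a sketch; class names may differ between versions:)

sudo gitlab-rails runner -e production 'puts Gitlab::BackgroundMigration.remaining'
sudo gitlab-rails runner -e production 'puts Sidekiq::Queue.new("background_migration").size'

Both should print 0 before continuing the upgrade.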

Hello there,
I had the same issue, but altering tables did not resolve it. Instead, gitlab-rake db:migrate tried to run the failed migrations and printed the command to execute in order to resolve the migration issue. So in my case I probably did not complete the migrations after upgrading GitLab, and to resolve it I had to follow these steps (the commands are collected below):
1. Run gitlab-rake db:migrate (with sudo privileges).
2. Read the error in the console.
3. Copy the command proposed by the console (in my case I had to execute gitlab-rake gitlab:background_migrations:finalize[ProjectNamespaces::BackfillProjectNamespaces,projects,id,'[null,"up"]']).
4. Run gitlab-ctl reconfigure.
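
(Roughly, as commands — the finalize arguments came straight from the error output, so yours may differ:)

sudo gitlab-rake db:migrate
sudo gitlab-rake gitlab:background_migrations:finalize[ProjectNamespaces::BackfillProjectNamespaces,projects,id,'[null,"up"]']
sudo gitlab-ctl reconfigure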

Further explanation is at this link: Batched background migrations | GitLab