After major upgrade from 13.10 -> 14.3.2, service account user can no longer log in to registry

I just upgraded our internal GitLab instance from 13.10 to the latest 14.3.2, following the upgrade path described in the documentation (latest 13.x → 14.0.x → 14.y.x). Everything went well except that I jumped the gun on the final upgrade from 14.0.11 to 14.3.2 and had to manually finalize some migrations.

So far almost everything seems to have gone fine, with the exception of a service account user that I created to allow our deployment platform to log into the registry. That login is now failing with the following error:

Error response from daemon: Get https://(registry_url)/v2/: unauthorized: HTTP Basic: Access denied

The registry logs don’t add much beyond that: error authorizing context: authorization token required

I’ve tried generating new access tokens (both with registry-only scopes and with all scopes), enabling 2FA on the account in question, and creating a completely new user with its own access token. I’m not sure if the base user requirements for using access tokens changed, but I didn’t see anything in the documentation.

Things like deployment tokens and tokens for my actual user continue to work fine. I’m thinking there’s just a note I missed somewhere about additional requirements or changes, but any input is appreciated.

There was a change around local accounts and password expiry. We had some internal accounts that lost certain privileges due to expired passwords related to this change: Password expired error on git fetch via SSH for LDAP user (#332455) · Issues · GitLab.org / GitLab · GitLab. However, since you created a new user, I’m not sure this is applicable, but you could check in the gitlab-rails console:

# gitlab-rails console
u = User.find_by_username('<affected username>')
u.password_expired?   # true means the account's password has expired
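For context, the check itself is conceptually simple: GitLab stores a `password_expires_at` timestamp on the user, and `password_expired?` is roughly a comparison of that timestamp against the current time. A minimal standalone sketch of the idea (not GitLab's actual model code):

```ruby
# Minimal stand-in for the relevant part of GitLab's User model.
# Only illustrates the password_expires_at comparison; the real
# implementation lives in GitLab's User model.
SketchUser = Struct.new(:username, :password_expires_at) do
  def password_expired?
    !password_expires_at.nil? && password_expires_at < Time.now
  end
end

expired = SketchUser.new('deploy-bot', Time.now - 3600)  # expired an hour ago
fresh   = SketchUser.new('deploy-bot', nil)              # no expiry set

puts expired.password_expired?  # true
puts fresh.password_expired?    # false
```

So a user with a nil or future `password_expires_at` passes the check, and anything in the past blocks token-based authentication.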

Thank you @m.hoogenboom! That was exactly the issue, although in my case it affected a user that is not an LDAP user. In any case, that user’s password was indeed expired (despite my resetting it through the admin panel as a separate admin user, not by impersonating the user in question), and the provided workaround fixed it.
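In case the linked issue goes stale: the workaround was along these lines, clearing the expiry timestamp from the rails console (see the linked issue for the exact steps in your version):

```
# gitlab-rails console
u = User.find_by_username('<affected username>')
u.password_expires_at = nil
u.save!
```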

I did run across another issue in the process, which is that gitlab-rails dbconsole returned the following:

'primary' database is not configured for 'production'

I was able to work around it by just accessing the db directly from the external postgres pod, but it makes me think something went sideways with my upgrade. If you have any thoughts on that, please let me know. Thanks again!

Have you seen this post: docker - gitlab-rails dbconsole command returns 'primary' database not configured for 'production' - Stack Overflow.
Note that my production instance reports the same ‘primary’ database is not configured for ‘production’ error. As we have a Premium license, I’ll check with GitLab Support whether this should be fixed.

Thanks @m.hoogenboom, I just have a couple of questions about that post:

  • Where is the gitlab-psql binary located? I’m running the GitLab Helm deployment, and while I can find e.g. gitlab-rails in the runner pod, I couldn’t find gitlab-psql there.
  • We run an external postgres database rather than the included one. Will that cause any issues?
  • We’re technically running the EE deployment although our license has long expired; could this also be related?

Thanks!

Hi,
I’m not sure where it is located in the Helm deployment; we are running Omnibus. But I got an answer from support: the command should be (for us):
gitlab-rails dbconsole --database main
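For context: since GitLab added multi-database support, the generated config/database.yml keys the connection as main rather than primary, which is why dbconsole now needs an explicit --database flag. Roughly (a sketch of the generated layout, not your exact file):

```yaml
production:
  main:
    adapter: postgresql
    database: gitlabhq_production
    # host, username, etc. depend on your deployment
```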

That worked! Thanks again!
