500 Internal Server Error Trying To Sign In (iocage FreeBSD jail)

Hello!

I am trying to run GitLab 12.7 on my FreeNAS jail via: https://gitlab.fechner.net/mfechner/Gitlab-docu/blob/master/update/12.6-12.7-freebsd.md

The update actually went quite smoothly, except for the fact that I keep having to manually comment out the doorkeeper.rb monkey patch lines every update and then run the migration scripts.
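For reference, the recurring step looks roughly like this each time (paths assume the standard FreeBSD port layout under /usr/local/www/gitlab-ce; the exact lines to comment out are the ones named in the linked guide):

cd /usr/local/www/gitlab-ce
# comment out the doorkeeper.rb monkey-patch lines the guide points at, then run the migrations:
sudo -u git -H bundle exec rake db:migrate RAILS_ENV=production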

That said, I am unable to properly sign into my GitLab instance due to an error in the production.log:

Error Info

Redirected to https://my.site/users/sign_in
Filter chain halted as :redirect_unlogged_user rendered or redirected
Completed 302 Found in 26ms (ActiveRecord: 1.3ms | Elasticsearch: 0.0ms)
Started GET "/users/sign_in" for 127.0.0.1 at 2020-02-11 00:21:25 -0500
Processing by SessionsController#new as HTML
Completed 500 Internal Server Error in 20ms (ActiveRecord: 1.3ms | Elasticsearch: 0.0ms)

RuntimeError ():

lib/gitlab/anonymous_session.rb:12:in `block in store_session_id_per_ip'
lib/gitlab/redis/wrapper.rb:19:in `block in with'
lib/gitlab/redis/wrapper.rb:19:in `with'
lib/gitlab/anonymous_session.rb:11:in `store_session_id_per_ip'
app/controllers/sessions_controller.rb:163:in `store_unauthenticated_sessions'
lib/gitlab/session.rb:11:in `with_session'
app/controllers/application_controller.rb:467:in `set_session_storage'
lib/gitlab/i18n.rb:55:in `with_locale'
lib/gitlab/i18n.rb:61:in `with_user_locale'
app/controllers/application_controller.rb:461:in `set_locale'
lib/gitlab/application_context.rb:18:in `with_context'
app/controllers/application_controller.rb:453:in `set_current_context'
lib/gitlab/error_tracking.rb:34:in `with_context'
app/controllers/application_controller.rb:545:in `sentry_context'
lib/gitlab/request_profiler/middleware.rb:17:in `call'
lib/gitlab/middleware/go.rb:20:in `call'
lib/gitlab/etag_caching/middleware.rb:13:in `call'
lib/gitlab/middleware/multipart.rb:117:in `call'
lib/gitlab/middleware/read_only/controller.rb:52:in `call'
lib/gitlab/middleware/read_only.rb:18:in `call'
lib/gitlab/middleware/basic_health_check.rb:25:in `call'
lib/gitlab/middleware/request_context.rb:23:in `call'
config/initializers/fix_local_cache_middleware.rb:9:in `call'
lib/gitlab/metrics/requests_rack_middleware.rb:49:in `call'
lib/gitlab/middleware/release_env.rb:12:in `call'

I have now spent two days fighting with this, trying to come up with any good reason for it, and I am out of ideas for where to look next. Any advice is appreciated!

Hi,

that’s not referenced in the linked docs; what impact does it have here?

Is Redis running when you’re trying to login?
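A quick sanity check, assuming the FreeBSD defaults for the service name, port and socket path, would be something like:

service redis status
redis-cli ping                                 # should answer PONG (add -h <bind address> if Redis is not on 127.0.0.1)
redis-cli -s /var/run/redis/redis.sock ping    # same check over the unix socket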

Cheers,
Michael

The doorkeeper reference may be unrelated; I mentioned it in case it was relevant, since this log appears (to me at least) cryptic.

I did check that, and I get this result which makes me think it is running just fine:

sockstat -l

USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS              FOREIGN ADDRESS
www      nginx      8648  9  tcp4   192.168.0.240:80           *:*
root     nginx      8647  9  tcp4   192.168.0.240:80           *:*
redis    redis-serv 3681  6  tcp4   192.168.0.240:6379         *:*
redis    redis-serv 3681  7  stream /var/run/redis/redis.sock
postgres postgres   3666  4  tcp4   192.168.0.240:5432         *:*
postgres postgres   3666  5  stream /tmp/.s.PGSQL.5432
root     sshd       3851  4  tcp4   192.168.0.240:22           *:*
root     syslogd    3777  5  dgram  /var/run/log
root     syslogd    3777  6  dgram  /var/run/logpriv

I don’t believe I had ever configured GitLab to use Redis, so if the source script wasn’t using it out of the box, it was never something I had set up in the past (on this FreeBSD iocage jail). In the past I had been using Omnibus on Ubuntu and it worked much more seamlessly, but I transitioned to the FreeBSD port in the hope that it would be easier to maintain and set up.
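As far as I can tell, a source/port install would read its Redis connection from config/resque.yml rather than anything on the Redis side, so that is presumably where any explicit setting would live (path assumed for the FreeBSD port layout):

cat /usr/local/www/gitlab-ce/config/resque.yml
# with a production entry along the lines of:
#   production:
#     url: unix:/var/run/redis/redis.sock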

At this point, once I finally get it to start, I will likely switch it over to Docker or another container setup so there’s even less installation management, but I need to get it into a working state before I try to back it up and restore it externally.

Hi,

I’m not really a fully-fledged FreeBSD user, just chiming in to add a thought. Since you mention Docker, that would be a good idea: everything that goes beyond the Omnibus packages or Docker is hard to support and debug.
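For reference, the documented Docker route looks roughly like this (the hostname, host paths and ports here are placeholders):

sudo docker run -d \
  --hostname my.site \
  -p 443:443 -p 80:80 -p 2222:22 \
  -v /srv/gitlab/config:/etc/gitlab \
  -v /srv/gitlab/logs:/var/log/gitlab \
  -v /srv/gitlab/data:/var/opt/gitlab \
  --name gitlab \
  gitlab/gitlab-ce:12.7.0-ce.0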

In terms of the running instance - is downgrading to 12.6 an option with these instructions?

Cheers,
Michael


The instructions I had followed were the ones linked in my first post, but the reason I had been upgrading was that my 12.6 install had been having issues as it was, and I hoped that 12.7 would solve a bunch of them.

I’d really like to avoid writing this instance off (i.e. not being able to migrate its database and directories, etc.), but I’m just not sure what I can do at this point, and I know the port isn’t well supported, haha.

Right now it’s something to do with storing the session_id_per_ip; I’m just not sure how to alleviate it or make the logging more verbose to figure out what the heck has gone wrong.
From the error stack trace, I’m able to deduce that it’s a Redis misconfiguration where it’s timing out trying to connect (I think, at least), based on this being the line in question:
@pool.with { |redis| yield redis }
I have tested that I’m able to connect by both socket and port with the default redis settings:

redis-benchmark -q -n 10000
redis-benchmark -q -n 10000 -s /var/run/redis/redis.sock

I also edited lib/gitlab/redis/wrapper.rb and added a puts params before the connection attempt, which showed that it is connecting via that redis.sock, so the socket definitely exists…
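One more check that should exercise the same code path as the failing request (user and paths assumed for the FreeBSD port layout) would be to ping Redis through GitLab’s own wrapper:

cd /usr/local/www/gitlab-ce
sudo -u git -H bundle exec rails runner -e production \
  'puts Gitlab::Redis::SharedState.with { |redis| redis.ping }'
# a healthy setup should print PONG; an exception here fails on the same wrapper.rb line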

As a follow-up:

  1. I was able to find a workaround: I just started the GitLab instance (it would start, I just couldn’t properly sign in) and then performed a DB backup like normal.
  2. I then rsync’d the backup .tar and all relevant config files from the config dir, such as gitlab.yml, secrets.yml, .gitlab_workhorse_secret, etc., and moved them all to the new server on which Omnibus would be installed and set up.
  3. I installed the identical GitLab Omnibus version, 12.7.0-ce.0, and then followed the various guides on the internet for GitLab migrations and/or source-to-Omnibus migrations (a rough sketch of these steps follows this list).
  4. After starting it all up, it just magically works once again :slight_smile:
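In rough shell terms, with paths that are assumptions for the FreeBSD port on the jail and a default Omnibus install on the new box (the backup timestamp is a placeholder):

# 1. On the jail: create a backup while the instance is up
cd /usr/local/www/gitlab-ce
sudo -u git -H bundle exec rake gitlab:backup:create RAILS_ENV=production

# 2. Copy the backup tarball and the config/secret files over to the new server
rsync -av tmp/backups/<timestamp>_gitlab_backup.tar newserver:/var/opt/gitlab/backups/
rsync -av config/gitlab.yml config/secrets.yml newserver:/root/gitlab-migration/   # plus .gitlab_workhorse_secret etc.

# 3. On the new server, with the identical Omnibus version installed:
#    stop the services that touch the database, then restore and reconfigure
sudo gitlab-ctl stop unicorn && sudo gitlab-ctl stop sidekiq
sudo gitlab-backup restore BACKUP=<timestamp>
sudo gitlab-ctl reconfigure && sudo gitlab-ctl restart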

I am so happy to hear that you were blessed with some GitLab magic. Although it can be mysterious, it’s always a happy ending when things magically work again.

Let us know if you need anything else, @Kyrluckechuck, and thanks for the follow-up! :blush:
