GitLab Docker (Omnibus): GitLab container in unhealthy state after deploying with Docker Engine

Hi, this is my first time deploying GitLab using Docker Engine on CentOS 7 Core. I'm following the guide at https://docs.gitlab.com/omnibus/docker/#install-gitlab-using-docker-engine to set up the GitLab Docker container.

I'm using custom ports for HTTP, HTTPS, and SSH since ports 80, 443, and 22 are already in use.

sudo docker run --detach \
  --hostname example.com \
  --publish 10443:443 --publish 8080:80 --publish 10022:22 \
  --name gitlab \
  --restart always \
  --volume $GITLAB_HOME/config:/etc/gitlab \
  --volume $GITLAB_HOME/logs:/var/log/gitlab \
  --volume $GITLAB_HOME/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest

** where example.com is my server's domain
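For completeness, the command above assumes $GITLAB_HOME is exported beforehand; a minimal sketch, using the path from the official guide:

export GITLAB_HOME=/srv/gitlab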

After 4 minutes I checked the container state using "docker container ls | grep gitlab" and found the GitLab container in the "unhealthy" state.

[root@svr logs]# docker container ls | grep gitlab
59b450bd6ab5        gitlab/gitlab-ce:latest          "/assets/wrapper"        12 minutes ago      Up 12 minutes (unhealthy)   0.0.0.0:10022->22/tcp, 0.0.0.0:8080->80/tcp, 0.0.0.0:10443->443/tcp   gitlab

Is there any configuration that I missed during installation? Where should I start troubleshooting?
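For reference, Docker records the output of the failing health check, which can be read with docker inspect (a generic Docker command, not something from the guide):

docker inspect --format '{{json .State.Health}}' gitlab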

Update

I have tested a similar setup on a Google Cloud VM (free tier), and it works fine. The container runs in a healthy state.

I checked the container logs on the server using "docker logs gitlab" and found an error from GitLab Workhorse:

==> /var/log/gitlab/gitlab-workhorse/current <==
{"correlation_id":"e1xdPlmbME2","duration_ms":0,"error":"badgateway: failed to receive response: dial unix /var/opt/gitlab/gitlab-rails/sockets/gitlab.socket: connect: connection refused","level":"error","method":"GET","msg":"error","time":"2020-11-23T07:48:52Z","uri":"/-/metrics"}
{"content_type":"text/html; charset=utf-8","correlation_id":"e1xdPlmbME2","duration_ms":0,"host":"127.0.0.1:8080","level":"info","method":"GET","msg":"access","proto":"HTTP/1.1","referrer":"","remote_addr":"127.0.0.1:0","remote_ip":"127.0.0.1","status":502,"system":"http","time":"2020-11-23T07:48:52Z","uri":"/-/metrics","user_agent":"Prometheus/2.20.1","written_bytes":2940}

On the Google Cloud VM, I did not find this kind of error.

You will want to look at the logs for gitlab-rails and Puma to see if it's erroring out before creating the socket.
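For example, the bundled runit tooling can tail those services directly (the service and log directory names here are the standard omnibus ones):

docker exec -it gitlab gitlab-ctl tail puma
docker exec -it gitlab gitlab-ctl tail gitlab-rails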

It might be something like a permission problem on your $GITLAB_HOME/data directory, but you will want to see if there is a Rails error that corresponds.
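If it does turn out to be permissions, the omnibus image ships a helper that resets ownership inside the container; a hedged sketch (run it, then restart the container):

docker exec gitlab update-permissions
docker restart gitlab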

I have checked the logs for gitlab-rails and puma. Only one Puma log file, called "current", shows a warning:

2020-11-24_03:18:53.72455 {"timestamp":"2020-11-24T03:18:53.724Z","pid":319,"message":"Puma starting in cluster mode..."}
2020-11-24_03:18:53.72465 {"timestamp":"2020-11-24T03:18:53.724Z","pid":319,"message":"* Version 4.3.5.gitlab.3 (ruby 2.7.2-p137), codename: Mysterious Traveller"}
2020-11-24_03:18:53.72465 {"timestamp":"2020-11-24T03:18:53.724Z","pid":319,"message":"* Min threads: 4, max threads: 4"}
2020-11-24_03:18:53.72467 {"timestamp":"2020-11-24T03:18:53.724Z","pid":319,"message":"* Environment: production"}
2020-11-24_03:18:53.72467 {"timestamp":"2020-11-24T03:18:53.724Z","pid":319,"message":"* Process workers: 4"}
2020-11-24_03:18:53.72468 {"timestamp":"2020-11-24T03:18:53.724Z","pid":319,"message":"* Preloading application"}
2020-11-24_03:19:26.29065 {"timestamp":"2020-11-24T03:19:26.290Z","pid":319,"message":"* Listening on unix:///var/opt/gitlab/gitlab-rails/sockets/gitlab.socket"}
2020-11-24_03:19:26.29085 {"timestamp":"2020-11-24T03:19:26.290Z","pid":319,"message":"* Listening on tcp://127.0.0.1:8080"}
2020-11-24_03:19:26.29089 {"timestamp":"2020-11-24T03:19:26.290Z","pid":319,"message":"! WARNING: Detected 2 Thread(s) started in app boot:"}
2020-11-24_03:19:26.29101 {"timestamp":"2020-11-24T03:19:26.290Z","pid":319,"message":"! #\u003cThread:0x00007f2f4f8d2568 /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/connection_adapters/abstract/c$
2020-11-24_03:19:26.29110 {"timestamp":"2020-11-24T03:19:26.291Z","pid":319,"message":"! #\u003cThread:0x00007f2f2e773ad0 /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/rack-timeout-0.5.2/lib/rack/timeout/support/scheduler.rb:73 sleep\u00$
2020-11-24_03:19:26.29118 {"timestamp":"2020-11-24T03:19:26.291Z","pid":319,"message":"Use Ctrl-C to stop"}
2020-11-24_03:38:03.81294 Received HUP from runit, sending INT signal to perform graceful restart
2020-11-24_03:38:03.81312 Sending INT signal to Puma 319...
2020-11-24_03:38:03.81656 Waiting for Puma 319 to exit (1)...
2020-11-24_03:38:03.82805 Received HUP from runit, sending INT signal to perform graceful restart
2020-11-24_03:38:03.82824 Sending INT signal to Puma 319...
2020-11-24_03:38:03.82883 Waiting for Puma 319 to exit (1)...
2020-11-24_03:38:04.81750 Waiting for Puma 319 to exit (2)...
2020-11-24_03:38:04.82979 Waiting for Puma 319 to exit (2)...
2020-11-24_03:38:05.81951 Waiting for Puma 319 to exit (3)...
2020-11-24_03:38:05.83168 Waiting for Puma 319 to exit (3)...
2020-11-24_03:38:06.82136 Waiting for Puma 319 to exit (4)...
2020-11-24_03:38:06.83347 Waiting for Puma 319 to exit (4)...
2020-11-24_03:38:07.82221 Waiting for Puma 319 to exit (5)...
2020-11-24_03:38:07.83454 Waiting for Puma 319 to exit (5)...
2020-11-24_03:38:08.82357 Waiting for Puma 319 to exit (6)...
2020-11-24_03:38:08.83614 Waiting for Puma 319 to exit (6)...
2020-11-24_03:38:09.82534 Waiting for Puma 319 to exit (7)...
2020-11-24_03:38:09.83787 Waiting for Puma 319 to exit (7)...
2020-11-24_03:38:10.82612 Waiting for Puma 319 to exit (8)...
2020-11-24_03:38:10.83860 Waiting for Puma 319 to exit (8)...
2020-11-24_03:38:11.82695 Waiting for Puma 319 to exit (9)...
2020-11-24_03:38:11.85091 Waiting for Puma 319 to exit (9)...
2020-11-24_03:38:12.82776 Waiting for Puma 319 to exit (10)...
2020-11-24_03:38:12.85164 Waiting for Puma 319 to exit (10)...
2020-11-24_03:38:13.82942 Waiting for Puma 319 to exit (11)...
2020-11-24_03:38:13.85489 Waiting for Puma 319 to exit (11)...
2020-11-24_03:38:14.85106 Waiting for Puma 319 to exit (12)...
2020-11-24_03:38:14.85107 control/h: 26: kill: No such process
2020-11-24_03:38:14.85108
2020-11-24_03:38:14.85108 Puma 319 did exit.

Not sure if the warning is related to my issue. I'll check the permissions on the data directory.

Update

I managed to solve the problem after I reinstalled Docker and redeployed GitLab using the following ports.

[root@svr ~]# docker container ls | grep gitlab
c15c0b65e2a7        gitlab/gitlab-ce:latest   "/assets/wrapper"        6 hours ago         Up 6 hours (healthy)   22/tcp, 80/tcp, 0.0.0.0:10022->22/tcp, 0.0.0.0:10080->10080/tcp, 443/tcp, 0.0.0.0:10443->10443/tcp   gitlab
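For anyone hitting the same issue: a mapping like 0.0.0.0:10080->10080/tcp suggests the container itself was reconfigured to listen on the custom ports. A hedged sketch of a run command that would produce this mapping (the external_url value and the gitlab_shell_ssh_port setting are my assumptions, not the exact command used):

sudo docker run --detach \
  --hostname example.com \
  --env GITLAB_OMNIBUS_CONFIG="external_url 'http://example.com:10080'; gitlab_rails['gitlab_shell_ssh_port'] = 10022" \
  --publish 10443:10443 --publish 10080:10080 --publish 10022:22 \
  --name gitlab \
  --restart always \
  --volume $GITLAB_HOME/config:/etc/gitlab \
  --volume $GITLAB_HOME/logs:/var/log/gitlab \
  --volume $GITLAB_HOME/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest

With external_url set this way, the bundled NGINX listens on port 10080 inside the container, so the host and container ports match; serving HTTPS on 10443 would similarly need an https external_url and certificates configured.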