GitLab Commit: 500 Error

I hope I’ve posted this in the right place.

So I run a private GitLab server on the latest version, 13.11.4.

The Problem

Recently one of our developers made a large commit to one of his branches, and when you click the commit hash, GitLab returns 500 - Whoops, something went wrong on our end.

No other commits in this GitLab project have this error. Thankfully it isn’t a commit on the master branch.

I should also mention that the VM’s disk is less than 50% full and memory usage stays below 50%, so I really don’t think the issue is related to disk space or memory.
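(For reference, a quick way to confirm this on the VM itself, nothing GitLab-specific:)

```shell
# Quick resource sanity checks on the VM:
df -h /                           # overall disk usage; repos live under /var/opt/gitlab
free -m || head -3 /proc/meminfo  # memory usage in MB (fallback if procps is missing)
```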

What could be causing this, and is there a solution? Let me share some of the current logs (I’ve obviously changed the names of the project, team and repository) and the most interesting bits I could find:

root@gitlab:~# sudo gitlab-ctl tail | grep error
==> /var/log/gitlab/nginx/error.log <==
==> /var/log/gitlab/nginx/gitlab_error.log <==
2021-05-16_17:27:21.61109 time="2021-05-16T17:27:21Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:22.62482 time="2021-05-16T17:27:22Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:23.63828 time="2021-05-16T17:27:23Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:24.65270 time="2021-05-16T17:27:24Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:25.66763 time="2021-05-16T17:27:25Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:26.68145 time="2021-05-16T17:27:26Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:27.69607 time="2021-05-16T17:27:27Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:28.70908 time="2021-05-16T17:27:28Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:29.72567 time="2021-05-16T17:27:29Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:30.73896 time="2021-05-16T17:27:30Z" level=error msg="unknown flag `no-auto'"
2021-05-16_16:27:34.11324 {"@level":"debug","@message":"datasource: registering query type handler","@timestamp":"2021-05-16T16:27:34.112712Z","queryType":"random_walk_with_error"}
2021-05-16_16:27:34.11324 {"@level":"debug","@message":"datasource: registering query type handler","@timestamp":"2021-05-16T16:27:34.112734Z","queryType":"server_error_500"}
2021-05-16_16:27:34.21788 t=2021-05-16T16:27:34+0000 lvl=eror msg="Failed to read plugin provisioning files from directory" logger=provisioning.plugins path=/var/opt/gitlab/grafana/provisioning/plugins error="open /var/opt/gitlab/grafana/provisioning/plugins: no such file or directory"
2021-05-16_16:28:00.28927 level=error ts=2021-05-16T16:28:00.289Z caller=manager.go:314 component="discovery manager scrape" msg="Cannot create service discovery" err="unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" type=kubernetes
2021-05-16_16:28:05.36432 level=error ts=2021-05-16T16:28:05.364Z caller=manager.go:188 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-nodes
2021-05-16_16:28:05.36441 level=error ts=2021-05-16T16:28:05.364Z caller=manager.go:188 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-cadvisor
2021-05-16_16:28:05.36519 level=error ts=2021-05-16T16:28:05.364Z caller=manager.go:188 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-pods
2021-05-16_17:27:31.75536 time="2021-05-16T17:27:31Z" level=error msg="unknown flag `no-auto'"
root@gitlab:~# sudo gitlab-ctl tail gitlab-rails | grep error
{"method":"GET","path":"/team/repository/-/commit/e7b1d2c39216a1f504936bbfcfe253d41f640e02","format":"html","controller":"Projects::CommitController","action":"show","status":500,"time":"2021-05-16T17:34:02.300Z","params":[{"key":"namespace_id","value":"team"},{"key":"project_id","value":"repository"},{"key":"id","value":"e7b1d2c39216a1f504936bbfcfe253d41f640e02"}],"remote_ip":"96.241.79.64","user_id":4,"username":"names_are_useless","ua":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0","correlation_id":"01F5V3TM1EA847DT76NQ365TEW","meta.user":"names_are_useless","meta.project":"team/repository","meta.root_namespace":"team","meta.caller_id":"Projects::CommitController#show","meta.remote_ip":"96.241.79.64","meta.feature_category":"source_code_management","meta.client_id":"user/4","gitaly_calls":6,"gitaly_duration_s":40.411898,"rugged_calls":2,"rugged_duration_s":0.010335,"redis_calls":19,"redis_duration_s":0.009139000000000001,"redis_read_bytes":3471,"redis_write_bytes":2434,"redis_cache_calls":18,"redis_cache_duration_s":0.008155,"redis_cache_read_bytes":3290,"redis_cache_write_bytes":1129,"redis_shared_state_calls":1,"redis_shared_state_duration_s":0.000984,"redis_shared_state_read_bytes":181,"redis_shared_state_write_bytes":1305,"db_count":20,"db_write_count":0,"db_cached_count":1,"cpu_s":13.417942,"mem_objects":233814,"mem_bytes":1686836123,"mem_mallocs":99567,"exception.class":"ActionView::Template::Error","exception.message":"4:Deadline Exceeded. debug_error_string:{\"created\":\"@1621186442.000818752\",\"description\":\"Deadline Exceeded\",\"file\":\"src/core/ext/filters/deadline/deadline_filter.cc\",\"file_line\":69,\"grpc_status\":4}","exception.backtrace":[],"db_duration_s":0.04322,"view_duration_s":0.0,"duration_s":41.93531}
root@gitlab:~# sudo gitlab-ctl tail nginx/gitlab_error.log
2021/05/16 16:15:10 [crit] 19546#0: *94 connect() to unix:/var/opt/gitlab/gitlab-workhorse/socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.0.101, server: git.website.com, request: "GET /team/Project.git/info/refs?service=git-upload-pack HTTP/1.1", upstream: "http://unix:/var/opt/gitlab/gitlab-workhorse/socket:/team/Project.git/info/refs?service=git-upload-pack", host: "git.website.com"

Searching the logs for “Deadline”:

root@gitlab:~# less /var/log/gitlab/gitlab-rails/production.log
ActionView::Template::Error (4:Deadline Exceeded):
1: - if commit.has_signature?
2:   %a{ href: 'javascript:void(0)', tabindex: 0, class: commit_signature_badge_classes('js-loading-gpg-badge'), data: { toggle: 'tooltip', placement: 'top', title: _('GPG signature (loading...)'), 'commit-sha' => commit.sha } }
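From what I can tell, that grpc_status: 4 means a Gitaly gRPC call hit its deadline (the Rails log above shows a gitaly_duration_s of about 40 seconds for the request). If it really is just a timeout on this one huge commit, the Gitaly client timeouts can be raised in /etc/gitlab/gitlab.rb. The values below are only the documented defaults, shown as a sketch, not a confirmed fix:

```ruby
# /etc/gitlab/gitlab.rb — Gitaly client timeouts used by Rails (in seconds).
# Values shown are the documented defaults; raise them and run
# `sudo gitlab-ctl reconfigure` to apply. This is a possible workaround,
# not a confirmed fix for the 500.
gitlab_rails['gitaly_timeout_default'] = 55
gitlab_rails['gitaly_timeout_fast']    = 10
gitlab_rails['gitaly_timeout_medium']  = 30
```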

What have I tried so far?

I tried restarting GitLab; it didn’t fix the problem.

I tried restarting the VM that GitLab runs in; that didn’t fix it either.

I tried updating my /etc/gitlab/gitlab.rb file as per this suggested solution:

gitaly['ruby_num_workers'] = 32
unicorn['worker_processes'] = 12

Both were previously commented out. I saved these changes and restarted GitLab; it didn’t fix the problem.

I asked my developer if he could revert the commit and try recommitting, but (according to him):

it doesn’t look like I can edit the commit, it will only let me push it. I’m trying to see if there’s a way I can edit the commit so I can commit it again.
I have GitHub Desktop open; I reverted the commit, and none of the changed files are showing up in my changes folder so I can try committing again.
I’m actually unsure where to go from here.


Anyone here think they can help?

I have no idea what the real issue is; here are just some random debugging ideas:

  1. Have you tried a git clone over SSH, to check that the repository isn’t broken? How about a git clone over HTTPS?
  2. Maybe try cleaning up Redis?
  3. How about checking the Gitaly client-side logs?

sudo GRPC_TRACE=all GRPC_VERBOSITY=DEBUG gitlab-rake gitlab:gitaly:check

Any useful information there?
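For point 1, I mean something like the following: a self-contained sketch of the git fsck integrity check, demonstrated here on a throwaway repo (on your side you would first clone team/repository over SSH and over HTTPS, then run the same check inside each clone):

```shell
# Sketch of the repository integrity check from point 1, demonstrated on a
# throwaway repo. On the real server, first clone the affected project over
# both transports (URLs are placeholders):
#   git clone git@git.website.com:team/repository.git
#   git clone https://git.website.com/team/repository.git
# then run `git fsck --full` inside each clone.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git fsck --full && echo "object database OK"
```

On a healthy repository, `git fsck --full` prints nothing (or only dangling-object notices); hard errors there would point at repository corruption rather than a GitLab bug.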

What do you mean, @Jie? Could you explain? Do you mean whether someone has tried cloning the project recently?

Maybe try cleaning up Redis?

How would I do this? Sorry, I’m still learning a lot about Git (and GitLab).

How about checking the Gitaly client-side logs?

sudo GRPC_TRACE=all GRPC_VERBOSITY=DEBUG gitlab-rake gitlab:gitaly:check

Any useful information there?

I don’t see anything immediately useful; here’s the output. Do you see anything?


Also, an update: the developer reverted the commit, so it’s no longer in the branch. However, the previous two commits he made now lead to 500 - Whoops, something went wrong on our end. pages, while other commits in the repo don’t.

In fact, I’m finding that some (but not all) of his commits lead to the 500 - Whoops, something went wrong on our end. page, and no one else’s do.

Could this be a problem related to his GitLab account? But then again, not all of his commits lead to 500s.