GitLab Commit: 500 Error

I hope I’ve posted this in the right place.

So I run a private GitLab Server on the latest version: 13.11.4

The Problem

Recently one of our Developers made a large commit to one of his branches, and when you click the Commit Hash, the page returns 500 - Whoops, something went wrong on our end.

No other commits in this GitLab Project have this error. Thankfully this commit is not on the master branch.

I should also mention I haven’t even filled 50% of the Disk Space on this VM, and Memory Usage is staying below 50%, so I really don’t think the issue is related to Disk Space or Memory.
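
(For reference, nothing fancy here; I’m just going off the usual commands on the VM, e.g.:)

df -h    # disk usage on the VM (well under 50%)
free -h  # memory usage (also staying below 50%)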

What could be causing this, and is there a solution? Let me share some of the current logs and the most interesting bits I could find (obviously I’ve changed the Names of the Project, Team and Repository):

root@gitlab:~# sudo gitlab-ctl tail | grep error
==> /var/log/gitlab/nginx/error.log <==
==> /var/log/gitlab/nginx/gitlab_error.log <==
2021-05-16_17:27:21.61109 time="2021-05-16T17:27:21Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:22.62482 time="2021-05-16T17:27:22Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:23.63828 time="2021-05-16T17:27:23Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:24.65270 time="2021-05-16T17:27:24Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:25.66763 time="2021-05-16T17:27:25Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:26.68145 time="2021-05-16T17:27:26Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:27.69607 time="2021-05-16T17:27:27Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:28.70908 time="2021-05-16T17:27:28Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:29.72567 time="2021-05-16T17:27:29Z" level=error msg="unknown flag `no-auto'"
2021-05-16_17:27:30.73896 time="2021-05-16T17:27:30Z" level=error msg="unknown flag `no-auto'"
2021-05-16_16:27:34.11324 {"@level":"debug","@message":"datasource: registering query type handler","@timestamp":"2021-05-16T16:27:34.112712Z","queryType":"random_walk_with_error"}
2021-05-16_16:27:34.11324 {"@level":"debug","@message":"datasource: registering query type handler","@timestamp":"2021-05-16T16:27:34.112734Z","queryType":"server_error_500"}
2021-05-16_16:27:34.21788 t=2021-05-16T16:27:34+0000 lvl=eror msg="Failed to read plugin provisioning files from directory" logger=provisioning.plugins path=/var/opt/gitlab/grafana/provisioning/plugins error="open /var/opt/gitlab/grafana/provisioning/plugins: no such file or directory"
2021-05-16_16:28:00.28927 level=error ts=2021-05-16T16:28:00.289Z caller=manager.go:314 component="discovery manager scrape" msg="Cannot create service discovery" err="unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" type=kubernetes
2021-05-16_16:28:05.36432 level=error ts=2021-05-16T16:28:05.364Z caller=manager.go:188 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-nodes
2021-05-16_16:28:05.36441 level=error ts=2021-05-16T16:28:05.364Z caller=manager.go:188 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-cadvisor
2021-05-16_16:28:05.36519 level=error ts=2021-05-16T16:28:05.364Z caller=manager.go:188 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-pods
2021-05-16_17:27:31.75536 time="2021-05-16T17:27:31Z" level=error msg="unknown flag `no-auto'"
root@gitlab:~# sudo gitlab-ctl tail gitlab-rails | grep error
{"method":"GET","path":"/team/repository/-/commit/e7b1d2c39216a1f504936bbfcfe253d41f640e02","format":"html","controller":"Projects::CommitController","action":"show","status":500,"time":"2021-05-16T17:34:02.300Z","params":[{"key":"namespace_id","value":"team"},{"key":"project_id","value":"repository"},{"key":"id","value":"e7b1d2c39216a1f504936bbfcfe253d41f640e02"}],"remote_ip":"96.241.79.64","user_id":4,"username":"names_are_useless","ua":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0","correlation_id":"01F5V3TM1EA847DT76NQ365TEW","meta.user":"names_are_useless","meta.project":"team/repository","meta.root_namespace":"team","meta.caller_id":"Projects::CommitController#show","meta.remote_ip":"96.241.79.64","meta.feature_category":"source_code_management","meta.client_id":"user/4","gitaly_calls":6,"gitaly_duration_s":40.411898,"rugged_calls":2,"rugged_duration_s":0.010335,"redis_calls":19,"redis_duration_s":0.009139000000000001,"redis_read_bytes":3471,"redis_write_bytes":2434,"redis_cache_calls":18,"redis_cache_duration_s":0.008155,"redis_cache_read_bytes":3290,"redis_cache_write_bytes":1129,"redis_shared_state_calls":1,"redis_shared_state_duration_s":0.000984,"redis_shared_state_read_bytes":181,"redis_shared_state_write_bytes":1305,"db_count":20,"db_write_count":0,"db_cached_count":1,"cpu_s":13.417942,"mem_objects":233814,"mem_bytes":1686836123,"mem_mallocs":99567,"exception.class":"ActionView::Template::Error","exception.message":"4:Deadline Exceeded. debug_error_string:{\"created\":\"@1621186442.000818752\",\"description\":\"Deadline Exceeded\",\"file\":\"src/core/ext/filters/deadline/deadline_filter.cc\",\"file_line\":69,\"grpc_status\":4}","exception.backtrace":[],"db_duration_s":0.04322,"view_duration_s":0.0,"duration_s":41.93531}
root@gitlab:~# sudo gitlab-ctl tail nginx/gitlab_error.log
2021/05/16 16:15:10 [crit] 19546#0: *94 connect() to unix:/var/opt/gitlab/gitlab-workhorse/socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.0.101, server: git.website.com, request: "GET /team/Project.git/info/refs?service=git-upload-pack HTTP/1.1", upstream: "http://unix:/var/opt/gitlab/gitlab-workhorse/socket:/team/Project.git/info/refs?service=git-upload-pack", host: "git.website.com"

Searched for “Deadline”:

root@gitlab:~# less /var/log/gitlab/gitlab-rails/production.log
ActionView::Template::Error (4:Deadline Exceeded):
1: - if commit.has_signature?
2:   %a{ href: 'javascript:void(0)', tabindex: 0, class: commit_signature_badge_classes('js-loading-gpg-badge'), data: { toggle: 'tooltip', placement: 'top', title: _('GPG signature (loading...)'), 'commit-sha' => commit.sha } }

What have I tried so far?

I tried restarting GitLab; it didn’t fix the problem.

I tried restarting the VM GitLab runs on; it didn’t fix the problem.

I tried updating my /etc/gitlab/gitlab.rb file as per this suggested solution:

gitaly['ruby_num_workers'] = 32
unicorn['worker_processes'] = 12

Previously both were commented out. I saved these changes and restarted GitLab; it didn’t fix the problem.
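
(To be clear, by “restarted GitLab” I mean the usual Omnibus commands after editing /etc/gitlab/gitlab.rb:)

sudo gitlab-ctl reconfigure   # re-apply the settings from /etc/gitlab/gitlab.rb
sudo gitlab-ctl restart       # restart the GitLab services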

I asked my Developer if he could revert the Commit and try recommitting, but (according to him):

it doesn’t look like I can edit the commit, it will only let me push it. I’m trying to see if there’s a way I can edit the commit so I can commit it again
I have GitHub Desktop open, I reverted the commit and none of the changed files are showing up in my changes folder so I can try committing again.
I’m actually unsure where to go from here


Anyone here think they can help?

I have no idea what the real issue is, just some random debugging ideas:

  1. Have you tried a git clone over SSH to check that the repository itself is not broken? How about a git clone over HTTPS? (Example commands below.)
  2. Maybe try cleaning up Redis?
  3. How about checking the Gitaly client-side log?

sudo GRPC_TRACE=all GRPC_VERBOSITY=DEBUG gitlab-rake gitlab:gitaly:check

Any useful information there?
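
For points 1 and 2 I mean something like the following (the host and repository path are only placeholders taken from your logs, adjust them to your instance):

# 1. check that the repository itself is readable, over both SSH and HTTPS
git clone git@git.website.com:team/repository.git
git clone https://git.website.com/team/repository.git
# 2. on an Omnibus install, clear the Rails cache that lives in Redis
sudo gitlab-rake cache:clear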

What do you mean, @Jie? Could you explain? Do you mean whether someone has tried git cloning the project recently?

Maybe try cleaning up Redis?

How would I do this? Sorry, I’m still learning a lot about Git (and GitLab).

How about checking the Gitaly client-side log?

sudo GRPC_TRACE=all GRPC_VERBOSITY=DEBUG gitlab-rake gitlab:gitaly:check

Any useful information there?

I don’t know if I see anything useful immediately; here’s the output. Do you see anything?


Also, to update: the Developer reverted the commit, so it’s no longer in the Branch. However, now the previous 2 Commits he made lead to 500 - Whoops, something went wrong on our end. pages, while other Commits in the repo don’t.

In fact, I’m finding that only some (but not all) of his Commits lead to the 500 - Whoops, something went wrong on our end. page, and no one else’s do.

Could this be a problem related to his GitLab account? But then not all his Commits lead to 500s.

I know this is a really old Topic, but I wanted to resurrect it: I’m still getting quite a few 500 - Whoops, something went wrong on our end. errors when viewing (what I assume are) larger commits in the GitLab Web GUI.

My private GitLab Server is on almost the latest version: 14.0.3

I should also mention that CPU and Memory Usage both hardly ever exceed 50%, and Disk Space used on the VM is also below 50%. I see no spike in CPU or Memory usage when attempting to view Commits that throw the 500 error.

Logs

Let me now share the current logs:

root@gitlab:~# sudo gitlab-ctl tail | grep error
==> /var/log/gitlab/nginx/error.log <==
==> /var/log/gitlab/nginx/gitlab_error.log <==
2021-07-07_17:44:36.24628 {"@level":"debug","@message":"datasource: registering query type handler","@timestamp":"2021-07-07T17:44:36.246243Z","queryType":"random_walk_with_error"}
2021-07-07_17:44:36.24640 {"@level":"debug","@message":"datasource: registering query type handler","@timestamp":"2021-07-07T17:44:36.246320Z","queryType":"server_error_500"}
2021-07-07_17:44:36.28918 t=2021-07-07T17:44:36+0000 lvl=eror msg="Failed to read plugin provisioning files from directory" logger=provisioning.plugins path=/var/opt/gitlab/grafana/provisioning/plugins error="open /var/opt/gitlab/grafana/provisioning/plugins: no such file or directory"
{"method":"GET","path":"/group/project/-/commit/33c6adb7db5e62d81cfff59d3605a84161121ca6","format":"html","controller":"Projects::CommitController","action":"show","status":500,"time":"2021-07-08T13:32:03.462Z","params":[{"key":"namespace_id","value":"group"},{"key":"project_id","value":"project"},{"key":"id","value":"33c6adb7db5e62d81cfff59d3605a84161121ca6"}],"remote_ip":"50.246.69.13","user_id":4,"username":"names_are_useless","ua":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0","correlation_id":"01FA351QVZ4HTM8QFTDJNH0PZA","meta.user":"names_are_useless","meta.project":"group/project","meta.root_namespace":"group","meta.caller_id":"Projects::CommitController#show","meta.remote_ip":"50.246.69.13","meta.feature_category":"source_code_management","meta.client_id":"user/4","gitaly_calls":4,"gitaly_duration_s":37.658303,"redis_calls":11,"redis_duration_s":0.005872,"redis_read_bytes":1792,"redis_write_bytes":1370,"redis_cache_calls":10,"redis_cache_duration_s":0.005084,"redis_cache_read_bytes":1611,"redis_cache_write_bytes":622,"redis_shared_state_calls":1,"redis_shared_state_duration_s":0.000788,"redis_shared_state_read_bytes":181,"redis_shared_state_write_bytes":748,"db_count":20,"db_write_count":0,"db_cached_count":1,"cpu_s":0.815755,"mem_objects":119323,"mem_bytes":18024027,"mem_mallocs":78578,"mem_total_bytes":22796947,"exception.class":"ActionView::Template::Error","exception.message":"4:Deadline Exceeded. debug_error_string:{\"created\":\"@1625751115.000566185\",\"description\":\"Deadline Exceeded\",\"file\":\"src/core/ext/filters/deadline/deadline_filter.cc\",\"file_line\":69,\"grpc_status\":4}","exception.backtrace":[],"db_duration_s":0.15521,"view_duration_s":0.0,"duration_s":38.45785}
root@gitlab:~# sudo gitlab-ctl tail gitlab-rails | grep error
{"method":"GET","path":"/group/project/-/commit/33c6adb7db5e62d81cfff59d3605a84161121ca6","format":"html","controller":"Projects::CommitController","action":"show","status":500,"time":"2021-07-08T13:32:03.462Z","params":[{"key":"namespace_id","value":"group"},{"key":"project_id","value":"project"},{"key":"id","value":"33c6adb7db5e62d81cfff59d3605a84161121ca6"}],"remote_ip":"50.246.69.13","user_id":4,"username":"names_are_useless","ua":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0","correlation_id":"01FA351QVZ4HTM8QFTDJNH0PZA","meta.user":"names_are_useless","meta.project":"group/project","meta.root_namespace":"group","meta.caller_id":"Projects::CommitController#show","meta.remote_ip":"50.246.69.13","meta.feature_category":"source_code_management","meta.client_id":"user/4","gitaly_calls":4,"gitaly_duration_s":37.658303,"redis_calls":11,"redis_duration_s":0.005872,"redis_read_bytes":1792,"redis_write_bytes":1370,"redis_cache_calls":10,"redis_cache_duration_s":0.005084,"redis_cache_read_bytes":1611,"redis_cache_write_bytes":622,"redis_shared_state_calls":1,"redis_shared_state_duration_s":0.000788,"redis_shared_state_read_bytes":181,"redis_shared_state_write_bytes":748,"db_count":20,"db_write_count":0,"db_cached_count":1,"cpu_s":0.815755,"mem_objects":119323,"mem_bytes":18024027,"mem_mallocs":78578,"mem_total_bytes":22796947,"exception.class":"ActionView::Template::Error","exception.message":"4:Deadline Exceeded. debug_error_string:{\"created\":\"@1625751115.000566185\",\"description\":\"Deadline Exceeded\",\"file\":\"src/core/ext/filters/deadline/deadline_filter.cc\",\"file_line\":69,\"grpc_status\":4}","exception.backtrace":[],"db_duration_s":0.15521,"view_duration_s":0.0,"duration_s":38.45785}
root@gitlab:~# sudo gitlab-ctl tail nginx/gitlab_error.log

(Nothing, unlike before)

root@gitlab:~# less /var/log/gitlab/gitlab-rails/production.log | grep Deadline
ActionView::Template::Error (4:Deadline Exceeded. debug_error_string:{"created":"@1625748238.000854607","description":"Deadline Exceeded","file":"src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}):
ActionView::Template::Error (4:Deadline Exceeded. debug_error_string:{"created":"@1625748277.000990067","description":"Deadline Exceeded","file":"src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}):
ActionView::Template::Error (4:Deadline Exceeded. debug_error_string:{"created":"@1625748299.000615308","description":"Deadline Exceeded","file":"src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}):
ActionView::Template::Error (4:Deadline Exceeded. debug_error_string:{"created":"@1625748498.001019078","description":"Deadline Exceeded","file":"src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}):
ActionView::Template::Error (4:Deadline Exceeded. debug_error_string:{"created":"@1625749715.000427988","description":"Deadline Exceeded","file":"src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}):
ActionView::Template::Error (4:Deadline Exceeded. debug_error_string:{"created":"@1625751031.000838431","description":"Deadline Exceeded","file":"src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}):
ActionView::Template::Error (4:Deadline Exceeded. debug_error_string:{"created":"@1625751115.000566185","description":"Deadline Exceeded","file":"src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}):

(If I tail this log, another entry appears any time I get a 500 error in the Web GUI.)
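
In other words, something like this prints a new entry for every 500 in the Web GUI:

tail -f /var/log/gitlab/gitlab-rails/production.log | grep "Deadline Exceeded"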

It’s been almost two months since I initially posted about this problem, and I’m still getting Deadline Exceeded errors, which I still suspect are the main culprit.

What have I tried so far?

  1. Restarting GitLab doesn’t fix the problem.

  2. Restarting the VM GitLab is installed on doesn’t fix the problem.

  3. As per this suggested solution, I tried updating my /etc/gitlab/gitlab.rb file (I’m using Puma as opposed to Unicorn now):

gitaly['ruby_num_workers'] = 64
puma['worker_processes'] = 12

  • This doesn’t fix the issue.
  4. Ran Housekeeping on the project: opened the Project’s Settings → General → Advanced → Housekeeping and clicked the Run housekeeping button. This doesn’t fix the issue. (See the API example after this list for another way to trigger it.)

  5. I tried this recommendation. I discovered that 57 is the maximum value that can be entered for any of these timeouts (as you can see in this image), so …

Default Values:
Default Timeout Period: 55
Fast Timeout Period: 10
Medium Timeout Period: 30

New Values:
Default Timeout Period: 57
Fast Timeout Period: 57
Medium Timeout Period: 57

  • This has actually helped … somewhat. I’m able to access several Commits now that were giving me 500 errors, but there are still others giving me 500 errors.
  6. I figured out how to increase the Max Gitaly Timeout value: by increasing puma['worker_timeout'] in /etc/gitlab/gitlab.rb. By default it’s 60. I changed it to 120 (and also increased puma['worker_processes'] to 24), then performed a sudo gitlab-ctl reconfigure without error. Going back into Gitaly Preferences, the new max value is 114. So I modified the Values again:

New Values:
Default Timeout Period: 114
Fast Timeout Period: 114
Medium Timeout Period: 114

  • Unfortunately, this hasn’t made any noticeable difference for the Commits still giving me 500 errors. Perhaps I need to go even higher? Right now, on the hardware side, all I’m really seeing is a bump in CPU usage, and even then I have plenty to spare. However …
  • I am now getting a new error when I run tail -f /var/log/gitlab/gitlab-rails/production.log (I no longer get the Deadline Exceeded error):
Rack::Timeout::RequestTimeoutException (Request ran for longer than 60000ms):

grpc-1.30.2-x86_64 (linux) src/ruby/lib/grpc/generic/active_call.rb:253:in `run_batch'
grpc-1.30.2-x86_64 (linux) src/ruby/lib/grpc/generic/active_call.rb:253:in `remote_read'
grpc-1.30.2-x86_64 (linux) src/ruby/lib/grpc/generic/active_call.rb:327:in `block in each_remote_read_then_finish'
grpc-1.30.2-x86_64 (linux) src/ruby/lib/grpc/generic/active_call.rb:324:in `loop'
grpc-1.30.2-x86_64 (linux) src/ruby/lib/grpc/generic/active_call.rb:324:in `each_remote_read_then_finish'
puma: cluster worker 1: 12463 [gitlab-puma-worker]:in `each'
Processing by Gitlab::RequestForgeryProtection::Controller#index as HTML
Completed 200 OK in 2ms (ActiveRecord: 0.0ms | Elasticsearch: 0.0ms | Allocations: 86)
Started GET "/api/v4/users/87" for 50.246.69.13 at 2021-07-08 18:18:22 +0000
Processing by Gitlab::RequestForgeryProtection::Controller#index as HTML
Completed 200 OK in 1ms (ActiveRecord: 0.0ms | Elasticsearch: 0.0ms | Allocations: 85)
Started GET "/api/v4/users/87/status" for 50.246.69.13 at 2021-07-08 18:18:22 +0000
Processing by Gitlab::RequestForgeryProtection::Controller#index as HTML
Completed 200 OK in 1ms (ActiveRecord: 0.0ms | Elasticsearch: 0.0ms | Allocations: 85)
Processing by Gitlab::RequestForgeryProtection::Controller#index as HTML
Completed 200 OK in 2ms (ActiveRecord: 0.0ms | Elasticsearch: 0.0ms | Allocations: 131)
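
(As mentioned in item 4: besides the button in the UI, Housekeeping can also be triggered through the API. Something like this should be equivalent; the token and project ID are placeholders:)

curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" "https://git.website.com/api/v4/projects/<project_id>/housekeeping"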

Any recommendations?

OK, I … think I’ve resolved the issue, although I feel the resolution could be a lot more efficient. I’ve modified my Puma settings in /etc/gitlab/gitlab.rb and performed a sudo gitlab-ctl reconfigure without error. Here’s what they look like right now:

puma['enable'] = true
puma['worker_timeout'] = 120
puma['worker_processes'] = 24

# Change the worker timeout:
gitlab_rails['env'] = {
   'GITLAB_RAILS_RACK_TIMEOUT' => 600
}

### **Only change these settings if you understand well what they mean**
###! Docs: https://github.com/schneems/puma_worker_killer
puma['per_worker_max_memory_mb'] = 1024
  • As per this documentation, I don’t think increasing puma['worker_timeout'] was helping much (aside from letting me enter higher Gitaly Timeout values), so I added the code shown there (GITLAB_RAILS_RACK_TIMEOUT).
  • I increased per_worker_max_memory_mb from the default of 650 to 1024, as per the documentation here.

It can take a minute to view certain Commits, but I’ve yet to receive the 500 Error when viewing any Commit. Again, though, I feel there must be a more elegant solution here …
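
(For what it’s worth, my check was simply to watch the Rails log while opening the Commits that used to fail and confirm that no new 500 entries show up:)

sudo gitlab-ctl tail gitlab-rails | grep '"status":500'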
