An error occurred while fetching folder content

I had a similar issue (Gitlab Upgrade graphql and statistics now 403) where folder contents were not loading after a recent upgrade, if that is any help.

1 Like

Check this: Password expired error on git fetch via SSH for LDAP user (#332455) · Issues · GitLab.org / GitLab · GitLab

2 Likes

This also happened to me and turned out to be an incorrect reverse proxy configuration: the “Host” and “X-Forwarded-Proto” headers were not set to match external_url, which had still worked in earlier versions.
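
For what it’s worth, the relevant part of my proxy config now looks roughly like this (assuming nginx as the external reverse proxy; hostnames and ports are placeholders, so adapt them to your setup):

	# external reverse proxy in front of GitLab -- sketch only
	location / {
	    proxy_set_header Host              $http_host;                  # must match what external_url expects
	    proxy_set_header X-Forwarded-Proto https;                       # GitLab is reached over https externally
	    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
	    proxy_pass http://127.0.0.1:8080;                               # wherever your GitLab instance listens internally
	}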

1 Like

Good morning folks! I had the same problem, but with the LDAP user integration. To resolve it, I deleted the user and asked him to log in again with his LDAP credentials. With that, the user is created again automatically. It worked for me.
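
If anyone prefers to do the deletion from the Rails console instead of the Admin Area, a rough sketch (the username is a placeholder, and Admin Area > Overview > Users is the safer, supported route):

	# inside `sudo gitlab-rails console` -- username below is a placeholder
	user = User.find_by(username: 'jane.doe')
	user&.destroy
	# the account is recreated automatically on the next successful LDAP sign-in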

1 Like

I upgraded to version 13.12.1 and it’s OK; on 13.8.8 it’s not good.

Thank you very much, Daniel, for your contribution. Thanks to your help I managed to solve a user’s problem.

GitLab CE 14.10.2 is having the issue. Restarting seems to clear it up for a while.

index.js:29 Uncaught Error: Timeout on validation of query
at new t (index.js:29:28)
at QueryManager.js:592:47
at r (asyncMap.js:16:53)
at asyncMap.js:9:72
at new Promise ()
at Object.then (asyncMap.js:9:24)
at Object.next (asyncMap.js:17:49)
at O (module.js:132:18)
at j (module.js:176:3)
at e.t.next (module.js:225:5)

Still happening with 14.8.4

Also occurs in 14.10.2

The solution is a GitLab restart.

@mkaatman @alecvinent @alexcha Could you add more detail (e.g., the URL path without your host, a reproducible procedure, and/or screenshots)?

Mine is the same as the original screenshot. I have it virtualized in Proxmox with slow WD Red drives, and I suspect that somewhere I’m hitting a timeout.

If you could point me to the error log path I can check if there are any useful details.

I don’t have the path handy, but I log in to GitLab, choose a project from the list, and it fails.

1 Like

I think you could copy and paste a server-side log from your GitLab instance. Otherwise I cannot help you address this.

A client-side error log is not enough for me to work with, though another GitLab developer may be able to do more with it.

I tried looking through the server side logs but I couldn’t find anything. Any tips to track it down? I’m not sure which log file I should expect it in.

I did notice it was a POST to http://gitlab.mydomain.com/api/graphql that returns a 500 with “Internal server error”.
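
To check whether the endpoint itself answers outside the browser, a quick probe could look like this (the token is a placeholder, and it sends a trivial currentUser query rather than the repository tree query the page actually issues):

	# token is a placeholder; this only verifies that /api/graphql responds at all
	curl -i -X POST "http://gitlab.mydomain.com/api/graphql" \
	  -H "Authorization: Bearer <personal-access-token>" \
	  -H "Content-Type: application/json" \
	  -d '{"query": "{ currentUser { username } }"}'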

I didn’t see anything meaningful in /var/log/gitlab/gitlab-rails/production.log

I did notice this after refreshing a few times.

Okay, I got the error today and I grepped the logs for the ID and found this:

	"request_urgency": "low",
	"target_duration_s": 5,
	"redis_calls": 53,
	"redis_duration_s": 0.128596,
	"redis_read_bytes": 1981,
	"redis_write_bytes": 7949,
	"redis_cache_calls": 51,
	"redis_cache_duration_s": 0.037397,
	"redis_cache_read_bytes": 1798,
	"redis_cache_write_bytes": 6738,
	"redis_shared_state_calls": 1,
	"redis_shared_state_duration_s": 0.021431,
	"redis_shared_state_write_bytes": 53,
	"redis_sessions_calls": 1,
	"redis_sessions_duration_s": 0.069768,
	"redis_sessions_read_bytes": 183,
	"redis_sessions_write_bytes": 1158,
	"db_count": 57,
	"db_write_count": 0,
	"db_cached_count": 8,
	"db_replica_count": 0,
	"db_primary_count": 57,
	"db_main_count": 57,
	"db_main_replica_count": 0,
	"db_replica_cached_count": 0,
	"db_primary_cached_count": 8,
	"db_main_cached_count": 8,
	"db_main_replica_cached_count": 0,
	"db_replica_wal_count": 0,
	"db_primary_wal_count": 0,
	"db_main_wal_count": 0,
	"db_main_replica_wal_count": 0,
	"db_replica_wal_cached_count": 0,
	"db_primary_wal_cached_count": 0,
	"db_main_wal_cached_count": 0,
	"db_main_replica_wal_cached_count": 0,
	"db_replica_duration_s": 0.0,
	"db_primary_duration_s": 0.19,
	"db_main_duration_s": 0.19,
	"db_main_replica_duration_s": 0.0,
	"cpu_s": 2.548205,
	"mem_objects": 648314,
	"mem_bytes": 108218366,
	"mem_mallocs": 419364,
	"mem_total_bytes": 134150926,
	"pid": 3518858,
	"worker_id": "puma_1",
	"rate_limiting_gates": [],
	"exception.class": "Rack::Timeout::RequestTimeoutException",
	"exception.message": "Request ran for longer than 60000ms ",
	"exception.backtrace": ["app/views/projects/show.html.haml:18", "app/controllers/application_controller.rb:142:in `render'", "app/controllers/application_controller.rb:582:in `block in allow_gitaly_ref_name_caching'", "lib/gitlab/gitaly_client.rb:323:in `allow_ref_name_caching'", "app/controllers/application_controller.rb:581:in `allow_gitaly_ref_name_caching'", "app/controllers/application_controller.rb:531:in `set_current_admin'", "lib/gitlab/session.rb:11:in `with_session'", "app/controllers/application_controller.rb:522:in `set_session_storage'", "lib/gitlab/i18n.rb:107:in `with_locale'", "lib/gitlab/i18n.rb:113:in `with_user_locale'", "app/controllers/application_controller.rb:516:in `set_locale'", "app/controllers/application_controller.rb:510:in `set_current_context'", "lib/gitlab/middleware/memory_report.rb:13:in `call'", "lib/gitlab/middleware/speedscope.rb:13:in `call'", "lib/gitlab/database/load_balancing/rack_middleware.rb:23:in `call'", "lib/gitlab/jira/middleware.rb:19:in `call'", "lib/gitlab/middleware/go.rb:20:in `call'", "lib/gitlab/etag_caching/middleware.rb:21:in `call'", "lib/gitlab/middleware/query_analyzer.rb:11:in `block in call'", "lib/gitlab/database/query_analyzer.rb:37:in `within'", "lib/gitlab/middleware/query_analyzer.rb:11:in `call'", "lib/gitlab/middleware/multipart.rb:173:in `call'", "lib/gitlab/middleware/read_only/controller.rb:50:in `call'", "lib/gitlab/middleware/read_only.rb:18:in `call'", "lib/gitlab/middleware/same_site_cookies.rb:27:in `call'", "lib/gitlab/middleware/handle_malformed_strings.rb:21:in `call'", "lib/gitlab/middleware/basic_health_check.rb:25:in `call'", "lib/gitlab/middleware/handle_ip_spoof_attack_error.rb:25:in `call'", "lib/gitlab/middleware/request_context.rb:21:in `call'", "lib/gitlab/middleware/webhook_recursion_detection.rb:15:in `call'", "config/initializers/fix_local_cache_middleware.rb:11:in `call'", "lib/gitlab/middleware/compressed_json.rb:26:in `call'", "lib/gitlab/middleware/rack_multipart_tempfile_factory.rb:19:in `call'", "lib/gitlab/middleware/sidekiq_web_static.rb:20:in `call'", "lib/gitlab/metrics/requests_rack_middleware.rb:77:in `call'", "lib/gitlab/middleware/release_env.rb:13:in `call'"],
	"db_duration_s": 0.34026,
	"view_duration_s": 0.0,
	"duration_s": 34.30009

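For reference, the grep was along these lines (the correlation ID below is a placeholder, and the log path is an assumption for an Omnibus install; adjust it to wherever your instance writes its JSON request logs):

	# correlation ID is a placeholder copied from the failing request's log entry
	sudo grep -F '01F3XAMPLE' /var/log/gitlab/gitlab-rails/production_json.log
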
On refresh it worked, so for some reason my system is just hitting the 60 second timeout. Any way to increase it a bit?

@tnir does this help? What’s interesting is that I would have expected the durations in the JSON to be greater than 60, but duration_s is only about 34.
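
If I remember right, the Rack request timeout on Omnibus installs can be raised through an environment variable in /etc/gitlab/gitlab.rb. A sketch of what I mean (treat the variable name and the 120-second value as assumptions and check the docs for your version first):

	# /etc/gitlab/gitlab.rb -- sketch; assumes Omnibus honours GITLAB_RAILS_RACK_TIMEOUT
	gitlab_rails['env'] = {
	  'GITLAB_RAILS_RACK_TIMEOUT' => '120'   # default request timeout is 60 seconds
	}
	# afterwards: sudo gitlab-ctl reconfigure && sudo gitlab-ctl restart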

I found this thread and I increased my CPUs to 4 within Proxmox LXC. I guess I’ll see if that helps.

1 Like

Hi! Did you solve this problem? I can’t fix it. We have had ‘An error occurred while fetching folder content’ on the repository page for weeks, and files will not load, but ‘Find file’ works fine. I tried the suggested fix of `update users set password_expires_at = null where username=''`, which did not work. Our version was 13.12.15. After updating to 14.0.12, we still have this problem.
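
For reference, the Rails-console equivalent of that SQL would be something like the following (the username is a placeholder):

	# inside `sudo gitlab-rails console` -- username below is a placeholder
	user = User.find_by(username: 'jane.doe')
	user&.update!(password_expires_at: nil)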