OdbError: object not found - cannot read header (An error occurred while fetching folder content.)

I was running GitLab in a Docker container, but I had to move the volume files to another drive. By mistake I also started the container with a newer image and, of course, stupid me messed up the permissions of the data directory in the process. I was able to run the “update-permissions” script and everything seemed to be working fine, except that some of the repositories now show this issue.
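For anyone hitting the same permissions problem: the official omnibus-based image ships a wrapper for exactly this. A sketch, assuming the `gitlab/gitlab-ce` image and a container named `gitlab` (adjust the name to your setup):

```shell
# Fix ownership/permissions of GitLab's data directories inside the container,
# then restart the container so all services pick the changes up.
docker exec -it gitlab update-permissions
docker restart gitlab
```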

/api/graphql returns a 500 internal server error.
The logs give me this:

Gitlab::Git::CommandError (2:Rugged::OdbError: object not found - cannot read header for (213e32d15ed013354a21f3914af156eb8a0abc64).): 
  lib/gitlab/git/wraps_gitaly_errors.rb:15:in `rescue in wrapped_gitaly_errors' 
  lib/gitlab/git/wraps_gitaly_errors.rb:6:in `wrapped_gitaly_errors' 
  lib/gitlab/git/blob.rb:108:in `batch_lfs_pointers' 
  lib/gitlab/graphql/loaders/batch_lfs_oid_loader.rb:13:in `block in find' 
  lib/gitlab/graphql/generic_tracing.rb:40:in `with_labkit_tracing' 
  lib/gitlab/graphql/generic_tracing.rb:30:in `platform_trace' 
  lib/gitlab/graphql/generic_tracing.rb:40:in `with_labkit_tracing' 
  lib/gitlab/graphql/generic_tracing.rb:30:in `platform_trace' 
  app/graphql/gitlab_schema.rb:43:in `multiplex' 
  app/controllers/graphql_controller.rb:73:in `execute_multiplex' 
  app/controllers/graphql_controller.rb:33:in `execute' 
  app/controllers/application_controller.rb:482:in `set_current_admin' 
  lib/gitlab/session.rb:11:in `with_session' 
  app/controllers/application_controller.rb:473:in `set_session_storage' 
  lib/gitlab/i18n.rb:73:in `with_locale' 
  lib/gitlab/i18n.rb:79:in `with_user_locale' 
  app/controllers/application_controller.rb:467:in `set_locale' 
  lib/gitlab/error_tracking.rb:52:in `with_context' 
  app/controllers/application_controller.rb:532:in `sentry_context' 
  app/controllers/application_controller.rb:460:in `block in set_current_context' 
  lib/gitlab/application_context.rb:56:in `block in use' 
  lib/gitlab/application_context.rb:56:in `use' 
  lib/gitlab/application_context.rb:22:in `with_context' 
  app/controllers/application_controller.rb:451:in `set_current_context' 
  lib/gitlab/metrics/elasticsearch_rack_middleware.rb:16:in `call' 
  lib/gitlab/middleware/rails_queue_duration.rb:33:in `call' 
  lib/gitlab/metrics/rack_middleware.rb:16:in `block in call' 
  lib/gitlab/metrics/transaction.rb:56:in `run' 
  lib/gitlab/metrics/rack_middleware.rb:16:in `call' 
  lib/gitlab/request_profiler/middleware.rb:17:in `call' 
  lib/gitlab/jira/middleware.rb:19:in `call' 
  lib/gitlab/middleware/go.rb:20:in `call' 
  lib/gitlab/etag_caching/middleware.rb:21:in `call' 
  lib/gitlab/middleware/multipart.rb:172:in `call' 
  lib/gitlab/middleware/read_only/controller.rb:50:in `call' 
  lib/gitlab/middleware/read_only.rb:18:in `call' 
  lib/gitlab/middleware/same_site_cookies.rb:27:in `call' 
  lib/gitlab/middleware/handle_malformed_strings.rb:21:in `call' 
  lib/gitlab/middleware/basic_health_check.rb:25:in `call' 
  lib/gitlab/middleware/handle_ip_spoof_attack_error.rb:25:in `call' 
  lib/gitlab/middleware/request_context.rb:21:in `call' 
  config/initializers/fix_local_cache_middleware.rb:11:in `call' 
  lib/gitlab/metrics/requests_rack_middleware.rb:76:in `call' 
  lib/gitlab/middleware/release_env.rb:12:in `call'
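Before blaming GitLab itself, it can help to confirm at the plain-git level that the object really is unreadable. A self-contained sketch of the check below uses a throwaway repo; on the server you would run the same `git cat-file -e` / `git fsck` inside the bare repository that Gitaly complains about (somewhere under your Gitaly data directory), with the SHA from the error message:

```shell
# Demonstrate the check in a throwaway repo (the SHA from the error above is
# used as an example of an object that does not exist here).
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m probe
sha=$(git rev-parse HEAD)

# `git cat-file -e` exits 0 if the object is readable, non-zero otherwise.
git cat-file -e "$sha" && echo "object present"
git cat-file -e 213e32d15ed013354a21f3914af156eb8a0abc64 || echo "object missing"

# `git fsck --full` walks the whole object graph and reports unreadable objects.
git fsck --full
```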

At this point I also tried creating a backup and restoring it into a fresh install of Docker GitLab, but things only got worse: even more repositories had this problem.
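For reference, this is roughly how the backup and restore go. A sketch, assuming the omnibus image with a container named `gitlab`; on GitLab 12.2+ the command is `gitlab-backup`, while older versions use `gitlab-rake gitlab:backup:create`:

```shell
# Create a backup inside the running container
# (the archive lands in /var/opt/gitlab/backups by default).
docker exec -t gitlab gitlab-backup create

# On the fresh install: stop the services that touch the database, then restore.
# BACKUP is the archive name without the "_gitlab_backup.tar" suffix.
docker exec -it gitlab gitlab-ctl stop puma
docker exec -it gitlab gitlab-ctl stop sidekiq
docker exec -it gitlab gitlab-backup restore BACKUP=<timestamp_and_version>
docker exec -it gitlab gitlab-ctl restart
```

Note that restoring a backup taken from a filesystem that was already serving corrupt objects just copies the corruption along, which is likely why more repositories broke.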

I was also stupid enough not to make a backup before this whole process.

Unfortunately Google wasn’t able to help me, giving zero results for this exact error. Does anyone know what is causing this, and is there a fix? Or am I completely screwed?

Long story short, the real issue was with the filesystem on the new drive: I was using a mergerfs pool and was missing some settings.

For anyone hitting the same issue in the future, make sure you have these mergerfs options set:

allow_other,use_ino,cache.files=partial,dropcacheonclose=true
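Applied via an `/etc/fstab` entry that looks roughly like this (the `/mnt/disk*` branch paths and `/mnt/pool` mountpoint are placeholders; substitute your own pool layout):

```shell
# Example /etc/fstab entry for a mergerfs pool with the options above.
/mnt/disk*  /mnt/pool  fuse.mergerfs  allow_other,use_ino,cache.files=partial,dropcacheonclose=true  0 0
```

In particular `cache.files=partial` (or another mode that enables the page cache) matters here, because mmap-based readers like git/libgit2 fail in odd ways on mergerfs mounts without it.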