Deploy on K3s - database_migrations fails

Describe your question in as much detail as possible:
I’m running K3s on a cluster of five Raspberry Pis: 2x Pi 4 and 3x Pi 3. I’ve created a Helm chart to deploy GitLab into my cluster using the ARM Docker image provided by ravermeister/gitlab. This image ships default settings tuned for ARM devices, which do not have a lot of RAM.

I’ve mounted the three directories GitLab needs, namely:

  • “/var/opt/gitlab”
  • “/var/log/gitlab”
  • “/etc/gitlab”

These directories are mounted through a local-path volume, using nodeAffinity to target my Pi 4 with an SSD.
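For reference, this is roughly how the volume is pinned to that node. This is a sketch, not my exact manifest: the node name, capacity, and host paths are placeholders, and I’m assuming the usual `local` PersistentVolume shape, which requires a `nodeAffinity` block.

```yaml
# Hypothetical sketch of one of the three PersistentVolumes; names and
# paths are examples. A "local" PV must declare nodeAffinity, which is
# how the pod is forced onto the Pi 4 with the SSD.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-data
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  local:
    path: /mnt/ssd/gitlab/data        # backs /var/opt/gitlab in the pod
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - rpi4-ssd            # placeholder: the Pi 4 with the SSD
```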

  • What are you seeing, and how does it differ from what you expect to see?
    The installation starts, but it fails when it reaches the database_migrations step. The container then restarts indefinitely, failing at the same stage every time.

  • Consider including screenshots, error messages, and/or other helpful visuals
    This is the log message:

Recipe: gitlab::database_migrations
  * ruby_block[check remote PG version] action nothing (skipped due to action :nothing)
  * rails_migration[gitlab-rails] action run
    * bash[migrate gitlab-rails database] action run
      Error executing action `run` on resource 'bash[migrate gitlab-rails database]'
      Command execution failed. STDOUT/STDERR suppressed for sensitive resource
      Cookbook Trace: (most recent call first)
      /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/resources/rails_migration.rb:18:in `block in class_from_file'
      Resource Declaration:
      suppressed sensitive resource output
      Compiled Resource:
      suppressed sensitive resource output
      System Info:
      ruby=ruby 2.7.5p203 (2021-11-24 revision f69aeb8314) [aarch64-linux]

I don’t know exactly what the problem is. I’ve already tried to limit the container’s memory usage in its configuration, and I have deleted the contents of the installation directories several times to ensure the installation restarts cleanly, but the same error occurs every time.
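Since the Chef log above suppresses the real STDOUT/STDERR of the migration, I plan to dig further with commands along these lines. The pod name `gitlab-0` is a placeholder for my actual pod, and I’m assuming the standard omnibus wrappers (`gitlab-rake`, `gitlab-ctl`) behave the same in this ARM image:

```shell
# Tail the previous (crashed) container's log for anything before the failure
kubectl logs gitlab-0 --previous

# Open a shell inside the running container before it restarts
kubectl exec -it gitlab-0 -- /bin/bash

# Inside the container: re-run the migration by hand to see the real error,
# since Chef hides the output of this "sensitive" resource
gitlab-rake db:migrate

# Check that PostgreSQL itself is up and look for OOM or crash messages
gitlab-ctl status postgresql
gitlab-ctl tail postgresql
```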

At one point the CPU of the control-plane (master) node went up to 100% and it crashed, which makes me think this is a resource problem.
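This is roughly how I’ve been limiting resources in my chart values. It’s a sketch: where the `resources` block lands depends on how my deployment template is written, and the sizes are guesses for a 4 GB Pi 4:

```yaml
# Hypothetical excerpt from my Helm values; exact keys depend on the
# chart's deployment template. Sized (as an assumption) for a 4 GB Pi 4.
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "2"          # leave headroom so the node's system pods survive
    memory: "3Gi"     # migrations seem to be the memory-hungry step
```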

Thank you for your help!