Hi all,
New user and first post. Does anyone have tips for debugging Helm-based installs? I am trying to switch from the Helm-based installation of Postgres to an external installation, and I ran into some issues where my cluster wouldn't start (specifically workhorse and webservice). I can see in the pod logs that it has something to do with the schemas not being loaded. Since I saw that the migration job was failing, I want to start there.
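For context, this is roughly how I've been getting at the migration job logs (namespace and names are placeholders for my install):

```shell
# Find the migrations job and the pod it created
kubectl -n gitlab get jobs
kubectl -n gitlab get pods -l job-name=<migrations-job-name>

# Tail the logs of the failing migration pod
kubectl -n gitlab logs -f <migrations-pod-name>
```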
The migration job logs weren’t too helpful. I was able to run helm template to render out the k8s spec files and modify the migration container so that its command is sleep; that way I can kill my existing install, install from the rendered (and modified) k8s specs, and get my migration containers up where I can exec into them and poke around. From there I can run the failing command, /scripts/db-migrate, and the failure looks like a DB connection failure. I found the file /var/opt/gitlab/templates/database.yml.erb, which looks like it has all of the correct connection settings, and I verified that I can run psql with the data in that file and the connection works.
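In case it helps (or in case I'm doing something silly), the sleep trick looks roughly like this; release, chart, namespace, and file names are just placeholders for what I'm actually using:

```shell
# Render the chart to plain manifests
helm template gitlab gitlab/gitlab -n gitlab -f values.yaml > rendered.yaml

# In rendered.yaml, change the migrations container's command to something like:
#   command: ["sleep", "infinity"]
# then tear down the existing install, apply the modified manifests,
# and exec into the migrations pod:
kubectl -n gitlab apply -f rendered.yaml
kubectl -n gitlab exec -it <migrations-pod-name> -- /bin/sh

# Inside the pod, re-run the failing step by hand
/scripts/db-migrate
```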
The next thing I’d like to try is to run strace (to see what files the db migration script opens) or just drop into a debugger. I get a stacktrace from the Rake task, so it’s easy enough to check what variables it’s picking up (which means I need nano/vi/etc. installed). I tracked down the creation of the git user and saw that its password is disabled (and passwordless sudo isn’t working).
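One idea I'm toying with, so I don't need root inside the migration image itself, is attaching an ephemeral debug container and bringing the tools with me. Something along these lines, assuming the cluster has ephemeral containers enabled and names/image are placeholders:

```shell
# Attach a debug container that targets the migrations container, so tools
# like strace come from the debug image instead of the GitLab image
# (ptrace against the target process may still need extra capabilities)
kubectl -n gitlab debug -it <migrations-pod-name> \
  --image=ubuntu:22.04 --target=<migrations-container-name> -- bash

# Inside the debug container
apt-get update && apt-get install -y strace procps
ps aux                          # find the PID of the migration/rake process
strace -f -e trace=openat -p <pid>
```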
I’m pretty far down the rabbit hole here, so I thought I’d stop and ask for directions:
- Is there another path I should have taken? (Given that the debugging I need depends on having the right tools inside the exact environment I’m deployed into, I’m not sure another path would get me to a resolution.)
- Is there an option to deploy with development containers that have the tools I need to debug (or with sudo access)?
- Should I rebuild the migration container with the debug tools baked in? (A rough sketch of what I mean is below.)
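To elaborate on that last question, I'm imagining something like this, assuming the migrations image is Debian-based; the FROM line is a placeholder for whatever image/tag the chart actually deploys:

```shell
# Hypothetical: build a debug variant of the migrations image with tools baked in
cat > Dockerfile.debug <<'EOF'
FROM <registry>/<migrations-image>:<tag>
USER root
RUN apt-get update && apt-get install -y strace vim less procps
USER git
EOF
docker build -f Dockerfile.debug -t my-registry/gitlab-migrations-debug:latest .
```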