I recently updated my private GitLab CE instance to 13.12.4, the latest version.
**Unable to click the Merge button in a Merge Request**
When I open a Merge Request and scroll down to the Merge button, it is greyed out and all I see is the message "Checking if merge request can be merged…", endlessly.
I can approve a Merge Request, no issues there.
I also tried opening the GitLab website in multiple browsers:
- Firefox
- Google Chrome
- Microsoft Edge
In every browser, the Merge button is greyed out and I see the same endless "Checking if merge request can be merged…" message.
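If it helps with debugging, is there a way to query the stored mergeability state directly on the server? This is what I was planning to try from the Rails console; the model and attribute names are my best guess, so please correct me if they're wrong:

```
# Check the stored mergeability state of one stuck MR.
# "group/project" and the MR iid are placeholders; Project.find_by_full_path,
# merge_requests, and merge_status are my guesses at the right names.
sudo gitlab-rails runner '
  mr = Project.find_by_full_path("group/project").merge_requests.find_by(iid: 1)
  puts mr.merge_status
'
```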
**`crond` is down**
I also noticed that, in the terminal, when I run `gitlab-ctl status`, everything is up … except `crond`:

```
down: crond: 409s, normally up; run: log: (pid 1450) 1234s
```
When I restart `crond`, the command reports success:

```
# gitlab-ctl restart crond
ok: run: crond: (pid 28954) 0s
```
… but checking the status again shows it as down:

```
# gitlab-ctl status
down: crond: 1s, normally up, want up; run: log: (pid 1450) 2102s
```
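Since the supervisor keeps trying to bring it back up ("want up"), my plan was to look at the service's run script and follow its log while it flaps. I'm assuming the standard Omnibus layout under `/opt/gitlab/sv` here:

```
# How does runit actually start crond?
# (path assumed from the default Omnibus runit layout)
sudo cat /opt/gitlab/sv/crond/run

# Follow only the crond log while the service flaps
sudo gitlab-ctl tail crond
```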
I've run a restart (`gitlab-ctl restart`) and a reconfigure (`gitlab-ctl reconfigure`). Neither reports any errors, but `crond` is still down.
After the restart, `crond` comes up … but immediately goes down again.
I've also tried restarting my GitLab VM, but `crond` is still down.
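Out of curiosity, I also want to check which cron binary the Omnibus package actually ships, in case the bundled version changed with the upgrade. The embedded `bin` path is my assumption:

```
# List whatever cron-related binaries the embedded directory ships
# (/opt/gitlab/embedded/bin is my assumption for the Omnibus install prefix)
sudo ls -l /opt/gitlab/embedded/bin/ | grep -i cron
```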
When I run `head /var/log/gitlab/crond/current`, I see the following message repeated every few seconds:
```
==> /var/log/gitlab/crond/current <==
2021-06-17_19:39:44.27476 time="2021-06-17T19:39:44Z" level=error msg="unknown flag `no-auto'"
```
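Since the error complains about an unknown `no-auto` flag, my next step is to find where that flag gets passed to the cron service. The paths below are assumptions based on a default Omnibus install:

```
# Find where the rejected "no-auto" flag is being passed to the cron service
# (runit script location and gitlab.rb path assumed for a default install)
sudo grep -Rn "no-auto" /opt/gitlab/sv/crond/ /etc/gitlab/gitlab.rb 2>/dev/null
```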
Running `sudo gitlab-ctl tail | grep error`, I see additional errors and related information:
```
==> /var/log/gitlab/nginx/error.log <==
==> /var/log/gitlab/nginx/gitlab_error.log <==
2021-06-17_19:53:18.70815 time="2021-06-17T19:53:18Z" level=error msg="unknown flag `no-auto'"
...
2021-06-17_19:55:58.77641 {"@level":"debug","@message":"datasource: registering query type handler","@timestamp":"2021-06-17T19:55:58.776380Z","queryType":"random_walk_with_error"}
2021-06-17_19:55:58.77649 {"@level":"debug","@message":"datasource: registering query type handler","@timestamp":"2021-06-17T19:55:58.776450Z","queryType":"server_error_500"}
2021-06-17_19:55:58.81158 t=2021-06-17T19:55:58+0000 lvl=eror msg="Failed to read plugin provisioning files from directory" logger=provisioning.plugins path=/var/opt/gitlab/grafana/provisioning/plugins error="open /var/opt/gitlab/grafana/provisioning/plugins: no such file or directory"
2021-06-17_19:54:11.85015 LOG: configuration file "/var/opt/gitlab/postgresql/data/postgresql.conf" contains errors; unaffected changes were applied
2021-06-17_19:56:03.39790 level=error ts=2021-06-17T19:56:03.397Z caller=manager.go:314 component="discovery manager scrape" msg="Cannot create service discovery" err="unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" type=kubernetes
2021-06-17_19:56:08.52511 level=error ts=2021-06-17T19:56:08.524Z caller=manager.go:188 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-nodes
2021-06-17_19:56:08.52538 level=error ts=2021-06-17T19:56:08.525Z caller=manager.go:188 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-cadvisor
2021-06-17_19:56:08.52555 level=error ts=2021-06-17T19:56:08.525Z caller=manager.go:188 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-pods
```
Right now I suspect the two problems are related. Any recommendations on how to fix them?