I’m running GitLab v9.5.10 and gitlab-runner v9.4.2. Both are running inside Docker containers.
I checked the GitLab log files but found no evidence of this error.
How can I find out who this ‘coordinator’ is and where its log file is? Is there any place I can edit the value for this timeout?
Hello Raju,
I guess it can’t be the yml, because most of the time everything works fine; only once in a while there’s this timeout.
But possibly I’m wrong, so here’s the yml:
I had the exact same issue (based on the output you posted). It occurred on large files. There are several things I needed to change to solve it:
Increase the maximum file size for job artifacts (admin settings > Continuous Integration and Deployment > Maximum artifact size; the default is 100 MB).
Increase the timeouts in /etc/gitlab/gitlab.rb. I’m not 100% sure which one it was, but searching for “timeout” and increasing those that default to 60s was sufficient.
I’m running GitLab behind an external load balancer. Since artifacts are uploaded over HTTP, I finally needed to increase the timeouts in our external load balancer as well.
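For anyone trying the gitlab.rb route, here is a rough sketch of the kind of timeout-related keys involved (the exact set of keys varies by GitLab version, and the 600-second values are illustrative, not tuned recommendations):

```ruby
# /etc/gitlab/gitlab.rb -- illustrative timeout settings for slow/large
# artifact uploads; apply with `sudo gitlab-ctl reconfigure` afterwards.

# Workhorse proxies uploads to the Rails app; its header timeout
# defaults to 1 minute.
gitlab_workhorse['proxy_headers_timeout'] = "600s"

# The bundled nginx also enforces its own proxy timeouts (default 60s)
# and a request body size limit.
nginx['proxy_connect_timeout'] = 600
nginx['proxy_read_timeout'] = 600
nginx['client_max_body_size'] = '250m'
```

If there is an external load balancer or reverse proxy in front of GitLab, its idle/read timeouts for the upload path need a matching increase, since it will cut the connection regardless of what gitlab.rb says.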
Thanks for your suggestions Mat2e.
But I did not succeed in changing those parameters. In the meantime I upgraded to the latest gitlab-runner and have never had this problem since, so it might have come from a bug in the runner.
-chris-
Digging up this old thread, as we started getting the same issue after we upgraded from 11.6.3 to 11.7.0 and then to 11.8.0 yesterday (GitLab and the associated runners). What finally fixed it for us was increasing the value of unicorn['worker_timeout'] from 60 seconds (the default) to 600 seconds (a lower value would probably have worked too).
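For anyone landing here from a search, the change described above amounts to one line in /etc/gitlab/gitlab.rb (600 is just the value that worked in our case, not a tuned recommendation):

```ruby
# /etc/gitlab/gitlab.rb
# Raise the Unicorn worker timeout from its 60-second default so long
# artifact uploads are not killed mid-request.
unicorn['worker_timeout'] = 600
```

Apply the change with `sudo gitlab-ctl reconfigure`.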