ERROR: Appending trace to coordinator

Hi all! :slight_smile:

I’ve been testing GitLab pipelines in a project that uses a runner on a separate physical machine. I can see the first minute or so of the job log, but then it just stops. In the runner’s log I see “couldn’t execute PATCH”, along with “use of closed network connection”.

The network structure is basically this: the GitLab server runs as a Docker container on server A, and a GitLab Runner runs as a Docker container on server B (a machine with a different CPU architecture, needed for multi-platform compilation).

This happens every single time a job is launched with an external runner.

I’ve tried installing the runner directly on the physical machine, outside of a container, but to no avail. I’ve also tried reinstalling the container and resetting the routing rules on my firewall…

The fact that the runner manages to send the trace a couple of times before it starts failing is what’s driving me crazy, because I can find no reason for this to happen. The same happens with artifacts: they fail to upload.

  • Screenshots, error messages:

Output log on the runner, as seen in Portainer

Output log from gitlab-ctl tail on the GitLab server. Note that the IP is not the same: the runner is bridged on Docker, but routing is OK

  • What version are you on?
    • GitLab: self-hosted, 13.1.1 (bdb9883705a)
    • Runner: self-hosted, 13.1.0

Thank you for your help! :smiley:

Bumping this topic, since I still haven’t found a solution.


Out of curiosity: if you start a runner container on the same host as the GitLab server container, does it work then?
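
Spinning up a throwaway runner container on the same host could look roughly like this (a sketch: the container name, config mount path, and image tag are assumptions, and you’d still need a registration token from your GitLab instance):

```shell
# Start a second runner container on server A, next to the GitLab
# server container. Paths and version tag are placeholders - adjust.
docker run -d --name test-runner \
  -v /srv/test-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:v13.1.0

# Register it interactively against the local GitLab instance.
docker exec -it test-runner gitlab-runner register
```

If a job on this local runner streams its full log without the PATCH errors, that points the finger at the path between server A and server B.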

If not, the problem is not the network layer between the two machines, but possibly container networking, the kernel re-using sockets - or a bug.

If yes, start with tcpdump between the server and the runner to see where the packets drop.
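
For the capture, something like this on the runner host would show whether the trace PATCH requests and their responses actually make it through (the interface name, server address, and port are placeholders, not your real values):

```shell
# Capture traffic to/from the GitLab server, on the runner host.
# eth0, 192.0.2.10 and port 80 are placeholders - substitute your
# interface and the GitLab server's address/port (443 if HTTPS).
tcpdump -i eth0 -n -w trace.pcap host 192.0.2.10 and port 80
```

Then open trace.pcap in Wireshark and look for RST packets or retransmissions around the moment the job log stops updating.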