SSH timeout from GitLab shared runner when connecting to a remote host that works fine from a local shell


I am using the gitlab-ci.yml below to deploy my application to a remote host via shared runners. I made my last update to the app around 3-4 weeks ago, and back then everything worked fine. Today, when I pushed a new update, the script got stuck at the SSH phase and failed with a connection timed out error. When I ran the same commands from my local shell, they connected without any issues.

    before_script:
      - apt-get update -qq
      - apt-get install -qq git
      - 'which ssh-agent || ( apt-get install -qq openssh-client )'
      - eval $(ssh-agent -s)
      - ssh-add <(echo "$SSH_PRIVATE_KEY")
      - mkdir -p ~/.ssh
      - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

    deploy:
      type: deploy
      tags:
        - ssh
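When debugging a timeout like this, it can help to prove whether TCP port 22 is reachable at all from the runner before suspecting the key setup. A minimal sketch of such a probe (the `probe_ssh_port` helper and the example host are illustrative, not part of the config above):

```shell
# probe_ssh_port HOST PORT -- return 0 if a TCP connection succeeds within
# 5 seconds, non-zero on refusal or timeout. Uses bash's /dev/tcp
# pseudo-device, so it works even when nc is not installed on the runner.
probe_ssh_port() {
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example step in a CI script (placeholder hostname):
# probe_ssh_port my-server.example.com 22 || { echo "SSH port unreachable from runner" >&2; exit 1; }
```

If this fails from the shared runner but succeeds from your local shell, the problem is network reachability (firewall, geo-blocking), not the SSH key or agent setup.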

I’ve spent some time trying to figure out what could have changed. The only difference I found is that my last successful deployment ran on gitlab-runner 12.9.0-rc1 (a350f628), while the failing one now runs on gitlab-runner 12.9.0 (4c96e5ad).

Any ideas what might be the cause?

Hi @sandor.palffy, have you found a solution to your SSH timeout? I’m having the same problem. What’s odd is that the GitLab runners only time out occasionally, not every time: sometimes the same runner domain times out once, then runs fine on the next run. The issue is intermittent, despite firewall rules that specifically allow the runner domain IPs.

Yep. It seems my hosting provider restricts which IP addresses can reach their servers via SSH: they try to block every IP outside the country (Hungary). I think it’s a bit silly… but for as long as I stay with this provider (for a while…), I just run scheduled deployments from their servers as cron jobs (my tiny scripts do a pull and install some dependencies). I know it’s more of a hack than a workaround, but it’s a temporary solution for me.
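For anyone sketching the same cron-based fallback: the pull step can be wrapped in a small function that only fast-forwards, so a diverged checkout fails loudly instead of silently creating a merge. This is a hedged sketch under my own naming, not the author's actual script; `deploy_pull`, the paths, and the crontab line are illustrative:

```shell
# deploy_pull DIR [BRANCH] -- fetch the branch and fast-forward the checkout
# in DIR; returns non-zero if local history has diverged from origin.
deploy_pull() {
  dir=$1
  branch=${2:-master}
  git -C "$dir" fetch -q origin "$branch" &&
  git -C "$dir" merge -q --ff-only "origin/$branch"
}

# Hypothetical crontab entry running the pull (plus dependency install,
# which is stack-specific) every 15 minutes:
# */15 * * * * /home/deploy/pull.sh >> /home/deploy/deploy.log 2>&1
```

Running the deploy from the target server itself sidesteps the provider's IP restrictions entirely, since the connection never crosses the firewall.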