Cannot connect to the Docker daemon: intermittent error

I have a CI pipeline configured that builds a Docker image during the build stage.

I get an intermittent error:
"Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?" when the build starts.

Sometimes it works, sometimes it doesn’t. When it doesn’t, repeatedly clicking the “Clear runner cache” button sometimes helps, but not always. A hard power cycle of the underlying Kubernetes nodes does work, but that’s not a great solution.

Any ideas why this would be so intermittent? I’ve tried many different settings in my .gitlab-ci.yml file; it currently looks like this:

stages:
  - build
  - deploy
  - test

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  
build_app:
  only:
    - master
  image: docker:stable
  stage: build
  script:
    - docker info
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_NAME} .
    - docker login -u gitlab-ci-token -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_NAME}
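
One mitigation I’ve seen suggested (I haven’t verified that it fixes this) is to wait for the dind service’s daemon to come up before the first docker command, since the service container can still be starting when the job script begins. Roughly something like this for the build job:

build_app:
  only:
    - master
  image: docker:stable
  stage: build
  before_script:
    - |
      # Give the dind service up to ~30 seconds to start accepting connections
      # instead of failing the job on the very first docker command.
      i=0
      until docker info > /dev/null 2>&1; do
        i=$((i + 1))
        if [ "$i" -gt 30 ]; then
          echo "Docker daemon at $DOCKER_HOST never became reachable"
          exit 1
        fi
        sleep 1
      done
  script:
    - docker info
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_NAME} .
    - docker login -u gitlab-ci-token -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_NAME}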

I have the same problem, and it’s way too intermittent to rely on. Did you ever figure out a solution?

Try mounting the Docker socket (/var/run/docker.sock) into your runner as a volume instead of relying on the dind service.
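
For the Docker executor, that mount goes in the runner’s config.toml, roughly like this sketch (only the relevant keys shown; with the Kubernetes executor the equivalent is a host_path volume in the runner configuration). If you go this route, drop the docker:dind service and the DOCKER_HOST variable from .gitlab-ci.yml.

[[runners]]
  name = "my-docker-runner"   # example name; url, token, etc. omitted here
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    # Mount the host's Docker socket so jobs use the host daemon directly
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]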

I believe I have solved it, but I’m not sure whether the same applies to you.

I deploy my code to a Kubernetes cluster (on DigitalOcean) and had installed the GitLab Runner on that cluster, assuming that was what GitLab was using to run the pipelines.
However, I found that my project was still configured to also use the “Shared Runners” that GitLab offers by default (Project > Settings > CI/CD > Runners).
Once I turned off the Shared Runners so that the project only uses my own runner on the Kubernetes cluster, I have not seen that error again.
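
If you would rather keep the Shared Runners enabled, another option is to give your own runner a tag when you register it and pin the build job to that tag in .gitlab-ci.yml. Something like this (untested on my side), where k8s is just a placeholder for whatever tag you used:

build_app:
  tags:
    - k8s   # placeholder: the tag you registered your own runner with
  # (image, stage, only and script stay exactly as in the original job)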

Thanks for the info! Yes, I use a shared runner on gitlab.com, and it does appear that this error happens because the shared runners are simply too slow or busy at times. I have read that hosting my own runner is both faster and recommended, so I will look into doing that.