Unable to push images to google cloud platform


Describe your question in as much detail as possible:
I am running a script to deploy my image to the Google Cloud Run platform. The docker login step fails in GitLab CI.

  • What are you seeing, and how does that differ from what you expect to see?

  $ echo -n $CI_JOB_TOKEN | docker login -u gitlab-ci-token --password-stdin $CI_REGISTRY
  Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
  Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

  • What version are you on? Are you using self-managed or GitLab.com?

I am on gitlab.com

  • Add the CI configuration from .gitlab-ci.yml and other configuration if relevant (e.g. docker-compose.yml)

This is the GitLab CI file I am using.

  .cloud-run-deploy:
    image: google/cloud-sdk:alpine
    stage: deploy
    services:
      - docker:dind

    variables:
      microservice: $CI_PROJECT_NAME
      GIT_LAB_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${RELEASE_TAG}
      GCR_IMAGE_TAG: gcr.io/${GCP_PROJECT}/${microservice}:${RELEASE_TAG}

    script:
      - echo -------------------------------------------------------
      - env
      - echo -------------------------------------------------------
      - echo Activating google account . . .
      - gcloud auth activate-service-account --key-file ${GCP_KEY}
      - gcloud config set run/region us-east1
      - gcloud config set core/project ${GCP_PROJECT}
      - gcloud config set core/disable_prompts 1
      - gcloud auth configure-docker
      - gcloud run revisions list --platform managed

      - echo Pushing the tagged image into Google Cloud Registry . . .
      - echo -n $CI_JOB_TOKEN | docker login -u gitlab-ci-token --password-stdin $CI_REGISTRY
      - docker pull ${GIT_LAB_IMAGE_TAG}
      - docker tag ${GIT_LAB_IMAGE_TAG} ${GCR_IMAGE_TAG}
      - docker push ${GCR_IMAGE_TAG}

      - echo Deploying the service . . .
      - gcloud run deploy ${microservice} --image ${GCR_IMAGE_TAG} --platform managed ${GCR_PARAMS}

  stage-deploy:
    extends: .cloud-run-deploy
    variables:
      DOMAIN: soc2central-stage.com
    rules:
      - if: $CI_COMMIT_TAG
        when: manual
    environment:
      name: stage

  • What troubleshooting steps have you already taken? Can you link to any docs or other resources so we know where you have been?

I checked a few forums. The same commands work when I am not using google/cloud-sdk as the job image.

Thanks for taking the time to be thorough in your request, it really helps! :blush:

Docker in docker is an amazing concept.

In order to work with docker images within your pipeline you are required to use a job image that contains docker and has the docker daemon running.

Unfortunately, this is generally only possible if you are running your containers in --privileged mode, which is likely disabled on GitLab shared runners.
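
For reference, on a self-managed runner privileged mode is switched on in the runner's config.toml; a sketch (the runner name, image, and volumes here are illustrative):

```toml
# /etc/gitlab-runner/config.toml -- sketch; runner name and image are illustrative
[[runners]]
  name = "my-docker-runner"
  executor = "docker"
  [runners.docker]
    image = "docker:19.03.8"
    privileged = true            # lets the docker:dind service start its own daemon
    volumes = ["/certs/client", "/cache"]
```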

You must specify a docker-in-docker (dind) image as your job’s image.

From the examples I gather, defining it as a service is enough, and I did declare docker:dind as a service.

  services:
    - docker:dind

Yep, you’d be correct about services. I forgot about those.

What runner version are you using?

Based on what I can see in this documentation (see step 3), make sure you are setting the variable DOCKER_TLS_CERTDIR: "/certs" if you are using a runner version > 12.7. If your runner version is <= 12.7, you must set the variable DOCKER_HOST: tcp://localhost:2375 instead.

Setting that variable allows your build container to connect to the dind service you have created.
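
A minimal sketch of a job wired up this way (assuming a runner newer than 12.7; the job name and script are illustrative):

```yaml
test-docker:
  image: docker:19.03.8
  services:
    - docker:19.03.8-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"   # runner > 12.7; on <= 12.7 use DOCKER_HOST: tcp://localhost:2375
  script:
    - docker info                  # should now reach the daemon inside the dind service
```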

I also recommend pinning the service to the exact docker:19.03.8-dind image as shown in that example (or verifying that docker:dind resolves to version >= 19.03.8) to make sure it works properly.


I am actually running on the shared runners provided by GitLab to keep things simple. It turns out the runner version is 13.0-rc1. I tried several combinations with and without DOCKER_TLS_CERTDIR, and with DOCKER_HOST, and none of them seem to work. I also tried the docker:19.03.8-dind image for the service. I will try a local runner and see if that resolves the issue.

I looked back at step 3 that I linked, and I believe the issue is with the build image, not the service.

If you look at the comment in the sample .gitlab-ci.yml

  # When using dind service, we need to instruct docker, to talk with
  # the daemon started inside of the service. The daemon is available
  # with a network connection instead of the default
  # /var/run/docker.sock socket. Docker 19.03 does this automatically
  # by setting the DOCKER_HOST in
  # https://github.com/docker-library/docker/blob/d45051476babc297257df490d22cbd806f1b11e4/19.03/docker-entrypoint.sh#L23-L29
  #

The entrypoint script of the docker image automatically connects to the Docker host. So I believe you would need to either use the docker:19.03.8-dind container as your job image, or set both the DOCKER_HOST and DOCKER_TLS_CERTDIR variables.


After reviewing the documentation further, I understood the issue in my script. This is for the benefit of others.

  services:
    - docker:19.03.8-dind
  rules:
    - if: ${RELEASE_TAG} # We need a release tag defined

  variables:
    microservice: $CI_PROJECT_NAME
    DOCKER_HOST: tcp://docker:2375

When using a service, the hostname is not localhost. The hostname should be docker, or we can define a specific hostname to use as part of the service definition.
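
For example, a sketch of the alias form (the alias name is illustrative):

```yaml
services:
  - name: docker:19.03.8-dind
    alias: dockerhost            # custom hostname for the service
variables:
  DOCKER_HOST: tcp://dockerhost:2375
```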

It may be possible to use docker:stable-dind instead of pinning a specific version; I didn't try that.