Build Docker image and push to Google Artifact Registry

I'm struggling to build and then push a Docker image to Google Artifact Registry. I've tried different approaches, and I hope someone can help me get this done correctly.

What I'd like to achieve is to build and push an image in one step (if possible).

The following setup doesn't work because the authentication is not handed over to the next step.
Is it possible to authenticate gcloud in one step, where I need to use the google/cloud-sdk image, and then push in the next step, where I need the docker image?

image: docker:18

variables:
  DOCKER_DRIVER: overlay2
  IMAGE_ID: $CI_COMMIT_SHA

stages:
  - setup
  - build

services:
  - docker:dind

setup:
  stage: setup
  image: google/cloud-sdk
  variables:
    GOOGLE_SERVICE_KEY: $GOOGLE_DEV
  only:
    - dev
    - ci-tests
  script:
    - gcloud auth activate-service-account gitlab-ci-dev@hololink.iam.gserviceaccount.com --key-file=$GOOGLE_SERVICE_KEY --project=project
    - gcloud auth configure-docker europe-west3-docker.pkg.dev

build_api:
  stage: build
  only:
    - dev
    - ci-tests
  script:
    - docker build -t europe-west3-docker.pkg.dev/project/repo/api:$IMAGE_ID -f services/api/Dockerfile .
    - docker push europe-west3-docker.pkg.dev/project/repo/api:$IMAGE_ID

My other solution is to build the image in the build stage and push it in a deploy stage. But I'm not sure how to pass a file to the next stage?
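Roughly, this is what I have in mind (untested sketch; the deploy stage and job names are placeholders, and the push job would still need the registry login, which is the same problem as above):

build_api:
  stage: build
  script:
    - docker build -t europe-west3-docker.pkg.dev/project/repo/api:$IMAGE_ID -f services/api/Dockerfile .
    # save the built image to a tarball so it can be handed to the next stage as an artifact
    - docker save -o api-image.tar europe-west3-docker.pkg.dev/project/repo/api:$IMAGE_ID
  artifacts:
    expire_in: 15 min
    paths:
      - api-image.tar

push_api:
  stage: deploy
  script:
    # the artifact is restored into the working directory automatically
    - docker load -i api-image.tar
    - docker push europe-west3-docker.pkg.dev/project/repo/api:$IMAGE_ID

The image tarball could get quite large as an artifact, which is part of why I'm not sure this is the right approach.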

Maybe there is a cleaner solution. If so, please enlighten me 🙂

Hi,

Are the gcloud auth commands generating any files with tokens, or some kind of environment variable that is used during docker push? If so, you can use artifacts to store the login data during setup. Those artifacts will be downloaded automatically in the next stage (your build_api job) and placed in the same folder structure as in the previous job.
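As a minimal sketch of the mechanism (file and job names here are just an example):

setup:
  stage: setup
  script:
    - mkdir -p auth
    # whatever file your auth step produces goes into the artifact directory
    - echo "example-token" > auth/token
  artifacts:
    expire_in: 15 min
    paths:
      - auth

build_api:
  stage: build
  script:
    # the artifact is downloaded and restored at the same relative path
    - cat auth/token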

Thanks for the quick answer, but the solution wasn't that easy, for me at least.
The way I solved it was to install the Google Cloud CLI in the image and authenticate with docker login.

Here’s my pipeline:

setup:
  stage: setup
  image: google/cloud-sdk
  variables:
    GOOGLE_SERVICE_KEY: $GOOGLE_DEV
  only:
    - dev
    - ci-tests
  script:
    - apt-get update -y
    - apt-get install curl -y
    - curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-471.0.0-linux-x86_64.tar.gz
    - mkdir -p files
    - cp google-cloud-cli-471.0.0-linux-x86_64.tar.gz files/
  artifacts:
    untracked: false
    when: on_success
    access: all
    expire_in: 15 min
    paths:
      - files

build_api:
  stage: build
  image: docker:24-dind
  only:
    - dev
    - ci-tests
  variables:
    GOOGLE_SERVICE_KEY: $GOOGLE_DEV
  before_script:
    - apk update
    - apk add --no-cache python3
    - tar -xf files/google-cloud-cli-471.0.0-linux-x86_64.tar.gz
    - ./google-cloud-sdk/install.sh --quiet
    - export PATH=$PATH:./google-cloud-sdk/bin
    - gcloud auth activate-service-account gitlab-ci-images@project.iam.gserviceaccount.com --key-file=$GOOGLE_SERVICE_KEY --project=my-project
    - gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://europe-west3-docker.pkg.dev
  script:
    - docker build -t europe-west3-docker.pkg.dev/project/repo/api:$CI_COMMIT_SHA -f services/api/Dockerfile .
    - docker push europe-west3-docker.pkg.dev/project/repo/api:$CI_COMMIT_SHA
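A possible simplification I haven't tried: Artifact Registry is documented to also accept the service account JSON key directly as the docker login password with the special _json_key user, which would avoid installing the SDK in the build job at all (assuming $GOOGLE_SERVICE_KEY is the path to the key file):

before_script:
  # log in with the raw JSON key instead of a short-lived access token
  - cat $GOOGLE_SERVICE_KEY | docker login -u _json_key --password-stdin https://europe-west3-docker.pkg.dev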

Hi,

Sure, that would have been my second suggestion :wink:

You can technically also eliminate the “setup” job and just download and prepare the binary in the same job (build_api) - not sure if there is any need for the separation. Altogether it might be slightly faster with one job as well.
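Roughly like this (untested, just folding your setup steps into the build job):

build_api:
  stage: build
  image: docker:24-dind
  only:
    - dev
    - ci-tests
  variables:
    GOOGLE_SERVICE_KEY: $GOOGLE_DEV
  before_script:
    - apk update
    - apk add --no-cache python3 curl
    # download and install the SDK in the same job instead of passing it via an artifact
    - curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-471.0.0-linux-x86_64.tar.gz
    - tar -xf google-cloud-cli-471.0.0-linux-x86_64.tar.gz
    - ./google-cloud-sdk/install.sh --quiet
    - export PATH=$PATH:./google-cloud-sdk/bin
    - gcloud auth activate-service-account gitlab-ci-images@project.iam.gserviceaccount.com --key-file=$GOOGLE_SERVICE_KEY --project=my-project
    - gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://europe-west3-docker.pkg.dev
  script:
    - docker build -t europe-west3-docker.pkg.dev/project/repo/api:$CI_COMMIT_SHA -f services/api/Dockerfile .
    - docker push europe-west3-docker.pkg.dev/project/repo/api:$CI_COMMIT_SHA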

Yes, exactly!
I used the google/cloud-sdk image to save the token in a file and exported it as an artifact, which I could use in the build jobs:

setup:
    stage: setup
    image: google/cloud-sdk
    variables:
        GOOGLE_SERVICE_KEY: $GOOGLE_DEV
    rules:
        - if: $CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_MESSAGE =~ /(run pipeline)/i
          changes:
              paths:
                  - services/api/**/**
    script:
        - mkdir -p files
        - gcloud auth activate-service-account gitlab-ci-images@project.iam.gserviceaccount.com --key-file=$GOOGLE_SERVICE_KEY --project=project
        - gcloud auth print-access-token > files/gcloud_token
    artifacts:
        untracked: false
        when: on_success
        access: all
        expire_in: 15 min
        paths:
            - files

build_api:
    stage: build
    image: docker:24-dind
    rules:
        - if: $CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_MESSAGE =~ /(run pipeline)/i
          changes:
              paths:
                  - services/api/**/**
    dependencies:
        - setup
    variables:
        GOOGLE_SERVICE_KEY: $GOOGLE_DEV
    before_script:
        - cat files/gcloud_token | docker login -u oauth2accesstoken --password-stdin https://europe-west3-docker.pkg.dev
    script:
        - docker build -t europe-west3-docker.pkg.dev/project/repo/api:$CI_COMMIT_SHA -f services/api/Dockerfile .
        - docker push europe-west3-docker.pkg.dev/project/repo/api:$CI_COMMIT_SHA
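A variant of the same idea that I think would also work is to pass the token as a CI/CD variable via a dotenv report instead of a plain file (untested here; keep in mind the token is then exposed to all later jobs as $GCLOUD_TOKEN):

setup:
    script:
        - gcloud auth activate-service-account gitlab-ci-images@project.iam.gserviceaccount.com --key-file=$GOOGLE_SERVICE_KEY --project=project
        # write the token as KEY=value so GitLab exposes it to later jobs as a variable
        - echo "GCLOUD_TOKEN=$(gcloud auth print-access-token)" > gcloud.env
    artifacts:
        expire_in: 15 min
        reports:
            dotenv: gcloud.env

build_api:
    before_script:
        - echo "$GCLOUD_TOKEN" | docker login -u oauth2accesstoken --password-stdin https://europe-west3-docker.pkg.dev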