Our setup
We have a web application with multiple artifacts (reverse-proxy, backend, db, frontend), which are packaged as Docker containers. Each artifact has its own GitLab Git repository and is built with a GitLab CI pipeline. Our development process follows Git Flow, so each change on the development branch results in a Docker image with a "develop" tag in the private Docker registry linked to the artifact.
The reverse-proxy artifact "knows" all the containers of the application and holds a docker-compose file for running the application locally. We deploy the application to a Kubernetes cluster. To avoid code duplication, we use the docker-compose file to create the Helm chart with kompose. To trigger a pull of all development Docker images when deploying to the development namespace, we pass a timestamp flag to the helm upgrade command and set imagePullPolicy "Always" in the docker-compose.yml. A secret for the Docker registry is created/updated before the helm upgrade command and is referenced in the docker-compose file.
Our .gitlab-ci.yml (reverse-proxy repo) looks like this:
image: docker:19.03.0

services:
  - docker:19.03.0-dind

stages:
  - build
  - deploy

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-develop:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  only:
    - develop
  tags:
    - docker

deploy-development:
  stage: deploy
  image: "dtzar/helm-kubectl:3.0.0"
  environment: development
  before_script:
    - "apk update && apk add git go musl-dev && GOPATH=/ go get -u github.com/kubernetes/kompose"
    - "kubectl create secret -n $KUBE_NAMESPACE docker-registry gitlab-registry-secret --docker-server=$CI_REGISTRY --docker-username=$CI_REGISTRY_USER --docker-password=$CI_REGISTRY_PASSWORD --docker-email=$GITLAB_USER_EMAIL -o yaml --dry-run | kubectl replace --force -n $KUBE_NAMESPACE -f -"
  script:
    - "kompose convert -c -f docker-compose-dev.yml"
    - "helm upgrade --install --set-string timestamp=$(date +%s) app docker-compose-dev"
  only:
    - develop
  tags:
    - docker
Our docker-compose-dev.yml:
version: "3"
services:
  webapp:
    image: registry.gitlab.com/ourusername/webapp:develop
    container_name: webapp
    ports:
      - 8080:80
    labels:
      kompose.image-pull-policy: "Always"
      kompose.image-pull-secret: "gitlab-registry-secret"
      timestamp: "{{ .Values.timestamp }}"
  backend:
    image: registry.gitlab.com/ourusername/backend:develop
    container_name: backend
    environment:
      - MONGODB_URI=mongodb://db/something
    ports:
      - 5000:5000
    labels:
      kompose.image-pull-policy: "Always"
      kompose.image-pull-secret: "gitlab-registry-secret"
      timestamp: "{{ .Values.timestamp }}"
  reverse-proxy:
    image: registry.gitlab.com/ourusername/reverse-proxy:develop
    container_name: reverse-proxy
    ports:
      - 80:80
      - 443:443
    labels:
      kompose.image-pull-policy: "Always"
      kompose.image-pull-secret: "gitlab-registry-secret"
      timestamp: "{{ .Values.timestamp }}"
  db:
    image: mongo:4.2
    container_name: db
    ports:
      - 27017:27017
An example of a Helm chart template created by kompose:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -c -f docker-compose-dev copy.yml
    kompose.image-pull-policy: Always
    kompose.image-pull-secret: gitlab-registry-secret
    kompose.version: 1.20.0 (f3d54d784)
    timestamp: "{{ .Values.timestamp }}"
  creationTimestamp: null
  labels:
    io.kompose.service: webapp
  name: webapp
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -c -f docker-compose-dev copy.yml
        kompose.image-pull-policy: Always
        kompose.image-pull-secret: gitlab-registry-secret
        kompose.version: 1.20.0 (f3d54d784)
        timestamp: "{{ .Values.timestamp }}"
      creationTimestamp: null
      labels:
        io.kompose.service: webapp
    spec:
      containers:
        - image: registry.gitlab.com/ourusername/webapp:develop
          imagePullPolicy: Always
          name: webapp
          ports:
            - containerPort: 80
          resources: {}
      imagePullSecrets:
        - name: gitlab-registry-secret
      restartPolicy: Always
status: {}
The problem
Everything works as expected; however, random containers (always different ones) are not upgraded and end up with an ErrImagePull status.
$ kubectl get pods -n reverse-proxy-xxxxxx-development
NAME READY STATUS RESTARTS AGE
backend-8556486b66-5mff5 1/1 Running 0 3m27s
db-6f8567585f-h8qf6 1/1 Running 0 19h
reverse-proxy-79648bc589-5gk5m 1/1 Running 0 19h
reverse-proxy-79bc9f6b6d-dihgm 0/1 ErrImagePull 0 3m27s
webapp-756d696ff9-vz3cb 0/1 ErrImagePull 0 3m26s
webapp-7654f85c56-gw24j 1/1 Running 0 19h
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned reverse-proxy-xxxxxxxx-development/reverse-proxy-79bc9f6b6d-dghgm to pool-c1nvqk48u-k18a
Normal Pulling 7m (x4 over 8m28s) kubelet, pool-c1nvqk48u-k18a Pulling image "registry.gitlab.com/ourusername/reverse-proxy:develop"
Warning Failed 6m59s (x4 over 8m26s) kubelet, pool-c1nvqk48u-k18a Failed to pull image "registry.gitlab.com/ourusername/reverse-proxy:develop": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.gitlab.com/v2/ourusername/reverse-proxy/manifests/develop: unauthorized: HTTP Basic: Access denied
Warning Failed 6m59s (x4 over 8m26s) kubelet, pool-c1nvqk48u-k18a Error: ErrImagePull
Normal BackOff 6m33s (x6 over 8m26s) kubelet, pool-c1nvqk48u-k18a Back-off pulling image "registry.gitlab.com/ourusername/reverse-proxy:develop"
Warning Failed 3m22s (x19 over 8m26s) kubelet, pool-c1nvqk48u-k18a Error: ImagePullBackOff
Do you have any ideas? We also tried replacing the Docker registry name with something like https://$CI_REGISTRY/v1/, but with no success.
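For reference, that attempt only changed the --docker-server value in the secret-creation step; everything else was identical to the before_script in the CI job above (a sketch of roughly what we ran, not verifiable without a cluster):

```shell
# Same secret creation as in the deploy-development job, but with the
# registry name in the https://.../v1/ URL form instead of $CI_REGISTRY.
kubectl create secret -n "$KUBE_NAMESPACE" docker-registry gitlab-registry-secret \
  --docker-server="https://$CI_REGISTRY/v1/" \
  --docker-username="$CI_REGISTRY_USER" \
  --docker-password="$CI_REGISTRY_PASSWORD" \
  --docker-email="$GITLAB_USER_EMAIL" \
  -o yaml --dry-run | kubectl replace --force -n "$KUBE_NAMESPACE" -f -
```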
Thanks in advance for any help!