SOLVED Helm deployment of private registry container fails with HTTP Basic: Access denied

Greetings, I am using the Kubernetes integration with GitLab, and I have a private runner and Helm installed in my namespace. Using this “Internal” project, you can get an idea of what I am trying to do. Pushing the image to the private Docker registry in CI works fine, but later, when Helm attempts to pull the image, it fails with HTTP Basic: Access denied.

The secret does exist and is created with the same username and password that docker login uses before pushing the image to the registry. However, whenever the project is private (including when I clone it and keep the clone private), the image pull always fails.

apiVersion: v1
data:
  .dockerconfigjson: [REDACTED]
kind: Secret
metadata:
  creationTimestamp: 2018-07-10T18:20:57Z
  name: gitlab-registry
  namespace: kube-test-7268306
  resourceVersion: "6243465"
  selfLink: /api/v1/namespaces/kube-test-7268306/secrets/gitlab-registry
  uid: fdc1e77c-846d-11e8-9cf9-42010a960012
type: kubernetes.io/dockerconfigjson
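
As a sanity check, something along these lines shows whether the failing pods actually reference the secret; the namespace and secret name are taken from the output above, and the app label is just a placeholder:

kubectl describe pod -n kube-test-7268306 -l app=my-app
      # The Events section at the bottom shows the exact pull error

kubectl get pod -n kube-test-7268306 \
      -o jsonpath='{.items[*].spec.imagePullSecrets}'
      # Should list {"name":"gitlab-registry"} for the pods being deployed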

I have seen various posts saying both that CI does and does not work with private repositories. I’ve also seen suggestions to use GCR as a registry substitute, since the service account can be given access to those registries, but I was hoping to use the GitLab native registry. Can anyone confirm whether this works, or whether I’m going in the wrong direction?

In the end, I simply had to use the V1 registry API to pull the Docker image, as mentioned in other threads. I guess I just had other problems and didn’t unravel it all until I started over from scratch.

Notice the change to --docker-server in .gitlab-ci.yml:

kubectl create secret -n "$KUBE_NAMESPACE" \
      docker-registry gitlab-registry \
      --docker-server="$CI_REGISTRY" \
      --docker-username="${CI_DEPLOY_USER:-$CI_REGISTRY_USER}" \
      --docker-password="${CI_DEPLOY_PASSWORD:-$CI_REGISTRY_PASSWORD}" \
      --docker-email="$GITLAB_USER_EMAIL" \
      -o yaml --dry-run | kubectl replace -n "$KUBE_NAMESPACE" --force -f -

was changed to

kubectl create secret -n "$KUBE_NAMESPACE" \
      docker-registry gitlab-registry \
      --docker-server="https://registry.gitlab.com/v1/" \
      --docker-username="${CI_DEPLOY_USER:-$CI_REGISTRY_USER}" \
      --docker-password="${CI_DEPLOY_PASSWORD:-$CI_REGISTRY_PASSWORD}" \
      --docker-email="$GITLAB_USER_EMAIL" \
      -o yaml --dry-run | kubectl replace -n "$KUBE_NAMESPACE" --force -f -

which resolved the issue.
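
The server string matters because the kubelet looks up credentials in .dockerconfigjson by the registry host of the image being pulled. Decoding the secret shows exactly which key the credentials were stored under; a quick check, reusing the secret name from the job above:

kubectl get secret gitlab-registry -n "$KUBE_NAMESPACE" \
      -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
      # Shows the "auths" map and the registry entry the credentials are keyed by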

To my mind, this is not resolved. Rolling back to the old API is only a workaround, and as of this writing the new API is still broken. I am encountering the exact same issue on DigitalOcean’s managed Kubernetes service. My next step is to roll out my own GitLab installation on DO and manage my code there, which is not something I wanted to go through, at least not yet.

Hi, did you resolve this by installing everything yourself? I am having the same issue.

I am still dealing with the problem. For the time being, I delete the deployment before redeploying it. This causes some downtime, but I’m accepting that for now. I suspect it has something to do with multiple containers trying to use an access token within a short period of time, but I haven’t had the bandwidth to research it further.
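
In case it helps anyone following along, the delete-before-redeploy workaround looks roughly like this; the deployment name, release name, chart path, and --set values are placeholders that depend on your chart:

# Delete the existing deployment so the release starts from scratch
# (accepting the brief downtime mentioned above)
kubectl delete deployment my-app -n "$KUBE_NAMESPACE" --ignore-not-found

# Reinstall/upgrade the release; names and --set values are chart-specific placeholders
helm upgrade --install my-release ./chart --namespace "$KUBE_NAMESPACE" \
      --set image.repository="$CI_REGISTRY_IMAGE" \
      --set image.tag="$CI_COMMIT_SHA"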