How to manage multiple GitLab container registries from a single repository?


A project is developed on GitLab in a single repository, where each subproject contains its own src/ folder, Dockerfile, etc. The structure looks like this:

|_ sub-proj_1/
|_ sub-proj_2/
|_ sub-proj_n/
|_ docker-compose.yml
|_ .gitlab-ci.yml

When the pipeline runs, it is filtered by sub-project and the image is built with Kaniko, then pushed to the GitLab registry under a custom path. The PROJECT variable selects the sub-project, i.e. sub-proj_1/, sub-proj_2/, sub-proj_n/, in the GitLab registry.

  publish_image:
    stage: publish_image
    image:
      # Kaniko debug image assumed; it provides /kaniko/executor and allows an empty entrypoint
      name: gcr.io/kaniko-project/executor:debug
      entrypoint: [""]
    script:
      - PROJECT=$(echo "$CI_MERGE_REQUEST_TITLE" | grep -o -w -e "api" -e "frontend" -e "db")
      - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
      - echo "PROJECT=$PROJECT" >> build.env
      - /kaniko/executor --context $PROJECT --dockerfile $PROJECT/Dockerfile --destination ${CI_REGISTRY_IMAGE}/${PROJECT}:${CI_COMMIT_SHORT_SHA}
    artifacts:
      reports:
        dotenv: build.env
    rules:
      - if: $CI_PIPELINE_SOURCE == 'merge_request_event' && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH && ($CI_MERGE_REQUEST_TITLE =~ /.*api.*/ || $CI_MERGE_REQUEST_TITLE =~ /.*frontend.*/ || $CI_MERGE_REQUEST_TITLE =~ /.*db.*/)
        when: always
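For reference, the grep filter that derives PROJECT from the merge request title can be tried locally; a quick sketch with a made-up MR title:

```shell
# Hypothetical MR title; grep -o prints only the matched word,
# -w restricts matches to whole words
CI_MERGE_REQUEST_TITLE="feat: update frontend login page"
PROJECT=$(echo "$CI_MERGE_REQUEST_TITLE" | grep -o -w -e "api" -e "frontend" -e "db")
echo "$PROJECT"   # prints: frontend
```

Note that if the title contains none of the keywords, grep matches nothing and PROJECT stays empty, which is why the job's rules also gate on the same keywords.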

Images are uploaded to the GitLab registry without problems:


There are custom paths in the GitLab registry associated with each subproject:


The images are pulled into a staging environment by a GitOps controller (ArgoCD), and this error is reported:

Failed to pull image "": rpc error: code = Unknown desc = failed to resolve image "": no available registry endpoint: failed to fetch anonymous token: unexpected status: 403 Forbidden

The repository has been added via: argocd repo add --username xxx --password xxx
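Note that argocd repo add only authenticates Argo CD against the Git repository; it does not give the cluster's kubelet credentials for the container registry. A minimal sketch of the registry side, with hypothetical names and a placeholder token (a GitLab deploy token with read_registry scope is one common choice):

```shell
# Create a docker-registry secret in the target namespace
# (secret name, namespace, and credentials are placeholders)
kubectl create secret docker-registry gitlab-registry-auth \
  --docker-server="$CI_REGISTRY" \
  --docker-username=<deploy-token-username> \
  --docker-password=<deploy-token> \
  --namespace=staging

# Then reference it from the pod spec (or the namespace's default
# service account) via imagePullSecrets:
#   imagePullSecrets:
#     - name: gitlab-registry-auth
```

This is a config fragment against a live cluster, so treat the names as illustrative only.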



As a troubleshooting step, a registry path without the custom sub-project segment was tested, and the deployment succeeded.



In the GitLab documentation, I see there is a path key, but it is described as:

path: This should be the same directory like specified in Registry’s rootdirectory. This path needs to be readable by the GitLab user, the web-server user and the Registry user.


Any suggestions beyond separating the sub-projects (sub_proj_1/, sub_proj_2/, sub_proj_n/) into separate repositories?

Hey there!

I have a similar use case, and for me pulling different images from a single GitLab project works without any issues. Just make sure you are correctly authenticated to the GitLab container registry. (Note: Git repo authentication is not the same as configured registry authentication.)

Perhaps you should simply try pulling the image with Docker (e.g. docker pull, after you’ve authenticated yourself with docker login).
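For example, with placeholder values (the real registry host and image path come from your project’s Container Registry page):

```shell
# Authenticate against the GitLab registry, then try the exact
# image reference the cluster is failing to pull
docker login registry.gitlab.com -u <username> -p <token>
docker pull registry.gitlab.com/<group>/<project>/<sub-proj>:<tag>
```

If this pull succeeds from a machine but the cluster still gets 403, the problem is almost certainly the cluster-side registry credentials, not the image path. (These commands need network access and real credentials, so they are illustrative only.)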

I haven’t used Argo CD or any of that fancy stuff yet, but I believe there should be a place where you configure registry authentication. You can add your username and password to begin with, but long term you might want to configure and use a project access token, or any other suitable token.

Hope this helps! 🙂


The problem has been solved after trying to answer the following questions:

  1. Could it be a URL resolution issue?

I don’t think so, because, as mentioned in point 3 of the troubleshooting section, I removed the custom part of the registry path and now it works properly. Using the same naming convention as the GitLab documentation:

  • Failure:
  • Works:
  2. Could it be an authentication problem?

I can manually pull the Docker image from the cluster.


  3. Could the failed deployment manifest be copied and applied to another namespace, to check that it is not a problem with the cluster configuration?

As I am in the troubleshooting phase, the way to test is via ArgoCD from the VM where I have the GitOps controller:

argocd app create staging --repo --path charts/db --dest-server https://X.X.X.X:16443 --dest-namespace <name> --sync-policy automatic --auto-prune --self-heal

After creating a new namespace in the same cluster, the authentication works properly.
