Migrating or keeping a Docker registry with a different naming schema

Hello! I am migrating away from Jenkins + Bitbucket + a self-hosted Docker registry to self-hosted GitLab.
Since my self-hosted Docker registry already holds 2.5 TB of data, I would love to keep it.
I have changed the Omnibus configuration so that GitLab uses my existing registry.

I would love to use GitLab features such as automatic cleanup policies, the predefined CI variables for pushing images during jobs, and read-only access for pulling. But my images do not seem to be recognized.

The reason is that my naming schema differs from what GitLab expects.
So far we have tagged our images as registry.mydomain.com/<vendor>/<appName>.

GitLab, on the other hand, expects this naming schema: registry.mydomain.com/<group>/<project>/<image>
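For illustration, once images follow that schema, a push from a job would look roughly like the sketch below, using the registry variables GitLab injects into every job (CI_REGISTRY, CI_REGISTRY_USER, CI_REGISTRY_PASSWORD, CI_REGISTRY_IMAGE). The job name and the image name "backend" are made up, and it assumes Docker is available to the runner (docker:dind service or a shell executor):

push-example:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # CI_REGISTRY_IMAGE expands to registry.mydomain.com/<group>/<project>
    - docker build -t "$CI_REGISTRY_IMAGE/backend:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE/backend:$CI_COMMIT_REF_SLUG"

The resulting name, registry.mydomain.com/<group>/<project>/backend, is what GitLab ties to the project, and that is exactly where it diverges from our existing <vendor>/<appName> tags.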

Is it possible to associate a project with specific images that do not follow this naming schema?
In our monorepo we build about 20 images. Unfortunately, renaming the images is not an option: we use Docker so that our customers can trigger their own updates with the click of a button, and the underlying system then pulls images by the same, fixed names. Probably the only way to change the names would be some reverse-proxy magic; I am not sure whether somebody has done that before or whether it is even possible (a rough sketch of the idea follows below).
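To make the reverse-proxy idea a bit more concrete, here is a minimal sketch of what I imagine, using Traefik's file provider purely as an example: a path rewrite on the Registry v2 API that maps an old name to a GitLab-style name. Everything in it is hypothetical (the names acme/backend and customers/monorepo/backend, the backend URL), and it glosses over the fact that the registry's token auth is scoped to the repository name, so rewritten requests would probably still fail authorization:

# Hypothetical Traefik dynamic configuration (file provider)
http:
  routers:
    registry-old-name:
      rule: "PathPrefix(`/v2/acme/backend/`)"
      service: gitlab-registry
      middlewares:
        - rename-backend
  middlewares:
    rename-backend:
      replacePathRegex:
        regex: "^/v2/acme/backend/(.*)"
        replacement: "/v2/customers/monorepo/backend/$1"
  services:
    gitlab-registry:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:5000"

If anyone has actually pulled something like this off (with Traefik, nginx, or anything else), I would be very interested in how the auth side was handled.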

Thank you in advance for any ideas on how to solve this! I am at my wits' end.

  • What version are you on? Are you using self-managed or GitLab.com?
    GitLab 13.10.2-ee (Self-managed)
    GitLab Shell 13.17.0
    GitLab Workhorse v13.10.2
    GitLab API v4
    Ruby 2.7.2p137
    Rails 6.0.3.4
    PostgreSQL 12.6
    Redis 6.0.10

  • Add the CI configuration from .gitlab-ci.yml and other configuration if relevant (e.g. docker-compose.yml)


before_script:
  - yarn install --cache-folder .yarn

cache:
  key:
    files:
      - package.json
      - yarn.lock
    prefix: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/
    - .yarn
  when: 'always'

stages:
  - test
  - prepare
  - build
  - publish
  - post-publish


unit-tests:
  stage: test
  script:
    - yarn run test:unit:xunit
  artifacts:
    when: always
    paths:
      - junit.xml
      - coverage/cobertura-coverage.xml
    reports:
      junit: junit.xml
      cobertura: coverage/cobertura-coverage.xml
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
      - .yarn
    policy: pull

prepare-version:
  stage: prepare
  script:
    - 'echo "{ \"name\": \"${CI_COMMIT_BRANCH}\" }" > server/VERSION'
    - PREPARE=true semantic-release --dry-run
  artifacts:
    paths:
      - server/VERSION
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
      - .yarn
    policy: pull

build-docker-images: 
  stage: build
  dependencies:
    - prepare-version
  script:
    - 'BRANCH_NAME=${CI_COMMIT_BRANCH} docker-compose -f docker/docker-compose-ci.yml build --build-arg GIT_COMMIT=${CI_COMMIT_SHA} --build-arg BRANCH_NAME=${CI_COMMIT_BRANCH}'
  cache: {}

publish:
  stage: publish
  script:
    - semantic-release
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
      - .yarn
    policy: pull

publish-version-info:
  stage: post-publish
  script:
    - 'node ./scripts/upload-app-store-file.js -f ./docker/docker-compose-privileged.yml -n docker-compose-privileged-${CI_COMMIT_BRANCH}.yml'
    - 'node ./scripts/upload-app-store-file.js -f ./docker/docker-compose-unprivileged.yml -n docker-compose-unprivileged-${CI_COMMIT_BRANCH}.yml'
    - 'node ./scripts/upload-app-store-file.js -f ./docker/docker-compose-onCloud.yml -n docker-compose-onCloud-${CI_COMMIT_BRANCH}.yml'
    - 'node ./scripts/upload-app-store-file.js -f ./docker/docker-compose-setup.yml -n docker-compose-setup-${CI_COMMIT_BRANCH}.yml'
  only:
    refs:
      - master
      - beta
      - release
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
      - .yarn
    policy: pull