CI job fails when pushing an image to the registry due to a layer shared with another project

My CI job is consistently failing when it tries to push a specific image layer to the project’s registry.

We’re running GitLab CE Omnibus 15.2.0 in our data center on CentOS 7. gitlab-runner 15.2.0 is running on Ubuntu 18.04.

I found a lot of log entries like the following when I grepped for my project’s name on the GitLab host. Am I reading this right, and is my job failing because one image layer is shared with a project the CI job doesn’t have access to?

Here, cijobproject is the project being built and pushed, and otherproject is the project that happens to share an image layer.

gitlab-workhorse/current:379985:

{
    "content_type": "application/json; charset=utf-8",
    "correlation_id": "01G90KF5E5FYM1VP4MDW4KS8T9",
    "duration_ms": 18,
    "host": "gitlab.example.org",
    "level": "info",
    "method": "GET",
    "msg": "access",
    "proto": "HTTP/1.1",
    "referrer": "",
    "remote_addr": "127.0.0.1:0",
    "remote_ip": "127.0.0.1",
    "route": "",
    "status": 401,
    "system": "http",
    "time": "2022-07-27T12:52:18-07:00",
    "ttfb_ms": 18,
    "uri": "/jwt/auth?account=gitlab-ci-token&scope=repository%3Ausername%2Fcijobproject%2Fapp%3Apush%2Cpull&scope=repository%3Aweb-services%2Fotherproject%2Fapp%3Apull&service=container_registry",
    "user_agent": "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/4.15.0-189-generic os/linux arch/amd64 UpstreamClient(Go-http-client/1.1)",
    "written_bytes": 74
}

gitlab-rails/production_json.log:2671:

{
    "method": "GET",
    "path": "/jwt/auth",
    "format": "html",
    "controller": "JwtController",
    "action": "auth",
    "status": 401,
    "time": "2022-07-27T19:52:18.134Z",
    "params":
    [
        {
            "key": "account",
            "value": "gitlab-ci-token"
        },
        {
            "key": "scope",
            "value": "repository:web-services/otherproject/app:pull"
        },
        {
            "key": "service",
            "value": "container_registry"
        }
    ],
    "remote_ip": "buildserveripv4",
    "ua": "docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/4.15.0-189-generic os/linux arch/amd64 UpstreamClient(Go-http-client/1.1)",
    "request_urgency": "low",
    "target_duration_s": 5,
    "db_count": 5,
    "db_write_count": 0,
    "db_cached_count": 1,
    "db_replica_count": 0,
    "db_primary_count": 5,
    "db_main_count": 5,
    "db_main_replica_count": 0,
    "db_replica_cached_count": 0,
    "db_primary_cached_count": 1,
    "db_main_cached_count": 1,
    "db_main_replica_cached_count": 0,
    "db_replica_wal_count": 0,
    "db_primary_wal_count": 0,
    "db_main_wal_count": 0,
    "db_main_replica_wal_count": 0,
    "db_replica_wal_cached_count": 0,
    "db_primary_wal_cached_count": 0,
    "db_main_wal_cached_count": 0,
    "db_main_replica_wal_cached_count": 0,
    "db_replica_duration_s": 0.0,
    "db_primary_duration_s": 0.002,
    "db_main_duration_s": 0.002,
    "db_main_replica_duration_s": 0.0,
    "cpu_s": 0.013884,
    "mem_objects": 5058,
    "mem_bytes": 323488,
    "mem_mallocs": 1302,
    "mem_total_bytes": 525808,
    "pid": 100739,
    "worker_id": "puma_5",
    "rate_limiting_gates":
    [],
    "correlation_id": "01G90KF5E5FYM1VP4MDW4KS8T9",
    "db_duration_s": 0.00195,
    "view_duration_s": 0.00088,
    "duration_s": 0.00914
}
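If I’m reading the scopes right, the second pull scope looks like the Docker client attempting a cross-repository blob mount for the shared layer. To take CI out of the picture, the same token request can be reproduced with curl. This is only a sketch: the host and the repository paths are copied from the log above, the credentials are placeholders, and I’m assuming a personal access token with read_registry/write_registry authenticates here the same way a docker login would.

# Request a registry token with the same two scopes the job asked for
curl --silent --user "myusername:$MY_PERSONAL_ACCESS_TOKEN" \
  "https://gitlab.example.org/jwt/auth?service=container_registry&scope=repository:username/cijobproject/app:push,pull&scope=repository:web-services/otherproject/app:pull"

A JSON response containing a token field means the account can satisfy both scopes; anything else reproduces the 401 from the job.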

If I’m correct in my interpretation, how in the world do I fix this issue?

Thanks in advance!

I upgraded today because of the security release. Unfortunately, I’m running into the same issue on another project, both before and after the upgrade.

Anyone have any ideas?

Edit:

Since I hadn’t tried it yet, I did a manual push of the image built on my local machine. I still hit that “denied: requested access to the resource is denied” message.

Which is bizarre. Not only do I have Maintainer access on all the involved projects, but my user is an administrator on the instance.
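For reference, the manual attempt was nothing fancier than the usual login/tag/push sequence; the registry host below is a placeholder for whatever your instance’s registry address is.

docker login gitlab.example.org:5050
docker tag app:latest gitlab.example.org:5050/username/cijobproject/app:latest
docker push gitlab.example.org:5050/username/cijobproject/app:latest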

It helps to enable the container registry for the project…

For the record, these two projects were created before GitLab had a container registry, so…
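In case it saves someone else a few hours: the registry can be turned on per affected project under Settings > General > Visibility, project features, permissions > Container registry, or (untested sketch; double-check the parameter name against your version’s API docs) via the projects API:

# Enable the Container Registry on a project; PROJECT_ID and the token are placeholders
curl --request PUT \
  --header "PRIVATE-TOKEN: $ADMIN_TOKEN" \
  --data "container_registry_access_level=enabled" \
  "https://gitlab.example.org/api/v4/projects/PROJECT_ID"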

Still, why are all the most frustrating issues the ones with the most obvious solutions?