Self-hosted Gitlab Docker Registry error pulling image "failed to register layer: unexpected EOF"

We have GitLab with a configured Docker registry. Everything works fine, but recently we started experiencing errors while pulling images:

docker pull < ... image url ... >                                                                                                                                    
master-a7daebc0: Pulling from < .. repo name ... >
4abcf2066143: Already exists 
bd21029f2ae9: Already exists 
a3b94d5a4cc1: Already exists 
a30645fbb7ab: Extracting [==================================================>]  9.551MB/9.551MB
0adf47132b26: Download complete 
7a0214e1401e: Download complete 
failed to register layer: unexpected EOF

Images are built and pushed to the registry in a pipeline by kaniko, and it never reports any errors; it always finishes with a successful status.
When this problem occurs we just rerun our build pipeline so that kaniko re-pushes the image with the same tag. That usually helps, but sometimes it takes several attempts.
It doesn't happen often, maybe once a week.
I couldn't find any relevant logs in the GitLab registry component except for lines like this:

current:2024-03-07_13:56:06.17125 time="2024-03-07T13:56:06.171Z" level=error msg="manifest unknown" auth_project_paths="[ ... project name ... ]" auth_user_name= auth_user_type=deploy_token code=MANIFEST_UNKNOWN correlation_id=01HRCK69CPPXDQN0GT2ZGK11GP detail="unknown tag=1a672eaa0413d8a884b80d429c56ab589b7f3227b4b6a7ad854ee7574a873b97" error="manifest unknown: manifest unknown" go_version=go1.20.11 host= ... registry domain name ... method=GET remote_addr= root_repo=... repo name .../cache/manifests/1a672eaa0413d8a884b80d429c56ab589b7f3227b4b6a7ad854ee7574a873b97 user_agent=go-containerregistry/v0.4.1-0.20210128200529-19c2b639fab1 vars_reference=1a672eaa0413d8a884b80d429c56ab589b7f3227b4b6a7ad854ee7574a873b97 version=v3.87.0-gitlab

But I'm not sure whether they are really connected to the problem.

One of my guesses is that it could be a network-related problem: maybe some packets are lost and that's why an image isn't complete. But wouldn't kaniko (or any other client) report a failure in such circumstances? Or maybe there's a bug in kaniko?

Our GitLab is 16.7 and kaniko is 1.6.0.

I know it's a rather ambiguous issue, so any help would be appreciated.


I've finally figured it out. The cause was that kaniko in our pipeline was running two builds at the same time. Presumably, when they were pushed to the registry, they somehow overwrote each other's layers. The two builds were for two different image tags, but kaniko allows assigning multiple tags to a single build, so the second build was unnecessary. After I rearranged the pipeline, the problem was gone.
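For anyone hitting the same thing: instead of two concurrent kaniko jobs, a single build can push all tags at once by repeating the `--destination` flag. A minimal sketch of such a `.gitlab-ci.yml` job (the job name, Dockerfile path, and tag names here are illustrative, not our exact pipeline):

```yaml
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # One kaniko invocation builds the image once and pushes it under
    # both tags, so two pushes can no longer race each other.
    - /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/Dockerfile"
      --destination "$CI_REGISTRY_IMAGE:master-$CI_COMMIT_SHORT_SHA"
      --destination "$CI_REGISTRY_IMAGE:latest"
```

Both pushed tags then reference the exact same manifest and layers, which removes the concurrent-write window entirely.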