Passing a container image between jobs using the Container Registry seems like the obvious choice, but what if passing it as an artifact or through the cache is faster?
Has anybody tried it?
Pushing to the Container Registry might be slower, because the registry is a different node from the CI/CD runners, probably in another cluster, and the connection to it might be slow. The image is uploaded to the registry over HTTPS, which adds overhead for encryption and authentication.
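For reference, the registry flow I have in mind is the standard one from the GitLab docs, roughly like this (the `docker:24` tags and the echo placeholder are my assumptions, the `CI_REGISTRY_*` variables are GitLab's predefined ones):

```yaml
build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

test:
  stage: test
  # pull the image built in the previous stage straight from the registry
  image: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  script:
    - echo "run tests here"   # placeholder
```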
Both choices are good, but it all depends on your setup / scenario / use case.
Lots of variables: image size, network throughput, etc.
Artifacts are also uploaded and then downloaded again… so I believe you have an even bigger problem there, because the Docker Registry at least caches layers, so upload/download should go relatively fast; if you .tar the image and upload it, you're doomed (trust me, I'm doing something similar with image .tars since I don't have a registry in production). Also note that your GitLab storage consumption will skyrocket in this case.
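To be concrete, the .tar-as-artifact pattern I mean looks roughly like this (the `myapp:ci` tag and `docker:24` version are placeholders); note that `docker save` dumps the whole image, with none of the layer reuse a registry gives you:

```yaml
build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t myapp:ci .
    - docker save myapp:ci -o image.tar   # full image dump, no layer dedup
  artifacts:
    paths:
      - image.tar

test:
  stage: test
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker load -i image.tar   # the artifact is downloaded automatically
```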
If you use the cache it might work, but in that case you need to make sure both jobs are picked up by the exact same runner so the cache can be shared. If you have a more complex setup and you're uploading to a shared cache server… then you are back to the same issue as with artifacts.
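A sketch of that "same runner" constraint, assuming a single runner with a hypothetical tag `pinned-runner` and a locally stored cache:

```yaml
.same-runner: &same-runner
  tags:
    - pinned-runner   # hypothetical tag: pins both jobs to one specific runner

build:
  <<: *same-runner
  stage: build
  script:
    - docker build -t myapp:ci .          # myapp:ci is a placeholder
    - mkdir -p .image
    - docker save myapp:ci -o .image/myapp.tar
  cache:
    key: "$CI_COMMIT_SHA"
    paths:
      - .image/

test:
  <<: *same-runner
  stage: test
  script:
    - docker load -i .image/myapp.tar
  cache:
    key: "$CI_COMMIT_SHA"
    paths:
      - .image/
    policy: pull   # only download the cache, don't re-upload it
```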
So, again, it depends on your setup. If you believe uploading to a closer GitLab server (artifacts) or cache server would be faster than uploading to a further-away Registry that already has layer caching in place… you can try, but I don't think so. Rather, set up another simple registry on the same network as your runners and store in-between builds there.
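Spinning up such a registry next to the runners is a one-liner with the official `registry:2` image; a minimal docker-compose sketch (the port mapping and volume path are my assumptions):

```yaml
# docker-compose.yml for a throwaway registry on the runners' network
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"                       # registry's default port
    volumes:
      - ./registry-data:/var/lib/registry # persist layers between restarts
```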
Cost is definitely an obstacle, but I guess it is possible to remove artifacts as soon as the last job is done.
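`artifacts:expire_in` should cover that (it expires artifacts after a duration rather than exactly when the pipeline ends); something like:

```yaml
build:
  script:
    - docker save myapp:ci -o image.tar   # myapp:ci is a placeholder
  artifacts:
    paths:
      - image.tar
    expire_in: 30 minutes   # cleaned up shortly after the pipeline finishes
```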
For the cache, if GitLab could manage the cache location itself, or even deploy a short-lived registry, that would be awesome, but without real metrics/measurements it is hard to tell whether the registry would improve on the cache or vice versa.