Every time I run the pipeline, the dind image is downloaded from Docker Hub. Besides Docker Hub having a pull limit, I don't see any point in loading it over the internet every time.
So is there a way to add my favorite dind version to my GitLab Docker registry so my pipeline fetches it from there?
Assuming you have a GitLab registry at registry.example.com, a GitLab group called docker and a project called docker, you could try doing this from a machine with Docker installed that is logged into your registry with write permissions:
docker pull docker:dind
docker tag docker:dind registry.example.com/docker/docker:dind
docker push registry.example.com/docker/docker:dind
Then, in the image and services section of your CI/CD configuration, you can reference that image.
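A minimal sketch of such a job (not the exact snippet from the original post; adjust DOCKER_HOST / DOCKER_TLS_CERTDIR to however your runner is set up for docker-in-docker):

# Sketch only: both the job image and the dind service come from the mirrored copy.
build:
  image: registry.example.com/docker/docker:dind
  services:
    - registry.example.com/docker/docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker info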
Obviously, somewhere in the CI/CD you'll need to make sure you are logged into your registry, or make the group/project public so that the image can be pulled without authentication.
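One common way to handle authentication for image/services pulls is a DOCKER_AUTH_CONFIG CI/CD variable. A rough sketch of generating its value, assuming a deploy token (the user and token below are placeholders):

# Generate the value for a DOCKER_AUTH_CONFIG CI/CD variable and set it as a
# masked variable in the project or group settings. Credentials are placeholders.
AUTH=$(printf 'deploy-user:deploy-token' | base64)
echo "{\"auths\": {\"registry.example.com\": {\"auth\": \"${AUTH}\"}}}"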
Awesome @iwalker, that worked, thanks a lot!
I'm now thinking of pulling other frequently used images into my registry as well (Nginx, Solr, PHP, Python). I maintain a variable for the image version I'd like to have in my gitlab-ci.yml, and I'll likely create a script which fetches the images from the official Docker Hub and imports them into my project.
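A rough sketch of such a script (image names, versions and the registry path are placeholders, reusing the docker group/project from the example above):

#!/bin/sh
# Sketch: mirror a few frequently used Docker Hub images into the GitLab registry.
set -e
REGISTRY=registry.example.com/docker/docker
for IMAGE in nginx:1.25 solr:9 php:8.3-fpm python:3.12; do
  docker pull "$IMAGE"
  docker tag "$IMAGE" "$REGISTRY/$IMAGE"
  docker push "$REGISTRY/$IMAGE"
done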
To be honest, I thought there was already something like this available, maybe integrated into GitLab, or some other kind of proxy/cache like there is for APT.
I also need to think about how to keep the images, as I have a 1-day cleanup rule set in the registry. Maybe I'll add another tag like "cache" or "keep" to make them an exception to the rule.
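For example (a sketch; the keep- prefix and the matching cleanup-policy regex are just one possible convention):

# Give a mirrored image a second, stable tag so a "keep tags matching" regex
# such as ^keep-.* in the cleanup policy can exclude it from deletion.
docker tag registry.example.com/docker/docker/nginx:1.25 registry.example.com/docker/docker/nginx:keep-1.25
docker push registry.example.com/docker/docker/nginx:keep-1.25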
According to this link: Speed up job execution | GitLab there is the GitLab Dependency Proxy - I didn't search for that before as it never occurred to me, but that would do what you want without having to pull and push images to your own registry.

Even APT on its own doesn't cache anything for multiple servers - sure, a server will have cached packages for itself, but that is all. You would need to configure APT on all servers to use a proxy like Squid, and then configure Squid to cache X MB/GB depending on the size of the deb packages you want to cache. Theoretically, Squid could be used instead of the GitLab Dependency Proxy, provided the objects it's configured to cache don't exceed the size set in the Squid configuration.
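With the Dependency Proxy, a job pulls through the group-level cache using the predefined CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX variable. Roughly like this (a sketch; the docker/dind images are just examples):

# Sketch: pull the job image and the dind service through the group Dependency Proxy.
build:
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/docker:latest
  services:
    - ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/docker:dind
  script:
    - docker info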
Even with a 1-day cleanup policy, it is still going to keep 5 tags per image as per your screenshot, so those won't get deleted. If you then pull a sixth tag for, say, nginx, only the oldest image will be removed and 5 will still be on your server.
@igittigitt What you’re looking for is something like Sonatype Nexus 3 (my recommendation) or JFrog Artifactory.
You can proxy (and cache) most things using a simple Nexus 3 setup (via a VM or Docker, etc.). It can proxy Docker Hub and cache anything, and you just point all your gitlab-ci files at it. On the first pull, Nexus 3 fetches the image from Docker Hub (or ghcr.io or any other registry you set up); on subsequent pulls from your runners it just serves the cached version.
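For instance, a throwaway Nexus 3 instance can be started with Docker roughly like this (a sketch; port 8082 is an arbitrary choice for the HTTP connector you would assign to the Docker repositories in the Nexus settings):

# Sketch: 8081 is the Nexus web UI, 8082 is an example port for the Docker
# (proxy/group) repositories; data is kept in a named volume.
docker run -d --name nexus -p 8081:8081 -p 8082:8082 -v nexus-data:/nexus-data sonatype/nexus3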
My recommendation is to set up mirrors/proxies for the big ones (Docker Hub, ghcr.io, quay.io, etc.), then set up an overall Docker group repository with an HTTP(S) endpoint and use that in all your gitlab-ci files. That way, no matter where an image is actually hosted, that one endpoint is used to pull and cache it.
Bonus points: it can also cache RPMs, debs, raw files of any sort, and so on.
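In a .gitlab-ci.yml that might look roughly like this (a sketch; the hostname nexus.example.com and port 8082 are placeholders for your Nexus Docker group endpoint, and the exact image paths depend on how the proxy repositories are laid out):

# Sketch: every image is pulled through the single Nexus group endpoint,
# which proxies and caches Docker Hub, ghcr.io, quay.io, etc.
build:
  image: nexus.example.com:8082/docker:latest
  services:
    - nexus.example.com:8082/docker:dind
  script:
    - docker info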