I have a project with a pipeline of these stages:
- create a version (take the latest git tag and increment it) and save it to a file in the cache
- a pre-script reads the version file from the cache and exports it as an environment variable
- build the code with the version baked in, saving the binaries to the cache
- on master only: upload the binaries from the cache as artifacts (kept for 3 years) and save the artifact link to a file in the cache
- on master only: create a GitLab release based on the tag and the artifact link
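For reference, the stages above look roughly like this in my `.gitlab-ci.yml` (job names, script names, and the cache key/paths here are simplified placeholders, not my real config):

```yaml
stages:
  - version
  - build
  - upload
  - release

# All jobs share one cache entry keyed by the pipeline,
# so later stages can read files written by earlier ones.
default:
  cache:
    key: "$CI_PIPELINE_ID"
    paths:
      - .ci-cache/

create_version:
  stage: version
  script:
    # Take the latest tag, bump it, and persist it for later jobs.
    - ./bump_version.sh > .ci-cache/VERSION

build:
  stage: build
  before_script:
    # The pre-script: read the cached version and export it.
    - export VERSION=$(cat .ci-cache/VERSION)
  script:
    - ./build.sh "$VERSION"
    - cp -r bin/ .ci-cache/bin/

upload:
  stage: upload
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
  script:
    - ./upload_artifacts.sh .ci-cache/bin > .ci-cache/ARTIFACT_URL

release:
  stage: release
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
  script:
    - ./create_release.sh "$(cat .ci-cache/VERSION)" "$(cat .ci-cache/ARTIFACT_URL)"
```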
Currently I run GitLab Runner in Docker on a single PC, with 2 runners registered in the same `config.toml`. When I used only one of them (locked to my project), my CI worked fine. But once I allowed more than one runner, the cache stopped being shared properly: jobs 1 and 3 ended up on one runner's cache, and jobs 2 and 4 on the other's.
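If I understand the runner behavior correctly, each registered runner keeps its own local cache (with the docker executor, a per-runner Docker volume), so a job picked up by one runner can't see what the other runner cached, which would explain the 1/3 vs. 2/4 split. A trimmed version of my `config.toml` (names and tokens are placeholders):

```toml
concurrent = 2

[[runners]]
  name = "runner-1"
  token = "TOKEN_1"           # placeholder
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    # Without a shared volume or a distributed cache, this runner's
    # cache lives in its own Docker volume, invisible to runner-2.

[[runners]]
  name = "runner-2"
  token = "TOKEN_2"           # placeholder
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
```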
Since the runners share the same host (and the same Docker daemon), this is strange. I did see a way to point the cache at an S3-like service on my local network, but the docs present it as part of the autoscale/docker-machine setup, which I don't use.
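That said, as far as I can tell the distributed cache is not actually limited to autoscaling: the `[runners.cache]` section seems to work with the plain docker executor too. So maybe I could run a local MinIO container on the same PC and point every runner at it, something like this (the address, credentials, and bucket name are placeholders, and the exact TOML layout may differ between runner versions):

```toml
[[runners]]
  executor = "docker"
  [runners.cache]
    Type = "s3"
    Shared = true                            # one cache shared by all runners
    [runners.cache.s3]
      ServerAddress = "192.168.1.10:9000"    # placeholder: local MinIO instance
      AccessKey = "minio-access-key"         # placeholder
      SecretKey = "minio-secret-key"         # placeholder
      BucketName = "runner-cache"
      Insecure = true                        # plain HTTP on the LAN
```

MinIO itself could be started with something like `docker run -p 9000:9000 minio/minio server /data` (flags from memory).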
Is there a neat trick to make the cache work across multiple runners on the same Docker host? The plan is to register 10-15 runners on that PC and share them between all my projects.
I saw there is a way to define my own script that runs before the docker executor starts a job, so maybe if I mount a shared directory with `-v a:b` and copy the cache from there, I can share it among all instances?
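Or maybe I don't even need a custom pre-build script: the docker executor config already has a `volumes` list, so if every runner mounts the same host directory as `/cache` (which I believe is where the local cache archives go), they would all read and write the same cache. The host path below is just an example:

```toml
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    # Every runner that mounts this same host path would share
    # the local cache archives stored under /cache.
    volumes = ["/srv/gitlab-runner/cache:/cache"]
```

Would that be enough, or does the cache key also encode the runner identity somehow?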