I’m trying to get CI/CD set up for my company and am just unclear on some conceptual bits that I need to put this all together. Any help is appreciated.
When code is pushed up to GitLab, I'm using the docker-in-docker approach to build a new Docker image that includes the source changes, and I then want to run that image, replacing the container from the previous build.
In my .gitlab-ci.yml I've included the commands to build and run the container, and the pipeline finishes successfully. However, the container started during that job does not keep running on the host machine after the pipeline finishes, as I had expected.
My gitlab-runner is running on the server that I am deploying to, and I had expected the container created during the CI/CD pipeline would survive on the host afterwards. What do I need to change in my approach to end up with a newly minted container on my server when all is said and done?
Do I need to push the new docker image up to the GitLab registry, pull it back down, and launch it via some additional script as detailed here? If so, can anyone give me some insight into what is in that mysterious deploy.sh used in that example?
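To illustrate, the job in question looks roughly like this (a simplified sketch with placeholder names, not my exact config):

```yaml
build_and_run:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t myapp:latest .
    # This container is started inside the job's docker-in-docker service,
    # so it disappears once the job finishes.
    - docker run -d --name myapp myapp:latest
```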
Everything started by gitlab-runner is stopped once the runner's job exits.
That's why your container is stopped and removed after the build job completes.
For the deployment stage, I have two ideas (rough sketches of both are at the end of this post):
- Build and push the image to the GitLab registry, then use SSH to log in to your server, pull the new image, and deploy it on your Docker host. The private key for SSH access can be configured as a CI/CD variable in the build environment.
- Configure a GitLab runner with the shell executor on your server, and make it only pick up jobs with a tag that only you use on your deploy job. Because the shell executor has full access to the host shell, it can start a Docker container that keeps running after the job exits.
I've successfully set up a shell-executor runner this way for our test system.
But for a production system, I think the SSH approach is better.
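Rough sketches of both ideas follow. Job names, the tag, and the SSH_PRIVATE_KEY variable are examples you would define yourself; the $CI_* variables are GitLab's predefined ones.

```yaml
# Idea 1: build and push to the registry, then deploy over SSH.
stages:
  - build
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy_ssh:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - eval "$(ssh-agent -s)"
    # SSH_PRIVATE_KEY is a CI/CD variable you add in the project settings.
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  script:
    # For production, add the host key to known_hosts instead of disabling checking.
    - ssh -o StrictHostKeyChecking=no deploy@my.server './deploy.sh'
```

```yaml
# Idea 2: a deploy job that only runs on the tagged shell-executor runner
# installed on the server, so the started container outlives the job.
deploy_shell:
  stage: deploy
  tags:
    - my-secret-deploy-tag
  script:
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
    - docker rm -f myapp || true
    - docker run -d --name myapp --restart unless-stopped "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```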
Thanks @john299, this makes more sense now. I've already gone to the trouble of learning Docker, so I'm definitely going with the first option (registry + SSH). I think what I'll do is add my SSH key to the build environment as you suggest and run something like:
ssh deploy@my.deploy.destination './deploy.sh'
Where deploy.sh is responsible for pulling and running the container representing the most recent build.
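Something along these lines, I'm thinking (the image/container names, port mapping, and registry login are placeholders for my actual values):

```bash
#!/bin/sh
# deploy.sh -- pull the latest image and replace the running container.
set -e

IMAGE="registry.gitlab.com/mygroup/myproject:latest"
NAME="myapp"

# If the image lives in a private registry, log in first, e.g. with a
# deploy token stored on the server:
# docker login -u "$DEPLOY_USER" -p "$DEPLOY_TOKEN" registry.gitlab.com

docker pull "$IMAGE"

# Remove the container from the previous build, if there is one.
docker rm -f "$NAME" 2>/dev/null || true

# Start the new container; --restart keeps it running across reboots.
docker run -d --name "$NAME" --restart unless-stopped -p 80:8080 "$IMAGE"
```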