Hello everyone,
I have been working with GitLab CI for some time now and have read through its documentation. I have also searched the GitLab forum and other resources, but I could not find anywhere that these questions were asked or addressed head on, so I am hoping to get some help here. The GitLab CI feature is great, right up to the point where the scripts for a real deployment have to run.
I would like to deploy my apps using the .gitlab-ci.yml file. My applications run in a Docker environment, and we know that the Docker daemon runs as root. We register a runner that, one way or another, needs access to docker.sock on the actual server we are deploying to. That means that whoever triggers a deployment pipeline from GitLab, no matter who they are, can change the deployment scripts and run anything they like on that server, up to and including exploits against it.
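For context, the runner is registered roughly like this, with the host Docker socket mounted into every job container (a sketch; the URL, token and image are placeholders, not our real values):

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REDACTED" \
  --executor "docker" \
  --docker-image "docker:24" \
  --docker-volumes "/var/run/docker.sock:/var/run/docker.sock"

That last volume mount is the whole problem: every job that lands on this runner can talk to the host's Docker daemon as root.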
Example:

deploy_to_prod:
  stage: deploy
  script:
    - docker rm -f $(docker ps -aq)            # removes every container on the server
    - docker swarm leave -f                    # if the runner sits on a manager, forces it out of the swarm, breaking the whole cluster and orchestration in production
    - docker run --name test -v /:/host/root … # good luck, full host control from inside the container
    - ls /root                                 # browse the file system and any secrets that happen to be there

…and so on.
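For completeness, the job I actually want to run is harmless in itself, something like the sketch below (the stack name and compose file are placeholders for our real setup):

deploy_to_prod:
  stage: deploy
  script:
    # update the services on the swarm that the runner's docker.sock points at
    - docker stack deploy --with-registry-auth -c docker-compose.yml mystack
  environment:
    name: production

The problem is not this job; it is that anyone who can edit .gitlab-ci.yml can swap it for the scripts above.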
I have already tested all of this and it is entirely possible. Is there a way to secure it? And if there is, how would that even work? Docker runs as root, and I need to run docker commands through the CI automation to start the services on the server. Can we say that, by design, GitLab CI is not secure for deployments? If developers, whether or not they are authorized to deploy, can change the .gitlab-ci.yml file in their repository however they like and put any script into it, then what is the point of this automation feature for deployments?
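The only mitigation I can think of is limiting the deploy job to a protected branch, for example with rules like this (a minimal sketch; the variables are GitLab's predefined ones):

deploy_to_prod:
  stage: deploy
  script:
    - docker stack deploy --with-registry-auth -c docker-compose.yml mystack
  rules:
    # run only on pipelines for the default (protected) branch
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

But as far as I can tell, that only controls when the job runs: anyone who can get a change to .gitlab-ci.yml onto that branch still gets the runner, and with it full access to docker.sock.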