GitLab CI + Deployment

Hello all,
Any help from anyone would be greatly appreciated.

I’m planning to use GitLab CI for automatic deployment to my local server, i.e. 10...***.
I’m able to set up the .gitlab-ci.yml file, and the test builds are showing as passed.

But here is my doubt: how can it deploy automatically to my server? Do I need to configure SSH keys or secret keys anywhere? How does GitLab know which server to use and which script to run whenever a merge request is created against the master branch?

Could you please give a sample piece of code, or the steps to configure this?

Yes. Your CI multi-runner (wherever/whatever that is) will need access to your local server, most likely via pre-shared SSH keys, if you want it to be able to publish code directly and automatically.

Read up on the CI documentation for SSH keys and that might get you pointed in the right direction.
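The usual pattern there (a sketch along the lines of the docs; SSH_PRIVATE_KEY is a secret CI variable you would have to define yourself, and your-server is a placeholder hostname) is to load the key in a before_script:

before_script:
  - eval $(ssh-agent -s)                           # start an ssh-agent for this job
  - echo "$SSH_PRIVATE_KEY" | ssh-add -            # private key stored as a secret CI variable
  - mkdir -p ~/.ssh
  - ssh-keyscan your-server >> ~/.ssh/known_hosts  # placeholder server hostname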

@cmtonkinson is right; I think you should indeed read up on the link concerning SSH keys, and also on deploy keys.

Then the simplest setup is to use the shell runner to execute commands on the local server you want to deploy to (i.e. run some git pull, checkout, and install/update commands).
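A minimal sketch of such a job (the application path and branch are placeholders, and the runner’s user is assumed to have write access to that directory):

deploy:
  stage: deploy
  script:
    - cd /var/www/myapp        # placeholder application directory
    - git pull origin master   # update the checkout on the server itself
  only:
    - master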

@stefanvangastel @cmtonkinson I’m a bit confused about runner types (e.g. shell vs. docker). Will a shell runner on my VPS just execute my CI script directly on my VPS? If so, wouldn’t that cause all kinds of problems when it is testing? I get how it works with a Docker container, since after testing the container is just destroyed and a new one is created when you test again. But how does that work on a VPS with a shell runner?

Yes, kinda, but…

It may be more accurate to say that when you use the shell runner, the build does run in a shell on the ci-multi-runner host. In most cases, installing and starting the ci-multi-runner service will create a gitlab_runner user (I think that’s what it’s called) and run builds:

  • somewhere under e.g. /home/gitlab_runner/
  • with the permissions of the gitlab_runner user

So, any “damage” that user could do to a system, a malicious (or stupid, or accidental, or …) build script can do.


You haven’t specified whether you have one VPS hosting both GitLab and the GitLab CI runner, or whether your runner is on a different host than GitLab. Assuming your build scripts aren’t going to actually break anything, the other risk is that you could negatively impact the performance of GitLab itself if your build tasks are very resource-intensive.

@arsbanach You should separate some things:

CI build / test jobs: A shell runner is not the cleanest option for testing or building; I believe you should use the Docker runner for that, so that your VPS stays clean (for testing, building, etc.). So you are right on that.
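For example (a minimal sketch; the image and test command are placeholders):

test:
  stage: test
  image: ubuntu:16.04   # the job runs in a throwaway container
  script:
    - ./run_tests.sh    # hypothetical test script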

Deploy: In my opinion you are mixing up two things: provisioning and deployment. E.g. installing MySQL is provisioning, i.e. preparing your server or VPS by installing the prerequisites for your application.
Deploying is installing or updating your application. Using the GitLab CI shell runner, that would typically mean (in some of my cases) that you use the CI commands to update your application by doing:

  • a cd to the application directory
  • git pull && git checkout <tagname> or something
  • run some kind of update and/or database migration script.

Also, since you are not building or testing any code, your runner doesn’t have to clone or fetch any repo code in a deploy job, so you can use GIT_STRATEGY: none (see the sketch below).
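Put together, a deploy job could look roughly like this (a sketch; the directory and update script names are placeholders):

deploy:
  stage: deploy
  variables:
    GIT_STRATEGY: none                     # don't clone/fetch the repo for this job
  script:
    - cd /var/www/myapp                    # placeholder application directory
    - git pull && git checkout <tagname>
    - ./update.sh                          # hypothetical update / migration script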

I think not using containers will make life extremely hard for you, since you will have to check if stuff is already installed or not, clean databases you used in a previous job, update schemas, etc.

Maybe you should look into container linking some more to make the setup with containers work.
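For example, with the Docker runner, the services: keyword links an extra container to your job (a sketch; the image tags and credentials are placeholders):

test:
  image: ubuntu:16.04
  services:
    - mysql:5.7                   # linked service container, reachable as host "mysql"
  variables:
    MYSQL_ROOT_PASSWORD: secret   # placeholder credentials for the mysql image
    MYSQL_DATABASE: myapp_test
  script:
    - apt-get update && apt-get install -y mysql-client
    - mysql -h mysql -u root -psecret -e 'SHOW DATABASES;'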

I don’t know CentOS much, though you could distill some tricks from this Dockerfile to apply in your own: https://github.com/CentOS/CentOS-Dockerfiles/tree/master/mysql/centos6

Personally, I use an Ubuntu 16.04 image with MySQL, Apache and PHP for testing (I call them All-In-One images), and build Alpine or Ubuntu images with only Apache and PHP for production.

@arsbanach We run everything in an offline, on-premise infrastructure, so I use a Docker-in-Docker image as the GitLab runner image and use docker-compose or docker commands to connect stuff together. That could also be a solution, and it doesn’t restrict you to .gitlab-ci.yml options.
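A sketch of the Docker-in-Docker pattern (the image name "myapp" is a placeholder):

deploy:
  image: docker:latest
  services:
    - docker:dind                    # Docker-in-Docker service container
  variables:
    DOCKER_HOST: tcp://docker:2375   # point the docker client at the dind daemon
  script:
    - docker build -t myapp .        # placeholder image name, assumes a Dockerfile in the repo
    - docker run -d --name myapp myapp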

@cmtonkinson
Thanks

I’m currently using a shell runner on our test instance, using it to build our project, and if that works, we want to automagically deploy to our test instance. However, we’re failing to accomplish the final step: trying to execute the binary as a fork, e.g. ./bin/mybinary &!, but the GitLab runner waits for my process to finish. Do you know a better approach to “run a binary and finish” that would work on the runners?
Thanks in advance

Why would you run the command with &? If the binary finishes, its exit code is used to determine the result. So don’t run it in the background with &.

Yeah, the thing is the binary is a server, so it shouldn’t stop; but I want to tell the runner that the job is done, leave the server running, and mark the pipeline as finished. Am I tackling the problem the wrong way, or is there another path to make this work?

I don’t think it’s tackling the problem the wrong way entirely.

I do think you should maybe run your application / server as a service using a process control system like supervisord or systemd, and simply update and restart the service; that way you don’t depend on the process started by the CI job.

A quick win could be running the command with nohup ./bin/mybinary &. That should detach the process.
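A minimal sketch of the systemd approach (assuming a unit named mybinary.service already exists on the server, the runner user may call systemctl via sudo, and the install path is a placeholder):

deploy:
  stage: deploy
  script:
    - cp bin/mybinary /opt/myapp/bin/   # placeholder install path
    - sudo systemctl restart mybinary   # restart the (hypothetical) service
  only:
    - master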

I have a dev & prod runner with tags to identify which jobs run on which runners (see below). We have an ASP.NET site, so we use shell runners and have the shell call PowerShell “utilities” we have written to perform certain tasks. Perhaps that is all you need?

Example:

Deploy_Staging:
  stage: Deploy
  script:
    - echo "Deploy Staging"
    - echo "Run PowerShell script 'Deploy' which can be set up to perform any deployment steps required."
    - PowerShell.exe -ExecutionPolicy Bypass -Command "& '.\Utilities\Deploy.ps1'"
  environment:
    name: Staging
    url: http://your-site-here.com/
  when: manual
  tags:
    - DEV
  only:
    - branches
  dependencies:
    - Build

You could then break your deployment down into different PowerShell scripts as required, or across different jobs. Also, for the production deploy you can include when: manual, which prevents check-ins to master from being auto-deployed. Another useful one is only: master, again for your production deploy job.
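For instance, a production counterpart of the staging job above might look like this (a sketch; the PROD tag and the reuse of Deploy.ps1 are assumptions):

Deploy_Production:
  stage: Deploy
  script:
    - PowerShell.exe -ExecutionPolicy Bypass -Command "& '.\Utilities\Deploy.ps1'"
  environment:
    name: Production
  when: manual    # requires a manual trigger, so master check-ins are not auto-deployed
  tags:
    - PROD
  only:
    - master      # only run for the master branch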

PS: Anyone have any good information on Docker? A “GitLab with Docker for dummies” guide would be great!