Implementing CI/CD with GitLab Runner for a Python Application

I currently have a Python web server hosted on a dedicated server. I've configured GitLab CI (via the .gitlab-ci.yml file) so that whenever there's a push to the staging branch, the necessary dependencies are installed, the application is built, and a number of tests are run. I'm using GitLab Runner to clone the latest changes and run all of the pipeline's commands on my server.

My final stage is the deploy stage, where I run the command `python`. This works, but I don't believe it's the right way to implement CI/CD. Although the web server runs successfully, the deploy job stays pending under "Jobs" until I cancel it. I'd like the web server to keep running, but I'm not sure how to solve this. Keep in mind, I'm just getting started with CI/CD.

Basically, after all the jobs in my GitLab CI pipeline finish successfully, how can I automate pulling the latest changes and running the `python` command against them?
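For reference, the pipeline described above might look roughly like this. This is a hypothetical sketch, not my actual file: the stage names, `pytest`, and the `app.py` entry point are placeholders.

```yaml
# Hypothetical .gitlab-ci.yml sketch of the pipeline described above.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - pip install -r requirements.txt

test:
  stage: test
  script:
    - pytest

deploy:
  stage: deploy
  script:
    - python app.py   # blocks in the foreground, so the job never finishes
  only:
    - staging
```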


Within the CI pipeline, are you using the Docker executor, or are these commands fired via the shell executor on a VM?

In order to run the server as a "daemon" in the background, you need to send it there. The simplest option is `python &`, but I wouldn't recommend doing that. Inside Docker, supervisorctl comes in more handy, granting you full control over running services.
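As a sketch of the supervisord approach, a program definition could look like the following. The program name, paths, and entry point are all assumptions for illustration:

```ini
; /etc/supervisor/conf.d/webapp.conf -- hypothetical name and paths
[program:webapp]
command=python /srv/app/app.py
directory=/srv/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/webapp.log
```

After dropping the file in place, `supervisorctl reread && supervisorctl update` picks it up, and `supervisorctl status webapp` or `supervisorctl restart webapp` gives you the control over the service that a bare `python &` never would.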

If you want to go one step further and run the application in a container itself, the pipeline should include the following:

  • build the code
  • create a new container image that installs the latest version of the app
  • push the image to the container registry
  • deploy on the remote host by pulling the container image and restarting the container

This also solves the problem of `python` hanging in the foreground, as that now happens inside a running container.
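The steps above could be sketched in .gitlab-ci.yml roughly like this. The `$CI_REGISTRY*` variables are predefined by GitLab; the SSH target `deploy@myhost`, the container name `webapp`, and the ports are assumptions you'd replace with your own:

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    # log in to the project's container registry with CI-provided credentials
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # build an image tagged with the commit SHA and push it
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  script:
    # on the remote host: pull the new image, remove the old container,
    # and start the new one ("deploy@myhost" and "webapp" are placeholders)
    - ssh deploy@myhost "
        docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA &&
        (docker rm -f webapp || true) &&
        docker run -d --name webapp -p 80:8000 $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  only:
    - staging
```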



These commands are fired via the shell executor; I'm not currently using Docker at all. Let's say I containerize the entire web application with Docker. Docker seems like a rabbit hole of its own. Where would you recommend I start? Also, how would the pipeline stop the currently running server, pull the new changes, and deploy without causing any downtime for the server?


Another possibility would be to run the application with e.g. Nginx and WSGI or similar. Each time you deploy a new version of the script, follow it with an nginx reload command. That way you can make sure everything runs the way you want.
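To make the Nginx + WSGI idea concrete, here is a minimal WSGI application. The module name `app.py` and the body text are assumptions; in this setup a WSGI server (e.g. gunicorn) runs this callable and Nginx proxies requests to it:

```python
# app.py -- minimal WSGI application (hypothetical module name).
# A WSGI server such as gunicorn would load this callable, and Nginx
# would proxy_pass incoming requests to that server.

def application(environ, start_response):
    """WSGI entry point: answer every request with 200 and a text body."""
    body = b"Hello from the staging deploy!\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Running it would look like `gunicorn app:application` behind an Nginx `proxy_pass`; after a deploy, `nginx -s reload` re-reads the config without dropping in-flight connections, which is what gives you the controlled cutover.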

In terms of Docker, I'd suggest building an Nginx image that runs your script. You can do that locally with a Dockerfile, without needing to incorporate it into GitLab CI. Once you've got this working and listening on your desired port, you can create a CI job that builds the image and pushes it to the container registry. The last step is deployment, where you'll connect via SSH (or use Ansible/Terraform) and deploy the container, replacing the old version.
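As a local starting point, a Dockerfile along these lines runs the app in the foreground inside the container, so the container itself is the "daemon". File names and the gunicorn entry point are assumptions; an Nginx container would sit in front of it:

```dockerfile
# Hypothetical Dockerfile; file names and entry point are assumptions.
FROM python:3.11-slim

WORKDIR /app

# copy and install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt gunicorn

COPY . .

EXPOSE 8000
# run the WSGI app in the foreground; stopping the container stops the app
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:application"]
```

Locally, `docker build -t webapp .` followed by `docker run -d -p 8000:8000 webapp` lets you test it end to end before wiring the same build into a CI job.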


PS: I hadn’t been using Docker for a long time also because of “rabbit hole”, but holding off for too long makes you miss certain advantages. You just need to find a task when this can be tested out, and I’d say now you’ll have one :wink: