Deploying a server with a custom port

I’m struggling to understand the best method of deploying a custom server app using GitLab CI/CD.

First method I tried:

  • I have a GitLab Runner registered inside a Docker container on a Linux VM, using the shell executor
  • A GitLab CI/CD deploy job runs an application that is built in a previous stage and passed to the deploy stage as artifacts
  • This works, but during testing the server may occasionally crash and I’d like it to restart automatically. I tried to set the app up as a service using systemctl, but the gitlab-runner user hits permission issues (it also feels a bit janky to be running systemctl inside a Docker container)
  • The container the runner resides in was started with a specific published port (via docker run), which is required for the client app to communicate with the server

Second method I’m trying to set up but struggling with:

  • During the build stage I would package the server app into its own Docker image and push it to the container registry
  • During the deploy stage I would run the custom image using a GitLab Runner inside a Docker container, but with the Docker executor
  • This feels like a cleaner setup, but:
    • The parent Docker container publishes the correct port, but how do I get the runner to publish the port of the custom image?
    • I believe that for other pipeline stages the Docker container is stopped when the stage completes, so how do I keep the container running while still signalling success for the deploy job? Ideally I’d want the container to stop when I stop the deployment from the GitLab portal (see the sketch after this list)
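
For what it’s worth on the second method, one common way to get a long-lived container with a published port out of a CI job is to have the deploy step drive the host’s Docker daemon directly (shell executor, or the Docker socket mounted into the runner’s container) rather than rely on the job’s own short-lived container. A minimal sketch, assuming GitLab’s predefined registry variables, and treating the port 7777 and the name myserver as placeholders:

```bash
# Log in to the project's container registry using GitLab's predefined CI variables.
echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
docker pull "$CI_REGISTRY_IMAGE:latest"

# Replace any previous deployment, then start the new container detached.
# --restart unless-stopped restarts the server after a crash, but respects a
# manual "docker stop" (e.g. when tearing the deployment down from GitLab).
docker rm -f myserver 2>/dev/null || true
docker run -d --name myserver --restart unless-stopped -p 7777:7777 "$CI_REGISTRY_IMAGE:latest"
```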

Additional info:

  • The app is a Unity headless server with a corresponding client app
  • I can manually start the app in the Docker container and connect via the client using the custom port. This also works when running it as a systemctl service inside the container
  • While I haven’t tried it yet, presumably it would still work if I manually ran the custom image, since I can specify the port when running the container
  • It feels like I’m missing a few crucial pieces of information to set this all up correctly for automated deployment

A little update on this (although it would still be good to get some advice on the expected process for long-lived deployments that require external port access).

I’ve opted to stay with the first method I was trying. To get it working correctly I’ve modified the image/container to install sudo and systemctl, and used visudo to give the gitlab-runner user sudo access to the specific commands needed to run the app as a service (sketch below).
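
For anyone in the same spot, the pieces look roughly like the following sketch; myserver.service, /opt/myserver, and the systemctl path are placeholders for my actual names, and with Restart=on-failure set in the unit file, systemd handles the automatic restart after a crash:

```bash
# Sudoers rule added with "visudo" (one line): lets gitlab-runner manage only
# this one service, with no password prompt and no blanket sudo access:
#   gitlab-runner ALL=(root) NOPASSWD: /usr/bin/systemctl start myserver.service, /usr/bin/systemctl stop myserver.service, /usr/bin/systemctl restart myserver.service

# The deploy job then swaps in the new build and restarts the service.
# Assumes the gitlab-runner user owns /opt/myserver.
sudo systemctl stop myserver.service
cp -r out/* /opt/myserver/
sudo systemctl start myserver.service
```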

I have read in other threads that the GitLab Runner is intended to be short-lived, and thus that the Docker containers it spins up should be short-lived too. For a deployment, though, it seems to me there may be a specific need for an automatically deployed, long-lived container that needs external port access, as in my case. Is there an expected workflow for this kind of deployment?

GitLab CI isn’t designed for running server apps long term. You would be best off having the CI job run a script that copies the built application to your server (which may or may not be the Runner host) and starts the app.

The script might look something like the following:

```bash
./build.sh                            # Build app
scp -r out example.com:/opt/myapp     # Deploy app (-r copies the directory recursively)
ssh example.com /opt/myapp/start.sh   # Start app; start.sh must run the app in the background somehow
```
Or if you are using Docker, it might look like this:

```bash
docker build -t myapp .               # Build image ("." is the build context)
docker push myapp                     # Push image to registry; command may be slightly wrong
ssh example.com docker run -d myapp   # Run image detached
```
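
To tie this back to the question’s custom port: the docker run on the target host is where the port gets published and the restart behaviour set, regardless of how the runner’s own container was started. A hedged variation of the last line, with 7777 as a placeholder port:

```bash
# Publish the server's port on the host and restart it automatically on crash.
ssh example.com docker run -d --restart unless-stopped -p 7777:7777 --name myapp myapp
```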