Hi, I’ve been using GitLab pipelines for a while now, for example to build my own Docker images. My setup is as follows: one VM running GitLab and one VM running the GitLab Runner; both run as Docker containers. I always keep both up to date with the latest (stable, community) version.
On the runner, I use Docker-in-Docker to build the images.
Here is the runner’s config.toml:
concurrent = 1
check_interval = 0
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "my-runner"
  url = "https://***my*gitlab*url***"
  id = 1
  token = "glrt-***"
  token_obtained_at = 2024-03-25T10:20:11Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "docker"
  [runners.cache]
    MaxUploadedArchiveSize = 0
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
    network_mtu = 0
Unfortunately, I can’t seem to control the pipeline’s resources anywhere, and with every new build (from source) the CPU load is extremely high (a load of 24 on 4 cores) and RAM is also heavily used. When the runner VM runs out of memory, strangely it is not the Docker container that gets killed, but other processes running on the server (Ubuntu 22.04).
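Ideally, if memory does run out, the job container itself would be the first OOM victim rather than other host processes. In the docker executor settings I only see oom_kill_disable; I don’t know whether a key for adjusting the OOM score exists or applies to my setup, so the following is only a sketch of what I’m after (the oom_score_adjust key is my assumption, not something I have verified):

  [runners.docker]
    oom_kill_disable = false
    # assumed option: raise the OOM score so the job container is killed
    # before unrelated processes on the host
    oom_score_adjust = 500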
And sometimes the build process gets completely stuck: the build’s Docker container no longer exists, but its process keeps running on the VM at a permanent 100% CPU, and the pipeline in GitLab keeps running even though the project’s configured timeout has long been exceeded.
Is there any way to control, on the Docker side, how much RAM or CPU the build process is allowed to use? For example, a maximum of 4 GB RAM and only 3 of the 4 cores, or something similar. I just want to make sure the operating system is not fully utilized (and overloaded) while a GitLab pipeline is running.
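To make that concrete, what I’m hoping I can put into the [runners.docker] section is something along these lines (a sketch only; I don’t know whether these keys actually exist for the docker executor, or whether they would also apply to the containers started during the Docker-in-Docker build):

  [runners.docker]
    # assumed keys: hard limits for each job container
    memory = "4g"          # cap the job container at 4 GB of RAM
    cpus = "3"             # let the job use at most 3 of the 4 cores
    # or, alternatively, pin the job to specific cores:
    # cpuset_cpus = "0-2"

If config.toml is not the right place for this, a pointer to any other way of passing equivalent limits to the build containers would also be very welcome.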