Running a self-hosted GitLab instance (16.8) with a couple of Docker-executor runners (16.8.0), each on its own dedicated server or VM. Each runner host has 8 GB of RAM.
I’ve noticed through monitoring that each runner is continuously using more than 90% of its total RAM, even when no job is running. I’m wondering whether that’s expected, whether I should add more RAM to the runners, or whether I should change the configuration. It worries me because, at over 90% used, there isn’t much headroom left.
I checked with htop and the main memory consumers are related to Docker and GitLab, which makes sense since these hosts are just GitLab runners plus the Prometheus node-exporter, nothing else.
Here’s what config.toml looks like:
```toml
concurrent = 3
check_interval = 0
shutdown_timeout = 0
listen_address = "0.0.0.0:9252"

[session_server]
  session_timeout = 1800

[[runners]]
  name = "foobar-runner"
  url = "http://gitlab.foobar"
  id = 2
  token = "glrt-foobarbaz"
  token_obtained_at = 2023-12-19T15:17:30Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "docker"
  [runners.cache]
    MaxUploadedArchiveSize = 0
  [runners.docker]
    tls_verify = false
    image = "python:3.11"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    network_mtu = 0
    pull_policy = "if-not-present"
```
Not sure if it’s relevant, but I’ve built the Docker images used in the pipelines locally on each runner, since I don’t have a registry yet.
I also noticed a `memory` setting in the [runners.docker] section of the config, which might be the key to answering this question. The thing is, I don’t know whether it’s required, and I’m not sure whether the runner will simply “fill the space” by using all the RAM it can at all times, or whether it genuinely needs more RAM.
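In case it’s useful, here’s a minimal sketch of what I understand those per-job limits would look like under [runners.docker]; the values below are placeholders I picked for illustration, not settings I’ve tested:

```toml
  [runners.docker]
    # ... existing settings as above ...
    image = "python:3.11"
    pull_policy = "if-not-present"
    # Per-job container limits (illustrative placeholder values):
    memory = "2g"             # hard RAM cap per job container
    memory_swap = "2g"        # total memory + swap limit; setting it equal to `memory` disables swap
    memory_reservation = "1g" # soft limit the kernel tries to honour under memory pressure
```

As far as I can tell these map to Docker’s --memory, --memory-swap and --memory-reservation flags applied to each job container, but I haven’t enabled them yet, so I may be misreading the docs.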
Any hint on whether this is a problem or not, and if it is, how best to address it?