Hey guys, I'm running GitLab 15.11.6 standalone on 8 vCPUs and 32 GB RAM.
I found out that my GitLab instance has high load in the middle of the night.
Around 600 people use it, but there is also a problem: one repo contains a lot of ISO files and weighs about 180 GB… and all repos together are about 400 GB.
At peak during backups the load was around 40 because of those ISO files, and it was crashing my instance, so I doubled the RAM to 32 GB; now the peak load is around 16.
CPU usage is usually around 10-20%, so this is mind-boggling to me.
Output of uptime on production: load average: 1.44, 2.07, 2.32
Same thing on stage: load average: 0.06, 0.27, 0.25
My question is: why does production have a load of 1.5-3 in the middle of the night, when nobody is using GitLab?
The stage environment has 4 vCPUs and 16 GB of RAM, and its load is as shown above.
GitLab has maintenance tasks that run on a schedule; check under Admin → Monitoring → Background Jobs.
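If you prefer to check from the shell instead of the UI, a rough sketch using the standard Sidekiq API through gitlab-rails runner (treat it as an example, not official tooling) would be:

    # counts of busy, scheduled, and enqueued Sidekiq jobs
    sudo gitlab-rails runner 'puts "busy: #{Sidekiq::Workers.new.size}"; puts "scheduled: #{Sidekiq::ScheduledSet.new.size}"; Sidekiq::Queue.all.each { |q| puts "#{q.name}: #{q.size}" }'

Run it while the load is high; non-trivial numbers at 3 a.m. would confirm it's background jobs.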
There are also housekeeping jobs for repositories, so the fact that you have a git pack-objects command running suggests maintenance/housekeeping tasks, which would be normal behavior for your GitLab server.
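You can also confirm this directly on the box while the load is high; a quick look at the process table (only a rough check, exact process names can differ between versions):

    # is repository maintenance running right now?
    pgrep -af 'pack-objects|repack|git gc'
    # what is actually driving the load
    ps -eo pcpu,pmem,etime,args --sort=-pcpu | head -n 20

Git/Gitaly processes near the top at night point to housekeeping rather than user traffic.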
Maybe your stage environment is an empty server, so it can't be compared with a full production server holding a huge amount of data.
Thanks for answering.
I will check whether this still happens over the weekend, when fewer people are working; as far as I understand, GitLab shouldn't have as many background jobs then, if it is mainly packing recently pushed repositories or other files.
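For example (just a sketch, the log path is arbitrary), a cron entry like this could record the load every 5 minutes over the weekend, so I can see exactly when it spikes:

    # root crontab: append a timestamped load average every 5 minutes
    */5 * * * * echo "$(date '+\%F \%T') $(cat /proc/loadavg)" >> /var/log/loadavg.log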
My stage environment is synchronized with production, but it isn't used by users on a daily basis; we admins mostly use it for testing.
GitLab's jobs are not related to the number of users. These are maintenance tasks that run every day, weekends included. Obviously, the more data you have, the longer they can take, especially repository housekeeping.
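For reference, on Omnibus installs around 15.x the nightly repository optimization is handled by Gitaly in a configurable daily maintenance window; I believe the relevant settings in /etc/gitlab/gitlab.rb look roughly like this (example values only, verify against the docs for your exact version, and run sudo gitlab-ctl reconfigure after changing them):

    # /etc/gitlab/gitlab.rb -- example values, check the docs for your version
    gitaly['daily_maintenance_start_hour'] = 23
    gitaly['daily_maintenance_start_minute'] = 30
    gitaly['daily_maintenance_duration'] = '30m'
    gitaly['daily_maintenance_storages'] = ["default"]

If your nightly load spike lines up with that window, that is very likely your answer.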