Gitlab-redis process CPU spike

Hi,
I have a self-hosted gitlab-ee install.
The server for it meets the requirements and all was fine until, a few days ago, something started to run slowly.
I did a deep checkup and updated to the latest possible version (13.2.2). However, something still runs slowly.
I noticed the gitlab-redis process spiking up to 300% CPU usage.

So I decided to move Redis to an external server. Everything works and GitLab uses the new Redis server, but the process still exists and respawns every time I kill it. I can’t find anything about it.

Any ideas?

Hi,

First off, that is not an official service:

root@gitlab:~# ps aux | grep -i redis
gitlab-+ 12353  1.8  0.0  75996 19724 ?        Ssl  Nov03 509:49 /opt/gitlab/embedded/bin/redis-server 127.0.0.1:0
root     24006  0.0  0.0   8740   824 pts/0    S+   15:47   0:00 grep -i redis
redis    26370  0.2  0.0  94152 12664 ?        Ssl  Nov08  50:16 /usr/bin/redis-server 127.0.0.1:6379
root     28826  0.0  0.0   2160   700 ?        Ss   Nov02   0:00 runsv redis
root     28840  0.0  0.0   2304  1224 ?        S    Nov02   0:01 svlogd -tt /var/log/gitlab/redis

That is from my server. Most likely that is a cryptominer, since no GitLab processes run as git. That means your GitLab install has been compromised, and you need to upgrade it so that the vulnerabilities are fixed. For more info read this post:
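
If you want to double-check which process is the legitimate bundled Redis, something along these lines should work (assuming the Omnibus package; replace <PID> with the PID of the suspicious process from your own ps output):

root@gitlab:~# gitlab-ctl status redis
root@gitlab:~# readlink /proc/<PID>/exe
root@gitlab:~# ls -l /proc/<PID>/cwd

The bundled Redis resolves to /opt/gitlab/embedded/bin/redis-server and is supervised by runit (the runsv redis process above); anything else hogging the CPU under a GitLab account is worth treating as hostile.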


Thanks for the reply.
I upgraded GitLab to 13.12.15-ee, the newest version my OS could fetch.
I also found the malicious code. It was a cryptominer.
I investigated by looking at which commands the git user was starting.
I also found 2 malicious repos and an IP address. I blocked the IP and reported the repos to GitHub.

Your response was very helpful. It made me look more into the user itself.

Thanks again!

You might also want to check the crontab for the git user, as well as /etc/cron.d, just in case they put in a cron job to reinstall the cryptominer on reboot etc. Once you are updated, the vulnerability should no longer be exposed, so you shouldn’t see them attacking your server again. Maybe also disable self-registration on your server so that they cannot create accounts externally without administrators doing it for them; alternatively, if you must have self-registration enabled, enable the option that requires an admin to approve each account. That can stop the bulk of problems in the future, since they can neither register accounts nor obtain access to projects (e.g. as long as you don’t have publicly available projects that allow creating snippets etc.).
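
A few commands along these lines should cover the usual cron locations (paths are the typical Debian/Ubuntu ones, adjust for your distro):

# crontab -u git -l
# ls -la /etc/cron.d /etc/cron.hourly /etc/cron.daily /var/spool/cron/crontabs
# grep -rE "curl|wget" /etc/cron* /var/spool/cron 2>/dev/null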

Yeah, I disabled self-registration at the very beginning. There was also no unknown cron job for any user. The interesting part, however, was that the git user had a constantly open bash session that was used to execute curl commands for downloads.
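
For anyone checking for the same thing, a quick way to list the git user’s long-running processes and see what they are attached to is something like this (<PID> being a placeholder for whatever turns up):

# ps -u git -o pid,etime,stat,cmd
# ls -l /proc/<PID>/cwd /proc/<PID>/exe /proc/<PID>/fd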

Maybe they ran screen or tmux, or backgrounded the bash scripts or processes instead of using cron jobs. Anyway, glad the server is OK now.

Thanks again! Determining that it was not a GitLab process was the key to the solution.


Hi Scode,

I am facing the same issue that you had.

Can you please provide detailed instructions for the steps you followed?
Thanks

Hi,

So, as was stated before, it is not the git user itself; I think the git user doesn’t even run any commands normally.
Killing the mining process does nothing, as it just starts again.
The cron tables were also empty.
So I took a different approach: I had to check what the git user was actually doing.

I did a little spying on the commands executed by the git user with the sysdig tool:

# sysdig -c spy_users | grep "git)"

It shows, live, the commands being executed, filtered down to the git user.
That helped me find the actual source of the miner. It appeared to be some kind of script downloaded and executed every minute, I believe. It checks whether the miner is running; if not, it downloads it, starts it and repeats the loop.
Blocking the IP, killing the process and finding where the previously downloaded scripts were stored helped me get this off my system.
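
If you need to track down the dropped files, a generic check for recently modified files in the usual writable directories can point you at them (adjust the time window as needed):

# find /tmp /var/tmp /dev/shm -type f -mmin -120 -ls

Anything there owned by git that you did not put there yourself is a good candidate for the downloaded script or the miner binary.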

Hope it helps

Hi,

Thank you for the valuable information. I will install sysdig and check it.

Please share more details about your findings. That will really help me a lot.

Thank you.

I installed sysdig and found the details about the scripts and the IP.

I can block the IP and kill the process. Can you please share where the previously downloaded scripts were stored? That will be helpful for making a permanent solution.

Thanks a lot.

I’m sorry, I can’t really remember and can’t find a history of it. Sysdig should show you the full commands, like wget or curl, including the path where the file gets downloaded. You can also check the /tmp dir for suspicious files.
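
If the plain spy_users output is too noisy, filtering it for the usual download commands (same idea as before, just matching curl/wget instead of the user) should show the full URLs and the paths the files were written to:

# sysdig -c spy_users | grep -E "curl|wget"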