An email alert from Linode about suspicious activity on a VPS that only hosts GitLab CE warned me, and after SSHing into the box I quickly found that it was compromised:
The CPU was maxed out by a process named agetty, running under the git user.
I found a suspicious x.sh file in /tmp that, after a quick inspection, was clearly a CPU crypto-miner.
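For reference, spotting it took nothing more exotic than this (generic commands; x.sh is just the name the dropper happened to have on my box):

```
# Show the busiest processes; the miner was pinning the CPU while posing as agetty
top -b -n 1 | head -n 20

# List everything owned by the git user
ps -fu git

# Look for dropped payloads such as x.sh in the usual world-writable locations
ls -la /tmp /var/tmp
```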
My immediate actions were:
Performed a backup via the Linode Manager, plus a manual backup of GitLab together with its config files.
Made a tar archive of all of /var/log and of the attacker's files in /tmp.
Downloaded everything via scp and powered down the VPS (a rough sketch of these two steps is below).
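Roughly what the tar and scp steps looked like (the hostname and archive path are placeholders, not the real ones):

```
# Archive the logs and the attacker's files before touching anything else
sudo tar czf /root/evidence-$(date +%F).tar.gz /var/log /tmp

# Pull the archive down to a trusted machine, then power the VPS off
scp root@vps.example.com:/root/evidence-*.tar.gz .   # vps.example.com is a placeholder
```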
Now, I admit the server was a bit unattended and was running a somewhat old version of GitLab, 12.6.4, on Ubuntu Server 16.04 (which reached end of support in April 2021). So I know that mistake is on our side.
Still, I want to analyze how much damage was done and, above all, how the attacker gained access.
My first impression is that they never gained superuser access and only managed to compromise the git user account that GitLab creates for SSH access to repos. Why? Well, any competent attacker with privileged access would have found far better ways of hiding what was going on.
Because of that, I checked /var/opt/gitlab/.ssh/authorized_keys and wasn’t able to locate anything suspicious: all entries start with command="/opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell key-x",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa, where no-pty and the forced gitlab-shell command should have stopped the attacker from doing any of this even if they somehow got hold of one of our keys (which I still don’t believe happened).
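In case it helps anyone reproducing the check, something along these lines prints any authorized_keys entry that is not forced through gitlab-shell (the path is the Omnibus default):

```
# Any line that does not start with the forced gitlab-shell command is suspect
sudo grep -vE '^command="/opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell key-[0-9]+"' \
    /var/opt/gitlab/.ssh/authorized_keys
```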
I also took a good look at /var/log/auth.log to check login attempts and sudo commands and wasn’t able to find anything relevant to the attack.
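These are more or less the greps I mean (zgrep also reads the rotated .gz files):

```
# Successful SSH logins
sudo zgrep -hE 'Accepted (password|publickey)' /var/log/auth.log*

# Any sudo activity
sudo zgrep -h 'sudo:' /var/log/auth.log*
```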
I took a look at the CVE database and Google and wasn’t able to find any obvious matches for something like this; any suggestions would be appreciated.
I would greatly appreciate guidance on where I should look next!
Now, regarding future actions:
I will provision a new, updated Ubuntu server, secure it, and install the exact same version of GitLab CE.
I will restore the backup and start working through the upgrade path to the latest 14.1.6.
I won’t restore our gitlab-secrets.json file, as I will assume it may be compromised. My current understanding is that this won’t affect us much (mostly reconfiguration of the GitLab Runner, which will be wiped too). I know I have to check Back up and restore GitLab | GitLab.
I will ask all users to reset their passwords (one way to force this is sketched after this list).
We will scan the repos for any keys or sensitive information that may have slipped in, and fix anything we find.
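For the password point above, one way to force a reset for everyone - a sketch only, assuming the Omnibus gitlab-rails wrapper is available and email delivery works so people can recover via the forgot-password flow:

```
# Set a throw-away random password on every account; users then recover their own
sudo gitlab-rails runner '
  User.find_each do |u|
    pwd = SecureRandom.hex(24)
    u.password = pwd
    u.password_confirmation = pwd
    u.save ? puts("reset #{u.username}") : puts("FAILED #{u.username}")
  end
'
```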
It’s near impossible to tell how they did it from the information provided; the entire server would have to be checked. Checking /var/log/auth.log is only as useful as your log retention. The default is 4 weeks, so if they connected before that, you won’t have anything in the local logs after logrotate has done its job - unless of course you have changed the default log rotation to keep logs locally for longer.
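For reference, on a stock Ubuntu install auth.log is rotated by the rsyslog snippet under /etc/logrotate.d, so checking and extending retention looks roughly like this (exact stanza layout may vary by release):

```
# See the current rotation policy for auth.log (stock Ubuntu keeps 4 weekly rotations)
grep -A 12 'auth.log' /etc/logrotate.d/rsyslog

# To keep more history, raise the "rotate" count in that stanza, e.g.
#   rotate 26   # roughly 6 months of weekly logs
```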
Since the process is running as git, did you check the entire user list within GitLab? Did you find any suspicious registered users? My guess is your server allowed users to register (this option can be disabled so that only the GitLab Admin can create users). I expect they registered this way and then perhaps ran a GitLab Runner, or started the process over SSH more than 4 weeks ago. This kind of abuse was happening on gitlab.com until credit-card verification was added to stop people abusing the runners for crypto-mining.
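A quick way to eyeball the account list, assuming the Omnibus gitlab-rails wrapper is available on your box:

```
# Most recently created accounts, newest first, with the admin flag
sudo gitlab-rails runner '
  User.order(created_at: :desc).limit(25).each do |u|
    puts [u.created_at.to_date, u.username, u.email, u.admin? ? "ADMIN" : ""].join("  ")
  end
'
```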
What you can do for the future:
Disable user registration so that no one can self-register; only the GitLab Admin can create accounts.
Change Admin → Settings → Visibility → Restricted Visibility Levels and enable Public - that way nobody who isn’t logged in to your server can see your user list and attempt to brute-force passwords.
Block SSH access and use only HTTP/HTTPS for git push/pull; that way you close down that attack vector. Create firewall rules so that only trusted IP addresses can reach SSH for server administration, for example (a minimal iptables sketch is at the end of this list). I personally see no reason or advantage to pushing over SSH when HTTP/HTTPS is just as easy. Do this via iptables on your VPS, not via the Linode Firewall in the web admin panel, since your server would then still be accessible on all ports from within the Linode network. You can duplicate the iptables rules in the Linode Firewall for extra security if needed, but either way it should still be done locally on the VPS as well.
Make sure you upgrade your operating system and don’t leave it unpatched - check for updates at least once a week or once a month. I tend to do mine a couple of times a week.
Make sure you upgrade GitLab regularly. That way you have the latest version with any potential security issues resolved. If you are doing the updates regularly as in point 4, GitLab will also be upgraded during the operating system updates.
Check your GitLab user list after restoring to make sure you only have users that you recognise. Just be careful you don’t delete the GitLab service accounts - ask here for verification if you aren’t sure about deleting anything. For example, I have a GitLab Support Bot and a GitLab Alert Bot. Anything outside of these can effectively be counted as a rogue account that needs to be disabled/deleted.
Any server that is internet-facing should always be patched and updated regularly. Scan it afterwards with something like Nessus and make sure you have closed down any potential vulnerabilities. Alternatively there are services on the internet for scanning for vulnerabilities. Some are free and don’t report too much but can help a little. Paid subscriptions for such scanning would be more ideal.
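For point 3, a minimal iptables sketch - 203.0.113.10 stands in for your trusted admin IP, and you’ll want to fit these into whatever rules you already have:

```
# Allow SSH only from a trusted admin address, drop it for everyone else
sudo iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT   # placeholder admin IP
sudo iptables -A INPUT -p tcp --dport 22 -j DROP

# Persist the rules across reboots (needs the iptables-persistent package)
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save
```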
We have experienced the same malware (AWS warned us). Please also review the crontab for the git user, because I found code there to reinstall the malware.
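For anyone checking their own box, the usual cron locations on Ubuntu should show it - something like:

```
# List the git user's crontab; anything fetching or launching x.sh is the reinstaller
sudo crontab -u git -l

# Also check the system-wide cron locations
grep -r git /etc/cron.d /etc/crontab 2>/dev/null
ls -la /var/spool/cron/crontabs/

# Once the entry is confirmed malicious, wipe the git user's crontab
sudo crontab -u git -r
```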
I don’t know exactly how, but a new admin user has been created without our consent… How can this be possible?? I have updated to the latest 13.12.12 version (the last one available for my Ubuntu server), and restarting the server also reinstalls the malware in /tmp or /var/tmp… Looking for a solution too.
Yes, I had the same entry in the crontab file. It was added by the x.sh script.
Anyway I will wipe out the system and do a complete restore.
Regarding users, there is another affected reporter if you check the Reddit link. I wasn’t able to locate anything in our database yet, but I do see a Ghost User, which is suspicious because we never remove users, only block them. So maybe an account was created and then removed.
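For the record, this is the kind of check I’m running for unexpected admins (again via the gitlab-rails wrapper, so treat it as a sketch):

```
# Every account with the admin flag set; anything unfamiliar here is a red flag
sudo gitlab-rails runner '
  User.where(admin: true).find_each do |u|
    puts [u.username, u.email, u.created_at].join("  ")
  end
'
```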
Maybe not for git usage, but you do want to secure your server; standard security practice is to block access. You asked for security recommendations, so balance security with your usage requirements. Otherwise, google how to secure an SSH server so that you at least disable password authentication, among other hardening changes. Either way, blocking SSH doesn’t stop you using git over HTTP/HTTPS - but choose what works for you. I work in networks and security, so I prefer to make sure my server is secured and cannot be abused; how you do it is up to you. The Reddit link also provides such security recommendations - block with ufw/iptables/fail2ban/GeoIP - so pick what suits you. Good luck!
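If you do keep SSH open, the minimum I’d look at in /etc/ssh/sshd_config is something like this (validate the config before reloading):

```
# In /etc/ssh/sshd_config: key-only logins and no root over SSH
#   PasswordAuthentication no
#   PermitRootLogin no

# Validate the config, then reload the daemon
sudo sshd -t && sudo systemctl reload ssh
```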
This user exists in GitLab by design: issues, etc. from deleted accounts are attributed to the Ghost User. The Ghost User on my server has existed since Dec 2017; from what I remember, it appeared after a GitLab upgrade. Our server has been running since the beginning of 2017, with good security practices (restricted SSH access, regular upgrades), and hasn’t been compromised.
This could be the reason why yours got infected. It doesn’t specifically mention every version under 13.x, but it’s a possibility worth considering. Either way, I hope you get your new server running shortly.