403 errors on self-hosted gitlab

I have a self-hosted GitLab CE instance that has been running well for the last year or so, and it's fully up to date. Today I suddenly started receiving 403 Forbidden errors, and once they start, I'm completely locked out for a while. More mysteriously, it appears to affect all users at once. For example, I log in successfully to the UI as admin, hit refresh a few times, and I'm locked out.

Similarly, I have external services configured to talk to my GitLab using personal access tokens, and they get the same treatment: of two projects on envoyer.io accessing the same repo, one will succeed while the other fails (which one is random).

I see that this subject has come up many times over the last few years, but no solutions have been posted. The only description of a 'fix' discusses gitlab.com and says there is a throttle/blacklist mechanism that limits the login rate. That's fair enough, but it looks like the thresholds are set too low.

My installation has seen increased (legitimate) activity over the last couple of days, which I assume is why this is happening now, but it also means the blocking is hitting right when it's most damaging.

How can I stop this happening or raise these limits?

I found some info about Rack Attack; however, it doesn't appear to be the culprit, as there are no Rack Attack entries in my log files, even though the logs do show the 403 errors:

Started GET "/" for x.x.x.x at 2019-06-20 16:17:08 +0000
Processing by RootController#index as HTML
Completed 403 Forbidden in 237ms (ActiveRecord: 9.7ms)

Note that this IP is whitelisted in the Rack Attack settings.
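
For reference, this is roughly what the relevant block in my /etc/gitlab/gitlab.rb looks like on the Omnibus install. The IPs and thresholds below are illustrative placeholders rather than my real values, and any change needs a reconfigure to take effect:

# /etc/gitlab/gitlab.rb -- Rack Attack throttling for Git-over-HTTP / API auth.
# Values here are illustrative, not my production settings.
gitlab_rails['rack_attack_git_basic_auth'] = {
  'enabled' => true,
  'ip_whitelist' => ["127.0.0.1", "x.x.x.x"],  # includes the client IP seen in the log above
  'maxretry' => 10,   # failed auth attempts allowed per findtime window
  'findtime' => 60,   # window length in seconds
  'bantime' => 3600   # how long (seconds) an offending IP stays banned
}
# Apply with: sudo gitlab-ctl reconfigure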

What is responsible for these rejections?

I tried issuing a new token: it was rejected with a 403 the very first time it was used, which suggests this is an IP-based ban rather than anything to do with the token itself. I then tried the same token from a different location and it worked fine. This is impossible to work with!
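
For what it's worth, this is roughly how I tested the token (a minimal Ruby sketch; git.example.com and the token value are placeholders for my real host and token):

require 'net/http'
require 'uri'

uri = URI('https://git.example.com/api/v4/user')   # placeholder host
request = Net::HTTP::Get.new(uri)
request['PRIVATE-TOKEN'] = 'REPLACE_WITH_TOKEN'    # placeholder personal access token

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

puts "#{response.code} #{response.message}"  # 403 Forbidden from the affected network, 200 OK elsewhere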

I found more output in /var/log/gitlab/gitlab-rails/api_json.log:

{"time":"2019-06-20T17:57:37.924Z","severity":"INFO","duration":350.66,"db":22.25,"view":328.41,"status":403,"method":"GET","path":"/api/v4/user","params":[{"key":"private_token","value":"[FILTERED]"}],"host":"git.example.com","ip":"209.97.156.220, 209.97.156.220","ua":"GuzzleHttp/6.3.3 curl/7.58.0 PHP/7.2.19-0ubuntu0.18.04.1","route":"/api/:version/user","queue_duration":62.4,"gitaly_calls":0,"gitaly_duration":0,"correlation_id":"j79OHchvjp4"}

I don't know whether the fact that it appears in this particular file helps narrow down which component is responsible.
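
One detail I can't explain: the ip field in that entry shows the same address twice, which looks like an X-Forwarded-For chain passing through my reverse proxy. I don't know whether that's related, but in case GitLab is keying its decisions off the wrong address in the chain, this is the Omnibus knob for trusting the proxy (the address below is a placeholder, not my real proxy):

# /etc/gitlab/gitlab.rb -- list the reverse proxy addresses GitLab should trust,
# so the real client IP (rather than the proxy's) is used in logs and throttling.
# 192.0.2.1 is a placeholder address.
gitlab_rails['trusted_proxies'] = ['192.0.2.1']
# Apply with: sudo gitlab-ctl reconfigure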

I rebooted my server, and GitLab served precisely three requests before shutting everything down with 403s again.

I have now disabled Rack Attack altogether, so it's definitely not that.
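
For the record, by 'disabled' I mean the standard Omnibus toggle in /etc/gitlab/gitlab.rb (assuming there isn't some other throttle switch I've missed), followed by a reconfigure:

# /etc/gitlab/gitlab.rb -- turn Rack Attack's basic-auth throttling off entirely
gitlab_rails['rack_attack_git_basic_auth'] = {
  'enabled' => false
}
# Apply with: sudo gitlab-ctl reconfigure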

I eventually had to give up on this. I pushed my code to a private repo on GitLab.com and repointed the external services at that, and now I have no problem with blocking, even though the clients are doing exactly the same thing. This tells me that the policies embedded in GitLab CE are inappropriate and, more critically, are neither controllable nor visible to admins, so we are stuck with them. I can't even tell which part of GitLab is doing the blocking, other than that it isn't Rack Attack, even though that is precisely the component that is meant to be responsible for blocking potentially malicious access attempts. It's a bug.

Same bug here, with repositories hosted on gitlab.com. We have been using GitLab for several years in the company I work for, and we now get random 403 errors for random people at random moments, for no reason. When we switch to another wifi or 4G connection (same computer, same repo) there are no more 403s… for a while. Yesterday I tried two wifi networks, one ethernet connection and one 4G connection from work, and none of them worked (though it worked for some other people). I had to take my computer home to push my code to GitLab. Is there a known solution for this bug, which seems to have been around for at least two years? Thanks!