GitLab behind a proxy: how to log external IP addresses

GitLab runs with docker-compose, alongside a Docker registry and two runners.

Currently I use HAProxy as the proxy, with SSL termination taking place in GitLab (HAProxy runs in tcp mode).

The two runners have to be able to access a service that provides secrets to images running in the CI/CD pipelines of some projects. Therefore I configured GitLab, the runners, and that service to share a common network, so I have this entry in the config.toml of the runners:

network_mode = "gitlab_network"
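
Roughly, the shared network is declared on the docker-compose side like this (a sketch; service and image names here are illustrative, not my exact file):

    services:
      gitlab:
        image: gitlab/gitlab-ee:latest
        networks:
          - gitlab_network
      gitlab-runner:
        image: gitlab/gitlab-runner:latest
        networks:
          - gitlab_network

    networks:
      gitlab_network:
        # fixed name so the runners can reference it via network_mode in config.toml
        name: gitlab_network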

The problem I encounter is that the IPs logged by the nginx inside GitLab are always the IP of the GitLab Docker container itself. This is because the original client IPs do not seem to be passed on in tcp mode.

I found out about the PROXY protocol and configured nginx according to the documentation here.

I added this to the HAProxy backend configuration:

server server1 127.0.0.1:xxx send-proxy-v2
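
For context, the surrounding frontend/backend stanzas look roughly like this in tcp mode (a sketch with placeholder names; only the send-proxy-v2 option is the relevant change):

frontend gitlab_https
    mode tcp
    bind *:443
    default_backend gitlab_backend

backend gitlab_backend
    mode tcp
    # send-proxy-v2 prepends a PROXY protocol v2 header to each forwarded connection
    server server1 127.0.0.1:xxx send-proxy-v2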

and this in the omnibus config, as documented:

nginx['proxy_protocol'] = true
nginx['real_ip_trusted_addresses'] = [ "IP_OF_THE_PROXY/32"]

This does lead to the external IPs being logged correctly when GitLab is accessed.
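
For reference, at the plain-nginx level these Omnibus settings correspond roughly to directives like the following (a sketch of the mechanism, not the exact configuration Omnibus renders):

    server {
        # Only connections carrying a PROXY protocol header are accepted on this listener
        listen 0.0.0.0:443 ssl proxy_protocol;
        # Trust the PROXY header only when it comes from the proxy itself
        set_real_ip_from IP_OF_THE_PROXY/32;
        # Take the client address from the PROXY protocol header
        real_ip_header proxy_protocol;
        # ... rest of the GitLab server block ...
    }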

However, the runners can now no longer reach their destination URL. In the logs I see error messages like this:

gitlab-runner_1  | Dialing: tcp gitlab.mydomain.tld:443 ...          
gitlab_1         | 
gitlab_1         | ==> /var/log/gitlab/nginx/gitlab_error.log <==
gitlab_1         | 2023/03/31 06:37:55 [error] 687#0: *44082 broken header: "�"��[���_�'�9�A�Jƕbj���Vt'� K�Q���@!X�;��Ʌ�Ƈe�*o̷A�Q/�&�+�/�,�0̨̩�	��
gitlab-runner_1  | WARNING: Checking for jobs... failed                runner=G7ZWES2_ status=couldn't execute POST against https://gitlab.mydomain.tld/api/v4/jobs/request: Post "https://gitlab.mydomain.tld/api/v4/jobs/request": read tcp 192.168.16.3:36282->192.168.16.2:443: read: connection reset by peer
gitlab_1         | ���/5�" while reading PROXY protocol, client: 192.168.16.3, server: 0.0.0.0:443
^CERROR: Aborting.

So it looks like the runner accesses nginx (1) directly and (2) without using the PROXY protocol.

Is there a way to solve this by somehow reconfiguring the runners?

Or are there any other suggestions concerning the setup as a whole?

Any hints are welcome!

Below is my GITLAB_OMNIBUS_CONFIG:

        external_url "https://${DOMAIN}"
        letsencrypt['enable'] = false
        gitlab_rails['smtp_enable'] = true
        gitlab_rails['gitlab_email_from'] = 'gitlab@mydomain.tld'
        gitlab_rails['smtp_address'] = "mail.mydomain.tld"
        gitlab_rails['smtp_port'] = 25
        nginx['proxy_protocol'] = true
        nginx['listen_https'] = true
        nginx['ssl_certificate'] = "/etc/gitlab/ssl/${DOMAIN}.crt"
        nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/${DOMAIN}.key"
        nginx['http2_enabled'] = false
        nginx['proxy_set_headers'] = {
          "Host" => "${DOMAIN}",
        }
        nginx['real_ip_trusted_addresses'] = [ 'myip/32' ]
        nginx['real_ip_header'] = 'X-Forwarded-For'
        nginx['real_ip_recursive'] = 'on'
        gitlab_rails['omniauth_providers'] = [
          {
            "name" => "github",
            "app_id" => "xyz",
            "app_secret" => "xyz",
            "args" => { "scope" => "user:email" }
          }
        ]
        gitlab_rails['packages_enabled'] = true
        gitlab_rails['lfs_enabled'] = true
        gitlab_rails['registry_port'] = ${REGISTRYPORT}
        gitlab_rails['registry_host'] = "${DOMAIN}"
        gitlab_rails['backup_keep_time'] = 86400
        gitlab_rails['gitlab_ssh_host'] = 'gitlab.mydomain.tld'
        gitlab_rails['gitlab_shell_ssh_port'] = mysshport
        registry['enable'] = true
        registry_external_url "https://${DOMAIN}:${REGISTRYPORT}"
        registry_nginx['redirect_http_to_https'] = true
        registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/${DOMAIN}.crt"
        registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/${DOMAIN}.key"
        puma['worker_processes'] = 2
        puma['per_worker_max_memory_mb'] = 2024
        sidekiq['max_concurrency'] = 10
        postgresql['shared_buffers'] = "500MB"
        prometheus['listen_address'] = '0.0.0.0:${PROMETHEUSPORT}'

And the config.toml for the runner:

concurrent = 2
check_interval = 0
log_level = "debug"
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "cicd"
  url = "https://gitlab.mydomain.tld"
  id = 18
  token = "token"
  token_obtained_at = 2023-03-09T13:51:01Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.23"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/srv/gitlab-runner/shared:/shared_cache:rw", "/cache", "/var/run/docker.sock:/var/run/docker.sock"]
    shm_size = 0
    network_mode = "gitlab_network"

You need to have HAProxy set the X-Forwarded-For header (HAProxy docs).
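
In http mode that is typically done with option forwardfor, roughly like this (a sketch with placeholder names; it does not apply in tcp mode):

backend gitlab_backend
    mode http
    # Adds an X-Forwarded-For header with the client IP to each forwarded request
    option forwardfor
    server server1 127.0.0.1:xxx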

The comment by @aljaxus is irrelevant: there is no X-Forwarded-For header in tcp mode.

I also tried send-proxy (as opposed to send-proxy-v2) in the haproxy config with the same result.

The client IP in the PROXY protocol error message is, in my case, the IP of the GitLab runner container, which I defined inside the same docker-compose file as GitLab itself.

So I assumed the problem is that the GitLab runner accesses the nginx inside the GitLab container via its IP inside the Docker network, bypassing the reverse proxy, so the connection never carries the PROXY protocol header.

However, changing "network_mode" inside the runners' config.toml to "host" did not change anything, which I don't understand.

Maybe somebody can comment on this.

It would also be very helpful if a hint about how to handle the runners were added to the part of the documentation that covers configuring GitLab behind a reverse proxy in tcp mode, specifically the chapter "Configuring the PROXY protocol".

Also, the documentation there says: "Once enabled, NGINX only accepts PROXY protocol traffic on these listeners" (emphasis mine). So it sounds as if any other connection to nginx should not use the PROXY protocol. In that case, the error should not occur at all and we would be talking about a bug.

I am a bit astonished, to say the least, that an issue like this one is receiving so little attention.

I would say change mode tcp to mode http in the HAProxy config. I don't see why you need mode tcp. Is there a specific reason? Maybe you can explain why you need that mode.

Also, the GitLab documentation is for a reverse proxy with nginx, so the documentation doesn't contain anything related to HAProxy whatsoever, nor to mode tcp.

What you also have to remember when using a reverse proxy is that the configuration needs to be set in two places. The first is configuring the appropriate headers, be it in nginx or HAProxy; here are some I used with HAProxy in http mode:

    http-request set-header X-Forwarded-Port %[dst_port]
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
    capture request header X-Forwarded-For len 64
    capture request header User-Agent len 256
    capture request header Cookie len 64
    capture request header Accept-Language len 64

Second, within GitLab you will also need the real-IP options configured, although it seems the original poster has already done that, so that shouldn't be the problem here.

You cannot blame GitLab for the fact that HAProxy doesn't allow header configuration when it is in tcp mode. So the solution is to change the mode, or to ask on the HAProxy forums how to configure headers when mode tcp is being used. Or use nginx as the reverse proxy, as per the GitLab documentation, which works.
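
For reference, a minimal nginx reverse-proxy block of the kind that documentation describes looks roughly like this (a sketch; hostnames, ports, and certificate paths are placeholders):

    server {
        listen 443 ssl;
        server_name gitlab.mydomain.tld;
        ssl_certificate     /etc/nginx/ssl/gitlab.crt;
        ssl_certificate_key /etc/nginx/ssl/gitlab.key;

        location / {
            # Pass requests to GitLab and tell it who the real client is and which scheme was used
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host              $host;
            proxy_set_header X-Real-IP         $remote_addr;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }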

Thank you for your answer.

Using http mode would mean that SSL offloading takes place in HAProxy.
That has been an issue for me in the past, because the Docker registry wants https by default and I did not want to override that.

But maybe I should give that another try. The headers you mention can indeed be set with HAProxy when using http mode.

The documentation in the section "Configuring the PROXY protocol" explicitly mentions HAProxy. The PROXY protocol is all about tcp mode. And nginx['real_ip_trusted_addresses'] is also mentioned there.

Actually, the configuration described there works as advertised; it's just that my GitLab runners can no longer reach GitLab.

I will at least run some additional trials with runners that do not connect via the Docker service's network before I give up on this.

The reason for the issues you had with SSL offload is that the server then returns content over http: the reverse proxy wasn't telling the web server that it was behind a proxy. The lines for that:

    server app1 server01:80 alpn h2,http/1.1 check
    server app2 server02:80 alpn h2,http/1.1 check
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Proto http if !{ ssl_fc }

The first two lines are the servers I proxy to. The frontend is https; the above is the backend speaking http. The important lines are the X-Forwarded-Proto ones, so that the web server, or in this instance GitLab, behind the reverse proxy returns https instead of http even though it is listening on http and not https. Then, when you view the source of the page, every http:// entry shows https instead. The X-Forwarded-Port header is also important.

If those entries are missing, the web server, or GitLab here, doesn't know what the forwarded protocol was, so any content returned is http, as if you weren't using SSL. Then you find that things like images do not load because they try to load over http and not https, or links open as http instead of https.

This could be the same reason for the registry, but it may well be best just doing as you did and running that on https. You can also configure your HAProxy backend not to use SSL offload and do full SSL to the backend as well. In that case, if it's a self-signed certificate, you can append verify none after check to skip certificate verification.
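
For example, such a full-SSL backend entry would look something like this (same placeholder server names as above):

    # Re-encrypt to the backend over https; verify none skips certificate validation (self-signed certs)
    server app1 server01:443 ssl verify none check
    server app2 server02:443 ssl verify none check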

The other capture headers I sent previously are also worth setting, and are something I've used to make sure the X-Forwarded-For IP address of the connecting browser is passed on.

In addition to the HAProxy config, the backend server, which in this instance is GitLab with nginx, will also need this:

proxy_set_header X-Forwarded-Proto $scheme;

That is supposed to return https if it sees that https has been forwarded from the HAProxy terminating SSL. With Apache, when I did this, the equivalent was:

SetEnvIfNoCase X-Forwarded-Proto https HTTPS=on

so the backend server listening on http sees the incoming protocol as https, and so returns content as https (effectively replacing any http:// entries with https:// dynamically).
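
For an Omnibus-managed GitLab, where the bundled nginx config is not edited by hand, the equivalent would go through GITLAB_OMNIBUS_CONFIG instead, roughly like this (a sketch of the documented external-proxy/SSL-termination settings; the proxy_set_headers block would replace the one in the config posted above):

    # Sketch: let the external proxy terminate SSL and tell GitLab the original scheme
    nginx['listen_port'] = 80
    nginx['listen_https'] = false
    nginx['proxy_set_headers'] = {
      "Host" => "${DOMAIN}",
      "X-Forwarded-Proto" => "https",
      "X-Forwarded-Ssl" => "on"
    }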

SSL offload requires quite a bit more work compared to using full SSL/HTTPS between frontend and backend, due to the translation of everything between https and the servers listening on http.

I tried your suggestion and, after some trial and error, was able to have GitLab accessible again.

However, I still have the issue with the runners: they are trying to request jobs under
https://gitlab.mydomain.tld/api/v4/jobs/request, only this time the error message is of course different; it is something along the lines of "http response to https request". Somehow the requests from the runners must take a different route than the ones from the outside, although the URL starts with "https://gitlab.mydomain.tld".

I also tried changing the URL configured in the runners' config.toml to access GitLab directly on the http port, and changing network_mode to "host", but the changes were not honored, as previously noted.

If you are using a load balancer/reverse proxy, then all requests to gitlab.mydomain.tld will be https. If it's getting an http response to an https request, that would suggest a redirect from http to https is being made on the reverse proxy. Assuming something like that was configured in HAProxy, it would look something like this:

frontend app_frontend
    mode http
    bind *:80
    bind *:443 ssl crt /etc/haproxy/ssl/labs-wildcard-bundle.crt alpn h2,http/1.1
    http-request redirect scheme https unless { ssl_fc }
    default_backend             app_backend

The other alternative: perhaps the runners were registered before https was configured and are set to connect to http://gitlab.mydomain.tld, in which case you may want to remove the runners and register them again as new runners using the https URL instead.
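
Removing and re-registering would be something along these lines (a sketch; the exact flags and token handling depend on your runner version, and the values shown are placeholders):

    gitlab-runner unregister --name cicd
    gitlab-runner register \
      --non-interactive \
      --url "https://gitlab.mydomain.tld" \
      --registration-token "token" \
      --executor docker \
      --docker-image "docker:20.10.23"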

Or are you running GitLab runners via Docker on the same machine where you have GitLab running, since you mention the runners taking a different route than the outside requests? In that case, you shouldn't be running runners on the same machine where GitLab is installed, whether it is installed natively using the GitLab packages or via Docker (GitLab does not recommend that, for performance reasons and most likely because of configuration situations like this one). If that is the cause, remove those Docker runners and run them all outside of your GitLab server/Docker instance.

Sorry, I had too much other work to do during the last few weeks. This week I am going to try removing and re-adding the runners with one of the configurations that otherwise worked, and I will post the results here. I would not have imagined that this could be the reason.