HTTPS and logging

I am running gitlab-ce on a server behind a router/firewall and trying to access it externally via a non-standard port.

As I already have another machine running on https port 443, I want to run the gitlab instance on a different https port, using letsencrypt certificates generated elsewhere and copied across.

Gitlab runs on a local IP. There is no firewall in the local setup - only on the router currently (trying to isolate issues).

I have an external hostname pointing to the static IP on the router

The router is set to port forward external.ip:4443 ->

As per the docs I set (everything else is default):

external_url ‘
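(For reference, a non-standard-port setup in /etc/gitlab/gitlab.rb normally looks something along these lines - the hostname, port, and certificate paths below are illustrative, not my actual values:)

```ruby
# /etc/gitlab/gitlab.rb -- illustrative values, substitute your own
external_url 'https://git.example.com:4443'

# point the bundled nginx at certificates copied from elsewhere
nginx['ssl_certificate']     = "/etc/gitlab/ssl/fullchain.pem"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/privkey.pem"
```

followed by `gitlab-ctl reconfigure` to apply it.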

I can connect on the local IP. This shows the letsencrypt certificate is loaded.

(Browser shows: “This server could not prove that it is; its security certificate is from”)

A couple of issues.

I cannot connect from an external site to

I can see the router passing packets from external to the internal IP but get this in Firefox:

In the bottom bar it says 'performing TLS handshake' and then:

“Secure Connection Failed
The connection to the server was reset while the page was loading.
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.”

Unfortunately, to compound the issue, the standard nginx access_log shows data when accessing from the local IP, but there is no nginx logging at all for external connections, so it is proving impossible to track down where the issue lies.

If I add this to gitlab.rb

nginx['redirect_http_to_https'] = true

I can see the connection try to upgrade, but it then fails as above.
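(For what it's worth, when the https port is non-standard, omnibus also has a setting for which port the plain-http listener uses for the redirect - a sketch only, assuming defaults elsewhere:)

```ruby
# /etc/gitlab/gitlab.rb -- sketch only
nginx['redirect_http_to_https'] = true
# port the bundled nginx listens on for plain http before redirecting
nginx['redirect_http_to_https_port'] = 80
```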

root@gitlab:~# netstat -tan | grep 4443
tcp 0 0* LISTEN

I've hunted through here for answers but still come up stuck.

I tried the following in gitlab.rb.

This should be modifiable via the template:

http {
log_format gitlab_access '<%= @gitlab_access_log_format %>';

but it seems to be ignored - I added the $ssl bits, but they never get expanded into nginx.conf:

nginx['gitlab_access_log_format'] = '$remote_addr - $remote_user [$time_local] $ssl_protocol/$ssl_cipher "$request_method $filtered_request_uri $server_protocol" $status $body_bytes_sent "$filtered_http_referer" "$http_user_agent"'
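(If the override did take effect, I'd expect the rendered http block - under the omnibus nginx config directory, /var/opt/gitlab/nginx/conf/ by default - to contain the expanded format, roughly like this illustrative rendering:)

```nginx
# expected rendering in the generated nginx config (illustrative)
http {
  log_format gitlab_access '$remote_addr - $remote_user [$time_local] $ssl_protocol/$ssl_cipher "$request_method $filtered_request_uri $server_protocol" $status $body_bytes_sent "$filtered_http_referer" "$http_user_agent"';
  ...
}
```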

I've seen some comments about adding the external IP to /etc/hosts?

Current hosts file is:

root@gitlab:~# cat /etc/hosts
localhost gitlab

Any help or suggestions gratefully received. I'm sure the solution is dead simple, but it's like banging my head against a wall!!

No one able to help?

I tried this same route… but I ended up going with a wildcard cert we already had, because I didn't have the Gitlab server accessible from the web when I first switched it to HTTPS, and I couldn't figure out how to manually activate the LE process after it had run and failed the first time.

However, I have maintained the external access (restricted only to authorized IPs via the Firewall) and it works very well being NAT’ted from --> gitlab:443 internally.

Going through your notes again…

The hosts file you need to modify is on your client PC, when on the local network (unless you use a locally hosted split-horizon DNS service, such as from Active Directory).

You need to remove those entries from Gitlab’s hosts file. It should be seeing all traffic accessing it via the same URI whether local or remote:
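(For example - hostname and IP here are made up - a client-side override on the local network is just a line like:)

```
# client machine's hosts file -- illustrative entry
10.0.0.20   git.example.com
```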

Oh, I see you have got the LE certificate loaded.

That would make it look like a firewall issue. What firewall are you running?

I would step back to http (or use an equivalent service) to troubleshoot and verify that connectivity first. You may also have something else in play (like a filter) that is interrupting the SSL communications.


What on earth is doing here?

Hi and thanks for replying.

The firewall is Endian. I've not had any other issues with ports before, though.

Need to go and have a check again… that was my desktop from the other end of a VPN.

I'm currently away on business till the weekend, but will try to check through what you have said. Your point about the URI may be relevant… some machines need to connect on the same network 10.0.0.x, some from the VPN 192.168.10.x, and some externally.

I presume gitlab wants them all to access it using the external IP only, not a local one?

Howdy! Sorry for the piecemeal responses; I was having trouble digesting the whole thing at once, so I responded bit by bit. If that's the only IP you tested from, it may be because your Gitlab machine did not have a route back to that host… unless there is a static, permanent route configured on your Gitlab machine. It established the connection because of the port forward, but may not have known where to send its response, especially if your VPN server has a different IP than your firewall/gateway.

Gitlab should respond fine to both local and external requests (technically it is all local to Gitlab anyways) as long as the requests themselves are requesting the proper URI: You can use a local machine’s hosts file if you don’t have a network-wide DNS setting for that internally.

OK - thanks for that.
I've been looking at putting it on a VM on my cloud server to avoid the issue completely, but I'll go back and revisit it next week when I get home, as I'd rather the box was tucked away if possible.

Sounds good, best of luck!


OK, I fixed it finally.

Not sure it is the greatest solution and could do with refinement I am sure.

The key is keeping the gitlab nginx instance running, but moving it to SSL.

I then ran apache in front of it as a simple reverse proxy, and job done.

I wrote it up briefly in a blog post here


Set gitlab.rb to:

Add your Letsencrypt certs/links

Now gitlab is running on an https port.

Then run apache and configure a simple reverse proxy to the gitlab nginx instance.
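(A minimal reverse-proxy vhost of that shape - hostnames, ports, and cert paths below are placeholders, not my actual values, and it assumes mod_ssl, mod_proxy, and mod_proxy_http are enabled - would be:)

```apache
# illustrative apache vhost -- adjust names, ports, and cert paths
<VirtualHost *:4443>
    ServerName git.example.com

    SSLEngine on
    SSLCertificateFile      /etc/letsencrypt/live/git.example.com/fullchain.pem
    SSLCertificateKeyFile   /etc/letsencrypt/live/git.example.com/privkey.pem

    # gitlab's bundled nginx is itself serving https on a local port
    SSLProxyEngine On
    ProxyPreserveHost On
    ProxyPass        / https://127.0.0.1:8443/
    ProxyPassReverse / https://127.0.0.1:8443/
</VirtualHost>
```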

Sorted!

Quite honestly, Gitlab is a mess with the way it runs its own nginx instance, etc. That's where a lot of the issues lie (so many people have the same or similar issues).

I guess they did it to save having to explain how to set up a local webserver with pipes and ports, etc.

Hey ho.