How to optimise speed/latency with small teams on opposite sides of the world

Hello! New user here. I’ve done a lot of searching but just can’t get the right info.

I set up a VPS running GitLab that’s local to my office. It works fine and is really fast. Only 3 users.

Now I also have 2 users on the opposite side of the world. They’re complaining that it’s really slow, as in the web frontend is really laggy.

So we did some ping tests between the server and the slow hosts. 300ms is bad but expected. Maybe that’s the reason for the slowness?

Transfer speeds from server to/from the slow hosts are also <5Mbps. Not good.

But the slow hosts normally get >50Mbps to their local servers, and around 10ms ping times. So they have good local connectivity.

So I set up a new VPS near the slow hosts and ran some tests against the main server. Ping times are still 300ms, but transfer speeds are over 100Mbps. So maybe setting up a cache or proxy nearer to the slow hosts would make a difference?

I was wondering whether setting up GitLab on the VPS near the slow hosts, and pointing it at the original database and filesystem, would make GitLab appear faster for the slow hosts.

I saw some info about load balancing, all the way through to Geo, but it looks complicated, with HAProxy and other things that aren’t described in the GitLab docs. It’s also not clear whether it’ll make any difference at all, because those setups seem to be aimed at >1000 users where I only have 10, so I don’t want to try it only to find out it’s a waste of time.

Does anyone have any tips?

Here’s what you can try before creating a new VPS.

Install a VPN, for example Windscribe, on the machines with the problem. Get them to connect to a location as close as possible to where your VPS is. Then, with the VPN active, get them to do git pull/push etc. to see what the performance is like. That will at least help rule out a couple of things:

  1. Whether their ISPs are limiting traffic depending on which country they connect to. Running a VPN will encrypt the traffic, so the ISP won’t be able to throttle it based on the content. Let’s hope they don’t throttle VPNs, though!
  2. By connecting the VPN to a location closer to your VPS, you are decreasing the number of hops before they reach your server - so you can at least rule out all the routing between their country and the country the VPS resides in.
  3. By being closer to the VPS, you can also check whether the VPS provider is part of the problem and has limited bandwidth. That depends on who the VPS provider is.

That will at least help diagnose things before you think about placing a VPS in their region. Also, most VPS providers have various locations, so you could pick a location for the server that is more or less in the middle of both of you. That way, you both have a decent chance of getting decent throughput, without having to start going into clustering/replication etc.

You can also use this tool, provided your VPS responds to ping and you haven’t restricted it: Ping Test - Simultaneously Ping From 10 Global Locations | KeyCDN Tools

It can help you get a visual overview of your current VPS from a range of locations. For example, I used this for my VPS in London, and the worst ping was 34ms, from Dallas.

Thanks @iwalker for those tips. We’re sure it’s purely a distance issue causing the lag, and not their ISPs limiting their traffic. We did some more speed tests, and they can in fact get around 40Mbps downloads (I said <5Mbps in my original post, but it fluctuates quite a bit).

So this becomes how to optimise latency with small teams on opposite sides of the world. The ping test you suggested was very useful.

We don’t really want to place the VPS somewhere in the middle of the two locations, since that is how we first started, and the current VPS that is close to one office is blazing fast.

Now I specifically want to find out more about how to set up a second VPS with GitLab synchronised with the first.

A rather lengthy post I once wrote relating to HA:

As you will see, it’s not so straightforward or simple, especially if you are thinking about achieving this with two servers - in particular, attempting what you want with just simple replication so each location can connect to its nearest server.

Thanks for linking to your post. It strikes me as very strange that it’s either run it on 1 server, or run it on 7 servers! 1 to 7 with nothing in between?

I’m only interested in <20 users, but on different sides of the world, which means latency is the only problem, not hardware capability or server load. So I’d rather not deploy 7 servers!

Attempting to deploy on 2 servers …

It should be possible to deploy one complete Omnibus server, then install a second Omnibus server with application_role only, connecting it to the first server’s postgresql, redis, etc.
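As a sketch of what that second server’s /etc/gitlab/gitlab.rb might look like - the IP address, hostname and passwords below are placeholders I’ve made up, not values from my setup:

```ruby
# /etc/gitlab/gitlab.rb on the SECOND server (application_role only).
# 10.0.0.1 stands in for the first server's IP; all passwords are placeholders.
external_url 'http://gitlab-second.example.com'

# Run only the GitLab application layer; PostgreSQL, Redis and Consul
# stay disabled with this role.
roles ['application_role']

# Point Rails at the first server's PostgreSQL...
gitlab_rails['db_host'] = '10.0.0.1'
gitlab_rails['db_port'] = 5432
gitlab_rails['db_password'] = 'DB_PASSWORD_PLACEHOLDER'

# ...and at the first server's Redis.
gitlab_rails['redis_host'] = '10.0.0.1'
gitlab_rails['redis_port'] = 6379
gitlab_rails['redis_password'] = 'REDIS_PASSWORD_PLACEHOLDER'
```

After editing, `sudo gitlab-ctl reconfigure` applies it. This is just the shape of the config, not a tested setup.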

According to the docs,

The GitLab App role is used to easily configure an instance where only GitLab is running. Redis, PostgreSQL, and Consul services are disabled by default.

I’ve started by reconfiguring the first server so that postgresql and redis listen on TCP, pointing db_host and redis_host at the server’s IP address, and making sure that it still works. Yep, I can still log in through a web browser.
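For reference, the relevant bits of the first server’s /etc/gitlab/gitlab.rb look roughly like this - again, addresses, the CIDR range and the password are placeholders:

```ruby
# /etc/gitlab/gitlab.rb on the FIRST server - make bundled services reachable over TCP.
postgresql['listen_address'] = '0.0.0.0'
# Let hosts on this subnet connect without a password (fine for a quick test;
# md5_auth_cidr_addresses plus a password is safer for anything long-lived).
postgresql['trust_auth_cidr_addresses'] = ['10.0.0.0/24']

redis['bind'] = '0.0.0.0'
redis['port'] = 6379
# Redis requires a password once it listens on a non-loopback address.
redis['password'] = 'REDIS_PASSWORD_PLACEHOLDER'

# Keep the local Rails app pointed at the now-TCP services.
gitlab_rails['db_host'] = '10.0.0.1'
gitlab_rails['redis_host'] = '10.0.0.1'
gitlab_rails['redis_password'] = 'REDIS_PASSWORD_PLACEHOLDER'
```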

But I’m totally stuck on what to do with the second server. After setting application_role and connecting it to the first server’s postgresql and redis, what next? The docs say that postgresql, redis and Consul are disabled. I’ve pointed the first two at the first server, but what on earth is Consul? application_role also doesn’t seem to have a web server enabled, so do I enable puma or something?

Am I heading in the right direction? Help would be greatly appreciated. Thank you.


I’ve successfully installed the second server with application_role only. I basically followed the 2000 user reference architecture to configure GitLab Rails.

I’m manually pointing my web browser to my first server (New York) and second server (Melbourne) addresses for testing. The second server connects to the first server’s postgresql, redis and gitaly. I can login to both servers and see that repositories and wiki pages are the same.
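For anyone following along, the Gitaly side of this looks roughly as below. The addresses and token are placeholders, and the exact keys depend on your GitLab version (`git_data_dirs` is the older Omnibus syntax; newer releases configure this under `gitaly['configuration']` instead - check the docs for your version):

```ruby
# FIRST server: expose Gitaly over TCP instead of the default Unix socket.
gitaly['listen_addr'] = '0.0.0.0:8075'
gitaly['auth_token'] = 'GITALY_TOKEN_PLACEHOLDER'

# SECOND server: point the 'default' storage at the first server's Gitaly.
git_data_dirs({
  'default' => { 'gitaly_address' => 'tcp://10.0.0.1:8075' }
})
gitlab_rails['gitaly_token'] = 'GITALY_TOKEN_PLACEHOLDER'
```

Both servers need `gitlab-ctl reconfigure` afterwards, and the token has to match on both sides.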

Browsing to the first server is the same speed as usual.

But browsing to the second server has a weird ~10 second delay before anything happens. It feels like it’s waiting for something, timing out, and then continuing successfully. What might be causing this delay?