Why is my LFS failing to push these files?

At work, I’m running GitLab 13.4.4.
I just enabled LFS in gitlab.rb for the first time, and am trying it out.

The only thing I did was uncomment and configure these 2 lines:

  • gitlab_rails['lfs_enabled'] = true
  • gitlab_rails['lfs_storage_path'] = "/var/opt/lfs"

Then I ran gitlab-ctl reconfigure.
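For reference, the relevant part of /etc/gitlab/gitlab.rb ends up looking roughly like this (a minimal sketch; the storage path is just the one I chose):

# /etc/gitlab/gitlab.rb (relevant lines only)
gitlab_rails['lfs_enabled'] = true
gitlab_rails['lfs_storage_path'] = "/var/opt/lfs"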

To test, I created a fresh git repo using the web interface, cloned it on my computer (macOS High Sierra), and ran this:

git lfs install
git lfs track "*.mp4"
git add .gitattributes

cp -v /small_movie.mp4 .
git add small_movie.mp4
git commit

cp -v /big_movie.mp4 .
git add big_movie.mp4
git commit

git push origin master
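For anyone reproducing this: a quick way to confirm before pushing that the movies are really being handled by LFS (these are standard git-lfs commands, nothing specific to my setup):

git lfs ls-files      # should list small_movie.mp4 and big_movie.mp4 with their object IDs
git lfs status        # shows LFS objects staged / waiting to be pushed
cat .gitattributes    # should contain: *.mp4 filter=lfs diff=lfs merge=lfs -text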

small_movie.mp4 is 241 MB and big_movie.mp4 is 1.1 GB, so the two together are well under 1.5 GB, yet the git push output said this:

$ git push origin master
Locking support detected on remote "origin". Consider enabling it with:
  $ git config lfs.https://company-software-server.com/tal/lfs-test-repo.git/info/lfs.locksverify true
Uploading LFS objects:   0% (0/1), 4.7 GB | 7.9 MB/s, done.
LFS: Put "https://company-software-server.com/tal/lfs-test-repo.git/gitlab-lfs/objects/f2314cbc2a5d01df0cbf97f89dfa338f3613610ab7971c34d7e3c8766900744d/1147248372": read tcp 192.168.XX.XYZ:56903->129.14.XX.YY:443: i/o timeout
error: failed to push some refs to 'https://company-software-server.com/tal/lfs-test-repo.git'

Not only did it try to push 4.7 GB of data for some reason (both the movies put together are nowhere near that big), but it failed.

The only server between my client and GitLab is an NGINX reverse proxy. I configured NGINX with:

  • client_max_body_size 0;

to remove the upload size limit, so that NGINX doesn't block large uploads.
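(That directive sits in the proxy config roughly like this; this is a sketch rather than my exact config, and the upstream name is a placeholder:)

location / {
    client_max_body_size 0;             # 0 disables NGINX's request body size check
    proxy_pass http://gitlab-upstream;  # placeholder for the real GitLab backend
}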

Am I missing something? Why did it try to push way more data than the size of the files? Why could it have failed?

Update: Bypassing NGINX entirely and using a LAN IP over HTTP (no encryption) as a test, I get very similar results. The push starts out fairly fast, slows down over time, sends 2.3 GB of data (again, far more than the size of both files combined), and fails.
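(The bypass test was done roughly along these lines; the LAN address is a placeholder, since I'm not posting the real one:)

# point the remote straight at the GitLab box over plain HTTP (placeholder address)
git remote set-url origin http://<gitlab-lan-ip>/tal/lfs-test-repo.git
git push origin master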

Update 2: I can confirm that LFS is enabled in the web interface settings of my repo on my GitLab server. It looks like it was enabled by default.

Update 3: I read somewhere that LFS defaults to HTTPS. My guess is that when I bypass my NGINX (which terminates SSL) and tell git to use HTTP locally, LFS is still trying to use HTTPS and failing. Remotely, there are some headers, or something else, that NGINX isn't handling right for LFS. Any idea what it could be? The large size in the output is apparently caused by retries after the failures, and is expected.
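One way to check which endpoint LFS actually resolves to (and whether it is still HTTPS) is git lfs env; the endpoint can also be forced explicitly with the lfs.url config key. Both are standard git-lfs features; the address below is a placeholder:

git lfs env     # look for the Endpoint= line for this remote
git config lfs.url http://<gitlab-lan-ip>/tal/lfs-test-repo.git/info/lfs   # force a specific LFS endpoint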

Thanks

I think I got it working.

Turns out that when GitLab sits behind an NGINX reverse proxy, a few things need to be configured on NGINX (a sketch of the full block follows the list):

  • client_max_body_size 0;
    • Allow the large files uploaded by LFS
    • NGINX's default upload limit (client_max_body_size) is only 1 MB
  • proxy_request_buffering off;
    • Don't buffer request bodies (uploads) before passing them on to GitLab
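Putting both directives together, the proxy block ends up looking roughly like this (a sketch rather than my exact config; the upstream name is a placeholder and the TLS setup is omitted):

server {
    listen 443 ssl;
    server_name company-software-server.com;
    # ssl_certificate / ssl_certificate_key and the rest of the TLS setup omitted here

    location / {
        client_max_body_size 0;            # no limit on request body size, so large LFS objects get through
        proxy_request_buffering off;       # stream uploads to GitLab instead of buffering them first
        proxy_pass http://gitlab-upstream; # placeholder for the real GitLab backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}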

As far as I can tell, because proxy_request_buffering defaults to on, when the git LFS client did a PUT to upload a large object (one of my movies) to the GitLab server, NGINX accepted the connection and tried to buffer the entire binary blob before forwarding it to GitLab.

NGINX must have a limit on its buffer size, so after it accepted several hundred MB (not sure exactly how much), it would hit that limit and close the connection prematurely.

Because the connection doesn't fail immediately, the git client assumes it hit a recoverable network issue and tries again. On each retry, NGINX again accepts the connection, takes in as much data as its buffer can hold, and kills the connection prematurely.

This keeps happening until the git LFS client exhausts its pre-configured number of retries and finally errors out.

This also explains why the uploaded data size is far higher than my .git directory size: the client keeps retrying the same data over and over again.
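(For anyone who wants to see or limit this behaviour, the retry count is exposed as a standard git-lfs config key; lowering it makes the push fail fast instead of re-uploading for ages:)

git config lfs.transfer.maxretries 1   # git-lfs retries each object up to 8 times by default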

Setting proxy_request_buffering off; tells NGINX not to buffer anything and to stream the request to the GitLab server as it comes in, which fixes the issue.

If someone wanted to, they could probably also just increase the NGINX request buffer size instead. In that case they would either have to risk running out of RAM, or give NGINX a disk location with enough space for large requests. I'm not sure whether NGINX buffers to RAM or to disk by default, or where those options are set, but I'm happy with just disabling buffering.
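For what it's worth, the knobs for the buffering approach would be something like the following (an untested sketch; as far as I understand it, NGINX buffers small bodies in RAM up to client_body_buffer_size and spills larger ones to temp files on disk):

# inside the http, server, or location block (untested sketch)
client_body_buffer_size 1m;                    # in-memory buffer before spilling to disk
client_body_temp_path /var/nginx/client_body;  # hypothetical path for buffered request bodies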

Update: This explanation makes sense (at least to me), and after making the change I was able to push the large files successfully. But when I tried again on another test repo with the same files, it failed again. Either this wasn't the problem, or it wasn't the only problem. Still looking into it.

Update 2: The new failure is a 403 Forbidden on the upload requests. This seems to be unrelated to my original problem, and appears to be the bug covered here: https://gitlab.com/gitlab-org/gitlab/-/issues/25852

Since the new problem is a separate issue, I consider the original question solved.