LFS broke unexpectedly

We’ve had a gitlab.com repository with LFS for a while now, and everything used to work fine. Yesterday, however, an LFS object file was pushed to the repository but was somehow truncated: the file was there, but it was a few kilobytes too small, and nobody could access it — every attempt just resulted in an “Access Denied” error.

We worked around this by going back one commit, re-adding the files, creating a fresh branch off master, and merging back. However, we now have a new problem, and I have no idea whether it is related to yesterday’s issue.

Currently, when trying to download LFS objects, I get: “LFS: lfsapi/client: refusing insecure redirect, https->http”

This worked fine just two days ago, and there have been no configuration changes in between. The only change has been that one LFS object broke in this odd way.

Anyone have any ideas?


No solution here, but as of early this morning I’m seeing the same thing (LFS objects failing to fetch due to a 302 redirect that is refused as an https->http downgrade) with many LFS objects in a gitlab.com repository.

Looks like LFS fetches redirect to http://storage.googleapis.com/gitlab-lfs-objects/ ..., which is a downgrade from https to http and is correctly refused by the client.
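For anyone else trying to confirm this, git-lfs can show the exact redirect it is refusing. A minimal diagnostic sketch, assuming your remote is named origin:

```shell
# GIT_TRACE=1 enables tracing in both git and git-lfs;
# GIT_CURL_VERBOSE=1 additionally dumps the HTTP exchange,
# so the refused redirect target shows up in the output.
GIT_TRACE=1 GIT_CURL_VERBOSE=1 git lfs fetch origin 2>&1 | grep -i -E "redirect|http://"
```

If the output shows a Location header pointing at a plain http:// URL, you are hitting the same downgrade.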


I couldn’t fix the offending commits, but I rewrote history and synced everyone’s local repository to match the rewritten history, and now everything works again, including LFS pushes and pulls.

Please explain how. I have similarly affected files that I tried purging with BFG Repo-Cleaner and re-adding, and I still cannot fetch them. Thanks.

Wrote up the issue here: https://gitlab.com/gitlab-com/support-forum/issues/3040

I hard-reset all the individual branches that had the offending commit in their history to the commit right before it, and force-pushed those branches to the remote. Then I went one by one through our developers to make sure none of them still had the offending commit anywhere.

This is a really arduous process even with only a few collaborators and not many branches, and there’s a lot that can go wrong. It’s also “time-sensitive”: the more time passes, the more changes are made on top of the bad history and the more branches are created from an offending commit, and the harder it gets to fix things this way.

Thanks, I rolled back with just a plain git reset to keep the work I’d done since then.

But when I tried to re-commit and re-push my work, the LFS objects still existed server-side and were not re-uploaded, so the problem persisted. Were you able to work around this in any way? It seems any GitLab features that could deal with this are WIP at best, so we’re in a pretty grim situation.