A new project was created on our production environment. I forgot to enable LFS, so pushing didn’t work. But now, for some reason, whenever I do a git lfs push on any project after adding an LFS file (it can be small, so it isn’t some timeout issue), it keeps retrying, every time saying “LFS: Access forbidden. Check your access level.”, and every retry creates a new folder in the lfs-objects cache folder. I know the project has LFS enabled, because if I disable it, it fails at:
trace git-lfs: HTTP: POST …/info/lfs/locks/verify
trace git-lfs: HTTP: POST …/info/lfs/objects/batch
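(For reference, the trace lines above can be captured by enabling git’s tracing on the push; origin and master below are placeholders for your remote and branch.)

GIT_TRACE=1 GIT_CURL_VERBOSE=1 git push origin master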
And when LFS is enabled it gets past those two calls but fails here:
Started PUT "/group/repo.git/gitlab-lfs/objects/c124cd4ddfdc022e0babda6aa1a538e2e51a4ba0be58397b13724bf841221146/30027" for 1.2.3.4 at 2018-07-11 09:54:38 -0500
Processing by Projects::LfsStorageController#upload_finalize as HTML
Parameters: {"file.md5"=>"71149e44dbcab7b9fa16daf81aa03f41", "file.name"=>"c124cd4ddfdc022e0babda6aa1a538e2e51a4ba0be58397b13724bf841221146", "file.path"=>"/gitlab-data/shared/lfs-objects/tmp/uploads/c124cd4ddfdc022e0babda6aa1a538e2e51a4ba0be58397b13724bf841221146204444403", "file.sha1"=>"f453ed7cfa933818215bafd371d0c44a90bb6595", "file.sha256"=>"c124cd4ddfdc022e0babda6aa1a538e2e51a4ba0be58397b13724bf841221146", "file.sha512"=>"f894cbd80626a576cf8947aa3a84bb8d3e814013d9d61a281f8d3be4d2fccfbc71eacfb219dc723dd03da19d50f3941c1088b3613d7e6d171e1f2fbd08f2bb76", "file.size"=>"30027", "namespace_id"=>"group", "project_id"=>"repo.git", "oid"=>"c124cd4ddfdc022e0babda6aa1a538e2e51a4ba0be58397b13724bf841221146", "size"=>"30027"}
Completed 403 Forbidden in 22ms (Views: 0.1ms | ActiveRecord: 10.5ms | Elasticsearch: 0.0ms)
I am not sure where the Forbidden is coming from - nginx? gitlab-workhorse? The user pushing is an owner of the repo, so I have full permission to write to it.
Having the exact same issue with 11.1.4-ce. Has anybody been able to fix it? We really would like to migrate our repositories to LFS, but rebuilding the server just for this purpose is absolutely not an option.
At our company we had the same situation after we had to rebuild our GitLab repositories.
The LFS files are stored once in the <git_data>/lfs-objects directory. This ensures only one copy of the file is stored even if it is referenced by multiple repos.
Git LFS uses the sha256sum hash to determine the directory structure and file name.
So if file.zip has a hash of 45dac46c5e34efca58f7d8ce0b389a905bc453de36d6f66de690a1f961752df2,
the LFS files are stored two levels deep, with the structure being ‘first byte’/‘second byte’/‘the rest of it’ - i.e. object 45dac46c5e34efca58f7d8ce0b389a905bc453de36d6f66de690a1f961752df2 is stored at /45/da/c46c5e34efca58f7d8ce0b389a905bc453de36d6f66de690a1f961752df2
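You can compute the expected on-disk path for any file yourself; a quick sketch (the lfs-objects root here is a placeholder, use your own <git_data> path):

OID=$(sha256sum file.zip | cut -d' ' -f1)
echo "<git_data>/lfs-objects/${OID:0:2}/${OID:2:2}/${OID:4}"
# prints <git_data>/lfs-objects/45/da/c46c5e34efca58f7d8ce0b389a905bc453de36d6f66de690a1f961752df2 for the example above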
The root cause of the issue is that the GitLab database has a reference for this hash but no corresponding file in the lfs-objects directory. When the file is pushed, the server sees the database record, assumes the object already exists, and rejects the upload with a 403 Forbidden error; it does not check whether the file actually exists on disk.
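One way to confirm this on the server (a sketch; gitlab-psql and the lfs_objects table are what an Omnibus install provides, adjust if your setup differs) is to check that the hash is still referenced in the database while the object is missing on disk:

# the database still knows the hash
sudo gitlab-psql -d gitlabhq_production -c "SELECT oid FROM lfs_objects WHERE oid = '45dac46c5e34efca58f7d8ce0b389a905bc453de36d6f66de690a1f961752df2';"

# but nothing is at the expected path
ls -l <git_data>/lfs-objects/45/da/c46c5e34efca58f7d8ce0b389a905bc453de36d6f66de690a1f961752df2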
A workaround to get the file committed would be to recreate the file with a change, so that the hash of the resulting file differs from the original - just renaming it is not enough. But since GitLab still has the old hash in the database, this doesn’t solve the original issue.
The actual steps to fix the issue are (a rough shell sketch follows below):
copy the affected files over to the GitLab server
use sha256sum to get the first two directory names
create them if they don’t exist and copy the file to the new location
and then rename the file using the remaining part of the hash value
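A minimal sketch of those steps, assuming the lfs-objects root from the log above (/gitlab-data/shared/lfs-objects), that the original file has been copied to /tmp/file.zip, and that the repository data is owned by git:git - adjust all three to your installation:

# file that was copied over, and the lfs-objects root
SRC=/tmp/file.zip
LFS_ROOT=/gitlab-data/shared/lfs-objects

# the sha256 hash determines the directory structure and file name
OID=$(sha256sum "$SRC" | cut -d' ' -f1)

# first byte / second byte of the hash become the two directory levels
DIR="$LFS_ROOT/${OID:0:2}/${OID:2:2}"
mkdir -p "$DIR"

# the remaining part of the hash is the file name
cp "$SRC" "$DIR/${OID:4}"

# make sure GitLab can read the new object
chown -R git:git "$LFS_ROOT/${OID:0:2}"

This puts the object exactly where GitLab expects it based on the database record.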