User Uploads to S3 Buckets are invalid

Hello!

Yesterday I installed GitLab 14.9.0-ee into a brand-new Kubernetes cluster in order to migrate my old Docker-containerized GitLab 14.9.0-ce to it. The migration itself worked as expected.

But now that everything else seems to be working, one issue remains: user uploads (profile avatars and group avatars) do not work.

What I can see is that when I upload a valid JPG as a profile picture, GitLab converts it into a valid PNG. But before uploading that PNG to S3, it prepends a hex number and a "chunk-signature" hash to the beginning of the file.

Example copied from a HEX viewer:

f05e;chunk-signature=374c3f5d1da2890d52c6d2f7049cdfe55519c167774715e97aeda387da146998...PNG........IHDR.........
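That prefix matches the framing of AWS Signature V4 streaming uploads (`Content-Encoding: aws-chunked`), where each chunk is sent as `<hex length>;chunk-signature=<sig>\r\n<data>\r\n`, terminated by a zero-length chunk. As an illustration (a sketch of my own, not part of GitLab), the original image bytes can be recovered from such a corrupted object by stripping that framing:

```python
# Sketch: strip aws-chunked framing from a corrupted object to recover the payload.
# Assumes the well-formed framing described above; helper name is my own.

def strip_aws_chunked(raw: bytes) -> bytes:
    out = bytearray()
    pos = 0
    while pos < len(raw):
        header_end = raw.index(b"\r\n", pos)      # end of "<len>;chunk-signature=<sig>"
        header = raw[pos:header_end]
        size = int(header.split(b";", 1)[0], 16)  # chunk payload length, in hex
        if size == 0:                             # final zero-length chunk ends the stream
            break
        data_start = header_end + 2
        out += raw[data_start:data_start + size]  # payload length is given, so embedded
        pos = data_start + size + 2               # \r\n inside the data is no problem
    return bytes(out)
```

This is only useful for salvaging already-corrupted objects; the real fix is below.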

This means that when the browser downloads the image (with proxy_download enabled), it is no longer a valid PNG file.

I have tested this: when I manually upload a valid PNG to the same location as the invalid one, the manually uploaded PNG is served correctly.

Looking into the S3 bucket, all the images are there, just invalid.

The S3 configuration itself must be correct, because GitLab can upload the images and download them again. It's just that the image data is invalid.
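A quick way to tell the broken objects apart from good ones (again a helper of my own, not a GitLab tool): a valid PNG must start with the 8-byte signature `89 50 4E 47 0D 0A 1A 0A`, while the broken uploads start with ASCII hex digits followed by `;chunk-signature=`.

```python
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def looks_like_chunked(raw: bytes) -> bool:
    # True when the object carries aws-chunked framing instead of raw image bytes.
    return not raw.startswith(PNG_MAGIC) and b";chunk-signature=" in raw[:100]
```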

Installation Information:

GitLab 14.9.0-ee migrated from 14.9.0-ce
Kubernetes 1.23.4 in Scaleway Kubernetes Kapsule
S3 Storage Buckets in Scaleway S3

I have just seen that there are security updates available for 14.9.x and will install those now.

Update: I have now unlinked all avatars by iterating over projects, users, and groups in the Rails console, then changed the target S3 bucket and uploaded an image again.

Sadly, the same issue occurs. After I upload the image to GitLab, it is loaded into the HTML DOM correctly and then uploaded to S3 with the chunk-signature prefix.

I have no idea why this is broken.

Here is my Helm values bucket configuration for reference:

    object_store:
      enabled: true
      proxy_download: true
      storage_options: {}
        # server_side_encryption:
        # server_side_encryption_kms_key_id
      connection:
        secret: gitlab-rails-storage
        key: connection
    lfs:
      enabled: true
      proxy_download: true
      bucket: atvg-k8s-gitlab-lfs
      connection: {}
        # secret:
        # key:
    artifacts:
      enabled: true
      proxy_download: true
      bucket: atvg-k8s-gitlab-artifacts
      connection: {}
        # secret:
        # key:
    uploads:
      enabled: true
      proxy_download: true
      bucket: atvg-k8s-gitlab-uploads2
      connection: {}
        # secret:
        # key:
    packages:
      enabled: true
      proxy_download: true
      bucket: atvg-k8s-gitlab-packages
      connection: {}
    externalDiffs:
      enabled: false
      when:
      proxy_download: true
      bucket: atvg-k8s-gitlab-mr-diffs
      connection: {}
    terraformState:
      enabled: false
      bucket: atvg-k8s-gitlab-terraform-state
      connection: {}
    ciSecureFiles:
      enabled: false
      bucket: atvg-k8s-gitlab-ci-secure-files
      connection: {}
    dependencyProxy:
      enabled: false
      proxy_download: true
      bucket: atvg-k8s-gitlab-dependency-proxy
      connection: {}

    ## https://docs.gitlab.com/charts/charts/globals#pseudonymizer-settings
    pseudonymizer:
      configMap:
      bucket: atvg-k8s-gitlab-pseudo
      connection: {}
        # secret:
        # key:
    backups:
      bucket: atvg-k8s-gitlab-backups
      tmpBucket: atvg-k8s-gitlab-tmp

And here the S3 Secrets in censored form

apiVersion: v1
kind: Secret
metadata:
  name: gitlab-rails-storage
  namespace: gitlab
type: Opaque
stringData:
  connection: |
    provider: AWS
    region: fr-par
    aws_access_key_id: <valid-key-id>
    aws_secret_access_key: <valid-key>
    aws_signature_version: 4
    endpoint: "https://s3.fr-par.scw.cloud"
    path_style: false
---
apiVersion: v1
kind: Secret
metadata:
  name: s3access
  namespace: gitlab
type: Opaque
stringData:
  accesskey: <valid-key-id>
  secretkey: <valid-key>

Example of the issue:

Image uploaded to GitLab:

Image downloaded from S3:

Okay, so upgrading to the latest version (14.10.2) didn't fix the issue, and moving back to the Community Edition changed nothing either.

Today I thankfully found a setting in the GitLab object-storage documentation that disables streaming uploads to S3, so I tried it.

AND IT WORKS NOW!

I added this line to my gitlab-rails-storage connection block, updated the secret, and recreated the webservice pods:

enable_signature_v4_streaming: false
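With that flag added, the connection section of the gitlab-rails-storage secret now looks like this (same values as above, censored the same way):

```yaml
provider: AWS
region: fr-par
aws_access_key_id: <valid-key-id>
aws_secret_access_key: <valid-key>
aws_signature_version: 4
enable_signature_v4_streaming: false
endpoint: "https://s3.fr-par.scw.cloud"
path_style: false
```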

I am not sure whether this means the maximum file size GitLab can transfer to Scaleway S3 is now 5 GB, or whether it now uses "normal" multipart uploads (which allow up to 5 TB per object and are limited to at most 1000 parts of between 5 MB and 5 GB each).
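For what it's worth, the arithmetic behind those limits can be sketched like this (the constants are just the numbers quoted above, so verify them against your provider's documentation):

```python
import math

# Limits as quoted above (assumptions, not verified provider values).
MAX_PARTS = 1000
MIN_PART = 5 * 1024**2   # 5 MiB
MAX_PART = 5 * 1024**3   # 5 GiB

def required_part_size(total_bytes: int) -> int:
    # Smallest part size that fits the object into MAX_PARTS parts,
    # respecting the minimum part size.
    size = max(MIN_PART, math.ceil(total_bytes / MAX_PARTS))
    if size > MAX_PART:
        raise ValueError("object too large for these multipart limits")
    return size
```

For example, a 6 GiB artifact would need parts of roughly 6.2 MiB, well within the limits, so multipart uploads would comfortably cover files above 5 GB if GitLab falls back to them.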

Sadly, I couldn't find anything about that in the documentation.

If anyone knows how GitLab will behave with this disabled and > 5GB files, please tell me :smiley: