400 Bad Request error while uploading artifacts to MinIO

Hi!

I have a Gitlab instance installed from the Helm charts on a k8s cluster, connected to MinIO for object storage (deployed with the Bitnami charts rather than the Gitlab-bundled version).
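
For context, the chart is pointed at the external MinIO roughly like this (secret, bucket, and endpoint names below are placeholders, not my exact values):

global:
  minio:
    enabled: false                     # bundled MinIO disabled, we use the external Bitnami one
  appConfig:
    object_store:
      enabled: true
      connection:
        secret: gitlab-object-storage  # Kubernetes secret holding the S3 connection details
        key: connection
    artifacts:
      bucket: gitlab-artifacts
    lfs:
      bucket: gitlab-lfs
    uploads:
      bucket: gitlab-uploads

And the "connection" key in that secret contains fog-style S3 settings along these lines:

provider: AWS
region: us-east-1
aws_access_key_id: <access key>
aws_secret_access_key: <secret key>
endpoint: "http://minio.minio.svc.cluster.local:9000"
path_style: true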

We recently migrated from an Omnibus installation and everything works great except build artifact uploads (job logs work fine). Every time a CI job tries to upload an artifact, whether a single file or a directory, it fails with a 400 error and no further information.
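
A minimal job like this is enough to reproduce it (the job name and paths here are only an example, not the real pipeline):

build-docs:
  stage: build
  script:
    - mkdir -p public && echo "hello" > public/index.html
  artifacts:
    paths:
      - public/          # uploading a directory fails the same way as a single file
    expire_in: 1 week    # matches the expire_in=1+week visible in the failing request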

Here is what the production.log shows:

Started POST "/api/v4/jobs/423369/artifacts?artifact_format=zip&artifact_type=archive&expire_in=1+week" for 100.64.1.44 at 2023-09-27 13:25:00 +0000 
Started PATCH "/api/v4/jobs/423369/trace" for 100.64.1.224 at 2023-09-27 13:25:01 +0000 
Started POST "/api/v4/jobs/request" for 100.64.0.147 at 2023-09-27 13:25:01 +0000

And the api_json.log:

{"time":"2023-09-27T13:25:00.925Z","severity":"INFO","duration_s":0.0038,"db_duration_s":0.00091,"view_duration_s":0.00289,"status":400,"method":"POST","path":"/api/v4/jobs/423369/artifacts","params":[{"key":"artifact_format","value":"zip"},{"key":"artifact_type","value":"archive"},{"key":"expire_in","value":"1 week"},{"key":"file","value":{"filename":"artifacts.zip","type":"application/octet-stream","name":"file","tempfile":null,"head":"Content-Disposition: form-data; name=\"file\"; filename=\"artifacts.zip\"\r\nContent-Type: application/octet-stream\r\n"}}],"host":"xxx-gitlab-webservice-default.gitlab-server.svc.cluster.local","remote_ip":"100.64.1.44","ua":"gitlab-runner 15.2.2 (15-2-stable; go1.17.9; linux/amd64)","route":"/api/:version/jobs/:id/artifacts","redis_calls":2,"redis_duration_s":0.000325,"redis_write_bytes":118,"redis_shared_state_calls":2,"redis_shared_state_duration_s":0.000325,"redis_shared_state_write_bytes":118,"db_count":1,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":1,"db_main_count":1,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.001,"db_main_duration_s":0.001,"db_main_replica_duration_s":0.0,"cpu_s":0.007921,"mem_objects":4259,"mem_bytes":384184,"mem_mallocs":1139,"mem_total_bytes":554544,"pid":31,"worker_id":"puma_1","rate_limiting_gates":[],"correlation_id":"a6edb036-c83d-474c-9876-853927e4b5a6","meta.user":"xxxx","meta.project":"xxx/public-api-documentation","meta.root_namespace":"xxx","meta.client_id":"ip/100.64.1.44","meta.caller_id":"POST /api/:version/jobs/:id/artifacts","meta.remote_ip":"100.64.1.44","meta.feature_category":"build_artifacts","meta.pipeline_id":45684,"meta.job_id":423369,"content_length":"44723","request_urgency":"low","target_duration_s":5} 

The object storage configuration has been verified to work on other Gitlab services.

There is no useful info in the various Gitlab logs. I can see the POST request on the webservice (I'm using a Kubernetes self-hosted version of Gitlab) and the 400 status, but no errors anywhere. MinIO does not show any errors either; it even seems that the request never reaches MinIO (I can't see it in the MinIO traces).

The only information I found related to this issue, here or in the Gitlab issue tracker, is either very old and does not match my problem, or is not related at all.

Any help or pointers would be greatly appreciated!

Versions info:

  • Gitlab version: 15.2.5-ee
  • Gitlab charts version: 6.2.5
  • Bitnami MinIO charts version: 12.7.0

Hello,

Same here. I installed the latest version of the Helm chart (gitlab-7.4.1) and configured it to use a remote MinIO. I started getting suspicious when I saw that all buckets were empty (0B) post-install. I created an empty repo 'nai-fe', then deleted it, and sure enough I got a 500 error. Upon digging a bit, I saw this in the logs of the gitlab-registry Pod:

{"config_http_addr":":5000","config_http_host":"","config_http_net":"","config_http_prefix":"","config_http_relative_urls":false,"correlation_id":"01HC22QA9ZVT3YFAVN8B6Z0YJT","go_version":"go1.20.7","level":"info","method":"GET","msg":"router info","path":"/v2/orga/nai-fe/tags/list","root_repo":"orga","router":"gorilla/mux","time":"2023-10-06T08:47:32.159Z","vars_name":"orga/nai-fe","version":"v3.83.0-gitlab"}
{"auth_project_paths":["orga/nai-fe"],"auth_user_name":"","auth_user_type":"","correlation_id":"01HC22QA9ZVT3YFAVN8B6Z0YJT","go_version":"go1.20.7","level":"info","msg":"authorized request","root_repo":"orga","time":"2023-10-06T08:47:32.164Z","vars_name":"orga/nai-fe","version":"v3.83.0-gitlab"}
{"delay_s":0.543899601,"error":"RequestCanceled: request context canceled\ncaused by: context canceled","level":"info","msg":"S3: retrying after error","time":"2023-10-06T08:47:52.256Z"}
{"auth_project_paths":["orga/nai-fe"],"auth_user_name":"","auth_user_type":"","code":"REQUESTCANCELED","correlation_id":"01HC22QA9ZVT3YFAVN8B6Z0YJT","detail":"context canceled","error":"requestcanceled: request canceled","go_version":"go1.20.7","host":"gitlab-registry.gitlab.svc:5000","level":"error","method":"GET","msg":"request canceled","remote_addr":"10.42.0.173:53650","root_repo":"orga","time":"2023-10-06T08:47:52.801Z","uri":"/v2/orga/nai-fe/tags/list?n=10000","user_agent":"GitLab/16.4.1-ee","vars_name":"orga/nai-fe","version":"v3.83.0-gitlab"}
{"content_type":"application/json","correlation_id":"01HC22QA9ZVT3YFAVN8B6Z0YJT","duration_ms":20642,"host":"gitlab-registry.gitlab.svc:5000","level":"info","method":"GET","msg":"access","proto":"HTTP/1.1","referrer":"","remote_addr":"10.42.0.173:53650","remote_ip":"10.42.0.173","status":400,"system":"http","time":"2023-10-06T08:47:52.801Z","ttfb_ms":20641,"uri":"/v2/org/nai-fe/tags/list?n=10000","user_agent":"GitLab/16.4.1-ee","written_bytes":81}

You can see there is an "S3: retrying after error" in the third line and then a "status":400 in the last one. I'm not sure I'm interpreting this properly, but my intuition tells me there is a compatibility issue.
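
For completeness, here is roughly how the registry storage is wired in my values (bucket, secret name, and credentials are placeholders, not my real ones):

registry:
  storage:
    secret: registry-storage   # Kubernetes secret whose "config" key holds the storage driver settings
    key: config

with the "config" key of that secret containing the distribution s3 driver settings:

s3:
  bucket: gitlab-registry
  accesskey: <access key>
  secretkey: <secret key>
  region: us-east-1
  regionendpoint: http://minio.minio.svc.cluster.local:9000
  v4auth: true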

Hi!

Unfortunately, I don't think we are dealing with the same issue. I don't have a 500 error and my buckets are working. The artifacts bucket is even getting populated with CI job logs, so everything is good on the MinIO side.

This still happens after recent upgrades to Gitlab and MinIO. Current versions are:

  • Gitlab: 16.5.1-ee
  • Gitlab charts: 7.5.1
  • Bitnami MinIO charts: 12.8.18

For context, I used tcpdump to try to get the raw response from Gitlab and here is what I got:

HTTP/1.0 400 Bad Request
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
vary: Origin
Cache-Control: no-cache
X-Runtime: 0.014332
X-Gitlab-Meta: {"correlation_id":"1b2f0aed-ab35-49d4-aec9-0bac6b3fda19","version":"1"}
X-Request-Id: 1b2f0aed-ab35-49d4-aec9-0bac6b3fda19
Content-Length: 27

{"error":"file is invalid"}

Case closed: it turns out that the runner config was pointing to the wrong port of the Gitlab webservice. I was using port 8080 instead of port 8181.
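
For anyone hitting the same thing: in the webservice pod, port 8080 is Puma and port 8181 is Workhorse. My understanding is that artifact uploads have to go through Workhorse, which pre-processes the multipart body and hands Rails a reference to the uploaded file; posting straight to Puma skips that step, which would explain the "tempfile":null in api_json.log and the {"error":"file is invalid"} response. If the runner is deployed from the gitlab-runner Helm chart, the value to fix is gitlabUrl (service name and namespace below are placeholders):

# gitlab-runner chart values.yaml (sketch)
gitlabUrl: http://<release>-webservice-default.<namespace>.svc.cluster.local:8181   # 8181 = Workhorse, not 8080 (Puma)

With a plain config.toml, it is the url field of the [[runners]] section instead.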
