GitLab artifacts upload failed

While running a build job in GitLab CI/CD, the artifacts upload failed.

Expected: GitLab artifact uploads work normally.
Current behaviour: GitLab artifact uploads fail most of the time.


Although it works sometimes.


Logs:

[root@srvxdocker01 gitlab-workhorse]# tail -f current | grep "error"
{"correlation_id":"01EZSFN4ZWRMDMMPVZ9BTYVY55","error":"handleFileUploads: extract files from multipart: persisting multipart file: unexpected EOF","level":"error","method":"POST","msg":"error","time":"2021-03-02T12:47:24Z","uri":"/api/v4/jobs/1337/artifacts?artifact_format=zip\u0026artifact_type=archive"}
{"correlation_id":"01EZSFN68Y4F3JA1WR62WQ14TZ","error":"handleFileUploads: extract files from multipart: persisting multipart file: unexpected EOF","level":"error","method":"POST","msg":"error","time":"2021-03-02T12:47:25Z","uri":"/api/v4/jobs/1337/artifacts?artifact_format=zip\u0026artifact_type=archive"}
{"correlation_id":"01EZSFN87NGZCF4GDYKQMV2VVR","error":"handleFileUploads: extract files from multipart: persisting multipart file: unexpected EOF","level":"error","method":"POST","msg":"error","time":"2021-03-02T12:47:27Z","uri":"/api/v4/jobs/1337/artifacts?artifact_format=zip\u0026artifact_type=archive"}

[root@srvxdocker01 gitlab-workhorse]# cat current | grep 01EZSFN4ZWRMDMMPVZ9BTYVY55
{"correlation_id":"01EZSFN4ZWRMDMMPVZ9BTYVY55","error":"handleFileUploads: extract files from multipart: persisting multipart file: unexpected EOF","level":"error","method":"POST","msg":"error","time":"2021-03-02T12:47:24Z","uri":"/api/v4/jobs/1337/artifacts?artifact_format=zip\u0026artifact_type=archive"}
{"content_type":"text/plain; charset=utf-8","correlation_id":"01EZSFN4ZWRMDMMPVZ9BTYVY55","duration_ms":206,"host":"my_host","level":"info","method":"POST","msg":"access","proto":"HTTP/1.1","referrer":"","remote_addr":"ip","remote_ip":"ip","route":"^/api/v4/jobs/[0-9]+/artifacts\z","status":500,"system":"http","time":"2021-03-02T12:47:24Z","ttfb_ms":205,"uri":"/api/v4/jobs/1337/artifacts?artifact_format=zip\u0026artifact_type=archive","user_agent":"gitlab-runner 13.7.0 (13-7-stable; go1.13.8; linux/amd64)","written_bytes":22}

Versions:
GitLab version: 13.7.5
GitLab Runner version: 13.7.0

Snippet of .gitlab-ci.yml:

build_job:
  stage: build
  script:
    - echo "Building python library & wheel"
    - echo "Test for Build again-29"
    - python3 setup.py bdist_wheel
  artifacts:
    paths:
      - dist/*whl

cat /etc/gitlab-runner/config.toml:

    [[runners]]
  name = "runner01"
      url = "my_host"
      token = "my_token"
      executor = "docker"
      [runners.custom_build_dir]
      [runners.cache]
        [runners.cache.s3]
        [runners.cache.gcs]
        [runners.cache.azure]
      [runners.docker]
        tls_verify = false
        image = "docker:latest"
        privileged = false
        disable_entrypoint_overwrite = false
        oom_kill_disable = false
        disable_cache = false
        volumes = ["/cache"]
        shm_size = 0

I already tried the following:

Updated GitLab from 12.7.0 to 13.7.5 and gitlab-runner to 13.7
Also tested with different gitlab-runner versions: 12.9 and 13.8

Thanks in advance.


If you are using Object Storage in your GitLab instance, please switch to Consolidated configuration.
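
For reference, the consolidated form is set in /etc/gitlab/gitlab.rb. A minimal sketch, assuming an S3-compatible backend (provider, region, credentials and bucket name below are placeholders, not your values):

# consolidated object storage configuration (replaces the per-type *_object_store settings)
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['proxy_download'] = true
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'region' => 'eu-central-1',
  'aws_access_key_id' => 'REDACTED',
  'aws_secret_access_key' => 'REDACTED'
}
# one bucket per object type; only artifacts shown here
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'gitlab-artifacts'

Run gitlab-ctl reconfigure afterwards for the change to take effect.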

You can also look at this issue for more details if you are not using Object Storage: Job succeed, but uploading artifacts fails with 500 error (#26869) · Issues · GitLab.org / gitlab-runner · GitLab

Is there any HTTP proxy between GitLab and Runner?

@balonik Thanks for your reply. I am not using Object Storage.

Yeah, there is an Nginx reverse proxy between GitLab and the Runner.

His solution was to run gitlab-runner as root, but mine is already running as root.
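
For reference, a quick way to confirm which user the runner process is running as:

# should print "root" in this case
ps -o user= -C gitlab-runner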

@romas if there is another Nginx proxy server, not the one that comes bundled with the GitLab Omnibus package (if you use an Omnibus installation), it is possible for that reverse proxy to interrupt the connections, for example if request buffering is enabled and the artifacts are too big, or if there are strict limits like limit_conn or proxy_max_temp_file_size on the proxy side.
I would take a look at the Nginx proxy logs to see if there are any hints about why the connections are getting interrupted.
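
If buffering turns out to be involved, these are the directives I would double-check in the server/location block that proxies to GitLab; the values below are only an illustration, not taken from your setup:

client_max_body_size 0;          # no request size limit
proxy_request_buffering off;     # stream uploads to the upstream instead of spooling them to disk
proxy_http_version 1.1;          # recommended when request buffering is off
proxy_read_timeout 300;          # give workhorse time to persist large archives
proxy_send_timeout 300;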

@balonik I am posting my configuration and error logs. I have already disabled any size limit with client_max_body_size 0;
My nginx.conf:

server {
  server_name my_server;
  listen 80;
  access_log /var/log/nginx/access_git.log main;
  error_log /var/log/nginx/error_git.log;
  return 301 https://$host$request_uri;
}

server {
  listen 0.0.0.0:443 ssl;
  server_name     my_server;
  server_tokens   off;
  root            /dev/null;

  access_log      /var/log/nginx/access_git.log main;
  error_log       /var/log/nginx/error_git.log;

  # disable any limits to avoid HTTP 413 for large image uploads
  client_max_body_size 0;

  include proxy.conf;
  proxy_set_header  Host  my_server;

  location / {
    proxy_pass http://0.0.0.0:8888;
  }

}

Error logs:

[error] 1668#0: *165265 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 1.1.1.1, server: my_server, request: "POST /api/v4/jobs/1357/artifacts?artifact_format=zip&artifact_type=archive HTTP/1.1", upstream: "http://0.0.0.0/api/v4/jobs/1357/artifacts?artifact_format=zip&artifact_type=archive", host: "my_host"

@romas unfortunately, that error in Nginx is too generic. I am sorry, but I don't have anything more.
I would try connecting the Runner directly to GitLab without the proxy, to find out whether it is a GitLab issue or a proxy issue. That said, I had issues with gitlab-workhorse myself on 13.7, and only the update to 13.8 eventually fixed it for me.
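
In case it helps with that test, a minimal sketch of what I mean, assuming the internal GitLab address is reachable from the runner host (the hostname below is a placeholder): temporarily point the url in /etc/gitlab-runner/config.toml straight at GitLab, then run gitlab-runner restart.

[[runners]]
  name = "runner01"
  # temporary, for testing only: bypass the reverse proxy
  url = "https://gitlab.internal.example.com/"
  token = "my_token"
  executor = "docker"

If uploads succeed like this, the proxy is the place to keep digging; if they still fail, it points at GitLab/workhorse itself.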
