Hello GitLab Community,
We’re encountering an issue with our pipeline where artifacts cannot be archived. We’re using GitLab v17.5.0-ee along with GitLab Runner v17.5.2, configured with the Kubernetes executor. We have searched the forum but haven’t found any solutions that help in this case.
Problem Description
The pipeline log shows the following error during artifact upload:
[...]
```
Created cache
Uploading artifacts for successful job
Uploading artifacts...
ERROR: Uploading artifacts as "archive" to coordinator... 413 Request Entity Too Large  id=21604 responseStatus=413 Request Entity Too Large status=413 token=glcbt-64
FATAL: too large
Cleaning up project directory and file based variables
ERROR: Job failed: command terminated with exit code 1
```
Steps Taken
We have attempted the following fixes based on the documentation (link):
- Increased `nginx['client_max_body_size']` from 200 MB to 250 MB on the GitLab server.
- Updated the “Maximum attachment size (MiB)” setting from 10 MB to 250 MB in GitLab’s settings.
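For reference, on an Omnibus install the proxy-side change corresponds to a line like the following in `/etc/gitlab/gitlab.rb` (the value mirrors the 250 MB we set; this is a sketch of our change, not a recommendation), applied afterwards with `sudo gitlab-ctl reconfigure`:

```ruby
# /etc/gitlab/gitlab.rb (Omnibus install) -- sketch of the raised proxy limit.
# Value mirrors the 250 MB mentioned above; takes effect after running
# `sudo gitlab-ctl reconfigure`.
nginx['client_max_body_size'] = '250m'
```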
After applying these changes, the job ran successfully once. However, on subsequent attempts, the error reoccurred.
Current Roadblock
We suspect the artifact archive exceeds some size limit, but we haven’t been able to determine how large the archive actually is.
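To get a concrete number before the upload even starts, one option we’re considering is printing the size of the artifact paths at the end of the job script. A minimal sketch (the `build` and `dist` paths are placeholders for whatever your `artifacts:paths` lists, and the setup lines only exist so the sketch runs standalone):

```shell
#!/bin/sh
# Sketch: measure what would go into the artifact archive before upload.
# ASSUMPTION: "build" and "dist" stand in for your real artifacts:paths.
ARTIFACT_PATHS="build dist"

# Demo setup so this sketch runs standalone; drop these lines in a real job.
mkdir -p build dist
dd if=/dev/zero of=build/app.bin bs=1024 count=512 2>/dev/null

# Per-path sizes, recorded in the job log
for p in $ARTIFACT_PATHS; do
  if [ -e "$p" ]; then
    du -sh "$p"
  fi
done

# Combined total across all listed paths (last line of `du -c` output)
du -shc $ARTIFACT_PATHS | tail -n 1
```

The combined total could then be compared against the instance limit, which (as far as we can tell) is exposed as `max_artifacts_size` in the application settings API (`GET /api/v4/application/settings`, admin token required).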
Request for Assistance
Does anyone have suggestions for additional settings or areas we should investigate to resolve this issue? Are there best practices for debugging artifact upload size issues in GitLab with Kubernetes runners?
Any help or guidance would be greatly appreciated!
Thank you in advance for your time and support.