So job one fails when including an artifact, but job two succeeds with a local file that is in the repo. I've had several people look at this and we are using the correct syntax from GitLab.
To help debug, the YAML files you imported with the include keyword would be beneficial.
Edit: based on the trigger documentation, for dynamically generated configurations you need to use the artifact keyword instead of local in the include section, as in the sketch below.
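For example, a minimal sketch of a parent pipeline along those lines (the job names, file name, and generator script here are placeholders, not taken from this thread):

generate-config:
  stage: build
  script:
    - ./generate-pipeline.sh > generated-config.yml   # hypothetical script that writes the child config
  artifacts:
    paths:
      - generated-config.yml                          # must be declared so the runner uploads it

trigger-child:
  stage: deploy
  trigger:
    include:
      - artifact: generated-config.yml   # artifact, not local, because the file is generated, not committed
        job: generate-config             # the job that produced the artifact

A local include only works for files committed to the repo, which would explain why job two succeeds while job one fails.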
Unfortunately there is no easy way to figure out why these child jobs fail. The same job works on a different GitLab server I've used, but for some reason not on this one. I've checked the artifact storage that each of these servers uses, and both are storing the artifacts. I'm also able to download the generated artifacts from the UI on the first job.
I cat'ed all the logs on the Kubernetes web instance, and the only error I found that looked suspicious is below. But I haven't done anything with "test_reports_count"…
==> gitlab/production.log <==
Started GET "<PATHWASHERE>/pipelines/684/test_reports_count.json" for <ip> at 2020-06-10 22:41:13 +0000
Processing by Projects::PipelinesController#test_reports_count as JSON
Parameters: {"namespace_id"=>"<PATHWASHERE>", "project_id"=>"sunny-test", "id"=>"684"}
No template found for Projects::PipelinesController#test_reports_count, rendering head :no_content
Completed 204 No Content in 48ms (ActiveRecord: 13.6ms | Elasticsearch: 0.0ms | Allocations: 18136)
@tmos22 - do you think something was possibly broken by the new feature "Runner now supports downloading artifacts directly from Object Storage"? And is there any hope this will be fixed in 13.1?
With 13.1 I'm just getting "job "breed" has missing artifacts metadata and cannot be extracted". But the file should be there; I can even download the artifact from the browser. @tmos22
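One way to cross-check that the stored artifact is intact (outside the browser) is to pull it through the jobs API. This is just a sketch with placeholder host, project ID, ref, and token, and it assumes the generating job is named breed as in the error above:

curl --location --header "PRIVATE-TOKEN: <your-token>" \
  "https://gitlab.example.com/api/v4/projects/<project-id>/jobs/artifacts/<ref>/download?job=breed" \
  --output artifacts.zip

If that download succeeds but the child pipeline still reports missing metadata, that would suggest the problem is with the metadata file (which, as I understand it, the runner uploads alongside the archive) rather than with the archive itself.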