Trigger child pipeline with generated configuration file - not working

Local files can be passed to child pipelines, but artifacts are NOT working.

  • GitLab: 13.0.0
  • Runner: 13.0.1
  • Deployed with Kubernetes, using S3 storage without TLS
  • CI configuration (.gitlab-ci.yml):

    stages:
      - templatize
      - deploy

    breed:
      stage: templatize
      image: busybox:latest
      script:
        - j2 tpl.j2 deployment.yml -o config/master.yml
      artifacts:
        paths:
          - config/master.yml
      tags:
        - any

    thebadchild:
      stage: deploy
      trigger:
        include: config/master.yml

    why:
      stage: deploy
      script:
        - cat config/master.yml
      tags:
        - any

    thegoodchild:
      stage: templatize
      allow_failure: true
      trigger:
        include:
          - local: config/microservice_a.yml
          - local: config/template.yml

  • Troubleshooting steps already taken:


So the job triggering with the artifact (thebadchild) is failing, but the job triggering with a local file that is already in the repo (thegoodchild) succeeds. I’ve had several people look at this and we are using the correct syntax from the GitLab docs.

To help debug, seeing the YAML files you imported with the include keyword would be beneficial.

Edit: based on the trigger documentation, for dynamically generated configuration you need to use the artifact keyword (together with job) instead of local in the include section.
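
For example, something along these lines (a sketch only, reusing the job and path names from your config above):

thebadchild:
  stage: deploy
  trigger:
    include:
      - artifact: config/master.yml
        job: breed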

Thanks @tmos22 for your reply. I’ve actually tried both methods while trying to locate the issue. I’ve also used the configuration below, with the same result.

thebadchild:
  stage: deploy
  trigger:
    include:
      - artifact: config/master.yml
        job: breed

Unfortunately there is no easy way to figure out why these child jobs fail. This same job works on a different GitLab server I’ve used, but not on this server for some reason. I’ve checked the artifact storage that each of these servers uses and both are storing the artifacts, and I’m able to download the generated artifacts from the UI for the first job.

I cat’ed all the logs on the Kubernetes web instance, and the only entry that looked suspicious was the one below. But I haven’t done anything with “test_reports_count”…

==> gitlab/production.log <==
Started GET "<PATHWASHERE>/pipelines/684/test_reports_count.json" for <ip> at 2020-06-10 22:41:13 +0000
Processing by Projects::PipelinesController#test_reports_count as JSON
  Parameters: {"namespace_id"=>"<PATHWASHERE>", "project_id"=>"sunny-test", "id"=>"684"}
No template found for Projects::PipelinesController#test_reports_count, rendering head :no_content
Completed 204 No Content in 48ms (ActiveRecord: 13.6ms | Elasticsearch: 0.0ms | Allocations: 18136)

I have tried different S3 providers for the gitlab-artifacts backend; both give me the same trouble, and I’m still seeing this 204 response in the logs.

@tmos22
Here are the files used with the pipeline:

deployment.yml:

deployments:
  - region: canada
    foundation: master
    clusters:
      - bill
      - bob
      - jonny
      - beer
  - region: us
    foundation: master
    clusters:
      - bill
      - bob

pipeline.yml:

stages:
  - deploy

ihatethis:
  stage: deploy
  trigger:
    include: config/master.yml

why:
  stage: deploy
  script:
    - cat config/master.yml
  tags:
    - any

tpl.j2:

stages:
  - plan
plan:
  stage: plan
  image: myimage:latest
  script:
    - echo asdf
  tags:
    - donttouchmytag
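
Note: tpl.j2 contains no Jinja2 expressions, so the config/master.yml that the breed job renders and uploads should come out identical to the template above, i.e. a self-contained child pipeline config roughly like this (sketch of the expected generated output):

stages:
  - plan
plan:
  stage: plan
  image: myimage:latest
  script:
    - echo asdf
  tags:
    - donttouchmytag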

microservice_a.yml:

test:
  stage: test
  image: myimage:latest
  extends: .wow
  tags:
    - donttouchmytag

template.yml:

.wow:
  script:
    - echo wow

@tmos22 - do you think something was possibly broken by the new feature, “Runner now supports downloading artifacts directly from Object Storage”? And is there any hope this will be fixed in 13.1?

thanks!

With 13.1 I’m just getting “job ‘breed’ has missing artifacts metadata and cannot be extracted”. But the file should be there; I can even download the artifact from the browser. @tmos22