How to share artifacts even on job failure?

This is the portion of the pipeline I have.

install-rpm:
  stage: deployrpm
  allow_failure: true
  extends:
    - .iftestawsdeploy
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  variables:
    terraform_version: 1.1.7
    terragrunt_version: 0.36.6
    kubectl_version: 1.21.11
  before_script:
    - *install_dependencies
    - *install_projectn_rpm
  script:
    - *run_projectn_deploy
  after_script:
    - echo "Just to test after script creating artifacts or not even successful"
    - *forward_projectn_resources
    - ls dist1
  artifacts:
    paths:
      - ./dist1
  tags:
    - ec2runner

The script block above may sometimes fail. Regardless of its result, after_script still runs, so I create a tar.gz file there and add that folder to artifacts so it is available to the next stages.

But the artifact is only created when the script block succeeds. If the script block fails, the content produced in after_script is not added to artifacts, even though the tar file is created.

How can I make the tar file available as an artifact even when the script block fails?

Please suggest.

So, in this case you want to use artifacts:when:

install-rpm:
  stage: deployrpm
  ...
  artifacts:
    paths:
      - ./dist1
    when: always
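Note that artifacts:when accepts on_success (the default, which is why the artifact was missing on failure), on_failure, and always. With when: always set, a job in a later stage can pick up dist1 as usual; a minimal sketch, where the use-rpm job name and afterdeploy stage are hypothetical placeholders:

use-rpm:
  stage: afterdeploy        # hypothetical later stage
  dependencies:
    - install-rpm           # download install-rpm's artifacts, even from a failed run
  script:
    - ls dist1              # the tar.gz created in after_script is available here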

Thanks snim2, it worked. I had been searching for this option since morning and luckily found the same in another forum.

It would be great if you/your team members could also reply during IST hours.

Glad you got it working! Just for clarity, the vast majority of people on this forum are ordinary GitLab users, not team members.

Sorry for the confusion; by team members I meant your fellow colleagues, not the GitLab team :slight_smile: