CI Job not Always Downloading Artifacts from previous Stage

I have Build and Package jobs. The Package job simply re-uploads the same artifacts as the Build job and then triggers another project to create the RPMs from them (the dev didn’t want my RPM files contaminating his nice repository).

We’ve done this with a number of projects; however, in the projects that use one particular CI file, the artifacts are not always downloaded in the Package stage.

If I retry the Package job, everything works as expected.

It pretty consistently fails the first time the job runs, then works until the artifacts from the Build job expire.

Here is the .gitlab-ci.yml file:

include:
  - project: 'docs/ci'
    ref: master
    file: '/templates/gitlab-ci.yml'

And the template it includes:

stages:
  - build
  - package

build:
  stage: build
  image: gompute/centos7-gcdocs:latest
  tags:
    - docker
    - small
  script:
    - gcdocs build .
  artifacts:
    name: ${CI_PROJECT_NAME}-${CI_COMMIT_SHORT_SHA}
    paths:
      - dist/
    expire_in: 1 hour

# package files
.packaging_template: &packaging_template
  stage: package
  tags:
    - docker
    - small
  artifacts:
    name: ${CI_PROJECT_NAME}-${CI_COMMIT_SHORT_SHA}
    paths:
      - dist/
    expire_in: 1 day
  cache: {}
  script:
    - if [ "$VERSION" = "DATE" ]; then VERSION=$(date -u "+%Y%m%d%H%M") ; fi
    - >
      curl -k -X POST -F token=$CI_JOB_TOKEN -F ref=master
      -F variables[CALLING_EPOCH]=${EPOCH}
      -F variables[CALLING_COMMIT_TAG]=${VERSION}
      -F variables[CALLING_PROJECT_ID]=${CI_PROJECT_ID}
      -F variables[CALLING_JOB_ID]=${CI_JOB_ID}
      -F variables[CALLING_PROJECT_NAME]=${CI_PROJECT_NAME}
      -F variables[CALLING_PROJECT_URL]=${CI_PROJECT_URL}
      -F variables[CALLING_PROJECT_TITLE]=${CI_PROJECT_TITLE}
      https://gitlab.gompute.net/api/v4/projects/414/trigger/pipeline
  needs:
    - job: build
      artifacts: true

package:tst:
  <<: *packaging_template
  only:
    - master
  variables:
    VERSION: "DATE"
    EPOCH: 0

package:tag:
  <<: *packaging_template
  only:
    - tags
  variables:
    VERSION: ${CI_COMMIT_TAG}
    EPOCH: 1
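
(For context: the `needs:` entry with `artifacts: true` is what I expect to force the artifact download. The pre-`needs` way of wiring the same artifact flow is the `dependencies:` keyword; a minimal sketch of that variant — not what we actually run — would be:)

```yaml
# Hypothetical variant using the classic dependencies keyword, which
# restricts artifact downloads to the listed jobs; stage ordering still
# controls when the job runs.
package:alt:
  stage: package
  dependencies:
    - build
  script:
    - ls dist/
```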

When it fails, the output simply skips the artifact download step:

Checking out cae03725 as master...
Skipping Git submodules setup
Restoring cache                    00:01
Downloading artifacts              00:02
Running before_script and script   00:01

When it succeeds, it downloads them as normal:

Checking out cae03725 as master...
Removing dist/
Skipping Git submodules setup
Restoring cache                    00:01
Downloading artifacts              00:02
Downloading artifacts for build (31740)...
Downloading artifacts from coordinator... ok        id=31740 responseStatus=200 OK token=WhYSvxvy
Running before_script and script   00:01

GitLab version: 12.9.2
Runner version: 12.9.0

One thing I’ve noticed: when the runner creates a fresh git repository it doesn’t download any artifacts, but when it reuses an existing one the download succeeds. This might just be coincidence, though.

Am I doing something wrong?

Could there be an issue with our caching?

Is there a way to force or retry artifact downloads?
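
In the meantime I’m considering a guard at the top of the packaging script, so the job fails loudly instead of silently packaging nothing. A sketch — `dist` is the artifacts path from the build job, and the function name is my own, not anything GitLab provides:

```shell
# Guard for the package script: verify the build artifacts actually arrived.
# Returns non-zero if the given directory is missing or empty.
check_artifacts() {
  dir="${1:-dist}"
  [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]
}
```

The package script could then start with `check_artifacts dist || exit 1`, so a silently skipped download fails the job immediately.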

I think this was all my fault.

I was testing by manually restarting jobs in dependency order, thinking that they would honour those dependencies… but it seems they don’t. Running a whole new pipeline does not have this issue.
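
For anyone else who hits this: instead of retrying individual jobs, just start a fresh pipeline. Via the trigger API that is roughly (a sketch — the trigger token and project ID are placeholders, and the host is our internal GitLab):

```shell
curl -X POST \
  -F token=<trigger-token> \
  -F ref=master \
  https://gitlab.gompute.net/api/v4/projects/<project-id>/trigger/pipeline
```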