Scheduled Jobs in parent pipeline

Parent pipeline not allowing scheduled jobs to run

I have a CI pipeline that I am converting to rules with parent/child pipelines, and my scheduled jobs are no longer running. I believe the reason is that in a child pipeline, CI_PIPELINE_SOURCE is no longer schedule but rather parent_pipeline. Here is my sample .gitlab-ci.yml setup:

/my-project/my-repo/.gitlab-ci.yml

# Package A configuration
Task:
  # `trigger` is the keyword to create a child pipeline
  trigger:
    # Include the configuration file of the child pipeline
    include: .gitlab-ci-ecs-task.yml
    # With this strategy, the parent only succeeds if the child succeeds too
    strategy: depend

# Package B configuration
Service:
  # `trigger` is the keyword to create a child pipeline
  trigger:
    # Include the configuration file of the child pipeline
    include: .gitlab-ci-ecs-service.yml
    # With this strategy, the parent only succeeds if the child succeeds too
    strategy: depend

/my-project/my-repo/.gitlab-ci-ecs-service.yml

variables:
  REGION: us-east-1
  ECR_REPO_NAME: my-repo-runner
  ECS_CLUSTER_NAME: repo
  SERVICE_NAME: repo
  DOCKERFILE: runner.Dockerfile

include:
  - project: 'my-project/my-pipelines'
    ref: 'test'
    file: '.gitlab-ci-ecs-service.yml'

/my-project/my-repo/.gitlab-ci-ecs-task.yml

variables:
  REGION: us-east-1
  ECR_REPO_NAME: my-repo-runner
  ECS_CLUSTER_NAME: repo
  DOCKERFILE: runner.Dockerfile

include:
  - project: 'my-project/my-pipelines'
    ref: 'test'
    file: '.gitlab-ci-ecs-service.yml'

Now, if you’re thinking .gitlab-ci-ecs-service.yml and .gitlab-ci-ecs-task.yml are the same, they are not: one of them passes an extra variable. Unfortunately, there is no way to pass different variables to different include statements, but that is a secondary problem. The primary one is getting the scheduled job to run.
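(Side note on the variables question: one workaround worth trying, which is my suggestion rather than something from the post, is to set variables on the trigger jobs themselves, since variables defined on a trigger job are forwarded to the child pipeline it starts. A sketch, reusing the file names from above:)

```yaml
# Parent .gitlab-ci.yml: variables declared on a trigger job are
# passed down to the triggered child pipeline, so each child can
# receive a different set even when both include shared templates.
Service:
  variables:
    SERVICE_NAME: repo   # only the Service child receives this
  trigger:
    include: .gitlab-ci-ecs-service.yml
    strategy: depend

Task:
  # No SERVICE_NAME here, so the Task child falls back to the
  # default (NONE) declared in the shared pipeline template.
  trigger:
    include: .gitlab-ci-ecs-task.yml
    strategy: depend
```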

/my-project/my-pipelines/.gitlab-ci-ecs-service.yml

stages:
  - build
  - deploy
  - slackbot

image: registry.gitlab.com/my-repo/base-images/terraform:latest

variables:
  ACCOUNT_NUMBER: 1234567890
  ECR_REPO_NAME: services
  ECS_CLUSTER_NAME: services
  SERVICE_NAME: NONE
  REGION: us-east-1
  DOCKERFILE: Dockerfile

.initialize_container: &init_container
    - ECR_REPO_URL=${ACCOUNT_NUMBER}.dkr.ecr.${REGION}.amazonaws.com/${BUILD_ENVIRONMENT}-${ECR_REPO_NAME}
    - ECS_CLUSTER_ARN=arn:aws:ecs:${REGION}:${ACCOUNT_NUMBER}:cluster/${BUILD_ENVIRONMENT}-${ECS_CLUSTER_NAME}
    - ECS_SERVICE_NAME=${BUILD_ENVIRONMENT}-${SERVICE_NAME}
    - echo "ECR_REPO_URL=$ECR_REPO_URL"
    - echo "ECS_CLUSTER_ARN=$ECS_CLUSTER_ARN"
    - echo "ECS_SERVICE_NAME=$ECS_SERVICE_NAME"
    - echo "DOCKERFILE=$DOCKERFILE"

.create_build_container: &create_container
    - docker build -t "$ECR_REPO_URL:$CI_COMMIT_SHA"
      --build-arg ARTIFACTORY_USERNAME="$ARTIFACTORY_USERNAME"
      --build-arg ARTIFACTORY_PASSWORD="$ARTIFACTORY_PASSWORD"
      --file $DOCKERFILE
      .
    - $(aws --profile default --region "$REGION" ecr get-login --no-include-email)

.push_build_container: &push_container
    - docker push "$ECR_REPO_URL:$CI_COMMIT_SHA"
    - docker tag "$ECR_REPO_URL:$CI_COMMIT_SHA" "$ECR_REPO_URL:$BUILD_ENVIRONMENT"
    - docker push "$ECR_REPO_URL:$BUILD_ENVIRONMENT"
    - docker tag "$ECR_REPO_URL:$CI_COMMIT_SHA" "$ECR_REPO_URL:latest"
    - docker push "$ECR_REPO_URL:latest"

.restart_ecs_service: &restart_service |
    if [[ $SERVICE_NAME != "NONE" ]]; then
        aws --profile default --region "$REGION" ecs update-service \
          --cluster "$ECS_CLUSTER_ARN" \
          --service "$ECS_SERVICE_NAME" \
          --force-new-deployment
    fi

build:
  stage: build
  services:
    - docker:19.03.5-dind
  script:
    - BUILD_ENVIRONMENT=test
    - *init_container
    - *create_container
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

deploy-dev:
  stage: deploy
  services:
    - docker:19.03.5-dind
  script:
    - BUILD_ENVIRONMENT=dev
    - *init_container
    - *create_container
    - *push_container
    - *restart_service
  rules:
    - if: ($CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "main")

deploy-dev2:
  stage: deploy
  services:
    - docker:19.03.5-dind
  script:
    - BUILD_ENVIRONMENT=dev2
    - *init_container
    - *create_container
    - *push_container
    - *restart_service
  rules:
    - if: $CI_COMMIT_BRANCH == "dev2" && $CI_PIPELINE_SOURCE == "push"

deploy-qa:
  stage: deploy
  services:
    - docker:19.03.5-dind
  script:
    - BUILD_ENVIRONMENT=qa
    - *init_container
    - *create_container
    - *push_container
    - *restart_service
  rules:
    - if: $CI_COMMIT_BRANCH == "qa" && $CI_PIPELINE_SOURCE == "push"

deploy-trn:
  stage: deploy
  services:
    - docker:19.03.5-dind
  script:
    - BUILD_ENVIRONMENT=trn
    - *init_container
    - *create_container
    - *push_container
    - *restart_service
  rules:
    - if: ($CI_COMMIT_BRANCH == "trn" || $CI_COMMIT_BRANCH == "training") && $CI_PIPELINE_SOURCE == "push"

deploy-prod:
  stage: deploy
  services:
    - docker:19.03.5-dind
  script:
    - BUILD_ENVIRONMENT=prod
    - *init_container
    - *create_container
    - *push_container
    - *restart_service
  rules:
    - if: ($CI_COMMIT_BRANCH == "prod" || $CI_COMMIT_BRANCH == "production") && $CI_PIPELINE_SOURCE == "push"

downmerge:
  stage: slackbot
  script:
    - BUILD_ENVIRONMENT="downmerge"
    - echo "$BUILD_ENVIRONMENT"
    - env
    - /usr/bin/slackbot.sh
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"

Hi Salim,

You’re right: all the child pipelines will have $CI_PIPELINE_SOURCE == "parent_pipeline". You might need to move these jobs into the parent pipeline in order to run them on a schedule.
Alternatively, you can put this job into a separate child pipeline and use rules on the trigger job itself, since the trigger job runs in the parent pipeline, where the source is still "schedule".
For example:

Task2:
  trigger:
    include: task2.yml
    strategy: depend
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
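Another option sometimes used (my own suggestion, not from the thread, and PARENT_PIPELINE_SOURCE is a made-up variable name): forward the parent’s pipeline source to the child as a custom variable, then match on that variable in the child’s rules instead of CI_PIPELINE_SOURCE. A sketch:

```yaml
# Parent .gitlab-ci.yml: capture the real source before triggering.
Task:
  variables:
    PARENT_PIPELINE_SOURCE: $CI_PIPELINE_SOURCE
  trigger:
    include: .gitlab-ci-ecs-task.yml
    strategy: depend

# Child pipeline: rules can now distinguish scheduled runs, even
# though CI_PIPELINE_SOURCE is always "parent_pipeline" here.
downmerge:
  stage: slackbot
  script:
    - /usr/bin/slackbot.sh
  rules:
    - if: $PARENT_PIPELINE_SOURCE == "schedule"
```

This keeps the scheduled job inside the child pipeline rather than moving it up to the parent.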