GIT_STRATEGY propagates to child pipelines thus breaking them

Running GitLab Enterprise Edition [15.3.3-ee], self-hosted by the company.

I have a number of large projects with linear (non-circular) dependencies: libraries, programs, installers, … all for desktop applications.
Each builds correctly in its own pipeline when everything is up to date, but if a push is made to a program project that relies on a change in a library project that hasn’t been pushed yet, the build fails.
To work around this we have a policy of pushing projects in a specific order, which generally works.
We want a complete full build of everything done overnight, every night, and I’ve gone the route of a scheduled pipeline that triggers all the others in order (in parallel where possible), but I immediately ran into build failures. That seemed inexplicable, since I could run a pipeline on the exact same thing and it worked. I finally figured out that the log for the failures showed the git clone was being skipped.

I tracked it down to the fact that we use a lot of included template yaml files that define functions for use in the before_script, to keep the job scripts short and focused. And because the project repos are generally very large, any job that doesn’t need the repo (which is pretty much all but the build stage/job) has a GIT_STRATEGY of ‘none’ set in its template. The jobs in the triggering pipeline don’t need the repo either, yet that setting propagates to the triggered pipeline, overriding its own setting whether inherited or specifically set.

I created 2 sandbox projects just to test this and show the “problem”.

Here is my test sandbox yaml:

include:
    - project: 'great-bay/automation'
      ref: main
      file: '/YamlTemplates/.global_template.yml'

default:
    before_script:
        - !reference [.global_template, before_script]

stages:
    - build

.build_template:
    variables:
        GIT_STRATEGY: clone

build1:
    stage: build
    variables:
        GIT_STRATEGY: clone
    script: |
        if( $Env:GIT_STRATEGY -eq 'clone' )
        { LogMessageGreen "GIT_STRATEGY is 'clone' as expected" }
        else
        {
            LogMessageRed "GIT_STRATEGY is '$Env:GIT_STRATEGY', not 'clone' as expected"
            Exit 1
        }

build2:
    extends: .build_template
    stage: build
    script: |
        if( $Env:GIT_STRATEGY -eq 'clone' )
        { LogMessageGreen "GIT_STRATEGY is 'clone' as expected" }
        else
        {
            LogMessageRed "GIT_STRATEGY is '$Env:GIT_STRATEGY', not 'clone' as expected"
            Exit 1
        }

Here is the triggering sandbox2 yaml:

stages:
    - trig

.trig_template:
    variables:
        GIT_STRATEGY: none

trig1:
    stage: trig
    variables:
        GIT_STRATEGY: none
    trigger:
        project: great-bay/Sandbox
        branch: test_trigger_vars
        strategy: depend

trig2:
    extends: .trig_template
    stage: trig
    trigger:
        project: great-bay/Sandbox
        branch: test_trigger_vars
        strategy: depend

trig3:
    stage: trig
    trigger:
        project: great-bay/Sandbox
        branch: test_trigger_vars
        strategy: depend

Only ‘trig3’ succeeds.
Note, this is pared way down: normally a build job would extend the .build_template, which extends the .global_template, etc., and those templates have the GIT_STRATEGY set.
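To make the “etc.” concrete, the real chain is roughly this shape (a sketch pared to just the relevant keys; the names are approximate):

.global_template:
    variables:
        GIT_STRATEGY: none     # most jobs never need the large repo

.build_template:
    extends: .global_template
    variables:
        GIT_STRATEGY: clone    # the build job is the one that does

build:
    extends: .build_template
    stage: build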

The only way for me to solve it is to NOT set a git strategy in any way in the parent pipeline, but does that waste time getting the repo? I can’t tell, since I can’t see a log for those jobs and the documentation doesn’t really say what is happening.
Doing so also means I can’t include any of my templates, and to guarantee a clean/empty pipeline I may need an entirely separate project.
UNLESS there is a way to “remove” the variable setting from the job that is triggering the child pipeline. Is there? That would make my life so much easier.

BTW, I did search for how to remove a variable, and everything I found used script, which is a bummer since you can’t have a script in a job that uses ‘trigger’.

Two solutions for you:

Check if the inherit settings for variables already fix your problem. If they don’t, manually trigger the downstream pipeline using the API.
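A rough sketch of the API route, in the same PowerShell style as the jobs above (untested; the host, the project ID 123, and the SANDBOX_TRIGGER_TOKEN variable are placeholders; the token would be a pipeline trigger token stored as a masked CI/CD variable):

trig_api:
    stage: trig
    variables:
        GIT_STRATEGY: none    # harmless here: a plain API call forwards nothing by itself
    script: |
        # Only the variables[...] form fields become pipeline variables
        # downstream, so nothing propagates by accident.
        $Body = @{
            token                     = $Env:SANDBOX_TRIGGER_TOKEN
            ref                       = 'test_trigger_vars'
            'variables[GIT_STRATEGY]' = 'clone'
        }
        Invoke-RestMethod -Method Post -Body $Body -Uri 'https://gitlab.example.com/api/v4/projects/123/trigger/pipeline'

The trade-off is that you lose strategy: depend; the job won’t wait for the downstream pipeline to finish unless you poll the API for its status.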

I hadn’t seen the

inherit:
    variables: false

in any examples, so that is useful to head off the setting of GIT_CLONE_PATH, which I have habitually set at the top of the pipeline file as a global and in this case had to move into specific jobs and leave undefined in the trigger ones.
But it doesn’t sound like it will work for my GIT_STRATEGY case, because it is never a global; it comes either from a default: or from a template used via extends:.
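For reference, the suggestion amounts to something like this on the trigger job (a sketch only; as I understand it, inherit: controls what the job picks up from globals and default:, not what arrives via extends:):

trig_inherit:
    stage: trig
    inherit:
        variables: false    # don't pull global variables into this job
        default: false      # don't pull default: settings into this job
    trigger:
        project: great-bay/Sandbox
        branch: test_trigger_vars
        strategy: depend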

The pipeline-trigger (add-in?) sounds like it would have been useful many versions of GitLab ago, but since they added the ‘trigger:’ key and ‘strategy: depend’ to make it wait for completion, I can’t see any advantage in it, nor is it worth the trouble of trying to talk corporate into installing it.

I’m leaning toward creating a completely new and empty project that will only have the .gitlab-ci.yml file for scheduling pipelines that trigger others. That way I don’t need any yaml template files, nor any variables beyond those strictly defined in the triggering pipeline job, so there isn’t anything to accidentally propagate.

pipeline trigger is a CLI tool that you can use to trigger other pipelines. It’s similar to the trigger keyword but allows for more customization, as it exposes the full range of API options.

Did you ever find a viable solution to this issue?
I have the same problem. No matter what I try as GIT_STRATEGY, it is always the value that is defined in the parent pipeline.

I went the route of creating a new project with just a .gitlab-ci.yml file and a pipeline that just has stages/jobs with trigger, so that there wasn’t really anything to propagate. Each job explicitly defines the variables needed to get the triggered pipeline to perform as desired. After a bit of tinkering, I was fairly pleased with how it was working. Since there isn’t anything else in the repo for the pipeline, I don’t set a git strategy at all and it just uses the default; it doesn’t matter whether it clones the repo or not, since it only holds the tiny yml file, so it is fast, and the file isn’t used by the jobs anyway.
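Pared down, each trigger job in that file looks something like this (a sketch; the NIGHTLY_BUILD flag is just an example name):

stages:
    - trig

nightly_sandbox:
    stage: trig
    variables:
        NIGHTLY_BUILD: 'true'    # example flag, forwarded to the downstream pipeline
    trigger:
        project: great-bay/Sandbox
        branch: test_trigger_vars
        strategy: depend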

But I had several scenarios I wanted, and doing them all in this one parent triggering pipeline would have required variables, and I didn’t want to create a different project just for each scenario. Then I had this either brilliant or idiotic idea, depending on your point of view. I renamed this parent project to “Automation”, renamed my branch for the purpose of the current pipeline, and scheduled it so it runs daily. Then, on my local system, I created a new “orphan” branch, i.e. one not based on the original branch (see the commands below), named it for my next purpose, defined a ci pipeline, scheduled it, etc.
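For anyone unfamiliar, creating such an orphan branch is just (the branch name is an example):

git checkout --orphan nightly-full-build    # new branch sharing no history with the original
git rm -rf .                                # clear the carried-over staged files so the tree starts empty
# add the new .gitlab-ci.yml, commit, push, then attach a schedule to the branch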

I now have 4 completely separate branches in this Automation project, each scheduled at different times and frequencies, and each triggering several of my other project pipelines (most often the same ones). They are so easy to edit. The triggered pipelines are also easier to edit, because I can just add a rule that uses if: to check for a single variable sent from the triggering job.
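That rule amounts to something like this in the downstream project (a sketch using the same example flag as above):

build1:
    stage: build
    variables:
        GIT_STRATEGY: clone
    rules:
        - if: '$NIGHTLY_BUILD == "true"'         # sent by the Automation trigger job
        - if: '$CI_PIPELINE_SOURCE == "push"'    # normal pushes still build
    script:
        - echo "build steps here"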

I have also made each of these branches protected and configured them to not allow merging into them, so nothing ever crosses between the branches.
Since everything scheduled is now defined in one project, there’s no need to bounce around between my ~80 projects to look at pipelines for success/failure … they all show in the one pipeline list for the Automation project, and the branch names make it easy at a glance to know what failed.

This is likely an unusual if not unique way to solve my problem, but I’m happy with how it is working out.