Protected environments stack up approvals for manual deployment jobs, even though only the oldest was actually manually triggered

We’re using GitLab.com, Premium.

We’re attempting to build a CI/CD pipeline where only Maintainers can deploy to prod, upon manual approval.

The docs seem to suggest this can/should be done by making your prod deploy job when: manual, and then protecting that environment to require approval from a Maintainer. This does work in the simplest case.
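For context, that setup boils down to something like the following (the deploy script and job name are simplified stand-ins); the approval requirement itself is configured under Settings > CI/CD > Protected environments rather than in the YAML:

deploy_prod:
  stage: deploy
  script:
    - ./deploy.sh production   # stand-in for the real deployment command
  environment:
    name: production           # "production" is protected and requires Maintainer approval
  when: manual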

However, once we get into a practical project, we land in situations where someone manually triggers the prod deploy job, and the Maintainer may take some time to get around to approving the deployment from the Environments page. In the meantime, every other pipeline’s prod deploy job has been queued for approval as well, even though nobody triggered it. You can watch the Environments page change to show a new job awaiting approval as each new pipeline runs, and none of those pipelines shows a play/cancel button on its manual prod deploy job. When the Maintainer eventually comes to the Environments page to approve the requested deployment, the pending approval they see is for the most recent pipeline. They have to manually reject it, along with every other pipeline that ran after the one they actually want to deploy.

This seems to be not only annoying and cumbersome, but incredibly error-prone. The Maintainer has to be trained and vigilant to be sure that when they get to the Environments page to approve a deployment, they’re not actually shipping some volatile feature branch.

The expected behavior would be that only prod deploy jobs that are manually triggered get stacked on the Environments page. Or better yet, a list of all pending jobs would be shown so a Maintainer can be deliberate about what they’re approving and shipping. Or that the Maintainer could approve a specific deploy job from the exact pipeline it’s running in.

Are we missing a piece of the configuration puzzle here? Is this behavior actually intended for some reason?


I escalated this to an issue, which hasn’t yet been triaged by the GitLab team. In the meantime, we’ve gone with the following hacky workaround:

before_script:
  # Look up the triggering user's membership (project plus inherited group membership) via the API
  - 'json_member=$(curl --header "PRIVATE-TOKEN: $(cat /token/path)" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/members/all/$GITLAB_USER_ID")'
  # Pull the numeric access_level field out of the JSON response
  - access=$(echo "$json_member" | grep -Eo '"access_level"[^,]*' | grep -Eo '[^:]*$')
  # Fail the job unless the triggering user meets the minimum access level
  - if [ "$access" -ge "$MIN_PROD_DEPLOY_ACCESS" ]; then echo "User $GITLAB_USER_NAME authorized"; else echo "User $GITLAB_USER_NAME has insufficient authorization"; exit 1; fi

The API token is a group-level read_api token that we store as a Kubernetes secret and mount into our private runner, then cat into the curl call. MIN_PROD_DEPLOY_ACCESS is 40 (Maintainer) for our needs, see the docs.
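For anyone wanting to replicate the mount with the Kubernetes executor: it can be declared in the runner’s config.toml, for example via the GitLab Runner Helm chart’s runners.config value. A rough sketch (the secret name and mount path are placeholders; the secret’s key becomes the filename under mount_path):

runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        # Mount the group-level read_api token into every job pod
        [[runners.kubernetes.volumes.secret]]
          name = "gitlab-read-api-token"   # hypothetical secret name
          mount_path = "/token"
          read_only = true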

We would certainly still be interested in actually using the platform-native feature someday; in its current form, though, it’s more likely to cause mistakes than to catch mistakes or malfeasance.


Typically you filter the pipelines to only allow production deployments when certain conditions are met. We do production deployments based on tags, so the production job has a rules: if: $CI_COMMIT_TAG =~ /^release-.*/ clause.
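A sketch of what that can look like when combined with the manual trigger from the original setup (job name, script, and tag pattern here are placeholders):

deploy_prod:
  stage: deploy
  script:
    - ./deploy.sh production   # placeholder for the real deployment command
  environment:
    name: production
  rules:
    # Only pipelines for tags starting with "release-" even get a prod deploy job,
    # so feature-branch pipelines never queue an approval on the Environments page.
    - if: '$CI_COMMIT_TAG =~ /^release-/'
      when: manual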