Multiple deployments from one pipeline

Deploying to multiple clusters

I'm working with 4 production clusters and 2 staging clusters, each of which has its own Argo CD installation, all serviced by one GitLab repo that holds Helm charts with values files for each cluster.

The state of each cluster is that the ‘production’ version of each service is deployed to it - this is the main branch of that service’s repo. Development branches are also deployed to the staging clusters - we identify these as ‘sandbox’ versions…

I need to track the deployment progress of each cluster (in each service repo) - other tasks are run based on their status. I need to create a GitLab deployment for each. I have static environments set up for the production versions of a service and dynamic environments set up for the sandbox versions.

These deployments are to be generated in the service repository, with the deployment IDs passed to the repo Argo observes, along with other data about the deployment.

Seeking some guidance on good practice here…

I could have a job for each cluster that generates the deployment and passes the info to the argocd repo for it to update its info, or I could script it in bash and generate all the IDs in one job (which seems less wasteful in terms of runners).

Some of this could run into using arrays in bash (the runners run Alpine, so some of the usual bash tooling isn’t available), or I could do something with JSON and pass that object around.
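For example, a minimal sketch of the JSON route in plain POSIX sh (assuming jq is available in the Alpine image; the cluster names and file name are just placeholders):

```sh
#!/bin/sh
# Describe the target clusters as a JSON array instead of a bash array.
clusters='["prod-1","prod-2","prod-3","prod-4"]'

# Iterate over the array with jq - no bash-specific syntax required.
for cluster in $(echo "$clusters" | jq -r '.[]'); do
  echo "would create a deployment for $cluster"
done

# The same object can be written to a file and handed to later jobs as an artifact.
echo "$clusters" > clusters.json
```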

I’d be interested in any thoughts you might have that lean one way or the other (JSON vs ‘bash’ arrays) and might make managing this process simple - or indeed any other strategies for performing multiple deployments from a single pipeline.

An immediate thought was a different direction: multi-project pipelines, where each cluster deployment is triggered from a central pipeline. This requires the CI/CD configuration to live in a separate GitLab project for each cluster. There’s more in Downstream pipelines | GitLab and this blog post.
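As a rough sketch of that direction (using the pipeline trigger API from a script rather than the `trigger:` keyword that the downstream-pipelines docs cover declaratively), a central job could kick off each cluster project’s pipeline - the CLUSTER_PROJECT_IDS and TRIGGER_TOKEN variables here are hypothetical:

```sh
#!/bin/sh
# Trigger the per-cluster deployment pipelines from one central job.
for project_id in $CLUSTER_PROJECT_IDS; do
  curl --request POST \
       --form "token=${TRIGGER_TOKEN}" \
       --form "ref=main" \
       --form "variables[SERVICE_SHA]=${CI_COMMIT_SHA}" \
       "${CI_API_V4_URL}/projects/${project_id}/trigger/pipeline"
done
```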

So I probably should clear this up a little…

I have one repo that has the Helm charts for the Argo app-of-apps and the values required to target the version to be deployed. Fundamentally this is just the commit hash of the service repositories that are deployed.

So each repository has its own pipeline, performing all the testing etc. It also contains the Helm chart that deploys that service. The pipeline is such that on merge to main, the repository sends the commit hash to the one repo that contains the app-of-apps; the pipeline in that repo verifies the information sent over and, if all is well, updates the target revision for the service repository in a values file used to configure the app-of-apps.
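The update step in the app-of-apps repo’s pipeline could look roughly like this - a sketch only, since the values layout, the variable names, and the use of yq are assumptions about how it might be structured:

```sh
#!/bin/sh
# values/${CLUSTER}.yaml is assumed to contain something like:
#   services:
#     my-service:
#       targetRevision: <commit hash>
# SERVICE_NAME, SERVICE_SHA and CLUSTER would arrive as variables on the triggered pipeline.
yq eval -i ".services.\"${SERVICE_NAME}\".targetRevision = \"${SERVICE_SHA}\"" "values/${CLUSTER}.yaml"

git add "values/${CLUSTER}.yaml"
git commit -m "Deploy ${SERVICE_NAME} ${SERVICE_SHA} to ${CLUSTER}"
git push origin main
```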

Argo, watching this repository, sees the updated hash and then deploys the service from its repository.

Hope that makes sense…

So the issue is that there are values files in the Argo repository for each cluster - these can all be updated at once, but what I’d like to do is deploy to one cluster, wait until that deployment has completed, and then deploy to the next cluster, without any human interaction.

Have you figured out how to do this? I think we are in a similar situation and haven’t figured out how to automate/gate deployments to different environments because all of the values files for each environment live in the repo that argocd reads to know which version to deploy to which environment.

YES! Or at least a process that will allow you to achieve this.

So we have an argocd repo that contains a values file per environment for the helm chart that defines our app-of-apps…

Each service creates a GitLab deployment for each cluster that we are deploying the service to (https://docs.gitlab.com/ee/ci/environments/). Instead of using the deployments you can define in GitLab CI YAML (using the environment property of a job definition - see this example), we use a script that generates a deployment via the GitLab API for each cluster and records the IDs of those deployments.
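The create step is essentially one POST to the deployments API per cluster, recording each returned ID - a sketch below, where GITLAB_API_TOKEN is an assumed token with api scope and the cluster/environment names are examples:

```sh
#!/bin/sh
# Create one GitLab deployment per cluster and collect the IDs into a JSON object.
ids='{}'
for cluster in prod-1 prod-2 prod-3 prod-4; do
  id=$(curl --silent --request POST \
        --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN}" \
        --data "environment=${cluster}" \
        --data "sha=${CI_COMMIT_SHA}" \
        --data "ref=${CI_COMMIT_REF_NAME}" \
        --data "tag=false" \
        --data "status=running" \
        "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/deployments" | jq '.id')
  ids=$(echo "$ids" | jq --arg c "$cluster" --argjson i "$id" '. + {($c): $i}')
done

# Save the IDs as an artifact for the later deployment jobs.
echo "$ids" > deployment_ids.json
```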

These IDs are passed on to the deployment job (via artefacts) of the service’s pipeline, where it will create a child pipeline in the argocd repo to update one of the values files for a cluster - the ID of the GitLab deployment is passed to this job. The Argo application accepts the deployment ID and uses it to notify GitLab on a successful sync. It does this by updating the status of the GitLab deployment from running to success.
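The call that flips the status is just an update to the same deployment - a sketch below; GITLAB_URL, SERVICE_PROJECT_ID and the way DEPLOYMENT_ID reaches the Application (e.g. via an annotation read by a notification hook) are assumptions:

```sh
#!/bin/sh
# Mark the GitLab deployment as successful once the Argo CD sync has completed.
curl --request PUT \
     --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN}" \
     --data "status=success" \
     "${GITLAB_URL}/api/v4/projects/${SERVICE_PROJECT_ID}/deployments/${DEPLOYMENT_ID}"
```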

Back in the service repository’s pipeline, the deployment script long-polls the status of the deployment via the GitLab API using its ID. When it sees the status change to success, it triggers the next deployment to the next cluster. Rinse and repeat until all clusters have been updated.
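The polling loop itself can be very small - a sketch, assuming DEPLOYMENT_ID is read from the deployment_ids.json artifact and the 30-second interval is arbitrary:

```sh
#!/bin/sh
# Wait for Argo CD to flip the deployment to success before moving to the next cluster.
status="running"
while [ "$status" != "success" ]; do
  sleep 30
  status=$(curl --silent \
    --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN}" \
    "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/deployments/${DEPLOYMENT_ID}" | jq -r '.status')
  echo "deployment ${DEPLOYMENT_ID} is ${status}"
  if [ "$status" = "failed" ] || [ "$status" = "canceled" ]; then
    echo "deployment ${DEPLOYMENT_ID} did not succeed" >&2
    exit 1
  fi
done
echo "deployment ${DEPLOYMENT_ID} succeeded - safe to trigger the next cluster"
```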

Hope that makes sense - if not, reply and I will do some kind of live walkthrough with you…
