Multiple deployments from one pipeline

Deploying to multiple clusters

Working with 4 production clusters and 2 staging clusters, each of which has its own Argo CD installation, all serviced by one GitLab repo that has Helm charts with values files for each cluster.

The state of each cluster is that the ‘production’ version of each service is deployed to it, and this corresponds to the main branch of that service’s repo. Development branches are also deployed to the staging clusters; we identify these as ‘sandbox’ versions…

I need to track the deployment progress for each cluster (in each service repo), since other tasks run based on deployment status, so I need to create a GitLab deployment for each. I have static environments set up for the production versions of a service and dynamic environments set up for the sandbox versions.
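Roughly what I mean by the environments (cluster names and the notify-argo-repo.sh helper are just placeholders for my actual setup):

```yaml
# Static environment for a production cluster
deploy:prod-1:
  stage: deploy
  script:
    - ./notify-argo-repo.sh prod-1   # hypothetical helper that hands off to the Argo repo
  environment:
    name: production/prod-1
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'

# Dynamic environment for a sandbox version, keyed on the branch
deploy:sandbox:
  stage: deploy
  script:
    - ./notify-argo-repo.sh staging-1
  environment:
    name: sandbox/$CI_COMMIT_REF_SLUG
  rules:
    - if: '$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH'
```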

These deployments are to be created in the service repository, with the deployment IDs passed to the repo Argo observes, along with other data about the deployment.

Seeking some guidance on good practice here…

I could have a job for each cluster that creates the deployment and passes info to the Argo CD repo for it to update its info, or I could script it in bash and generate all the IDs in one job (which seems less wasteful in terms of runners).

Some of this could run into using arrays in bash (the runners run Alpine, so I don’t have all the usual bash tooling available), or I could do something with JSON and pass that object around.
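If I went the JSON route, the sort of thing I have in mind is below: one job loops over a JSON list of clusters with jq (installed via apk) and creates a GitLab deployment per cluster through the Deployments API, collecting the IDs into an artifact. The CLUSTERS variable, the environment names and the API_TOKEN are placeholders, not my real values:

```yaml
create-deployments:
  stage: deploy
  image: alpine:3.19
  variables:
    # hypothetical list of target clusters and their GitLab environments
    CLUSTERS: '[{"name":"prod-1","env":"production/prod-1"},{"name":"prod-2","env":"production/prod-2"}]'
  script:
    - apk add --no-cache curl jq
    - |
      echo "$CLUSTERS" | jq -c '.[]' | while read -r cluster; do
        env=$(echo "$cluster" | jq -r '.env')
        # create a deployment for this environment and capture its ID
        id=$(curl --silent --request POST \
          --header "PRIVATE-TOKEN: $API_TOKEN" \
          --data "environment=$env&sha=$CI_COMMIT_SHA&ref=$CI_COMMIT_REF_NAME&tag=false&status=running" \
          "$CI_API_V4_URL/projects/$CI_PROJECT_ID/deployments" | jq -r '.id')
        echo "$cluster" | jq --arg id "$id" '. + {deployment_id: $id}'
      done | jq -s '.' > deployments.json
  artifacts:
    paths:
      - deployments.json
```

The deployments.json artifact is then the object I’d pass over to the Argo repo, rather than trying to shuffle bash arrays around in busybox sh.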

I’d be interested in any thoughts you might have that lean one way or the other, JSON or bash arrays, towards making this process simple to manage. Or indeed any other strategies for performing multiple deployments from a single pipeline.

An immediate thought goes in a different direction: multi-project pipelines, where each cluster deployment is triggered from a central pipeline. This requires the CI/CD configuration to live in a separate GitLab project for each cluster. More in Downstream pipelines | GitLab and this blog post.
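For reference, triggering a downstream per-cluster project would look roughly like this (the project path is made up):

```yaml
deploy:cluster-prod-1:
  stage: deploy
  trigger:
    project: my-group/cluster-prod-1-deploy   # hypothetical per-cluster project
    branch: main
    strategy: depend   # upstream job waits for and mirrors the downstream result
  variables:
    SERVICE_SHA: $CI_COMMIT_SHA
```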

So I probably should clear this up a little…

I have one repo that has the Helm charts for the Argo app-of-apps and the values required to target the version to be deployed. Fundamentally this is just the commit hash of each service repository that is deployed.
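So per cluster there is a values file along these lines (names, layout and hashes are illustrative, not the real structure):

```yaml
# values-prod-1.yaml (illustrative)
services:
  service-a:
    repoURL: https://gitlab.example.com/my-group/service-a.git
    targetRevision: 3f9c2e1   # commit hash pushed over by the service pipeline
  service-b:
    repoURL: https://gitlab.example.com/my-group/service-b.git
    targetRevision: 8d41b07
```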

So each service repository has its own pipeline, performing all the testing etc. It also contains the Helm chart that deploys that service. The pipeline is set up so that on merge to main, the repository sends its commit hash to the one repo that contains the app-of-apps. The pipeline in that repo verifies the information sent over and, if all is well, updates the target revision for the service repository in a values file used to configure the app-of-apps.
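The update on the app-of-apps side is roughly this (the values path, the yq edit and the SERVICE_NAME/TARGET_SHA/PUSH_TOKEN variables are placeholders; the service-repo side is a trigger job like the one sketched above, passing those variables downstream):

```yaml
update-target-revision:
  stage: update
  image: alpine:3.19
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline"'   # only run when triggered by a service repo
  script:
    - apk add --no-cache git yq
    # write the verified commit hash into the cluster's values file
    - yq -i ".services.${SERVICE_NAME}.targetRevision = \"${TARGET_SHA}\"" values-prod-1.yaml
    - git config user.email ci@example.com
    - git config user.name ci
    - git commit -am "Deploy ${SERVICE_NAME} at ${TARGET_SHA}"
    - git push "https://ci:${PUSH_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" HEAD:main
```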

Argo, watching this repository, sees the updated hash and then deploys the service from its repository.

Hope that makes sense…

So the issue is that there are values files in the Argo repository for each cluster. These can all be updated at once, but what I’d like to do is deploy to one cluster, wait until the deployment has completed, and then deploy to the next cluster, without any human interaction.
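One shape this could take is a stage per cluster in the app-of-apps pipeline: each job commits the new hash into that cluster’s values file, then blocks until Argo CD reports the app synced and healthy before the next stage starts. A minimal sketch, assuming the argocd CLI with ARGOCD_SERVER/ARGOCD_AUTH_TOKEN set as CI variables and an app named per cluster; the cluster names, app naming and variables are placeholders:

```yaml
stages:
  - deploy-prod-1
  - deploy-prod-2

.deploy-cluster:
  image: alpine:3.19
  script:
    - apk add --no-cache git yq curl
    # make sure we build on the latest main, since earlier cluster jobs push commits too
    - git fetch origin main && git checkout -B main origin/main
    - yq -i ".services.${SERVICE_NAME}.targetRevision = \"${TARGET_SHA}\"" "values-${CLUSTER}.yaml"
    - git config user.email ci@example.com
    - git config user.name ci
    - git commit -am "Deploy ${SERVICE_NAME} to ${CLUSTER}"
    - git push "https://ci:${PUSH_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" main
    # block until Argo CD has synced and the app is healthy on this cluster
    - curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
    - chmod +x /usr/local/bin/argocd
    - argocd app wait "${SERVICE_NAME}-${CLUSTER}" --sync --health --timeout 600 --grpc-web

deploy:prod-1:
  extends: .deploy-cluster
  stage: deploy-prod-1
  variables:
    CLUSTER: prod-1
  environment:
    name: production/prod-1

deploy:prod-2:
  extends: .deploy-cluster
  stage: deploy-prod-2
  variables:
    CLUSTER: prod-2
  environment:
    name: production/prod-2
```

Because stages run in order, prod-2 only starts once the wait for prod-1 has succeeded, which gives the rolling, hands-off behaviour I’m after; whether to gate on `argocd app wait`, polling the Argo CD API, or a notification back from Argo is the part I’m still weighing up.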