Working with 4 production clusters and 2 staging clusters, all of which have their own Argo CD installation, serviced by a single GitLab repo that holds the Helm charts with a values file per cluster.
The state of each cluster is that the ‘production’ version of each service is deployed to it, and this corresponds to the main branch of that service’s repo. Development branches are also deployed to the staging clusters — we identify these as ‘sandbox’ versions.
I need to track the deployment progress of each cluster (in each service repo), since other tasks run based on deployment status, so I need to create a GitLab deployment for each. I have static environments set up for the production versions of a service and dynamic environments set up for the sandbox versions.
These deployments are to be generated in the service repository, with the deployment IDs passed to the repo that Argo CD watches, along with other data about the deployment.
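For context, the environment setup is roughly this shape (a minimal sketch — job and cluster names are made up):

```yaml
# Minimal sketch - job and cluster names are hypothetical.
deploy:production:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
  environment:
    name: production/cluster-1        # static environment, one per cluster
  script:
    - echo "main branch -> production clusters"

deploy:sandbox:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH'
  environment:
    name: sandbox/$CI_COMMIT_REF_SLUG # dynamic environment per dev branch
  script:
    - echo "dev branch -> staging clusters"
```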
Seeking some guidance on good practice here…
I could have a job for each cluster that generates the deployment and passes the info to the Argo CD repo so it can update its state, or I could script it and generate all the IDs in one job (which seems less wasteful in terms of runners).
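If I go job-per-cluster, something like `parallel:matrix` would at least keep the YAML compact, though it still spins up a runner per cluster (rough sketch, cluster names made up):

```yaml
# One job definition fanned out across clusters - names are hypothetical.
create_deployment:
  stage: deploy
  parallel:
    matrix:
      - CLUSTER: [prod-1, prod-2, prod-3, prod-4]
  environment:
    name: production/$CLUSTER
  script:
    - echo "create deployment for $CLUSTER and push its ID to the Argo CD repo"
```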
Some of this would lean on bash arrays (the runners use Alpine images, whose default shell is BusyBox ash, so bash arrays aren’t available out of the box), or I could build a JSON object instead and pass that around.
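For the JSON route, here’s a rough sketch of what I’m picturing in plain POSIX sh on Alpine (jq and curl installed via `apk add jq curl`; `API_TOKEN` is assumed to be a token with `api` scope, since I don’t believe `CI_JOB_TOKEN` can hit the Deployments API; all names are made up):

```sh
# Single job, POSIX sh on Alpine - no bash arrays needed.
CLUSTERS='["prod-1","prod-2","prod-3","prod-4"]'

RESULTS='[]'
for CLUSTER in $(echo "$CLUSTERS" | jq -r '.[]'); do
  # Create a GitLab deployment for this cluster's environment
  DEPLOYMENT_ID=$(curl -s --request POST \
    --header "PRIVATE-TOKEN: $API_TOKEN" \
    --data "environment=production/$CLUSTER&sha=$CI_COMMIT_SHA&ref=$CI_COMMIT_REF_NAME&tag=false&status=running" \
    "$CI_API_V4_URL/projects/$CI_PROJECT_ID/deployments" | jq '.id')

  # Accumulate one JSON object per cluster to hand downstream
  RESULTS=$(echo "$RESULTS" | jq --arg c "$CLUSTER" --argjson id "$DEPLOYMENT_ID" \
    '. + [{cluster: $c, deployment_id: $id, sha: env.CI_COMMIT_SHA}]')
done

echo "$RESULTS" > deployments.json  # artifact this, or commit it to the Argo CD repo
```

The resulting `deployments.json` would then be the single object that the Argo CD repo update and the downstream status tasks key off.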
I’d be interested in any thoughts that lean one way or the other, JSON vs. bash arrays, toward keeping this process simple to manage. Or indeed any other strategies for performing multiple deployments from a single pipeline.