So I have a pipeline that uses Terraform to build out infrastructure.
In this example I supply pre-configured variables (cluster_name, region, cluster_version, etc.) to the pipeline to create EKS clusters.
So I run that pipeline once for each EKS cluster.
My question is: when I make changes to this module, I want to re-run ALL of those pipelines (at least the build stage, terraform plan) so I can see all the changes that need to be applied to all my clusters.
Each cluster has its own cluster_name saved as TF_STATE_NAME and CI_ENVIRONMENT_NAME.
The only way I can think of to do this would be to have a project with the TF module and create a separate project for each cluster that subscribes to that module, so when changes are made it kicks off a new pipeline… Then TF_STATE_NAME and the environment aren’t needed for separation anymore, because the project itself becomes the separation. It just leads to many projects for almost no reason.
So what I was trying to figure out is whether, as I update my current project, there is a way to kick off N pipelines (one per EKS cluster) that loops through some list of current environments or TF states and pre-fills the parameters (unique for each cluster), so I can review each plan and kick off the deploys as change windows allow.
I assume all your pipelines have been manual up until now, so you do not use tfvars files and you enter all variables manually in the UI? Or do you edit/commit changes to tfvars when you create a new cluster?
It mostly depends on how you provide the pre-configured variables to the pipelines and why they are not part of the TF code.
What about having a tfvars file for each EKS cluster in the repo? In that case you can run the pipeline for all of them. I would create jobs for each EKS cluster.
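For example, a minimal sketch with plain terraform (the cluster and file names are just placeholders):

```yaml
# one job per cluster, each pointing at its own committed tfvars file
cluster1:plan:
  script:
    - terraform init
    - terraform plan -var-file="cluster1.tfvars"

cluster2:plan:
  script:
    - terraform init
    - terraform plan -var-file="cluster2.tfvars"
```

With the GitLab Terraform template you don’t control the plan command directly, hence the rename trick in the snippet further down.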
A bigger change, depending on how many EKS clusters you have, would be to actually create a TF module out of the code and have a “parent” TF configuration with all clusters defined via variables. This makes more sense if you will be managing them long-term, but it requires a lot of work moving resources around in TF state.
I do have tfvars, so instead of specifying variables to the pipeline manually I could run the build/deploy jobs with those (not sure how to pass the tfvars files to those jobs though)… also, would I be extending the .terraform:build jobs and such (one for each EKS cluster) with each tfvars file?
So I have the TF code in a repo where I was manually passing the tfvars as variables, but I could just as easily use the tfvars files I committed to the repo for running locally.
The question I have is how to set up a project that can run the build (terraform plan) for all my clusters, so I can see what the changes would be and then run the deploy manually on each of them.
The reason I am looking to make such a change is to handle new changes as well as things like updating TF modules in the lock file. This lets me see what those changes would be from an MR… then run the deploy against “test” clusters to see how they behave… and then merge into main once I’ve confirmed things are how I expect… I would still want manual deployments to each cluster so they can be controlled by Change Request windows.
As I mentioned, I could create “child” projects (one for each EKS cluster) that source the parent module (in that case, instead of tfvars, I would just supply the variables right in their main.tf), but that leads to “many” projects having to be created versus just looping through a list of tfvars files like you suggested (I just can’t quite picture that workflow).
I would use a nice TF feature: auto-loading tfvars files. You can name all your tfvars files like <cluster_name>.tfvars. These files can live in the repository and won’t be automatically loaded by TF, so it’s safe; each job then renames the one it needs to <cluster_name>.auto.tfvars so TF picks it up.
Just a snippet, but you will get the idea.
.terraform:build:
  before_script: # it could be in script, but if you use the GitLab TF template it needs to be in before_script
    - mv "${EKS_CLUSTER}.tfvars" "${EKS_CLUSTER}.auto.tfvars"
  script:
    - terraform plan
  ...

cluster1:build: # repeat for each cluster
  extends: ['.terraform:build']
  variables:
    EKS_CLUSTER: cluster1
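As a side note, instead of the rename you could lean on Terraform’s TF_CLI_ARGS_plan environment variable, which appends extra arguments to every terraform plan (a sketch, assuming the same file layout):

```yaml
# assumes a .terraform:build template without the mv rename in before_script
cluster1:build:
  extends: ['.terraform:build']
  variables:
    TF_CLI_ARGS_plan: "-var-file=cluster1.tfvars"
```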
The downside is that you would need a set of jobs for each cluster. That could be automated with dynamic child pipelines, though you won’t get that nice MR widget for Terraform with them.
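If you did want to automate it anyway, here is a rough sketch (it assumes one <cluster_name>.tfvars per cluster at the repo root, and that .terraform:build lives in a file the child pipeline can include; both are assumptions):

```yaml
generate-config:
  stage: build
  image: alpine:latest
  script:
    - |
      # the child pipeline needs the .terraform:build template too (path assumed)
      printf 'include:\n  - local: terraform-template.yml\n' > child.yml
      # emit one build job per committed tfvars file
      for f in *.tfvars; do
        c="${f%.tfvars}"
        printf '%s:build:\n  extends: [".terraform:build"]\n  variables:\n    EKS_CLUSTER: %s\n' "$c" "$c" >> child.yml
      done
  artifacts:
    paths: [child.yml]

run-clusters:
  stage: deploy
  trigger:
    include:
      - artifact: child.yml
        job: generate-config
    strategy: depend
```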