Running a CI/CD Pipeline Across Multiple Directories in One Project

Hello,

So I’ve recently created a little test environment, just to mess around with creating a pipeline.

I have my .gitlab-ci.yml file in the root of my repository, and if I add a new file, s3.tf for example, with the code needed to create an S3 bucket in AWS, I can run my pipeline and it deploys the infrastructure with no problem at all. However, I’d like to expand this project at some point, so I would prefer to put all of my ‘infrastructure’ files into a directory called ‘infra’.

Therefore, I created an ‘infra’ directory within the project and made a new Terraform file to deploy different AWS infrastructure, but the pipeline doesn’t seem to detect this directory or any of the files in it. I’m probably missing something really stupid, but any help would be great! Thanks

Contents of .gitlab-ci.yml:

image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest
variables:
  TF_ROOT: ${CI_PROJECT_DIR}/
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/tf-state
cache:
  key: example-production
  paths:
    - ${TF_ROOT}/.terraform
before_script:
  - cd ${TF_ROOT}
stages:
  - prepare
  - validate
  - build
  - deploy
init:
  stage: prepare
  script:
    - gitlab-terraform init
validate:
  stage: validate
  variables:
    TF_VAR_AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
    TF_VAR_AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    TF_VAR_AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
  script:
    - gitlab-terraform validate
plan:
  stage: build
  variables:
    TF_VAR_AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
    TF_VAR_AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    TF_VAR_AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
  script:
    - gitlab-terraform plan
    - gitlab-terraform plan-json
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json
apply:
  stage: deploy
  variables:
    TF_VAR_AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
    TF_VAR_AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    TF_VAR_AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
  environment:
    name: production
  script:
    - gitlab-terraform apply
  dependencies:
    - plan
  when: manual
  only:
    - main


Hi,

This is a Terraform feature. Terraform does not look into subdirectories; it only takes the files in the current directory into account, which is called the “root module”. Any subdirectory is treated as a separate “module” and is not included by default. You can read more about it in the official docs.

One really simple solution is to define the subdirectory as a module. You can add the following to main.tf, for example:

module "infra" {
  source = ./infra
}

However, this approach is not really scalable. If this repo grows larger and larger, all of your plans will take a long time and slow you down, because every resource needs to be compared against the real world on each run.

If you foresee that you will have a lot of resources and you want to keep a mono-repo approach, a better solution is to have separate jobs for each set of resources, in your case for each sub-directory. This adds some complexity to the pipeline setup, but it helps if your Terraform plans take a long time.
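As a rough, untested sketch of that approach (the job names, the per-directory state name, and the changes patterns below are just placeholders), you could add a plan/apply pair per directory that points TF_ROOT at the sub-directory, keeps its own state, and only runs when files in that directory change. It reuses the same image, before_script and AWS credential variables as your current file:

plan:infra:
  stage: build
  variables:
    # Point the existing TF_ROOT / TF_ADDRESS pattern at the sub-directory,
    # with a separate state per directory (the state name "infra" is a placeholder).
    TF_ROOT: ${CI_PROJECT_DIR}/infra
    TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/infra
    TF_VAR_AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
    TF_VAR_AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    TF_VAR_AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
  rules:
    # Only run when something under infra/ changes.
    - changes:
        - infra/**/*
  script:
    # init runs in each job, since every job works in its own directory.
    - gitlab-terraform init
    - gitlab-terraform plan
    - gitlab-terraform plan-json
  artifacts:
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json

apply:infra:
  stage: deploy
  variables:
    TF_ROOT: ${CI_PROJECT_DIR}/infra
    TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/infra
    TF_VAR_AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
    TF_VAR_AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    TF_VAR_AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
  rules:
    # Manual gate, and only when infra/ changed.
    - changes:
        - infra/**/*
      when: manual
  script:
    - gitlab-terraform init
    - gitlab-terraform apply
  needs:
    - plan:infra

You would then repeat the same pair of jobs for each new sub-directory (or factor the common parts out with extends or YAML anchors), so a change in one directory only plans and applies that directory.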


Thank you! That worked perfectly. I added the following to main.tf; I think you may have missed the quotes around ./infra:

module "infra" {
  source = "./infra"
}

I think for the time being this will be fine, and I’ll mess around a bit more. But in the future I would like to test increasing the complexity of the pipeline so that it has separate jobs for each sub-directory. I don’t suppose you have any documentation or guides you could recommend for this?

Many thanks!

Hi George.

Did you have to change anything in your .gitlab-ci.yml? I have the same problem as you, but for me it did not work even after declaring the module in my file. Or was your .gitlab-ci.yml exactly as you posted before?