(GitLab OpenTofu Component) Can't seem to get IPv6 working with my AWS resources

Problem to solve

Describe your question in as much detail as possible:

Hi all,

I am trying to enable IPv6 on my AWS EKS cluster and its associated VPC, Route 53 records, etc., all of which I am creating with GitLab CI and the GitLab OpenTofu component. I keep getting stymied by errors related to configuring IPv6 on the subnets created in the VPC. In the most recent pipeline run, I got two errors of the following form:

 Error: creating EC2 Subnet: MissingParameter: Either 'ipv6CidrBlock' or 'ipv6IpamPoolId' should be provided.
 	status code: 400, request id: (omitted)
 
   with module.vpc.aws_subnet.public[0],
   on .terraform/modules/vpc/main.tf line 97, in resource "aws_subnet" "public":
   97: resource "aws_subnet" "public" {

And two more errors of the form:

 Error: creating EC2 Subnet: InvalidSubnet.Conflict: The CIDR '10.0.0.0/24' conflicts with another subnet
 	status code: 400, request id: (omitted)
 
   with module.vpc.aws_subnet.private[0],
   on .terraform/modules/vpc/main.tf line 233, in resource "aws_subnet" "private":
  233: resource "aws_subnet" "private" {

I’ll figure out the IPv4 range conflicts later. Right now, I am mostly concerned with the first two errors, relating to IPv6.

Any help would be appreciated. My website is down while I try to figure this out.

  • What are you seeing, and how does that differ from what you expect to see?
  • Consider including screenshots, error messages, and/or other helpful visuals

I am seeing the error messages listed above, in the output from the apply stage of the pipeline.

Steps to reproduce

Which troubleshooting steps have you already taken? Can you link to any docs or other resources so we know where you have been?

I have browsed through a bunch of AWS and Terraform documentation articles, Stack Overflow questions and answers, and more – too many to list here. However, I can’t find much documentation on this particular subject (setting up IPv6) in the context of GitLab’s OpenTofu component.

Configuration

Add the infrastructure-as-code or cloud-native configuration relevant to the question.

.gitlab-ci.yml:

include:
  - local: development.yml
    rules:
      - if: ($CI_PIPELINE_SOURCE == 'push' && $CI_COMMIT_REF_PROTECTED == 'false')
        when: always
  - local: mr.yml
    rules:
      - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
        when: always
  - local: prod_deploy.yml
    rules:
      - if: ($CI_PIPELINE_SOURCE == 'push' && $CI_COMMIT_REF_PROTECTED == 'true')
        when: always
  - component: gitlab.com/components/opentofu/full-pipeline@0.18.0
    inputs:
      # The version must currently be specified explicitly as an input,
      # to find the correctly associated images. This can be removed
      # once https://gitlab.com/gitlab-org/gitlab/-/issues/438275 is solved.
      version: 0.18.0
      opentofu_version: 1.7.1
      state_name: $state_name

stages: [validate, test, build, deploy, cleanup]

#plan:
#  allow_failure: true

destroy:
  needs: []

development.yml:

variables:
  environment_name: "sgt_com_${CI_COMMIT_REF_SLUG}"
  job_prefix: "sgt_com_${CI_COMMIT_REF_SLUG}"
  cluster_name: "sgt_com_${CI_COMMIT_REF_SLUG}"
  TF_VAR_cluster_name: "sgt_com_${CI_COMMIT_REF_SLUG}"
  state_name: "sgt_com_${CI_COMMIT_REF_SLUG}"
  TF_STATE_NAME: "sgt_com_${CI_COMMIT_REF_SLUG}"

mr.yml:

variables:
  environment_name: "sgt_com_${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}"
  job_prefix: "sgt_com_${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}"
  cluster_name: "sgt_com_${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}"
  TF_VAR_cluster_name: "sgt_com_${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}"
  state_name: "sgt_com_${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}"
  TF_STATE_NAME: "sgt_com_${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}"

prod_deploy.yml:

variables:
  environment_name: 'sgt_com_prod'
  job_prefix: 'sgt_com_prod'
  cluster_name: 'sgt_com_prod'
  TF_VAR_cluster_name: 'sgt_com_prod'
  state_name: 'sgt_com_prod'
  TF_STATE_NAME: 'sgt_com_prod'

providers.tf:

provider "aws" {
  region = var.region
}

...

versions.tf:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    ...
  }
}

variables.tf:

variable "region" {
  default     = "us-west-1"
  description = "AWS region"
}

...

data.tf:

data "aws_availability_zones" "available" {}

data "http" "gstatic-ipranges-cloud-json" {
  url = "https://www.gstatic.com/ipranges/cloud.json"
  request_headers = {
    Accept = "application/json"
  }
}

...

vpc.tf:

locals {
  azs                            = slice(data.aws_availability_zones.available.names, 0, 2)
  vpc_cidr                       = "10.0.0.0/16"
  google_cloud_json              = jsondecode(data.http.gstatic-ipranges-cloud-json.response_body)
  google_cloud_prefix_arr        = local.google_cloud_json.prefixes
  google_cloud_us_east1_prefixes = [for ipv4Prefix, scope in local.google_cloud_prefix_arr : ipv4Prefix if scope == "us-east1"]
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name                                          = var.cluster_name
  cidr                                          = local.vpc_cidr
  azs                                           = local.azs
  private_subnets                               = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k)]
  public_subnets                                = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 4)]
  enable_nat_gateway                            = true
  single_nat_gateway                            = true
  enable_dns_hostnames                          = true
  enable_ipv6                                   = true
  public_subnet_assign_ipv6_address_on_creation = true
  public_subnet_ipv6_native                     = true
}

resource "aws_security_group" "public_sg" {
  name        = "${var.cluster_name}_Public_Security_Group"
  description = "Public internet access"
  vpc_id      = module.vpc.vpc_id
}

resource "aws_security_group_rule" "public_out" {
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.public_sg.id
  to_port           = 0
  type              = "egress"
  cidr_blocks       = ["0.0.0.0/0"]
  ipv6_cidr_blocks  = ["::/0"]
}

resource "aws_security_group_rule" "public_http_in" {
  from_port         = 80
  protocol          = "tcp"
  security_group_id = aws_security_group.public_sg.id
  to_port           = 80
  type              = "ingress"
  cidr_blocks       = ["0.0.0.0/0"]
  ipv6_cidr_blocks  = ["::/0"]
}

resource "aws_security_group_rule" "public_https_in" {
  from_port         = 443
  protocol          = "tcp"
  security_group_id = aws_security_group.public_sg.id
  to_port           = 443
  type              = "ingress"
  cidr_blocks       = ["0.0.0.0/0"]
  ipv6_cidr_blocks  = ["::/0"]
}

resource "aws_security_group_rule" "GitLab_In" {
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.public_sg.id
  to_port           = 0
  type              = "ingress"
  cidr_blocks       = ["34.74.90.64/28", "34.74.226.0/24"]
}

resource "aws_security_group_rule" "GoogleCloud_us_east1_In" {
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.public_sg.id
  to_port           = 0
  type              = "ingress"
  cidr_blocks       = local.google_cloud_us_east1_prefixes
}

resource "aws_security_group_rule" "My_IP_In" {
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.public_sg.id
  to_port           = 0
  type              = "ingress"
  cidr_blocks       = [(... my public IPv4 address ...)]
}

resource "aws_security_group" "private_sg" {
  name        = "${var.cluster_name}_Private_Security_Group"
  description = "Private internet access"
  vpc_id      = module.vpc.vpc_id
}

resource "aws_security_group_rule" "private_out" {
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.private_sg.id
  to_port           = 0
  type              = "egress"
  cidr_blocks       = ["0.0.0.0/0"]
  ipv6_cidr_blocks  = ["::/0"]
}

resource "aws_security_group_rule" "private_in" {
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.private_sg.id
  to_port           = 0 # with protocol "-1", from_port and to_port must both be 0
  type              = "ingress"
  cidr_blocks       = module.vpc.private_subnets_cidr_blocks
  ipv6_cidr_blocks  = module.vpc.private_subnets_ipv6_cidr_blocks
}

Versions

Please select whether options apply, and add the version information.

  • Self-managed
  • GitLab.com SaaS
  • Self-hosted Runners

Versions

  • GitLab (Web: /help or self-managed system information): GitLab Enterprise Edition 17.1.0-pre 810dcfe9279
  • GitLab Runner, if self-hosted (Web /admin/runners or CLI gitlab-runner --version):

Infrastructure-as-Code

  • Terraform/OpenTofu: OpenTofu v1.7.1
  • Ansible: N/A
  • GitLab OpenTofu component: v0.18.0 (just upgraded from v0.17.0; didn’t make any real difference)

Cloud-native

  • Agent for Kubernetes (Operate > Kubernetes clusters): N/A, for the moment
  • Kubernetes (kubectl version): N/A, for the moment

(We’ll get to those later; the pipeline doesn’t really get that far yet)

Note that I’m allowlisting GitLab.com’s IPv4 ranges in my security group rules, along with the IPv4 ranges for the GitLab runners, which run in Google Cloud us-east1 according to GitLab’s docs. (I’m parsing the latter from https://www.gstatic.com/ipranges/cloud.json, as Google’s documentation directs.) I don’t want to leave my VPC wide open to the world, which is what happens by default with this OpenTofu component, apparently.

Thanks for any help you can provide.

Update: OK, I’ve learned a few more things since I posted last night. For one, apparently I was setting up my Ingress and Load Balancer the wrong way. I guess I’m supposed to use the AWS Load Balancer Controller to set up the load balancer(s).
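
I haven’t wired that up yet, but from what I’ve read the controller itself can be installed from Terraform with the helm provider. A minimal sketch of what I think that looks like is below – note that this assumes the helm provider is already configured against the EKS cluster, and it leaves out the IAM role / service account (IRSA) setup the controller needs; I haven’t verified this end to end:

# Sketch only: installs the AWS Load Balancer Controller Helm chart.
# Assumes a helm provider already pointed at the EKS cluster; the IAM/IRSA
# wiring the controller needs to manage ALBs/NLBs is omitted here.
resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = var.cluster_name
  }

  set {
    name  = "region"
    value = var.region
  }

  set {
    name  = "vpcId"
    value = module.vpc.vpc_id
  }
}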

For another, I learned how to set up EKS Access Entries and Access Policy Associations using Terraform. That functionality is now included in my IaC. Good thing, too, because the Access Entries and Policy Associations are pretty much required, AFAICT.
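
In case it helps anyone else, a minimal sketch of what those resources look like is below (the principal ARN is a placeholder, not my real role, and the cluster has to be using the API or API_AND_CONFIG_MAP authentication mode):

# Sketch only: grants an IAM principal cluster-admin access via EKS Access Entries.
# The role ARN is a placeholder; cluster_name reuses my existing variable.
resource "aws_eks_access_entry" "ci_admin" {
  cluster_name  = var.cluster_name
  principal_arn = "arn:aws:iam::111111111111:role/my-ci-role" # placeholder
  type          = "STANDARD"
}

resource "aws_eks_access_policy_association" "ci_admin" {
  cluster_name  = var.cluster_name
  principal_arn = aws_eks_access_entry.ci_admin.principal_arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"

  access_scope {
    type = "cluster"
  }
}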

I have commented out the IPv6 stuff for now, just to get my website back up and running. Hopefully I will have it back up sometime today.

If I can get some help with the IPv6 functionality, that would still be nice, but it’s no longer time-critical. Thanks!

Another Update: I have now learned that the Access Entries and Policy Associations can be created automatically just by setting enable_cluster_creator_admin_permissions = true in the module "eks" block. Cool! I’m trying that.
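
For reference, that flag is an input on the terraform-aws-modules/eks module (v20+). Roughly – the version pin and the other inputs here are illustrative, not my exact values:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0" # access entries require the v20 module

  cluster_name    = var.cluster_name
  cluster_version = "1.29" # illustrative

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # Creates an access entry and AmazonEKSClusterAdminPolicy association
  # for the identity that runs the apply.
  enable_cluster_creator_admin_permissions = true
}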

My logic for parsing the Google Cloud us-east1 IP prefix list needed work. Here’s what I have now:

locals {
  # ...
  google_cloud_json              = jsondecode(data.http.gstatic-ipranges-cloud-json.response_body)
  google_cloud_us_east1_prefixes = [
    for p in local.google_cloud_json.prefixes : lookup(p, "ipv4Prefix", "")
    if p.scope == "us-east1" && p.service == "Google Cloud" && lookup(p, "ipv4Prefix", "") != ""
  ]
}

# ...

resource "aws_security_group_rule" "GoogleCloud_us_east1_In" {
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.public_sg.id
  to_port           = 0
  type              = "ingress"
  cidr_blocks       = local.google_cloud_us_east1_prefixes
}