Cleanup of the Kubernetes namespaces created for each MR by Review Apps

I have a pipeline that uses Review Apps, so a dynamic environment is created for each merge request, which in turn creates a new Kubernetes namespace for each merge request as well.

I also have an environment:on_stop job that deletes the temporary Docker image from ECR and uninstalls the Helm chart when the environment is stopped (either via auto_stop_in or by stopping it manually).

I was expecting that when the GitLab environment was stopped or deleted, the Kubernetes namespace would be deleted as well. But that doesn't seem to happen.

The relevant portion of the .gitlab-ci.yml:

deploy review app:
  stage: deploy
  image: alpine/helm:3.5.0
  dependencies: []
  script:
    - helm -n "$KUBE_NAMESPACE" upgrade
      --install --wait "$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID" chart
      -f helm-reviewapp-values.yaml
      --set-string "ingress.annotations.external-dns\.alpha\.kubernetes\.io/hostname=$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID.reviewapps.example.com."
      --set-string "ingress.host=$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID.reviewapps.example.com"
      --set-string "image=$AWS_REPOSITORY:$CI_PROJECT_NAME.$CI_MERGE_REQUEST_ID"
      --set "deploymentAnnotations.app\.gitlab\.com/app=${CI_PROJECT_PATH_SLUG}"
      --set "deploymentAnnotations.app\.gitlab\.com/env=${CI_ENVIRONMENT_SLUG}"
      --set "podAnnotations.app\.gitlab\.com/app=${CI_PROJECT_PATH_SLUG}"
      --set "podAnnotations.app\.gitlab\.com/env=${CI_ENVIRONMENT_SLUG}"

  environment:
    name: review/$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID
    url: https://$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID.reviewapps.example.com
    on_stop: stop review app
    auto_stop_in: 1 day
  needs:
    - build docker image review app
  rules:
    - if: $CI_MERGE_REQUEST_ID

stop review app:
  stage: cleanup approval
  script: echo approved
  dependencies: []
  environment:
    name: review/$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID
    action: stop
  needs:
    - deploy review app
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: manual


uninstall helm chart:
  stage: cleanup
  image: alpine/helm:3.5.0
  dependencies: []
  environment:
    name: review/$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID
    action: stop
  script:
    - helm -n "$KUBE_NAMESPACE" uninstall "$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID"
  needs:
    - stop review app
  rules:
    - if: $CI_MERGE_REQUEST_ID

delete ecr image:
  stage: cleanup
  image: amazon/aws-cli:2.1.19
  dependencies: []
  script:
    - aws ecr batch-delete-image --repository-name XXXX --image-ids "imageTag=$CI_PROJECT_NAME.$CI_MERGE_REQUEST_ID"
  needs:
    - stop review app
  rules:
    - if: $CI_MERGE_REQUEST_ID

So my question is: how do people handle the cleanup? You end up with a Kubernetes namespace for each merge request, and those quickly pile up.

I can think of a couple of workarounds, and I would appreciate any feedback on them:

  • delete the Kubernetes namespace myself in the on_stop job (a sketch of this is below)
  • schedule a periodic pipeline that deletes all “old” namespaces
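For the first option, here is a minimal sketch of what I have in mind. Everything in it is illustrative: the job name and kubectl image are placeholders, and it assumes the job's credentials are actually allowed to delete namespaces, which may not hold depending on how the cluster is integrated.

delete review namespace:
  stage: cleanup
  image:
    name: bitnami/kubectl:1.20
    entrypoint: [""]
  dependencies: []
  environment:
    name: review/$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID
    action: stop
  script:
    # KUBE_NAMESPACE is the namespace the cluster integration deployed into;
    # this only works if the job's service account may delete namespaces
    - kubectl delete namespace "$KUBE_NAMESPACE"
  needs:
    - stop review app
  rules:
    - if: $CI_MERGE_REQUEST_ID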

This is a known issue: namespaces are not cleaned up after the environment is stopped. Unfortunately you cannot remove the namespace in the stop action, because with the managed cluster feature your job will not have permission to delete the namespace. The only solution at the moment is to periodically delete old namespaces (don’t forget to clear the Kubernetes cache in the settings after performing manual updates on the cluster).
See Cleanup namespaces created for environments on environment-elimination (#27501) · Issues · GitLab.org / GitLab · GitLab
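For the periodic approach, here is a rough sketch of a scheduled cleanup job. All names are illustrative, and it assumes the scheduled pipeline gets credentials with cluster-wide permission to list and delete namespaces, plus that the review namespaces share the project-name prefix (adjust the pattern to whatever naming your cluster integration actually produces):

cleanup old review namespaces:
  stage: cleanup
  image:
    name: bitnami/kubectl:1.20
    entrypoint: [""]
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    # Delete this project's review namespaces older than 7 days,
    # judged by their creationTimestamp (GNU date is assumed)
    - |
      cutoff=$(date -d "-7 days" +%s)
      for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
        case "$ns" in
          "$CI_PROJECT_NAME"-*)
            created=$(kubectl get namespace "$ns" -o jsonpath='{.metadata.creationTimestamp}')
            if [ "$(date -d "$created" +%s)" -lt "$cutoff" ]; then
              kubectl delete namespace "$ns"
            fi
            ;;
        esac
      done

And as noted above, clear the Kubernetes cache in the cluster settings after a run like this, since it changes the cluster behind GitLab’s back.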
