We currently have multiple projects, plus a “core” they all share, each in its own repository. I’d like to move them all into a single Nx (nrwl/nx) monorepo, as I’ve had a good experience with it in the past.
Now I’m trying to create the GitLab pipeline to build and deploy each of the projects into a single cluster, and I’m running into issues all along the way.
Building was fairly simple: I just change `PROJECT_NAME` and it builds and publishes a container for that project:
```yaml
buildApi:
  stage: build
  image: 'registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image:v0.4.0'
  variables:
    DOCKER_TLS_CERTDIR: ''
  services:
    - docker:19.03.12-dind
  script:
    - |
      export PROJECT_NAME=api
      export AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS="--build-arg PROJECT_NAME=$PROJECT_NAME"
      if [[ -z "$CI_COMMIT_TAG" ]]; then
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}-api
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
      else
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}-api
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
      fi
    - /build/build.sh
  rules:
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'
```
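Since only `PROJECT_NAME` differs between the per-project build jobs, the duplication could be factored into a hidden template job via `extends` — a minimal sketch (the `buildWeb` job and the `web` project name are hypothetical, just to show the pattern):

```yaml
# Hypothetical DRY-ed version: shared logic lives in a hidden template,
# and each concrete job only sets PROJECT_NAME.
.buildProject:
  stage: build
  image: 'registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image:v0.4.0'
  variables:
    DOCKER_TLS_CERTDIR: ''
  services:
    - docker:19.03.12-dind
  script:
    - |
      export AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS="--build-arg PROJECT_NAME=$PROJECT_NAME"
      if [[ -z "$CI_COMMIT_TAG" ]]; then
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}-$PROJECT_NAME
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
      else
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}-$PROJECT_NAME
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
      fi
    - /build/build.sh
  rules:
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'

buildApi:
  extends: .buildProject
  variables:
    PROJECT_NAME: api

buildWeb:  # hypothetical second project
  extends: .buildProject
  variables:
    PROJECT_NAME: web
```

With `extends`, the child jobs’ `variables:` hashes are merged into the template’s, so `DOCKER_TLS_CERTDIR` is still inherited.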
Review works fine as well. I just change `PROJECT_NAME` and it pulls the correct published container and deploys the built project:
```yaml
reviewApi:
  extends: .auto-deploy
  stage: review
  script:
    - |
      env
      export PROJECT_NAME=api
      export K8S_SECRET_PROJECT_NAME=$PROJECT_NAME
      if [[ -z "$CI_COMMIT_TAG" ]]; then
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}-$PROJECT_NAME
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
      else
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}-$PROJECT_NAME
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
      fi
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy persist_environment_url
  environment:
    name: review/$PROJECT_NAME-$CI_COMMIT_REF_NAME
    url: http://$CI_PROJECT_ID-$CI_ENVIRONMENT_SLUG.$KUBE_INGRESS_BASE_DOMAIN
    on_stop: reviewApiStop
  artifacts:
    paths: [environment_url.txt, tiller.log]
    when: always
  rules:
    - if: '$CI_KUBERNETES_ACTIVE == null || $CI_KUBERNETES_ACTIVE == ""'
      when: never
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: never
    - if: '$REVIEW_DISABLED'
      when: never
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'

reviewApiStop:
  extends: .auto-deploy
  stage: cleanup
  variables:
    GIT_STRATEGY: none
  script:
    - export PROJECT_NAME=api
    - auto-deploy initialize_tiller
    - auto-deploy delete
  environment:
    name: review/$PROJECT_NAME-$CI_COMMIT_REF_NAME
    action: stop
  allow_failure: true
  rules:
    - if: '$CI_KUBERNETES_ACTIVE == null || $CI_KUBERNETES_ACTIVE == ""'
      when: never
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: never
    - if: '$REVIEW_DISABLED'
      when: never
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'
      when: manual
```
Now the part that’s causing me grief is trying to do the same thing for the staging environment, which looks like this:
```yaml
stagingApi:
  extends: .auto-deploy
  stage: staging
  script:
    - |
      export PROJECT_NAME=api
      export K8S_SECRET_PROJECT_NAME=$PROJECT_NAME
      if [[ -z "$CI_COMMIT_TAG" ]]; then
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}-$PROJECT_NAME
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
      else
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}-$PROJECT_NAME
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
      fi
      env
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
  environment:
    name: staging/$PROJECT_NAME
    url: http://$CI_PROJECT_PATH_SLUG-staging-api.$KUBE_INGRESS_BASE_DOMAIN
  rules:
    - if: '$CI_KUBERNETES_ACTIVE == null || $CI_KUBERNETES_ACTIVE == ""'
      when: never
    - if: '$CI_COMMIT_BRANCH != "master"'
      when: never
    - if: '$STAGING_ENABLED'
```
But now I’m running into a couple of problems, mainly around GitLab Auto DevOps magically providing extra variables only when the environment has one of these specific names:
- review/*
- staging
- production
The variables I’ve seen so far are:
- CI_ENVIRONMENT_NAME
- CI_ENVIRONMENT_SLUG
- KUBE_URL
- KUBE_TOKEN
- KUBE_NAMESPACE
- KUBE_SERVICE_ACCOUNT
- KUBE_CA_PEM_FILE
- KUBE_CA_PEM
At first I figured I could get around the missing KUBE_ variables by providing them myself — it’s my cluster and I know the values — so I created something like:
```shell
export KUBE_NAMESPACE=platform-20820261-$CI_ENVIRONMENT_SLUG
export KUBE_SERVICE_ACCOUNT=platform-20820261-$CI_ENVIRONMENT_SLUG-service-account
```
But now I see that CI_ENVIRONMENT_SLUG and CI_ENVIRONMENT_NAME aren’t provided either. They’re ONLY provided for `review/*`, `staging`, and `production`. Why can’t these just be given to any job that has an environment name?
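One possible workaround is to derive a slug yourself in the script when GitLab doesn’t hand one over. The sketch below is an approximation I’m assuming here, not GitLab’s exact slug algorithm (the real one also appends a hash suffix when it truncates a long name), so treat it with caution:

```shell
# Hypothetical fallback: compute an environment slug ourselves.
# Approximates GitLab's slugify: lowercase, non-alphanumerics become '-',
# capped at 24 chars, trailing dashes stripped. NOT byte-for-byte identical
# to GitLab's algorithm (which adds a hash suffix on truncation).
slugify() {
  echo "$1" | tr '[:upper:]' '[:lower:]' \
            | sed -e 's/[^a-z0-9]/-/g' \
            | cut -c1-24 \
            | sed -e 's/-*$//'
}

PROJECT_NAME=api  # assumed, matching the jobs above
ENV_SLUG=$(slugify "staging/$PROJECT_NAME")
export KUBE_NAMESPACE="platform-20820261-$ENV_SLUG"
export KUBE_SERVICE_ACCOUNT="platform-20820261-$ENV_SLUG-service-account"
```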
Another thing: it would be much easier to reuse some of this if it didn’t rely on hidden logic/magic that only kicks in under certain conditions… for example, I have no access to the functions that create those variables.
I’m thinking I’ll have to just use environment names prefixed with `review/` to get around these limitations, e.g.:
- review/ci-some-branch
- review/staging
- review/production
But I’m not sure if I can scope environment variables to names like `review/ci-*` and `review/staging`.
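For what it’s worth, environment-scoped variables are matched against environment names with glob-style wildcards, so a scope like `review/ci-*` should match `review/ci-some-branch` — though I’d verify that against the GitLab version in use. The actual matching happens server-side; the sketch below only mimics it with shell `case` globbing to illustrate which names each scope would catch:

```shell
# Illustrative only: approximate GitLab's environment_scope wildcard
# matching using shell glob patterns in a 'case' statement.
matches_scope() {
  local scope=$1 env=$2
  case "$env" in
    $scope) echo "match" ;;
    *)      echo "no match" ;;
  esac
}

matches_scope 'review/ci-*' 'review/ci-some-branch'   # match
matches_scope 'review/*'    'review/staging'          # match
matches_scope 'review/*'    'staging/api'             # no match
```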