Describe your question in as much detail as possible:
I am using a GitLab repo with the build and deploy jobs from Auto DevOps to deploy an Angular app. Below are the parts I have taken from the Auto DevOps jobs:
[ stages defined above ]
```yaml
.auto-deploy:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v2.0.0"
  dependencies: []

[ . . . ]

include:
  - template: Jobs/Build.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab-foss/blob/master/lib/gitlab/ci/templates/Jobs/Build.gitlab-ci.yml

review:
  extends: .auto-deploy
  stage: review
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy persist_environment_url
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: http://$CI_PROJECT_ID-$CI_ENVIRONMENT_SLUG.$KUBE_INGRESS_BASE_DOMAIN
    auto_stop_in: 8 hrs
    on_stop: stop_review
  artifacts:
    paths: [environment_url.txt, tiller.log]
    when: always
  rules:
    - if: '$CI_KUBERNETES_ACTIVE == null || $CI_KUBERNETES_ACTIVE == ""'
      when: never
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: never
    - if: '$REVIEW_DISABLED'
      when: never
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'

stop_review:
  extends: .auto-deploy
  stage: cleanup
  variables:
    GIT_STRATEGY: none
  script:
    - auto-deploy initialize_tiller
    - auto-deploy delete
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  allow_failure: true
  rules:
    - if: '$CI_KUBERNETES_ACTIVE == null || $CI_KUBERNETES_ACTIVE == ""'
      when: never
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: never
    - if: '$REVIEW_DISABLED'
      when: never
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'
      when: manual

# Staging deploys are disabled by default since
# continuous deployment to production is enabled by default.
# To automatically deploy to staging and
# only manually promote to production, enable this job by setting
# STAGING_ENABLED.
staging:
  extends: .auto-deploy
  stage: staging
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
  environment:
    name: staging
    url: http://$CI_PROJECT_PATH_SLUG-staging.$KUBE_INGRESS_BASE_DOMAIN
  rules:
    - if: '$CI_KUBERNETES_ACTIVE == null || $CI_KUBERNETES_ACTIVE == ""'
      when: never
    - if: '$CI_COMMIT_BRANCH != "master"'
      when: never
    - if: '$STAGING_ENABLED'

.production: &production_template
  extends: .auto-deploy
  stage: production
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy delete rollout
    - auto-deploy persist_environment_url
  environment:
    name: production
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  artifacts:
    paths: [environment_url.txt, tiller.log]
    when: always

production:
  <<: *production_template
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
```
This configuration works just fine unless I set $POSTGRES_ENABLED to true.
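For reference, this is roughly how I enable it; the values below are placeholders for the documented POSTGRES_* Auto DevOps variables, which I actually set as project-level CI/CD variables rather than in the file:

```yaml
# Placeholder values only; in reality these are project-level CI/CD variables,
# not committed to the repository.
variables:
  POSTGRES_ENABLED: "true"
  POSTGRES_USER: user
  POSTGRES_PASSWORD: testing-password
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG
```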
- What are you seeing, and how does that differ from what you expect to see?
- Consider including screenshots, error messages, and/or other helpful visuals
Here is what happens in the review stage when I enable Postgres:
```plaintext
auto-deploy deploy
Error: release: not found
No PostgreSQL helm values file found at '.gitlab/auto-deploy-postgres-values.yaml'
history.go:52: [debug] getting history for release review-helm-verbo-v76wki-postgresql
Release "review-helm-verbo-v76wki-postgresql" does not exist. Installing it now.
install.go:159: [debug] Original chart version: "8.2.1"
install.go:176: [debug] CHART PATH: /root/.cache/helm/repository/postgresql-8.2.1.tgz
client.go:108: [debug] creating 4 resource(s)
wait.go:53: [debug] beginning wait for 4 resources with timeout of 5m0s
wait.go:334: [debug] StatefulSet is not ready: project-21457043-review-helm-verbo-v76wki/review-helm-verbo-v76wki-postgresql. 0 out of 1 expected pods are ready
wait.go:334: [debug] StatefulSet is not ready: project-21457043-review-helm-verbo-v76wki/review-helm-verbo-v76wki-postgresql. 0 out of 1 expected pods are ready
[ . . . ]
install.go:382: [debug] Install failed and atomic is set, uninstalling release
```
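The "No PostgreSQL helm values file found" line is accurate: I have not created `.gitlab/auto-deploy-postgres-values.yaml`. My understanding (assumed, not verified) is that it would just hold overrides for the bundled PostgreSQL chart, along these lines:

```yaml
# .gitlab/auto-deploy-postgres-values.yaml (hypothetical; I do not have this file yet)
# Key names are assumed from the bitnami/postgresql chart the log shows being
# installed (postgresql-8.2.1.tgz); treat this as a sketch, not a working config.
persistence:
  enabled: false   # e.g. to rule out PVC provisioning while testing
resources:
  requests:
    cpu: 100m
    memory: 256Mi
```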
- What version are you on?
- What troubleshooting steps have you already taken? Can you link to any docs or other resources so we know where you have been?
I've read through the Customizing Auto DevOps page, some Helm documentation (because at one point I tried just making a new Helm chart), and the auto-deploy script in the auto-deploy-image repo.
I know that my pipeline succeeds and the review app deploys when I disable Postgres, and that has been my workaround for the past month, but at this point I want to enable Postgres.