Bad error messages

Describe your question in as much detail as possible:

  • What are you seeing, and how does that differ from what you expect to see?
  • Consider including screenshots, error messages, and/or other helpful visuals
  • What version are you on? Are you using self-managed or GitLab.com?
    • GitLab (Hint: /help): GitLab.com (SaaS)
    • Runner (Hint: /admin/runners): gitlab-runner 13.5.0-rc2 (71c90c86), shared runner (see job log below)

I’m getting a very vague error message while trying to follow the quick start guide, which doesn’t feel very detailed once you’re in the thick of it. I moved to GitLab for this feature, but if I have to spin up my own CI/CD I might move back to GitHub, since my other personal projects are there. =/

It might be easier for me to just use Docker and Docker Compose, but I wanted to give this a try since the marketing makes it seem like it should be very easy.

Basically, I’m running a simple Rails API server using Auto DevOps on Google Kubernetes Engine (GKE), and I have no idea what this error is trying to tell me: “Error: release staging failed: timed out waiting for the condition”

  1. What timed out?
  2. What is “the condition”?
  3. Why doesn’t the AUTO_DEVOPS_DEPLOY_DEBUG environment variable give more information on this error? (My best guess, and what I plan to check next, is just below.)
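
From what I can tell from the Helm docs, my best guess at questions 1 and 2 is: the Auto DevOps deploy job runs helm upgrade --install with --wait, which blocks until all of the release’s pods report Ready, so “the condition” is that readiness check, and the wait itself is what timed out. If that’s right, the pods from the failed release should show why they never became Ready. A sketch of what I plan to run next (the namespace is taken from the job log below; this assumes kubectl is pointed at the GKE cluster, and the release=staging label is my guess at how the chart labels its pods, which kubectl get pods --show-labels would confirm):

    # List the pods Helm was waiting on; a Pending or CrashLoopBackOff status here is the real culprit
    kubectl get pods -n guildhall-16433280-staging

    # The Events section at the bottom usually names the failing readiness probe,
    # image pull error, or unschedulable resource
    kubectl describe pods -n guildhall-16433280-staging -l release=staging

Full job log: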
Running with gitlab-runner 13.5.0-rc2 (71c90c86)
  on docker-auto-scale 72989761
Preparing the "docker+machine" executor
00:17
Using Docker executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.6 ...
Authenticating with credentials from job payload (GitLab Registry)
Pulling docker image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.6 ...
Using docker image sha256:c68a85f8a3242259a0d12208f638c78870f5cdffa16ea26ccbac048c2224113b for registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.6 with digest registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image@sha256:047ec4975a8b13d9a3e182012f98a1a7e296414dc62d7773f8ee85f29c928179 ...
Preparing environment
00:02
Running on runner-72989761-project-16433280-concurrent-0 via runner-72989761-srm-1604272243-2e6a51a8...
Getting source from Git repository
00:01
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/guild-labs/guild-apps/guildhall/.git/
Created fresh repository.
Checking out a3bf8b5b as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
05:14
$ auto-deploy check_kube_domain
$ auto-deploy download_chart
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://gitlab-org.gitlab.io/cluster-integration/helm-stable-archive 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
"bitnami" has been added to your repositories
"stable-archive" has been added to your repositories
Download skipped. Using the default chart included in auto-deploy-image...
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
	Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "stable-archive" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete.
Saving 1 charts
Downloading postgresql from repo https://gitlab-org.gitlab.io/cluster-integration/helm-stable-archive
Deleting outdated charts
$ auto-deploy ensure_namespace
NAME                         STATUS   AGE
guildhall-16433280-staging   Active   76m
$ auto-deploy initialize_tiller
Checking Tiller...
Tiller is listening on localhost:44134
Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
[debug] SERVER: "localhost:44134"
Kubernetes: &version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.13-gke.401", GitCommit:"eb94c181eea5290e9da1238db02cfef263542f5f", GitTreeState:"clean", BuildDate:"2020-09-09T00:57:35Z", GoVersion:"go1.13.9b4", Compiler:"gc", Platform:"linux/amd64"}
Server: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
$ auto-deploy create_secret
Create secret...
secret "gitlab-registry-guild-labs-guild-apps-guildhall" deleted
secret/gitlab-registry-guild-labs-guild-apps-guildhall replaced
$ auto-deploy deploy
Error: release: "staging" not found
[debug] SERVER: "localhost:44134"
[debug] Fetched bitnami/postgresql to /root/.helm/cache/archive/postgresql-8.2.1.tgz
REVISION: 5
RELEASED: Sun Nov  1 23:11:53 2020
CHART: postgresql-8.2.1
USER-SUPPLIED VALUES:
fullnameOverride: staging-postgresql
image:
  tag: 9.6.16
postgresqlDatabase: staging
postgresqlPassword: testing-password
postgresqlUsername: user
COMPUTED VALUES:
extraEnv: []
fullnameOverride: staging-postgresql
global:
  postgresql: {}
image:
  debug: false
  pullPolicy: IfNotPresent
  registry: docker.io
  repository: bitnami/postgresql
  tag: 9.6.16
ldap:
  baseDN: ""
  bind_password: null
  bindDN: ""
  enabled: false
  port: ""
  prefix: ""
  scheme: ""
  search_attr: ""
  search_filter: ""
  server: ""
  suffix: ""
  tls: false
  url: ""
livenessProbe:
  enabled: true
  failureThreshold: 6
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
master:
  affinity: {}
  annotations: {}
  extraInitContainers: ""
  extraVolumeMounts: []
  extraVolumes: []
  labels: {}
  nodeSelector: {}
  podAnnotations: {}
  podLabels: {}
  priorityClassName: ""
  tolerations: []
metrics:
  enabled: false
  image:
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: bitnami/postgres-exporter
    tag: 0.8.0-debian-10-r4
  livenessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  prometheusRule:
    additionalLabels: {}
    enabled: false
    namespace: ""
    rules: []
  readinessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  securityContext:
    enabled: false
    runAsUser: 1001
  service:
    annotations:
      prometheus.io/port: "9187"
      prometheus.io/scrape: "true"
    loadBalancerIP: null
    type: ClusterIP
  serviceMonitor:
    additionalLabels: {}
    enabled: false
networkPolicy:
  allowExternal: true
  enabled: false
persistence:
  accessModes:
  - ReadWriteOnce
  annotations: {}
  enabled: true
  mountPath: /bitnami/postgresql
  size: 8Gi
  subPath: ""
postgresqlDataDir: /bitnami/postgresql/data
postgresqlDatabase: staging
postgresqlPassword: testing-password
postgresqlUsername: user
readinessProbe:
  enabled: true
  failureThreshold: 6
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
replication:
  applicationName: my_application
  enabled: false
  numSynchronousReplicas: 0
  password: repl_password
  slaveReplicas: 1
  synchronousCommit: "off"
  user: repl_user
resources:
  requests:
    cpu: 250m
    memory: 256Mi
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001
service:
  annotations: {}
  port: 5432
  type: ClusterIP
serviceAccount:
  enabled: false
shmVolume:
  enabled: true
slave:
  affinity: {}
  annotations: {}
  extraInitContainers: ""
  extraVolumeMounts: []
  extraVolumes: []
  labels: {}
  nodeSelector: {}
  podAnnotations: {}
  podLabels: {}
  priorityClassName: ""
  tolerations: []
updateStrategy:
  type: RollingUpdate
volumePermissions:
  enabled: true
  image:
    pullPolicy: Always
    registry: docker.io
    repository: bitnami/minideb
    tag: stretch
  securityContext:
    runAsUser: 0
HOOKS:
MANIFEST:
---
# Source: postgresql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: staging-postgresql
  labels:
    app: postgresql
    chart: postgresql-8.2.1
    release: "staging-postgresql"
    heritage: "Tiller"
type: Opaque
data:
  postgresql-password: "dGVzdGluZy1wYXNzd29yZA=="
---
# Source: postgresql/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: staging-postgresql-headless
  labels:
    app: postgresql
    chart: postgresql-8.2.1
    release: "staging-postgresql"
    heritage: "Tiller"
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: tcp-postgresql
      port: 5432
      targetPort: tcp-postgresql
  selector:
    app: postgresql
    release: "staging-postgresql"
---
# Source: postgresql/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: staging-postgresql
  labels:
    app: postgresql
    chart: postgresql-8.2.1
    release: "staging-postgresql"
    heritage: "Tiller"
spec:
  type: ClusterIP
  ports:
    - name: tcp-postgresql
      port: 5432
      targetPort: tcp-postgresql
  selector:
    app: postgresql
    release: "staging-postgresql"
    role: master
---
# Source: postgresql/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: staging-postgresql
  labels:
    app: postgresql
    chart: postgresql-8.2.1
    release: "staging-postgresql"
    heritage: "Tiller"
spec:
  serviceName: staging-postgresql-headless
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: postgresql
      release: "staging-postgresql"
      role: master
  template:
    metadata:
      name: staging-postgresql
      labels:
        app: postgresql
        chart: postgresql-8.2.1
        release: "staging-postgresql"
        heritage: "Tiller"
        role: master
    spec:      
      securityContext:
        fsGroup: 1001
      initContainers:
        - name: init-chmod-data
          image: docker.io/bitnami/minideb:stretch
          imagePullPolicy: "Always"
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            
          command:
            - /bin/sh
            - -c
            - |
              mkdir -p /bitnami/postgresql/data
              chmod 700 /bitnami/postgresql/data
              find /bitnami/postgresql -mindepth 0 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" | \
                xargs chown -R 1001:1001
              chmod -R 777 /dev/shm
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: data
              mountPath: /bitnami/postgresql
              subPath: 
            - name: dshm
              mountPath: /dev/shm
      containers:
        - name: staging-postgresql
          image: docker.io/bitnami/postgresql:9.6.16
          imagePullPolicy: "IfNotPresent"
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            
          securityContext:
            runAsUser: 1001
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: POSTGRESQL_PORT_NUMBER
              value: "5432"
            - name: POSTGRESQL_VOLUME_DIR
              value: "/bitnami/postgresql"
            - name: PGDATA
              value: "/bitnami/postgresql/data"
            - name: POSTGRES_USER
              value: "user"
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: staging-postgresql
                  key: postgresql-password
            - name: POSTGRES_DB
              value: "staging"
            - name: POSTGRESQL_ENABLE_LDAP
              value: "no"
          ports:
            - name: tcp-postgresql
              containerPort: 5432
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - exec pg_isready -U "user" -d "staging" -h 127.0.0.1 -p 5432
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - -e
                - |
                  exec pg_isready -U "user" -d "staging" -h 127.0.0.1 -p 5432
                  [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - name: dshm
              mountPath: /dev/shm
            - name: data
              mountPath: /bitnami/postgresql
              subPath: 
      volumes:
        - name: dshm
          emptyDir:
            medium: Memory
            sizeLimit: 1Gi
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"
Release "staging-postgresql" has been upgraded.
LAST DEPLOYED: Sun Nov  1 23:11:53 2020
NAMESPACE: guildhall-16433280-staging
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME                  READY  STATUS   RESTARTS  AGE
staging-postgresql-0  1/1    Running  0         39m
==> v1/Secret
NAME                TYPE    DATA  AGE
staging-postgresql  Opaque  1     39m
==> v1/Service
NAME                         TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE
staging-postgresql           ClusterIP  10.1.0.141  <none>       5432/TCP  39m
staging-postgresql-headless  ClusterIP  None        <none>       5432/TCP  39m
==> v1/StatefulSet
NAME                READY  AGE
staging-postgresql  1/1    39m
NOTES:
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:
    staging-postgresql.guildhall-16433280-staging.svc.cluster.local - Read/Write connection
To get the password for "user" run:
    export POSTGRES_PASSWORD=$(kubectl get secret --namespace guildhall-16433280-staging staging-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database run the following command:
    kubectl run staging-postgresql-client --rm --tty -i --restart='Never' --namespace guildhall-16433280-staging --image docker.io/bitnami/postgresql:9.6.16 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host staging-postgresql -U user -d staging -p 5432
To connect to your database from outside the cluster execute the following commands:
    kubectl port-forward --namespace guildhall-16433280-staging svc/staging-postgresql 5432:5432 &
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U user -d staging -p 5432
WARNING: Rolling tag detected (bitnami/postgresql:9.6.16), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/
Validating chart version...
Fetching the previously deployed chart version... 
Fetching the deploying chart version... v1.0.6
secret "staging-secret" deleted
secret/staging-secret replaced
No helm values file found at '.gitlab/auto-deploy-values.yaml'
Deploying new stable release...
[debug] SERVER: "localhost:44134"
Release "staging" does not exist. Installing it now.
[debug] CHART PATH: /builds/guild-labs/guild-apps/guildhall/chart
INSTALL FAILED
PURGING CHART
Error: release staging failed: timed out waiting for the condition
Successfully purged a chart!
Error: release staging failed: timed out waiting for the condition
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
  • Add the CI configuration from .gitlab-ci.yml and other configuration if relevant (e.g. docker-compose.yml)

I’m not using Docker or a custom .gitlab-ci.yml; I’m just using Auto DevOps.
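
If I do end up needing one, my understanding from the docs is that a minimal .gitlab-ci.yml can keep Auto DevOps and just pin variables. A sketch (Auto-DevOps.gitlab-ci.yml is the documented template name; the variable values are ones I would set, not anything pulled from my current project):

    include:
      - template: Auto-DevOps.gitlab-ci.yml

    variables:
      AUTO_DEVOPS_DEPLOY_DEBUG: "1"   # the same debug flag I enabled in the UI
      POSTGRES_ENABLED: "true"        # Auto DevOps provisions the PostgreSQL seen in the log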

  • What troubleshooting steps have you already taken? Can you link to any docs or other resources so we know where you have been?

I’ve tried searching Google and the GitLab docs for any information. I tried adding the AUTO_DEVOPS_DEPLOY_DEBUG variable to get more information, but alas, here I am.
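
One more thing I plan to try: since the job purges the failed release (the log shows “PURGING CHART” right after the install fails), the pods themselves may already be gone, but the cluster keeps events around for a while, so this might still show what was failing during those ~5 minutes (same namespace as above):

    # Most recent events last; look for failed probes, failed pulls, or scheduling errors
    kubectl get events -n guildhall-16433280-staging --sort-by=.lastTimestamp | tail -n 30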

You can check the output of the created container in Kubernetes.
For example, in GKE: Cluster → Services → gitlab-auto-deploy → Logs.
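
From the command line, something like this should be roughly equivalent (the release=staging label selector is an assumption; run kubectl get pods --show-labels in the namespace first to see the real labels, and note that if the purge already deleted the pods, the events query above is the fallback):

    # Tail recent logs from every container in the release's pods
    kubectl logs -n guildhall-16433280-staging -l release=staging --all-containers --tail=100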