CI/CD with Kubernetes 1.16: Production job fails, possibly due to Kubernetes version?


We’re using Gitlab CI/CD with a connected Kubernetes cluster. Until recently we ran on Azure, but have now decided to switch over to digitalocean. When I run our build / deploy pipeline on our fresh cluster, I’m getting this error in the “production” job:

$ auto-deploy deploy
secret/production-secret replaced
Deploying new release...
Release "production" does not exist. Installing it now.
Error: validation failed: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
ERROR: Job failed: exit code 1

After doing some googling, I found this release announcement for Kubernetes 1.16, which states that the Deployment resource has been moved up from extensions/v1beta1 to (eventually) apps/v1:

The Kubernetes version used on digitalocean is indeed 1.16.2. I do not recall the version we used on Azure, but judging from the article’s date, the 1.16 release is somewhat recent (September 2019).

My question is this: Am I right in assuming that this issue is caused by Gitlab’s CI/CD using a pre-1.16 notation to automatically create Deployments on Kubernetes clusters? If so, how can I adapt the deployment script to use the apps/v1 scope?
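For reference, the manifest-side fix itself is small: change the apiVersion and make sure spec.selector is set, since apps/v1 made it mandatory. A minimal sketch of a Deployment that validates on Kubernetes 1.16 (the names, labels, and image below are placeholders, not what Auto DevOps actually generates):

```yaml
apiVersion: apps/v1            # was: extensions/v1beta1
kind: Deployment
metadata:
  name: production
spec:
  replicas: 1
  selector:                    # mandatory in apps/v1
    matchLabels:
      app: production
  template:
    metadata:
      labels:
        app: production        # must match spec.selector.matchLabels
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest
```

With Auto DevOps, though, these manifests come out of Gitlab’s Helm chart rather than your own repository, so it’s the chart itself that would have to be updated.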

Update for anyone who is interested:

To test whether the problem was really caused by the different Kubernetes versions used, I created a new cluster on digitalocean using the previous Kubernetes version (1.15) and connected that cluster to Gitlab CI/CD. The build / deploy pipeline works fine, so I guess the hypothesis was correct.

Now, this is certainly not a perfect solution - I’d like to be able to use current software on my servers, but for now it’s ok.

After some digging, it seems that Gitlab’s own auto-deploy-image uses Kubernetes 1.13.12. This is defined at the top of its .gitlab-ci.yml. What I don’t know is if one could just change the version up there, rebuild the auto-deploy-image and use it instead of Gitlab’s version, or if there are further version incompatibility issues that would pop up if one did that.
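If anyone wants to experiment with that, Auto DevOps jobs can be overridden from a project’s own .gitlab-ci.yml after including the template. A hypothetical sketch (the registry path and tag stand in for a self-built image; I haven’t verified that a rebuilt image works):

```yaml
# .gitlab-ci.yml
include:
  - template: Auto-DevOps.gitlab-ci.yml

# Override the production job to run on a self-built
# auto-deploy-image (registry path and tag are hypothetical).
production:
  image: registry.example.com/auto-deploy-image:kube-1.16
```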


Hi Christopher,

Have you ever resolved this issue? It appears Digital Ocean Kubernetes has upgraded well beyond the standard version offered in Gitlab CI/CD. The minimum versions currently available are 1.14.8, 1.15.5, and 1.16.2, and all three Kubernetes versions failed with exactly the same error (a timeout).

==> v1beta1/Ingress
production-auto-deploy 3s
NOTES:
Application should be accessible at:
Waiting for deployment "production" rollout to finish: 0 of 1 updated replicas are available...
Pulling docker image gitlab/gitlab-runner-helper:x86_64-ec299e72 ...

Hi kktam,

I obviously don’t know your application, but since your deployment is failing with 1.14 and 1.15 as well, are you sure this is the same issue? I’m asking because downgrading our digitalocean cluster to Kubernetes 1.15 solved our problem for the time being.

Anyway, I opened an issue regarding my original post, and it turns out the Gitlab folks are working on Kubernetes 1.16 support for Auto DevOps:

I hope this helps.


Hi Christopher,

Thanks, the information is very helpful. Do you know if Gitlab has any timeline for Kubernetes 1.16 support in Auto DevOps?


The second linked issue points to 12.7, which will be the next feature release, on the 22nd. You can subscribe to issue notifications with the bell in the bottom-right corner to be notified about progress and whether this lands in 12.7 or gets rescheduled.