CI/CD pipeline stops working: status code 426

Two days ago our CI/CD pipeline stopped working without any changes on our end.
We use shared runners to deploy our application on an EKS cluster.
To make this work we deployed a GitLab agent on our cluster.

When the CICD tries to run

kubectl apply -f airflow/secrets_and_configs/airflow_dags_image_secret.yaml

we get this error, and I can't figure out what's happening:

error: error validating "airflow/secrets_and_configs/airflow_dags_image_secret.yaml": error validating data: the server responded with the status code 426 but did not return more information; if you choose to ignore these errors, turn validation off with --validate=false

airflow/secrets_and_configs/airflow_dags_image_secret.yaml is a YAML definition file that creates a resource in our Kubernetes cluster. Running the following command directly from the command line works perfectly well:

kubectl apply -f airflow/secrets_and_configs/airflow_dags_image_secret.yaml
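For what it's worth, HTTP status 426 is "Upgrade Required", which often points at a protocol or version mismatch between the kubectl client in the CI image and the cluster or agent. A quick sanity check is to compare the client and server minor versions; the sketch below uses made-up placeholder version strings (in a real pipeline you would read them from `kubectl version -o json`):

```shell
#!/bin/sh
# Hypothetical version strings for illustration; in a real pipeline you
# would read these from `kubectl version -o json` (clientVersion/serverVersion).
client="v1.28.2"
server="v1.24.10"

# kubectl officially supports a skew of at most one minor version
# against the API server it talks to.
cmin=$(echo "$client" | cut -d. -f2)
smin=$(echo "$server" | cut -d. -f2)
skew=$((cmin - smin))

if [ "${skew#-}" -le 1 ]; then
  echo "client/server skew OK ($skew minor versions)"
else
  echo "client/server skew too large ($skew minor versions)"
fi
# prints: client/server skew too large (4 minor versions)
```

If the CI image pulls a much newer kubectl than your cluster runs, that skew alone can explain why the same manifest applies fine from a local machine but fails from the pipeline.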

I tried upgrading the gitlab-agent but it didn’t change anything…
We are using

I have the same problem. Did you find a solution to this?


We chose to deploy a GitLab runner inside our Kubernetes cluster instead. It was what we were planning to do anyway, so we rushed it to fix this issue.

Ok thanks for the reply.

I have the same problem. I upgraded the Kubernetes agent and deployed a GitLab runner.

Maybe there is another solution?

Hi, I have the same problem too, also since roughly 14 days ago. We did not change anything on our Kubernetes cluster (only added MetalLB as a load balancer).

The strange thing is, if I download the generated artifacts from the GitLab CI job and apply the .yaml files via a local kubectl console, they work flawlessly. So maybe it has something to do with permissions?

Hi, same issue here for the last 2 weeks or so.

Can anyone from GitLab come here and give us some input?

Or anyone else who fixed this? (I'm not talking about using a k8s-based runner here; that is not a solution, it's a workaround.)

[EDIT] Applying --validate=false works as a workaround, but I'd love something more robust.

We have applied the --validate=false option to the kubectl apply commands in the meantime. This turns off client-side validation (the deployments are still validated on the server). It works for now, but is far from ideal. So yes, it would be nice if someone from GitLab could give insights into why this behavior is occurring now.
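In a .gitlab-ci.yml this workaround might look roughly like the following; the job name and agent context are illustrative, only the manifest path comes from the thread:

```yaml
# Illustrative deploy job: disable client-side validation only.
# Server-side validation still runs, so invalid manifests are still rejected.
deploy:
  stage: deploy
  script:
    # Hypothetical agent context name; GitLab agent contexts follow
    # the "path/to/agent/project:agent-name" pattern.
    - kubectl config use-context my-group/my-project:my-agent
    - kubectl apply --validate=false -f airflow/secrets_and_configs/airflow_dags_image_secret.yaml
```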


Same problem here… I will use --validate=false in the meantime.

Hello. I was having the same problem earlier today, and this fixed it for me. Apparently it was a kubectl version problem in my case. I use dtzar/helm-kubectl, so in my deploy job I changed it to run an older version, like so:

image: dtzar/helm-kubectl:3.10.2
stage: deploy

Hope this helps.
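To make the pin above concrete, a minimal deploy job might look like this sketch (job name is illustrative; the image tag and manifest path come from the thread):

```yaml
deploy:
  # Pin the image tag instead of relying on :latest, so the bundled
  # kubectl stays within one minor version of the cluster's API server.
  image: dtzar/helm-kubectl:3.10.2
  stage: deploy
  script:
    - kubectl apply -f airflow/secrets_and_configs/airflow_dags_image_secret.yaml
```

Pinning trades automatic updates for reproducibility: the job behaves the same until you bump the tag deliberately.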