AutoDevOps deploying old image

Issue

I make a source change, commit it, and AutoDevOps triggers a pipeline.
In the pipeline I can see the image that is built and published:

Successfully built b70bb48cce79
Successfully tagged registry.gitlab.xyz/k8/netcore5/master:a5455aac5177f98ef418ec15b84f24f6a42714f3
Successfully tagged registry.gitlab.xyz/k8/netcore5/master:latest

Then, when I go and look at the staging job, it's taking a long time. So I check the cluster.

netcore5 % kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
staging-postgresql-0     1/1     Running   0          22h
staging-8cf8cccd-g5gf6   0/1     Running   7          9m8s
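
Seven restarts in nine minutes means something is killing the container. The usual way to see why (pod name taken from the output above):

kubectl describe pod staging-8cf8cccd-g5gf6        # events show probe failures, container kills, etc.
kubectl logs staging-8cf8cccd-g5gf6 --previous     # output from the last restarted container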

When I get the pod spec:

kubectl get pod staging-8cf8cccd-g5gf6 -o yaml
...
   image: registry.gitlab.xyz/k8/netcore5/master:771e5d21a5e0c6a6d5595fa9dc583cfaeaa0bf14

This is a different image tag. When I check in GitLab, it corresponds to an image deployed over 24 hours ago. The old tag is what the ReplicaSet uses, and the Deployment too.
I tried deleting the deployment and doing another build once everything was clean, and GitLab simply rebuilt the image with a new tag, then created a deployment with the old tag again.
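
Reading the image straight off the objects confirms it (the deployment name staging is inferred from the pod name above):

kubectl get deployment staging -o jsonpath='{.spec.template.spec.containers[0].image}'
kubectl get replicaset -o wide    # the IMAGES column shows the tag each ReplicaSet runs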

Help!

Oh FFS.

Deploying new stable release...
UPGRADE FAILED
Error: "staging" has no deployed releases

This error here seems to come from the auto-deploy script calling helm upgrade --install, and it indicates helm can't upgrade the release: the previous attempt never reached the deployed state. It has no deployed releases because the pod is failing its liveness check, the very check I added and am trying to deploy.
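
The release state itself is visible from helm. A quick sketch, assuming Helm 3 (where the release record lives in the project's namespace); <project-namespace> stands in for whatever namespace AutoDevOps deploys into:

helm history staging -n <project-namespace>    # should list the failed revisions, none marked deployed
helm status staging -n <project-namespace>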

So, how do I fix this? I need to manually clean the failed release out of helm so the --install succeeds.
Somehow.
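
In principle, deleting just the failed release should let the next upgrade --install start clean. Again assuming Helm 3 (on the old Tiller-based Helm 2 the equivalent was helm delete --purge staging):

helm uninstall staging -n <project-namespace>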

oh. ok.

kubectl delete namespace project-id-staging
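
Heavy-handed, but it clears the failed release along with everything else. Worth noting: this wipes the whole namespace, including the staging-postgresql pod and its PersistentVolumeClaim, so the staging database data most likely goes with it. GitLab should recreate the namespace on the next deploy, at which point the --install path succeeds.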