Permission denied to delete Kubernetes namespace

Currently I cannot stop my review apps from the CI pipeline job. It’s a GitLab-managed Kubernetes cluster in Google Cloud. I don’t know what changed; I have worked on the pipeline, so it may be my fault, but I haven’t changed anything related to the stop process. The job is simple: so far it only deletes the whole namespace.

review_stop:
  stage: tested
  variables:
    GIT_STRATEGY: none
  script:
    - kubectl delete namespace $KUBE_NAMESPACE
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  only: [branches]
  except: [master]
  when: manual

Now I only get this error:

$ kubectl delete namespace $KUBE_NAMESPACE
Error from server (Forbidden): namespaces "st8ment-tv-20103579-review-review-rcmeyv" is forbidden: User "system:serviceaccount:st8ment-tv-20103579-review-review-rcmeyv:st8ment-tv-20103579-review-review-rcmeyv-service-account" cannot delete resource "namespaces" in API group "" in the namespace "st8ment-tv-20103579-review-review-rcmeyv"
ERROR: Job failed: exit code 1
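As a first diagnostic step, `kubectl auth can-i` can confirm what the job’s service account is actually allowed to do. A sketch, using the namespace and service account names taken from the error message above:

```shell
# Ask the API server whether the review-app service account may delete
# its own namespace (names copied from the Forbidden error above).
kubectl auth can-i delete namespaces \
  --namespace st8ment-tv-20103579-review-review-rcmeyv \
  --as system:serviceaccount:st8ment-tv-20103579-review-review-rcmeyv:st8ment-tv-20103579-review-review-rcmeyv-service-account
```

This should print `no`, matching the Forbidden error, and helps verify whether a permission change fixed the problem.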

The branch name “review” is a special review app in my case, so I had a “Protected Branches” configuration for it. I thought the two might be related and have already removed that configuration, without success.

Was this cluster created by GitLab, or did you create it manually or with Terraform?
Do you have a gitlab-admin service account inside kube-system namespace?

The cluster was created by GitLab. A service account with that name doesn’t exist; should it?

$ kubectl get serviceaccounts --namespace=kube-system
NAME                                 SECRETS   AGE
attachdetach-controller              1         18d
certificate-controller               1         18d
cloud-provider                       1         18d
clusterrole-aggregation-controller   1         18d
cronjob-controller                   1         18d
daemon-set-controller                1         18d
default                              1         18d
deployment-controller                1         18d
disruption-controller                1         18d
endpoint-controller                  1         18d
event-exporter-sa                    1         18d
expand-controller                    1         18d
fluentd-gcp                          1         18d
fluentd-gcp-scaler                   1         18d
generic-garbage-collector            1         18d
heapster                             1         18d
horizontal-pod-autoscaler            1         18d
job-controller                       1         18d
kube-dns                             1         18d
kube-dns-autoscaler                  1         18d
metadata-agent                       1         18d
metadata-proxy                       1         18d
metrics-server                       1         18d
namespace-controller                 1         18d
node-controller                      1         18d
persistent-volume-binder             1         18d
pod-garbage-collector                1         18d
prometheus-to-sd                     1         18d
pv-protection-controller             1         18d
pvc-protection-controller            1         18d
replicaset-controller                1         18d
replication-controller               1         18d
resourcequota-controller             1         18d
service-account-controller           1         18d
service-controller                   1         18d
statefulset-controller               1         18d
ttl-controller                       1         18d

I believe so.
This is how GitLab gets permission to administer cluster specifics after creation. I’m not sure why it wasn’t created in your case, or whether it works differently when the cluster is created directly by GitLab. In my case I created the cluster myself, pointed GitLab to it, and created a service account for it.

You could create one with the following manifest and apply it with kubectl:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab-admin
    namespace: kube-system
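If you go this route, you would apply the manifest and then extract the service account’s token for GitLab’s cluster integration. A sketch, assuming the manifest is saved as gitlab-admin.yaml and that the cluster still auto-creates a token secret for the service account (newer Kubernetes versions no longer do this):

```shell
# Create the ServiceAccount and ClusterRoleBinding defined above.
kubectl apply -f gitlab-admin.yaml

# Look up the name of the service account's token secret, then print the
# decoded token, which GitLab asks for when connecting to the cluster.
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get serviceaccount gitlab-admin -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode
```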

I’m not sure if that’s the problem. On this page the service account named “gitlab-admin” appears only for RBAC clusters: https://docs.gitlab.com/ee/user/project/clusters/add_remove_clusters.html#rbac-cluster-resources

I haven’t mentioned it yet, but RBAC is disabled* because there was an issue with installing Redis. (I wanted to investigate that further later.) But it already worked with this setup.

(*) https://kubernetes.io/docs/reference/access-authn-authz/rbac/#permissive-rbac-permissions
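For reference, the linked page achieves “permissive RBAC” (on a cluster where RBAC is enabled) by binding cluster-admin very broadly. Note that this effectively disables RBAC isolation and is only meant for transitioning:

```shell
# From the linked Kubernetes docs: grant cluster-admin to all service
# accounts (and the listed users), making RBAC effectively permissive.
kubectl create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --user=admin \
  --user=kubelet \
  --group=system:serviceaccounts
```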

Hi @JT2809,

Have you tried giving the user "system:serviceaccount:st8ment-tv-20103579-review-review-rcmeyv:st8ment-tv-20103579-review-review-rcmeyv-service-account" the proper permissions?
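One hedged way to do that: deleting a namespace is authorized against the namespace object itself, so a namespaced Role granting `delete` on `namespaces`, bound to the job’s service account, should be enough. A sketch (the `namespace-deleter` name is made up; the namespace and account names come from the error message):

```shell
# Create a Role in the review namespace that allows deleting namespaces.
kubectl create role namespace-deleter \
  --namespace st8ment-tv-20103579-review-review-rcmeyv \
  --verb=delete --resource=namespaces

# Bind it to the service account the CI job runs as.
kubectl create rolebinding namespace-deleter \
  --namespace st8ment-tv-20103579-review-review-rcmeyv \
  --role=namespace-deleter \
  --serviceaccount=st8ment-tv-20103579-review-review-rcmeyv:st8ment-tv-20103579-review-review-rcmeyv-service-account
```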

Hi, thanks for your reply and sorry for my delay. I’ve put the issue aside for now because I’m going to analyse the system workload in the near future to select a node type that better suits our application. My hope is that the problem disappears on its own once I set up a new cluster, since this worked before and I assume a manual change led to the problem.
