CI Environment Resources not successfully created on self-hosted Omnibus GitLab

I have a self-hosted Omnibus GitLab with the Kubernetes cluster integration enabled on a subgroup. When I try to use a dynamic environment, the pipeline job fails with the messages “This job failed because the necessary resources were not successfully created” and “The deployment of this job to 7-k8s-ci-install did not succeed”.
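For context, the deploy job is essentially a standard dynamic-environment job along these lines (the job name, image, script, and environment name here are illustrative placeholders rather than my exact configuration):

deploy_review:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # KUBECONFIG is expected to be injected by the cluster integration
    - kubectl apply -f deployment.yaml
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.example.com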

Looking at the Kubernetes cluster where I expected the namespace to be created, I see that no namespace was in fact created.
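For reference, this is roughly how I checked (the expected namespace name is an assumption based on GitLab’s <project>-<project-id>-<environment> naming convention):

kubectl get namespaces
# expected something like <project>-<id>-7-k8s-ci-install, but no such namespace is listed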

This is an old-style, self-deployed Kubernetes cluster configured with an API URL, CA certificate, and service token. It is an RBAC-enabled, GitLab-managed cluster with “Namespace per environment” enabled. It does not have any of the other integrations enabled.
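For completeness, the API URL and CA certificate were obtained the usual way for an existing cluster, roughly as follows (the default-token secret name is a placeholder):

kubectl cluster-info | grep -E 'Kubernetes control plane|Kubernetes master' | awk '/http/ {print $NF}'
kubectl get secret <default-token-xxxxx> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode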

In the GitLab logs, I see:

{"severity":"INFO","time":"2021-08-28T14:20:34.312Z","correlation_id":"01FE6J4CTPDNQDM0EZVGAAKDGK","message":"Pipeline authorized","project_id":7,"user_id":2}
{"severity":"ERROR","time":"2021-08-28T14:20:34.509Z","correlation_id":"01FE6J4CTPDNQDM0EZVGAAKDGK","message":"Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:464. There must be another instance already in execution."}
{"severity":"ERROR","time":"2021-08-28T14:20:48.972Z","correlation_id":"01FE6J4V878Y3H3V32RC9283GB","message":"Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:464. There must be another instance already in execution."}
{"severity":"ERROR","time":"2021-08-28T14:20:49.025Z","correlation_id":"01FE6J4V878Y3H3V32RC9283GB","message":"Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:464. There must be another instance already in execution."}
{"severity":"ERROR","time":"2021-08-28T14:20:49.172Z","correlation_id":"01FE6J4V878Y3H3V32RC9283GB","message":"Cannot obtain an exclusive lease for ci/pipeline_processing/atomic_processing_service::pipeline_id:464. There must be another instance already in execution."}
{"severity":"INFO","time":"2021-08-28T14:26:16.598Z","correlation_id":"30d94503145b7e92e7d65d8a732457e9","message":"Updating statistics for project 3"}

My self-hosted GitLab 14.1.1 Omnibus installation is running on Ubuntu 18.04:

$ apt list gitlab-\*
Listing... Done
gitlab-ce/bionic,now 14.1.1-ce.0 amd64 [installed]
gitlab-cli/bionic,bionic 1:1.3.0-2 all
gitlab-runner/bionic 10.5.0+dfsg-2 amd64
gitlab-shell/bionic,bionic 6.0.4-1 all
gitlab-workhorse/bionic 0.8.5+debian-3 amd64

This is a new GitLab install and a new Kubernetes cluster; I’m not sure where the problem lies or how to debug it further.

  • What are you seeing, and how does it differ from what you expect to see?

I’m seeing the errors described above, but I expected GitLab to create the Kubernetes namespace and run my CI job with the KUBECONFIG for that namespace. This is a pattern that I’ve used successfully on GitLab.com, but it is not working on this self-hosted instance.


Thank you!

I’m replying to my own message to help all those who come after.

The problem turned out to be a typo in the ClusterRoleBinding.
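In hindsight, an easy way to catch this kind of mistake is to ask the API server whether the service account actually has the permissions the binding is supposed to grant, e.g. for the gitlab-admin account created in the snippet below:

kubectl describe clusterrolebinding gitlab-admin
kubectl auth can-i create namespace --as=system:serviceaccount:kube-system:gitlab-admin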

The general technique for creating the service account and cluster role binding, then extracting the token GitLab needs to manage the cluster, is shown in the following snippet:

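# Create a service account for GitLab in the kube-system namespace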
echo "
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: kube-system
" | kubectl apply -f -

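# Bind the gitlab-admin service account to the built-in cluster-admin ClusterRole
# (this binding is where my typo was)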
echo "
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab-admin
  namespace: kube-system
" | kubectl apply -f -

echo "### Paste this into the authorization token of the gitlab kubernetes integration:"
kubectl -n kube-system get secret -o json | jq -r '.items[] | select (.metadata.name | startswith("gitlab-admin-token-")) | .data.token | @base64d'
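To sanity-check the extracted token before pasting it into GitLab, something like the following should list the cluster’s namespaces (the ca.crt path and API URL are placeholders for your cluster’s values):

TOKEN=$(kubectl -n kube-system get secret -o json | jq -r '.items[] | select(.metadata.name | startswith("gitlab-admin-token-")) | .data.token | @base64d')
curl --cacert ca.crt --header "Authorization: Bearer $TOKEN" https://<api-url>/api/v1/namespaces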