I am experiencing an issue using the GitLab Agent. The context is passed correctly, but the moment I execute a
`kubectl get pods` command, it says I need to be logged in. This is a brand-new Kubernetes cluster, the agent had just been created, and, according to this topic, there is no way for me to clear a cluster cache on an agent.
Note: Both agents/contexts have the same issue. The only difference is
that primary-agent points to a Kubernetes cluster running version 1.23 and
test-v20-agent to one running version 1.20. I was not sure whether this was the cause of the problem, which is why I tested both.
I am using a self-hosted GitLab (Omnibus) installation, version 14.7.0-ee, and a Kubernetes cluster version v1.20.15. The agent and runner are successfully connected from the cluster, and using Kaniko I can build an application and push it to the GitLab container registry. But I can't execute kubectl commands.
My `.gitlab-ci.yml` file looks like this:

```yaml
deploy:
  stage: deploy
  allow_failure: true
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config get-contexts
    - kubectl config use-context peppeo/k8s-agent:test-v20-agent
    - kubectl config view
    - kubectl get pods
```
From the output of the `kubectl config view` command, I could see that it called the k8s-proxy on my GitLab instance, so I ran
`sudo gitlab-ctl tail` on the GitLab server to see if anything stood out. I noticed that when the kubectl command executed, it called the
k8s-proxy, but my server responded with a 401 error. Is there something I need to configure on my Omnibus installation to make this work?
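For reference, these are the commands I ran on the server while the CI job was executing (filtering the tail to the `gitlab-kas` service is an assumption on my part; the plain `gitlab-ctl tail` shown above also works):

```shell
# Tail all Omnibus component logs while the CI job runs:
sudo gitlab-ctl tail

# Or, assuming the agent server runs as the gitlab-kas service,
# tail only that component:
sudo gitlab-ctl tail gitlab-kas
```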
Any help would be appreciated.
UPDATE: Feb 6, 2022
While trying to test it out, I commented out everything in the
`.gitlab-ci.yml` script except the
`kubectl get pods` command. This gave me a different error:
When searching for a solution, I found this article on Medium, which explained that I had to create a
RoleBinding for the GitLab Runner to be able to get and create pods. This confuses me, as a
ClusterRoleBinding is already set up during the Helm installation of the GitLab Runner, but I did as the article said, and afterwards I could see the runner pods when running
`kubectl get pods`.
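For reference, what I applied looked roughly like the sketch below (the names, namespace, and service account are placeholders; the service account has to match the one the runner actually uses, and the article's exact manifest may differ):

```yaml
# Role granting pod access in the runner's namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner-pod-access   # placeholder name
  namespace: default               # placeholder namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create"]
---
# RoleBinding attaching the Role to the runner's service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-runner-pod-access   # placeholder name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: gitlab-runner            # placeholder; must match the runner's SA
    namespace: default
roleRef:
  kind: Role
  name: gitlab-runner-pod-access
  apiGroup: rbac.authorization.k8s.io
```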
However, if I follow the GitLab documentation and use a particular context with
`kubectl config use-context peppeo/k8s-agent:test-v20-agent`, I once again get the "You must be logged in to the server…" error.
I would really appreciate it if somebody could shed some light on this. I don't understand why I have to create additional Roles and RoleBindings manually after the Helm installation, and I don't understand why I can get pods when not using the agent context but get nothing when I do use it.