I have my project set up on gitlab.com as a private repository.
My project points to my external Kubernetes cluster, and I have verified that I can communicate with Kubernetes via a cluster-admin role (based on the gitlab.com documentation). I tested the connection to Kubernetes by installing Helm via the Kubernetes dashboard on the gitlab.com project page.
When I attempt to do a deployment via my pipeline I am receiving the following error message:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)

This is my Service Account and ClusterRoleBinding configuration:
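The YAML from the original post was not captured in this thread. A typical ServiceAccount plus ClusterRoleBinding for this integration looks roughly like the sketch below, following the pattern in the gitlab.com documentation; the `gitlab-admin` name and the `kube-system` namespace are assumptions, not values from the post:

```yaml
# ServiceAccount that GitLab will use to talk to the cluster
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: kube-system
---
# Bind that ServiceAccount to the built-in cluster-admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab-admin
    namespace: kube-system
```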
Every once in a while I keep getting that error. Is there any way I can solve this issue once and for all? It doesn't make sense that every time this issue happens, we have to remove and re-add the settings. Can you please advise @sedonami
Same here; this is the second time I’ve run into the issue, and second time I’ve come to this as a result of searching… so thanks twice over! Just cleared the cache and everything works now.
They’re talking about the Kubernetes Admin Panel in GitLab.
In the left sidebar, select Infrastructure => Kubernetes clusters, select your cluster from the list, open the Advanced Settings tab, scroll to Clear cluster cache, and click the button.
I am experiencing the same issue using the GitLab Agent. The context is passed correctly, but the moment I execute a kubectl get pods command, it says I need to be logged in. This is a brand new Kubernetes cluster, and the agent had just been created. And there is no way for me to clear a cluster cache on an agent.
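For the agent-based setup, the kubectl context has to be selected explicitly in the CI job before running any commands. A minimal sketch of such a .gitlab-ci.yml job, where `path/to/agent-config-project:my-agent` is a placeholder for your own agent's context name (not a value from this thread):

```yaml
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # The context name follows the pattern <path-of-agent-config-project>:<agent-name>
    - kubectl config use-context path/to/agent-config-project:my-agent
    - kubectl get pods
```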
I ran sudo gitlab-ctl tail on the GitLab server to see if anything stood out, and I noticed that when the kubectl command executed, it called the k8s-proxy but my server responded with a 401 error. So is there something I need to configure on my Omnibus installation to make it work?
I am using GitLab version 14.7.0-ee and a Kubernetes cluster running v1.20.15. The agent and runner are successfully connected from the cluster, and using Kaniko I can build an application and push it to the GitLab Container Registry. But I can't execute kubectl commands.
As far as I understood, it usually happens when Omnibus GitLab is accessed via HTTP, not HTTPS.
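If HTTP really is the trigger, Omnibus can be switched to HTTPS in /etc/gitlab/gitlab.rb. A minimal sketch, assuming a publicly resolvable hostname gitlab.example.com reachable by Let's Encrypt (neither of which is established in this thread):

```ruby
# /etc/gitlab/gitlab.rb
external_url "https://gitlab.example.com"

# Have Omnibus obtain and renew the certificate automatically
letsencrypt['enable'] = true
```

After editing, apply the change with `sudo gitlab-ctl reconfigure`.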
I have the same problem on my home lab (no public address, no real domain = no SSL).
I'll now try the deprecated certificate-based Kubernetes integration instead.