Job Failed: "You must be logged in to the server (the server has asked for the client to provide credentials)"

I have my project set up as a private repository.

My project points to my external K8s cluster, and I have verified I can communicate with Kubernetes via a cluster-admin role (based on the documentation). I tested the connection to Kubernetes by installing Helm via the Kubernetes dashboard on the project page.

When I attempt to do a deployment via my pipeline I am receiving the following error message:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)

This is my ServiceAccount and ClusterRoleBinding configuration:
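When the certificate-based integration is working, GitLab is documented to inject a set of KUBE_* variables into the job environment. A quick sanity check at the top of the failing job can show whether credentials ever reached the runner at all (the variable names are from the GitLab docs; the script itself is my sketch, not from this thread):

```shell
#!/bin/sh
# Report whether GitLab handed this job any cluster credentials.
# If these are empty, kubectl falls back to anonymous access and the
# API server answers with "you must be logged in".
check() {
  name=$1
  value=$2
  if [ -n "$value" ]; then
    echo "$name is set"
  else
    echo "$name is MISSING"
  fi
}
check KUBE_URL "$KUBE_URL"
check KUBE_TOKEN "$KUBE_TOKEN"
check KUBECONFIG "$KUBECONFIG"
```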

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
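One pitfall worth ruling out with this setup (my own suggestion, not from the thread): the GitLab cluster form expects the service account's bearer token in decoded form. On clusters older than v1.24 a token secret is created automatically, and it could be fetched roughly like the commented commands below; the namespace and account name match the manifest above, and the token value is a stand-in:

```shell
#!/bin/sh
# Sketch: fetch the gitlab service account's token (needs a working
# kubeconfig, so shown here as comments only):
#
#   SECRET=$(kubectl -n default get sa gitlab -o jsonpath='{.secrets[0].name}')
#   kubectl -n default get secret "$SECRET" -o jsonpath='{.data.token}'
#
# The secret stores the token base64-encoded; pasting it into GitLab
# without decoding produces exactly this 401 / "must be logged in"
# symptom. Decoding works like this (stand-in value, not a real token):
encoded="dG9rZW4="
printf '%s\n' "$(printf '%s' "$encoded" | base64 -d)"   # prints: token
```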

Any tips on where to look next would be greatly appreciated.

This appears to be some kind of bug within GitLab.

To fix this I removed the Kubernetes configuration via the dashboard and then added a new one.

Every once in a while I keep getting that error. Is there any way I can solve this issue once and for all? It doesn’t make sense that every time this issue happens, we have to remove and re-add the settings. Can you please advise @sedonami

If someone ever gets this error again, I solved it by clearing the cluster cache in the Kubernetes administration panel


Thanks jo.walt, that worked.

Same here; this is the second time I’ve run into the issue, and second time I’ve come to this as a result of searching… so thanks twice over! Just cleared the cache and everything works now.

I created my account on this forum to like this answer

How does one clear the cache of a Kubernetes cluster?

Thanks a ton @jo.walt

I was about to delete and re-create the cluster and then found this post, saved my day!

I was looking for that solution for such a long time, lol. Thank you so much!

@chris-williams me too :grinning:

Since it took me a while to figure it out:

They’re talking about the Kubernetes Admin Panel in GitLab.

In the left sidebar, select Infrastructure => Kubernetes Clusters => select your cluster from the list => Advanced Settings tab => scroll to Clear Cluster Cache and click the button.


EDIT: I have created a separate topic that can be found here.

I am experiencing the same issue using the GitLab Agent. The context is passed correctly, but the moment I execute a kubectl get pods command, it says I need to be logged in. This is a brand-new Kubernetes cluster, and the agent had just been created. And there is no way for me to clear a cluster cache on an agent.

I ran sudo gitlab-ctl tail on the GitLab server to see if anything stood out, and I noticed that when the kubectl command executed, it called the k8s-proxy but my server responded with a 401 error. So is there something I need to configure on my Omnibus installation to make it work?

I am using GitLab version 14.7.0-ee and a K8s cluster at version v1.20.15. The agent and runner are successfully connected from K8s, and using Kaniko I can build an application and push it to the GitLab container registry. But I can’t execute kubectl commands.
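For reference, with the agent-based setup the job has to select the agent's kubectl context explicitly before any commands will authenticate. A minimal job along the lines of the GitLab agent docs (path/to/project and agent-name are placeholders for your own project path and agent):

```yaml
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # The context name is <project path>:<agent name>
    - kubectl config use-context path/to/project:agent-name
    - kubectl get pods
```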

Same problem, anyone have any ideas to fix it ?

As far as I understand, this usually happens when Omnibus GitLab is accessed via HTTP, not HTTPS.
I have the same problem on my home lab (no public address, no real domain, so no SSL).
For now I will try the deprecated certificate-based Kubernetes integration instead.
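If HTTP really is the culprit, serving Omnibus over HTTPS comes down to a couple of lines in /etc/gitlab/gitlab.rb followed by gitlab-ctl reconfigure. A minimal sketch using Let's Encrypt (gitlab.example.com and the contact email are placeholders, and this route needs a publicly resolvable domain, which a home lab may not have):

```ruby
# /etc/gitlab/gitlab.rb -- minimal HTTPS setup via Let's Encrypt
external_url 'https://gitlab.example.com'
letsencrypt['enable'] = true
letsencrypt['contact_emails'] = ['admin@example.com']
# With a self-signed or internal CA certificate instead, disable
# Let's Encrypt and place the cert/key under /etc/gitlab/ssl/.
```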


After going through GitLab support, this has been resolved by adding a certificate to the Omnibus installation. GitLab KAS (Agent) - Job failed: You must be logged in to the server - #7 by Corfitz