GitLab KAS (Agent) - Job failed: You must be logged in to the server

I am experiencing an issue using the GitLab Agent. The context is passed correctly, but the moment I execute a kubectl get pods command, it says I must be logged in to the server. This is a brand-new Kubernetes cluster, and the agent was only just created, so there is no cluster cache on the agent that I could clear, as suggested in this topic.

Note: Both agents/contexts have the same issue. The only difference is that primary-agent runs on a Kubernetes cluster at version 1.23, while test-v20-agent runs on version 1.20. I wasn't sure whether the cluster version was the cause, which is why I tested both.

I am using a self-hosted GitLab Omnibus installation, version 14.7.0-ee, with a Kubernetes cluster at v1.20.15. The agent and runner are successfully connected from the cluster, and using Kaniko I can build an application and push it to the GitLab Container Registry. But I can't execute kubectl commands.
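For context, CI jobs are normally authorized to use the agent through the agent's configuration file in the agent's project. A minimal sketch, assuming the agent lives in the peppeo/k8s-agent project and a hypothetical application project path (replace with your own):

```yaml
# .gitlab/agents/test-v20-agent/config.yaml (in the peppeo/k8s-agent project)
ci_access:
  projects:
    # Hypothetical path of the project whose CI jobs run kubectl —
    # substitute the real path of your application repository
    - id: peppeo/my-app
```

Without an entry like this, jobs from other projects get a 401 from the KAS proxy even when the context itself resolves.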

My .gitlab-ci.yml file looks like this:

```yaml
deploy:
  stage: deploy
  allow_failure: true
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config get-contexts
    - kubectl config use-context peppeo/k8s-agent:test-v20-agent
    - kubectl config view
    - kubectl get pods
```

From the kubectl config view command, I could see that it called the k8s-proxy on my GitLab instance, so I ran sudo gitlab-ctl tail on the GitLab server to see if anything stood out. I noticed that when the kubectl command executed, it called the k8s-proxy, but my server responded with a 401 error. So is there something I need to configure on my Omnibus installation to make it work?

Any help would be appreciated.

UPDATE: Feb 6, 2022

While trying to test it out, I commented out every command in the .gitlab-ci.yml script except kubectl get pods. This gave me a different error:

When searching for a solution, I found an article on Medium explaining that I had to create a Role and RoleBinding for the GitLab Runner to be able to get and create pods. This confused me, since a ClusterRole and ClusterRoleBinding are already set up during the Helm installation of the GitLab Runner, but I did as the article said, and after that I could see the runner pods when running kubectl get pods.
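For reference, the kind of manifest the article describes is a namespaced Role plus RoleBinding granting pod access to the service account the job runs under. A hedged sketch (the names, namespace, and service account here are assumptions — adjust to match your runner's actual service account):

```yaml
# Hypothetical Role granting read access to pods in the default namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Binds the Role to the "default" service account, which jobs often
# run as when no serviceAccountName is configured
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-pod-reader-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ci-pod-reader
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
```

Note that the ClusterRole/ClusterRoleBinding created by the Runner Helm chart are scoped to the runner's own service account for managing job pods, which is why they don't automatically cover what your job's pod is allowed to do.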

However, if I follow the GitLab documentation and select a context with kubectl config use-context peppeo/k8s-agent:test-v20-agent, I once again get the "You must be logged in to the server…" error.

I would really appreciate it if somebody could shed some light on this. I don't understand why I have to create additional Roles and RoleBindings manually after the Helm installation, and I don't understand why I can list pods when not using the agent context but get nothing when I do use it.


Same issue here, please help.

I got it too…

I am considering downgrading GitLab and going back to certificate-based Kubernetes cluster integration. Back then, things worked.

Looking at the pricing table, apparently the GitLab Agent isn't available for all tiers yet… only Premium can use it… or if you use a public repo on

Which is confusing, because here: GitLab Agent for Kubernetes | GitLab it says it's available for all tiers.

Hi everyone,
The GitLab Agent for Kubernetes works in the Free tier.
Your GitLab instance must be served over https, and the agent should be pointed at a wss address.
kubectl does not send the authorization header if the target is plain http.
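A sketch of what the relevant Omnibus settings might look like in /etc/gitlab/gitlab.rb — the hostname is a placeholder and the exact KAS settings should be checked against the docs for your GitLab version:

```ruby
# /etc/gitlab/gitlab.rb — hypothetical sketch, adjust to your instance

# Must be https, not http, or kubectl will omit the authorization header
external_url 'https://gitlab.example.com'

# Enable the GitLab Agent Server (KAS)
gitlab_kas['enable'] = true

# The agent must connect over wss, not ws
gitlab_kas_external_url 'wss://gitlab.example.com/-/kubernetes-agent/'
```

After editing, apply the configuration with sudo gitlab-ctl reconfigure.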


As confirmed in this issue: GitLab KAS (Agent) - Job failed: You must be logged in to the server (#352284) · Issues · GitLab · GitLab