Hi all,
I have an Omnibus GitLab CE 14.10.5 instance running on-premises with Docker Compose.
It works fine, and I have enabled KAS.
On my GitLab I created a project “setup_gitlab_agent” with a file .gitlab/agents/testagent/config.yaml:
observability:
  logging:
    level: debug
ci_access:
  projects:
    - id: jack.chuong/setup_gitlab_agent
Then I went to Infrastructure → Kubernetes clusters → Agent → Connect a cluster (agent) and chose the “testagent” agent created above.
I connected to my on-premises Kubernetes cluster and installed the GitLab agent:
kubectl create ns gitlab-agent
helm upgrade --install gitlab-agent gitlab/gitlab-agent \
  --namespace gitlab-agent \
  --set config.token=****** \
  --set config.kasAddress=ws://gitlab.mydomain.com/-/kubernetes-agent/ \
  --set image.tag=v14.10.0
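For readability, the same --set flags can also be kept in a Helm values file (a sketch; the keys mirror the flags above, and the token stays redacted):

```yaml
# values.yaml for the gitlab/gitlab-agent chart -- same settings as the --set flags
image:
  tag: v14.10.0            # agent version pinned close to the GitLab 14.10.x server
config:
  token: "******"          # agent registration token (redacted)
  kasAddress: ws://gitlab.mydomain.com/-/kubernetes-agent/
```

Usage: helm upgrade --install gitlab-agent gitlab/gitlab-agent --namespace gitlab-agent -f values.yaml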
testagent connection status: Connected
In my project “setup_gitlab_agent”, I created a .gitlab-ci.yml:
stages:
  - deploy

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config get-contexts
    - kubectl config use-context jack.chuong/setup_gitlab_agent:testagent
    - kubectl get pods
  when: manual
I ran the pipeline and got an error:
$ kubectl config get-contexts
CURRENT   NAME                                       CLUSTER   AUTHINFO   NAMESPACE
          jack.chuong/setup_gitlab_agent:testagent   gitlab    agent:6
$ kubectl config use-context jack.chuong/setup_gitlab_agent:testagent
Switched to context "jack.chuong/setup_gitlab_agent:testagent".
$ kubectl get pods
E0328 08:24:34.174016      48 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
The cluster name “gitlab” in the kube context seemed wrong to me; I expected it to be “kubernetes”, the name of my Kubernetes cluster.
So I tried to edit .gitlab-ci.yml:
stages:
  - deploy

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config get-contexts
    - kubectl config set-context jack.chuong/setup_gitlab_agent:testagent --cluster=kubernetes --namespace=default --server="https://k8s.mydomain.com:6443"
    - kubectl config get-contexts
    - kubectl config use-context jack.chuong/setup_gitlab_agent:testagent
    - kubectl config view
    - kubectl get pods
  when: manual
This is the pipeline error output:
$ kubectl config get-contexts
CURRENT   NAME                                       CLUSTER   AUTHINFO   NAMESPACE
          jack.chuong/setup_gitlab_agent:testagent   gitlab    agent:6
$ kubectl config set-context jack.chuong/setup_gitlab_agent:testagent --cluster=kubernetes --namespace=default --server="https://k8s.mydomain.com:6443"
Context "jack.chuong/setup_gitlab_agent:testagent" modified.
$ kubectl config get-contexts
CURRENT   NAME                                       CLUSTER      AUTHINFO   NAMESPACE
          jack.chuong/setup_gitlab_agent:testagent   kubernetes   agent:6    default
$ kubectl config use-context jack.chuong/setup_gitlab_agent:testagent
Switched to context "jack.chuong/setup_gitlab_agent:testagent".
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    server: http://gitlab.mydomain.com/-/kubernetes-agent/k8s-proxy/
  name: gitlab
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: agent:6
  name: jack.chuong/setup_gitlab_agent:testagent
current-context: jack.chuong/setup_gitlab_agent:testagent
kind: Config
preferences: {}
users:
- name: agent:6
  user:
    token: REDACTED
$ kubectl get pods
E0328 09:10:08.542297      65 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
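As far as I understand, a kubeconfig context only ties names together: its cluster: field has to match a name under clusters:. In the generated config above the only cluster entry is named gitlab, so after I point the context at kubernetes there is no matching cluster entry (and therefore no server address), which would explain kubectl falling back to localhost:8080. A minimal sketch of the linkage as I understand it (my assumption, not verified):

```yaml
# kubeconfig sketch: context.cluster must reference an existing clusters[].name,
# otherwise kubectl has no server address to talk to
clusters:
- cluster:
    server: http://gitlab.mydomain.com/-/kubernetes-agent/k8s-proxy/
  name: gitlab
contexts:
- context:
    cluster: gitlab            # must match the cluster name above
    user: agent:6
  name: jack.chuong/setup_gitlab_agent:testagent
current-context: jack.chuong/setup_gitlab_agent:testagent
```

So maybe the cluster name “gitlab” is not the problem, but I am not sure.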
Did I do something wrong? Please give me some advice, thank you very much.