Kubernetes contexts from agent not injected into pipeline of projects authorized to use the agent

Hi,

I’m having trouble accessing the Kubernetes agent from other projects: the variables and contexts that should be available in the pipeline don’t appear. I’m running the GitLab 14.5 CE Omnibus Docker image.

I set up two pipelines to test this:

  1. Pipeline in the Kubernetes agent config repository.
  2. Pipeline in a separate project that is listed as an authorized project. I used this as an example. I tried both group and project authorization.

The first pipeline works: I can use kubectl from the pipeline and the contexts are visible. After setting the context from my agent, I can apply manifests to the cluster.

The second one doesn’t work. The documentation says that the appropriate variables should be injected into my pipeline, but that doesn’t seem to happen. Is there some additional setup required?
The agent’s configuration repository is private, but the documentation doesn’t mention that it has to be public to use the CI/CD tunnel functionality.

Does anyone know what the issue might be here? Thanks for any help.

Hi,

So I’ve tried with a public/internal/private configuration repository, coupled with a public/internal/private repository.
In the agent configuration, I tried to allow access to the repository by its /group/project path or by its ID, and I also added a groups access entry to allow the whole group, with no luck.

The KUBECONFIG variable is simply not exported to the runner as it should be.

Agent configuration:

ci_access:
  projects:
    - id: "388"
  groups:
    - id: bioman
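
For reference, the /group/project path form from the docs that I also tried looks like this (the project path below is a placeholder, not my real one):

ci_access:
  projects:
    # full project path, per the docs, instead of the numeric project ID
    - id: bioman/my-project
  groups:
    - id: bioman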

CI Job:

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - set
    - kubectl config get-contexts
    - kubectl config use-context bioman/gitlab-agent:gitlab-agent-1
    - kubectl get pods -n gitlab-agent
  tags:
    - docker

In the configuration repository CI job, I can see the KUBECONFIG variable exported:
KUBECONFIG=/builds/bioman/gitlab-agent.tmp/KUBECONFIG
In the authorized project’s CI job, the KUBECONFIG variable is not there.
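
Something like this minimal job (same bitnami/kubectl image as above) makes it easy to see from the log whether the variable arrives at all:

check-agent-access:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # print the variable the agent integration should inject
    - 'echo "KUBECONFIG=${KUBECONFIG:-<not set>}"'
    # list the contexts from the injected kubeconfig, if any
    - kubectl config get-contexts
  tags:
    - docker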


@vnagy Should we create an issue for this?
Because in the current state:

  • We can’t centralize the configuration, because that requires Premium.
  • We can’t share the agent configuration with other repositories, because it doesn’t work (my guess is that part of this feature is still Premium-only or something, which is why there is no error message).
  • The only thing we can do with the agent currently is set one up per project. But you can guess that with more than a hundred projects, I’m not going to install 100 agents in my Kubernetes cluster.

Since there’s no reply, I think it would indeed be best to create an issue for this. I will make one and link it here for you too.


I’m running into the same issue. I’m trying to use the method outlined in the documentation, with one repository for the agent config and another for cluster management, using the new cluster management project template.

Both of my repositories are private. I’ve authorized the cluster management project in the agent config (and tried many different formats for the path). I can run kubectl commands in the agent config repository’s CI just fine, but the cluster management job says:

$ if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi
error: no context exists with the name: path/to/agent-configuration-project:your-agent-name

I think there is a bug in the documentation here: I found that if I removed the KUBE_CONTEXT environment variable in the cluster management project, this worked.
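
Concretely, since the template only switches context when the variable is non-empty (the if [ -n "$KUBE_CONTEXT" ] guard above), overriding it with an empty value in the cluster management project’s .gitlab-ci.yml should have the same effect as removing it:

variables:
  # an empty value makes the template's `if [ -n "$KUBE_CONTEXT" ]` check skip use-context
  KUBE_CONTEXT: ""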

It could also be that my Kubernetes cluster still uses certificate-based authentication alongside the agent, and that this confuses how the kubeconfig is injected into the image. I tested this by changing the image used, and the correct context was passed in; yet with the cluster management GitLab image it fails.


Thanks, that actually fixed the issue for me as well!

Edit:
This actually does not fix the issue. Without the KUBE_CONTEXT variable, the deployment uses the wrong context (the default one, which uses the GitLab installation’s service account), so it cannot create the appropriate namespaces etc.
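
For anyone debugging the same thing, printing the resolved context in the job makes the problem visible (a debugging snippet, not part of the template):

script:
  # show which context kubectl actually resolved from the injected kubeconfig
  - kubectl config current-context
  - kubectl config get-contexts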


I actually have a fresh Helm installation of GitLab with only the agent set up, and I had the same issue. :)
Edit 2:
I guess your KAS might be falling back to certificate-based authentication and thus has full admin access to the cluster; I’m not really sure about that, though.

Update:

After deleting everything and retrying a couple of times, I realized that I had been creating my repositories under the root user instead of in a group. When I placed both projects in a group on the GitLab instance, it worked… >.<

That also means that I did define the KUBE_CONTEXT environment variable as path/to/agent-configuration-project:your-agent-name.
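
In the .gitlab-ci.yml that looks like this (same placeholder names as in the error message above):

variables:
  # full path of the agent configuration project, a colon, then the agent name
  KUBE_CONTEXT: path/to/agent-configuration-project:your-agent-name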

This is an interesting hint not to run the repositories under root, which I do, and indeed things don’t run as expected.

I have managed to install the Kubernetes agent and gitlab-runner successfully (in Kubernetes), but when I start the pipeline to deploy a simple ConfigMap to Kubernetes, I get an X509 certificate error.