Kubernetes contexts from agent not injected into pipeline of projects authorized to use the agent

Hi,

I'm having trouble accessing the Kubernetes agent from different projects: the variables and contexts that should be available in the pipeline don't appear. I'm running the GitLab 14.5 CE Omnibus Docker image.

I set up two pipelines to test this:

  1. Pipeline in the kubernetes agent config repository.
  2. Pipeline in a separate project that is listed in the agent's authorized projects. I used this as an example. I tried both group and project authorization.

The first pipeline works: I can use kubectl from the pipeline and the contexts are visible. By selecting the context provided by my agent I can apply manifests to the cluster.

The second one doesn't work. The documentation says the appropriate variables should be injected into my pipeline, but that doesn't seem to be the case. Is there some additional setup required?
The agent's configuration repository is private, but the documentation doesn't mention that it has to be public to use the CI/CD tunnel functionality.
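One quick way to check whether the injection happened at all, before touching kubectl, is to test the variable directly in the job script. This is just a diagnostic sketch in plain POSIX shell, not part of the official setup:

```shell
# Diagnostic: report whether the agent-injected KUBECONFIG variable
# is present in the job environment before any kubectl call.
if [ -n "${KUBECONFIG:-}" ]; then
  echo "KUBECONFIG is set: $KUBECONFIG"
else
  echo "KUBECONFIG is not set - agent contexts were not injected"
fi
```

If this prints the "not set" branch, the tunnel never injected anything and kubectl will fail regardless of the context name used.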

Does anyone know what the issue may be here? Thanks for any help.

Hi,

So I've tried with a Public/Internal/Private configuration repository, combined with a Public/Internal/Private project repository.
In the agent configuration, I tried allowing access to the repository by its /group/project path and by numeric ID, and I also added a groups entry to allow the whole group, with no luck.

The KUBECONFIG variable is simply not exported to the runner as it should be.

Agent configuration:

ci_access:
  projects:
    - id: "388"
  groups:
    - id: bioman
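For comparison, the GitLab agent documentation expresses the project id under ci_access as the full project path rather than a numeric ID, so the documented form would look roughly like the following (bioman/some-project is a placeholder path, not taken from this thread):

```yaml
ci_access:
  projects:
    - id: bioman/some-project   # full path form documented by GitLab, not a numeric ID
  groups:
    - id: bioman
```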

CI Job:

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - set
    - kubectl config get-contexts
    - kubectl config use-context bioman/gitlab-agent:gitlab-agent-1
    - kubectl get pods -n gitlab-agent
  tags:
    - docker

In the configuration repository's CI job, I can see the KUBECONFIG variable exported:
KUBECONFIG=/builds/bioman/gitlab-agent.tmp/KUBECONFIG
In the authorized project's CI job, the KUBECONFIG variable is not there.
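For anyone else comparing the two jobs, a compact way to capture this difference in the job log is to dump only the Kubernetes-related variables instead of the full `set` output (plain shell, no kubectl required; just a debugging sketch):

```shell
# List Kubernetes-related variables the runner received; print a
# fallback message when none are present.
env | grep -i -E 'KUBECONFIG|KUBERNETES' || echo "no kubernetes variables injected"
```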


@vnagy Should we create an issue for this?
Because in the current state:

  • We can't centralize the configuration, because that's a Premium feature.
  • We can't share the agent configuration with other repositories, because it doesn't work (my guess is that part of this feature is still Premium-only, which would explain why there is no error message).
  • The only thing we can do with the agent currently is set up one per project. But with more than a hundred projects, I'm not going to install 100 agents in my Kubernetes cluster.

Since there's been no reply, I think it would indeed be best to create an issue for this. I'll make one and link it here for you too.
