I have installed GitLab CE using Helm, but our AD users can't log in to the platform. The login UI shows the following error: Could not authenticate you from Ldapmain because "Invalid credentials for userX"
(but the credentials are correct!)
Installation:
helm upgrade --install gitlab gitlab/gitlab --namespace my-ns --tiller-namespace tiller-ns --timeout 600 --set global.edition=ce --set global.hosts.domain=example.com --set global.hosts.externalIP=<ExternalIPAddressAllocatedToTheNGINXIngressControllerLBService> --set nginx-ingress.enabled=false --set global.ingress.class=mynginx-ic --set certmanager.install=false --set global.ingress.configureCertmanager=false --set gitlab-runner.install=false --set prometheus.install=false --set registry.enabled=false --set gitlab.gitaly.persistence.enabled=false --set postgresql.persistence.enabled=false --set redis.persistence.enabled=false --set minio.persistence.enabled=false --set global.appConfig.ldap.servers.main.label='LDAP' --set global.appConfig.ldap.servers.main.host=<IPAddressOfMyDomainController> --set global.appConfig.ldap.servers.main.port='389' --set global.appConfig.ldap.servers.main.uid='sAMAccountName' --set global.appConfig.ldap.servers.main.bind_dn='CN=testuser,OU=sampleOU3,OU=sampleOU2,OU=sampleOU1,DC=example,DC=com' --set global.appConfig.ldap.servers.main.password.secret='gitlab-ldap-secret' --set global.appConfig.ldap.servers.main.password.key='password'
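For reference, here are the same LDAP settings expressed as a values.yaml fragment (this is equivalent to the --set flags above, and essentially what I used when I tried the values.yaml route mentioned in the notes below):
global:
  appConfig:
    ldap:
      servers:
        main:
          label: 'LDAP'
          host: '<IPAddressOfMyDomainController>'
          port: 389
          uid: 'sAMAccountName'
          bind_dn: 'CN=testuser,OU=sampleOU3,OU=sampleOU2,OU=sampleOU1,DC=example,DC=com'
          password:
            secret: 'gitlab-ldap-secret'
            key: 'password'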
Notes:
- I have previously installed my own NGINX Ingress Controller separately:
helm install stable/nginx-ingress --name nginx-ingress --namespace my-ns --tiller-namespace tiller-ns --set controller.ingressClass=mynginx-ic
- I have previously created a secret with the password for the user configured as bind_dn ('CN=testuser,OU=sampleOU3,OU=sampleOU2,OU=sampleOU1,DC=example,DC=com'). The password is base64-encoded, as indicated in the documentation (a quick way to double-check the stored value is shown right after the manifest below).
File: gitlab-ldap-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-ldap-secret
data:
  password: encodedpass-blablabla
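A quick way to double-check that the stored value decodes back to the exact password (no trailing newline, not double-encoded); this is just a sketch, assuming kubectl and base64 are available in a bash-like shell, with <BindUserPassword> as a placeholder:
echo -n '<BindUserPassword>' | base64
kubectl -n my-ns get secret gitlab-ldap-secret -o jsonpath='{.data.password}' | base64 --decode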
- Instead of providing all of these parameters on the command line during the chart installation, I have also tried configuring everything in the various values.yaml files that this GitLab Helm chart provides, but it just seemed easier to document it this way here, for reproduction purposes.
- I have also tried adding these parameters, with no luck:
--set global.appConfig.ldap.servers.main.encryption='plain'
--set global.appConfig.ldap.servers.main.base='OU=sampleOU1,DC=example,DC=com'
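- I suppose the bind credentials and the user lookup could also be verified outside GitLab with ldapsearch. A sketch using the same values as above (it assumes a machine with the OpenLDAP client tools installed; the password is a placeholder):
ldapsearch -x -H ldap://<IPAddressOfMyDomainController>:389 \
  -D 'CN=testuser,OU=sampleOU3,OU=sampleOU2,OU=sampleOU1,DC=example,DC=com' \
  -w '<BindUserPassword>' \
  -b 'OU=sampleOU1,DC=example,DC=com' \
  '(sAMAccountName=userX)'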
- To make it even simpler, we are not considering persistence for any component. That is why these are all set to false:
--set gitlab.gitaly.persistence.enabled=false
--set postgresql.persistence.enabled=false
--set redis.persistence.enabled=false
--set minio.persistence.enabled=false
I do need persistence, but let's focus on LDAP authentication for now, since that is my main issue at the moment.
- I have checked with my sysadmin, and we use plain port 389 for Active Directory, with no encryption.
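- To rule out basic network reachability, I also want to confirm that a pod can open a TCP connection to the domain controller on port 389. A rough check (it assumes the task-runner image ships bash and timeout, and that kubectl is run from a bash-like shell; I have not verified either):
kubectl -n my-ns exec -it gitlab-task-runner-5777748f59-gkf9v -- bash -c 'timeout 3 bash -c "</dev/tcp/<IPAddressOfMyDomainController>/389" && echo reachable || echo not reachable'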
My environment
kubectl.exe version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
helm ls --tiller-namespace tiller-ns
NAME    REVISION  UPDATED                   STATUS    CHART         APP VERSION  NAMESPACE
gitlab  1         Tue Oct 29 18:16:06 2019  DEPLOYED  gitlab-2.3.7  12.3.5       my-ns
kubectl.exe get nodes
NAME STATUS ROLES AGE VERSION
kubernetes01.example.com Ready master 102d v1.15.1
kubernetes02.example.com Ready <none> 7h16m v1.15.1
kubernetes03.example.com Ready <none> 102d v1.15.1
kubernetes04.example.com Ready <none> 11d v1.15.1
After installing this chart, everything seems to work fine:
kubectl.exe get pods
NAME READY STATUS RESTARTS AGE
gitlab-gitaly-0 1/1 Running 0 65m
gitlab-gitlab-exporter-5b649bfbb-5pn7q 1/1 Running 0 65m
gitlab-gitlab-shell-7d9497fcd7-h5478 1/1 Running 0 65m
gitlab-gitlab-shell-7d9497fcd7-jvt9p 1/1 Running 0 64m
gitlab-migrations.1-gf8jr 0/1 Completed 0 65m
gitlab-minio-cb5945f79-kztmj 1/1 Running 0 65m
gitlab-minio-create-buckets.1-d2bh5 0/1 Completed 0 65m
gitlab-postgresql-685b68b4d7-ns2rw 2/2 Running 0 65m
gitlab-redis-5cb5c8b4c6-jtfnr 2/2 Running 0 65m
gitlab-sidekiq-all-in-1-5b997fdffd-n5cj2 1/1 Running 0 65m
gitlab-task-runner-5777748f59-gkf9v 1/1 Running 0 65m
gitlab-unicorn-764f6548d5-fmggl 2/2 Running 0 65m
gitlab-unicorn-764f6548d5-pqcm9 2/2 Running 0 64m
Now, if I try to log in with an LDAP user, I get the error mentioned before. If I go inside the unicorn pod, I can see related error messages in /var/log/gitlab/production.log.
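For reference, this is roughly how I am reading that log (I am assuming the main container in the unicorn pod is named unicorn, which I have not confirmed):
kubectl -n my-ns exec -it gitlab-unicorn-764f6548d5-fmggl -c unicorn -- tail -n 50 /var/log/gitlab/production.log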
What am I missing? Do I need to configure anything else? I have configured all the parameters for LDAP authentication mentioned here, but I am still having trouble authenticating.
Sorry, but I am new to GitLab and all of its internal components. I can't seem to find where to edit this file, for example: /etc/gitlab/gitlab.rb (which pod should I enter? I literally entered each one of them and did not find this configuration file). Also, I noticed the documentation says that diagnostic tools such as gitlab-rake gitlab:ldap:check, or utilities such as gitlab-ctl reconfigure, can be executed, but again: where do I run these? On the unicorn pod? The gitlab-shell pod? Various GitLab documentation pages reference some of these tools for troubleshooting, but I don't think this chart follows the same architecture.
I have looked at this post, for example, because it seems to be the same issue, but I can't find /etc/gitlab/gitlab.rb anywhere.
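The closest thing I can think of is to look at the configuration the chart actually renders inside the Rails pods. I am assuming here that the settings end up in /srv/gitlab/config/gitlab.yml inside the unicorn container, which I have not confirmed:
kubectl -n my-ns exec -it gitlab-unicorn-764f6548d5-fmggl -c unicorn -- grep -A 15 'ldap:' /srv/gitlab/config/gitlab.yml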
Any help would be much appreciated. I have been dealing with this issue for a couple of weeks now.