Kubernetes Integration "Something went wrong while installing Helm Tiller"

When I’ve seen failures installing into gitlab-managed-apps, it has been because the system:serviceaccount doesn’t have sufficient privileges to install into that namespace. This is common in RBAC-enabled (i.e. pretty much all modern) clusters. You might need to create a service account with cluster-admin privileges as described at https://docs.gitlab.com/ee/user/project/clusters/#adding-an-existing-kubernetes-cluster. There’s a similar step for EKS. A quick way to check the privileges is sketched below.
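If you want to sanity-check the privileges before retrying the install, kubectl auth can-i can impersonate the service account. This is only a sketch; the account name and namespace (gitlab-admin in kube-system) are assumptions, so substitute whatever account your token actually belongs to:

# Can the account create pods in the namespace GitLab installs into?
kubectl auth can-i create pods \
  --as=system:serviceaccount:kube-system:gitlab-admin \
  --namespace=gitlab-managed-apps

# An account bound to cluster-admin should answer "yes" to everything:
kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:gitlab-admin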

Thanks for the recommendation. I followed these steps and verified that I could curl the K8s API using the token from the server where GitLab is running, and got a valid response back.

curl -X GET https://10.0.1.60:6443/api --header "Authorization: Bearer [BEARER_TOKEN]" --insecure

No success however when attempting to install Helm.

Something went wrong while installing Helm Tiller
* Kubernetes error.

Not sure where to go to find logs. The following didn’t reveal anything significant:

sudo gitlab-ctl tail

I’m going to try the curl command you mentioned, but I found at least a promising error at:
/var/log/gitlab/gitlab-rails/kubernetes.log

{"severity":"ERROR","time":"2019-03-31T23:21:38.364Z","correlation_id":"Xp9nsmQUaX1","exception":"Kubeclient::HttpError","error_code":null,"service":"Clusters::Applications::InstallService","app_id":4,"project_ids":[15],"group_ids":[],"message":"SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate)"}

I’m guessing it has something to do with how I’m exposing the Kubernetes cluster, because GitLab and Kubernetes are not on the same network. I’m also guessing that I should install or get the issuer for the certificate and make sure that it is accepted everywhere in between them… not sure though.
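One way to confirm that guess from the GitLab host is to look at who issued the certificate the API server actually presents. A sketch, reusing the 10.0.1.60:6443 address from the curl above:

# Show the issuer and subject of the API server's certificate; if the
# issuer is a cluster-local CA, GitLab needs that CA cert in its
# integration settings in order to verify the connection.
echo | openssl s_client -connect 10.0.1.60:6443 -showcerts 2>/dev/null | openssl x509 -noout -issuer -subject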

I suspect the same. It appears to be an issue with certificate validation. Note that when curl’ing I’m allowing self-signed certs via the --insecure option. Not sure how to make GitLab do the same. I suppose I could get a valid certificate issued for the box.
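For what it’s worth, the GitLab cluster integration has a CA Certificate field for exactly this case. Assuming your kubeconfig embeds certificate-authority-data (as kubeadm-built clusters usually do), one way to pull the CA out in PEM form is:

# Print the cluster CA cert, ready to paste into GitLab's CA Certificate field.
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode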

A lot of things with both GitLab and Kubernetes get more complicated to manage with self-signed certificates. Not saying it can’t be done, but it does take more work. I started out with self-signed, eventually gave up, got a DNS entry for the box, and let Let’s Encrypt do it.

There is no issue using self-signed certs. This is the entire point of providing the CA cert.

You just need to init the cluster correctly. Hearing that GitLab staff “just gave up” and didn’t try to understand it or make it work… eeesh.

kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=example.com
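After re-issuing the certs, it’s worth confirming the extra SAN actually made it into the serving certificate. A quick check (example.com stands in for your real domain, as in the command above):

# List the DNS/IP SANs on the API server's certificate.
echo | openssl s_client -connect example.com:6443 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'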

Having said that, and now that I have gotten past the blank Kubernetes error when installing Helm, I now get a 403. Yes, I have set up my ServiceAccount and ClusterRoleBinding per the documentation linked earlier, and it still gives a 403. I have to agree: the fact that you can’t just install vanilla k8s on-prem and add it easily with the provided documentation is a bit of a joke, at least for less experienced k8s admins. This does not bode well for me convincing management to abandon Jenkins and drive k8s through our on-prem GitLab.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab-admin
  namespace: kube-system

Here is the API error response:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "pods is forbidden: User \"system:serviceaccount:default:gitlab-admin\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}

As you can see, the service account somehow gets resolved in the default namespace. I have no idea how I ended up with a gitlab-admin service account in both the default namespace and the kube-system namespace.

Now that I’ve figured all that out, I cleaned it all up, recreated the gitlab-admin ServiceAccount and ClusterRoleBinding, and got the correct token for the correct namespace with --namespace=kube-system, as sketched below.
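For anyone following along, on a cluster of this vintage (where a token Secret is auto-created for each ServiceAccount) fetching that token might look like the following; the names match the manifest above:

# Find the token secret that was auto-created for the service account...
SECRET=$(kubectl get serviceaccount gitlab-admin --namespace=kube-system -o jsonpath='{.secrets[0].name}')
# ...and decode the bearer token to paste into GitLab.
kubectl get secret "$SECRET" --namespace=kube-system -o jsonpath='{.data.token}' | base64 --decode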

I have finally got a working Helm install, with self-signed certs.

My setup:
GitLab on GitLab.com
k8s 1.14.1 on-prem behind a firewall with NAT port forwarding.
k8s configured with a self-signed cert using my domain name (fixes the blank Kubernetes Helm install error).
GitLab.com configured to point to said domain name and port.
CA and correct token provided after the SA and CRB were created (specifying the namespace to kubectl is important; fixes the 401 and 403 errors).

I found this topic (thank you, contributors!) after searching for a solution to this beauty of an error message (‘better’ even than M$ error messages). Just to let you know, the error messaging in GitLab CE could do better.

Hi @sgrossman: I have a repo in GitLab and am trying to connect to my existing cluster, which is running behind nginx. I am unable to install Helm Tiller and it is not giving any error codes either; it only says the installation failed. I have followed the link below and done the required steps: https://gitlab.com/help/user/project/clusters/add_remove_clusters.md#add-existing-cluster

Hello, I have the same errors.

When I check the pod logs I see this error:

Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp: lookup kubernetes-charts.storage.googleapis.com on 10.96.0.10:53: read udp 192.168.161.134:52213->10.96.0.10:53: i/o timeout

How do I resolve that?

Regards

Sorry, I have not been checking on this thread. I have not seen these errors before, but I think they may be related to access to the charts over the public network. Where is your Kubernetes cluster? And your GitLab installation? 10.96.0.10 and 192.168.161.134 are both private IPs, so the request seems to be timing out against a local (in-cluster) DNS server while resolving the name. I can get to kubernetes-charts.storage.googleapis.com from where I am. :man_shrugging:
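If it is in-cluster DNS, one way to narrow it down is to run the same lookup from a throwaway pod. A sketch (busybox’s nslookup is crude but usually enough):

# Run a one-off pod and attempt the lookup the Tiller installer makes.
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup kubernetes-charts.storage.googleapis.com

# Also check that the cluster DNS pods are running (10.96.0.10 is
# typically the kube-dns/CoreDNS service IP):
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns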

Sorry, have not been following this thread lately.
You might try to narrow down what’s failing by:

  1. See if the gitlab-managed-apps namespace was created:
    kubectl get namespaces
  2. See if the tiller pod was created:
    kubectl get pods --namespace gitlab-managed-apps
  3. See if you can get the pod logs for the tiller pod:
    kubectl logs tiller-deploy-<specific values from previous get pods> --namespace gitlab-managed-apps

I’ve found that a lot of the time that pod disappears very quickly on failure, so you may need to run the kubectl logs command repeatedly to capture the log before the container is removed, or try the --previous option; a loop for that is sketched below. Post results back here.
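A crude way to do that repeated capture, assuming the standard Tiller pod labels (app=helm, name=tiller):

# Poll the tiller pod's logs until you've caught the failure output.
while true; do
  kubectl logs --namespace gitlab-managed-apps -l app=helm,name=tiller
  sleep 1
done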

Hi,
where should we put the kubeconfig configuration on an on-prem GitLab installation?
It seems the calls to the cluster are made by the git user, but this user does not have a home directory.