Struggling to install a Kubernetes cluster & Gitlab. Can anyone point me in the right direction?

Before I get too far ahead of myself, the basic business need that I am trying to solve is to get to what is shown in the several “idea to production” videos that have been posted in the last several months. That holy grail idea of “click a button, make a new repo, talk amongst yourselves, commit a fix, hook up a CI pipeline, voila” is super attractive to me (in both web apps and in embedded systems). If I miss any super obvious things along the way, or if I am at all unclear in my very long rambling, I apologize profusely and invite you to please point them out so that I can learn and communicate better next time.

I currently have gitlab and mattermost installed on an Ubuntu box via the omnibus apt package, which my Debian web box reverse proxies via a manually configured virtualhost. That's less than ideal for several reasons, not the least of which is that the Ubuntu box reboots and gets used for other things periodically (sometimes compiling Yocto for a few hours for embedded Linux distros; for some reason the SD card reader quits working and I have to power the machine all the way down to get it working again). The gitlab and mattermost install is not getting much use, for those reasons as well as the lack of CI/CD. I want to put gitlab on the web box instead, which is running another website that needs to keep running. I also want to be able to "deploy" new web apps to that box without manually configuring new virtualhost directives. So I figure I need to make the website a repo in Gitlab and turn the box into a cluster that supports this kind of CI/CD workflow. Not sure if that's called DevOps, but it sounds good.

In other words, I guess what I’m trying to get to is doing this in house with our website and various web apps used internally:

I was totally inspired by the one done last year, with the OpenShift template that one-click-installs Gitlab, Mattermost, etc. So I tried OpenShift first, running in a Docker image on a local machine. I was able to get one machine up and running with that configuration, and was even able to make new repos and have them auto-deploy to the OpenShift cluster. It wasn't much of a stretch to add a Dockerfile to my existing website repo, import it, and have it magically generate a new pod and route it out to the * wildcard domain. But it was indicated at the time, several months ago, that this setup wasn't ready for production prime time yet, so I left it.

According to the release today, it's supposedly out of the box with minimal setup, but I'm either doing something wrong, or it's only "minimal setup" for someone who's done it dozens of times and has the steps down pat.

So next I tried to run a Kubernetes cluster on a local machine (via minikube) to play around with it. I got as far as getting Gitlab up and running on a cluster-internal IP (10.something.really.random, with a random port somewhere in the 30k range). I could not figure out how to get from there to the idea-to-production demo (e.g. having everything on a wildcard DNS like *, with CI/CD deploying to *). I installed nginx-ingress into my minikube install, but I guess minikube doesn't support LoadBalancer services? kubectl get svc only ever showed the external IP as Pending.
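From what I've read since, minikube has no cloud provider behind it, so a `type: LoadBalancer` service will sit at Pending forever; a NodePort Service is the usual stand-in. A sketch of what that could look like (the name and selector below are placeholders; match them to whatever labels the gitlab chart actually puts on its pods):

```yaml
# Stand-in for a LoadBalancer on minikube: a NodePort Service.
# name/selector are placeholders for the chart's actual labels.
apiVersion: v1
kind: Service
metadata:
  name: gitlab-nodeport
spec:
  type: NodePort
  selector:
    app: gitlab
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # must fall in the default 30000-32767 range
```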

So I went to install a Kubernetes cluster in a brand spanking new VM. I started with a Debian VM, since if I could get it running on that, it should be straightforward to do the same thing on the actual web box. I cloned Kubernetes and built from source based on these instructions (but installed newer versions of etcd and go, and compiled with make rather than make release - I didn't give my VM enough space for all the things). Then I continued with the helm chart instructions, and here's how I try to get things up and running:


(because gitlab uses hostPath volumes, and because it runs a privileged container, from my understanding)
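For the record, bringing up a single-node local cluster from a Kubernetes source checkout with privileged containers allowed looks roughly like this (assuming the hack/local-up-cluster.sh script and the ALLOW_PRIVILEGED flag from the upstream tree; adjust if your checkout differs):

```shell
# From the root of the Kubernetes source checkout: start a local
# single-node cluster, with privileged containers allowed so the
# gitlab chart's hostPath/privileged pods can run.
ALLOW_PRIVILEGED=true hack/local-up-cluster.sh
```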

and in another window after it’s up and running:

helm init
helm install -f gitlab/values.yaml --name gitlab gitlab/gitlab

(values.yaml is pretty well bone stock; it just has externalUrl configured, in the unlikely event I manage to get this thing working, and I set the image to 9.2.0 for funsies, even though on previous attempts neither 9.1.2 nor 9.1.4 worked)
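Concretely, the only overrides in the otherwise-stock values.yaml are along these lines (key names assumed to match the chart's defaults, and my-domain is a placeholder for the real hostname):

```yaml
# The only deviations from the chart's stock values.yaml
# (key names assumed from the chart defaults; my-domain is a placeholder):
externalUrl: http://my-domain
image: gitlab/gitlab-ce:9.2.0-ce.0   # 9.1.2 and 9.1.4 also failed for me
```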

Basically I cannot get Gitlab running at all on my VM Kubernetes cluster; it's getting a read-only file system error and then vomiting on a shell command that failed:

What am I missing? I thought I was better at Linux than this, but the last few years' explosion of containers and whatnot has me seeing more stars than an Ellen selfie. Do I just need to bite the bullet and cough up the dough for a drop of the Google/Microsoft/Amazon cloud, or is there actually a step-by-step process documented somewhere that will turn a piece of metal that I can touch into a cluster of something (other than clusterfsck :slight_smile: ) which can both run gitlab AND run other things (such as apps in gitlab repos that are CI/CD-deployed)?

I would be super appreciative of anyone that can lead me to the koolaid, I am thirsty and quite ready to drink it.


Hi Fury,

Check out CoreOS Tectonic. I've had decent luck with it on bare metal (well, VMware VMs actually), and you can run up to a 10-node k8s cluster for free.

Before installing full Gitlab I’d try and make sure you could deploy and use a simple app container or two…

I’m also struggling to get GitLab fully operational on K8s using the official helm chart instructions here and here.

It works with http://, but https:// keeps looking for the matching TLS certificates even when nginx['listen_https'] = false.

Fresh install to minikube via helm

minikube start --memory 4096 --profile gitlab-ce

minikube profile gitlab-ce

helm init

helm install --namespace kube-system --name gitlab-ingress-ctrl stable/nginx-ingress

kubectl create -f secrets.yaml

helm install --namespace gitlab-ce \
  --set externalUrl=https://my-domain \
  --name gitlab-ce \
  -f values.yaml \
  stable/gitlab-ce


apiVersion: v1
kind: Namespace
metadata:
  name: gitlab-ce
  labels:
    name: gitlab-ce
---
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-tls
  namespace: gitlab-ce
type: kubernetes.io/tls
data:
  tls.crt: |
  tls.key: |
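One gotcha with secrets.yaml: the tls.crt / tls.key values under data: have to be base64-encoded, or you can skip hand-editing entirely and let kubectl build the Secret for you. A quick sketch of the encoding, using a stand-in file in place of the real certificate:

```shell
# Values under "data:" in a Secret must be base64-encoded, single line.
# Demo with a stand-in file; point it at your real cert/key instead.
printf 'FAKE CERT' > my-domain.crt
crt_b64=$(base64 -w0 < my-domain.crt)   # -w0: no line wrapping (GNU coreutils)
echo "$crt_b64"                         # prints RkFLRSBDRVJU
# Alternatively, let kubectl do the encoding for you:
#   kubectl create secret tls gitlab-tls -n gitlab-ce \
#     --cert=my-domain.crt --key=my-domain.key
```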


imagePullPolicy: IfNotPresent
externalURL: https://my-domain
omnibusConfigRuby: |
  # These are the settings needed to support proxied SSL
  nginx['listen_port'] = 80
  nginx['listen_https'] = false
  nginx['proxy_set_headers'] = {
    "X-Forwarded-Proto" => "https",
    "X-Forwarded-Ssl" => "on"
  }
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - my-domain
  ## Secrets must be created in the namespace, and it is not done for you in this chart
  tls:
    - secretName: gitlab-tls
      hosts:
        - my-domain
serviceType: NodePort


2017/06/21 03:48:18 [emerg] 1334#0: BIO_new_file("/etc/gitlab/ssl/my-domain.crt") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/gitlab/ssl/my-domain.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)


Tried manually creating a ConfigMap with the omnibus ruby config settings:

kubectl create -f configmap.yaml


apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
  namespace: gitlab-ce
  labels:
    app: gitlab-ce-gitlab-ce
data:
  ## This is used by GitLab Omnibus as the primary means of configuration.
  ## ref:
  gitlab_omnibus_config: |
    nginx['listen_port'] = 80;
    nginx['listen_https'] = false;
    nginx['proxy_set_headers'] = {"X-Forwarded-Proto" => "https", "X-Forwarded-Ssl" => "on"};


Also tried commenting out the lines about tls in values.yaml, just above serviceType.

I decided to take another stab at it a few days ago and did a new setup using kubeadm. It seems I'm getting close, but I don't know which persistent volume provisioner to use (I'd rather not use hostPath and be screwed if that machine dies), so my Gitlab setup on that cluster can't proceed. I guess k8s 1.7 is getting support for StorageOS so that the storage goes cluster-wide? Or I could somehow install it on 1.6. Not sure.
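Whatever provisioner I end up with, my understanding is that the chart's PersistentVolumeClaims will bind automatically as long as some StorageClass is marked as the cluster default, along these lines (the provisioner name is a placeholder; on 1.6 the default-class annotation is still the beta one):

```yaml
# Mark a StorageClass as the cluster default so PVCs bind without
# naming a class explicitly. The provisioner is a placeholder here;
# swap in whichever one the cluster actually runs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: example.com/my-provisioner
```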

Interesting…I tried out Container Linux but wasn't having much luck with that either (though I assume part of the problem was just that I was trying it on VMs and not bare metal). I assume they've built Tectonic to solve that problem for me? I'll give it a shot :slight_smile: Thanks!

I figured it out. I should have been using the gitlab/gitlab helm chart described here rather than the stable/gitlab-ce chart from official-helm.

helm repo add gitlab
helm init

Just thought I’d give an update for anyone stumbling upon this thread later on…

I’ve (finally) gotten a cluster up and running and installed the gitlab-omnibus chart AND learned how to copy a backup to it so that I could restore. Storage-wise, I figured hostPath wasn't the greatest idea; I wanted something that worked well with Kubernetes and had some replication to avoid a SPOF. I gave up on StorageOS, tried and failed to set up GlusterFS, and finally happened upon Rook, which was the answer for me (at least the best I've found so far). It manages Ceph behind the scenes.
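For context, consuming Rook-managed Ceph block storage boils down to a StorageClass pointing at Rook's provisioner, roughly like this (the provisioner and pool names follow the Rook 0.x quickstart of that era; treat them as assumptions and check the docs for your Rook version):

```yaml
# StorageClass backed by Rook's block provisioner, so the gitlab
# chart's PVCs get replicated Ceph volumes instead of hostPath.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-block
provisioner: rook.io/block
parameters:
  pool: replicapool
```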

Check this snippet out for how I set up my cluster (as best I can reconstruct it; let me know if you try these steps and run into a problem, I'd be happy to help):

Next obstacle is getting CI to work so I can auto deploy stuff from gitlab: Review step fails in auto deploy on kubernetes executor (bare metal cluster)

Many thanks to the good folks I ran into at the Kubernetes slack and the Rook gitter that helped me. I’m also hanging out at the Gitlab gitter in case anyone prefers chat to forums.

How did you manage to restore a backup of gitlab? As far as I know, there is still no way to mount a Ceph file system managed by rook on the host (issue on GitHub).

Did you use the “test the storage” step as described in the rook documentation to add your backup to the volume and restore it?