Before I get too far ahead of myself, the basic business need that I am trying to solve is to get to what is shown in the several “idea to production” videos that have been posted in the last several months. That holy grail idea of “click a button, make a new repo, talk amongst yourselves, commit a fix, hook up a CI pipeline, voila” is super attractive to me (in both web apps and in embedded systems). If I miss any super obvious things along the way, or if I am at all unclear in my very long rambling, I apologize profusely and invite you to please point them out so that I can learn and communicate better next time.
I currently have gitlab and mattermost installed on an Ubuntu box via the omnibus apt package, which my Debian web box is reverse proxying via a manually configured virtualhost. Less than ideal for several reasons, not the least of which is that this box gets rebooted and used for other things periodically (sometimes compiling Yocto for a few hours for embedded Linux distros; for some reason the SD card reader quits working and I have to power the machine all the way down to get it working again). The gitlab and mattermost install is not getting much use, for those reasons as well as the lack of CI/CD. I want to put gitlab on the web box instead, which is running another website that needs to continue running. I also want to be able to “deploy” new web apps to the box without having to manually configure new virtualhost directives. So I know I need to make the website a “repo” in Gitlab and turn the box into a cluster that supports such CI/CD type stuff. Not sure if that’s called DevOps, but it sounds good.
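(For context, the kind of hand-maintained reverse-proxy virtualhost I want to stop writing is roughly this; hostnames and the backend address are placeholders, not my actual config:)

```apache
# Sketch of the manually configured VirtualHost on the Debian web box;
# ServerName and the upstream address are placeholders.
<VirtualHost *:80>
    ServerName gitlab.example.com
    ProxyPreserveHost On
    ProxyPass / http://ubuntu-box.lan:80/
    ProxyPassReverse / http://ubuntu-box.lan:80/
</VirtualHost>
```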
In other words, I guess what I’m trying to get to is doing this in house with our website and various web apps used internally:
I was totally inspired by the one done last year, with the OpenShift template that one-click-installs Gitlab, Mattermost, etc. So I tried with OpenShift first, running in a Docker image on a local machine. I was able to get one machine up and running with that configuration, and was even able to make new repos and have it auto-deploy to the OpenShift cluster. Wasn’t much of a stretch to add a Dockerfile to my existing website repo which I imported into it, and have it magically generate a new pod and route it out to the *.127.0.0.1.xip.io. But it was indicated at the time several months ago that this setup isn’t ready for production prime time yet, so I left it.
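(The Dockerfile I added was nothing fancy; roughly the following, assuming a static site, with the paths being placeholders for the real repo layout:)

```dockerfile
# Minimal image for the existing website; the COPY source path is a
# placeholder for wherever the site's files actually live in the repo.
FROM nginx:alpine
COPY public/ /usr/share/nginx/html/
EXPOSE 80
```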
According to the release today, https://about.gitlab.com/2017/05/22/gitlab-9-2-released/#official-gitlab-installation-on-kubernetes it’s supposedly out of the box, minimal setup, but I’m either doing something wrong, or it’s only “minimal setup” for someone who’s done it dozens of times and has the steps down pat.
So next I tried to run a Kubernetes cluster on a local machine (via minikube) to play around with it. I got as far as getting Gitlab up and running on a cluster-internal IP (10.something.really.random, with a random port somewhere in the 30k range). I could not figure out how to get from there to the idea-to-production demo (e.g. having everything on a wildcard DNS like *.127.0.0.1.xip.io - gitlab at gitlab.127.0.0.1.xip.io, CI/CD deploying to *.apps.127.0.0.1.xip.io). I installed nginx-ingress in my minikube install, but I guess minikube doesn’t support the LoadBalancer service type?
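(My understanding is that once an ingress controller is running, each hostname needs an Ingress rule pointing at the right service; something like this, where the serviceName and servicePort are assumptions about what the chart creates:)

```yaml
# Hypothetical Ingress for the GitLab hostname (extensions/v1beta1 is
# the Ingress API version current as of this writing). The backend
# serviceName/servicePort are guesses at what the chart deploys.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab
spec:
  rules:
  - host: gitlab.127.0.0.1.xip.io
    http:
      paths:
      - backend:
          serviceName: gitlab
          servicePort: 80
```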
kubectl get svc only ever showed external IP as Pending
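(From what I’ve read since, minikube has no cloud load balancer behind it, so a type: LoadBalancer service stays Pending forever; the service is still reachable through its NodePort. The service name here is an assumption:)

```shell
# LoadBalancer external IPs never get assigned on minikube; reach the
# service via its NodePort instead (service name assumed to be "gitlab"):
minikube service gitlab --url    # prints http://<minikube-ip>:<nodeport>

# And for ingress, minikube ships its own addon rather than a cloud LB:
minikube addons enable ingress
```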
So I went to try to install a Kubernetes cluster in a brand spanking new VM. I started with a Debian VM, since if I could get it running on that, it should be straightforward to do the same thing on the actual web box. I cloned Kubernetes and built from source based on these instructions (but installed newer versions of etcd and Go, and compiled with make rather than make release, since I didn’t give my VM enough space for all the things). Then I continued with the Helm chart instructions, and here’s how I try to get things up and running:
env ENABLE_HOSTPATH_PROVISIONER=true ALLOW_PRIVILEGED=true kubernetes/hack/local-up-cluster.sh
(because gitlab uses hostpath volumes, and because it is a privileged container, from my understanding)
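(Once the script reports ready, I sanity-check with plain kubectl; local-up-cluster.sh prints the exact KUBECONFIG path to export, which for my run looked like this:)

```shell
# local-up-cluster.sh tells you which kubeconfig to use; the path below
# is what my run printed and may differ on other setups.
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
kubectl get nodes           # expecting one Ready node
kubectl get storageclass    # if my reading of ENABLE_HOSTPATH_PROVISIONER
                            # is right, a hostpath class should appear here
```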
and in another window after it’s up and running:
helm init
helm install -f gitlab/values.yaml --name gitlab gitlab/gitlab
(values.yaml is pretty well bone stock; it just has externalUrl configured for gitlab.127.0.0.1.xip.io in the unlikely event I manage to get this thing working, and I set the image to 9.2.0 for funsies, even though on previous attempts 9.1.2 did not work, and neither did 9.1.4)
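(Concretely, the only lines I touched in values.yaml are roughly these; the key names are from my copy of the chart and worth double-checking against helm inspect values gitlab/gitlab:)

```yaml
# The only values I changed; everything else is chart defaults.
# Key names per my copy of the chart — verify with `helm inspect values`.
externalUrl: http://gitlab.127.0.0.1.xip.io/
image: gitlab/gitlab-ce:9.2.0-ce.0
```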
Basically I cannot get Gitlab running at all on my VM Kubernetes cluster: it hits a read-only file system error and then vomits on a shell command that failed: https://pastebin.com/CtrAAc3u
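(In case it helps anyone reproduce, this is how I pulled that error out; it’s generic kubectl triage, with the pod name being a placeholder for whatever kubectl get pods actually lists:)

```shell
# Standard kubectl debugging for a crashing pod; "gitlab-..." is a
# placeholder for the real generated pod name.
kubectl get pods
kubectl describe pod gitlab-3298158567-abcde     # events show mount/permission failures
kubectl logs gitlab-3298158567-abcde --previous  # output from the crashed container
```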
What am I missing? I thought I was better at Linux than this, but the last few years’ explosion of containers and whatnot has me seeing more stars than an Ellen selfie. Do I just need to bite the bullet and cough up the dough for a drop of the Google/Microsoft/Amazon cloud, or is there actually a step-by-step process documented somewhere that will turn a piece of metal I can touch into a cluster of something (other than clusterfsck) which can both run gitlab AND run other things (such as apps in gitlab repos that are CI/CD-deployed)?
I would be super appreciative of anyone that can lead me to the koolaid, I am thirsty and quite ready to drink it.