GitLab CI on AWS EKS Fargate

My end goal is to have a K8s cluster (AWS EKS) using Fargate to autoscale builds.

Right now I do builds using GitLab Runners installed (via Docker) on a few machines, but those machines obviously need to be up all the time and there’s a limited number of them. Ideally, up to some maximum number of builds could happen in parallel, all in Fargate.

So, my first attempt was to get a cluster up and running, but installing the GitLab Runner failed because there are apparently no nodes on which to run it. I had assumed this would all allocate a pod for me, since it’s attempting to spawn a “thing”, but maybe I need at least one EC2 instance attached to the cluster to “host” the runner?

I’m probably asking the wrong question the wrong way since I’m pretty new to all of this, but that’s where I’m stuck. The runner won’t install in the cluster (it lists as “pending” with the error “0/2 nodes are available: 2 Too many pods.”).

I’m also wondering whether it’ll even work once it does get installed. Will the runner actually use Fargate to spawn the pods that do the builds? Advice welcome.

Hi @paul.braman,
Please see the official docs for Fargate.
As stated in the docs, you need an EC2 instance to host the GitLab Runner daemon.
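If it helps, a minimal eksctl config for that usually looks something like this (a sketch only; the cluster name, region, and instance type are placeholders for whatever you already have):

```yaml
# runner-host.yaml (sketch): adds a small managed node group to an existing cluster.
# Apply with: eksctl create nodegroup -f runner-host.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster      # placeholder: must match your existing EKS cluster
  region: us-east-1     # placeholder
managedNodeGroups:
  - name: runner-host
    instanceType: t3.small
    desiredCapacity: 1  # one small node is enough to host the runner manager
    minSize: 1
    maxSize: 1
```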

Well …

I installed an EC2 instance as a single-instance node group using eksctl (just to host the GitLab Runner), but the runner seems to be spawning the build jobs on that EC2 instance instead of on Fargate.

So something’s still not exactly correct here.
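If I understand Fargate profiles correctly, a pod only gets scheduled onto Fargate when its namespace (and optionally its labels) match one of the profile’s selectors, so my guess is that the runner is launching job pods in a namespace my profile doesn’t cover. Something like this is what I think it would need (a sketch; the names are placeholders):

```yaml
# fargate-profile.yaml (sketch): tells EKS which pods to run on Fargate.
# Create with: eksctl create fargateprofile -f fargate-profile.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster      # placeholder
  region: us-east-1     # placeholder
fargateProfiles:
  - name: ci-builds
    selectors:
      - namespace: ci-builds   # pods created in this namespace land on Fargate
```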

Please note that the page linked above talks about ECS, not EKS, when discussing Fargate. Maybe I need to rethink this whole thing and use ECS instead of EKS.

@paul.braman I’m looking into the same problem. Did you manage to make it work?

I have done exactly the same thing.

Create EKS > add Node Group and Fargate profile > deploy Gitlab-runner > add gitlab-ci Jobs
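In case it helps, the relevant part of my Helm values looked roughly like this (a sketch; the URL and token are placeholders, and the layout may differ between chart versions). The point is to pin the job pods to the namespace the Fargate profile selects, while the runner manager itself stays on the node group:

```yaml
# values.yaml (sketch): gitlabUrl and the token are placeholders.
# Install with:
#   helm repo add gitlab https://charts.gitlab.io
#   helm install gitlab-runner gitlab/gitlab-runner -f values.yaml
gitlabUrl: https://gitlab.example.com/
runnerRegistrationToken: "REDACTED"
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        # job pods go here; must match the Fargate profile selector
        namespace = "ci-builds"
```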

The process was almost entirely successful, except that I cannot resolve DNS from inside the Fargate pods.

I have seen a solution in the AWS documentation that changes the CoreDNS compute-type value, but it doesn’t fit my purpose: it moves the coredns pods onto Fargate as well, and I want to keep the number of Fargate pods at zero whenever no gitlab-ci job exists.
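For reference, that fix removes the compute-type annotation so the coredns pods themselves get rescheduled onto Fargate, roughly like this, which means permanent Fargate pods and is exactly what I am trying to avoid:

```yaml
# coredns-patch.yaml: setting the annotation to null removes it, which lets
# the coredns pods be scheduled onto Fargate.
# Apply with:
#   kubectl patch deployment coredns -n kube-system --patch-file coredns-patch.yaml
#   kubectl rollout restart deployment coredns -n kube-system
spec:
  template:
    metadata:
      annotations:
        eks.amazonaws.com/compute-type: null
```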

Now gitlab-ci jobs are created automatically, but they cannot reach the outside world by domain name to install additional Linux packages.

The one exception is that Docker images can still be pulled normally from the Fargate pods.
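For context, the jobs themselves are nothing exotic. A minimal example of the failure mode (image and package names are just illustrative):

```yaml
# .gitlab-ci.yml: the alpine image itself pulls fine, but apk fails to
# resolve the package mirror's hostname from inside the Fargate pod.
build:
  image: alpine:3.18
  script:
    - apk add --no-cache curl   # fails here with a DNS resolution error
```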

I assume there is some monitoring service that exports cluster utilization to AWS, which then manages node scaling via an ASG. Is there a specific reason you want to put CPU and RAM restrictions on your jobs? Unless you expect memory leaks or hung jobs… At least in GitLab, hung jobs can be handled via a job timeout. Otherwise, I believe your jobs require what they require. Limiting resources, and perhaps killing jobs as a result, is counterproductive because those jobs are expensive. Bill the responsible team in detail and let them decide whether the expense is prohibitive.
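For the timeout part, GitLab supports a per-job `timeout` keyword in `.gitlab-ci.yml` (plus a project-wide default in the CI/CD settings), e.g.:

```yaml
# .gitlab-ci.yml: cap how long a job may run before GitLab kills it.
build:
  script:
    - make build
  timeout: 30 minutes
```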