I want to create a CI job with multiple services and set resource requests/limits individually for each service container in the job's Pod.
```yaml
my-job:
  stage: test
  image: node:14
  services:
    - name: company/big-fat-db
      alias: bigdb
    - name: company/tiny-app
      alias: tinyapp
  script:
    - run-some-test.sh
  variables:
    KUBERNETES_SERVICE_MEMORY_REQUEST: 6Gi
    KUBERNETES_MEMORY_REQUEST: 2Gi
```
The problem is that the setup described above creates the job Pod with `resources.requests` set to 6GiB for both service containers, but I need a lower resource request for the tinyapp service and the larger value only for bigdb.
The same applies to CPU requests/limits.
The main issue is that the final Pod definition demands twice as much CPU/RAM as it actually needs, and the Pod is not scheduled due to insufficient resources.
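To illustrate, the Pod that gitlab-runner generates ends up looking roughly like this (a simplified sketch; the container names are illustrative, not the runner's actual naming):

```yaml
# simplified sketch of the generated job Pod (names illustrative)
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: build
      image: node:14
      resources:
        requests:
          memory: 2Gi   # from KUBERNETES_MEMORY_REQUEST
    - name: svc-bigdb
      image: company/big-fat-db
      resources:
        requests:
          memory: 6Gi   # from KUBERNETES_SERVICE_MEMORY_REQUEST
    - name: svc-tinyapp
      image: company/tiny-app
      resources:
        requests:
          memory: 6Gi   # the same value is applied to every service container
```

So the scheduler sees 14GiB of total requests, even though tinyapp only needs a fraction of that.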
I checked both the GitLab docs and the gitlab-runner source code, expecting to find a per-service variable such as `KUBERNETES_SERVICE_n_MEMORY_REQUEST` (where n is the index of the service container as specified in the job definition), but to no avail.
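For illustration, this is the kind of hypothetical indexed-variable scheme I was hoping to find (to be clear, this syntax does not exist in gitlab-runner as far as I can tell):

```yaml
variables:
  # hypothetical per-service overrides by service index (does not exist today)
  KUBERNETES_SERVICE_0_MEMORY_REQUEST: 6Gi    # bigdb
  KUBERNETES_SERVICE_1_MEMORY_REQUEST: 512Mi  # tinyapp
```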
Is this even possible, or is there a workaround? I'm trying to rewrite old docker:dind-based jobs so they can use native GitLab services (implemented as multi-container Kubernetes Pods), and this is currently blocking me. Running DinD is very inefficient, as we know.
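For context, here is a simplified sketch of the kind of legacy job I'm migrating away from (assuming a typical non-TLS DinD setup; the images and commands are illustrative):

```yaml
# legacy DinD-style job (simplified): the test spins up its own containers
my-job-dind:
  stage: test
  image: docker:20.10
  services:
    - docker:20.10-dind
  variables:
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: tcp://docker:2375
  script:
    - docker run -d --name bigdb company/big-fat-db
    - docker run -d --name tinyapp company/tiny-app
    - run-some-test.sh
```

With native services, the two `docker run` lines become entries in `services:`, which is exactly why per-service resource control matters here.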
For the record, I'm using GitLab.com SaaS, but that shouldn't be relevant to this topic.