How to set Kubernetes resources per job service?

I want to create a CI job with multiple services and set resource requests/limits for containers in the job’s Pod specifically for each of these services.

For example:

my-job:
  stage: test
  image: node:14
  services:
  - name: company/big-fat-db
    alias: bigdb
  - name: company/tiny-app
    alias: tinyapp
  script:
  - run-some-test.sh
  variables:
    KUBERNETES_SERVICE_MEMORY_REQUEST: 6Gi
    KUBERNETES_MEMORY_REQUEST: 2Gi

The problem is that the setup described above creates a job Pod and applies a 6GiB `resources.requests` value to both service containers, but I need a lower resource request for the tinyapp service and want to keep the larger value only for bigdb.

The same applies to CPU requests/limits.

The main issue is that the final Pod definition requests twice as much CPU/RAM as it actually needs, and the Pod is not scheduled due to insufficient resources.
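To illustrate, here is a simplified sketch (my assumption, based on observing the runner's behaviour; container names like `svc-0` are illustrative) of the Pod spec the runner ends up generating with the variables above — note that the single `KUBERNETES_SERVICE_MEMORY_REQUEST` value is applied to every service container:

```yaml
# Simplified sketch of the runner-generated job Pod.
# KUBERNETES_SERVICE_MEMORY_REQUEST applies to ALL service
# containers, so both services request 6Gi:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: build          # node:14 job container
    resources:
      requests:
        memory: 2Gi      # from KUBERNETES_MEMORY_REQUEST
  - name: svc-0          # company/big-fat-db (alias bigdb)
    resources:
      requests:
        memory: 6Gi      # from KUBERNETES_SERVICE_MEMORY_REQUEST
  - name: svc-1          # company/tiny-app (alias tinyapp)
    resources:
      requests:
        memory: 6Gi      # same value — cannot be set per service
```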

I checked both the GitLab docs and the gitlab-runner source code, expecting to find something like KUBERNETES_SERVICE_n_MEMORY_REQUEST (where n is the index of the service container as specified in the job definition), but to no avail.
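To make the idea concrete, this is the kind of indexed per-service variable I was hoping to find (purely hypothetical — these variables do not exist in gitlab-runner):

```yaml
# Hypothetical syntax — NOT supported by gitlab-runner;
# shown only to illustrate what I was looking for:
variables:
  KUBERNETES_SERVICE_0_MEMORY_REQUEST: 6Gi    # would apply to bigdb
  KUBERNETES_SERVICE_1_MEMORY_REQUEST: 512Mi  # would apply to tinyapp
```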

Is this even possible, or is there some workaround? I’m trying to rewrite old docker:dind-based jobs so that they use native GitLab services (implemented as multi-container Kubernetes Pods), and this is currently blocking me. Running DinD is, as we know, very inefficient.
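For completeness: on a self-managed runner, service resources can at least be set runner-wide in `config.toml` (key names per the Kubernetes executor documentation), but that value still applies to all service containers equally, so it doesn't solve the per-service problem either:

```toml
# Runner-wide defaults in config.toml ([runners.kubernetes]);
# these apply to EVERY service container in the job Pod:
[[runners]]
  [runners.kubernetes]
    service_cpu_request = "1"
    service_memory_request = "6Gi"
    # Jobs may override via KUBERNETES_SERVICE_MEMORY_REQUEST only
    # up to this ceiling (overwrite is disabled if unset):
    service_memory_request_overwrite_max_allowed = "8Gi"
```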

For the record, I’m using the GitLab.com SaaS, but that is not relevant to this topic.

Regards,
Robert


Looking at the Kubernetes executor interaction diagram: is there a POST that results in the service Pod being created, and if so, is there any chance the KUBERNETES_SERVICE_* variables are conveyed so that resources can be specified per service? Perhaps this could be a future feature (adding to the .gitlab-ci.yml service: properties and to the POST request)? Hopefully so, as this is a pretty compelling use case. Not all services are meant to be created equal…
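A per-service extension to the `services:` keyword might look something like this (purely speculative — not supported by GitLab today, property names are invented for illustration):

```yaml
# Speculative future syntax — NOT currently supported:
services:
- name: company/big-fat-db
  alias: bigdb
  resources:              # hypothetical per-service block
    memory_request: 6Gi
- name: company/tiny-app
  alias: tinyapp
  resources:
    memory_request: 512Mi
```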