I feel like I’m just missing it, but is there no way to configure the Kubernetes executor to maintain a minimum number of available job/build pods? I’m running in EKS and use Karpenter to scale automatically when needed. Although I also run overprovisioning (roughly the setup sketched at the end of this post), it never seems to be quite enough, so I end up wasting minutes waiting for a new node so that a job/build pod can be placed.

I would love to be able to configure a minimum number of pods that the executor keeps available and reuses. For example, if no job is currently running, one pod should be up and ready to accept a job when one arrives. When a job comes in, the executor places it on the waiting pod and creates a new build/job pod, killing one of the pods if the job finishes before another arrives.
I feel like this was a thing at some point, but I am unable to find anything now. Please let me know if it exists; otherwise, I will file a feature request.
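For reference, my overprovisioning is roughly the standard low-priority pause-pod pattern sketched below; the names and resource requests here are illustrative rather than exactly what I run:

```yaml
# Priority class below the default so real job pods preempt the placeholders.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: ci-overprovisioning   # illustrative name
value: -10
globalDefault: false
description: Placeholder pods that hold spare capacity for CI job pods.
---
# Pause pods sized roughly like a build pod. A real job pod preempts one of
# these, and Karpenter then brings up a replacement node for the evicted pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ci-overprovisioner    # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ci-overprovisioner
  template:
    metadata:
      labels:
        app: ci-overprovisioner
    spec:
      priorityClassName: ci-overprovisioning
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "2"
              memory: 4Gi
```

Even with that in place, a cold node start still shows up whenever jobs arrive faster than the placeholders get replaced, which is why a pool of warm, reusable build pods managed by the executor itself would help.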