Using resource_group across multiple repositories that share a runner

We have a self-hosted runner (v17.5.3) on a server with 12 cores and 24 GB RAM. Multiple projects use the same runner.

Versions

  • Self-managed
  • Self-hosted Runners

Two of these projects have CI jobs that are resource-intensive and run parallel processes.

Using the resource_group: keyword in .gitlab-ci.yml, we are able to make sure the runner will not pick up a CI job from Project1 while another Project1 job (from another branch) is already running.
A minimal sketch of the relevant configuration (the original snip was a screenshot, so the job name, stage, and resource_group value below are illustrative placeholders):
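# .gitlab-ci.yml in Project1 (sketch — placeholder names)
heavy-build:
  stage: build
  resource_group: project1-heavy   # only one job in this group runs at a time
  script:
    - ./run_heavy_build.sh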

With this in place, the second job goes into a Waiting state until the running job finishes.

The same resource_group: configuration in Project2's .gitlab-ci.yml ensures the runner will not pick up a CI job from Project2 while another Project2 job (from another branch) is already running.

Is there a way to make jobs from Project1 go into a Waiting state when a job from Project2 is already running, and vice versa?

We want jobs from other projects (besides Project1 and Project2) to keep running in parallel even while a resource-intensive job from Project1 or Project2 is running. We only want to restrict jobs from Project1 and Project2 to one at a time, with any further job from either project queuing until the running one finishes. (One possible direction we are unsure about is sketched after the config.toml below.)

The config.toml file is as follows:

Configuration


concurrent = 4
check_interval = 0
connection_max_age = "15m0s"
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "dummy-value"
  url = "dummy-value"
  id = dummy-value
  token = "dummy-value"
  token_obtained_at = dummy-value
  token_expires_at = dummy-value
  executor = "docker"
  environment = ["dummy-value", "dummy-value", "HTTPS_PROXY=dummy-value", "HTTP_PROXY=dummy-value", "MAVEN_OPTS=-DproxyHost=dummy-value -DproxyPort=8080"]
  request_concurrency = 4
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    network_mtu = 0
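
Would something along these lines be a sane way to get that behavior: a second [[runners]] registration on the same host that only the heavy jobs from Project1 and Project2 target, with limit = 1 so that entry handles at most one job at a time? This is just a sketch with placeholder values — the name, the limit setting, and the tag-based routing are our assumptions, not something we have tested:

[[runners]]
  name = "heavy-jobs-runner"   # hypothetical second registration on the same host
  url = "dummy-value"
  token = "dummy-value"        # its own token, separate from the entry above
  executor = "docker"
  limit = 1                    # this entry picks up at most one job at a time
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = false
    volumes = ["/cache"]

The heavy jobs in both projects would then carry a matching tag in their .gitlab-ci.yml (with this runner set to run tagged jobs only in the GitLab UI), while every other project keeps using the existing entry and runs in parallel as before.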