Resource isolation for Docker executor and NFS volume driver

Hello,
we run a GPU server with a single GitLab Runner that maintains one Docker executor per GPU on the same host. As local storage is running low, I am now trying to use the NFS options of Docker's local volume driver to put all build folders on our storage server. Here is the relevant section of our config.toml:

[[runners]]
  name = "********"
  limit = 1
  output_limit = 102400
  url = "******************"
  id = 376
  token = "*************"
  token_obtained_at = **************
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "docker"
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    gpus = "device=3"
    shm_size = 8589934592
    disable_cache = false
    [runners.docker.volume_driver_ops]
      "o" = "addr=10.0.0.63,rw"
      "device" = ":/my/share" 
      "type" = "nfs"

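If it helps with diagnosis: as far as I understand, the runner creates its build volumes with the driver ops above, so every job's /builds becomes an NFS mount of the entire export. Done by hand, that would correspond to roughly the following (builds-nfs is just a placeholder name):

docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.63,rw \
  --opt device=:/my/share \
  builds-nfs

# any container mounting this volume sees the whole share
docker run --rm -v builds-nfs:/builds alpine:latest ls /builds
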
The CI jobs can mount their respective build volumes under /builds as before, but as soon as we add the NFS-related section, each volume contains all build folders of all runners, i.e. the entire NFS share /my/share. A build job on executor A runs in /builds/token-id/0/my/namespace/project1 and one on executor B in /builds/token-id/0/my/namespace/project2, yet each container can also access the other's folder. How can we achieve the same behavior as with the local Docker volume driver, where each /builds/token-id/0/my/namespace/project is strictly isolated?
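For completeness, the only mitigation I could come up with so far is giving each [[runners]] entry its own subdirectory of the export (untested sketch; runner-a is a placeholder and this assumes the server allows mounting subpaths of /my/share):

    # hypothetical per-runner variant; runner-a is a placeholder subdirectory
    [runners.docker.volume_driver_ops]
      "o" = "addr=10.0.0.63,rw"
      "device" = ":/my/share/runner-a"
      "type" = "nfs"

But that would only separate runners from one another; all jobs handled by the same runner would still share one directory tree, so it is not equivalent to the per-volume isolation we get from the local driver.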

Best
Dennis