Shared GitLab runner fails to fetch Git repository

Dear GitLab users and team,

I am encountering a strange problem with gitlab-runner on Kubernetes. For some reason, and not every time, the runner fails to fetch the Git repository, as shown below:

Running with gitlab-runner 15.0.0 (febb2a09)
  on dev-l-privileged-large-gitlab-runner-54f7959848-pwrn2 gphY8WZz
Resolving secrets
00:00
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image fedora:36 ...
Using attach strategy to execute scripts...
Preparing environment
00:04
Waiting for pod gitlab-runner/runner-gphy8wzz-project-83-concurrent-0b6pfl to be running, status is Pending
Running on runner-gphy8wzz-project-83-concurrent-0b6pfl via dev-l-privileged-large-gitlab-runner-54f7959848-pwrn2...
Getting source from Git repository
00:00
Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/a_project/.git/
Created fresh repository.
fatal: unable to access 'https://gitlab.foo.fr/a_project.git/': Failed to connect to gitlab.foo.fr port 443 after 8 ms: Connection refused
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1

Even stranger, the failure happens on only one of the two jobs.

The log above was captured through the GitLab web interface.
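
Since only one of the two jobs fails, it looks node-dependent. A quick way to test connectivity from a specific node is to start a throwaway curl pod pinned to it; this is only a sketch, and the node name below is a placeholder to be replaced by the node that ran the failing job:

# "my-node-1" is a placeholder: use the node that ran the failing job
kubectl run net-debug --rm -it --restart=Never \
  --image=curlimages/curl \
  --overrides='{"spec":{"nodeName":"my-node-1"}}' \
  --command -- curl -v https://gitlab.foo.fr/

If the same "Connection refused" appears here, the problem is between that node and the GitLab host rather than in the runner itself.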

The values.yaml used for the Kubernetes gitlab-runner:

gitlabUrl: "https://gitlab.foo.fr/"
imagePullPolicy: IfNotPresent
unregisterRunners: true
concurrent: 2
checkInterval: 10


## Configure integrated Prometheus metrics exporter
## ref: https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server
metrics:
  enabled: true
  service:
    enabled: true
  serviceMonitor:
    enabled: true

## Configuration for the Pods that the runner launches for each new job
##
runners:
  ## Default container image to use for builds when none is specified
  ##
  image: rockylinux:8.5
  
  privileged: true
  tags: "privileged,large"
  runUntagged: false

  ## Configure environment variables that will be injected to the pods that are created while
  ## the build is running. These variables are passed as parameters, i.e. --env "NAME=VALUE",
  ## to the gitlab-runner register command.
  ##
  ## Note that envVars (see below) are only present in the runner pod, not the pods that are
  ## created for each build.
  ##
  ## ref: https://docs.gitlab.com/runner/commands/#gitlab-runner-register
  ##
  env:
    HOME: /tmp

  config: |
    [[runners]]
      [runners.kubernetes]
        privileged = true
        # build container
        cpu_limit = "2"
        memory_limit = "5Gi"
        # service containers
        service_cpu_limit = "1"
        service_memory_limit = "1Gi"
        # helper container
        helper_cpu_limit = "1"
        helper_memory_limit = "1Gi"
      [runners.kubernetes.volumes]
        [[runners.kubernetes.volumes.host_path]]
          name = "var-dbus"
          host_path = "/var/run/dbus"
          mount_path = "/var/run/dbus"
          read_only = false
        [[runners.kubernetes.volumes.host_path]]
          name = "run-dbus"
          host_path = "/run/dbus"
          mount_path = "/run/dbus"
          read_only = false


## Configure environment variables that will be present when the registration command runs
## This provides further control over the registration process and the config.toml file
## ref: 
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html
##
envVars:
  - name: HOME
    value: /home/gitlab-runner
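
For reference, the config.toml that the chart renders from these values can be inspected directly in the runner manager pod (the pod name comes from the job log above; the config path is the chart's usual default, so adjust it if your setup differs):

# Pod name taken from the job log; config path assumed from the chart defaults
kubectl -n gitlab-runner exec \
  dev-l-privileged-large-gitlab-runner-54f7959848-pwrn2 -- \
  cat /home/gitlab-runner/.gitlab-runner/config.toml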

Thanks for your help

I found the cause: it was fail2ban. It had banned the external IP of one node, so whenever a job was scheduled on that banned node, the job failed immediately.
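
In case it helps someone else: on the GitLab host, fail2ban-client can show which jail banned the node's external IP and lift the ban. The jail name and IP below are only placeholders:

# List the jails, then inspect the one that shows the ban
sudo fail2ban-client status
sudo fail2ban-client status sshd        # "sshd" is just an example jail name

# Unban the node's external IP (placeholder address)
sudo fail2ban-client set sshd unbanip 203.0.113.10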