Runner redirected to login page (can't get source code)


I recently spun up a self-managed gitlab container and am having some issues getting CI/CD working.

For my current setup:
GitLab hosted in Docker (gitlab/gitlab-ce) on Unraid, currently sitting bridged.
GitLab Runner hosted in Docker (gitlab/gitlab-runner) on Unraid, also sitting bridged, with my config.toml file and the Unraid docker.sock file passed in.

Then I have a relatively simple single project (/root/projectnamehere) with the following .gitlab-ci.yml (job name here is a placeholder, the structure is what matters):

  stages:
    - test

  test:
    stage: test
    script:
      - pip install --upgrade pip
      - pip install -r requirements.txt
      - python

For my runner configuration (config.toml) I have:

  [[runners]]
    name = "test1"
    url = "http://192.168.X.XYZ:9080/"
    token = "<A Shared Token>"
    clone_url = "http://192.168.X.XYZ/"
    executor = "docker"
    [runners.docker]
      tls_verify = false
      image = "python:3.7.9"
      privileged = false
      disable_entrypoint_overwrite = false
      oom_kill_disable = false
      disable_cache = false
      volumes = ["/cache"]
      shm_size = 0

Originally I had issues with name resolution, hence the addition of the clone_url. However, I now get the following error when attempting to run my job:

  • Getting source from Git repository
  • Fetching changes with git depth set to 50...
  • Reinitialized existing Git repository in /builds/root/projectnamehere/.git/
  • fatal: unable to update url base from redirection:
  • asked for: http://gitlab-ci-token:[MASKED]@192.168.X.XYZ/root/projectnamehere.git/info/refs?service=git-upload-pack
  • redirect: http://192.168.X.XYZ/login
  • ERROR: Job failed: exit code 1
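For reference, the redirect the runner is hitting can be reproduced outside of CI with a plain HTTP request against the repo's info/refs endpoint (same placeholder IP and project path as above):

  # Unauthenticated check of the git HTTP endpoint:
  curl -sI "http://192.168.X.XYZ/root/projectnamehere.git/info/refs?service=git-upload-pack"
  # A healthy private repo answers 401 Unauthorized (git then retries with
  # the CI token); a misrouted request answers 302 with
  # "Location: .../users/sign_in", which git surfaces as
  # "unable to update url base from redirection".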

I’ve rebuilt my runners, tried implementing SSH keys (which seems like it shouldn’t be necessary?), and tried different types of runners (shared, project-specific, etc.), and they all yield the same issue.

Is there something I’m missing in passing my docker containers? Is there more I need to modify in my config.toml?

Any help is greatly appreciated!

Hi @5sbwerewolf83, welcome to the community forum!

It sounds like the clone_url is causing an issue.

Can you comment out the clone_url = "http://192.168.X.XYZ/" line in your runner’s config.toml, run the job again, and see whether the problem persists?

Hi! Thank you for your reply and warm welcome to the forum :slight_smile:

Unfortunately, without that line I get a name resolution error:

  fatal: unable to access 'http://unraid:9080/root/projectnamehere.git/': Could not resolve host: unraid

I originally found this thread, which prompted me to try clone_url (fwiw, extra_hosts did not work). I figured the redirect was more progress than the name resolution error, so that was the path I went down.
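A quick way to check where the resolution actually breaks is to run the same lookup in the runner (manager) container and in a throwaway container on the same Docker host (sketch, using the unraid hostname from above):

  # Inside the runner (manager) container:
  getent hosts unraid

  # In a fresh container on the default bridge, like the ones jobs get:
  docker run --rm python:3.7.9 getent hosts unraid
  # If the first lookup succeeds and the second fails, the hostname fix
  # lives only in the manager container and never reaches the job containers.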

Hi again!

The first error sounds like your runner hit a 302 redirect to /users/sign_in when trying to connect to http://192.168.X.XYZ/.

The error we see now: fatal: unable to access 'http://unraid:9080/root/projectnamehere.git/': Could not resolve host: unraid seems like an issue resolving unraid to the correct internal IP address on your network.

What DNS/network settings are you using to point unraid:9080 to an internal IP (192.x.x.x)?

What is the external_url of the GitLab instance as it is set in /etc/gitlab/gitlab.rb?



Alright, so I did some digging and maybe narrowed the issue down a bit.

Unraid itself is running the Docker containers for both my GitLab and Runner instances. When spinning up the Runner (which is really a runner manager, it seems), I am passing in my docker.sock file; as a result, when I actually run a job, it creates a new Docker container (visible on my Unraid Docker dashboard for a brief moment) specific to that job. The issue is that this new container does not know how to route to my GitLab instance. Here is a snapshot of what that temporary container looks like:

The “Default” here is the network setting. Typically I would see something like Host, Bridged, etc instead.

I can modify the /etc/hosts file in my Runner (manager) container to map the hostname I provided in my gitlab.rb file to the appropriate IP address. That works, and is confirmed by a traceroute/ping check. The issue is that the same configuration doesn’t pass on to the secondary containers that run my jobs. Or possibly they are not even attaching to my network in a way that would let them route to other devices.
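For completeness, the extra_hosts option I mentioned trying earlier lives under [runners.docker] in config.toml, and is supposed to push an /etc/hosts entry into every job container (hostname and IP below are the placeholders from this thread); roughly:

  [[runners]]
    executor = "docker"
    [runners.docker]
      image = "python:3.7.9"
      # Adds "192.168.X.XYZ unraid" to /etc/hosts of each job container,
      # equivalent to docker's --add-host:
      extra_hosts = ["unraid:192.168.X.XYZ"]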

Side note, for sanity’s sake: I moved my two GitLab and Runner containers to be bridged on my network (192.X.Y.Z and 192.X.Y.Z+1 respectively).

Answering my own thread here a bit…

The GitLab-CE package available through the Unraid “Apps” section defaults to “Bridged”. You have to leave that set as “Bridged”; you cannot use a custom bridge or any other option. Then, in the “Advanced Options” for the container, you need to change the extra parameters to --env GITLAB_OMNIBUS_CONFIG="external_url 'http://<your Unraid server IP here>:9080/'" instead of the default (--env GITLAB_OMNIBUS_CONFIG="external_url 'http://unraid:9080/'"). All of the other defaults should be fine.
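For anyone not on Unraid, the same fix is just pointing external_url at an IP the job containers can reach. A rough plain-Docker equivalent (IP and volume paths are placeholders, not from my actual setup):

  docker run --detach \
    --publish 9080:9080 \
    --volume /path/to/gitlab/config:/etc/gitlab \
    --volume /path/to/gitlab/logs:/var/log/gitlab \
    --volume /path/to/gitlab/data:/var/opt/gitlab \
    --env GITLAB_OMNIBUS_CONFIG="external_url 'http://<your server IP>:9080/'" \
    gitlab/gitlab-ce:latest
  # Because external_url includes the port, the bundled NGINX listens on
  # 9080 inside the container, hence the 9080:9080 publish mapping.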

Then, create a second Docker container for the gitlab-runner following the normal configuration here. You can optionally pass in a config.toml file from one of your Unraid shares (see the doc here on the config).

I used the following command to register my runner:

  gitlab-runner register --non-interactive \
    --url "http://<unraid IP here>:9080/" \
    --registration-token "<reg token from gitlab ci/cd webpage>" \
    --executor "docker" \
    --docker-image python:3.7.9 \
    --description "docker-runner" \
    --tag-list "docker" \
    --run-untagged="true" \
    --locked="false"

and the following .gitlab-ci.yml:

  # This file is a template, and might need editing before it works on your project.
  # Official language image. Look for the different tagged releases at:
  image: python:3.7.9

  # Change pip's cache directory to be inside the project directory since we can
  # only cache local items.
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

  # Pip's cache doesn't store the python packages
  # If you want to also cache the installed packages, you have to install
  # them in a virtualenv and cache it as well.
  cache:
    paths:
      - .cache/pip
      - venv/

  before_script:
    - python -V  # Print out python version for debugging
    - pip install virtualenv
    - virtualenv venv
    - source venv/bin/activate
    - pip install -r requirements.txt

  test:
    script:
      - python

and now it works. So basically my root issue was not modifying the default unraid hostname parameter in the advanced settings, which led me to try different network configurations (which were also invalid).

Hope this helps someone who finds this!