Permission denied in docker executor

I’ve stood up a new GitLab runner on a new machine and am having a weird problem with the Docker executor in the CI/CD pipeline: the Docker container doesn’t appear to want to run any executables inside the git checkout.

For example, when cmake goes to run the tests, I get Permission denied.

[100%] Linking CXX executable test-executable
CMake Error at /usr/share/cmake-3.18/Modules/GoogleTestAddTests.cmake:77 (message):
  Error running test executable.
    Path: '/builds/my/project/build/test-executable'
    Result: Permission denied
    Output:
      
Call Stack (most recent call first):
  /usr/share/cmake-3.18/Modules/GoogleTestAddTests.cmake:173 (gtest_discover_tests_impl)

I get the same error if I run my bash script via ./build.bash instead of bash build.bash.

What I find odd is that this does not happen if I run the same Docker image in a container started outside the GitLab CI/CD process, so my gut is telling me it is something related to my configuration. My config file:

concurrent = 4
check_interval = 0
connection_max_age = "15m0s"
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Docker Runner XYZ"
  url = "xxxxx"
  id = xx
  token = "xxxxxxx"
  token_obtained_at = 2024-03-17T19:34:54Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = ""
    privileged = false # I've tried this as true and false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    pull_policy = ["if-not-present"]
    shm_size = 0
    network_mtu = 0
    security_opt = ["apparmor=unconfined","seccomp=unconfined"] # I tried with and without this line

I am wondering if anyone has any insight into what I might have done incorrectly during the setup.

Hi,

Can you give us your .gitlab-ci.yml?

My assumption: when you commit something to git, git stores not only content, but also permissions. By default, those are non-executable permissions.

You have two options:
a) commit your script with executable permissions

git update-index --chmod=+x filename.sh
# followed by git add, git commit, etc

b) add executable permissions to the script right before you use it:

chmod +x filename.sh
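
If you want to double-check what mode bits git has actually recorded (as opposed to what the working copy on the runner shows), you can ask git directly; 100755 means the file was committed as executable, 100644 means it was not:

git ls-files --stage filename.sh
# 100755 <blob-sha> 0  filename.sh   -> committed with the executable bit
# 100644 <blob-sha> 0  filename.sh   -> committed without it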

Your runner config seems fine to me.

That was my first thought, but I docker exec’d into the container and the build script does have the executable flag set. It’s also worth noting, and why I made this topic, that the executable produced for Google Test during the build also cannot be run, nor can the executable produced by CMake.

It is almost like the Docker container won’t run any executable created by my project, hence my gut says it is a configuration issue. Like some permissions flag I don’t know about is set, or not set.

My .gitlab-ci.yml:

stages:
  - build
  - test

variables:
  BUILD_IMAGE_TAG: "1.0.1"

workflow:
  rules:
    - if: $CI_COMMIT_TAG
      variables:
        VERSION: "$CI_COMMIT_TAG+$CI_JOB_ID"
    - if: $CI_COMMIT_BRANCH
      variables:
        VERSION: "0.0.0-$CI_JOB_ID"

build:
  stage: build
  tags:
    - arm64
    - docker
  image: my/debian/builder/arm64:$BUILD_IMAGE_TAG
  before_script:
    - export PATH=${PATH}:${CI_PROJECT_DIR}:${CI_PROJECT_DIR}/build    # <--- I added this thinking it might be a path issue
  script:
    - cd ${CI_PROJECT_DIR}
    - bash build.bash --build -DVERSION:STRING="$VERSION"
    - bash build.bash --package-deb
  artifacts:
    paths:
      - build/*.deb
    untracked: false
    when: on_success
    expire_in: "30 days"

test:
  stage: test
  tags:
    - arm64
    - docker
  image: my/debian/builder/arm64:$BUILD_IMAGE_TAG
  before_script:
    - export PATH=${PATH}:${CI_PROJECT_DIR}:${CI_PROJECT_DIR}/build
  script:
    - cd ${CI_PROJECT_DIR}
    - bash build.bash --test      # <----- This is the step that fails
  artifacts:
    paths: ["build/test_detail.xml"]
    reports:
      junit: build/test_detail.xml
    untracked: false
    expire_in: "30 days"
    expose_as: "Test Results"

That is interesting.

Well, firstly, you should definitely be able to run your script as ./build.bash rather than bash build.bash → that this fails is already quite suspicious to me. Since you are using a custom-built image there, my/debian/builder/arm64:$BUILD_IMAGE_TAG, I would suggest inspecting it again and making sure that /bin/bash is configured properly, that the user you have set has all the permissions it needs to run scripts, etc…
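
If it helps, a few quick sanity checks on the image itself, assuming it has no custom entrypoint (the tag here is just the one from your pipeline variables; this is plain docker CLI, nothing runner-specific):

# what user and entrypoint are baked into the image?
docker inspect --format '{{.Config.User}} {{.Config.Entrypoint}}' my/debian/builder/arm64:1.0.1

# which user do jobs actually run as, and is bash usable?
docker run --rm my/debian/builder/arm64:1.0.1 id
docker run --rm my/debian/builder/arm64:1.0.1 bash -c 'echo bash ok'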

I’m really not sure if there is any setting at the GitLab Runner / GitLab level that could influence this kind of behavior…

It would be immensely helpful if I could spawn a container from this image as GitLab does.

Out of curiosity, what does the GitLab runner do when it spawns the container? Is it a docker run command? If so, how do I determine the arguments? Or does GitLab build a docker-compose-esque file? If so, how can I generate it?

Personally, I would like to know that too :smiley:

I haven’t found anything in the official docs at the moment, but I’m fairly sure I’ve seen it mentioned somewhere in the docs.

Most of the time, I develop scripts just by running docker run -it -v $(pwd):/src my-image bash locally, exporting any predefined env vars my script needs, and executing the commands / scripts by hand. When something fails there, it will definitely fail in the pipeline, so I always make sure it runs locally first. However, I’ve also experienced - not often, but still - that even though it works locally, it does not work in the pipeline for whatever reason… That does make it hard to debug. There are apparently remote debugging options available as well (I haven’t tried them yet) - Interactive web terminals | GitLab
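
One trick that can close the gap between “works locally” and “fails in the pipeline”: while the job is still running, look at the container the runner actually created on the runner host and compare its mounts and options with your local docker run. The name filter below is a guess - runner-created containers are typically named runner-… :

# on the runner host, while the job is running
docker ps --filter name=runner-
docker inspect <container-id>     # dumps mounts, binds, security options, user, env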

Something weird must be going on… I switched my setup to just use debian:bullseye-20240110-slim as the base image and installed my dependencies in before_script. Still getting the permission denied when running the test executable / build script.
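
For anyone hitting the same wall, a few checks along these lines from a shell inside the job container will narrow it down (the paths are the ones from my failing job; findmnt is part of util-linux and may need installing in a slim image):

# is the executable bit actually there?
stat -c '%A %n' /builds/my/project/build/test-executable

# does a direct exec fail even though the bit is set?
/builds/my/project/build/test-executable

# mount options for the filesystem the build lives on - look for noexec
findmnt -T /builds/my/project/build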

Ok, figured it out. It was a problem with the filesystem that Docker’s overlay2 directory lives on: it was mounted with the noexec flag. I’m not sure why docker run -it image bash still works despite that, but once I changed it, the CI jobs all started working again.
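
For anyone who ends up here later, checking and fixing this on the Docker host looks roughly like this (the mount point is whatever findmnt reports for your Docker data directory, not necessarily /var/lib/docker itself):

# which filesystem holds Docker's overlay2 directory, and with what options?
findmnt -T /var/lib/docker/overlay2

# if the options include noexec, remount without it for the current boot
sudo mount -o remount,exec <mount-point-from-above>

# then remove noexec from the matching entry in /etc/fstab so the change survives a reboot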
