GitLab runner not executing commands inside the specified image


I’m trying to build and then test a Docker image. The build step works fine and the image is pushed to the registry. However, when I try to use that image in the subsequent steps, the commands aren’t run inside the specified container.

Here is my .gitlab-ci.yml:


```yaml
stages:
  - build
  - analytics

build:
  stage: build
  image: docker:stable
  services:
    - dind
  script:
    - docker build --target=testing -t $TEST_IMAGE_NAME .
    - docker push $TEST_IMAGE_NAME

mess_detection:
  stage: analytics
  image: $TEST_IMAGE_NAME
  script:
    - vendor/bin/phpmd app html tests/md.xml --reportfile mess_detection.html --suffixes php
  artifacts:
    paths:
      - mess_detection.html
    expire_in: 1 week
    when: always
  tags:
    - production
  allow_failure: true
```

For which I’m getting the following output:

Running with gitlab-runner 11.5.0 (3afdaba6)
on docker-runner1 3ecbfa81
Using Docker executor with image
Pulling docker image
Using docker image sha256:30fd807a3fe43ba05e7af397a830519e68a96820c80cdbeca7badaa92641310a for
Running on runner-3ecbfa81-project-38-concurrent-1 via ci…
Fetching changes…
HEAD is now at 68bb62a9 Update .gitlab-ci.yml
68bb62a9…95a80183 master -> origin/master
Checking out 95a80183 as master…
Skipping Git submodules setup
$ vendor/bin/phpmd app html tests/md.xml --reportfile mess_detection.html --suffixes php
/bin/bash: line 79: vendor/bin/phpmd: No such file or directory
Uploading artifacts…
WARNING: mess_detection.html: no matching files
ERROR: No files to upload
ERROR: Job failed: exit code 1

I also noticed that the WORKDIR for that job is /builds/namespace/project instead of /app, which is what’s specified in the image I built.

What am I missing? From my understanding, it should be possible to use a custom image and then run commands inside the container.

It’s getting even more interesting:
I just changed the script to sleep for a while so I could attach to the container. When I run pwd from the CI script, it prints /builds/namespace/project. However, running pwd on the server with docker exec in the exact same container returns /app, as it’s supposed to.
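The debugging change boils down to something like this (a sketch; the job name and sleep duration are illustrative):

```yaml
mess_detection:
  stage: analytics
  image: $TEST_IMAGE_NAME
  script:
    - pwd          # prints /builds/namespace/project inside the CI job
    - sleep 600    # keep the container alive so it can be inspected with docker exec
```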

After some more research, I found that GitLab runs four sub-steps for each build step:

  1. Prepare : Create and start the services.
  2. Pre-build : Clone, restore cache and download artifacts from previous stages. This is run on a special Docker Image.
  3. Build : User build. This is run on the user-provided docker image.
  4. Post-build : Create cache, upload artifacts to GitLab. This is run on a special Docker Image.

It seems like in my case, step 3 isn’t executed properly and the command is still running inside the gitlab runner docker image. Any suggestions on how to fix this?

In the meantime I tested executing the mess_detection step on a separate machine using the command gitlab-runner exec docker mess_detection. The behaviour is exactly the same. So it’s not GitLab-specific; it has to be some configuration option in either the deployment script or the runner config.

I’ve opened an issue for this on GitLab:

As it turns out, gitlab-runner was working as expected. What is quite confusing, though, is that it makes some modifications to the image it boots up:
the entrypoint is overridden, and the directory the repository is checked out into is mounted into the container, with the WORKDIR pointing at it.
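Conceptually, the runner starts the job container roughly like this (an illustrative sketch, not the runner’s actual invocation; the checkout path is a placeholder):

```shell
docker run --rm -i \
  --entrypoint /bin/bash \                         # the image's own entrypoint is overridden
  -v /path/to/checkout:/builds/namespace/project \ # repo checkout mounted into the container
  -w /builds/namespace/project \                   # WORKDIR points at the checkout, not /app
  $TEST_IMAGE_NAME
```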

So while it is possible to run your own images as containers, keep in mind that you might need to change the directory before running any commands.
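For this job, that means switching back into the image’s application directory first. A sketch, assuming the image was built with its code in /app; writing the report to $CI_PROJECT_DIR (GitLab’s checkout directory, where artifacts are collected from) is my addition so the artifact upload still finds the file:

```yaml
mess_detection:
  stage: analytics
  image: $TEST_IMAGE_NAME
  script:
    - cd /app   # switch from the mounted checkout to the image's own WORKDIR
    - vendor/bin/phpmd app html tests/md.xml --reportfile $CI_PROJECT_DIR/mess_detection.html --suffixes php
  artifacts:
    paths:
      - mess_detection.html   # resolved relative to $CI_PROJECT_DIR
    expire_in: 1 week
    when: always
```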