Passing Docker Image Between Build and Test Stage in GitLab Runner

What I Want to Do

Using GitLab Runner via Docker-in-Docker image:

  1. Build an image from a Dockerfile in my repository
  2. Run numerous tests against that Docker image

My Setup

.gitlab-ci.yml

before_script:
  - cd /builds/user

stages:
  - build
  - test

build_image:
  stage: build
  script:
    - cd Repository
    - docker info
    - docker build -t image .

test_one:
  stage: test
  script:
    - docker run -v Repository:/opt/src image -m unittest source/tests/test_one.py

test_two:
  stage: test
  script:
    - docker run -v Repository:/opt/src image -m unittest source/tests/test_two.py

Runner config.toml

concurrent = 2

[[runners]]
  name = "GitLab Runner"
  url = "https://gitlab.com/ci"
  token = "[REDACTED]"
  tls-ca-file = ""
  executor = "docker"
  [runners.docker]
    image = "gitlab/dind:latest"
    privileged = true
    volumes = ["/cache"]

Problem

Obviously, I’m running into the issue where the image built in build_image no longer exists by the time test_one runs, so I get the following.

$ docker run -v Repository:/opt/src image -m unittest source/tests/test_one.py
Unable to find image 'image:latest' locally
Pulling repository docker.io/library/image
docker: Error: image library/image not found.

I’ve read in a few places where perhaps artifacts could be used to store the results of the build between stages; however, that just doesn’t seem to work.

I’ve tried the following in my .gitlab-ci.yml

build_image:
  stage: build
  script:
    - cd Repository
    - docker info
    - docker build -t image .
  artifacts:
    untracked: true
    paths:
      - /var/lib/docker

Unfortunately, I’m still unable to find/use the built Docker image.

Question

Has anyone been able to do this? I’d like to split the tests into separate jobs (to make them concurrent), but rebuilding the image every time in before_script: is absolutely brutal.


Since version 8, GitLab CI is fully integrated into GitLab itself (there is no separate website anymore). What I did was create a Docker image and use it in GitLab CI. Your Dockerfile can be based on jessie, ubuntu, or another image, meaning a lot is already done for you.

For example, a Dockerfile could be (based on the official PHP docker image):

# Based on PHP image
FROM php:5.6-fpm

MAINTAINER Melroy van den Berg "melroy@melroy.org"  

ENV PATH "/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:$PATH"

# Copy some stuff
COPY . /var/www/html/

# Install some stuff
RUN apt-get update && apt-get install -y npm

To build it, execute:

sudo docker build .

Where . is the location of the Dockerfile; in this case, the current folder.

That said, I advise you to use a name and tag. The syntax is name:tag. The tag is optional (it defaults to latest) and is typically used for versioning.

sudo docker build -t latest:1.0 .

Now you have a new image; check it out:

sudo docker images

To use it in GitLab:

  1. Get a token from yourgitlab.com/admin/runners.

  2. Use this image when registering your runner:

sudo gitlab-runner register

  3. Provide the following info:

Host: http://yourdomain.com/ci
Token: xxxx (see admin/runners)
Description: Some text
Tags: <optional>
Executor: docker
Docker Image: latest:1.0

Notice the Docker image name "latest:1.0": Docker will use the local image named 'latest' with the tag 1.0.
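To make the name/tag split concrete, here is a small illustration of my own (the split_ref helper is hypothetical, not a Docker command): Docker treats everything before the colon as the image name and everything after it as the tag, falling back to latest when no tag is given.

```shell
# Hypothetical helper: split a "name:tag" image reference the way
# Docker interprets it, defaulting the tag to "latest" when omitted.
split_ref() {
  ref="$1"
  name="${ref%%:*}"        # everything before the first ':'
  if [ "$ref" = "$name" ]; then
    tag="latest"           # no ':' present, so Docker assumes :latest
  else
    tag="${ref#*:}"        # everything after the first ':'
  fi
  echo "$name $tag"
}

split_ref "latest:1.0"     # the example above: name "latest", tag "1.0"
split_ref "php"            # no tag given, so it defaults to "latest"
```

This is also why the "latest:1.0" example above reads oddly: "latest" is being used as the image *name* there, which is legal but easy to confuse with the default *tag*.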

HINT: Like you said, you want to split the work into separate jobs. Within your .gitlab-ci.yml, you can define as many jobs as you want. Optionally, you can use runner tags to execute a specific job on a specific runner.

Below an example of a .gitlab-ci.yml file:

job_1:
  script:
    - php test.php
  tags:
    - testing

job_2:
  script:
    - php index.php
  tags:
    - production

Good luck!!

  • Melroy

Hello @intelliphant

Have you solved your problem?

I have got the same issue, and I don’t know what to do because my image is 560 MB, whether I use cache or artifacts …

Your feedback would be helpful.

build:
  stage: build
  image: xxx/dind:1.0
  tags: [dind]
  script:
    - docker build ...
    - docker save --output ...
  cache:
    key: "$CI_BUILD_REF"
    untracked: true

# Tag the Docker Image because build stage is OK
tag:
  stage: package
  tags: [dind]
  image: xxx/dind:1.0
  script:
    - docker load ...
    - docker tag ...
  cache:
    key: "$CI_BUILD_REF"
    untracked: true

# Push docker image to registry
push:
  stage: deploy
  tags: [dind]
  image: xxx/dind:1.0
  script:
    - docker load ...
    - docker push  ...
  cache:
    key: "$CI_BUILD_REF"
    untracked: true

Hi @intelliphant, I suspect you’re not able to run your built image in the test stage because your artifacts bundle from build doesn’t actually contain the image.

I’m trying to do the exact same thing you’ve described, but with gitlab.com CI (I’m not sure if you’re hosting GitLab yourself). Browsing the artifacts from my build stage in the GitLab UI reveals they’re empty, and the downloaded zip turns out to be a mere 22 bytes, so there’s no image there. I think it’s because my base Docker container running on a shared runner is not run as privileged, so I cannot access /var/lib/docker. But that’s just a wild guess.

Successfully built 20a8c2f94639
Uploading artifacts...
WARNING: /var/lib/docker: no matching files

I guess you could try changing the Docker image directory as described here but I haven’t tried this yet.

Here is my .gitlab-ci.yml file:

image: docker:latest

services:
  - docker:dind

before_script:
  - docker --version

build:
  stage: build
  script:
    - docker build . -t app
  artifacts:
    paths:
      - /var/lib/docker

test:
  stage: test
  script:
    - docker run --name app -d app
    - docker exec app npm run lint-test

@danger89 This may be a solution, but I don’t think it answers @intelliphant’s question. For me, this could mean building my image and pushing to a registry before every push to gitlab, which could get cumbersome and time-consuming during development. Maybe I have it wrong though.

If anyone has managed to figure this out, please share.

On further investigation, I believe I’m not able to access /var/lib/docker not because of a non-privileged environment, but because that directory is hidden inside the Docker VM. On OSX, it is hidden inside the xhyve VM.

ls: /var/lib/docker: No such file or directory

According to here:

If you’re trying to access the volume data, I think the usual way is to launch another container: docker run -v test:/test -it ubuntu:16.04 bash will get a shell with the volume data visible in /test.

Not sure how to make that work for this use case so far…

@intelliphant
@guillaume.marchand

So this is working for me for passing a built Docker image as an artifact:

image: docker:latest

services:
  - docker:dind

before_script:
  - docker info

build:
  stage: build
  script:
    - docker build . -t app
    - mkdir image
    - docker save app > image/app.tar
  artifacts:
    paths:
      - image

test:
  stage: test
  script:
    - docker load -i image/app.tar
    - docker run --name app -d app
    - docker exec app npm run lint-test

Now on to caching dependencies for me!


@gustavjf It’s a great solution, but it has a downside: network and disk IO activity increase dramatically. It would be good to have some way to run all stages on the same agent.
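One way to soften the disk and network cost (my own suggestion, not something tested in this thread) is to compress the saved tarball; `docker load` reads gzipped archives directly, so only the filenames change relative to the working config above:

```yaml
build:
  stage: build
  script:
    - docker build . -t app
    - mkdir image
    # gzip usually shrinks image tarballs considerably, which reduces
    # artifact upload/download time between stages
    - docker save app | gzip > image/app.tar.gz
  artifacts:
    paths:
      - image

test:
  stage: test
  script:
    # docker load transparently decompresses gzipped tarballs
    - docker load -i image/app.tar.gz
    - docker run --name app -d app
    - docker exec app npm run lint-test
```

For a 560 MB image like @guillaume.marchand mentioned, this can make the artifact considerably smaller, though the save/load CPU cost remains.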

So I guess creating an artifact is the ‘right’ way to do this?

I was thinking the other possibility could be to use the GitLab container registry and push the built container during build, then pull that down in the following stages. You’d probably need to use commit refs for the tag, otherwise you could run into issues if you have multiple pipelines running in parallel.

Edit: actually, that’s exactly what they suggest at the bottom of: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
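A rough sketch of that registry approach, as I understand it (the job names and the lint-test command are my assumptions; $CI_REGISTRY, $CI_REGISTRY_IMAGE, $CI_BUILD_TOKEN, and the gitlab-ci-token login user are the predefined mechanism described on that docs page, so check them against your GitLab version):

```yaml
# Sketch only, untested: assumes the project's Container Registry is enabled.
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - test

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
    # Tag with the commit ref so parallel pipelines don't clobber each other
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_BUILD_REF" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_BUILD_REF"

test:
  stage: test
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:$CI_BUILD_REF"
    - docker run "$CI_REGISTRY_IMAGE:$CI_BUILD_REF" npm run lint-test
```

Compared to the docker save/load artifact approach, this trades artifact storage for registry storage and lets any later stage (or a different runner) pull the exact image that was built.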

Has anyone figured out how to do this without pushing/pulling?
It would also be nice to use the built image like a service in the next job:

build:
  image: docker:git
  script:
    - docker build -t ci-image .
  artifacts/cache: whatever
...
test:
  services:
    - ci-image
  script:
    - run test --over=ci-image