Can I build Docker containers on the Linux SaaS Runner?

Is it possible to run a docker build command on the default SaaS Runner? When I try, I get a "docker: command not found" error, which would seem to settle it, but the documentation seems to indicate it is possible.

Am I confused about what the SaaS Runner provides, or am I configuring my CI incorrectly?

stages:
  - build

default:
  before_script:
    - mkdir -p $HOME/.docker
    - echo $DOCKER_AUTH_CONFIG > $HOME/.docker/config.json

docker-build:
  stage: build
  script:
    - docker build --tag $CI_REGISTRY_IMAGE:test .

Hi @chr1s :wave:

Yes, you can build and push container images using GitLab CI with the default SaaS Linux runner.

To build the image, you’ll need to use a Docker image and a Docker in Docker (dind) service container as shown in the example here: Build and push container images to the Container Registry | GitLab

If you make the following modification to your docker-build job, it should work:

docker-build:
  image: docker:20.10.16
  stage: build
  services:
    - docker:20.10.16-dind
  script:
    - docker build --tag $CI_REGISTRY_IMAGE:test .

If you want to docker push the newly built image to your GitLab Container Registry, you'll need to authenticate to the registry first. The easiest way to do this (without hardcoding credentials) is something like this:

docker-build:
  image: docker:20.10.16
  stage: build
  services:
    - docker:20.10.16-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build --tag $CI_REGISTRY_IMAGE:test .
    - docker push $CI_REGISTRY_IMAGE:test

I hope you find this helpful. Let us know if you have any questions!

Thanks for the response! I should have mentioned I am currently building my image using the dind service and docker image, as you specified. Really, my problem is testing the image a step later.

When developing locally, and when running on GitHub Actions ubuntu-latest, I can do the following relatively easily because I am communicating from the host VM to the containers.

docker compose up -d # launch new image with test services
curl http://localhost:15672/api/overview # my rabbitmq, for example

In GitLab, it looks like I instead need to communicate from one container (my job container) to another container (the dind service), which is what actually runs my Compose services.

docker-integration-test:
  image: docker:20.10.16
  stage: test
  services:
    - name: docker:20.10.16-dind
      alias: docker
  script:
    - docker compose up -d # launch integration test services
    - curl http://docker:15672/api/overview # my rabbitmq, for example

While this is only slightly inconvenient in this example, a lot of my tests, as well as my local development workflow, rely on services and APIs being available at localhost. Unfortunately, aliasing the dind service to localhost didn't seem to work.
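
For reference, this is roughly what I tried (the alias is the only change from the job above):

docker-integration-test:
  image: docker:20.10.16
  stage: test
  services:
    - name: docker:20.10.16-dind
      alias: localhost  # attempt to expose the dind service as localhost
  script:
    - docker compose up -d                       # containers start on the dind daemon
    - curl http://localhost:15672/api/overview   # still fails; localhost appears to keep pointing at the job container itself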

Is there any way to have localhost communication with running containers on the SaaS runners?

The difference is that:

  • GitHub Actions runs your commands in the VM's shell.
  • The GitLab SaaS runner starts the Docker container you specify and runs your commands inside that container.

You have to adjust your steps and setup to accommodate this difference.
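
A quick way to see this from inside a job (a sketch reusing the docker:20.10.16 image and dind service from the earlier examples; the job name is just illustrative):

where-am-i:
  image: docker:20.10.16
  stage: build
  services:
    - docker:20.10.16-dind
  script:
    - cat /etc/os-release   # shows the job image's OS (Alpine for docker:20.10.16), not the runner VM's
    - docker info           # the docker CLI here talks to the daemon running in the dind service container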

Re-reading how SaaS runners work, the bullet points repeatedly state that jobs run in VMs.

  • Each of your jobs runs in a newly provisioned VM, which is dedicated to the specific job.
  • The VM is active only for the duration of the job and immediately deleted. This means that any changes that your job makes to the virtual machine will not be available to a subsequent job.
  • The virtual machine where your job runs has sudo access with no password.
  • The storage is shared by the operating system, the image with pre-installed software, and a copy of your cloned repository. This means that the available free disk space for your jobs to use is reduced.
  • Untagged jobs automatically run in containers on the small Linux runners.

Only that last point mentions containers. This list is on the Runner SaaS page and implies there are tags that run jobs directly on the VM for the SaaS product. I’ve tried the tags saas-linux-small-amd64, saas-linux-medium-amd64, and saas-linux-large-amd64, listed here, but there is still no docker CLI.
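
For reference, this is roughly how I specified the tags (the job is just a minimal probe):

docker-cli-check:
  tags:
    - saas-linux-medium-amd64
  script:
    - docker --version   # still fails with "command not found" on the default job image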

I’m assuming this is actually impossible on the SaaS Runners and the documentation is just confusing.

Not really confusing, in my opinion. It states there that, by default, untagged jobs run on Linux runners. You just have to read the other pages as well; the Linux runners page clearly states that jobs run in containers. If you choose a Windows or macOS runner, you will get a bare VM.

The documentation just expects you to have a general knowledge of how GitLab Runners work and what the options are for using them. But every topic is linked there for further reading.

It is confusing that the introductory page for SaaS Runners tells you three times that the job runs in a VM, when the default Linux runners are a special case. With a self-hosted GitLab runner, should I be able to configure jobs to run directly on a VM?

Technically, even the Linux Runner creates a dedicated ephemeral VM, but the jobs are executed in containers :slight_smile:

If you use your own runner with the shell executor, you can execute commands directly on the VM.
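
A minimal sketch of registering such a runner with the shell executor (the URL, token, and description below are placeholders):

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "REDACTED" \
  --executor "shell" \
  --description "shell-runner-on-my-vm"

With the shell executor, docker, docker compose, and curl run directly in the VM's shell, so the localhost-based workflow from earlier works as long as those tools are installed on that machine.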
