Is it possible to run a `docker build` command from the default SaaS runner? When I try, I get a `docker: command not found`, which should be the end of it, but the documentation seems to indicate it is possible.
If you want to `docker push` the newly built image to your GitLab container registry, you’ll first need to authenticate to the container registry. The easy way to do this (without hardcoding credentials) would be something like this:
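(A sketch: the job name, stage, and image versions below are illustrative, but the `CI_REGISTRY_*` variables are predefined by GitLab CI, so no secrets need to be hardcoded.)

```yaml
docker-build-push:
  image: docker:20.10.16
  stage: build
  services:
    - docker:20.10.16-dind
  script:
    # authenticate with GitLab's predefined CI variables, no hardcoded secrets
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```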
Thanks for the response! I should have mentioned I am currently building my image using the dind service and docker image, as you specified. Really, my problem is testing the image a step later.
When developing locally, and when running from GitHub Actions `ubuntu-latest`, I can do the following relatively easily, because I am communicating from the host VM to the containers.
```shell
docker compose up -d                       # launch new image with test services
curl http://localhost:15672/api/overview   # my rabbitmq, for example
```
In GitLab, it looks like I instead need to communicate from one container (my job) to another container (the dind service), which is actually running my compose stack.
```yaml
docker-integration-test:
  image: docker:20.10.16
  stage: test
  services:
    - name: docker:20.10.16-dind
      alias: docker
  script:
    - docker compose up -d                    # launch integration test services
    - curl http://docker:15672/api/overview   # my rabbitmq, for example
```
While only slightly inconvenient in this example, a lot of my tests, as well as my local development workflow, rely on services and APIs being available at localhost. Unfortunately, aliasing the dind service to localhost didn’t seem to work.
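Concretely, the alias attempt looked roughly like this (a sketch; as far as I can tell, the job container’s own `/etc/hosts` entry for `localhost` takes precedence over any service alias, so the alias never takes effect):

```yaml
  services:
    - name: docker:20.10.16-dind
      alias: localhost   # localhost still resolves to 127.0.0.1 inside the job container
```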
Is there any way to have localhost communication with running containers on the SaaS runners?
Re-reading how SaaS runners work, the bullet points repeatedly state that jobs run in VMs:
- Each of your jobs runs in a newly provisioned VM, which is dedicated to the specific job.
- The VM is active only for the duration of the job and immediately deleted. This means that any changes that your job makes to the virtual machine will not be available to a subsequent job.
- The virtual machine where your job runs has sudo access with no password.
- The storage is shared by the operating system, the image with pre-installed software, and a copy of your cloned repository. This means that the available free disk space for your jobs to use is reduced.
- Untagged jobs automatically run in containers on the small Linux runners.
- …
It is only that last point that mentions containers. This list is on the Runner SaaS page, and it implies that some tags run jobs directly on the VM for the SaaS product. I’ve tried the tags `saas-linux-small-amd64`, `saas-linux-medium-amd64`, and `saas-linux-large-amd64`, listed here, but there is still no `docker` CLI.
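For reference, the tags were applied roughly like this (a sketch of my job config):

```yaml
docker-integration-test:
  tags:
    - saas-linux-medium-amd64   # also tried small and large; still no docker CLI
```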
I’m assuming this is actually impossible on the SaaS Runners and the documentation is just confusing.
Not really confusing, in my opinion. It states there that, by default, untagged jobs run on the Linux runners. You just have to read the other pages as well, and the Linux runners page clearly says that jobs run in containers. If you choose a Windows or macOS runner, you will get a bare VM.
The documentation just kind of expects you to have a general knowledge of how GitLab Runners work and what the options are for using them. But every topic is linked there for further reading.
It is confusing that the introductory page for SaaS runners tells you three times that the job runs in a VM, but the default Linux runners are a special case. With a self-hosted GitLab Runner, though, I should be able to configure jobs to run directly on a VM?
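Something like registering the runner with the shell executor, I assume (a sketch; the URL, token, and description here are placeholders):

```shell
# register a self-hosted runner whose jobs run directly on the VM
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "$REGISTRATION_TOKEN" \
  --executor "shell" \
  --description "vm-shell-runner"
```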