I’d like to use GitLab CI to run tests. The application I want to test depends on a different application (also made by me and published to the GitLab Docker container registry), and that application in turn depends on redis.
I’ve configured my .gitlab-ci.yml file as follows:
stages:
  - test

test:
  image: node:latest
  stage: test
  services:
    - redis:latest
    - registry.gitlab.com/other-app:latest
  variables:
    REDIS_URL: redis://redis:6379
  script:
    - npm install
    - npm run lint
    - npm run test
When executing the job I get an error saying that my other-app could not connect to redis://redis:6379 because it couldn’t find the specified address. My guess is that the other-app container can’t talk to the redis container… is that correct? And how would I go about fixing that issue (if that’s even possible)?
Yes, it seems like services communicating with each other is a glaring hole in the otherwise very well considered system.
Disappointing that there isn’t more traction on this question.
I found a workaround that you might be able to use as well; it is a little hacky. Here goes.
For my use case I want to run integration tests against a Django instance which uses a Postgres database. As you suggest, it would be optimal to run both the Django instance and Postgres as services, and use the test environment proper to run the integration tests. Instead, I was able to get everything to coordinate correctly by running the database as a service, the Django instance as the test environment (in the background), and the tests in the foreground. Some coordination was required to make everything actually work; here is what my .gitlab-ci.yml ended up looking like:
integration_test:
  stage: integration-test
  image:
    name: registry.gitlab.com/other-app:latest
    # override whatever the other-app wants to do
    entrypoint: [""]
  dependencies:
    - thing-needed-for-tests-to-run
  variables:
    # BASE_URL is the network address of the service I'm running integration
    # tests against. Since that service is the very container we're running
    # in, the address isn't known yet; it is discovered in before_script.
    # BASE_URL: who-knows
  services:
    - name: postgres:9.6
      alias: db
    # not here anymore, now under image
    # - name: registry.gitlab.com/other-app:latest
    #   alias: app
  before_script:
    - ip a
    - container_ip=$(ip a | grep -v 127.0.0.1 | grep -v inet6 | grep inet | head -1 | awk '{ print $2 }' | awk -F/ '{ print $1 }')
    - echo $container_ip
    - cat /etc/hosts
    # Isolate the "hostname" of the container that we're running in
    - export BASE_URL=$(grep $container_ip /etc/hosts | head -1 | awk '{ print $2 }')
    - echo $BASE_URL
    # Add an entry to /etc/hosts to allow testing against a subdomain (this is particular to my use case)
    - echo "${container_ip} testing.${BASE_URL}" >> /etc/hosts
    - PRIOR=$(pwd)
    # Start the app
    - cd /wherever/my/app/is/in/the/docker/container
    - python manage.py migrate
    - python manage.py seed_live_test
    - python manage.py runserver 0.0.0.0:8000 &
    - cd $PRIOR
  script:
    - python -m pip install dependencies-from-above
    - cd wherever/the/other/deps/are
    - python -m pip install -r requirements.test.txt
    - cd test
    - pytest
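The address discovery in before_script can be exercised in isolation against canned output; the addresses and hostname below are made up for illustration:

```shell
# Hypothetical excerpts of `ip a` output and /etc/hosts, to exercise the
# same pipelines used in before_script above.
ip_output='    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0'

# Drop loopback and IPv6 lines, take the first remaining inet line,
# grab the address field, and strip the /16 prefix length.
container_ip=$(echo "$ip_output" | grep -v 127.0.0.1 | grep -v inet6 | grep inet | head -1 | awk '{ print $2 }' | awk -F/ '{ print $1 }')
echo "$container_ip"   # 172.17.0.3

hosts='127.0.0.1   localhost
172.17.0.3  runner-abc-project-1'

# Look up the hostname that /etc/hosts maps to that address.
BASE_URL=$(echo "$hosts" | grep "$container_ip" | head -1 | awk '{ print $2 }')
echo "$BASE_URL"   # runner-abc-project-1
```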
This seems like a pretty stable, if hacky, workaround. But hey, at least I can run my tests.
There is now a feature flag to enable inter-service communication. Simply add the FF_NETWORK_PER_BUILD variable with a value of "true" to the job’s variables and the services should be able to communicate with each other.
The reference issue.
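Applied to the job from the original question, that would look something like this (a sketch; only the added variable is new):

```yaml
test:
  image: node:latest
  stage: test
  variables:
    # attach the build and all services to one per-build network
    FF_NETWORK_PER_BUILD: "true"
    REDIS_URL: redis://redis:6379
  services:
    - redis:latest
    - registry.gitlab.com/other-app:latest
  script:
    - npm install
    - npm run test
```

With the flag enabled, the runner puts the build container and every service container on a single per-build Docker network, so other-app can resolve redis by hostname.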
Thank you, this is exactly what I was looking for. This is perfect for my case:
- Service docker:dind
- Service minio
In my job, the process needs to speak with minio, and it also needs to run containers that must talk to the same minio instance. Running a Docker container in host network mode inside docker:dind makes minio reachable via its service alias. Perfect!
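For anyone with a similar setup, a sketch of such a job (the image tags, aliases, and minio health endpoint here are my assumptions, not from the thread):

```yaml
integration:
  image: docker:24
  variables:
    FF_NETWORK_PER_BUILD: "true"
    # plain TCP to the dind daemon (no TLS) to keep the sketch simple
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  services:
    - name: docker:24-dind
      alias: docker
    - name: minio/minio:latest
      alias: minio
      command: ["server", "/data"]
  script:
    # the job itself reaches minio through the per-build network
    - wget -qO- http://minio:9000/minio/health/live
    # a container started inside dind with host networking shares dind's
    # network namespace, so the same alias resolves there too
    - docker run --rm --network host curlimages/curl -s http://minio:9000/minio/health/live
```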