Making services in GitLab CI talk to each other

I’d like to use GitLab CI to run tests. The application on which I wish to run the tests depends on a different application (also made by me and published in the GitLab docker container registry) and that application depends on redis.

I’ve configured my .gitlab-ci.yml file thusly:

```yaml
stages:
  - test

test:
  image: node:latest
  stage: test
  services:
    - redis:latest
  variables:
    REDIS_URL: redis://redis:6379
  script:
    - npm install
    - npm run lint
    - npm run test
```

When executing the job I get an error saying that my other-app could not connect to redis://redis:6379 because it couldn’t find the specified address. My guess is that the other-app container can’t talk to the redis container… is that correct? How would I go about fixing that issue (if that’s even possible)?


Yes, services being unable to communicate with each other seems like a glaring hole in the otherwise very well considered system.

Disappointing that there isn’t more traction on this question.

I found a workaround that you might be able to use as well; it is a little hacky. Here goes.

For my use case I want to run integration tests against a Django instance which uses a Postgres database. As you suggest, it would be optimal to run both the Django instance and Postgres as services, and use the test environment proper to run the integration tests. Instead, I was able to get everything to coordinate correctly by running the database as a service, the Django instance as the test environment (in the background), and the tests in the foreground. Some coordination was required to make everything actually work; here is what my .gitlab-ci.yml ended up looking like:

```yaml
integration-test:
  stage: integration-test
  image:
    # the other-app image name goes here;
    # override whatever the other-app wants to do
    entrypoint: [""]
  before_script:
    - thing-needed-for-tests-to-run
  # The BASE_URL is the network address for the other service that I'm trying
  # to run integration tests against; since it is the same container I'm
  # currently in, I don't know what it is yet
  # variables:
  #   BASE_URL: who-knows
  services:
    - name: postgres:9.6
      alias: db
    # not here anymore, now under image
    # - name:
    #   alias: app
  script:
    - ip a
    - container_ip=$(ip a | grep -v "127.0.0.1" | grep -v inet6 | grep inet | head -1 | awk '{ print $2 }' | awk -F/ '{ print $1 }')
    - echo $container_ip
    - cat /etc/hosts
    # Isolate the "hostname" of the container that we're running in
    - export BASE_URL=$(grep $container_ip /etc/hosts | head -1 | awk '{ print $2 }')
    - echo $BASE_URL
    # make a new entry in /etc/hosts to allow testing of a subdomain type
    # thing (this is particular to my use case)
    - echo "${container_ip} testing.${BASE_URL}" >> /etc/hosts
    - PRIOR=$(pwd)
    # Start the app
    - cd /wherever/my/app/is/in/the/docker/container
    - python manage.py migrate
    - python manage.py seed_live_test
    - python manage.py runserver &
    - cd $PRIOR
    - python -m pip install dependencies-from-above
    - cd wherever/the/other/deps/are
    - python -m pip install -r requirements.test.txt
    - cd test
    - pytest
```

This seems like a pretty stable, if hacky, workaround. But hey, at least I can run my tests.
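The hostname-discovery trick in the script can be tried in isolation outside a CI job. Here is a sketch run against a fabricated hosts file; the IP address and runner hostname below are made-up values, not anything from a real runner:

```shell
# Parse a container "hostname" out of an /etc/hosts-style file, the same
# way the job script above does. The sample file stands in for the real
# /etc/hosts; 172.17.0.3 and runner-abc123 are invented for illustration.
hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n172.17.0.3 runner-abc123\n' > "$hosts_file"

# In the CI job this IP comes from parsing the output of `ip a`.
container_ip=172.17.0.3

# Take the first matching line and keep its second column (the hostname).
BASE_URL=$(grep "$container_ip" "$hosts_file" | head -1 | awk '{ print $2 }')
echo "$BASE_URL"   # runner-abc123

rm -f "$hosts_file"
```

In a real job the output would be the runner-generated container hostname, which is why the script only discovers it at run time.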

There is now a feature flag to enable inter-service communication. Simply add the FF_NETWORK_PER_BUILD variable with a non-null value to the job’s variables, and the services should be able to communicate.
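A minimal sketch of what that could look like for the original question; the other-app registry path and alias here are placeholders, not the real image name:

```yaml
test:
  stage: test
  image: node:latest
  variables:
    # any non-empty value enables the per-job network
    FF_NETWORK_PER_BUILD: "true"
    REDIS_URL: redis://redis:6379
  services:
    - redis:latest
    # placeholder path; substitute the real registry image
    - name: registry.gitlab.com/my-group/other-app:latest
      alias: other-app
  script:
    - npm install
    - npm run test
```

With the flag set, the build and all services are attached to one per-job network, so the other-app service can resolve redis by hostname at redis:6379, and the build container can reach both.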

The reference issue.