How to access the Postgres service from a child Docker container? How do I find the IP address of the postgres service on a GitLab shared runner?


The documentation recommends configuring a postgres instance as a service in .gitlab-ci.yml. CI jobs defined there are then able to connect to the postgres instance via the service name, ‘postgres’.

I have the following CI build architecture, running within a shared runner:

[architecture diagram]
The tusd, minio and listener containers are spawned within a docker-compose process, triggered inside the pytest CI job. The listener container writes information back to the postgres database.

The listener container is unable to communicate with the postgres container using the hostname, ‘postgres’. The hostname is unrecognised. How can the listener container communicate with the postgres database instance?

Do I use the IP address of the postgres container or the shared gitlab-runner? If so, how do I determine the IP address?

**Update 11/1/2019**
Resolved the issue following the advice below. I moved all services into the docker-compose file so that they can communicate with each other, including the postgres container. After some refactoring of the test environment initialisation, tests are now invoked via a docker-compose run command.
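The resulting setup looks roughly like this (the image names below are placeholders for the project’s actual registry images, and the service names are illustrative):

```yaml
# docker-compose.test.yml -- sketch only, not the exact project file
version: "3.7"

services:
  postgres:
    # pre-seeded database image pulled from the project registry
    image: registry.example.com/myproject/postgres-seeded:latest

  restapi:
    image: registry.example.com/myproject/restapi:latest
    depends_on:
      - postgres
    environment:
      # every compose service resolves the database by its service name
      - DATABASE_URL=postgresql://user:pass@postgres:5432/dbname
```

The CI job then starts the tests with something along the lines of `docker-compose -f docker-compose.test.yml run restapi pytest`.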

Now able to successfully run tests using gitlab-shared runner…

Kind regards


Hello, with your setup I would suggest you spawn Postgres in your docker-compose setup as well. Services are most useful when only the build container tries to access them, especially as you have to take special care because of possibly shared IP ranges.

That said, I think services are additionally exposed via environment variables, which you could use when running docker-compose up -d. Try this with a test job and just use the “set” command as the script.
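For example, a throwaway job to inspect what the runner actually provides might look like this (job name arbitrary):

```yaml
# .gitlab-ci.yml -- temporary job to dump the environment
inspect-env:
  stage: test
  services:
    - postgres:latest
  script:
    # "set" prints all environment variables, including any
    # service-related ones the runner injects
    - set
```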


Hi mfriedenhagen,

Thanks for responding. Much appreciated :smiley:

Apologies for the delayed response, I have been away over the Christmas period.

If I move the postgres instance into the docker-compose setup is it possible for the pytest container to access it as well? I will briefly explain further context.

The pytest instance is a CI job that installs a python rest api and runs pytest to perform unit and integration testing. The tests make requests to the rest api instance, which in turn reads and writes from a postgres database backend. The docker-compose containers provide an upload feature. When the upload is completed the listener container writes meta data back to the same postgres database backend.

In summary, the postgres database backend is shared between the pytest (rest api) container and docker-compose services. The postgres database instance is a docker image pulled from the project’s gitlab container registry for each pipeline run. This docker image sets up the initial state of the database, e.g. loads lookup table scripts etc.

Since posting, I briefly tried a test job, listed below, together with docker-compose file. In this approach I have created two containers in the CI job:

  1. postgres: Postgres database instance.
  2. restapi: Starts pytest. Some tests spawn the docker-compose services. Both the docker-compose services and restapi write to the same postgres database.

Here all docker-compose services are on the same network as the postgres and pytest containers.

  stage: build
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://docker:2375
    SHARED_PATH: ${CI_PROJECT_DIR}/fileserver
  services:
    - docker:dind
  script:
    - docker -H $DOCKER_HOST network create -d bridge localnet
    - docker network connect localnet postgres

    - docker -H $DOCKER_HOST build -t restapi:latest --build-arg ARG_CI_JOB_TOKEN=${CI_JOB_TOKEN} --build-arg ARG_CI_REGISTRY=${CI_REGISTRY} --build-arg PG_USER=${PG_USER} --build-arg PG_PASSWORD=${PG_PASSWORD} --build-arg PG_DB=${PG_DB} --build-arg TOKEN=${TOKEN} .
    - docker -H $DOCKER_HOST run -v /var/run/docker.sock:/var/run/docker.sock --network localnet --privileged=true restapi:latest

From what I understand, after some reading/testing, docker-compose services must be on the same network as external containers/services to be able to communicate.

With this approach the docker-compose services start, but the restapi:latest container times out when trying to access the service name. I think this is because I am binding the volume /var/run/docker.sock.

If I remove the volume binding, the docker-compose service is unable to start when it is spawned by the restapi container: it is unable to locate a docker daemon on http+docker://localhost… Do I need to set the DOCKER_HOST environment variable to tcp://docker:2375 when building the restapi container? Will it recognise the docker:dind service?
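One thing I could try (untested) is passing DOCKER_HOST through at run time rather than baking it in at build time, so that docker-compose inside the container targets the dind daemon instead of looking for a local socket:

```yaml
  script:
    # pass the dind endpoint into the container; the socket bind mount
    # should then no longer be needed
    - docker -H $DOCKER_HOST run
        -e DOCKER_HOST=tcp://docker:2375
        --network localnet
        restapi:latest
```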

I have listed the docker-compose file below…

version: "3.7"

networks:
  localnet:
    external: true

services:
  minio:
    env_file:
      - ./config/.minio.test.env
    volumes:
      - ./minio-test:/data
    networks:
      - localnet

  tusd:
    env_file:
      - ./config/.tusd.test.env
    volumes:
      - ./certs:/server/certs
    networks:
      - localnet

  listener:
    env_file:
      - ./config/.uploaded-listener.test.env
    networks:
      - localnet
Kind Regards



  • why don’t you use --network localnet when starting postgres?
  • where do you call docker-compose up -d or the likes?
  • why don’t you start postgres in the docker-compose file?
  • how do you try to test “restapi”?

My guess would be that you should put everything into the docker-compose file and just expose the ports of restapi, so you can test it from the build container spawned by the gitlab-runner.
If you e.g. expose 8080 you should be able to access it from the build container at docker:8080.
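Concretely, that would mean something along these lines in the compose file (service name and port assumed, not taken from your actual setup):

```yaml
  restapi:
    ports:
      # published on the dind host, reachable from the build
      # container as docker:8080
      - "8080:8080"
    networks:
      - localnet
```

The build container’s script could then hit it with e.g. `curl http://docker:8080/` (the exact path depends on your API).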


Hello Mirko,

I will have to think further about refactoring to place everything in docker-compose…

why don’t you use --network localnet when starting postgres?

Yes, I agree that I could use --network localnet to start the postgres container.

where do you call docker-compose up -d or the likes?

This is done inside the restapi container via lovely-pytest-docker, which accepts the paths of docker-compose file(s). The docker-compose process is triggered by the docker_services pytest fixture, inside the restapi container.

how do you try to test “restapi”?

The restapi is implemented using the pyramid framework with a sqlalchemy postgres database backend.
The restapi container installs the pyramid implementation and its associated unit / integration tests.
The endpoints of the restapi are tested using webtest and pytest. Tests check that resources are maintained correctly when invoking restapi endpoints, i.e. that http response codes and database state are as expected.
The upload functionality is provided by the docker-compose services behind a tusd nginx proxy. When an upload is completed, metadata is written back to the same postgres database via the listener docker-compose service. Each test of the upload functionality triggers a docker-compose process with the aid of the lovely-pytest-docker pytest plugin. All this takes place within the restapi container.

why don’t you start postgres in the docker-compose file?

The pyramid restapi uses a sqlalchemy postgres database backend. I am not sure whether I could reference the postgres database inside the docker-compose instance from the pyramid views; I would then have to refactor so that the docker-compose instance is triggered when the pyramid framework starts. Presumably I would be able to use a url of the form postgres://userid:passwd@docker-compose-service-name:5432/dbname?
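i.e. something like this in the compose file (credentials, db name and the SQLALCHEMY_URL variable name are placeholders; sqlalchemy itself prefers the postgresql:// scheme over postgres://):

```yaml
  restapi:
    environment:
      # "postgres" resolves via compose's internal DNS to the
      # postgres service on the same network
      - SQLALCHEMY_URL=postgresql://userid:passwd@postgres:5432/dbname
    depends_on:
      - postgres
```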

Another alternative could be a kubernetes cluster, with tusd, minio, listener and restapi deployed as kubernetes pods. I am not sure this would provide an easy local development environment, though, and it would probably cost more…