Please see the below snippet, adapted from GitLab CI’s postgres service example project:
```yaml
job1:
  services:
    - name: postgres
      # Configure postgres service (https://hub.docker.com/_/postgres/)
      # (attempted volume mapping for the service went here;
      #  that syntax does not work)
  image: # <any>
  script:
    # some commands that update the postgres database
    - ...

job2:
  services:
    - name: postgres
  script:
    # run some postgres commands on the
    # mounted (mapped) database volume
    - ...
```
I want to map a volume on the runner container to the postgres service.
Why do I want to do this? It is my attempted workaround: I want to use that volume as an artifact, then map it into a postgres image in the next job to perform some postgres operations on the database (specifically pg_dump, to export the updated database).
The postgres container used by the service already has a facility to map a volume, so I would imagine the same should be available in GitLab CI jobs as well. However, the above syntax doesn’t work, and I couldn’t find the proper syntax in the GitLab CI documentation.
What is the correct syntax to use here to map the volume to the postgres service?
GitLab CI version: gitlab-runner 16.3.0 (8ec04662)
Volume mounting is not supported in GitLab CI job configuration, because GitLab Runner has different types of executors, including non-Docker ones, where volume mapping would be quite impossible to do.
Here is the complete GitLab CI reference: https://docs.gitlab.com/ee/ci/yaml
If it’s not documented there, it’s not possible.
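One caveat worth noting: with the Docker executor specifically, volumes can be configured at the runner level, by the runner administrator in the runner’s `config.toml`, not per-job in `.gitlab-ci.yml`, and they then apply to every job that runner picks up. A sketch, with an assumed host path:

```toml
# config.toml on the runner host (administrator-level setting, not job YAML);
# the /srv/pg-data host path is an assumption for illustration
[[runners]]
  executor = "docker"
  [runners.docker]
    volumes = ["/cache", "/srv/pg-data:/var/lib/postgresql/data"]
```

This only helps if you control the runner, and it mounts the volume into the containers that runner starts rather than into one specific job, so it is not a substitute for per-job syntax.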
As I understand it, you need to work with the same PostgreSQL DB in several jobs. One possibility to achieve this is to dump the data in one job and restore it in each subsequent job as needed:
```yaml
job1:
  services:
    - name: postgres:latest
  script:
    # initialize and do something with the database
    - ...
    # custom-format dump; pg_restore cannot read a plain-SQL dump
    - pg_dump -Fc "$POSTGRES_DB" > db_dump.sql

job2:
  services:
    - name: postgres:latest
  script:
    - pg_restore -d "$POSTGRES_DB" db_dump.sql
    # do other things with your database
    - ...
```
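To actually hand the dump file from the first job to the second, it has to be declared as an artifact. A sketch of the full wiring, with illustrative job names and credential values; the `postgres` hostname and `POSTGRES_*` variables come from the official image’s documented configuration:

```yaml
job1:
  image: postgres:latest        # provides pg_dump without extra installs
  services:
    - name: postgres:latest
  variables:
    POSTGRES_DB: mydb           # illustrative values
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: secret
    PGPASSWORD: secret          # lets pg_dump/pg_restore authenticate non-interactively
  script:
    - pg_dump -Fc -h postgres -U runner mydb > db_dump.sql
  artifacts:
    paths:
      - db_dump.sql

job2:
  image: postgres:latest
  services:
    - name: postgres:latest
  variables:
    POSTGRES_DB: mydb
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: secret
    PGPASSWORD: secret
  script:
    - pg_restore -h postgres -U runner -d mydb db_dump.sql
```

Using `postgres:latest` as the job image as well means the client tools match the server version and nothing extra has to be installed in the job.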
Alternatively, depending on your environment and on the complexity and size of your DB, you can spin up (or start a stopped) real PostgreSQL database at your (cloud) provider. A very simple example for AWS:
```yaml
- aws rds create-db-instance --db-instance-identifier $DB_INSTANCE ...
# the endpoint is only assigned once the instance is available
- aws rds wait db-instance-available --db-instance-identifier $DB_INSTANCE
# DBInstances is an array, hence the [0] index
- export PGHOST=$(aws rds describe-db-instances --db-instance-identifier $DB_INSTANCE --output json | jq -r '.DBInstances[0].Endpoint.Address')
- psql -h $PGHOST ...
- aws rds delete-db-instance --db-instance-identifier $DB_INSTANCE --skip-final-snapshot
```
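Note the jq path: `describe-db-instances` returns a `DBInstances` array, so the endpoint lives at `.DBInstances[0].Endpoint.Address`. A quick check against a canned (made-up) response:

```shell
# sanity-check the jq path against a minimal canned describe-db-instances response
json='{"DBInstances":[{"Endpoint":{"Address":"db.example.rds.amazonaws.com","Port":5432}}]}'
printf '%s' "$json" | jq -r '.DBInstances[0].Endpoint.Address'
# prints db.example.rds.amazonaws.com
```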
Thank you for the ideas.
The “dump and restore” workaround is what I had been using, but it requires me to install the postgres client tools again inside the job (even though there’s already a linked service running postgres). I had hoped that GitLab would have thought of a cleaner solution. I will go with that for now.
This approach of passing huge files between jobs in the same pipeline using “artifacts” and installing lots of other software in every job (usually to run just one command) looks incredibly wasteful to me.
But ok, it’s GitLab, so I should get used to the standard solution of “just build a new container for every single command you want to run”. Anyway, even if there’s a feature request for it, they won’t bother for years unless you’re a “premium customer with 2000 seats”.
“I am done with GitLab” is what I would have loved to say, but unfortunately, I have to use it at work, so I guess the next best thing is “I am done with trying to make sense of GitLab”. I will make do with whatever workarounds get my work done and stop worrying about why GitLab can’t care about the usability of their products.