Dev pipeline strategy

Hello everyone,

I have a website with a database running on our Kubernetes cluster. What I would like is for a copy of the production database to be made whenever I create a new branch, with a dev database then created from that backup. I already have that part working with a Kubernetes Job.
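To give an idea of the shape of that Job, here is a minimal sketch assuming a PostgreSQL database; the postgres-prod service, the db-credentials secret, and the dev database name are placeholders, not the exact manifest I use:

```yaml
# Minimal sketch of the clone Job (assumes PostgreSQL; all names are placeholders).
apiVersion: batch/v1
kind: Job
metadata:
  name: clone-prod-db
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: clone
          image: postgres:16
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials   # placeholder secret
                  key: password
            - name: DEV_DB
              value: mybranchdb          # placeholder dev database name
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Create an empty dev database, then copy the production data into it.
              createdb -h postgres-prod -U app "$DEV_DB"
              pg_dump -h postgres-prod -U app app_production | psql -h postgres-prod -U app "$DEV_DB"
```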

What I am curious about is how I would carry the changes made to that dev database across the multiple commits on a branch.

So far I have thought about:

  • Using an Environment with $CI_COMMIT_REF_SLUG as its name. I would also name the database after the slug and let the script in the job sort out what needs to happen to the database (a rough sketch of the idea is below this list).

    • The issue I see here is that I still don’t know what to do with the database when the branch is merged upstream to main.
    • The next commit doesn’t actually know about the last commit, so it is just a guess. Also, if a branch is made from this branch, it would be treated as a new database.
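Here is roughly what I have in mind for that first option; the image and the create_or_update_db.sh script are placeholders just to show the shape of the job, not something I have written yet:

```yaml
# Sketch of the environment-per-branch idea; script and image are placeholders.
deploy_review:
  stage: deploy
  image: alpine:3.20
  variables:
    DEV_DB: $CI_COMMIT_REF_SLUG          # database named after the branch slug
  environment:
    name: review/$CI_COMMIT_REF_SLUG     # one environment per branch
  script:
    # The script would have to work out whether the database already exists
    # (a later commit on the same branch) or still needs to be cloned from prod.
    - ./ci/create_or_update_db.sh "$DEV_DB"
  rules:
    - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
```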

I am using GitLab’s SaaS offering, so I am not 100% sure which version that currently is. Any help or advice would be amazing, as I can’t seem to wrap my head around what the best strategy would be.

So I have done something similar to this. I have come up with a way to deploy review apps, including a review app database. Everything for the review app is created based on the $CI_COMMIT_BRANCH variable, so future commits don’t have to create everything anew; they just update what is already there. This is an actual line from my .gitlab-ci.yml file:

DATABASE=$(echo $CI_COMMIT_BRANCH | sed 's/-//g' | sed 's/\///g' | sed 's/\.//g' | sed 's/ //g' | tr '[:upper:]' '[:lower:]')
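For context, that line sits inside the review deploy job, so every commit on the same branch resolves to the same database name and the deploy script can update what is already there. A trimmed-down illustration of the surrounding job (the deploy script itself is a placeholder, not my actual command):

```yaml
# Trimmed-down illustration; the deploy script is a placeholder.
deploy_review_app:
  stage: deploy
  environment:
    name: review/$CI_COMMIT_BRANCH
  script:
    # Strip characters that are awkward in a database name and lowercase the rest.
    - DATABASE=$(echo $CI_COMMIT_BRANCH | sed 's/-//g' | sed 's/\///g' | sed 's/\.//g' | sed 's/ //g' | tr '[:upper:]' '[:lower:]')
    # Same branch, same name: later commits find the existing database and update it.
    - ./ci/deploy-review-app.sh "$DATABASE"
  rules:
    - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
```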

Once we are done with the review app and everything looks good, a merge request is opened. When a merge request is opened, all of the review app resources are removed from k8s, and the app is then deployed to a stage environment.
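The teardown and promotion are driven by which pipeline is running; roughly like this, where the two scripts are placeholders for whatever actually deletes the review resources and deploys to stage:

```yaml
# Illustrative only; the scripts are placeholders.
stages: [cleanup, deploy]

remove_review_app:
  stage: cleanup
  script:
    # Delete the review app's k8s resources, including its database.
    - ./ci/remove-review-app.sh "$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME"
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

deploy_stage:
  stage: deploy
  environment:
    name: stage
  script:
    - ./ci/deploy-stage.sh
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```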