Looking for advice on deploying front and back end together

Hi, I’m looking for advice and best practices on building and deploying our front end (Vue) and back end (Node) together. I’ve tried to search for relevant information, but I don’t understand the basic concepts or the vocabulary well enough to find anything helpful for our use case. The docs and examples I have found all seem to be based around a single code base (e.g. PHP).

Our code is currently in a single repository (though loosely coupled, so separating them would be easy). We are currently building the two parts separately and deploying manually, but that’s taking up a lot of time now that our team has grown and the code is shipping more frequently.

I’m looking for advice on how to use GitLab tools to automate the test, build, and deploy steps (including migrations, if possible). We are using GitLab.com and Digital Ocean. I just need a little orientation on how to build multiple code bases. I’ve seen suggestions ranging from installing multiple runners in different locations on the server (which seems like a dirty hack to me, but??) to using multiple executors, only I’m not sure if that’s the correct terminology. Any advice would be appreciated. TIA

I have the same situation: a mono-repo with several loosely coupled projects in different languages.

What we do is have just one runner, of course, and then define different jobs in the .gitlab-ci.yml file.

Something like:

stages:
  - test
  - build
  - deploy

testPython:
  stage: test
  script:
    - cd pythonDirectory
    - ./test.sh

testGo:
  stage: test
  script:
    - cd goDirectory
    - ./test.sh

buildPython:
  needs: [testPython]
  stage: build
  script:
    - cd pythonDirectory
    - ./build.sh
  artifacts:
    paths:
      - pythonDirectory/pythonExec

buildGo:
  needs: [testGo]
  stage: build
  script:
    - cd goDirectory
    - ./build.sh
  artifacts:
    paths:
      - goDirectory/goExec

deployPython:
  needs: [buildPython]
  stage: deploy
  script:
    - cd pythonDirectory
    - ./deploy.sh

deployGo:
  needs: [buildGo]
  stage: deploy
  script:
    - cd goDirectory
    - ./deploy.sh

The important bit is the “needs” keyword, which lets you declare the dependencies between jobs :slight_smile:
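One refinement that might help in a monorepo like this (my own suggestion, reusing the placeholder paths from the example above): rules with changes lets a job run only when files under its directory actually changed. Note that each component’s whole test/build/deploy chain should share the same rule, because needs fails if the needed job was filtered out of the pipeline. A sketch:

testGo:
  stage: test
  rules:
    # run only when something under goDirectory/ changed
    - changes:
        - goDirectory/**/*
  script:
    - cd goDirectory
    - ./test.sh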

Ah, thank you! The “needs” keyword is very helpful. Does the working directory reset to the project root for each job in the runner, or do I need to do a cd ../ type thing? Do jobs generally reference bash scripts? I’m good with bash scripting; I’m just curious whether that’s a common / best practice? Thanks again! Very helpful.

Yeah, the working directory resets to the project root for every job, as far as I know.

Personally, I use bash scripts for complicated things, if there are more than 6-7 commands, or if we already wrote the script to work locally.
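For the shorter cases, a YAML literal block scalar also keeps a handful of commands inline without a separate file; a quick sketch (the job name and commands are just placeholders):

buildFrontend:
  stage: build
  script:
    - |
      # a few inline commands instead of a ./build.sh
      echo "installing dependencies"
      npm ci
      npm run build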

The downside of external scripts is that you then cannot include the .gitlab-ci.yml from another project (the scripts won’t exist there), which makes the jobs harder to reuse :slight_smile:
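For context, the include mentioned here is GitLab’s include: keyword, which pulls CI configuration in from another project; a minimal sketch (the project path and file name are hypothetical):

include:
  - project: my-group/ci-templates   # hypothetical shared CI repo
    file: /templates/node.gitlab-ci.yml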


Thank you!

@rpadovani You can get around the reuse problem by putting your shared scripts into functions defined in the pipeline definition itself.

As a rule of thumb, that’s what we do for all the “CI” functions – the things that are only expected to run in CI and are not specific to a project. This is for things like publishing different artifact types, managing the published artifacts, and the like.

If a project can follow an existing pattern, then it need only include our ci project. If it needs to customize, it can make use of all the shared functions. Adding new patterns is also easy, because we have a growing library of functions that any new pattern can use.
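A minimal sketch of that pattern, assuming the common YAML-anchor approach (the key, function, and job names below are made up): the shared shell functions live in a hidden key, and each job splices them into its before_script. Since GitLab concatenates before_script and script into a single shell run, the functions are available to the script commands.

.shared-functions: &shared-functions |
  # shared CI-only helpers; any job that sources this block can call them
  publish_artifact() {
    echo "publishing $1"   # placeholder body
  }

publishJob:
  stage: deploy
  before_script:
    - *shared-functions
  script:
    - publish_artifact someArtifact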

Lastly, sometimes you want to debug locally. That’s not too hard either: you can use tools like spruce and jq to pull your functions out of the YAML into a file. Then you can have a test driver that sources your functions.
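In case it’s useful, a rough sketch of that extraction, assuming the hidden .shared-functions key from the sketch above (spruce json converts the YAML to JSON, and jq pulls the string out):

# extract the shared functions into a sourceable file
spruce json .gitlab-ci.yml | jq -r '.[".shared-functions"]' > ci-functions.sh

# then a local test driver can do:
source ci-functions.sh
publish_artifact someArtifact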

Now, for a monorepo this probably isn’t such a big deal, but it’s pretty helpful when dealing with a multi-repo microservice architecture.