Debugging .gitlab-ci.yml locally with jobs that contain dependencies
I’m using the GitLab runner to debug this .gitlab-ci.yml locally:
```yaml
image: "openfoam-v2012_ubuntu:focal"

stages:
  - build
  - run
  - visualize
  # - test

build_apps:
  stage: build
  script:
    - source /opt/OpenFOAM/OpenFOAM-v2012/etc/bashrc || true
    - ./Allwmake
    - ls $FOAM_USER_APPBIN
    - ls $FOAM_APPBIN
  artifacts:
    paths:
      - /root/OpenFOAM/-v2012/platforms/linux64GccDPInt32Opt/bin/foamTestFvcReconstruct

param_study:
  stage: run
  dependencies:
    - build_apps
  script:
    - source /opt/OpenFOAM/OpenFOAM-v2012/etc/bashrc || true
    - ls $FOAM_USER_APPBIN
```
In the build_apps job, executables are compiled and installed that are required by param_study. Is it possible to debug the entire pipeline locally, rather than just a single job, using a command similar to the one below?
```shell
sudo gitlab-runner exec docker --docker-pull-policy never build_apps
```
I expected it to be possible to execute the entire pipeline, or a sub-set of the pipeline, along these lines:
```shell
sudo gitlab-runner exec docker --docker-pull-policy never build_apps param_study
```
But this doesn’t seem to be possible. Is there another argument to exec that tells the runner to execute every job in the pipeline? I haven’t found anything in the --help output. Thanks!
I am bumping this thread because gitlab-ci-local doesn’t seem to install or work properly on Ubuntu 20.04. Is there an “official” way to debug CI pipelines locally with gitlab-runner or gitlab-multi-runner?
Can I run one job after another and re-use the artifacts when using gitlab-runner exec to test the jobs locally?
The simple answer to this question is “no”: there is gitlab-runner exec, and that’s it.
If you are using Docker it’s a bit easier: you can just run a Bash shell on the image you use locally, and manually step through the commands in your .gitlab-ci.yml, but there’s no simple way to automate this that I know of.
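For example, a minimal sketch of that manual approach, assuming your repository is the current directory and using the image name from the pipeline above:

```shell
# Open an interactive shell in the CI image, with the repo mounted at /repo.
docker run --rm -it -v "$PWD":/repo -w /repo openfoam-v2012_ubuntu:focal bash

# Then, inside the container, paste the build_apps script lines one by one:
#   source /opt/OpenFOAM/OpenFOAM-v2012/etc/bashrc || true
#   ./Allwmake
#   ls $FOAM_USER_APPBIN
```

This lets you inspect the environment between commands, which is exactly what you can’t do when the runner executes the whole script in one go.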
There is also a problem with gitlab-runner exec that I can’t figure out. When I run the first job - the build job - everything is fine. Then if I want to move on and manually start the next job - the one that depends on the artifacts from the build job - the job fails because it can’t find the required data. It starts from scratch (from a fresh local repository checkout). I’ve been reading about using cache but this didn’t help. Is there a way to run job A with gitlab-runner exec, locally, create artifacts, and re-use them locally when running a job B that depends on A?
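One workaround I haven’t fully tested: gitlab-runner exec docker accepts a --docker-volumes option, so you can bind-mount a host directory into both jobs and copy the build outputs across yourself. The /tmp/ci-artifacts path, the /artifacts mount point, and the extra cp lines are my own additions, not part of the pipeline above:

```shell
# Untested sketch: share build outputs between two local `exec` runs
# via a host bind mount. /tmp/ci-artifacts is an arbitrary scratch dir.
mkdir -p /tmp/ci-artifacts

# Run the build job; its script would need an extra line such as
#   cp -r "$FOAM_USER_APPBIN"/. /artifacts/
sudo gitlab-runner exec docker \
  --docker-pull-policy never \
  --docker-volumes /tmp/ci-artifacts:/artifacts \
  build_apps

# Run the dependent job with the same mount, restoring the binaries
# at the start of its script, e.g.
#   cp -r /artifacts/. "$FOAM_USER_APPBIN"/
sudo gitlab-runner exec docker \
  --docker-pull-policy never \
  --docker-volumes /tmp/ci-artifacts:/artifacts \
  param_study
```

This side-steps the artifacts mechanism entirely rather than emulating it, so the script changes would need to be guarded (or removed) before pushing.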
I’ve never tried this, but have you had a go at using gitlab-runner cache-archiver and so on?
My guess is that it probably isn’t possible to do what you want. If you can’t just add lots of debug statements to your .gitlab-ci.yml, you might be better off going to the location of your last build, su-ing to gitlab-runner and running the commands manually. I realise that doesn’t help with the cache…
I needed to debug some Behat tests that were failing only in the pipeline, and that helped!
The only catch is that I need to run the command with sudo because it tries to change the ownership of the files at some point; indeed, after the pipeline runs locally, all files and folders belong to root and are chmod 777. I’d recommend not running this in a repo from which you intend to push changes afterwards.
Create a duplicate repo locally, run the pipeline there, and mirror your changes back into your clean repo.
So weirdly, I retried from a fresh copy of my repo. This time it didn’t fail on the chgrp command (so I was able to run it without sudo), but it is still overriding the permissions of my files.
I guess it’s not related to GCL, but might be because I’m using Robo in my pipeline.