CI job causing segmentation fault only from GitLab - memory issue?

I am attempting to set up GitLab CI/CD for my project. The GitLab instance is hosted internally at my company, and the gitlab-runner is hosted on my project's separate virtual machine. I'm using the bash shell on CentOS 7.

When I run the job locally with `gitlab-runner exec shell`, the program the job runs executes without issues. However, when I commit to my repository and GitLab executes the runner, the program generates a segmentation fault and the job fails. This made me wonder: is the runner hitting some kind of memory limit when it tries to run the program? We saw segfaults running the program in our own environment when our machine had only 4 GB of memory allocated, and they disappeared when that was raised to 16 GB.

I'm totally new to GitLab CI/CD and have not had much luck searching for answers on this issue, so any feedback or ideas on how to proceed would be greatly appreciated!


Please share the GitLab version you are using (you can find it under /help on your server). Also, please add the content of your .gitlab-ci.yml so we can get a better idea of your CI pipeline and the executed jobs. Last, please add some details about the project itself, e.g. are you compiling C++ code, or doing some resource-intensive package building?


Thank you for your response!

GitLab version: GitLab Community Edition 12.6.4

Contents of my .gitlab-ci.yml file:

```yaml
stages:
  - sweep
  - deploy

sweep:
  stage: sweep
  script:
    - make all
    - bash

deploy:
  stage: deploy
  only:
    - master
  script:
    - echo "Do your deploy here"
```

The project is a large Fortran codebase that does analysis of electrical power systems. I am trying to build the Fortran program that does the analysis (this succeeds) and then run it on some test cases (this generates the segmentation fault). The CI script calls a Perl script that runs a sweep of test cases, passing the appropriate input files and command-line arguments to a second Perl script that calls the Fortran program. Everything runs up until the Fortran program, which starts and then hits a segmentation fault.


Sounds interesting. I have zero knowledge about Fortran, but I could imagine that the environment inside the GitLab runner's shell executor is different from a "normal" Linux shell.

In such situations, I'd try to go the iterative way: reduce the number of tests fired, and see at which point the tests start failing. Maybe there is one that does something weird with memory allocation.

What happens if you run the program under gdb and capture the stack trace? Does that crash too, or does it survive because the debugger slows things down?

Maybe the stack trace from the segfault provides some indication of what's going wrong here.



Thank you for the advice! Unfortunately, every test is the same (running the entire program from beginning to end), just with different inputs. I just tried running a single test and still got the segmentation fault. I'm not sure how to set it up to run with gdb as I haven't used it before, but I will do some research and give that a try.


So, it turns out that it was a stack size issue in the bash shell that the gitlab-runner was using. I added an increase to the stack size (`ulimit -S -s 1000000`) to my test script before running the program, and it was able to run.
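For anyone hitting the same issue, a minimal sketch of the fix as it would sit at the top of the test script (the commented-out driver name is a placeholder; `ulimit -s` takes the soft stack limit in KB, so 1000000 is roughly 1 GB):

```shell
# Raise the soft stack limit for this shell and its child processes.
# The value is in KB: 1000000 KB is roughly 1 GB of stack, which is
# enough for large Fortran arrays placed on the stack.
ulimit -S -s 1000000
# Verify the new limit in the job log (prints "1000000").
ulimit -s
# ...then invoke the actual test driver, e.g.:
# bash run_sweep.sh   # (placeholder name)
```

Note that `ulimit` only affects the current shell and its children, so it has to run in the same shell (or script) that launches the program, not in a separate CI step.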