Hi everyone, I'm new to working with pipelines, so I'd like to ask for help with a problem I'm facing.
I'm trying to integrate Semgrep (and later on a secret detection tool) into my preproduction pipeline, which is triggered by an API call. When I trigger it, the test stage should start and analyze every file of the project on the develop branch (for the moment; later I was thinking of scanning only the files updated since the last pipeline, if possible).
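(For that later idea, this untested sketch is roughly what I had in mind, assuming the predefined GitLab variables CI_COMMIT_BEFORE_SHA and CI_COMMIT_SHA; the all-zeros check is for a branch's first pipeline, where there is no previous commit:)

```yaml
# Sketch only, not tested: build the file list from the commits
# pushed since the previous pipeline instead of the whole tree.
script:
  - |
    if [ "$CI_COMMIT_BEFORE_SHA" != "0000000000000000000000000000000000000000" ]; then
      # Only files changed between the last pipeline's commit and this one
      git diff --name-only "$CI_COMMIT_BEFORE_SHA" "$CI_COMMIT_SHA" > changed_files.txt
    else
      # First pipeline on the branch: fall back to scanning everything
      git ls-tree -r HEAD --name-only > changed_files.txt
    fi
```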
Also, I'm running a private runner, not the one provided by GitLab (where, by the way, this code works).
So what I've done is use the Semgrep Docker image as the job image and run the semgrep application with a list of files.
The stage looks like this in my .gitlab-ci.yml:
stages:
  - test
  - building
  - deploy

semgrep-sast:
  stage: test
  image: semgrep/semgrep
  before_script:
    - |
      git fetch origin develop
      git checkout develop
      # pip install semgrep
      echo "Checking Semgrep installation..."
      semgrep --version
      echo "Semgrep is ready to use."
  script:
    - |
      git ls-tree -r develop --name-only > changed_files.txt 2>/dev/null || git ls-tree -r HEAD --name-only > changed_files.txt
      # Debugging: show detected files
      echo "Files to scan:"
      cat changed_files.txt
      semgrep scan --config p/ci $(cat changed_files.txt) --sarif --output gl-sast-report.json
  allow_failure: false
  artifacts:
    name: sast
    paths:
      - gl-sast-report.json
    reports:
      sast: gl-sast-report.json
  rules:
    - if: '$IS_DEV == "true"'
      when: on_success
    - when: never
But when I try to trigger my pipeline with an HTTP request, the job log shows an error saying that semgrep doesn't exist:
Switched to a new branch 'develop'
Branch 'develop' set up to track remote branch 'develop' from 'origin'.
Checking Semgrep installation...
bash: line 135: semgrep: command not found
Uploading artifacts for failed job
00:00
Uploading artifacts...
Runtime platform arch=amd64 os=linux pid=178064 revision=6a42249e version=14.8.1
WARNING: gl-sast-report.json: no matching files
ERROR: No files to upload
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit status 1
I don't understand what I'm missing…
I was thinking about using a generic Docker image instead and then installing Semgrep in it, but maybe there's a better way of doing it?
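(Something like this untested sketch is what I had in mind: a plain Python image with Semgrep installed via pip instead of the semgrep/semgrep image. The image tag and scan target are my guesses:)

```yaml
# Untested fallback idea: install Semgrep with pip in a generic image
semgrep-sast:
  stage: test
  image: python:3.12
  before_script:
    - pip install semgrep
    - semgrep --version
  script:
    - semgrep scan --config p/ci --sarif --output gl-sast-report.json .
```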
Since I have many pushes to my develop branch and I'm not using merge requests, I suppose the guide that says to add the "test" stage and include the Security/SAST.gitlab-ci.yml template doesn't apply to my case (ref. GitLab Security Essentials - Hands-On Lab: Configure SAST, Secret Detection, and DAST | The GitLab Handbook).
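(For reference, as I understand the guide, the template would be pulled in with an include like this; I haven't gone this route because of the merge-request assumption:)

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml
```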
For the secret detection part I'm using the Security/Secret-Detection.gitlab-ci.yml template from the guide this time, with this stage in my .gitlab-ci.yml:
secret_detection:
  stage: test
  dependencies:
    - semgrep-sast
  before_script:
    - |
      git fetch origin develop
      git checkout develop
  rules:
    - if: '$IS_DEV == "true"'
      when: on_success
    - when: never
  allow_failure: false
  # artifacts:
  #   paths:
  #     - gl-secret-detection-report.json
  #   reports:
  #     secret_detection: gl-secret-detection-report.json
but here I get a different error saying that /analyzer doesn't exist:
* branch develop -> FETCH_HEAD
Switched to a new branch 'develop'
Branch 'develop' set up to track remote branch 'develop' from 'origin'.
$ /analyzer run
bash: line 134: /analyzer: No such file or directory
Uploading artifacts for failed job
00:00
Uploading artifacts...
Runtime platform arch=amd64 os=linux pid=177857 revision=6a42249e version=14.8.1
WARNING: gl-secret-detection-report.json: no matching files
ERROR: No files to upload
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit status 1
Here too, I don't know where to start…
Thanks in advance for everything, hopefully someone can help.
(And since I'm just starting out, some general tips would be great too!)