How to Perform GitLab Container Scanning Before Pushing Images Using GitLab CI with Kaniko Without Using Artifacts?

I am currently working on setting up a GitLab CI/CD pipeline for building Docker images using Kaniko and scanning them for vulnerabilities using GitLab Container Scanning. However, I’m facing a challenge in performing the container scan before pushing the images to the registry, and I’m looking for alternatives to using artifacts.

Here’s the issue:

Context: Our organization wants to minimize the storage footprint of our GitLab Enterprise instance, so saving the images as artifacts during the CI/CD process is discouraged due to space constraints.

Security Concerns: We don’t want to push Docker images to the registry if they have critical vulnerabilities, so we want to ensure that images are scanned before they are pushed.

GitLab recommends pushing the images first and then scanning them, but this goes against our security policies.

So basically I have two questions:

  1. Are there alternative approaches or best practices for integrating GitLab Container Scanning into our GitLab CI/CD pipeline with Kaniko so that we can run the vulnerability scan before pushing the images to the registry, without using artifacts? (A rough sketch of what I have in mind is below.)
  2. Additionally, is it possible to configure GitLab CI to run two jobs on the same storage, where one job builds the image and another scans it, without needing to save the image as an artifact in between?
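
To make the first question concrete, the kind of pipeline I have in mind looks roughly like the sketch below. I have not verified this end to end: it assumes Kaniko’s `--no-push`/`--tarPath` flags, plain Trivy as the scanner (standing in for the Container Scanning template, which expects the image to already be in a registry), and crane for the final push, and it hands the tarball between jobs via the runner cache instead of artifacts, which only works reliably if the jobs share a runner or a distributed cache.

```yaml
stages:
  - build
  - scan
  - push

# Build the image but do not push it; write it to a tarball instead.
build_image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
      --no-push
      --tarPath "${CI_PROJECT_DIR}/image.tar"
  cache:
    key: "$CI_COMMIT_SHA"
    paths:
      - image.tar
    policy: push

# Fail the pipeline if the tarball contains critical vulnerabilities.
scan_image:
  stage: scan
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    - trivy image --input image.tar --exit-code 1 --severity CRITICAL
  cache:
    key: "$CI_COMMIT_SHA"
    paths:
      - image.tar
    policy: pull

# Only runs if the scan passed; pushes the already-built tarball.
push_image:
  stage: push
  image:
    name: gcr.io/go-containerregistry/crane:debug
    entrypoint: [""]
  script:
    - crane auth login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
    - crane push image.tar "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
  cache:
    key: "$CI_COMMIT_SHA"
    paths:
      - image.tar
    policy: pull
```

Of course, question 2 still applies here: the cache just moves the storage problem to wherever the runner’s cache lives.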

Hi,

I am facing the same issue. Did you find any solution?

Regards

Regarding your second question: you can avoid saving the image as an artifact in between by using the registry itself as the handoff. Define separate stages in your CI/CD pipeline, where the first job builds the image and pushes it to the registry, and the second job pulls that image and performs the vulnerability scan.
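
As a rough illustration (not tested, and the exact variable names depend on your GitLab version), that flow with Kaniko plus GitLab’s bundled Container Scanning template could look something like this:

```yaml
include:
  - template: Security/Container-Scanning.gitlab-ci.yml

stages:
  - build
  - test

# Build the image with Kaniko and push it to the project registry.
build_image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  before_script:
    # Registry credentials for Kaniko, as in GitLab's Kaniko docs.
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"

# The template's container_scanning job runs in the test stage and pulls
# the image back from the registry to scan it (after it has been pushed).
container_scanning:
  variables:
    CS_IMAGE: "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```

As noted, though, this is the push-first flow, so by itself it does not satisfy the requirement of never pushing an unscanned image.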

Nope :frowning:

Thanks for the reply. I know about the artifacts feature, but it isn’t what I need.
I want a way to share storage, for example by running the entire pipeline on the same pod in the k8s cluster; from what I understand, each job runs on a different pod.
I could also see this working with a shared PVC between the pods, but I did not find any docs about it.
I hope someone can clarify this.
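
To be concrete, something like the following in the runner’s config.toml is what I was imagining (just a sketch based on the Kubernetes executor’s volume options; the PVC name is made up, and it would presumably need to be ReadWriteMany if jobs can land on different nodes):

```toml
# gitlab-runner config.toml (Kubernetes executor)
[[runners]]
  name = "k8s-runner"
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab-runner"
    # Mount the same PersistentVolumeClaim into every job pod, so a
    # build job can write an image tarball and a scan job can read it.
    [[runners.kubernetes.volumes.pvc]]
      name = "ci-shared-scratch"   # hypothetical PVC name
      mount_path = "/mnt/shared"
```

Jobs could then write and read the tarball under something like /mnt/shared/$CI_PIPELINE_ID so that concurrent pipelines don’t step on each other.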

As an example, in Jenkins each pipeline runs on a pod by default and each stage in the pipeline is a container inside that pod, which for me is a more reasonable approach.