GitLab Runner marks job as failed when it's actually successful

Hi.
I was lucky enough to run into this odd thing.
I am using a runner deployed to a k8s cluster to run a simple bash one-liner:

  (jfrog rt s ${repo}/${name}/* --url "${the url}" --apikey "$KEY" 2> /dev/null | sort -r | awk -v ym=$YM '$2 ~ ym {print $2}' | awk -F'/' '{print $5}' | awk -F'.' '{print $2}' | head -n1) 

This works 100% of the time: I tested it in the docker container and ran a single pipeline with a single stage more than 30 times. As soon as I added more than one job to the stage, each running exactly the same one-liner above, 3 out of 5 jobs fail, and it's very random, BUT the script actually works and gives the expected output.
I have verified that these jobs are picked up by different pods. I tried running them with resource groups and tried adding a rule to retry when the script fails - it made no difference.

I am running out of ideas to try. I keep getting ERROR: Job failed: command terminated with exit code 1
Yet the script works.

Any ideas what could be causing this?

Thanks :slight_smile:

I think I figured it out. For those who happen to run into this issue: it turned out that head -n1 was breaking the script. The workaround is to use awk 'NR == 1' instead.
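
For reference, this is roughly what the working version looks like - the same pipeline as above, just with head -n1 swapped for awk 'NR == 1' (repo, name, url and key are the same placeholders as in the original snippet):

  (jfrog rt s ${repo}/${name}/* --url "${the url}" --apikey "$KEY" 2> /dev/null | sort -r | awk -v ym=$YM '$2 ~ ym {print $2}' | awk -F'/' '{print $5}' | awk -F'.' '{print $2}' | awk 'NR == 1')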

I am still not sure why head exits with code 1; maybe I am not piping in the right order.
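
My best guess at the mechanism (take it with a grain of salt): head -n1 exits as soon as it has printed the first line and closes its end of the pipe, so whatever is still writing upstream gets SIGPIPE and exits non-zero (141 = 128 + SIGPIPE). If the runner's generated job script runs with pipefail-style checking - which I assume it does, given that the job fails while the output is fine - the whole pipeline, and therefore the job, is marked failed. awk 'NR == 1' reads its input to the end, so nothing upstream gets SIGPIPE. A minimal local repro, assuming bash with pipefail:

  set -o pipefail

  # head closes the pipe after the first line; seq is still writing,
  # gets SIGPIPE and exits with 141, so the pipeline is non-zero
  seq 1 200000 | head -n1
  echo "with head: $?"   # 141 here

  # awk reads all of its input, so the writer finishes normally
  seq 1 200000 | awk 'NR == 1'
  echo "with awk: $?"    # 0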