Testing that my Runner is configured properly

I’m working on a runner on my local machine with some options I’ve never used before. What’s the best way to test that it works? Should I register it with our live GitLab instance? (I’d rather not have to push a change to a repo every time, but I guess that’s not a deal breaker.) Or is there something available that would locally mock the endpoints that the runner uses to get jobs, source, artifacts, etc.?

I had some early success with gitlab-runner exec, but wasn’t able to get it working when running the Runner itself in Docker and using the Docker executor. I believe the problem had to do with getting the executor to see the volume mount that had the git repo for the job I was asking to exec, but I didn’t dig too deep.
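For anyone trying the same thing, this is roughly the shape of what I was attempting. The job name and paths are placeholders from my setup, and the last line is the part I never got working: with the Docker socket mounted, the paths passed to --docker-volumes are resolved on the host, not inside the runner container, which I suspect is why the executor couldn’t see the checkout.

```shell
# exec must be run from the root of a repo containing .gitlab-ci.yml;
# "my-job" is a placeholder job name.
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":"$PWD" -w "$PWD" \
  gitlab/gitlab-runner:latest \
  exec docker my-job --docker-volumes "$PWD:$PWD"
```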

For more context, I’m using Docker Compose to run the runner (as in this example) along with a service that I need to be persistently available on the same network as the jobs (we’re experimenting with Dagger, which might/should give better caching performance if the engine persists outside of a single job). There are probably other ways to do it, but this is the route I’m taking right now.
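The Compose file looks roughly like this. The service names, network name, and the Dagger engine image tag are specific to my setup; the important bits are that both containers share a named network, and that job containers get attached to it via the network_mode setting in the runner’s config.toml.

```yaml
# Sketch of the Compose setup -- adjust names and tags to taste.
services:
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    restart: unless-stopped
    volumes:
      - ./config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - ci

  dagger-engine:
    image: registry.dagger.io/engine:latest
    privileged: true
    restart: unless-stopped
    networks:
      - ci

networks:
  ci:
    name: ci  # referenced as network_mode = "ci" under [runners.docker]
```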

For anyone’s future reference: I ended up creating a project under my personal area in our GitLab instance and configured my locally running Runner as a Project Runner. There were a few shortcomings:

  1. Having to push every time I wanted to make a change. (When I didn’t change the code of the project under test, I could just click the Run Pipeline button, which wasn’t too bad.)
  2. Runners didn’t seem to clean up even when I put a gitlab-runner unregister in my shutdown script. This may have been my fault, but it doesn’t seem to have negatively impacted things so far.
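For completeness, the register/unregister pair I used looked roughly like this. The URL and token are placeholders (the token comes from the project’s Settings > CI/CD > Runners page; on older GitLab versions the flag is --registration-token instead of --token), and the unregister call is the one that didn’t seem to remove the runner on the server side.

```shell
# Register the local runner against the project (placeholders throughout).
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com" \
  --token "$RUNNER_TOKEN" \
  --executor docker \
  --docker-image alpine:latest

# In the shutdown script -- removes all runners from the local config,
# though in my case they still showed up in the project afterwards.
gitlab-runner unregister --all-runners
```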

Had I not been able to register a Project Runner (I’m guessing some environments might lock that down), my next plan was to look into setting up a GitLab instance on my own machine. Not sure how complicated that would be though since I’ve never done that before.
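In case it helps anyone going that route: a throwaway instance via the official gitlab-ce Docker image looks to be roughly this (hostname, ports, and volume names are placeholders; I haven’t run this myself).

```shell
docker run --detach \
  --hostname gitlab.local \
  --publish 8080:80 --publish 8443:443 --publish 2222:22 \
  --name gitlab \
  --volume gitlab-config:/etc/gitlab \
  --volume gitlab-logs:/var/log/gitlab \
  --volume gitlab-data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
```

First boot reportedly takes several minutes, and the initial root password ends up in /etc/gitlab/initial_root_password inside the container.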