Is there a way to build GitLab CI/CD tests programmatically from URLs?

Our QA team currently uses a self-hosted test harness which executes a series of XML tests that designate how to run specific use cases. I am now looking into bringing this over to GitLab CI/CD and executing all tests via a commit trigger.

I currently have the ability to run any single one of our test harness tests by building an auto-run URL. In other words, once the URL is built, I can paste it into a browser and it will bring up our test harness, populate the various options, and run the test case. Currently I have a single Jest test: I build the 1000+ auto-run links into an array and then push that into Puppeteer to run. The problem I am having difficulty with is getting a 1-to-1 result for each test case, since they all run under a single Jest test.
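Simplified, the current setup looks roughly like this (`buildAutoRunUrls` is a stand-in for our link-building code, not the real thing):

```js
// autorun.e2e.test.js -- simplified sketch of the current single-test approach
const puppeteer = require('puppeteer');

// placeholder for however the 1000+ auto-run links actually get built
const urls = buildAutoRunUrls();

test('run all harness test cases', async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  for (const url of urls) {
    // every URL runs inside this one Jest test, so Jest only ever
    // reports a single pass/fail for the whole batch
    await page.goto(url, { waitUntil: 'networkidle0' });
    // ...inspect the harness result on the page...
  }
  await browser.close();
});
```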

What I would like to know is whether there is a way I can preprocess/build the URL array of our auto-run links and then have each of those individual URLs handled as a separate e2e test in GitLab's CI/CD pipeline. I have been reading the documentation but haven't been able to figure out a way to do this. In the end, what I want to achieve is to take advantage of GitLab's CI/CD dashboard to display the status of each individual test case, rather than pushing the individual statuses to a DB and then building my own dashboard to report on them.

If anyone has any ideas or advice, please send them my way. Also, if the above is not possible, does anyone know whether there is a way I can create a custom dashboard to display on GitLab's CI/CD pipeline page? Thanks!

Hi, @daniel.x.krotov. Do you have any examples that you can share to help us understand what you’re trying to achieve?

It sounds like you want to create a pipeline with N jobs, where each job calls a pre-determined URL and succeeds/fails according to the response. Is that more or less correct?

If this is right, I think an easier approach might be to use unit test reports to display the results of the tests in the UI.
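In its simplest form, a job just runs your script and uploads a JUnit XML file as a report artifact, roughly like this (the script and file names are placeholders):

```yaml
# .gitlab-ci.yml -- minimal sketch of the unit test report approach
url-tests:
  stage: test
  script:
    - node run-url-tests.js   # calls each URL and writes junit-report.xml
  artifacts:
    when: always              # upload the report even when tests fail
    reports:
      junit: junit-report.xml
```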

This is a lot simpler than using dynamic child pipelines, which might do what you are describing.
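For comparison, the child pipeline route needs a generator job plus a trigger job, something like this (the generation script is a placeholder):

```yaml
# Sketch of the dynamic child pipeline alternative -- more moving parts
generate-child-pipeline:
  stage: build
  script:
    # placeholder: emit one job per auto-run URL into child-pipeline.yml
    - node generate-child-pipeline.js > child-pipeline.yml
  artifacts:
    paths:
      - child-pipeline.yml

run-url-tests:
  stage: test
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-child-pipeline
    strategy: depend   # parent pipeline mirrors the child pipeline's status
```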

With the unit test report approach, all you need is a pipeline with one or more jobs that will (sketched below):

  1. prepare the list of URLs to call
  2. call each of them and record the result
  3. prepare a JUnit report based on the recorded results
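Here is a minimal sketch of such a script in Node, assuming a non-2xx response counts as a failure (`urls.txt` and the pass criterion are placeholders you would adapt to your harness):

```js
// run-url-tests.js -- minimal sketch; adapt the pass/fail check to your harness
const fs = require('fs');

const urls = fs.readFileSync('urls.txt', 'utf8').trim().split('\n');

function xmlEscape(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;')
          .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

(async () => {
  const cases = [];
  for (const url of urls) {
    try {
      const res = await fetch(url);           // global fetch, Node 18+
      cases.push({ url, ok: res.ok, message: `HTTP ${res.status}` });
    } catch (err) {
      cases.push({ url, ok: false, message: String(err) });
    }
  }
  const failures = cases.filter((c) => !c.ok);
  const body = cases.map((c) =>
    `  <testcase name="${xmlEscape(c.url)}">` +
    (c.ok ? '' : `<failure message="${xmlEscape(c.message)}"/>`) +
    '</testcase>'
  ).join('\n');
  fs.writeFileSync('junit-report.xml',
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    `<testsuite name="url-tests" tests="${cases.length}" failures="${failures.length}">\n` +
    body + '\n</testsuite>\n');
  process.exit(failures.length ? 1 : 0);      // fail the job if any URL failed
})();
```

With the `artifacts:reports:junit` setting above, each `<testcase>` then shows up as its own entry in the pipeline's Tests tab.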

Thanks @thiagocsf for the response. I apologize for the late reply on my end. We had a family emergency and I had to head down south quickly, so I was away from work. Truly appreciate the help though.

So you are correct… currently our QA selects an XML test to run from a portal. That runs through a sequence of events that yields about 5-20 individual requests with query params, each of which needs to be validated against a baseline to determine whether the requests from our current build match the requests recorded when we executed against the production build. So one XML test will fire, say, 5 requests that then need to be compared, and if a query param doesn't match, we report which request and which query param differ. Once complete, QA selects the next XML test and repeats the process.
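To make that concrete, the per-request check is essentially a query-param diff, something like this simplified sketch (not our actual code):

```js
// Simplified sketch of the per-request comparison against the baseline
function diffQueryParams(baselineUrl, currentUrl) {
  const base = new URL(baselineUrl).searchParams;
  const curr = new URL(currentUrl).searchParams;
  const mismatches = [];
  for (const key of new Set([...base.keys(), ...curr.keys()])) {
    if (base.get(key) !== curr.get(key)) {
      mismatches.push({ param: key, baseline: base.get(key), current: curr.get(key) });
    }
  }
  return mismatches;   // empty array means the request matches the baseline
}
```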

So I am now populating an array of all these XML tests, each of which will fire 5-20 individual requests that need to be compared. My problem previously was that I was running all 1000+ e2e tests under one stage, which meant I was not getting results at the individual XML-test level, but rather at the parent level: the pipeline reported that all tests had run, failing if any had errored and succeeding if they all passed. But I wanted this style of reporting at the individual test level.

Recently I found out about Jest's test.each functionality, which I am going to try out tomorrow. Running it locally with a sample set of XML tests yielded roughly the results I was hoping for. I may still need an external DB for more intricate details, but I have to research that further.
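For reference, the shape of what I'm testing out is roughly this (the URL file and pass criterion are placeholders):

```js
// Sketch of the test.each approach -- one reported result per auto-run URL
const puppeteer = require('puppeteer');
const urls = require('./autorun-urls.json');   // placeholder: the prebuilt link array

let browser;
beforeAll(async () => { browser = await puppeteer.launch(); });
afterAll(async () => { await browser.close(); });

// test.each turns each URL into its own named Jest test case
test.each(urls)('autorun %s', async (url) => {
  const page = await browser.newPage();
  const response = await page.goto(url, { waitUntil: 'networkidle0' });
  expect(response.ok()).toBe(true);            // placeholder pass criterion
  await page.close();
});
```

Pairing this with a JUnit reporter such as jest-junit (`reporters: ['default', 'jest-junit']` in the Jest config) and the `artifacts:reports:junit` setting mentioned above should make each case show up individually in GitLab's test report.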

When I ran my sample the other night it worked fine locally, but when I checked the code into GitLab and the pipeline ran, this new approach failed. I am debugging it now; it was code written late at night, so I'm wondering if it is just something stupid I did. I will be back at work this week and will look into it more deeply, but I just wanted to reach back out and thank you for your help! Truly appreciated.
