I have a build that works locally, but on GitLab it fails. Even though I have a rule that polls for HTTP availability, the test still gets Connection refused when it starts. Any idea what might cause this?
This is the GitLab CI job: integration-test-13.12.1-ce.0 (#1817196641) · Jobs · Stefan Lobbenmeier / intellij_gitlab_pipeline_monitor · GitLab
Locally the build runs with:
export GITLAB_CONTAINER_TAG=gitlab/gitlab-ce:latest
gradle test
- GitLab Version: gitlab.com; gitlab/gitlab-ce:latest inside the CI job
- CI configuration (.gitlab-ci.yml): .gitlab-ci.yml · add-integration-tests · Stefan Lobbenmeier / intellij_gitlab_pipeline_monitor · GitLab
I checked several ways to wait for the container to be running: using the healthcheck, using no wait configuration at all, and checking whether /projects responds with HTTP 200. They all work locally, but on gitlab.com they fail.
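For reference, the HTTP-wait variant looks roughly like this (a sketch, not the exact test code; the API path, port and timeout are my assumptions):

import java.time.Duration;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.utility.DockerImageName;

public class GitlabContainerSketch {
    public static void main(String[] args) {
        // Same image selection as the local run: GITLAB_CONTAINER_TAG or latest.
        String image = System.getenv().getOrDefault("GITLAB_CONTAINER_TAG", "gitlab/gitlab-ce:latest");

        try (GenericContainer<?> gitlab = new GenericContainer<>(DockerImageName.parse(image))
                .withExposedPorts(80)
                // Wait until the API answers with HTTP 200; GitLab omnibus takes a few
                // minutes to boot, hence the generous startup timeout.
                // Wait.forHealthcheck() would be the healthcheck-based alternative.
                .waitingFor(Wait.forHttp("/api/v4/projects")
                        .forPort(80)
                        .forStatusCode(200)
                        .withStartupTimeout(Duration.ofMinutes(10)))) {
            gitlab.start();
            System.out.println("GitLab is up on port " + gitlab.getMappedPort(80));
        }
    }
}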
I am now also printing the logs of the container, so perhaps those logs show what the issue is, or whether the server somehow shuts down again.
I think I figured it out - localhost is probably not the correct hostname of the container:
2021-11-24 23:37:34 INFO DockerClientFactory:190 - Docker host IP address is docker
After changing to GenericContainer#getHost I now get this result: integration-test-latest (#1817647451) · Jobs · Stefan Lobbenmeier / intellij_gitlab_pipeline_monitor · GitLab
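For anyone finding this later, the change boils down to something like this (the helper name and port are made up for illustration, not the actual code in the repo):

import org.testcontainers.containers.GenericContainer;

final class GitlabUrls {
    private GitlabUrls() {
    }

    // Don't hardcode "localhost": locally getHost() returns localhost, but on the
    // gitlab.com shared runners the Docker host is the "docker" dind service,
    // which is exactly what the Testcontainers log line above reports.
    static String apiBaseUrl(GenericContainer<?> gitlab) {
        return "http://" + gitlab.getHost() + ":" + gitlab.getMappedPort(80) + "/api/v4";
    }
}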
Hi,
looking at the .gitlab-ci.yml changes, I assume you are running a GitLab Docker instance inside the CI/CD job. The project README says that the IntelliJ plugin can fetch the CI pipeline status.
The problem is that the GitLab Docker container doesn’t have any CI pipelines by default, so you won’t be able to use it for testing the plugin’s data fetching, or as a mock endpoint. Also, creating and destroying the GitLab container for every triggered CI job can get very expensive in resources and CI minutes.
I’d suggest a different path: have a public test project on gitlab.com which has CI/CD configured and a weekly pipeline schedule, and use this project in your CI/CD configuration for the intellij_gitlab_pipeline_monitor to fetch the data from the public API. Or monitor one of the available projects, like gitlab-org/gitlab-runner. Or run your own cloud VM with GitLab and query its API endpoints.
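To illustrate, a plain call against the public API is already enough for a smoke test, and no token is needed for public projects (a sketch with java.net.http; the project path and query parameters are only examples):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PublicPipelineFetch {
    public static void main(String[] args) throws Exception {
        // The project path is URL-encoded ("/" becomes "%2F").
        URI uri = URI.create(
                "https://gitlab.com/api/v4/projects/gitlab-org%2Fgitlab-runner/pipelines?per_page=3");

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder(uri).GET().build(), HttpResponse.BodyHandlers.ofString());

        // Returns a JSON array of pipelines with id, status, ref, sha and web_url.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}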
The GitLab CI Pipeline Exporter uses a similar approach with the quickstart configuration.
Cheers,
Michael
I see, thanks for the feedback. I agree that it might be overkill to start and configure a whole gitlab instance just to have this endpoint.
But the proposed alternative of testing against the public gitlab.com instance does not quite work for me, because I also want to verify the plugin's behaviour against older versions of gitlab-ce:
NullPointerException when connecting the project, Possibly "Field 'ref' doesn't exist on type 'Pipeline'" (#32) · Issues · ppi / intellij_gitlab_pipeline_monitor · GitLab
Is there any other possibility to test against older gitlab instances?
Hi,
thanks for adding the detail about testing older GitLab versions. Hm. Can you share more about the plans, e.g. which version you want to support, for how long, and how often you need to test against this specific version?
I’m thinking of a cold-boot VM which runs different GitLab and runner versions in containers; you’d boot it up when your CI/CD build triggers it, or keep it running only during specific hours to save costs. I would suggest Terraform/Ansible for automating these steps. Alternatively, you could run the containers with docker-compose in CI/CD, seed the pipeline runs via API calls, and then test the plugin against that.
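Seeding could be as simple as triggering a pipeline through the API once the instance and a test project exist, roughly like this (the base URL, project id, token and ref are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SeedPipeline {
    public static void main(String[] args) throws Exception {
        String baseUrl = "http://gitlab:80/api/v4";    // wherever the throwaway instance runs
        String projectId = "1";                        // id of the seeded test project
        String token = System.getenv("GITLAB_TOKEN");  // personal access token with api scope

        // POST /projects/:id/pipeline creates a pipeline for the given ref. Without a
        // registered runner its jobs just stay pending, which the plugin can still display.
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create(baseUrl + "/projects/" + projectId + "/pipeline?ref=main"))
                .header("PRIVATE-TOKEN", token)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}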
Building on the GDK with Gitpod provisioned environments could also be an idea. doc/howto/gitpod.md · main · GitLab.org / GitLab Development Kit · GitLab
Cheers,
Michael
Hmm, difficult to answer those questions, since I only forked the original project.
Looking at the history, I would estimate the CI fluctuates around 5 builds a month,
so in terms of cost I doubt we will even come close to the limit.
In terms of adding pipelines / runners, I think it should be fine to just test the pipeline states failed and pending for now, which should be possible without runners. Setting up the instance somewhere else seems like overkill to me, at least for now.