I noticed that a large number of jobs had piled up in the Sidekiq queues and found that they were related to Advanced Search (Elasticsearch). So I disabled Advanced Search (under Admin Area > Settings > Advanced Search), and the performance of new commits and CI/CD jobs returned to normal.
Perhaps an Advanced Search reindex caused this problem.
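For anyone debugging a similar backlog, one way to see which Sidekiq queues are growing on an Omnibus install is via the Rails runner. This is a sketch, assuming `gitlab-rails` is available on the PATH and you have root access on the GitLab host:

```shell
# List the ten largest Sidekiq queues with their sizes (Omnibus install).
# Advanced-Search-related work typically shows up in elastic_* queues.
sudo gitlab-rails runner '
  require "sidekiq/api"
  Sidekiq::Queue.all
                .sort_by(&:size)
                .reverse
                .first(10)
                .each { |q| puts "#{q.name}: #{q.size}" }
'
```

If the `elastic_*` queues dominate the output, that supports the theory that indexing work is starving other jobs.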
This is also affecting our self-hosted instance of GitLab EE. We noticed it when we first upgraded to 16.0. Unfortunately, disabling Elasticsearch didn’t seem to help.
We have also noticed that our runners don’t seem to pick up jobs anymore either.
EDIT: We disabled Elasticsearch and avoided using GitLab for the rest of the day. About 10-15 hours later, everything started working as if nothing had happened. Perhaps it had to work through the outstanding Sidekiq job queues.
I faced the same issue after updating to GitLab 16.0.1. Commits in merge requests were not shown immediately after pushing; it took about 10 minutes for them to appear in the GitLab frontend. The same delay occurred when triggering a CI/CD job, with the job remaining in the pending state for several minutes before starting.
To solve this problem, I checked the GitLab documentation and found a similar reported issue. It turned out that increasing the number of application-server workers helped reduce the delay. (Note that GitLab 14.0 replaced Unicorn with Puma, so on a 16.x instance this means the Puma worker count.) I followed the steps in the documentation to increase the number of workers on my self-hosted v16.0.1-ee instance.
After making this configuration change and restarting GitLab, I observed that commits were shown almost instantly in the GitLab frontend and CI/CD jobs started promptly, without a significant delay.
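For reference, on an Omnibus install this kind of change amounts to raising the Puma worker count in `/etc/gitlab/gitlab.rb` (GitLab 16.x uses Puma rather than Unicorn as its application server). The values below are purely illustrative; GitLab's documentation recommends sizing workers to your CPU and memory:

```ruby
# /etc/gitlab/gitlab.rb -- illustrative values, tune to your hardware.
# By default the worker count is derived from the number of CPUs.
puma['worker_processes'] = 4
puma['min_threads'] = 4
puma['max_threads'] = 4
```

Then apply the change with `sudo gitlab-ctl reconfigure`.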
I hope this solution works for you as well!