GitLab 8.10.2 not responding, high memory and CPU utilization

Hi:
After upgrading to 8.10, our GitLab server would stop responding after several hours. Its memory utilization went up to 15 GB from an initial 1 GB, and CPU utilization by "ruby" went up to 90% when I used a browser to access it. I had to reboot the VM in order to recover (sudo gitlab-ctl restart did not help).
I went through production.log and found the following interesting entries. Please let me know if you have any suggestions.
Right now the server is running GitLab CE 8.10.2.

Thanks a lot.
-Lui
==================== Extract from production.log =============
/opt/gitlab/embedded/service/gem/ruby/2.1.0/bin/unicorn:23:in `load'
/opt/gitlab/embedded/service/gem/ruby/2.1.0/bin/unicorn:23:in
** [Raven] Raven 1.1.0 configured not to send errors.
** [Raven] Raven 1.1.0 configured not to send errors.
** [Raven] Raven 1.1.0 configured not to send errors.
Invalid cron_jobs config key: 'historical_data_worker'. Check your gitlab config file.
Invalid cron_jobs config key: 'update_all_mirrors_worker'. Check your gitlab config file.
Invalid cron_jobs config key: 'update_all_remote_mirrors_worker'. Check your gitlab config file.
Invalid cron_jobs config key: 'ldap_sync_worker'. Check your gitlab config file.
Invalid cron_jobs config key: 'geo_bulk_notify_worker'. Check your gitlab config file.
Started GET "/" for 10.200.97.124 at 2016-07-27 11:09:22 -0700
Processing by RootController#index as HTML
Read fragment views/namespaces/9-20150318222610719075000/projects/163-20160727025250092125000/root/index/application_settings/1-20160726174109032925000/v2.3/c5a8ab8a495d7588471f368562783c37 (0.5ms)


Read fragment views/events/1406-20160613183351887115000/application_settings/1-20160726174109032925000/v2.2/74920d59f780c251077aaf38bfe3a1cd (0.4ms)
Completed 500 Internal Server Error in 1030ms (ActiveRecord: 89.9ms)
ActionView::Template::Error (iv length too short):
1: .event-title
2: %span.author_name= link_to_author event
3: %span{class: event.action_name}
4: = event_action_name(event)
5:
6: - if event.project
app/models/project.rb:486:in `import_url'
app/models/project.rb:522:in `external_import?'
app/models/event.rb:188:in `action_name'
app/views/events/event/_created_project.html.haml:3:in `_app_views_events_event__created_project_html_haml__1539341359828332794_166658880'
app/views/events/_event.html.haml:10:in `block in _app_views_events__event_html_haml__2766169276611327676_165183120'
app/views/events/_event.html.haml:6:in `_app_views_events__event_html_haml__2766169276611327676_165183120'
app/views/events/_events.html.haml:1:in `_app_views_events__events_html_haml___4017112989202906242_164534840'
app/controllers/application_controller.rb:211:in `pager_json'
app/controllers/users_controller.rb:17:in `block (2 levels) in show'
app/controllers/users_controller.rb:7:in `show'
lib/gitlab/middleware/go.rb:16:in `call'
Started GET "/u/releasemgr?limit=20&offset=0" for 10.200.98.55 at 2016-07-27 14:16:32 -0700
Processing by UsersController#show as JSON
Parameters: {"limit"=>"20", "offset"=>"0", "username"=>"releasemgr"}
Read fragment views/events/1623-20160716022925392810000/application_settings/1-20160726174109032925000/v2.2/74920d59f780c251077aaf38bfe3a1cd (0.7ms)

I think we have solved the problem. We noticed that some projects' main pages were returning a 500 error, although the repositories themselves were still operational (clone, push, pull, etc.). We could still reach those projects in a browser by appending "/tree/master" to the URL (e.g. .../group/project/tree/master instead of .../group/project).

It turns out these repositories had been renamed in the past, before the migration to version 8. After removing these repositories and adding them back, the problem went away: no more error messages and no more heavy CPU/RAM usage.

So it seems the problem is related to repository renaming.
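
For anyone who runs into the same symptoms: the backtrace above points at Project#import_url, and "iv length too short" is an OpenSSL decryption error, so the stored (encrypted) import URL of the renamed projects apparently could no longer be decrypted. Below is a minimal sketch for listing the affected projects from a Rails console before deleting anything. It assumes an Omnibus install and that a broken value raises from Project#import_url exactly as in the backtrace; the loop, rescue, and output format are my own, not an official GitLab procedure.

# Run on the GitLab host:  sudo gitlab-rails console
# Walks all projects and prints the ones whose stored import_url
# can no longer be decrypted (e.g. "iv length too short").
# Assumption: a broken encrypted value raises from Project#import_url,
# as seen in the backtrace above.
Project.find_each do |project|
  begin
    project.import_url   # triggers decryption of the encrypted column
  rescue => e
    puts "#{project.path_with_namespace}: #{e.class}: #{e.message}"
  end
end

This would also explain why gitlab-ctl restart never helped: the broken value lives in the database, so only re-creating the projects (or otherwise resetting their import URL) makes the error go away.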