We are using the GitLab Agent to deploy to our Kubernetes clusters with Helm.
We have this kind of error frequently popping up in our CI/CD jobs:
Error: UPGRADE FAILED: release backend failed, and has been rolled back due to atomic being set: failed to refresh resource information: GitLab Agent Server: HTTP->gRPC: failed to read gRPC response: rpc error: code = Canceled desc = context canceled
Jobs take 3 to 5 minutes to deploy, but this error is really random and happens on multiple Kubernetes clusters.
On the agent side we have errors that look like the one returned by the job:
{"level":"error","time":"2022-11-22T10:04:13.396Z","msg":"Error handling a connection","mod_name":"reverse_tunnel","error":"rpc error: code = Unavailable desc = error reading from server: failed to get reader: failed to read frame header: EOF","agent_id":15764}
But they are never correlated with the ones we see in CI.
It's really annoying, because when this happens the deployment is rolled back and the app can end up in an inconsistent state.
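Since the failure is transient and `--atomic` has already rolled the release back to the previous revision, one workaround is simply to retry the upgrade from the job script. A minimal sketch (the `retry` helper name and the 5-second default delay are our own choices, not part of Helm or GitLab):

```shell
# Hedged sketch: retry wrapper for transient errors such as the
# "context canceled" failures above. POSIX sh, no external tools needed.
retry() {
  attempts=$1; shift
  i=1
  until "$@"; do
    # Give up once we have used all attempts.
    [ "$i" -ge "$attempts" ] && return 1
    echo "attempt $i of $attempts failed, retrying in ${RETRY_DELAY:-5}s..." >&2
    i=$((i + 1))
    sleep "${RETRY_DELAY:-5}"
  done
}
```

In the deploy job this would wrap the upgrade, e.g. `retry 3 helm upgrade --install backend ./chart --atomic --timeout 5m`. Because `--atomic` rolls back on failure, each retry starts from the previous, consistent revision rather than a half-applied one.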
We experience the same issue: random connection errors that lead to failed/inconsistent Helm deployments. @arnaud.beun.sorare did you find a solution in the meantime?
I am still facing the issue, with errors like this:
transport.go:2242: Unsolicited response received on idle HTTP channel starting with "HTTP/1.0 400 Bad Request\nCache-Control: no-cache\nConnection: close\nContent-Type: text/html\n\n<!DOCTYPE html>…"; err=<nil>
(the response body, truncated here, is GitLab's standard "400 Bad Request / Your browser sent an invalid request" error page)
Error: UPGRADE FAILED: release backend failed, and has been rolled back due to atomic being set: GitLab Agent Server: HTTP->gRPC: failed to read gRPC response: rpc error: code = Canceled desc = context canceled
It's happening every day, and we are looking for another solution: using an external tool like Argo CD, or calling the Kubernetes API directly without going through the agent.
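For reference, the agent-less route can be sketched roughly like this, assuming a CI/CD variable (here called `KUBECONFIG_B64`, a name we made up) that holds a base64-encoded kubeconfig with credentials for the target cluster:

```shell
# Hedged sketch: deploy without the agent by handing the job a kubeconfig
# directly. KUBECONFIG_B64 is a hypothetical CI/CD variable.
echo "$KUBECONFIG_B64" | base64 -d > kubeconfig
chmod 600 kubeconfig
export KUBECONFIG="$PWD/kubeconfig"

# Helm now talks to the Kubernetes API server directly, so the agent's
# HTTP->gRPC reverse tunnel is out of the picture entirely.
helm upgrade --install backend ./chart --atomic --timeout 5m
```

The trade-off is that you have to store, scope, and rotate the cluster credentials yourself instead of letting the agent handle authentication.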
Error: UPGRADE FAILED: could not get information about the resource: GitLab Agent Server: HTTP->gRPC: failed to read gRPC response: rpc error: code = Canceled desc = context canceled
And this is what I see in the GitLab Agent log:
{"level":"error","time":"2022-12-14T02:30:54.900Z","msg":"Error handling a connection","mod_name":"reverse_tunnel","error":"rpc error: code = Unavailable desc = error reading from server: failed to get reader: failed to read frame header: EOF","agent_id":10664}
{"level":"warn","time":"2022-12-14T02:30:54.900Z","msg":"GetConfiguration.Recv failed","error":"rpc error: code = Unavailable desc = error reading from server: failed to get reader: failed to read frame header: EOF","agent_id":10664}
I'm not 100% sure the GitLab job errors and the GitLab Agent errors are related (judging by the timestamps, they are not).
We are using GitLab shared runners from the SaaS version of GitLab and connect to the Agent which is deployed to our Kubernetes Cluster.
Are you using shared runners as well or do you host dedicated runners?