Zuul Gearman becomes unresponsive
Hi Team,

We at the Western Digital IT team are using Zuul version R4.1.0. We have come across an issue where Gearman becomes unresponsive without any issues with the scheduler service. We are using the default Gearman server that comes bundled with the Zuul scheduler. The scheduler and executor/merger services run on separate VMs. We have hit the issue twice in three months' time. Restarting the zuul-scheduler service fixes the issue and Zuul works as usual. Please let us know if this is a known issue and whether you have any solution for it. We will be happy to provide additional information if required.

Gearman log:

2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet: <gear.Packet 0x7f58146fb860 type: UNKNOWN>
2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet: <gear.Packet 0x7f58146d87b8 type: UNKNOWN>
2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet: <gear.Packet 0x7f58146fb860 type: UNKNOWN>
2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet: <gear.Packet 0x7f58146d87b8 type: UNKNOWN>

We were able to telnet to the Gearman port on the scheduler host while the issue was occurring:

[root@uls-op-zuul4 zuul]# telnet 127.0.0.1 4730
Trying 127.0.0.1...
Connected to 127.0.0.1.

The zuul-web service becomes unresponsive with the following error in its log:

2022-10-27 05:27:22,387 DEBUG zuul.RPCClient: Submitting job zuul:tenant_list with data {}
2022-10-27 05:32:22,388 ERROR gear.Client.b'Zuul Web Server': Connection <gear.Connection 0x7f9f9a14be80 host: uls-op-zuul4.wdc.com port: 4730> timed out waiting for a response to a submit job request: <gear.Job 0x7f9f88730fd0 handle: None name: zuul:tenant_list unique: 1666848442.3878057>

We can also see the following error in the scheduler log even though the scheduler service is up and running:

2022-10-27 06:35:16,589 ERROR gear.Client.b'Zuul Merge Client': Connection <gear.Connection 0x7f58104de128 host: uls-op-zuul4.wdc.com port: 4730> timed out waiting for a response to a submit job request: <gear.Job 0x7f578d950e80 handle: None name: merger:merge unique: 29be9ce958ef4d19b0e75e274c6dd741>

Regards,
Mithun Mistry
IT Engineering, Western Digital
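A successful telnet connection like the one above only shows that geard is still accepting TCP connections; whether it is actually answering requests is a separate question. A minimal sketch of a bounded-time liveness probe using the Gearman text admin command `status` (not from the thread; the host, port and timeout values are illustrative assumptions):

import socket

HOST, PORT = "127.0.0.1", 4730  # assumed geard address, matching the telnet example above

def geard_answers(host=HOST, port=PORT, timeout=15.0):
    """Return True if geard replies to a 'status' admin command within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            sock.sendall(b"status\n")
            return bool(sock.recv(4096))
    except OSError:
        # Covers connection failures as well as a read timeout (socket.timeout is an OSError).
        return False

if __name__ == "__main__":
    print("geard is responding" if geard_answers()
          else "geard did not reply within the timeout (or the connection failed)")

Run from the scheduler host, this distinguishes "port open but server hung" from a genuinely healthy geard.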
On 2022-10-27 09:53:46 +0000 (+0000), Mithun Mistry wrote:
> We at the Western Digital IT team are using Zuul version R4.1.0.
Be aware that 4.1.0 is 1.5 years out of date, and the overall design of Zuul has moved on such that it no longer relies on the Gearman protocol at all.
> We have come across an issue where Gearman becomes unresponsive without any issues with the scheduler service.
What version of the gear library do you have installed with your scheduler? There have been fixes merged to improve stability in it, though not at all recently (continued development on gear ceased last year when we began removing it from Zuul). -- Jeremy Stanley
Hi Jeremy,

Thank you for your suggestions, much appreciated. Apologies for the typo in the Zuul version earlier: we are using a modified version of Zuul 4.5.1. No modifications were made related to gear. The gear package version we are using with the scheduler is gear==0.16.0, which is the latest version of the gear Python package available on PyPI: https://pypi.org/project/gear/#history

Please let us know if there are additional versions of the gear package available with fixes, or if there is some code change in the gear library that may improve its stability. Also let us know if you think upgrading Zuul to the latest version and getting rid of the Gearman component is the only way to fix this instability.

Regards,
Mithun Mistry
IT Engineering, Western Digital
On 2022-10-27 14:24:29 +0000 (+0000), Mithun Mistry wrote:
> Apologies for the typo in the Zuul version earlier: we are using a modified version of Zuul 4.5.1.
Not that I would suggest running an old version, but there were at least later versions of Zuul which still used the Gearman protocol, 4.12.0 being the last to do so.
> No modifications were made related to gear.
> The gear package version we are using with the scheduler is gear==0.16.0, which is the latest version of the gear Python package available on PyPI: https://pypi.org/project/gear/#history
> Please let us know if there are additional versions of the gear package available with fixes, or if there is some code change in the gear library that may improve its stability.
I don't recall seeing such behavior with the final version of gear (the one you're using), but perhaps someone else on the list remembers having worked around something similar. It looks like there were a couple of changes related to Python 3.9 support which merged after 0.16.0 was tagged, but nothing indicating a fix for what you've described. The only thing which springs to mind is that sometimes gear could take a very long time to respond if its job queue became massive (perhaps look for signs of significant memory consumption for that child process, or try querying it with a socket client which won't time out if the response time gets really long, and see if it's tracking a lot more registrations and jobs than usual).
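One way to act on the memory suggestion above is to sample the resident memory of the scheduler process and of the geard child it forks, and watch whether it grows over time. A rough sketch using psutil (a third-party dependency, not part of Zuul); matching on a command line containing "zuul-scheduler" is an assumption that may need adjusting for your deployment:

import psutil  # third-party dependency

def report_scheduler_memory():
    """Print resident memory of each zuul-scheduler process and its children."""
    for proc in psutil.process_iter(["pid", "cmdline"]):
        cmdline = " ".join(proc.info["cmdline"] or [])
        if "zuul-scheduler" not in cmdline:
            continue
        rss_mib = proc.memory_info().rss / (1024 * 1024)
        print(f"scheduler pid={proc.pid} rss={rss_mib:.1f} MiB")
        for child in proc.children(recursive=True):
            child_rss = child.memory_info().rss / (1024 * 1024)
            print(f"  child pid={child.pid} rss={child_rss:.1f} MiB "
                  f"cmd={' '.join(child.cmdline())}")

if __name__ == "__main__":
    report_scheduler_memory()

Running this periodically (for example from cron) and comparing samples should show whether the geard child's memory climbs before an incident.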
> Also let us know if you think upgrading Zuul to the latest version and getting rid of the Gearman component is the only way to fix this instability.
By staying on an old version of Zuul you're missing quite a few fixes for major security vulnerabilities, in addition to stability and scalability improvements (not to mention great new features, of course). Zuul's release notes can be found here, for reference, with further details:

https://zuul-ci.org/docs/zuul/releasenotes.html
https://zuul-ci.org/docs/nodepool/releasenotes.html

If you need it, the Zuul website also lists a couple of commercial options for help with things like upgrading, training, developing new features (which could possibly replace the things you modified your fork for), or even hosting and running a Zuul deployment for you. I'm not affiliated with those companies, but they are significant contributors to and supporters of our community:

https://zuul-ci.org/start

-- Jeremy Stanley
On Thu, Oct 27, 2022, at 8:13 AM, Jeremy Stanley wrote:
> The only thing which springs to mind is that sometimes gear could take a very long time to respond if its job queue became massive (perhaps look for signs of significant memory consumption for that child process, or try querying it with a socket client which won't time out if the response time gets really long, and see if it's tracking a lot more registrations and jobs than usual).
Sending `status\n` to geard over a socket connection should report info back. In addition to what Jeremy suggested, you might also want to run a separate geard process so that you can configure debug logging for it and see if that shows any helpful info. Another option would be to strace the process to see where it might be getting stuck.

Clark
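As a concrete illustration of the `status\n` suggestion, the sketch below opens a plain socket to geard with no read timeout, sends the admin command, and prints the response, which lists each registered function with counts of queued jobs, running jobs and available workers. The host and port are illustrative assumptions:

import socket

HOST, PORT = "127.0.0.1", 4730  # assumed geard address on the scheduler host

def geard_status(host=HOST, port=PORT):
    """Send the Gearman admin 'status' command and return the response lines."""
    with socket.create_connection((host, port)) as sock:
        sock.settimeout(None)  # deliberately no timeout, so a slow geard can still answer
        sock.sendall(b"status\n")
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            # The admin protocol ends the response with a line containing a single ".".
            if buf == b".\n" or buf.endswith(b"\n.\n"):
                break
        return buf.decode(errors="replace").splitlines()

if __name__ == "__main__":
    for line in geard_status():
        # Typical Gearman status lines are "<function> <total> <running> <workers>", tab-separated.
        print(line)

Comparing the counts against a healthy baseline should show whether the queue has grown unusually large. For the strace option, something like `strace -f -p <geard pid>` against the running process would show which system call it is blocked in.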
Hi Jeremy / Clark,

Thank you very much for your excellent pointers and suggestions, much appreciated. We will check the Gearman performance as per your suggestions and also plan to upgrade Zuul to the latest version.

Regards,
Mithun Mistry
Hi Jeremy / Clark,

Some of our jobs run for 12+ hours. These jobs basically trigger Jenkins builds and then poll Jenkins every 5 minutes to see whether the Jenkins jobs have completed. Do you think this kind of long-running job that keeps polling remote URLs may be causing Gearman to become unresponsive? We would value your input here. We have not seen any issues with memory or CPU on the systems.

We are now seeing the issue with Gearman very frequently. We are planning to run geard as a separate process and debug it. We plan to upgrade Zuul to the latest version soon, but for now we want to get some stability in the system.

Regards,
Mithun
participants (3)
- Clark Boylan
- Jeremy Stanley
- Mithun Mistry