From cboylan at sapwetik.org Mon Oct 10 20:24:57 2022
From: cboylan at sapwetik.org (Clark Boylan)
Date: Mon, 10 Oct 2022 13:24:57 -0700
Subject: Updating the Zuul User Survey
Message-ID: <48fc1541-dfcc-4051-bd21-b034e0a2527d@app.fastmail.com>

Hello everyone,

We've been asked if we have any feedback for updating the Zuul User Survey. The current survey can be found at https://www.surveymonkey.com/r/K2B2MWL and is linked to from https://zuul-ci.org/users.html.

I think the rough high-level plan is to move this survey to another platform and, when we do that, add an alias redirect under https://zuul-ci.org so that switching platforms is easier in the future without needing to change URLs. This is a great opportunity to edit the content of the survey, as everything needs to be ported anyway.

I've started collecting feedback on this etherpad: https://etherpad.opendev.org/p/qubhH2rvz2m-JlSaLTUX - please feel free to add your feedback there as well. We will be collecting feedback until November 7, 2022, and it will then be used when porting to the new survey system.

Clark

From amy at demarco.com Fri Oct 14 22:10:11 2022
From: amy at demarco.com (Amy Marrich)
Date: Fri, 14 Oct 2022 17:10:11 -0500
Subject: [Diversity][PTG] Diversity and Inclusion WG session at PTG and meeting changes
Message-ID:

The Diversity and Inclusion WG will be holding its session on Monday at 14:00 UTC in the Folsom room. On the agenda[0] are holding a Diversity Survey for the Foundation in 2023, which would cover all projects, and Community Leadership Mentorship.

We have also moved our regular monthly meetings to the second Tuesday of the month at 14:00 UTC, and we have moved back from video to IRC, with the meetings taking place in #openinfra-diversity on OFTC.

Amy (spotz)

0 - https://etherpad.opendev.org/p/oct2022-ptg-diversity

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From helena at openstack.org Fri Oct 21 17:49:57 2022
From: helena at openstack.org (Helena Spease)
Date: Fri, 21 Oct 2022 10:49:57 -0700
Subject: Track Chair Nominations Close in One Week!
Message-ID: <6B4469A2-B024-4033-AF33-0843B065AED2@openstack.org>

Hey everyone!

Track Chair nominations for the 2023 OpenInfra Summit in Vancouver (June 13-15, 2023) are closing soon! Please submit your nominations before they close on October 28, 2022.

Track Chairs for each Track will help build the Summit schedule and are made up of individuals working in open infrastructure. Responsibilities include:

- Help the Summit team put together the best possible content based on your subject matter expertise
- Promote the individual Tracks within your networks
- Review the submissions and Community voting results in your particular Track
- Determine if there are any major content gaps in your Track, and if so, potentially solicit additional speakers directly to submit
- Ensure diversity of speakers and companies represented in your Track
- Avoid vendor sales pitches, focusing more on real-world user stories and technical, in-the-trenches experiences

2023 Summit Tracks:

- 5G/NFV & Edge
- AI/Machine Learning/HPC
- CI/CD
- Container Infrastructure
- Getting Started
- Hardware Enablement
- Open Development
- Private & Hybrid Cloud
- Public Cloud
- Security
- Hands On Workshops

Full track descriptions are available here. If you're interested in nominating yourself or someone else to be a member of the Summit Track Chairs for a specific Track, please fill out the nomination form. Nominations will close on October 28, 2022.
Track Chair selections will occur before we close the Call for Presentations (CFP) so that the Chairs can host office hours to consult on submissions and help promote the event. The CFP will open in November; registration and sponsorship information are already available.

Please email speakersupport at openinfra.dev with any questions or feedback.

Cheers,
Helena Spease

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From arichist at gmail.com Tue Oct 25 09:06:45 2022
From: arichist at gmail.com (A richist)
Date: Tue, 25 Oct 2022 12:06:45 +0300
Subject: Intermittent bug(?) with dependencies-by-topic Queue
Message-ID:

Hello Zuul folk,

We have 2 projects using the same queue, defined as:

- queue:
    name: Topic
    allow-circular-dependencies: true
    dependencies-by-topic: true
    per-branch: true

With this job and project setup:

- job:
    name: project1-build
    runs: ...

- job:
    name: project2-build
    runs: ...
    dependencies: project1-build

- project:
    name: project1
    queue: Topic
    gate:
      jobs:
        - project1-build

- project:
    name: project2
    queue: Topic
    gate:
      jobs:
        - project2-build

A developer gates a Gerrit change for project1, followed closely by a change for project2. Both changes share the same value in the Topic field and the same branch. When checking the web status page, the changes appear to enqueue in the right order, under the same queue. The issue is that the project2-build job starts immediately, without waiting for project1-build to complete, leading to incorrect testing. What's more, the change for project1 is merged on successful testing despite failures in the jobs for project2. It seems Zuul simply does not treat these two changes as a set of circular dependencies, though, to my understanding, it should.

We've tried checking the /status API endpoint while this is happening, and noticed that no "bundle_id" field is shown for either of the "items" in the queue - which we assume is a symptom of the same underlying issue.

Stranger still, we have a Zuul staging environment where this issue does not reproduce: project2-build waits for project1-build, and a "bundle_id" does exist in the status JSON object every time. Both the staging and production environments use the same version of Zuul (6.4.0). The only difference between the environments we can think of is that the staging environment runs a single instance of scheduler/executor/web (on the same machine), while the production environment uses multiple instances of scheduler/executor/merger (but only one web) spread across multiple hosts.

Is it possible we're hitting some race condition, or some kind of cache (Gerrit / ZooKeeper / Zuul itself)? Are we missing something here? Is there anything else we can try and do to figure this out?

Regards,
Arichist

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Simon.Westphahl at bmw.de Tue Oct 25 09:23:46 2022
From: Simon.Westphahl at bmw.de (Simon.Westphahl at bmw.de)
Date: Tue, 25 Oct 2022 09:23:46 +0000
Subject: Intermittent bug(?) with dependencies-by-topic Queue
In-Reply-To:
References:
Message-ID:

Hi,

this issue is fixed by https://review.opendev.org/c/zuul/zuul/+/861155 and included in Zuul 7.1.0. As you noticed, this is only a problem when you run multiple schedulers.

Best regards,
Simon
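
For anyone wanting to reproduce the check described above before upgrading, here is a minimal sketch of walking the tenant status JSON and printing each enqueued item's "bundle_id". The Zuul URL and tenant name are placeholders, and the pipelines/change_queues/heads layout is assumed from the standard tenant status endpoint; items that belong to the same topic bundle should share a bundle_id, so a missing value matches the symptom Arichist reports.

import json
from urllib.request import urlopen

ZUUL_URL = "https://zuul.example.com"  # placeholder
TENANT = "example-tenant"              # placeholder

with urlopen(f"{ZUUL_URL}/api/tenant/{TENANT}/status") as resp:
    status = json.load(resp)

# Walk pipelines -> change_queues -> heads and print each enqueued item's
# id alongside its bundle_id (None would reproduce the symptom above).
for pipeline in status.get("pipelines", []):
    for queue in pipeline.get("change_queues", []):
        for head in queue.get("heads", []):
            for item in head:
                print(pipeline["name"], queue["name"],
                      item.get("id"), item.get("bundle_id"))
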
From arichist at gmail.com Tue Oct 25 09:32:14 2022
From: arichist at gmail.com (A richist)
Date: Tue, 25 Oct 2022 12:32:14 +0300
Subject: Intermittent bug(?) with dependencies-by-topic Queue
In-Reply-To:
References:
Message-ID:

Thank you Simon, much appreciated! Will upgrade at once!

On Tue, 25 Oct 2022 at 12:23, wrote:

> Hi,
>
> this issue is fixed by https://review.opendev.org/c/zuul/zuul/+/861155
> and included in Zuul 7.1.0.
> As you noticed, this is only a problem when you run multiple schedulers.
>
> Best regards,
> Simon

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kennelson11 at gmail.com Tue Oct 25 18:33:18 2022
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 25 Oct 2022 13:33:18 -0500
Subject: Fwd: Help us plan the next PTG!
In-Reply-To:
References:
Message-ID:

Hello Everyone!

Congratulations on a great virtual PTG last week :)

The OpenInfra Foundation is hosting the OpenInfra Summit in Vancouver [1], June 13-15, 2023. We are trying to determine the level of interest among contributors to OpenInfra projects in attending a co-located PTG in Vancouver. *The exact format and dates are still being determined.* At this time, we are evaluating the level of interest from an attendee and employer perspective.

Please complete the following poll so we can measure the level of interest and plan accordingly. Future updates will be distributed to the project mailing lists as well as to previous PTG attendees.

Poll: https://openinfrafoundation.formstack.com/forms/openinfra_ptg_2023

As a reminder, we are also gathering feedback for the virtual PTG here: https://etherpad.opendev.org/p/Oct2022_PTGFeedback

-Kendall Nelson (diablo_rojo)

[1] https://openinfra.dev/summit/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Mithun.Mistry at wdc.com Thu Oct 27 09:53:46 2022
From: Mithun.Mistry at wdc.com (Mithun Mistry)
Date: Thu, 27 Oct 2022 09:53:46 +0000
Subject: Zuul Gearman becomes unresponsive
Message-ID: <7BE65395-9FD2-43BD-BE81-FEAD3F200E40@wdc.com>

Hi Team,

We at the Western Digital IT team are using Zuul version R4.1.0. We have come across an issue whereby the Gearman server becomes unresponsive without any issues with the scheduler service. We are using the default Gearman server that comes bundled with the Zuul scheduler. We have the scheduler and executor/merger services running on separate VMs.

We have come across the issue twice in three months. Restarting the zuul-scheduler service fixes the issue and Zuul works as usual. Please let us know if this is a known issue and if you have any solution for it. We will be happy to provide additional information if required.

Gearman log:

2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet:
2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet:
2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet:
2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet:

We were able to telnet to the Gearman port on the scheduler host while the issue was occurring:

[root at uls-op-zuul4 zuul]# telnet 127.0.0.1 4730
Trying 127.0.0.1...
Connected to 127.0.0.1.
The Zuul web service becomes unresponsive with the following error in its log:

2022-10-27 05:27:22,387 DEBUG zuul.RPCClient: Submitting job zuul:tenant_list with data {}
2022-10-27 05:32:22,388 ERROR gear.Client.b'Zuul Web Server': Connection port: 4730> timed out waiting for a response to a submit job request:

We can see the following error in the scheduler log even though the scheduler service is up and running:

2022-10-27 06:35:16,589 ERROR gear.Client.b'Zuul Merge Client': Connection port: 4730> timed out waiting for a response to a submit job request:

Regards,
Mithun Mistry
IT Engineering, Western Digital

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fungi at yuggoth.org Thu Oct 27 11:59:35 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 27 Oct 2022 11:59:35 +0000
Subject: Zuul Gearman becomes unresponsive
In-Reply-To: <7BE65395-9FD2-43BD-BE81-FEAD3F200E40@wdc.com>
References: <7BE65395-9FD2-43BD-BE81-FEAD3F200E40@wdc.com>
Message-ID: <20221027115934.5ejyuhhw7a56wozg@yuggoth.org>

On 2022-10-27 09:53:46 +0000 (+0000), Mithun Mistry wrote:
> We at the Western Digital IT team are using Zuul version R4.1.0.

Be aware that 4.1.0 is 1.5 years out of date, and the overall design of Zuul has moved on such that it no longer relies on the Gearman protocol at all.

> We have come across an issue whereby the Gearman server becomes
> unresponsive without any issues with the scheduler service.

What version of the gear library do you have installed with your scheduler? There have been fixes merged to improve stability in it, though not at all recently (continued development on gear ceased last year when we began removing it from Zuul).
--
Jeremy Stanley

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From Mithun.Mistry at wdc.com Thu Oct 27 14:24:29 2022
From: Mithun.Mistry at wdc.com (Mithun Mistry)
Date: Thu, 27 Oct 2022 14:24:29 +0000
Subject: Zuul Gearman becomes unresponsive
In-Reply-To: <20221027115934.5ejyuhhw7a56wozg@yuggoth.org>
References: <7BE65395-9FD2-43BD-BE81-FEAD3F200E40@wdc.com> <20221027115934.5ejyuhhw7a56wozg@yuggoth.org>
Message-ID:

Hi Jeremy,

Thank you for your suggestions. Much appreciated. Apologies for the typo in the Zuul version earlier: we are using a modified version of Zuul 4.5.1. No modifications were made related to gear.

The gear package version we are using with the scheduler is gear==0.16.0, which is the latest version of the gear Python package available on PyPI: https://pypi.org/project/gear/#history

Please let us know if there are additional versions of the gear package available with fixes, or if there is some code change in the gear library that may improve its stability. Also let us know if you think upgrading Zuul to the latest version and getting rid of the Gearman component is the only way to fix this instability.

Regards,
Mithun Mistry
IT Engineering, Western Digital

On 27/10/22, 5:29 PM, "Jeremy Stanley" wrote:

On 2022-10-27 09:53:46 +0000 (+0000), Mithun Mistry wrote:
> We at the Western Digital IT team are using Zuul version R4.1.0.

Be aware that 4.1.0 is 1.5 years out of date, and the overall design of Zuul has moved on such that it no longer relies on the Gearman protocol at all.

> We have come across an issue whereby the Gearman server becomes
> unresponsive without any issues with the scheduler service.

What version of the gear library do you have installed with your scheduler?
There have been fixes merged to improve stability in it, though not at all recently (continued development on gear ceased last year when we began removing it from Zuul).
--
Jeremy Stanley

From fungi at yuggoth.org Thu Oct 27 15:13:14 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 27 Oct 2022 15:13:14 +0000
Subject: Zuul Gearman becomes unresponsive
In-Reply-To:
References: <7BE65395-9FD2-43BD-BE81-FEAD3F200E40@wdc.com> <20221027115934.5ejyuhhw7a56wozg@yuggoth.org>
Message-ID: <20221027151110.lraffnkxf36rjkbd@yuggoth.org>

On 2022-10-27 14:24:29 +0000 (+0000), Mithun Mistry wrote:
> Apologies for the typo in the Zuul version earlier:
> we are using a modified version of Zuul 4.5.1.

Not that I would suggest running an old version, but there were at least later versions of Zuul which still used the Gearman protocol, 4.12.0 being the last to do so.

> No modifications were made related to gear.
>
> The gear package version we are using with the scheduler is
> gear==0.16.0, which is the latest version of the gear Python package
> available on PyPI: https://pypi.org/project/gear/#history
>
> Please let us know if there are additional versions of the gear package
> available with fixes, or if there is some code change in the gear
> library that may improve its stability.

I don't recall seeing such behavior with the final version of gear (the one you're using), but perhaps someone else on the list remembers having worked around something similar. It looks like there were a couple of changes related to Python 3.9 support which merged after 0.16.0 was tagged, but nothing indicating a fix for what you've described.

The only thing which springs to mind is that sometimes gear could take a very long time to respond if its job queue became massive (perhaps look for signs of significant memory consumption for that child process, or try querying it with a socket client which won't time out if the response time gets really long and see if it's tracking a lot more registrations and jobs than usual).

> Also let us know if you think upgrading Zuul to the latest version and
> getting rid of the Gearman component is the only way to fix this
> instability.

By staying on an old version of Zuul you're missing quite a few fixes for major security vulnerabilities, in addition to stability and scalability improvements (not to mention great new features, of course). Zuul's release notes can be found here, for reference, with further details:

https://zuul-ci.org/docs/zuul/releasenotes.html
https://zuul-ci.org/docs/nodepool/releasenotes.html

If you need it, the Zuul website also lists a couple of commercial options for help with things like upgrading, training, developing new features (which could possibly replace the things you modified your fork for), or even hosting and running a Zuul deployment for you. I'm not affiliated with those companies, but they are significant contributors to and supporters of our community:

https://zuul-ci.org/start
--
Jeremy Stanley

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:
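
As a rough illustration of the memory check Jeremy suggests, the sketch below (using the psutil package, which is not mentioned in the thread) prints the resident memory of any zuul-scheduler process and of its children, one of which should be the bundled geard in this era of Zuul:

import psutil

# Print RSS for each zuul-scheduler process and its child processes;
# a geard child that has ballooned in size would support the theory of
# a massive internal job queue.
for proc in psutil.process_iter(["pid", "cmdline"]):
    cmdline = " ".join(proc.info["cmdline"] or [])
    if "zuul-scheduler" in cmdline:
        print(f"scheduler pid={proc.pid} "
              f"rss={proc.memory_info().rss // (1024 * 1024)} MiB")
        for child in proc.children(recursive=True):
            print(f"  child pid={child.pid} "
                  f"rss={child.memory_info().rss // (1024 * 1024)} MiB")
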
From cboylan at sapwetik.org Thu Oct 27 15:20:37 2022
From: cboylan at sapwetik.org (Clark Boylan)
Date: Thu, 27 Oct 2022 08:20:37 -0700
Subject: Zuul Gearman becomes unresponsive
In-Reply-To: <20221027151110.lraffnkxf36rjkbd@yuggoth.org>
References: <7BE65395-9FD2-43BD-BE81-FEAD3F200E40@wdc.com> <20221027115934.5ejyuhhw7a56wozg@yuggoth.org> <20221027151110.lraffnkxf36rjkbd@yuggoth.org>
Message-ID: <9a009d78-64d7-4ed1-817e-bcca68589189@app.fastmail.com>

On Thu, Oct 27, 2022, at 8:13 AM, Jeremy Stanley wrote:
> The only thing which springs to mind is that sometimes gear could
> take a very long time to respond if its job queue became massive
> (perhaps look for signs of significant memory consumption for that
> child process, or try querying it with a socket client which won't
> time out if the response time gets really long and see if it's
> tracking a lot more registrations and jobs than usual).

Sending `status\n` to geard over a socket connection should report info back. In addition to what Jeremy suggested, you might also want to run a separate geard process so that you can configure debug logging for it and see if that shows any helpful info. Another option would be to strace the process to see where it might be getting stuck.

Clark

From Mithun.Mistry at wdc.com Fri Oct 28 05:44:32 2022
From: Mithun.Mistry at wdc.com (Mithun Mistry)
Date: Fri, 28 Oct 2022 05:44:32 +0000
Subject: Zuul Gearman becomes unresponsive
In-Reply-To: <9a009d78-64d7-4ed1-817e-bcca68589189@app.fastmail.com>
References: <7BE65395-9FD2-43BD-BE81-FEAD3F200E40@wdc.com> <20221027115934.5ejyuhhw7a56wozg@yuggoth.org> <20221027151110.lraffnkxf36rjkbd@yuggoth.org> <9a009d78-64d7-4ed1-817e-bcca68589189@app.fastmail.com>
Message-ID: <834F2E49-D9F1-498D-B221-19633C9EBC33@wdc.com>

Hi Jeremy / Clark,

Thank you very much for your excellent pointers and suggestions. Much appreciated. We will check the Gearman performance as per your suggestions and also plan to upgrade Zuul to the latest version.

Regards,
Mithun Mistry

On 27/10/22, 8:51 PM, "Clark Boylan" wrote:

CAUTION: This email originated from outside of Western Digital. Do not click on links or open attachments unless you recognize the sender and know that the content is safe.

On Thu, Oct 27, 2022, at 8:13 AM, Jeremy Stanley wrote:
> The only thing which springs to mind is that sometimes gear could
> take a very long time to respond if its job queue became massive
> (perhaps look for signs of significant memory consumption for that
> child process, or try querying it with a socket client which won't
> time out if the response time gets really long and see if it's
> tracking a lot more registrations and jobs than usual).

Sending `status\n` to geard over a socket connection should report info back. In addition to what Jeremy suggested, you might also want to run a separate geard process so that you can configure debug logging for it and see if that shows any helpful info. Another option would be to strace the process to see where it might be getting stuck.

Clark
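
For reference, a rough sketch of the kind of socket client Jeremy and Clark describe: it sends the administrative `status` command to geard and waits indefinitely for the reply. The host and port are the defaults seen in the thread, and the terminating "." line is assumed from the standard gearman admin protocol; each returned line should list a registered function with its queued, running, and available-worker counts.

import socket

HOST, PORT = "127.0.0.1", 4730  # defaults used in the thread

with socket.create_connection((HOST, PORT)) as sock:
    sock.settimeout(None)  # deliberately never time out, even if geard is slow
    sock.sendall(b"status\n")
    buf = b""
    # Read until the admin protocol's end-of-response marker (a lone ".").
    while chunk := sock.recv(4096):
        buf += chunk
        if buf == b".\n" or buf.endswith(b"\n.\n"):
            break

print(buf.decode())

A response that takes minutes to arrive, or that lists far more functions and queued jobs than usual, would line up with the failure mode Jeremy describes.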