From cboylan at sapwetik.org Mon Nov 5 18:36:27 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 05 Nov 2018 10:36:27 -0800 Subject: Community Infrastructure Berlin Summit Onboarding Session Message-ID: <1541442987.3761150.1566423440.2A7B1A3E@webmail.messagingengine.com> Hello everyone, My apologies for cross posting but wanted to make sure the various developer groups saw this. Rather than use the Infrastructure Onboarding session in Berlin [0] for infrastructure sysadmin/developer onboarding, I thought we could use the time for user onboarding. We've got quite a few new groups interacting with us recently, and it would probably be useful to have a session on what we do, how people can take advantage of this, and so on. I've been brainstorming ideas on this etherpad [1]. If you think you'll attend the session and find any of these subjects to be useful please +1 them. Also feel free to add additional topics. I expect this will be an informal session that directly targets the interests of those attending. Please do drop by if you have any interest in using this infrastructure at all. This is your chance to better understand Zuul job configuration, the test environments themselves, the metrics and data we collect, and basically anything else related to the community developer infrastructure. [0] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22950/infrastructure-project-onboarding [1] https://etherpad.openstack.org/p/openstack-infra-berlin-onboarding Hope to see you there, Clark From cboylan at sapwetik.org Thu Nov 8 19:42:19 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 08 Nov 2018 11:42:19 -0800 Subject: OpenDev, the future of OpenStack Infra Message-ID: <1541706139.393458.1570439656.675141A3@webmail.messagingengine.com> Hello everyone, Sorry for another cross post so soon. In the land before time we had Stackforge. Stackforge gave non-OpenStack projects a place to live with their own clearly defined "not OpenStack" namespacing. As the wheel of time spun we realized that many Stackforge projects were becoming OpenStack projects and we would have to migrate them. This involved Gerrit downtimes to rename things safely. To ease the pain of this, the TC decided that all projects developed in the OpenStack Infrastructure could live under the OpenStack git namespace to simplify migrations. Unfortunately this had the effect of creating confusion over which projects were officially a part of OpenStack, and whether or not projects that were not OpenStack could use our project hosting. Stackforge lived on under a different name, "unofficial project hosting", but many potential infrastructure users either didn't understand this or didn't want that strong association to OpenStack for their project hosting [0]. Turns out that we want to be able to host OpenStack and non-OpenStack projects together without confusion in a way that makes all of the projects involved happy. In an effort to make this a reality the OpenStack Infra team has been working through a process to rename itself to make it clear that our awesome project infrastructure and open collaboration tooling is community run, not just for OpenStack, but for others that want to be involved. To this end we've acquired the opendev.org domain which will allow us to host services under a neutral name as the OpenDev Infrastructure team. 
The OpenStack community will continue to be the largest and a primary user for the OpenDev Infrastructure team, but our hope in making our infrastructure services more inclusive is that we'll also attract new contributors, which will ultimately benefit OpenStack and other open infrastructure projects. Our goals for OpenDev are to:

* Encourage additional active infrastructure contributors to help us scale. Make it clear that this is community-run tooling & infrastructure and everyone can get involved.
* Make open source collaboration tools and project infrastructure more accessible to those that want it.
* Have exposure to and dogfooding of OpenStack clouds as viable open source cloud providers.
* Enable more projects to take advantage of the OpenStack-pioneered model of development and collaboration, including recommended practices like code review and gating.
* Help build relationships with new and adjacent open source projects and create an inclusive space for collaboration and open source development.

Much of this is still in the early planning stages. This is the infrastructure team's current thinking on the subject, but understand we have an existing community from which we'd like to see buy-in and involvement. To that end we have tried to compile a list of expected FAQ/Q&A information below, but feel free to follow up either on this thread or with me for anything we haven't considered already.

Any transition will be slow and considered so don't expect everything to change overnight. But don't be surprised if you run into some new HTTP redirects as we reorganize the names under which services run. We'll also be sure to keep you informed on any major (and probably minor) transition steps so that they won't surprise you.

Thank you,
Clark

[0] It should be noted that some projects did not mind this and hosted with OpenStack Infra anyway. ARA is an excellent example of this.

FAQ

* What is OpenDev?

OpenDev is community-run tools and infrastructure services for collaboratively developing open source software. The OpenDev infrastructure team is the community of people who operate the infrastructure under the umbrella of the OpenStack Foundation.

* What services are you offering? What is the expected timeline?

In the near-term we expect to transition simple services like etherpad hosting to the OpenDev domain. It will take us months and potentially up to a year to transition key infrastructure pieces like Git and Gerrit. Example services managed by the team today include etherpad, wiki, the zuul and nodepool CI system, git and gerrit, and other minor systems like pbx conferencing and survey tools.

* Where will these services live?

We've acquired opendev.org and are planning to set up DNS hosting very soon. We will post a simple information page and FAQ on the website and build it out as necessary over time.

* Why are you changing from the OpenStack infrastructure team to the OpenDev infrastructure team?

In the same way we want to signal that our services are not strictly for OpenStack projects, and that not every project using our services is an official part of OpenStack, we want to make it clear that our team also serves this larger community.

* Who should use OpenDev services? Does it have to be projects related to OpenStack, or any open source projects?

In short, open source contributors who share our community values, especially those who might want to help contribute to improving and maintaining OpenDev infrastructure over time.
Projects using OpenDev hosted git and gerrit services should have an OSI-approved license.

* Will the OpenStack projects live at git.opendev.org?

All projects hosted with OpenDev will live at git.opendev.org. For backwards compatibility reasons, at the very least git.openstack.org will be an alias for git.opendev.org for the foreseeable future. The same is true of the other existing whitelabel git domains such as git.starlingx.io and git.zuul-ci.org. Whether or not other 'whitelabel' domains are created is an open question. Given a neutral domain name, such sites may no longer seem as necessary.

* Does this mean the infrastructure team will be spending less time on OpenStack?

OpenStack will continue to be the largest and a primary user for the OpenDev Infrastructure team, and we expect that our work will benefit all users. There will be additional effort required as we transition to the new namespace and reorganize, but over the long term we hope this inclusive approach will help us attract new contributors and ultimately benefit OpenStack.

* Are OpenStack cloud test resources the only resources that will be used?

At the present time all of the donated resources come to us from a combination of OpenStack Public and Private clouds. Nobody from any of the proprietary clouds has asked to donate resources to us. It is conceivable that the shift to OpenDev could open the door to those cloud providers wanting to donate some cloud resources. Assuming nodepool supports talking to those clouds, it is certainly a possibility, but at the moment it's all speculation.

* Is this name associated with the OpenDev Conferences (opendevconf.com) that the OpenStack Foundation has previously organized?

Yes! They are related, albeit indirectly. To quote from the conference's promotional site, "the focus is on bringing together composable open infrastructure technologies across communities and industries." The possibility of cross-promotional tie-ins could prove synergistic, since the conference and the collaboratory we're building share a lot of similar values and goals, and are ultimately supported by the same donors, community and foundation.

* How will OpenDev be governed? Will the OpenStack TC retain oversight over it?

The OpenDev governance discussion is just getting started, but like all OSF-supported initiatives, OpenDev follows the Four Opens, so it will ultimately be directly governed by OpenDev contributors. While it won't be under the sole oversight of the OpenStack Technical Committee anymore, OpenDev users (in particular OpenStack) should be represented in the governance model so that they can feed back their requirements to the OpenDev team.

From Tobias.Henkel at bmw.de Mon Nov 12 15:37:11 2018
From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de)
Date: Mon, 12 Nov 2018 15:37:11 +0000
Subject: Semaphores and resources
Message-ID: <52F7DB13-9CF9-453F-AD1A-F37080E8D23A@bmw.de>

Hi Zuulers,

We just discovered an (in our case) unpleasant behaviour of zuul and semaphores.

Scenario:
We have a job ‘foo’ with a semaphore of 5. Now the queue grows and we have 30 items with job ‘foo’ in the queue. The current behaviour of zuul is that it first gets the nodes and then locks the semaphore. This means that 5 jobs run and 25 wait for the semaphore but already got and locked their nodes. In case the job is expensive and long running this can become quite a big resource waste problem if we’re in a quota constrained situation.
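For illustration, the setup described above corresponds to a configuration roughly like the following, using the standard Zuul v3 semaphore syntax (the job name, semaphore name and node label are placeholders):

    # A semaphore allowing at most five concurrent holders.
    - semaphore:
        name: foo-semaphore
        max: 5

    # The job 'foo' from the scenario: it requires a node and holds the
    # semaphore for the duration of its run.
    - job:
        name: foo
        semaphore: foo-semaphore
        nodeset:
          nodes:
            - name: builder
              label: expensive-large-node

As described above, Zuul currently requests and locks the nodeset first and only then tries to acquire foo-semaphore, which is how the 25 queued builds end up sitting on locked nodes.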
After some discussion on IRC this would be a proposal to solve this problem:
We could make the order of ‘request nodes’ and ‘lock semaphore’ configurable per job with a default to the safe behaviour of ‘lock semaphore’ then ‘request nodes’. Jobs would still have the possibility to do the fast path by requesting the nodes first. This makes sense if you have lightweight jobs that should run in a fast consecutive sequence and you know that some waste of resources is ok.

What do the other semaphore users out there think about this?

Kind regards
Tobias
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Tobias.Henkel at bmw.de Tue Nov 13 20:48:52 2018
From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de)
Date: Tue, 13 Nov 2018 20:48:52 +0000
Subject: Dequeuing buildset on first error
Message-ID: <61F7E089-C645-4A7F-AE42-BEE41FFF06BC@bmw.de>

Hi Zuulers,

We have several projects that have buildsets with fast (few minutes, e.g. linters) and longer running (> 40 minutes, e.g. big fat c++ compilation) jobs. We often see the case that one of the linter jobs has failed and could already give feedback. However zuul waits for the completion of the fat compile job before reporting. Generally this is the desired behaviour because we want to give as much feedback as possible. However some projects may want to decide that quick feedback is more important than the most complete feedback possible.

So I would propose to add a ‘report-on-first-failure’ flag to the project-pipeline. The default would be false in order to retain the current behaviour. If the dequeue happens, the aborted jobs will have the ABORTED state so it can be clearly seen in the reporting that these were aborted.

Enabling this for certain projects also could help to reduce resource consumption in quota-constrained periods.

What do you think about this?

Kind regards
Tobias
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guillaume.chenuet at schibsted.com Tue Nov 13 21:52:55 2018
From: guillaume.chenuet at schibsted.com (Guillaume Chenuet)
Date: Tue, 13 Nov 2018 22:52:55 +0100
Subject: Dequeuing buildset on first error
In-Reply-To: <61F7E089-C645-4A7F-AE42-BEE41FFF06BC@bmw.de>
References: <61F7E089-C645-4A7F-AE42-BEE41FFF06BC@bmw.de>
Message-ID: <42E0D85D-0584-489A-BD37-39E1157BD9E2@schibsted.com>

Hi Tobias,

Sorry for my lack of information about your CI (Gerrit or not, etc.) and your workflows.

What about running these jobs on different pipelines? At leboncoin, we have 3 ‘check’ pipelines (meaning they run on each patchset): quality, integration and build. With this setup, we can have an asynchronous vote on Gerrit under different labels: fast voting on the quality pipeline/label (ex: code linter - small jobs) and a slow result with the build pipeline/label (long run jobs).

BTW, if I’m missing the point, your idea looks good to me.

Kind Regards.
Guillaume

> Le 13 nov. 2018 à 21:48, a écrit :
>
> Hi Zuulers,
>
> We have several projects that have buildsets with fast (few minutes, e.g. linters) and longer running (> 40 minutes, e.g. big fat c++ compilation) jobs.
> We often see the case that one of the linter jobs failed and already could give feedback. However zuul waits for the completion of the fat compile job before reporting.
> Generally this is the desired behaviour because we want to give as much feedback as possible.
> However some projects may want to decide that quick feedback is more important than the most complete feedback possible.
> > So I would propose to add a ‘report-on-first-failure’ flag to the project-pipeline. The default would be false in order to retain the current behaviour. > If the deque happens the aborted jobs will have the ABORTED state so it can be clearly seen in the reporting that these were aborted. > > Enabling this for certain projects also could help to reduce resource consumption in quota-constrained periods. > > What do you think about this? > > Kind regards > Tobias > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at mpagel.de Wed Nov 14 05:54:09 2018 From: mike at mpagel.de (Mike Pagel) Date: Wed, 14 Nov 2018 06:54:09 +0100 Subject: Dequeuing buildset on first error In-Reply-To: <42E0D85D-0584-489A-BD37-39E1157BD9E2@schibsted.com> Message-ID: <8ij6jjl6gfocs3fcl55cvq6e.1542174484344@email.android.com> Hi, I am not sure I understand the suggested approach of using two pipelines in a gated project. Isn't a pipeline by definition the single point of decision if a change is good or not? For check this might work, but then you'd still waste resources in the long running build set. I like Tobias' suggestion. However, I would not make this a pipeline attribute, certainly not a pipeline definition attribute. Instead I'd want to decide within the context of a particular project's pipeline, which jobs should be able to abort it. If the linters succeed, I usually then do like to see all real builds get completed. In fact I see this at the same level as the voting attribute of a job. It's pretty much the hard core version of voting=true. Best, Mike ________________________________ Von: Guillaume Chenuet Gesendet: Dienstag, 13. November 2018 22:52 An: Tobias.Henkel at bmw.de Cc: zuul-discuss at lists.zuul-ci.org Betreff: Re: Dequeuing buildset on first error Hi Tobias, Sorry for my lake of information about your CI (Gerrit or not, etc.) and your workflows.  What about running these jobs on different pipelines ?  At leboncoin, we have 3 ‘check’ pipelines (means running on each patchset): quality, integration and build.  With this setup, we can have an asynchronous vote on Gerrit under different labels: fast voting on quality pipeline/label (ex: code linter - small jobs) and a slow result with build pipeline/label (long run jobs).  BTW, if I’m missing the point, your idea looks good to me.  Kind Regards.  Guillaume Le 13 nov. 2018 à 21:48, a écrit : Hi Zuulers,   We have several projects that have buildsets with fast (few minutes, e.g. linters) and longer running (> 40 minutes, e.g. big fat c++ compilation) jobs. We often see the case that one of the linter jobs failed and already could give feedback. However zuul waits for the completion of the fat compile job before reporting. Generally this is the desired behaviour because we want to give as much feedback as possible. However some projects may want to decide that quick feedback is more important than the most complete feedback possible.   So I would propose to add a ‘report-on-first-failure’ flag to the project-pipeline. The default would be false in order to retain the current behaviour. If the deque happens the aborted jobs will have the ABORTED state so it can be clearly seen in the reporting that these were aborted.   Enabling this for certain projects also could help to reduce resource consumption in quota-constrained periods.   
What do you think about this?   Kind regards Tobias > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Wed Nov 14 08:58:09 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 14 Nov 2018 00:58:09 -0800 Subject: Dequeuing buildset on first error In-Reply-To: <8ij6jjl6gfocs3fcl55cvq6e.1542174484344@email.android.com> (Mike Pagel's message of "Wed, 14 Nov 2018 06:54:09 +0100") References: <8ij6jjl6gfocs3fcl55cvq6e.1542174484344@email.android.com> Message-ID: <87zhuc6noe.fsf@meyer.lemoncheese.net> Mike Pagel writes: > Hi, > > I am not sure I understand the suggested approach of using two > pipelines in a gated project. Isn't a pipeline by definition the > single point of decision if a change is good or not? For check this > might work, but then you'd still waste resources in the long running > build set. Indeed that's true for gating -- but I think the technique can be used for check pipelines. And I think there are two concerns -- providing quick feedback, which the multiple pipeline option addresses, and conserving build resources, which Tobias's suggestion addresses. (Perhaps with the new Gerrit status plugin and GitHub checks API, we might be able to experiment with reporting jobs as they complete.) > I like Tobias' suggestion. However, I would not make this a pipeline > attribute, certainly not a pipeline definition attribute. Instead I'd > want to decide within the context of a particular project's pipeline, > which jobs should be able to abort it. If the linters succeed, I > usually then do like to see all real builds get completed. In fact I > see this at the same level as the voting attribute of a job. It's > pretty much the hard core version of voting=true. Agreed -- that's how I read Tobias's suggestion. -Jim From corvus at inaugust.com Wed Nov 14 09:06:52 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 14 Nov 2018 01:06:52 -0800 Subject: Semaphores and resources In-Reply-To: <52F7DB13-9CF9-453F-AD1A-F37080E8D23A@bmw.de> (Tobias Henkel's message of "Mon, 12 Nov 2018 15:37:11 +0000") References: <52F7DB13-9CF9-453F-AD1A-F37080E8D23A@bmw.de> Message-ID: <87va506n9v.fsf@meyer.lemoncheese.net> writes: > Hi Zuulers, > > We just discovered an (in our case) unpleasant behaviour of zuul and semaphores. > > Scenario: > We have a job ‘foo’ with a semaphore of 5. Now the queue grows and we > have 30 items with job ‘foo’ in the queue. > The current behaviour of zuul is that it first gets the nodes and then locks the semaphore. > This means that 5 jobs run and 25 wait for the semaphore but already got and locked their nodes. > In case the job is expensive and long running this can become quite a > big resource waste problem if we’re in a quota constrained situation. > > After some discussion on IRC this would be a proposal to solve this problem: > We could make the order of ‘request nodes’ and ‘lock semaphore’ > configurable per job with a default to the safe behaviour of > ‘lock semaphore’ then ‘request nodes’. Jobs would still have the > possibility to do the fast path by requesting the nodes first. > This makes sense if you have lightweight jobs that should run in a > fast consecutive sequence and you know that some waste of resources is > ok. > > What do the other semaphore users out there think about this? 
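For concreteness, the per-job ordering being proposed could surface as something like the sketch below. This is hypothetical syntax for illustration only: at the time of this thread no such option exists, and only the semaphore name itself can be set on a job.

    - job:
        name: foo
        semaphore:
          name: foo-semaphore
          # Hypothetical option capturing the proposal: when false (the
          # proposed safe default) the semaphore is acquired before the
          # node request; when true the job keeps today's behaviour and
          # requests its nodes first, accepting some resource waste in
          # exchange for fast back-to-back runs.
          resources-first: false

Whatever the final spelling, the key point of the proposal is that the default flips to acquiring the semaphore before requesting nodes.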
If no one sees any potential problems with the new behavior, I think this sounds like a good change (including making it the default). -Jim From lukasz.lukasiewicz at codilime.com Wed Nov 14 09:27:32 2018 From: lukasz.lukasiewicz at codilime.com (=?UTF-8?B?xYF1a2FzeiDFgXVrYXNpZXdpY3o=?=) Date: Wed, 14 Nov 2018 10:27:32 +0100 Subject: Dequeuing buildset on first error In-Reply-To: <87zhuc6noe.fsf@meyer.lemoncheese.net> References: <8ij6jjl6gfocs3fcl55cvq6e.1542174484344@email.android.com> <87zhuc6noe.fsf@meyer.lemoncheese.net> Message-ID: The feature would do a lot of good for our 'periodic-nightly' pipeline in Tungsten Fabric. We pull majority of our projects every 24h and build them. If one of the projects fails to build, we won't be publishing our artifacts anyway and we'll restart the build process. This would basically remove an extra step of dequeueing the buildset for us. +1 Lukas -- ------------------------------- The information in this email is confidential and may be legally privileged, it may contain information that is confidential in CodiLime Sp. z o. o. It is intended solely for the addressee. Any access to this email by third parties is unauthorized. If you are not the intended recipient of this message, any disclosure, copying, distribution or any action undertaken or neglected in reliance thereon is prohibited and may result in your liability for damages. -------------- next part -------------- An HTML attachment was scrubbed... URL: From joshua.hesketh at gmail.com Wed Nov 14 09:48:42 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Wed, 14 Nov 2018 20:48:42 +1100 Subject: Semaphores and resources In-Reply-To: <52F7DB13-9CF9-453F-AD1A-F37080E8D23A@bmw.de> References: <52F7DB13-9CF9-453F-AD1A-F37080E8D23A@bmw.de> Message-ID: (Had originally forgot to reply-all) I'm not a user of semaphores, so perhaps my opinion is invalid, however I'm not sure the use case of locking first is valid enough to justify the extra complexity of making it an option. The point of a semaphore is often not to waste resources, so I fail to understand the use case of holding nodes ahead of time. Besides, if you have resources to spare (or "waste"), then you could have a bigger ready allocation in your nodepool meaning that new jobs will pick up a node reasonably quickly anyway. Again though, as I'm not a user of this feature I'm likely missing something :-). On Tue, Nov 13, 2018 at 2:37 AM wrote: > Hi Zuulers, > > > > We just discovered an (in our case) unpleasant behaviour of zuul and > semaphores. > > > > Scenario: > > We have a job ‘foo’ with a semaphore of 5. Now the queue grows and we have > 30 items with job ‘foo’ in the queue. > > The current behaviour of zuul is that it first gets the nodes and then > locks the semaphore. > > This means that 5 jobs run and 25 wait for the semaphore but already got > and locked their nodes. > > In case the job is expensive and long running this can become quite a big > resource waste problem if we’re in a quota constrained situation. > > > > After some discussion on IRC this would be a proposal to solve this > problem: > > We could make the order of ‘request nodes’ and ‘lock semaphore’ > configurable per job with a default to the safe behaviour of > > ‘lock semaphore’ then ‘request nodes’. Jobs would still have the > possibility to do the fast path by requesting the nodes first. > > This makes sense if you have lightweight jobs that should run in a fast > consecutive sequence and you know that some waste of resources is ok. 
> > > > What do the other semaphore users out there think about this? > > > > Kind regards > > Tobias > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joshua.hesketh at gmail.com Wed Nov 14 10:13:24 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Wed, 14 Nov 2018 21:13:24 +1100 Subject: Dequeuing buildset on first error In-Reply-To: <87zhuc6noe.fsf@meyer.lemoncheese.net> References: <8ij6jjl6gfocs3fcl55cvq6e.1542174484344@email.android.com> <87zhuc6noe.fsf@meyer.lemoncheese.net> Message-ID: I like this suggestion. Typically I would use child_jobs to exit early if a pre-check (such as linting) fails. This however means the child jobs don't start until the linting has finished. This extra option would allow us to start jobs earlier and not waste resources if more results aren't needed. On Wed, Nov 14, 2018 at 7:58 PM James E. Blair wrote: > Mike Pagel writes: > > > Hi, > > > > I am not sure I understand the suggested approach of using two > > pipelines in a gated project. Isn't a pipeline by definition the > > single point of decision if a change is good or not? For check this > > might work, but then you'd still waste resources in the long running > > build set. > > Indeed that's true for gating -- but I think the technique can be used > for check pipelines. > > And I think there are two concerns -- providing quick feedback, which > the multiple pipeline option addresses, and conserving build resources, > which Tobias's suggestion addresses. > > (Perhaps with the new Gerrit status plugin and GitHub checks API, we > might be able to experiment with reporting jobs as they complete.) > > > I like Tobias' suggestion. However, I would not make this a pipeline > > attribute, certainly not a pipeline definition attribute. Instead I'd > > want to decide within the context of a particular project's pipeline, > > which jobs should be able to abort it. If the linters succeed, I > > usually then do like to see all real builds get completed. In fact I > > see this at the same level as the voting attribute of a job. It's > > pretty much the hard core version of voting=true. > > Agreed -- that's how I read Tobias's suggestion. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Wed Nov 14 11:14:48 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 14 Nov 2018 11:14:48 +0000 Subject: Semaphores and resources In-Reply-To: References: <52F7DB13-9CF9-453F-AD1A-F37080E8D23A@bmw.de> Message-ID: <1542193872.9spvau8ljj.tristanC@fedora> On November 14, 2018 9:48 am, Joshua Hesketh wrote: > (Had originally forgot to reply-all) > > I'm not a user of semaphores, so perhaps my opinion is invalid, however I'm > not sure the use case of locking first is valid enough to justify the extra > complexity of making it an option. > > The point of a semaphore is often not to waste resources, so I fail to > understand the use case of holding nodes ahead of time. 
Besides, if you > have resources to spare (or "waste"), then you could have a bigger ready > allocation in your nodepool meaning that new jobs will pick up a node > reasonably quickly anyway. > When semaphores are used by fast jobs (e.g. publication), and if nodepool is running at capacity, it may be better to hold the node ahead of time. Though it sounds like a special case which shouldn't be the default behavior. -Tristan > Again though, as I'm not a user of this feature I'm likely missing > something :-). > > On Tue, Nov 13, 2018 at 2:37 AM wrote: > >> Hi Zuulers, >> >> >> >> We just discovered an (in our case) unpleasant behaviour of zuul and >> semaphores. >> >> >> >> Scenario: >> >> We have a job ‘foo’ with a semaphore of 5. Now the queue grows and we have >> 30 items with job ‘foo’ in the queue. >> >> The current behaviour of zuul is that it first gets the nodes and then >> locks the semaphore. >> >> This means that 5 jobs run and 25 wait for the semaphore but already got >> and locked their nodes. >> >> In case the job is expensive and long running this can become quite a big >> resource waste problem if we’re in a quota constrained situation. >> >> >> >> After some discussion on IRC this would be a proposal to solve this >> problem: >> >> We could make the order of ‘request nodes’ and ‘lock semaphore’ >> configurable per job with a default to the safe behaviour of >> >> ‘lock semaphore’ then ‘request nodes’. Jobs would still have the >> possibility to do the fast path by requesting the nodes first. >> >> This makes sense if you have lightweight jobs that should run in a fast >> consecutive sequence and you know that some waste of resources is ok. >> >> >> >> What do the other semaphore users out there think about this? >> >> >> >> Kind regards >> >> Tobias >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jaroslaw.lukow at codilime.com Thu Nov 15 22:26:22 2018 From: jaroslaw.lukow at codilime.com (=?UTF-8?B?SmFyZWsgxYF1a293?=) Date: Thu, 15 Nov 2018 23:26:22 +0100 Subject: Slides from Tungsten talk on Berlin Summit Message-ID: Hi all, As promised, I'm sending the link to our slide deck: https://wiki.tungsten.io/download/attachments/1409104/Zuul_as_a_build_system_15.11.2018.pdf Regards, -- Jarosław Łukow DevOps Engineer jaroslaw.lukow at codilime.com -- ------------------------------- The information in this email is confidential and may be legally privileged, it may contain information that is confidential in CodiLime Sp. z o. o. It is intended solely for the addressee. Any access to this email by third parties is unauthorized. If you are not the intended recipient of this message, any disclosure, copying, distribution or any action undertaken or neglected in reliance thereon is prohibited and may result in your liability for damages. From guillaume.chenuet at schibsted.com Thu Nov 15 22:54:43 2018 From: guillaume.chenuet at schibsted.com (Guillaume Chenuet) Date: Thu, 15 Nov 2018 23:54:43 +0100 Subject: Slides from Tungsten talk on Berlin Summit In-Reply-To: References: Message-ID: Thanks! 
Your talk was very useful with a lot of practical feedback.

Guillaume

> Le 15 nov. 2018 à 23:26, Jarek Łukow a écrit :
>
> Hi all,
>
> As promised, I'm sending the link to our slide deck:
> https://wiki.tungsten.io/download/attachments/1409104/Zuul_as_a_build_system_15.11.2018.pdf
>
> Regards,
> --
> Jarosław Łukow
> DevOps Engineer
> jaroslaw.lukow at codilime.com
>
> --
>
> -------------------------------
> The information in this email is confidential and may be legally privileged, it may contain information that is confidential in CodiLime Sp. z o. o. It is intended solely for the addressee. Any access to this email by third parties is unauthorized. If you are not the intended recipient of this message, any disclosure, copying, distribution or any action undertaken or neglected in reliance thereon is prohibited and may result in your liability for damages.
>
> _______________________________________________
> Zuul-discuss mailing list
> Zuul-discuss at lists.zuul-ci.org
> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tdecacqu at redhat.com Sun Nov 18 09:34:24 2018
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Sun, 18 Nov 2018 09:34:24 +0000
Subject: Roadmap for running jobs on OpenShift
Message-ID: <1542533012.7xqkdv04pn.tristanC@fedora>

Hello everyone,

Please find below a list of topics I think we should address in order to land the OpenShift driver.

Kube config integration in zuul-executor
----------------------------------------
As of now, the k8s and the openshift drivers need a setup function in the executor module in order to be usable: https://review.openstack.org/570668

The Openshift drivers
---------------------
https://review.openstack.org/570667 and https://review.openstack.org/590335 are two variants to support different kinds of cluster accounts: self-provisioner and user. The code may be refactored into a single driver.

Inventory update by job
-----------------------
We should consider a pre-run phase to create new resources and update the job inventory using zuul_return to enable the test phase to use "hosts: pod", as proposed in: https://review.openstack.org/590092

Roles and parent jobs
---------------------
The container use-cases need a lot of love to manage builds and to enable new workflows, for example, to promote an image from one cluster to a central registry and trigger updates in other clusters. An initial build&deploy Openshift pod and a prepare-workspace implementation using oc rsync are proposed here: https://review.openstack.org/570669
Even though this doesn't have to be part of Zuul, I think we should build new roles in zuul-jobs as a community.

Define the security context
---------------------------
It seems like opinions diverge on the security requirements for running untrusted jobs in pods. At the summit, I was told that we should keep on using the root user so that bindep and other service operations stay the same. I strongly disagree with this approach because containers that run as root have far more permissions than their workload requires and this could be abused by untrusted playbooks. Nodepool access to a k8s or Openshift cluster shall be restricted to non-root users to mitigate privilege escalation. Even though this can be discussed later, I think it's important to set the expectation immediately and keep users safe.

How many users are interested in these topics and what are your review bandwidths?
Can we get an estimated release date for OpenShift users? Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From artem.goncharov at gmail.com Sun Nov 18 09:44:29 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Sun, 18 Nov 2018 10:44:29 +0100 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <1542533012.7xqkdv04pn.tristanC@fedora> References: <1542533012.7xqkdv04pn.tristanC@fedora> Message-ID: Hi Tristan, I totally agree with the security considerations you mentioned. As already mentioned this couple of times I am interested in Openshift&Zuul love from both sides and will try to spend some time reviewing mentioned changes next week. Thanks for your work. Artem (gtema) On Sun, 18 Nov 2018, 10:34 Tristan Cacqueray, wrote: > Hello everyone, > > Please find below a list of topics I think we should address in order > to land the OpenShift driver. > > Kube config integration in zuul-executor > ---------------------------------------- > As of now, the k8s and the openshift drivers need a setup function in > the exector module in order to be usable: > https://review.openstack.org/570668 > > The Openshift drivers > --------------------- > https://review.openstack.org/570667 and > https://review.openstack.org/590335 > are 2 variants to support different kinds of cluster accounts: > self-provisioner and user. The code may be refactored in a single > driver. > > Inventory update by job > ----------------------- > We should consider a pre-run phase to create new resources and update the > job > inventory using zuul_return to enable the test phase to uses "hosts: pod", > as proposed in: https://review.openstack.org/590092 > > Roles and parent jobs > --------------------- > The containers use-cases need a lot of love to manage builds and > to enable new workflows, for example, to promote an image from one > cluster to a central registry and trigger updates in others clusters. > Here is an initial build&deploy Openshift pod as well as > a prepare-workspace implementation using oc rsync proposed here: > https://review.openstack.org/570669 > Even though this doesn't have to be part of Zuul, I think we should > build new roles in zuul-jobs as a community. > > Define the security context > --------------------------- > It seems like opinions diverge on the security requirements for running > untrusted jobs in pods. At the summit, I was told that we should keep on > using the root user so that bindep and other service operations stay the > same. > I strongly disagree with this approach because containers that run as root > have far more permissions than their workload requires and this could be > abused by untrusted playbooks. Nodepool accesses to a k8s or Openshift > cluster > shall be restricted to non-root users to mitigate privilege escallation. > Even though this can be discussed later, I think it's important to set the > expectation immediately and keep users safe. > > > How many users are interested in these topics and what are your review > bandwidths? Can we get an estimated release date for OpenShift users? > > Regards, > -Tristan > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Sun Nov 18 11:41:07 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 18 Nov 2018 11:41:07 +0000 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <1542533012.7xqkdv04pn.tristanC@fedora> References: <1542533012.7xqkdv04pn.tristanC@fedora> Message-ID: <20181118114106.zw3p53tv3dromu4y@yuggoth.org> On 2018-11-18 09:34:24 +0000 (+0000), Tristan Cacqueray wrote: [...] > Define the security context > --------------------------- > It seems like opinions diverge on the security requirements for running > untrusted jobs in pods. At the summit, I was told that we should keep on > using the root user so that bindep and other service operations stay the same. > I strongly disagree with this approach because containers that run as root > have far more permissions than their workload requires and this could be > abused by untrusted playbooks. Nodepool accesses to a k8s or Openshift cluster > shall be restricted to non-root users to mitigate privilege escallation. > Even though this can be discussed later, I think it's important to set the > expectation immediately and keep users safe. [...] Would using Kata for the container runtime solve this dilemma by still allowing jobs to do things like install additional packages under execution without exposing the host's kernel to the workload? We've discovered over the years that with a diverse job set, preinstalling everything a job might need in images doesn't scale (and neither does trying to maintain job-specific images). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From artem.goncharov at gmail.com Sun Nov 18 12:16:25 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Sun, 18 Nov 2018 13:16:25 +0100 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <20181118114106.zw3p53tv3dromu4y@yuggoth.org> References: <1542533012.7xqkdv04pn.tristanC@fedora> <20181118114106.zw3p53tv3dromu4y@yuggoth.org> Message-ID: On Sun, 18 Nov 2018, 12:41 Jeremy Stanley, wrote: > On 2018-11-18 09:34:24 +0000 (+0000), Tristan Cacqueray wrote: > [...] > > Define the security context > > --------------------------- > > It seems like opinions diverge on the security requirements for running > > untrusted jobs in pods. At the summit, I was told that we should keep on > > using the root user so that bindep and other service operations stay the > same. > > I strongly disagree with this approach because containers that run as > root > > have far more permissions than their workload requires and this could be > > abused by untrusted playbooks. Nodepool accesses to a k8s or Openshift > cluster > > shall be restricted to non-root users to mitigate privilege escallation. > > Even though this can be discussed later, I think it's important to set > the > > expectation immediately and keep users safe. > [...] > > Would using Kata for the container runtime solve this dilemma by > still allowing jobs to do things like install additional packages > under execution without exposing the host's kernel to the workload? > We've discovered over the years that with a diverse job set, > preinstalling everything a job might need in images doesn't scale > (and neither does trying to maintain job-specific images). 
> While it might be generally an option there is one certain limitation to remember about Kata - it requires nested virtualization, which might not be available in the Openshift/K8 cluster. This is at least something I have faced now and was discussing with Kata people. Therefore I am personally not sure it would be a right way to put such a limitation/requirement. Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Sun Nov 18 14:31:30 2018 From: aj at suse.com (Andreas Jaeger) Date: Sun, 18 Nov 2018 15:31:30 +0100 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <1542533012.7xqkdv04pn.tristanC@fedora> References: <1542533012.7xqkdv04pn.tristanC@fedora> Message-ID: <7de26c65-420e-8413-c9a7-2d2570e2137d@suse.com> On 18/11/2018 10.34, Tristan Cacqueray wrote: > Hello everyone, > > Please find below a list of topics I think we should address in order > to land the OpenShift driver. Can we make this work on any distribution of Kubernetes? Or do you need specific things that will work only on OpenShift? Andreas > [...] -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From tdecacqu at redhat.com Sun Nov 18 15:30:22 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Sun, 18 Nov 2018 15:30:22 +0000 Subject: Roadmap for running jobs on OpenShift In-Reply-To: References: <1542533012.7xqkdv04pn.tristanC@fedora> <20181118114106.zw3p53tv3dromu4y@yuggoth.org> Message-ID: <1542552165.wbxhhvdjeo.tristanC@fedora> On November 18, 2018 12:16 pm, Artem Goncharov wrote: > On Sun, 18 Nov 2018, 12:41 Jeremy Stanley, wrote: > >> On 2018-11-18 09:34:24 +0000 (+0000), Tristan Cacqueray wrote: >> [...] >> > Define the security context >> > --------------------------- >> > It seems like opinions diverge on the security requirements for running >> > untrusted jobs in pods. At the summit, I was told that we should keep on >> > using the root user so that bindep and other service operations stay the >> same. >> > I strongly disagree with this approach because containers that run as >> root >> > have far more permissions than their workload requires and this could be >> > abused by untrusted playbooks. Nodepool accesses to a k8s or Openshift >> cluster >> > shall be restricted to non-root users to mitigate privilege escallation. >> > Even though this can be discussed later, I think it's important to set >> the >> > expectation immediately and keep users safe. >> [...] >> >> Would using Kata for the container runtime solve this dilemma by >> still allowing jobs to do things like install additional packages >> under execution without exposing the host's kernel to the workload? >> We've discovered over the years that with a diverse job set, >> preinstalling everything a job might need in images doesn't scale >> (and neither does trying to maintain job-specific images). >> Perhaps it is possible to install job requirements without root access, e.g. make bindep unpack the header and tool in ~/.local ? > > While it might be generally an option there is one certain limitation to > remember about Kata - it requires nested virtualization, which might not be > available in the Openshift/K8 cluster. This is at least something I have > faced now and was discussing with Kata people. 
Therefore I am personally > not sure it would be a right way to put such a limitation/requirement. > Kata or KubeVirt might be a solution but it would be quite a limitation for some users... -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tdecacqu at redhat.com Sun Nov 18 15:32:19 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Sun, 18 Nov 2018 15:32:19 +0000 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <7de26c65-420e-8413-c9a7-2d2570e2137d@suse.com> References: <1542533012.7xqkdv04pn.tristanC@fedora> <7de26c65-420e-8413-c9a7-2d2570e2137d@suse.com> Message-ID: <1542552830.a0otcyoq1e.tristanC@fedora> On November 18, 2018 2:31 pm, Andreas Jaeger wrote: > On 18/11/2018 10.34, Tristan Cacqueray wrote: >> Hello everyone, >> >> Please find below a list of topics I think we should address in order >> to land the OpenShift driver. > > Can we make this work on any distribution of Kubernetes? Or do you need > specific things that will work only on OpenShift? > The "Kube config integration in zuul-executor", "Inventory update by job" and "Define the security context" topics are relevant for any distribution of Kubernetes. OpenShift comes with extended security and image building processes that should be taken into account for a more complete integration. Thus the "Roles and parent jobs" topic is more challenging to support any distribution. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From hashar at free.fr Mon Nov 19 15:06:26 2018 From: hashar at free.fr (Antoine Musso) Date: Mon, 19 Nov 2018 16:06:26 +0100 Subject: Dequeuing buildset on first error In-Reply-To: <61F7E089-C645-4A7F-AE42-BEE41FFF06BC@bmw.de> References: <61F7E089-C645-4A7F-AE42-BEE41FFF06BC@bmw.de> Message-ID: On 13/11/2018 21:48, Tobias.Henkel at bmw.de wrote: > We have several projects that have buildsets with fast (few minutes, > e.g. linters) and longer running (> 40 minutes, e.g. big fat c++ > compilation) jobs. > > We often see the case that one of the linter jobs failed and already > could give feedback. However zuul waits for the completion of the fat > compile job before reporting. > > Generally this is the desired behaviour because we want to give as > much feedback as possible. > > However some projects may want to decide that quick feedback is more > important than the most complete feedback possible. > > So I would propose to add a ‘report-on-first-failure’ flag to the > project-pipeline. The default would be false in order to retain the > current behaviour. > > If the deque happens the aborted jobs will have the ABORTED state so > it can be clearly seen in the reporting that these were aborted. > > Enabling this for certain projects also could help to reduce resource > consumption in quota-constrained periods. > > What do you think about this? > Hello, We are still running Zuul 2.5 so take the ideas below with a grain of salt with regard to Zuul v3. Zuul 2.5 has an option for DependentPipeline to first run fast jobs before triggering other jobs. The fast jobs (such as linters) one has to mark them with *hold-following-changes*. 
Still in Zuul 2.5, you could chain your jobs in some tree:

    project:
       name: foo
       jobs:
         - linter:
             - unit:
                 - integration

which would run the *linter* job first and, if it succeeds, run *unit* and finally *integration*. I have used it at some point but it does not play well when having templates that separate linters and unit tests. We also had the issue of filtering out linters, which would thus never trigger and prevent the unit/integration tests from running.

For MediaWiki at least, I went with another way. Zuul triggers jobs, but the jobs use a python script that has all the logic. The script clones the repo, runs linters, clones other repos, performs the install, then runs the unit/integration/end-to-end tests. Thus the flow is offloaded from Zuul to a script which is easier for end users to reproduce and hack on.

A while back I saw a Gerrit/Zuul install that had linters vote Linted +1 on success. That would then be consumed by another pipeline to run unit tests and vote Unit +1, which in turn ran the integration tests and ultimately submitted. It might be the easiest approach. One drawback is that the different stages run serially, which delays the final result. But it offers faster feedback in case of small errors.

(again: all the above from a Zuul 2.5 perspective, I haven't come to v3 yet :( )

--
Antoine "hashar" Musso
CI infra for Wikimedia Foundation

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From corvus at inaugust.com Mon Nov 19 17:37:39 2018
From: corvus at inaugust.com (James E. Blair)
Date: Mon, 19 Nov 2018 09:37:39 -0800
Subject: Roadmap for running jobs on OpenShift
In-Reply-To: <1542533012.7xqkdv04pn.tristanC@fedora> (Tristan Cacqueray's message of "Sun, 18 Nov 2018 09:34:24 +0000")
References: <1542533012.7xqkdv04pn.tristanC@fedora>
Message-ID: <875zwt3r4s.fsf@meyer.lemoncheese.net>

Tristan Cacqueray writes:

> Hello everyone,
>
> Please find below a list of topics I think we should address in order
> to land the OpenShift driver.
>
> Kube config integration in zuul-executor
> ----------------------------------------
> As of now, the k8s and the openshift drivers need a setup function in
> the exector module in order to be usable: https://review.openstack.org/570668
>
> The Openshift drivers
> ---------------------
> https://review.openstack.org/570667 and https://review.openstack.org/590335
> are 2 variants to support different kinds of cluster accounts:
> self-provisioner and user. The code may be refactored in a single
> driver.
>
> Inventory update by job
> -----------------------
> We should consider a pre-run phase to create new resources and update the job
> inventory using zuul_return to enable the test phase to uses "hosts: pod",
> as proposed in: https://review.openstack.org/590092
>
> Roles and parent jobs
> ---------------------
> The containers use-cases need a lot of love to manage builds and
> to enable new workflows, for example, to promote an image from one
> cluster to a central registry and trigger updates in others clusters.
> Here is an initial build&deploy Openshift pod as well as
> a prepare-workspace implementation using oc rsync proposed here:
> https://review.openstack.org/570669
> Even though this doesn't have to be part of Zuul, I think we should
> build new roles in zuul-jobs as a community.

Thanks for the road map -- those all seem like good next steps. I'll leave detailed feedback to the reviews, but I very much agree with the sentiment in the last item.
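For readers who have not followed the nodepool reviews, the already-merged Kubernetes driver gives an idea of the shape the OpenShift providers in the quoted roadmap will take. The sketch below follows the Kubernetes driver's configuration as of late 2018; the provider, context and label names are placeholders, and the OpenShift drivers under review may use slightly different option names:

    providers:
      - name: my-k8s-cluster
        driver: kubernetes
        context: my-kube-config-context   # context from the kube config file
        pools:
          - name: main
            labels:
              # hand the job a whole namespace to create resources in
              - name: kubernetes-namespace
                type: namespace
              # hand the job a single pod to be used as a test node
              - name: pod-fedora
                type: pod
                image: docker.io/fedora:28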
I also believe that we should be able to get a k8s up and running to attach to OpenStack's Zuul soon -- apparently one of our clouds has a Magnum we can use. That's not necessary for any of this, but it might make some collaboration a little easier. > Define the security context > --------------------------- > It seems like opinions diverge on the security requirements for running > untrusted jobs in pods. At the summit, I was told that we should keep on > using the root user so that bindep and other service operations stay the same. > I strongly disagree with this approach because containers that run as root > have far more permissions than their workload requires and this could be > abused by untrusted playbooks. Nodepool accesses to a k8s or Openshift cluster > shall be restricted to non-root users to mitigate privilege escallation. > Even though this can be discussed later, I think it's important to set the > expectation immediately and keep users safe. I suspect this is going to take a bit of discussion to get through. To start with, I think we're proceeding from different base assumptions. You say that "containers that run as root have far more permissions than their workload requires" -- that's not certain to be true. That may be true for some workloads, but certainly not all. I don't think a job running as root in a container is substantially different than a job running in a VM as root. And being able to run jobs as root is one of Zuul's strong points. We get a lot of users because of that ability. And it's far from unusual to run containerized processes as the root user. I just spun up a pod with an ssh daemon running as root inside of GKE (perhaps the "reference" kubernetes environment?) using only the defaults, and it works fine. I agree that we should set expectations early -- and one of those is that Zuul will run whatever workloads your underlying infrastructure will support. Now, of course, some folks are likely going to want to run jobs on containers (and VMs) without root access. And like VMs, we should support those in containers too. But if those jobs require special images, then those folks are going to have to do more work to make those images. To get concrete for a minute, consider the tox-py35 job. If it runs on a VM without necessary dependencies installed, it will install them, as root. So it doesn't assume root privileges, unless it is missing dependencies. The same approach should work on containers. In either case, if you want to run on a VM or a container without root, you must supply an image with the necessary dependencies. The main thing is that support for root-privileged workloads is a first-class feature of Zuul, and that's consistent for containers and VMs. With that out of the way, I'm happy for us to work on supporting both use-cases. The easiest way to proceed will be for us to start by assuming we have root access. Then I think we can work on recommended processes for folks who want a more restricted environment. -Jim From Tobias.Henkel at bmw.de Mon Nov 19 18:32:29 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Mon, 19 Nov 2018 18:32:29 +0000 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <875zwt3r4s.fsf@meyer.lemoncheese.net> References: <1542533012.7xqkdv04pn.tristanC@fedora> <875zwt3r4s.fsf@meyer.lemoncheese.net> Message-ID: <728D96F4-3C3D-43F7-8DF6-FF58D0FBE63E@bmw.de> On 19.11.18, 18:38, "James E. Blair" wrote: Tristan Cacqueray writes: ... 
> Define the security context > --------------------------- > It seems like opinions diverge on the security requirements for running > untrusted jobs in pods. At the summit, I was told that we should keep on > using the root user so that bindep and other service operations stay the same. > I strongly disagree with this approach because containers that run as root > have far more permissions than their workload requires and this could be > abused by untrusted playbooks. Nodepool accesses to a k8s or Openshift cluster > shall be restricted to non-root users to mitigate privilege escallation. > Even though this can be discussed later, I think it's important to set the > expectation immediately and keep users safe. I suspect this is going to take a bit of discussion to get through. To start with, I think we're proceeding from different base assumptions. You say that "containers that run as root have far more permissions than their workload requires" -- that's not certain to be true. That may be true for some workloads, but certainly not all. I don't think a job running as root in a container is substantially different than a job running in a VM as root. And being able to run jobs as root is one of Zuul's strong points. We get a lot of users because of that ability. And it's far from unusual to run containerized processes as the root user. I just spun up a pod with an ssh daemon running as root inside of GKE (perhaps the "reference" kubernetes environment?) using only the defaults, and it works fine. I agree that we should set expectations early -- and one of those is that Zuul will run whatever workloads your underlying infrastructure will support. Now, of course, some folks are likely going to want to run jobs on containers (and VMs) without root access. And like VMs, we should support those in containers too. But if those jobs require special images, then those folks are going to have to do more work to make those images. To get concrete for a minute, consider the tox-py35 job. If it runs on a VM without necessary dependencies installed, it will install them, as root. So it doesn't assume root privileges, unless it is missing dependencies. The same approach should work on containers. In either case, if you want to run on a VM or a container without root, you must supply an image with the necessary dependencies. The main thing is that support for root-privileged workloads is a first-class feature of Zuul, and that's consistent for containers and VMs. With that out of the way, I'm happy for us to work on supporting both use-cases. The easiest way to proceed will be for us to start by assuming we have root access. Then I think we can work on recommended processes for folks who want a more restricted environment. I agree with this. However I have to note that the default on OpenShift is to not allow root. I don't see real problems with that at first glance as this only covers what jobs can do. In order to cope with this I can think of three things we probably should do. The first is that we should document the fact that a specific job or role in zuul-jobs needs root access. This is an issue with static nodes too as they don't necessarily have root access. Maybe we could enhance zuul-sphinx with a tag for this. The second is that we could add support in nodepool to specify the scc to use. 
This makes it easier for openshift admins to allow root for nodepool pods without having to modify the default scc and makes it much easier to justify root (!= privileges, openshift is a bit paranoid about this) because changing the default scc in a multi tenant environment is a no-go. The third is to make a big note about the openshift behavior in the docs so users know that they either have no root or need to modify the default scc or need to configure nodepool with option 2 if we implement that. - Tobias From cboylan at sapwetik.org Mon Nov 19 18:32:34 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 19 Nov 2018 10:32:34 -0800 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <875zwt3r4s.fsf@meyer.lemoncheese.net> References: <1542533012.7xqkdv04pn.tristanC@fedora> <875zwt3r4s.fsf@meyer.lemoncheese.net> Message-ID: <1542652354.3630368.1582202672.64A2F1AE@webmail.messagingengine.com> On Mon, Nov 19, 2018, at 9:37 AM, James E. Blair wrote: > Tristan Cacqueray writes: > > > Hello everyone, > > > > Please find below a list of topics I think we should address in order > > to land the OpenShift driver. > > > > Kube config integration in zuul-executor > > ---------------------------------------- > > As of now, the k8s and the openshift drivers need a setup function in > > the exector module in order to be usable: https://review.openstack.org/570668 > > > > The Openshift drivers > > --------------------- > > https://review.openstack.org/570667 and https://review.openstack.org/590335 > > are 2 variants to support different kinds of cluster accounts: > > self-provisioner and user. The code may be refactored in a single > > driver. > > > > Inventory update by job > > ----------------------- > > We should consider a pre-run phase to create new resources and update the job > > inventory using zuul_return to enable the test phase to uses "hosts: pod", > > as proposed in: https://review.openstack.org/590092 > > > > Roles and parent jobs > > --------------------- > > The containers use-cases need a lot of love to manage builds and > > to enable new workflows, for example, to promote an image from one > > cluster to a central registry and trigger updates in others clusters. > > Here is an initial build&deploy Openshift pod as well as > > a prepare-workspace implementation using oc rsync proposed here: > > https://review.openstack.org/570669 > > Even though this doesn't have to be part of Zuul, I think we should > > build new roles in zuul-jobs as a community. > > Thanks for the road map -- those all seem like good next steps. I'll > leave detailed feedback to the reviews, but I very much agree with the > sentiment in the last item. > > I also believe that we should be able to get a k8s up and running to > attach to OpenStack's Zuul soon -- apparently one of our clouds has a > Magnum we can use. That's not necessary for any of this, but it might > make some collaboration a little easier. > > > Define the security context > > --------------------------- > > It seems like opinions diverge on the security requirements for running > > untrusted jobs in pods. At the summit, I was told that we should keep on > > using the root user so that bindep and other service operations stay the same. > > I strongly disagree with this approach because containers that run as root > > have far more permissions than their workload requires and this could be > > abused by untrusted playbooks. 
Nodepool accesses to a k8s or Openshift cluster > > shall be restricted to non-root users to mitigate privilege escallation. > > Even though this can be discussed later, I think it's important to set the > > expectation immediately and keep users safe. > > I suspect this is going to take a bit of discussion to get through. > > To start with, I think we're proceeding from different base assumptions. > You say that "containers that run as root have far more permissions than > their workload requires" -- that's not certain to be true. That may be > true for some workloads, but certainly not all. > > I don't think a job running as root in a container is substantially > different than a job running in a VM as root. And being able to run > jobs as root is one of Zuul's strong points. We get a lot of users > because of that ability. > > And it's far from unusual to run containerized processes as the root > user. I just spun up a pod with an ssh daemon running as root inside of > GKE (perhaps the "reference" kubernetes environment?) using only the > defaults, and it works fine. Worth noting that on older GKE clusters the pods had root access to the entire cloud account (maybe not entire account, but to lots of fun things). Root or not in the container itself this would've been bad to expose to CI jobs. They have since clamped this down and it should be fine if you create a new cluster today. I do think this illustrates that we will likely need to express the security concerns that must be accounted for when running Zuul jobs in a way that is consumable by users such that they can accommodate Zuul in a way that makes sense for their use cases. Simply disallowing root is likely not enough and may not be appropriate in some cases. > > I agree that we should set expectations early -- and one of those is > that Zuul will run whatever workloads your underlying infrastructure > will support. > > Now, of course, some folks are likely going to want to run jobs on > containers (and VMs) without root access. And like VMs, we should > support those in containers too. But if those jobs require special > images, then those folks are going to have to do more work to make those > images. > > To get concrete for a minute, consider the tox-py35 job. If it runs on > a VM without necessary dependencies installed, it will install them, as > root. So it doesn't assume root privileges, unless it is missing > dependencies. The same approach should work on containers. In either > case, if you want to run on a VM or a container without root, you must > supply an image with the necessary dependencies. > > The main thing is that support for root-privileged workloads is a > first-class feature of Zuul, and that's consistent for containers and > VMs. With that out of the way, I'm happy for us to work on supporting > both use-cases. The easiest way to proceed will be for us to start by > assuming we have root access. Then I think we can work on recommended > processes for folks who want a more restricted environment. > ++ More and more CI tools are constraining the problem set they can accommodate. While this may help them avoid dealing with these difficult problems, one of the things that makes Zuul useful to the OpenStack project is that it attempts to address these issues instead. I see this as an opportunity for Zuul if the alternatives push towards "run $command in $prebuildimage" and force that on users. 
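To make the "supply an image with the necessary dependencies" option quoted above a bit more concrete: with the proposed drivers that is roughly a nodepool label pointing at a prebuilt image plus a job that runs on it. This is only a sketch (the label, image and job names are placeholders), and whether the standard tox jobs run unmodified on a pod is exactly the roles/parent-jobs work listed in the roadmap:

    # nodepool (fragment of a kubernetes/openshift pool): a pod label backed
    # by an image that already contains the job's dependencies
    labels:
      - name: pod-python-deps
        type: pod
        image: example.org/ci-images/python-tox:latest

    # zuul: a nodeset and job that run on that label, no root needed at runtime
    - nodeset:
        name: python-pod
        nodes:
          - name: pod
            label: pod-python-deps

    - job:
        name: tox-py35-pod
        parent: tox-py35
        nodeset: python-pod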
Clark From fungi at yuggoth.org Mon Nov 19 19:18:19 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 19 Nov 2018 19:18:19 +0000 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <728D96F4-3C3D-43F7-8DF6-FF58D0FBE63E@bmw.de> References: <1542533012.7xqkdv04pn.tristanC@fedora> <875zwt3r4s.fsf@meyer.lemoncheese.net> <728D96F4-3C3D-43F7-8DF6-FF58D0FBE63E@bmw.de> Message-ID: <20181119191819.otxsupa7okix3vue@yuggoth.org> On 2018-11-19 18:32:29 +0000 (+0000), Tobias.Henkel at bmw.de wrote: [...] > The first is that we should document the fact that a specific job > or role in zuul-jobs needs root access. This is an issue with > static nodes too as they don't necessarily have root access. Maybe > we could enhance zuul-sphinx with a tag for this. [...] I like this. If we had some standardized means of marking jobs/playbooks/roles as requiring root privs and a means of testing for those which don't but attempt to use it anyway, then we might be able to also reliably early-fail a job with a more helpful error directly from Zuul if it or one of its components needs root but is targeting a backend which doesn't permit that. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tdecacqu at redhat.com Tue Nov 20 04:00:16 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 20 Nov 2018 04:00:16 +0000 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <1542652354.3630368.1582202672.64A2F1AE@webmail.messagingengine.com> References: <1542533012.7xqkdv04pn.tristanC@fedora> <875zwt3r4s.fsf@meyer.lemoncheese.net> <1542652354.3630368.1582202672.64A2F1AE@webmail.messagingengine.com> Message-ID: <1542677744.927nsx5l47.tristanC@fedora> On November 19, 2018 6:32 pm, Clark Boylan wrote: > On Mon, Nov 19, 2018, at 9:37 AM, James E. Blair wrote: >> Tristan Cacqueray writes: >> >> > Hello everyone, >> > >> > Please find below a list of topics I think we should address in order >> > to land the OpenShift driver. >> > >> > Kube config integration in zuul-executor >> > ---------------------------------------- >> > As of now, the k8s and the openshift drivers need a setup function in >> > the exector module in order to be usable: https://review.openstack.org/570668 >> > >> > The Openshift drivers >> > --------------------- >> > https://review.openstack.org/570667 and https://review.openstack.org/590335 >> > are 2 variants to support different kinds of cluster accounts: >> > self-provisioner and user. The code may be refactored in a single >> > driver. >> > >> > Inventory update by job >> > ----------------------- >> > We should consider a pre-run phase to create new resources and update the job >> > inventory using zuul_return to enable the test phase to uses "hosts: pod", >> > as proposed in: https://review.openstack.org/590092 >> > >> > Roles and parent jobs >> > --------------------- >> > The containers use-cases need a lot of love to manage builds and >> > to enable new workflows, for example, to promote an image from one >> > cluster to a central registry and trigger updates in others clusters. >> > Here is an initial build&deploy Openshift pod as well as >> > a prepare-workspace implementation using oc rsync proposed here: >> > https://review.openstack.org/570669 >> > Even though this doesn't have to be part of Zuul, I think we should >> > build new roles in zuul-jobs as a community. 
>> >> Thanks for the road map -- those all seem like good next steps. I'll >> leave detailed feedback to the reviews, but I very much agree with the >> sentiment in the last item. >> >> I also believe that we should be able to get a k8s up and running to >> attach to OpenStack's Zuul soon -- apparently one of our clouds has a >> Magnum we can use. That's not necessary for any of this, but it might >> make some collaboration a little easier. >> That would help curate the roles and jobs upstream if they can be tested directly. Would it be possible to setup an Openshift too? >> > Define the security context >> > --------------------------- >> > It seems like opinions diverge on the security requirements for running >> > untrusted jobs in pods. At the summit, I was told that we should keep on >> > using the root user so that bindep and other service operations stay the same. >> > I strongly disagree with this approach because containers that run as root >> > have far more permissions than their workload requires and this could be >> > abused by untrusted playbooks. Nodepool accesses to a k8s or Openshift cluster >> > shall be restricted to non-root users to mitigate privilege escallation. >> > Even though this can be discussed later, I think it's important to set the >> > expectation immediately and keep users safe. >> >> I suspect this is going to take a bit of discussion to get through. >> >> To start with, I think we're proceeding from different base assumptions. >> You say that "containers that run as root have far more permissions than >> their workload requires" -- that's not certain to be true. That may be >> true for some workloads, but certainly not all. >> Root privileges are quite extensive and we should refine what it means for Zuul jobs. It seems like we mostly want filesystem capabilities, but root privilege also covers: * kernel auditing * ptrace any process * create special files (mknod) * network interface and firewall administration * raw packets * change process uid * reboot and use kexec * create chroot * load kernel module * change system clock * change process priority I don't think container workload should require any of the above privileges. >> I don't think a job running as root in a container is substantially >> different than a job running in a VM as root. And being able to run >> jobs as root is one of Zuul's strong points. We get a lot of users >> because of that ability. >> >> And it's far from unusual to run containerized processes as the root >> user. I just spun up a pod with an ssh daemon running as root inside of >> GKE (perhaps the "reference" kubernetes environment?) using only the >> defaults, and it works fine. > > Worth noting that on older GKE clusters the pods had root access to the entire cloud account (maybe not entire account, but to lots of fun things). Root or not in the container itself this would've been bad to expose to CI jobs. They have since clamped this down and it should be fine if you create a new cluster today. > > I do think this illustrates that we will likely need to express the security concerns that must be accounted for when running Zuul jobs in a way that is consumable by users such that they can accommodate Zuul in a way that makes sense for their use cases. Simply disallowing root is likely not enough and may not be appropriate in some cases. > K8S may authorize such privileges by default, but using default settings doesn't seem secure for running multi-tenant workloads. 
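For illustration, the kind of per-pod restriction I have in mind looks roughly like this (an untested sketch based on the upstream securityContext documentation; the pod and image names are just placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: zuul-job-pod              # placeholder
    spec:
      securityContext:
        runAsNonRoot: true            # refuse to start the workload as uid 0
        runAsUser: 1000
      containers:
        - name: job
          image: docker.io/fedora:28  # placeholder
          command: ["sleep", "infinity"]
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]           # none of the privileges listed above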
I am not familiar with K8S but we should at least suggest Zuul operators follow: https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/ >> >> I agree that we should set expectations early -- and one of those is >> that Zuul will run whatever workloads your underlying infrastructure >> will support. >> >> Now, of course, some folks are likely going to want to run jobs on >> containers (and VMs) without root access. And like VMs, we should >> support those in containers too. But if those jobs require special >> images, then those folks are going to have to do more work to make those >> images. >> >> To get concrete for a minute, consider the tox-py35 job. If it runs on >> a VM without necessary dependencies installed, it will install them, as >> root. So it doesn't assume root privileges, unless it is missing >> dependencies. The same approach should work on containers. In either >> case, if you want to run on a VM or a container without root, you must >> supply an image with the necessary dependencies. >> >> The main thing is that support for root-privileged workloads is a >> first-class feature of Zuul, and that's consistent for containers and >> VMs. With that out of the way, I'm happy for us to work on supporting >> both use-cases. The easiest way to proceed will be for us to start by >> assuming we have root access. Then I think we can work on recommended >> processes for folks who want a more restricted environment. >> > ++ > > More and more CI tools are constraining the problem set they can accommodate. While this may help them avoid dealing with these difficult problems, one of the things that makes Zuul useful to the OpenStack project is that it attempts to address these issues instead. I see this as an opportunity for Zuul if the alternatives push towards "run $command in $prebuildimage" and force that on users. > Can we please proceed with assuming the least privileges first? Openshift does not allow root-privileges by default, and I would like to ensure that Zuul can be used in the default configuration. The dependencies management you mentioned can be addressed using BuildConfig. Alternatively, we can update bindep to be usable without system packages, for example, it could extract headers and applications in ~/.local. Assuming root-privileges in containers does not sound like the right approach, especially when Zuul can execute malicious tasks (e.g. if untrusted-projects with "include: job" are hosted publicly). Supporting that feature warrants a big warning regarding the risk of later exploitation, and we should strongly recommend Openstack or Kata containers be used instead. Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tdecacqu at redhat.com Tue Nov 20 04:04:17 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 20 Nov 2018 04:04:17 +0000 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <20181119191819.otxsupa7okix3vue@yuggoth.org> References: <1542533012.7xqkdv04pn.tristanC@fedora> <875zwt3r4s.fsf@meyer.lemoncheese.net> <728D96F4-3C3D-43F7-8DF6-FF58D0FBE63E@bmw.de> <20181119191819.otxsupa7okix3vue@yuggoth.org> Message-ID: <1542686502.7cthchfncz.tristanC@fedora> On November 19, 2018 7:18 pm, Jeremy Stanley wrote: > On 2018-11-19 18:32:29 +0000 (+0000), Tobias.Henkel at bmw.de wrote: > [...] 
>> The first is that we should document the fact that a specific job >> or role in zuul-jobs needs root access. This is an issue with >> static nodes too as they don't necessarily have root access. Maybe >> we could enhance zuul-sphinx with a tag for this. > [...] > > I like this. If we had some standardized means of marking > jobs/playbooks/roles as requiring root privs and a means of testing > for those which don't but attempt to use it anyway, then we might be > able to also reliably early-fail a job with a more helpful error > directly from Zuul if it or one of its components needs root but is > targeting a backend which doesn't permit that. This may be difficult if the information is in the roles since Zuul doesn't know what roles a job uses. How difficult would it be for the scheduler to discover the roles associated with any given job? -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tdecacqu at redhat.com Tue Nov 20 04:11:37 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 20 Nov 2018 04:11:37 +0000 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <728D96F4-3C3D-43F7-8DF6-FF58D0FBE63E@bmw.de> References: <1542533012.7xqkdv04pn.tristanC@fedora> <875zwt3r4s.fsf@meyer.lemoncheese.net> <728D96F4-3C3D-43F7-8DF6-FF58D0FBE63E@bmw.de> Message-ID: <1542686661.dcc43xe1tq.tristanC@fedora> On November 19, 2018 6:32 pm, Tobias.Henkel at bmw.de wrote: > The second is that we could add support in nodepool to specify > the scc to use. This makes it easier for openshift admins to allow root > for nodepool pods without having to modify the default scc and > makes it much easier to justify root (!= privileges, openshift is a bit paranoid about this) > because changing the default scc in a multi tenant environment is a no-go. Users also suggested we could describe the container spec body and the role binding information in the nodepool configuration. We could extend that mechanism to generic objects to be applied to the cluster... However, changing the scc needs admin cluster access, which is currently not required. The nodepool openshift driver is currently usable with a regular developer account, and this is the most frequent use-case. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From fungi at yuggoth.org Tue Nov 20 12:44:23 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 20 Nov 2018 12:44:23 +0000 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <1542686502.7cthchfncz.tristanC@fedora> References: <1542533012.7xqkdv04pn.tristanC@fedora> <875zwt3r4s.fsf@meyer.lemoncheese.net> <728D96F4-3C3D-43F7-8DF6-FF58D0FBE63E@bmw.de> <20181119191819.otxsupa7okix3vue@yuggoth.org> <1542686502.7cthchfncz.tristanC@fedora> Message-ID: <20181120124422.oh6rubpabdvhhamw@yuggoth.org> On 2018-11-20 04:04:17 +0000 (+0000), Tristan Cacqueray wrote: > On November 19, 2018 7:18 pm, Jeremy Stanley wrote: > > On 2018-11-19 18:32:29 +0000 (+0000), Tobias.Henkel at bmw.de wrote: > > [...]
If we had some standardized means of marking > > jobs/playbooks/roles as requiring root privs and a means of testing > > for those which don't but attempt to use it anyway, then we might be > > able to also reliably early-fail a job with a more helpful error > > directly from Zuul if it or one of its components needs root but is > > targeting a backend which doesn't permit that. > > This may be difficult if the information is in the roles since Zuul > doesn't know what roles a job uses. How difficult would it be for > the scheduler to discover the roles associated with any given > job? Oh, right. If we want to avoid Zuul abusing Ansible internals to identify jobs using those roles then we would need to stick to having any needs-root-access flag as a job-level construct. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mike at mpagel.de Mon Nov 26 13:47:12 2018 From: mike at mpagel.de (Mike Pagel) Date: Mon, 26 Nov 2018 14:47:12 +0100 Subject: Multi-repo merge Message-ID: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> Zuul is great in gating projects consisting of many repositories. It even supports dependencies of changes within a single or across multiple repositories. Our project is making heavy use of this. Every now and then we are facing dependencies that unfortunately are not unidirectional. I know, we should have better thought about clean repo structures in the first place, but one not too artificial example is a situation where repositories are structured to express ownership, visibility and write access. In Github for instance, all this is configured at the repository level. If you like, see the sample below for how such a situation could arise. --- Start of sample --- Assume company A is the owner of repository A, holding the operating system and a top-level application framework (the "main" function in C). Further assume company B is the owner of repository B, implementing a module on top of the operating system, and the module is invoked by A's top-level application. At the build level the dependencies are clear and well defined, but at the repo level they are bidirectional. Some changes in A and B will depend on each other. It turns out that a clean, backward-compatible PR design, leading to unidirectional changes as required by zuul, is almost always possible, but not always easy. E.g. think of a change where A replaces a BSP in their operating system, which requires adaptation in B. It might not be possible to temporarily have two BSPs in A, so the change to A will break the build. Nor can B simply switch its usage of APIs, as that, by itself, would break the build as well. The only chance to get this merged would be to temporarily disable some checks, allow for the first merge, then re-enable them and merge the second change. Ugly. --- End of sample --- It would very much simplify our workflows if an "atomic" merge (from zuul's point of view) would be possible. I can think of some ways to sort of realize this with zuul, but none of them is really safe, even if zuul is the only party executing merges to protected branches. What would be your suggestion to accommodate such a requirement? Thanks a lot for any insight and ideas you might have. Best, Mike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brendan.jackman at bluwirelesstechnology.com Mon Nov 26 13:47:48 2018 From: brendan.jackman at bluwirelesstechnology.com (Brendan Jackman) Date: Mon, 26 Nov 2018 13:47:48 +0000 Subject: Testing multiple commits in one job with Gerrit Message-ID: <76747197-1bbe-9210-ede6-db39d2726128@bluwirelesstechnology.com> Hi all, I'm gradually setting up Zuul for my org, and basically I have just one impedance mismatch between Zuul's behaviour and my requirements: Due to pressure on our test hardware I don't think running full end-to-end tests on every individual commit will scale. What I'd ultimately like is a check/gate setup where jobs in the check pipeline run on every single commit, but jobs in the gate pipeline only run once for an entire "stack" of commits, and then that entire stack can be merged at once. IIUC this is basically the behaviour you get already with GitHub, where the zuul notion of "change" maps to a whole Pull Request rather than an individual commit. I started looking into ways something similar can be achieved with Gerrit. After some comments from #zuul and staring at the code a bit this strategy is the one I'm considering: enable one Zuul change to encapsulate multiple Gerrit changes. (Gerrit and Zuul both use the word "change", which works because Gerrit changes and Zuul changes are currently 1:1. That wouldn't be the case any more with my proposal, so I'll always say "Gerrit change" or "Zuul change"). I haven't figured out exactly how the Gerrit changes should be grouped together, and how this should be configured in the Gerrit trigger, but I think the basic idea would be to inject extra events into the Gerrit event stream (somewhere around GerritEventConnector._handleEvent). One example might be that we detect when an entire stack of Gerrit changes has received some common set of approvals, we inject an event called (say) "zuul-change-stack-approved". The user can then have a pipeline.trigger.gerrit entry with this event, and that trigger would create a Zuul change encapsulating the entire stack of Gerrit changes. These wouldn't need to actually be treated in the code like events coming from Gerrit, but this basic idea would allow expressing this behaviour with the existing structure of the Gerrit trigger configuration. I think it would then be a matter of finding the places where the Scheduler and pipeline Managers interact with the Gerrit driver using references to Zuul changes, and ensuring that those parts of the Gerrit driver can handle being passed the new class of Zuul change. I think most of these would involve checking whether the Zuul change is of the new type and then iterating over the constituent Gerrit changes to perform some action against each (e.g. GerritConnection.canMerge, GerritRefFilter.matches, and, with a little more complication, GerritReporter.report). Does that seem like a reasonable approach to the problem? Cheers, Brendan From fungi at yuggoth.org Mon Nov 26 14:54:58 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 26 Nov 2018 14:54:58 +0000 Subject: Testing multiple commits in one job with Gerrit In-Reply-To: <76747197-1bbe-9210-ede6-db39d2726128@bluwirelesstechnology.com> References: <76747197-1bbe-9210-ede6-db39d2726128@bluwirelesstechnology.com> Message-ID: <20181126145458.v5ceoy22izzxvthz@yuggoth.org> On 2018-11-26 13:47:48 +0000 (+0000), Brendan Jackman wrote: [...]
> this is basically the behaviour you get already with GitHub, where > the zuul notion of "change" maps to a whole Pull Request rather > than an individual commit. [...] By way of explanation, this design choice was basically driven by a difference in the predominant development workflows between Gerrit and GitHub: to iterate on a change for a repository under Gerrit, the author typically replaces the old commit with a new one and reviewers look at inter-patch diffs from the change history to see what's been altered; by comparison to iterate on a change for a repository under GitHub, the author typically proposes subsequent commits in a lightweight branch which are reviewed as a part of that same change (I expect this is because GitHub lacks any other sort of revision tracking for pull requests). The usual expectation from GitHub users is that the commits which make up a PR may not themselves be complete and testable changes. Gerrit, on the other hand, doesn't really have a concrete unit of work which aggregates multiple changes since the relationships between changes within a dependent change series can be altered at any time through rebasing. That is to say, Zuul's behavior in aggregating multiple commits for a single GitHub pull request is to a great extent enabled by the difference between GitHub's data model and Gerrit's, so to assume discrete Gerrit changes and individual commits in a GitHub PR are implemented in Zuul as comparable constructs is a significant leap. GitHub merges the set of commits from a pull request, Gerrit merges each change one at a time. > the basic idea would be to inject extra events into the Gerrit > event stream (somewhere around GerritEventConnector._handleEvent). > One example might be that we detect when an entire stack of Gerrit > changes has received some common set of approvals, we inject an > event called (say) "zuul-change-stack-approved". The user can then > have a pipeline.trigger.gerrit entry with this event, and that > trigger would create a Zuul change encapsulating the entire stack > of Gerrit changes. These wouldn't need to actually be treated in > the code like events coming from Gerrit, but this basic idea would > allow expressing this behaviour with the existing structure of the > Gerrit trigger configuration. [...] If this is a goal, it probably needs to start with figuring out how upstream Gerrit might at least eventually support such a grouping concept (design of cross-project dependencies for Zuul 2.0 resulted in a lot of the same sorts of design discussion, though in the end we settled on metadata in commit messages). My initial logistical questions are, how does Zuul calculate the boundaries of the set to know when it has "all" the changes expected (can that set be altered over time or does it have to be declared only once completely assembled)? If alterable and open-ended, how should Zuul behave when alteration occurs between approval and merge (abort testing for the full set, test additions as a separate set)? And what should the behavior be if testing for that group of changes fails (report failure to all of them, retest a subset)? I'm not personally convinced it makes sense as a feature, but it's at least an interesting thought exercise. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From shrewsbury.dave at gmail.com Mon Nov 26 14:57:26 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 26 Nov 2018 09:57:26 -0500 Subject: Semaphores and resources In-Reply-To: <1542193872.9spvau8ljj.tristanC@fedora> References: <52F7DB13-9CF9-453F-AD1A-F37080E8D23A@bmw.de> <1542193872.9spvau8ljj.tristanC@fedora> Message-ID: I'm a bit late to the discussion, so maybe this point is moot, but is there any measurable gains from ever doing the node allocation first? I.e., do we have numbers showing this to actually be more efficient in the "quick job" case? I'm mostly just wondering how an admin would know how/when to tune for this specific case if we left "allocate then lock semaphore" as an option. -Dave On Wed, Nov 14, 2018 at 6:14 AM Tristan Cacqueray wrote: > On November 14, 2018 9:48 am, Joshua Hesketh wrote: > > (Had originally forgot to reply-all) > > > > I'm not a user of semaphores, so perhaps my opinion is invalid, however > I'm > > not sure the use case of locking first is valid enough to justify the > extra > > complexity of making it an option. > > > > The point of a semaphore is often not to waste resources, so I fail to > > understand the use case of holding nodes ahead of time. Besides, if you > > have resources to spare (or "waste"), then you could have a bigger ready > > allocation in your nodepool meaning that new jobs will pick up a node > > reasonably quickly anyway. > > > When semaphores are used by fast jobs (e.g. publication), and if nodepool > is running at capacity, it may be better to hold the node ahead of time. > > Though it sounds like a special case which shouldn't be the default > behavior. > > -Tristan > > > > Again though, as I'm not a user of this feature I'm likely missing > > something :-). > > > > On Tue, Nov 13, 2018 at 2:37 AM wrote: > > > >> Hi Zuulers, > >> > >> > >> > >> We just discovered an (in our case) unpleasant behaviour of zuul and > >> semaphores. > >> > >> > >> > >> Scenario: > >> > >> We have a job ‘foo’ with a semaphore of 5. Now the queue grows and we > have > >> 30 items with job ‘foo’ in the queue. > >> > >> The current behaviour of zuul is that it first gets the nodes and then > >> locks the semaphore. > >> > >> This means that 5 jobs run and 25 wait for the semaphore but already got > >> and locked their nodes. > >> > >> In case the job is expensive and long running this can become quite a > big > >> resource waste problem if we’re in a quota constrained situation. > >> > >> > >> > >> After some discussion on IRC this would be a proposal to solve this > >> problem: > >> > >> We could make the order of ‘request nodes’ and ‘lock semaphore’ > >> configurable per job with a default to the safe behaviour of > >> > >> ‘lock semaphore’ then ‘request nodes’. Jobs would still have the > >> possibility to do the fast path by requesting the nodes first. > >> > >> This makes sense if you have lightweight jobs that should run in a fast > >> consecutive sequence and you know that some waste of resources is ok. > >> > >> > >> > >> What do the other semaphore users out there think about this? 
> >> > >> > >> > >> Kind regards > >> > >> Tobias > >> _______________________________________________ > >> Zuul-discuss mailing list > >> Zuul-discuss at lists.zuul-ci.org > >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > >> > > _______________________________________________ > > Zuul-discuss mailing list > > Zuul-discuss at lists.zuul-ci.org > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tobias.Henkel at bmw.de Mon Nov 26 15:31:13 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Mon, 26 Nov 2018 15:31:13 +0000 Subject: Semaphores and resources In-Reply-To: References: <52F7DB13-9CF9-453F-AD1A-F37080E8D23A@bmw.de> <1542193872.9spvau8ljj.tristanC@fedora> Message-ID: <1934BBA2-4704-452C-8BCE-616EB9325E96@bmw.de> The time penalty would be the wait time for a node per iteration. So especially with the resource pressure in openstack at the moment this can take quite some time. This is configurable per job. So keeping in mind that this will waste some resources it should only be used for quick and cheap jobs. I’ve already a change up for review that implements the described behavior: https://review.openstack.org/619004 -Tobias On 26.11.18, 15:58, "David Shrewsbury" > wrote: I'm a bit late to the discussion, so maybe this point is moot, but is there any measurable gains from ever doing the node allocation first? I.e., do we have numbers showing this to actually be more efficient in the "quick job" case? I'm mostly just wondering how an admin would know how/when to tune for this specific case if we left "allocate then lock semaphore" as an option. -Dave On Wed, Nov 14, 2018 at 6:14 AM Tristan Cacqueray > wrote: On November 14, 2018 9:48 am, Joshua Hesketh wrote: > (Had originally forgot to reply-all) > > I'm not a user of semaphores, so perhaps my opinion is invalid, however I'm > not sure the use case of locking first is valid enough to justify the extra > complexity of making it an option. > > The point of a semaphore is often not to waste resources, so I fail to > understand the use case of holding nodes ahead of time. Besides, if you > have resources to spare (or "waste"), then you could have a bigger ready > allocation in your nodepool meaning that new jobs will pick up a node > reasonably quickly anyway. > When semaphores are used by fast jobs (e.g. publication), and if nodepool is running at capacity, it may be better to hold the node ahead of time. Though it sounds like a special case which shouldn't be the default behavior. -Tristan > Again though, as I'm not a user of this feature I'm likely missing > something :-). > > On Tue, Nov 13, 2018 at 2:37 AM > wrote: > >> Hi Zuulers, >> >> >> >> We just discovered an (in our case) unpleasant behaviour of zuul and >> semaphores. >> >> >> >> Scenario: >> >> We have a job ‘foo’ with a semaphore of 5. Now the queue grows and we have >> 30 items with job ‘foo’ in the queue. >> >> The current behaviour of zuul is that it first gets the nodes and then >> locks the semaphore. >> >> This means that 5 jobs run and 25 wait for the semaphore but already got >> and locked their nodes. 
>> >> In case the job is expensive and long running this can become quite a big >> resource waste problem if we’re in a quota constrained situation. >> >> >> >> After some discussion on IRC this would be a proposal to solve this >> problem: >> >> We could make the order of ‘request nodes’ and ‘lock semaphore’ >> configurable per job with a default to the safe behaviour of >> >> ‘lock semaphore’ then ‘request nodes’. Jobs would still have the >> possibility to do the fast path by requesting the nodes first. >> >> This makes sense if you have lightweight jobs that should run in a fast >> consecutive sequence and you know that some waste of resources is ok. >> >> >> >> What do the other semaphore users out there think about this? >> >> >> >> Kind regards >> >> Tobias >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > _______________________________________________ Zuul-discuss mailing list Zuul-discuss at lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Mon Nov 26 15:35:42 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 26 Nov 2018 07:35:42 -0800 Subject: Multi-repo merge In-Reply-To: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> (Mike Pagel's message of "Mon, 26 Nov 2018 14:47:12 +0100") References: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> Message-ID: <87ftvnua0h.fsf@meyer.lemoncheese.net> "Mike Pagel" writes: > Zuul is great in gating projects consisting of many repositories. It even > supports dependencies of changes within a single or across multiple > repositories. Our project is making heavy use of this. Every now and then we > are facing dependencies that unfortunately are not unidirectional. I know, > we should have better thought about clean repo structures in the first > place, but one not too artificial example is a situation where repositories > are structured to express ownership, visibility and write access. In Github > for instance, all this is configured at the repository level. I would be happy to have such a feature in Zuul, though implementing it is a bit complex. There are two things we need -- the ability for Zuul to push directly to a source: https://storyboard.openstack.org/#!/story/1021878 and then support for bi-directional cross-repo-dependencies: https://storyboard.openstack.org/#!/story/1021879 We need support for Zuul pushing changes in order to merge changes to multiple repositories "atomically" (or as near to that as we can approximate). I'm not aware of anyone currently working on that; if you or someone else is interested, please let me know. (In case you're curious -- we didn't implement this when Zuul was developed for OpenStack because we decided that we wanted OpenStack to be continuously deployable from commit to commit, which means we couldn't ever require someone to upgrade two components at once.) 
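To sketch what the zuul-push piece would mean in practice: today a dependent pipeline asks the code review system to merge via its API (e.g. the gerrit reporter's submit option); with zuul-push the reporter itself would push the prepared merge commits, which is what would let a group of changes spanning repositories be pushed -- or rolled back -- as a unit. The push option below is purely hypothetical and does not exist today:

    - pipeline:
        name: gate
        manager: dependent
        success:
          gerrit:
            Verified: 2
            submit: true    # current behaviour: ask Gerrit to submit the change
            # hypothetical with zuul-push: Zuul itself pushes the merged result
            # push: true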
-Jim From fungi at yuggoth.org Mon Nov 26 15:41:02 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 26 Nov 2018 15:41:02 +0000 Subject: Multi-repo merge In-Reply-To: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> References: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> Message-ID: <20181126154102.cikgpoyt3l6q3clr@yuggoth.org> On 2018-11-26 14:47:12 +0100 (+0100), Mike Pagel wrote: [...] > It would very much simplify our workflows if an "atomic" merge > (from zuul's point of view) would be possible. I can think of some > ways to sort of realize this with zuul, but none of them is really > safe, even if zuul is the only party executing merges to protected > branches. [...] This could be viewed as a subset of the problem described in Brendan Jackman's "Testing multiple commits in one job with Gerrit" thread from today, extended to cross-project dependencies. The commonality, as I see it, is in running tests only once for a related set of changes (in the same project or across multiple projects) and then merging them "together." Years ago when working out the implementation for cross-project dependencies we hotly debated exactly this "atomic" merge possibility. At the time, Zuul was still very much focused on the needs of the OpenStack community and so it was relatively easy to declare that any sort of tight coupling between projects leading to changes which couldn't be tested discretely in a serialized order was simply a result of poor coding practices and against the community's design principles. This idea has come up regularly over the history of Zuul, and while these days the project is a lot more open to supporting other less-strict workflows and policies, the safety concerns to which you allude are still very much a reason to be worried. I don't see how to make such atomicity guarantees atop other backend services which lack any "atomic" merge feature themselves. What should Zuul do if it encounters an error when attempting to merge some of the required changes after testing succeeds (force in a revert of the others somehow, or leave things in an inconsistent state)? I think if we can answer that or work with the upstream maintainers of the various backends to implement some sort of transactions which can be rolled back prior to committing them, it might be safer. That almost certainly implies a performance cost for the backend itself, however, since it can't claim the necessary atomicity either without some sort of (likely intractable from their perspectives) locking solution. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Mon Nov 26 15:43:55 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 26 Nov 2018 07:43:55 -0800 Subject: Semaphores and resources In-Reply-To: (David Shrewsbury's message of "Mon, 26 Nov 2018 09:57:26 -0500") References: <52F7DB13-9CF9-453F-AD1A-F37080E8D23A@bmw.de> <1542193872.9spvau8ljj.tristanC@fedora> Message-ID: <878t1fu9ms.fsf@meyer.lemoncheese.net> David Shrewsbury writes: > I'm a bit late to the discussion, so maybe this point is moot, but is there > any measurable > gains from ever doing the node allocation first? I.e., do we have numbers > showing this to > actually be more efficient in the "quick job" case? I'm mostly just > wondering how an admin > would know how/when to tune for this specific case if we left "allocate > then lock semaphore" > as an option. 
I think it would depend on the nature of the job and the load on the system, so I'm not sure we can generalize there. I will just say that in OpenStack's case, we considered (and still may if we get around to it) implementing AFS mirror updates as a Zuul job; in that case I'd be more than happy for the system to get a head start on requesting a node (which could take hours to obtain) at the risk of waiting a few minutes for a previous job to finish. Indeed -- consider two such jobs in our system launched at the same time during a period where it takes 2 hours to allocate a node. If we obtained the semaphore first, then then those two jobs would take 4 hours to complete, but if we obtained the semaphore last, they would take 2 hours. That at least is the sort of thing a user (not necessarily admin) would want to consider. -Jim From corvus at inaugust.com Mon Nov 26 15:48:45 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 26 Nov 2018 07:48:45 -0800 Subject: Multi-repo merge In-Reply-To: <20181126154102.cikgpoyt3l6q3clr@yuggoth.org> (Jeremy Stanley's message of "Mon, 26 Nov 2018 15:41:02 +0000") References: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> <20181126154102.cikgpoyt3l6q3clr@yuggoth.org> Message-ID: <87tvk3suua.fsf@meyer.lemoncheese.net> Jeremy Stanley writes: > I don't see how to make such atomicity guarantees atop other backend > services which lack any "atomic" merge feature themselves. What > should Zuul do if it encounters an error when attempting to merge > some of the required changes after testing succeeds (force in a > revert of the others somehow, or leave things in an inconsistent > state)? I think if we can answer that or work with the upstream > maintainers of the various backends to implement some sort of > transactions which can be rolled back prior to committing them, it > might be safer. That almost certainly implies a performance cost for > the backend itself, however, since it can't claim the necessary > atomicity either without some sort of (likely intractable from their > perspectives) locking solution. I replied around the same time as you, and touched on this some; in short, having Zuul perform the merges and pushes (rather than making the API call to have the remote gerrit/github do it) gives us confidence that the result will be accepted, and, in the unlikely case that something goes wrong, I think we should unwind back to the previous state (presumably if we encounter an error on the second of two pushes, we can still push a revert to the first repo). For that reason, I consider zuul-push to be the first step in implementing this. -Jim From shrewsbury.dave at gmail.com Mon Nov 26 15:54:29 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 26 Nov 2018 10:54:29 -0500 Subject: Semaphores and resources In-Reply-To: <878t1fu9ms.fsf@meyer.lemoncheese.net> References: <52F7DB13-9CF9-453F-AD1A-F37080E8D23A@bmw.de> <1542193872.9spvau8ljj.tristanC@fedora> <878t1fu9ms.fsf@meyer.lemoncheese.net> Message-ID: On Mon, Nov 26, 2018 at 10:43 AM James E. Blair wrote: > David Shrewsbury writes: > > > I'm a bit late to the discussion, so maybe this point is moot, but is > there > > any measurable > > gains from ever doing the node allocation first? I.e., do we have numbers > > showing this to > > actually be more efficient in the "quick job" case? I'm mostly just > > wondering how an admin > > would know how/when to tune for this specific case if we left "allocate > > then lock semaphore" > > as an option. 
> > I think it would depend on the nature of the job and the load on the > system, so I'm not sure we can generalize there. I will just say that > in OpenStack's case, we considered (and still may if we get around to > it) implementing AFS mirror updates as a Zuul job; in that case I'd be > more than happy for the system to get a head start on requesting a node > (which could take hours to obtain) at the risk of waiting a few minutes > for a previous job to finish. > > Indeed -- consider two such jobs in our system launched at the same time > during a period where it takes 2 hours to allocate a node. If we > obtained the semaphore first, then then those two jobs would take 4 > hours to complete, but if we obtained the semaphore last, they would > take 2 hours. > This example helps cement the usefulness in my head. Thanks. We should consider adding it to the documentation for the new option (or something similar). > That at least is the sort of thing a user (not necessarily admin) would > want to consider. > > -Jim > -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Nov 26 16:03:20 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 26 Nov 2018 16:03:20 +0000 Subject: Multi-repo merge In-Reply-To: <87tvk3suua.fsf@meyer.lemoncheese.net> References: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> <20181126154102.cikgpoyt3l6q3clr@yuggoth.org> <87tvk3suua.fsf@meyer.lemoncheese.net> Message-ID: <20181126160320.hiu2mxwjjyftrbnk@yuggoth.org> On 2018-11-26 07:48:45 -0800 (-0800), James E. Blair wrote: > Jeremy Stanley writes: > > > I don't see how to make such atomicity guarantees atop other backend > > services which lack any "atomic" merge feature themselves. What > > should Zuul do if it encounters an error when attempting to merge > > some of the required changes after testing succeeds (force in a > > revert of the others somehow, or leave things in an inconsistent > > state)? I think if we can answer that or work with the upstream > > maintainers of the various backends to implement some sort of > > transactions which can be rolled back prior to committing them, it > > might be safer. That almost certainly implies a performance cost for > > the backend itself, however, since it can't claim the necessary > > atomicity either without some sort of (likely intractable from their > > perspectives) locking solution. > > I replied around the same time as you, and touched on this some; in > short, having Zuul perform the merges and pushes (rather than making the > API call to have the remote gerrit/github do it) gives us confidence > that the result will be accepted, and, in the unlikely case that > something goes wrong, I think we should unwind back to the previous > state (presumably if we encounter an error on the second of two pushes, > we can still push a revert to the first repo). > > For that reason, I consider zuul-push to be the first step in > implementing this. An excellent solution. We may want to make sure Zuul is able to push --force as well so that it can "roll back" incomplete sets if one push in a set fails for some reason. I have a feeling this would all be extremely failure-prone in cases where Zuul is not the only account pushing commits directly to the repositories, but it's also fair to declare that danger and let the operator shoot themselves in the foot if they disregard the warning. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Tobias.Henkel at bmw.de Mon Nov 26 16:04:06 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Mon, 26 Nov 2018 16:04:06 +0000 Subject: Multi-repo merge In-Reply-To: <87tvk3suua.fsf@meyer.lemoncheese.net> References: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> <20181126154102.cikgpoyt3l6q3clr@yuggoth.org> <87tvk3suua.fsf@meyer.lemoncheese.net> Message-ID: On 26.11.18, 16:49, "James E. Blair" wrote: Jeremy Stanley writes: > I don't see how to make such atomicity guarantees atop other backend > services which lack any "atomic" merge feature themselves. What > should Zuul do if it encounters an error when attempting to merge > some of the required changes after testing succeeds (force in a > revert of the others somehow, or leave things in an inconsistent > state)? I think if we can answer that or work with the upstream > maintainers of the various backends to implement some sort of > transactions which can be rolled back prior to committing them, it > might be safer. That almost certainly implies a performance cost for > the backend itself, however, since it can't claim the necessary > atomicity either without some sort of (likely intractable from their > perspectives) locking solution. I replied around the same time as you, and touched on this some; in short, having Zuul perform the merges and pushes (rather than making the API call to have the remote gerrit/github do it) gives us confidence that the result will be accepted, and, in the unlikely case that something goes wrong, I think we should unwind back to the previous state (presumably if we encounter an error on the second of two pushes, we can still push a revert to the first repo). For that reason, I consider zuul-push to be the first step in implementing this. I think this feature should be optional and disabled by default. Unfortunately in Github there is currently no possibility for an app to direct push onto a protected branch. So at least for github this would be blocked until this is resolved by Github. There is an open ticket at github for this [1]. It is already open since one and a half year. So we don't know when this will be available. But I think many projects (at least here) would already be happy with a best effort solution without the direct push and manual roll-back if something goes wrong. So we could start with direct push for gerrit and without direct push for github and add a big fat warning about non-automatic rollback to the docs. -Tobias From corvus at inaugust.com Mon Nov 26 16:40:56 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 26 Nov 2018 08:40:56 -0800 Subject: Multi-repo merge In-Reply-To: (Tobias Henkel's message of "Mon, 26 Nov 2018 16:04:06 +0000") References: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> <20181126154102.cikgpoyt3l6q3clr@yuggoth.org> <87tvk3suua.fsf@meyer.lemoncheese.net> Message-ID: <87r2f7rduv.fsf@meyer.lemoncheese.net> writes: > I think this feature should be optional and disabled by default. > Unfortunately in Github there is currently no possibility for an app to direct > push onto a protected branch. So at least for github this would be blocked > until this is resolved by Github. There is an open ticket at github for this [1]. > It is already open since one and a half year. So we don't know when this will > be available. If we hand over enforcement of branch protection to Zuul, is that an issue? 
I.e., turn off branch protection and have Zuul enforce the review requirements, etc., in a pipeline configuration? It needs to do that anyway since, of course, if it can push, then it can necessarily override requirements (this is true in GitHub and Gerrit). -Jim From pabelanger at redhat.com Mon Nov 26 16:51:14 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 26 Nov 2018 11:51:14 -0500 Subject: Multi-repo merge In-Reply-To: <87r2f7rduv.fsf@meyer.lemoncheese.net> References: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> <20181126154102.cikgpoyt3l6q3clr@yuggoth.org> <87tvk3suua.fsf@meyer.lemoncheese.net> <87r2f7rduv.fsf@meyer.lemoncheese.net> Message-ID: <20181126165114.GA5378@localhost.localdomain> On Mon, Nov 26, 2018 at 08:40:56AM -0800, James E. Blair wrote: > writes: > > > I think this feature should be optional and disabled by default. > > Unfortunately in Github there is currently no possibility for an app to direct > > push onto a protected branch. So at least for github this would be blocked > > until this is resolved by Github. There is an open ticket at github for this [1]. > > It is already open since one and a half year. So we don't know when this will > > be available. > > If we hand over enforcement of branch protection to Zuul, is that an > issue? I.e., turn off branch protection and have Zuul enforce the > review requirements, etc., in a pipeline configuration? > As far as I understand it, branch protection is used mainly to stop users from doing something behind Zuul's back. Disabling branch protection would open up that option to users again.
Currently, I believe users need write access to github repos (in my case) because there is no other way to also give the user access to add labels to issues (if I am incorrect, I'd love to see the solution). > It needs to do that anyway since, of course, if it can push, then it can > necessarily override requirements (this is true in GitHub and Gerrit). > - Paul Unfortunately the only way to restrict pushing to the branch is branch protection so if branch protection would be disabled zuul could push but any other user too. This holds true at least when using branch&pull workflow which is the typical workflow with Github Enterprise. - Tobias P.S. I forgot to post the link to the github ticket in my earlier mail: [1] https://platform.github.community/t/repositories-which-have-protected-branches-with-push-restrictions-have-no-ability-to-grant-push-rights-to-integrations/1376 From stepanenko at selectel.ru Mon Nov 26 20:28:45 2018 From: stepanenko at selectel.ru (=?utf-8?B?0JDQu9C10LrRgdC10Lkg0KHRgtC10L/QsNC90LXQvdC60L4=?=) Date: Mon, 26 Nov 2018 23:28:45 +0300 Subject: Zuul CI and Gitlab Message-ID: Hi. At the summit in Berlin was interested in how the Zuul CI system. GitHub support has been announced, but we're using GitLab. Will there be added support for Networks in the future ? -- Aleksei Stepanenko, Selectel Team From brendan.jackman at bluwirelesstechnology.com Tue Nov 27 06:17:09 2018 From: brendan.jackman at bluwirelesstechnology.com (Brendan Jackman) Date: Tue, 27 Nov 2018 06:17:09 +0000 Subject: Multi-repo merge In-Reply-To: References: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> <20181126154102.cikgpoyt3l6q3clr@yuggoth.org> <87tvk3suua.fsf@meyer.lemoncheese.net> Message-ID: <0b13db93-ec5c-9e13-90f7-b72486688841@bluwirelesstechnology.com> On 26/11/2018 23:04, Tobias.Henkel at bmw.de wrote: > On 26.11.18, 16:49, "James E. Blair" wrote: > > Jeremy Stanley writes: > > > I don't see how to make such atomicity guarantees atop other backend > > services which lack any "atomic" merge feature themselves. What > > should Zuul do if it encounters an error when attempting to merge > > some of the required changes after testing succeeds (force in a > > revert of the others somehow, or leave things in an inconsistent > > state)? I think if we can answer that or work with the upstream > > maintainers of the various backends to implement some sort of > > transactions which can be rolled back prior to committing them, it > > might be safer. That almost certainly implies a performance cost for > > the backend itself, however, since it can't claim the necessary > > atomicity either without some sort of (likely intractable from their > > perspectives) locking solution. > > I replied around the same time as you, and touched on this some; in > short, having Zuul perform the merges and pushes (rather than making the > API call to have the remote gerrit/github do it) gives us confidence > that the result will be accepted, and, in the unlikely case that > something goes wrong, I think we should unwind back to the previous > state (presumably if we encounter an error on the second of two pushes, > we can still push a revert to the first repo). > > For that reason, I consider zuul-push to be the first step in > implementing this. > > [...] > But I think many projects (at least here) would already be happy with a best effort > solution without the direct push and manual roll-back if something goes wrong. 
> So we could start with direct push for gerrit and without direct push for github > and add a big fat warning about non-automatic rollback to the docs. I agree with this - I actually thought about a similar issue when considering my "testing multiple commits with a single job" proposal and decided that a best-effort approach would be more than enough - any errors that somehow arose unexpectedly during submission of a series of commits would result in an untested tip of master, but as long as it's clear to the maintainer when this has happened I think that's an acceptable risk. I don't think pursuit of never-ever-broken-master Nirvana need block the development of useful stuff. From brendan.jackman at bluwirelesstechnology.com Tue Nov 27 06:28:22 2018 From: brendan.jackman at bluwirelesstechnology.com (Brendan Jackman) Date: Tue, 27 Nov 2018 06:28:22 +0000 Subject: Testing multiple commits in one job with Gerrit In-Reply-To: <20181126145458.v5ceoy22izzxvthz@yuggoth.org> References: <76747197-1bbe-9210-ede6-db39d2726128@bluwirelesstechnology.com> <20181126145458.v5ceoy22izzxvthz@yuggoth.org> Message-ID: On 26/11/2018 21:54, Jeremy Stanley wrote: > On 2018-11-26 13:47:48 +0000 (+0000), Brendan Jackman wrote: > [...] >> this is basically the behaviour you get already with GitHub, where >> the zuul notion of "change" maps to a whole Pull Request rather >> than an individual commit. > [...] > > By way of explanation, this design choice was basically driven by a > difference in the predominant development workflows between Gerrit > and GitHub: to iterate on a change for a repository under Gerrit, > the author typically replaces the old commit with a new one and > reviewers look at inter-patch diffs from the change history to see > what's been altered; by comparison to iterate on a change for a > repository under GitHub, the author typically proposes subsequent > commits in a lightweight branch which are reviewed as a part of that > same change (I expect this is because GitHub lacks any other sort of > revision tracking for pull requests). > > The usual expectation from GitHub users is that the commits which > make up a PR may not themselves be complete and testable changes. > Gerrit, on the other hand, doesn't really have a concrete unit of > work which aggregates multiple changes since the relationships > between changes within a dependent change series can be altered at > any time through rebasing. That is to say, Zuul's behavior in > aggregating multiple commits for a single GitHub pull request is to > a great extent enabled by the difference between GitHub's data model > and Gerrit's, so to assume discrete Gerrit changes and individual > commits in a GitHub PR are implemented in Zuul as comparable > constructs is a significant leap. GitHub merges the set of commits > from a pull request, Gerrit merges each change one at a time. Sure, it definitely makes sense how the differences in the workflows & models led to the differences in behaviour. In fact my org currently uses Gitlab, which in its current incarnation, is basically a Github clone with a basic revision-tracking feature. So I considered just writing a Gitlab driver for Zuul. Maybe that's a more pragmatic choice but Gerrit is (IMO...) a better code review tool than Gitlab, and my test HW availability issues would be a sad reason to be prevented from using it. 
A way to reduce pressure on test resources seems like something that would be useful to others too so that's why I wanted ot investigate this idea despite it introducing a bit of friction against Gerrit's natural model. >> the basic idea would be to inject extra events into the Gerrit >> event stream (somewhere around GerritEventConnector._handleEvent). >> One example might be that we detect when an entire stack of Gerrit >> changes has received some common set of approvals, we inject an >> event called (say) "zuul-change-stack-approved". The user can then >> have a pipeline.trigger.gerrit entry with this event, and that >> trigger would create a Zuul change encapsulating the entire stack >> of Gerrit changes. These wouldn't need to actually be treated in >> the code like events coming from Gerrit, but this basic idea would >> allow expressing this behaviour with the existing structure of the >> Gerrit trigger configuration. > [...] > > If this is a goal, it probably needs to start with figuring out how > upstream Gerrit might at least eventually support such a grouping > concept (design of cross-project dependencies for Zuul 2.0 resulted > in a lot of the same sorts of design discussion, though in the end > we settled on metadata in commit messages). My initial feeling is solving a problem that is outside of Gerrit's problem domain - I don't think it would make sense as a specific feature inside Gerrit. But I have thought about how topics relate to this (see below). > My initial logistical questions are, how does Zuul calculate the > boundaries of the set to know when it has "all" the changes expected > (can that set be altered over time or does it have to be declared > only once completely assembled)? If alterable and open-ended, how > should Zuul behave when alteration occurs between approval and merge > (abort testing for the full set, test additions as a separate set)? I haven't though them through in detail but there are 3 ways I considered this could potentially be done: 1. Time-based chunking: if there are multiple proposed commits in a dependency    chain and the first Gerrit change gets the required approval, Zuul waits N    seconds to see if the second one gets an approval. If it doesn't it tests and    potentially merges the first commit. If it does then it resets the timer and    waits to see if the next Gerrit change gets the approval. If the timer runs    out or we reach a Gerrit change with no child, the whole chain gets tested as    one Zuul change and potentially merged. This strategy feels a little wild to    me to to configure/explain but it could be quite ergonomic once set up. It    also feels a bit like a fundamental change to Zuul's model, just jimmied into    the Gerrit driver instead of where it really belongs. 2. You just say if a Gerrit change has a topic matching a regex then every other    change with that topic has to get the approvals before testing. Then they all    become a single Zuul change. Not sure how this one would work with cross-repo    dependencies or if there were no Depends-On but a topic's changes spanned    multiple repos. Or if a change has multiple topics matching the regex. Or if    it makes any sense for non-linear dependency chains. 3. 
(This is the one I mainly considered:) In the example I gave above you would    just inject the zuul-change-stack-approved event whenever you saw a chain of    proposed commits that started with a child of (a commit in) the target branch    and ended with a commit with no child, that all have a common approval.    If between approval and merge another Gerrit change was added to that chain    and then it also got the approval, another event would be generated and a new    Zuul change would enter the pipeline, representing the new entirety of the    chaing of Gerrit changes. I guess this would be similar to two Github PRs    being approved in quick succession where one contains all the commits of the    other, plus an extra commit.    If between approval and merge new patch versions were pushed then approved    then I think a new Zuul change could just enter the pipeline. If configured    to do so then the old one would be dequeued (in its entirety - it would be a    singular Zuul change from the pipeline's PoV), if not it would continue and    then just fail to merge at the end. Just like for a singular Gerrit    change. I must admit as I wrote this I started to see some complexity I hadn't' considered wrt. the Change.isUpdateOf logic. I will keep thinking about it. > And what should the behavior be if testing for that group of changes > fails (report failure to all of them, retest a subset)? My assumption with all of these is that if the tests fail you report a failure against every Gerrit change (the tip commits gets a "this test failed gate checks" and the ancestors get a "this test was tested as part of X and failed - if it is useful in its own right, you could resubmit it indepdendently") > I'm not personally convinced it makes sense as a feature, but it's > at least an interesting thought exercise. Thanks for considering it! - Brendan From brendan.jackman at bluwirelesstechnology.com Tue Nov 27 06:51:12 2018 From: brendan.jackman at bluwirelesstechnology.com (Brendan Jackman) Date: Tue, 27 Nov 2018 06:51:12 +0000 Subject: Testing multiple commits in one job with Gerrit In-Reply-To: <20181126145458.v5ceoy22izzxvthz@yuggoth.org> References: <76747197-1bbe-9210-ede6-db39d2726128@bluwirelesstechnology.com> <20181126145458.v5ceoy22izzxvthz@yuggoth.org> Message-ID: [Sorry, my mail client butchered the other reply, let me try to make it readable] On 26/11/2018 21:54, Jeremy Stanley wrote: > On 2018-11-26 13:47:48 +0000 (+0000), Brendan Jackman wrote: > [...] >> this is basically the behaviour you get already with GitHub, where >> the zuul notion of "change" maps to a whole Pull Request rather >> than an individual commit. > [...] > > By way of explanation, this design choice was basically driven by a > difference in the predominant development workflows between Gerrit > and GitHub: to iterate on a change for a repository under Gerrit, > the author typically replaces the old commit with a new one and > reviewers look at inter-patch diffs from the change history to see > what's been altered; by comparison to iterate on a change for a > repository under GitHub, the author typically proposes subsequent > commits in a lightweight branch which are reviewed as a part of that > same change (I expect this is because GitHub lacks any other sort of > revision tracking for pull requests). > > The usual expectation from GitHub users is that the commits which > make up a PR may not themselves be complete and testable changes. 
> Gerrit, on the other hand, doesn't really have a concrete unit of > work which aggregates multiple changes since the relationships > between changes within a dependent change series can be altered at > any time through rebasing. That is to say, Zuul's behavior in > aggregating multiple commits for a single GitHub pull request is to > a great extent enabled by the difference between GitHub's data model > and Gerrit's, so to assume discrete Gerrit changes and individual > commits in a GitHub PR are implemented in Zuul as comparable > constructs is a significant leap. GitHub merges the set of commits > from a pull request, Gerrit merges each change one at a time. Sure, it definitely makes sense how the differences in the workflows & models led to the differences in behaviour. In fact my org currently uses Gitlab, which in its current incarnation, is basically a Github clone with a basic revision-tracking feature. So I considered just writing a Gitlab driver for Zuul. Maybe that's a more pragmatic choice but Gerrit is (IMO...) a better code review tool than Gitlab, and my test HW availability issues would be a sad reason to be prevented from using it. A way to reduce pressure on test resources seems like something that would be useful to others too so that's why I wanted to investigate this idea despite it introducing a friction against Gerrit's natural model. >> the basic idea would be to inject extra events into the Gerrit >> event stream (somewhere around GerritEventConnector._handleEvent). >> One example might be that we detect when an entire stack of Gerrit >> changes has received some common set of approvals, we inject an >> event called (say) "zuul-change-stack-approved". The user can then >> have a pipeline.trigger.gerrit entry with this event, and that >> trigger would create a Zuul change encapsulating the entire stack >> of Gerrit changes. These wouldn't need to actually be treated in >> the code like events coming from Gerrit, but this basic idea would >> allow expressing this behaviour with the existing structure of the >> Gerrit trigger configuration. > [...] > > If this is a goal, it probably needs to start with figuring out how > upstream Gerrit might at least eventually support such a grouping > concept (design of cross-project dependencies for Zuul 2.0 resulted > in a lot of the same sorts of design discussion, though in the end > we settled on metadata in commit messages). My initial feeling is solving a problem that is outside of Gerrit's problem domain - I don't think it would make sense as a specific feature inside Gerrit. But I have thought about how topics relate to this (see below) > My initial logistical questions are, how does Zuul calculate the > boundaries of the set to know when it has "all" the changes expected > (can that set be altered over time or does it have to be declared > only once completely assembled)? If alterable and open-ended, how > should Zuul behave when alteration occurs between approval and merge > (abort testing for the full set, test additions as a separate set)? I haven't though them through in detail but there are 3 ways I considered this could potentially be done: 1. Time-based chunking: if there are multiple proposed commits in a dependency    chain and the first Gerrit change gets the required approval, Zuul waits N    seconds to see if the second one gets an approval. If it doesn't it tests and    potentially merges the first commit. If it does then it resets the timer and    waits to see if the next Gerrit change gets the approval. 
If the timer runs    out or we reach a Gerrit change with no child, the whole chain gets tested as    one Zuul change and potentially merged. This strategy feels a little wild to    me to to configure/explain but it could be quite ergonomic once set up. It    also feels a bit like a fundamental change to Zuul's model, just jimmied into    the Gerrit driver instead of where it really belongs. 2. You just say if a Gerrit change has a topic matching a regex then every other    change with that topic has to get the approvals before testing. Then they all    become a single Zuul change. Not sure how this one would work with cross-repo    dependencies or if there were no Depends-On but a topic's changes spanned    multiple repos. Or if a change has multiple topics matching the regex. Or if    it makes any sense for non-linear dependency chains. 3. (This is the one I mainly considered:) In the example I gave above you would    just inject the zuul-change-stack-approved event whenever you saw a chain of    proposed commits that started with a child of (a commit in) the target branch    and ended with a commit with no child, that all have a common approval.    If between approval and merge another Gerrit change was added to that chain    and then it also got the approval, another event would be generated and a new    Zuul change would enter the pipeline, representing the new entirety of the    chaing of Gerrit changes. I guess this would be similar to two Github PRs    being approved in quick succession where one contains all the commits of the    other, plus an extra commit.    If between approval and merge new patch versions were pushed then approved    then I think a new Zuul change could just enter the pipeline. If configured    to do so then the old one would be dequeued (in its entirety - it would be a    singular Zuul change from the pipeline's PoV), if not it would continue and    then just fail to merge at the end. Just like for a singular Gerrit    change. I must admit as I wrote this I started to see some complexity I hadn't' considered wrt. the Change.isUpdateOf logic. I will keep thinking about it. > And what should the behavior be if testing for that group of changes > fails (report failure to all of them, retest a subset)? My assumption with all of these is that if the tests fail you report a failure against every Gerrit change (the tip commits gets a "this test failed gate checks" and the ancestors get a "this test was tested as part of X and failed - if it is useful in its own right, you could resubmit it indepdendently") > I'm not personally convinced it makes sense as a feature, but it's > at least an interesting thought exercise. Thanks for considering it! - Brendan From mike at mpagel.de Tue Nov 27 07:08:18 2018 From: mike at mpagel.de (Mike Pagel) Date: Tue, 27 Nov 2018 08:08:18 +0100 Subject: AW: Multi-repo merge In-Reply-To: <0b13db93-ec5c-9e13-90f7-b72486688841@bluwirelesstechnology.com> References: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> <20181126154102.cikgpoyt3l6q3clr@yuggoth.org> <87tvk3suua.fsf@meyer.lemoncheese.net> <0b13db93-ec5c-9e13-90f7-b72486688841@bluwirelesstechnology.com> Message-ID: <001801d4861f$f9cbacd0$ed630670$@mpagel.de> Thanks everybody for your feedback so far. Regardless of whether zuul really strives for atomicity including potential roll-backs (letting zuul force-push a roll-back sounds scary to me) I would also like to start the discussion what such a feature would look like to the user. 
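[An illustrative aside added in editing, not part of Mike's message. To make the idea of an "atomic merge set" concrete before the options discussed below (in particular the cyclic Depends-On variant), here is a minimal, hypothetical Python sketch of how a scheduler could group changes whose Depends-On footers form a cycle into a single set that must be gated and merged together. The footer parsing and the change identifiers are invented for illustration; this is not Zuul's actual implementation.]

    import re

    DEPENDS_ON = re.compile(r"^Depends-On:\s*(\S+)\s*$", re.MULTILINE)

    def depends_on(message):
        """Return the change identifiers listed in a commit message's footers."""
        return DEPENDS_ON.findall(message)

    def atomic_merge_sets(changes):
        """Group changes whose Depends-On links form a cycle.

        ``changes`` maps a change identifier to its commit message.  Returns a
        list of sets of identifiers; any set with more than one member is a
        dependency cycle and would have to be merged as one atomic unit.
        Uses Tarjan's strongly-connected-components algorithm.
        """
        graph = {cid: [d for d in depends_on(msg) if d in changes]
                 for cid, msg in changes.items()}
        index, low, on_stack = {}, {}, set()
        stack, sccs = [], []
        counter = 0

        def strongconnect(v):
            nonlocal counter
            index[v] = low[v] = counter
            counter += 1
            stack.append(v)
            on_stack.add(v)
            for w in graph[v]:
                if w not in index:
                    strongconnect(w)
                    low[v] = min(low[v], low[w])
                elif w in on_stack:
                    low[v] = min(low[v], index[w])
            if low[v] == index[v]:
                scc = set()
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    scc.add(w)
                    if w == v:
                        break
                sccs.append(scc)

        for v in graph:
            if v not in index:
                strongconnect(v)
        return sccs

    if __name__ == "__main__":
        # Hypothetical example: two changes that Depends-On each other form
        # one atomic set; the third change stays on its own.
        changes = {
            "example/app~101": "Add client\n\nDepends-On: example/lib~102\n",
            "example/lib~102": "Add server\n\nDepends-On: example/app~101\n",
            "example/docs~103": "Unrelated doc fix\n",
        }
        print(atomic_merge_sets(changes))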
Unlike https://github.com/continental/megamerge, presented at a recent Github Universe, where an additional tool/UI is put on top of Github, I would like this to look absolutely trivial from a user experience point of view. Therefore, I'd like to try to stick as close as possible to the current "Depends-On" syntax. Here I could envision two approaches. (A) Mark a dependency tree We could simply keep using Depends-On as usual. A partial change (that is depended on) will not merge stand-alone, as it will not compile by itself. Well, if it does, then fine, just do it. But usually, a partial change will never enter the gate on its own. So we end up with a set of changes, forming a DAG via "Depends-On". Today, this tree cannot me merged (sequentially). Now, a user would pick the root change and put some additional annotation like "Merge-With-Dependencies". Zuul would then collect all dependencies (as it usually does) and gate it all together. Possible disadvantage: The dependencies are directed. This might be semantically confusing, if two changes are considered to be at the very same level. The user would then have to arbitrarily pick one of them to be the dependent change. Technically not a problem, but it might "feel" weird. On the other hand one can argue that it is correct that one change "Depends-On" the other, because it cannot be built/merged without. (B) Mark a merge set Alternatively, another annotation could be used to form an open set of changes that must be merged together. Similar to "Depends-On" a user to put a "Merge-Set: " into changes. Zuul would then have to find all changes marked this way and integrate them. The nice thing here is the lack of direction of dependencies, all changes are considered equal. However, IMHO such an approach seems alien to the current zuul implementation. (C) Cyclic Depends-On A user would create a dependency cycle (currently not allowed in zuul) and hereby define the atomic merge set. I fact, I have seen our users do that multiple times, just assuming it would work. Zuul would be able to determine the set of changes to be integrated together. Big advantage: no new keyword, no new workflow to learn. I'm interested to see other suggestions. Between the above two I would vote for (A) or (C), since they are Best, Mike -----Ursprüngliche Nachricht----- Von: Brendan Jackman [mailto:brendan.jackman at bluwirelesstechnology.com] Gesendet: Dienstag, 27. November 2018 07:17 An: Tobias.Henkel at bmw.de; corvus at inaugust.com; fungi at yuggoth.org Cc: zuul-discuss at lists.zuul-ci.org Betreff: Re: Multi-repo merge On 26/11/2018 23:04, Tobias.Henkel at bmw.de wrote: > On 26.11.18, 16:49, "James E. Blair" wrote: > > Jeremy Stanley writes: > > > I don't see how to make such atomicity guarantees atop other backend > > services which lack any "atomic" merge feature themselves. What > > should Zuul do if it encounters an error when attempting to merge > > some of the required changes after testing succeeds (force in a > > revert of the others somehow, or leave things in an inconsistent > > state)? I think if we can answer that or work with the upstream > > maintainers of the various backends to implement some sort of > > transactions which can be rolled back prior to committing them, it > > might be safer. That almost certainly implies a performance cost for > > the backend itself, however, since it can't claim the necessary > > atomicity either without some sort of (likely intractable from their > > perspectives) locking solution. 
> > I replied around the same time as you, and touched on this some; in > short, having Zuul perform the merges and pushes (rather than making the > API call to have the remote gerrit/github do it) gives us confidence > that the result will be accepted, and, in the unlikely case that > something goes wrong, I think we should unwind back to the previous > state (presumably if we encounter an error on the second of two pushes, > we can still push a revert to the first repo). > > For that reason, I consider zuul-push to be the first step in > implementing this. > > [...] > But I think many projects (at least here) would already be happy with > a best effort solution without the direct push and manual roll-back if something goes wrong. > So we could start with direct push for gerrit and without direct push > for github and add a big fat warning about non-automatic rollback to the docs. I agree with this - I actually thought about a similar issue when considering my "testing multiple commits with a single job" proposal and decided that a best-effort approach would be more than enough - any errors that somehow arose unexpectedly during submission of a series of commits would result in an untested tip of master, but as long as it's clear to the maintainer when this has happened I think that's an acceptable risk. I don't think pursuit of never-ever-broken-master Nirvana need block the development of useful stuff. _______________________________________________ Zuul-discuss mailing list Zuul-discuss at lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From mike at mpagel.de Tue Nov 27 08:35:57 2018 From: mike at mpagel.de (Mike Pagel) Date: Tue, 27 Nov 2018 09:35:57 +0100 Subject: AW: Multi-repo merge In-Reply-To: <001801d4861f$f9cbacd0$ed630670$@mpagel.de> References: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> <20181126154102.cikgpoyt3l6q3clr@yuggoth.org> <87tvk3suua.fsf@meyer.lemoncheese.net> <0b13db93-ec5c-9e13-90f7-b72486688841@bluwirelesstechnology.com> <001801d4861f$f9cbacd0$ed630670$@mpagel.de> Message-ID: <001a01d4862c$382ccc50$a88664f0$@mpagel.de> Sorry for some editing kludges in my last comment. I know I gave three not two options and I would vote for (A) or (C) not since they are best :-), but because they are closest to today's behavior of zuul. Mike -----Ursprüngliche Nachricht----- Von: Mike Pagel [mailto:mike at mpagel.de] Gesendet: Dienstag, 27. November 2018 08:08 An: zuul-discuss at lists.zuul-ci.org Betreff: AW: Multi-repo merge Thanks everybody for your feedback so far. Regardless of whether zuul really strives for atomicity including potential roll-backs (letting zuul force-push a roll-back sounds scary to me) I would also like to start the discussion what such a feature would look like to the user. Unlike https://github.com/continental/megamerge, presented at a recent Github Universe, where an additional tool/UI is put on top of Github, I would like this to look absolutely trivial from a user experience point of view. Therefore, I'd like to try to stick as close as possible to the current "Depends-On" syntax. Here I could envision two approaches. (A) Mark a dependency tree We could simply keep using Depends-On as usual. A partial change (that is depended on) will not merge stand-alone, as it will not compile by itself. Well, if it does, then fine, just do it. But usually, a partial change will never enter the gate on its own. So we end up with a set of changes, forming a DAG via "Depends-On". Today, this tree cannot me merged (sequentially). 
Now, a user would pick the root change and put some additional annotation like "Merge-With-Dependencies". Zuul would then collect all dependencies (as it usually does) and gate it all together. Possible disadvantage: The dependencies are directed. This might be semantically confusing, if two changes are considered to be at the very same level. The user would then have to arbitrarily pick one of them to be the dependent change. Technically not a problem, but it might "feel" weird. On the other hand one can argue that it is correct that one change "Depends-On" the other, because it cannot be built/merged without. (B) Mark a merge set Alternatively, another annotation could be used to form an open set of changes that must be merged together. Similar to "Depends-On" a user to put a "Merge-Set: " into changes. Zuul would then have to find all changes marked this way and integrate them. The nice thing here is the lack of direction of dependencies, all changes are considered equal. However, IMHO such an approach seems alien to the current zuul implementation. (C) Cyclic Depends-On A user would create a dependency cycle (currently not allowed in zuul) and hereby define the atomic merge set. I fact, I have seen our users do that multiple times, just assuming it would work. Zuul would be able to determine the set of changes to be integrated together. Big advantage: no new keyword, no new workflow to learn. I'm interested to see other suggestions. Between the above two I would vote for (A) or (C), since they are Best, Mike -----Ursprüngliche Nachricht----- Von: Brendan Jackman [mailto:brendan.jackman at bluwirelesstechnology.com] Gesendet: Dienstag, 27. November 2018 07:17 An: Tobias.Henkel at bmw.de; corvus at inaugust.com; fungi at yuggoth.org Cc: zuul-discuss at lists.zuul-ci.org Betreff: Re: Multi-repo merge On 26/11/2018 23:04, Tobias.Henkel at bmw.de wrote: > On 26.11.18, 16:49, "James E. Blair" wrote: > > Jeremy Stanley writes: > > > I don't see how to make such atomicity guarantees atop other backend > > services which lack any "atomic" merge feature themselves. What > > should Zuul do if it encounters an error when attempting to merge > > some of the required changes after testing succeeds (force in a > > revert of the others somehow, or leave things in an inconsistent > > state)? I think if we can answer that or work with the upstream > > maintainers of the various backends to implement some sort of > > transactions which can be rolled back prior to committing them, it > > might be safer. That almost certainly implies a performance cost for > > the backend itself, however, since it can't claim the necessary > > atomicity either without some sort of (likely intractable from their > > perspectives) locking solution. > > I replied around the same time as you, and touched on this some; in > short, having Zuul perform the merges and pushes (rather than making the > API call to have the remote gerrit/github do it) gives us confidence > that the result will be accepted, and, in the unlikely case that > something goes wrong, I think we should unwind back to the previous > state (presumably if we encounter an error on the second of two pushes, > we can still push a revert to the first repo). > > For that reason, I consider zuul-push to be the first step in > implementing this. > > [...] > But I think many projects (at least here) would already be happy with > a best effort solution without the direct push and manual roll-back if something goes wrong. 
> So we could start with direct push for gerrit and without direct push > for github and add a big fat warning about non-automatic rollback to the docs. I agree with this - I actually thought about a similar issue when considering my "testing multiple commits with a single job" proposal and decided that a best-effort approach would be more than enough - any errors that somehow arose unexpectedly during submission of a series of commits would result in an untested tip of master, but as long as it's clear to the maintainer when this has happened I think that's an acceptable risk. I don't think pursuit of never-ever-broken-master Nirvana need block the development of useful stuff. _______________________________________________ Zuul-discuss mailing list Zuul-discuss at lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss _______________________________________________ Zuul-discuss mailing list Zuul-discuss at lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Tue Nov 27 15:57:32 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 27 Nov 2018 07:57:32 -0800 Subject: Testing multiple commits in one job with Gerrit In-Reply-To: (Brendan Jackman's message of "Tue, 27 Nov 2018 06:51:12 +0000") References: <76747197-1bbe-9210-ede6-db39d2726128@bluwirelesstechnology.com> <20181126145458.v5ceoy22izzxvthz@yuggoth.org> Message-ID: <8736rmldhv.fsf@meyer.lemoncheese.net> Brendan Jackman writes: > Sure, it definitely makes sense how the differences in the workflows & models > led to the differences in behaviour. In fact my org currently uses Gitlab, which > in its current incarnation, is basically a Github clone with a basic > revision-tracking feature. So I considered just writing a Gitlab driver for > Zuul. Maybe that's a more pragmatic choice but Gerrit is (IMO...) a better code > review tool than Gitlab, and my test HW availability issues would be a sad > reason to be prevented from using it. A way to reduce pressure on test resources > seems like something that would be useful to others too so that's why I wanted > to investigate this idea despite it introducing a friction against Gerrit's > natural model. We'd love a gitlab driver and there are other folks who would be interested in helping if you decided to go that direction. > I haven't though them through in detail but there are 3 ways I considered this > could potentially be done: > > 1. Time-based chunking: if there are multiple proposed commits in a dependency >    chain and the first Gerrit change gets the required approval, Zuul waits N >    seconds to see if the second one gets an approval. If it doesn't it tests and >    potentially merges the first commit. If it does then it resets the timer and >    waits to see if the next Gerrit change gets the approval. If the timer runs >    out or we reach a Gerrit change with no child, the whole chain gets tested as >    one Zuul change and potentially merged. This strategy feels a little wild to >    me to to configure/explain but it could be quite ergonomic once set up. It >    also feels a bit like a fundamental change to Zuul's model, just jimmied into >    the Gerrit driver instead of where it really belongs. This is similar to something else that's come up recently in discussion, which I'm calling the bisecting pipeline manager. 
It would be a dependent pipeline manager (so it can gate) which, rather than testing each commit, would test batches of commits (largely determined by the timing of when it runs), merging them all, or, if they fail, splitting the batch in half on each failure (modulo their internal dependencies). It's not exactly what you described, but it is a similar mechanism where the goal is to use fewer test resources when gating (and therefore support longer test times). It could also actually be combined with the NNFI algorithm to operate in parallel as well (if, say, you wanted to support two or more batches running at the same time). It's still not exactly a trivial problem, however, it is generally useful and isn't Gerrit specific. But back to your situation -- I think we've beet talking mostly about a gate pipeline -- what do you want to do about check (where Zuul runs jobs on patchset upload)? Do you want to avoid having a check pipeline? Have a check pipeline, but not require a result from it in order to gate and put it at very low precedence (so that your system will get around to checking changes as capacity permits)? Or do you also want to modify check pipeline behavior so that it somehow only runs jobs on the end of a dependency chain? Or something else? -Jim From corvus at inaugust.com Tue Nov 27 16:00:07 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 27 Nov 2018 08:00:07 -0800 Subject: AW: Multi-repo merge In-Reply-To: <001801d4861f$f9cbacd0$ed630670$@mpagel.de> (Mike Pagel's message of "Tue, 27 Nov 2018 08:08:18 +0100") References: <007701d4858e$890da4d0$9b28ee70$@mpagel.de> <20181126154102.cikgpoyt3l6q3clr@yuggoth.org> <87tvk3suua.fsf@meyer.lemoncheese.net> <0b13db93-ec5c-9e13-90f7-b72486688841@bluwirelesstechnology.com> <001801d4861f$f9cbacd0$ed630670$@mpagel.de> Message-ID: <87y39ejyt4.fsf@meyer.lemoncheese.net> "Mike Pagel" writes: > (C) Cyclic Depends-On > > A user would create a dependency cycle (currently not allowed in zuul) > and hereby define the atomic merge set. I fact, I have seen our users > do that multiple times, just assuming it would work. Zuul would be > able to determine the set of changes to be integrated together. Big > advantage: no new keyword, no new workflow to learn. I think this is the way to do it. This is what we've come up with in our previous design discussions. -Jim From brendan.jackman at bluwirelesstechnology.com Wed Nov 28 05:23:56 2018 From: brendan.jackman at bluwirelesstechnology.com (Brendan Jackman) Date: Wed, 28 Nov 2018 05:23:56 +0000 Subject: Testing multiple commits in one job with Gerrit In-Reply-To: <8736rmldhv.fsf@meyer.lemoncheese.net> References: <76747197-1bbe-9210-ede6-db39d2726128@bluwirelesstechnology.com> <20181126145458.v5ceoy22izzxvthz@yuggoth.org> <8736rmldhv.fsf@meyer.lemoncheese.net> Message-ID: <29994b63-aef4-b6a2-52c4-23e5fa23000c@bluwirelesstechnology.com> On 27/11/2018 22:57, James E. Blair wrote: > Brendan Jackman writes: > >> Sure, it definitely makes sense how the differences in the workflows & models >> led to the differences in behaviour. In fact my org currently uses Gitlab, which >> in its current incarnation, is basically a Github clone with a basic >> revision-tracking feature. So I considered just writing a Gitlab driver for >> Zuul. Maybe that's a more pragmatic choice but Gerrit is (IMO...) a better code >> review tool than Gitlab, and my test HW availability issues would be a sad >> reason to be prevented from using it. 
A way to reduce pressure on test resources >> seems like something that would be useful to others too so that's why I wanted >> to investigate this idea despite it introducing a friction against Gerrit's >> natural model. > We'd love a gitlab driver and there are other folks who would be > interested in helping if you decided to go that direction. > >> I haven't though them through in detail but there are 3 ways I considered this >> could potentially be done: >> >> 1. Time-based chunking: if there are multiple proposed commits in a dependency >>    chain and the first Gerrit change gets the required approval, Zuul waits N >>    seconds to see if the second one gets an approval. If it doesn't it tests and >>    potentially merges the first commit. If it does then it resets the timer and >>    waits to see if the next Gerrit change gets the approval. If the timer runs >>    out or we reach a Gerrit change with no child, the whole chain gets tested as >>    one Zuul change and potentially merged. This strategy feels a little wild to >>    me to to configure/explain but it could be quite ergonomic once set up. It >>    also feels a bit like a fundamental change to Zuul's model, just jimmied into >>    the Gerrit driver instead of where it really belongs. > This is similar to something else that's come up recently in discussion, > which I'm calling the bisecting pipeline manager. It would be a > dependent pipeline manager (so it can gate) which, rather than testing > each commit, would test batches of commits (largely determined by the > timing of when it runs), merging them all, or, if they fail, splitting > the batch in half on each failure (modulo their internal dependencies). > It's not exactly what you described, but it is a similar mechanism where > the goal is to use fewer test resources when gating (and therefore > support longer test times). It could also actually be combined with the > NNFI algorithm to operate in parallel as well (if, say, you wanted to > support two or more batches running at the same time). > > It's still not exactly a trivial problem, however, it is generally > useful and isn't Gerrit specific. That's really interesting - what you've described is certainly a more elegant and generic version of what I was imagining, it would certainly solve my issue. Maybe that's the way to go. I'll try to dig up the discusson and read up on the idea. (The only negative thing that comes to mind is that when you (or,  when someone with programming habits like mine) post a patch  series, the first N-1 are often nothing but preparatory  refactoring to enable the final commit's diff to be focussed on  adding the feature/fixing the bug. It would be a little annoying  if the gate job found that the business end of the patch-stack  was broken in an unforeseen way, but then went ahead and merged  all the preparatory refactoring commits anyway. Maybe you need to  re-write the final commit and change the approach to all your  refactoring. But that's a problem (if you see it as a problem)  with the current dependent manager anyway, obviously not a  dealbreaker.) > > But back to your situation -- I think we've beet talking mostly about a > gate pipeline -- what do you want to do about check (where Zuul runs > jobs on patchset upload)? Do you want to avoid having a check pipeline? > Have a check pipeline, but not require a result from it in order to gate > and put it at very low precedence (so that your system will get around > to checking changes as capacity permits)? 
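[An illustrative aside added in editing, not part of the original exchange. The batch-then-bisect gating strategy discussed in this sub-thread can be sketched in a few lines of Python. Here run_jobs() and merge() stand in for whatever actually tests and submits a set of changes, each batch is assumed to be tested on top of the then-current branch state, and internal dependencies are assumed to be reflected by the list order; this is a sketch of the idea only, not Zuul code.]

    def gate_batch(changes, run_jobs, merge):
        """Gate a batch of changes, bisecting the batch on failure.

        ``run_jobs(changes)`` returns True if the combined result of applying
        ``changes`` passes testing; ``merge(changes)`` submits them.  When a
        batch fails it is split in half and each half is retried, so a single
        bad change costs on the order of log2(n) extra test runs instead of
        one run per change.  Returns the changes that could not be merged.
        """
        if not changes:
            return []
        if run_jobs(changes):
            merge(changes)
            return []
        if len(changes) == 1:
            # A failing batch of one is (one of) the culprit(s).
            return list(changes)
        mid = len(changes) // 2
        failed = gate_batch(changes[:mid], run_jobs, merge)
        failed += gate_batch(changes[mid:], run_jobs, merge)
        return failed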
Or do you also want to modify > check pipeline behavior so that it somehow only runs jobs on the end of > a dependency chain? Or something else? I haven't thought too much about how my proposal applies to the independent pipeline manager. In my scenario the check pipeline jobs don't put much pressure on test resources - they don't take very long and often don't require special HW (unit/module tests, compilation, formatting checks), it's the sort of thing I'd want to run against every commit. I guess I assumed that's how everyone uses the check/gate model. Cheers, Brendan From fungi at yuggoth.org Wed Nov 28 14:04:23 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 28 Nov 2018 14:04:23 +0000 Subject: Testing multiple commits in one job with Gerrit In-Reply-To: <29994b63-aef4-b6a2-52c4-23e5fa23000c@bluwirelesstechnology.com> References: <76747197-1bbe-9210-ede6-db39d2726128@bluwirelesstechnology.com> <20181126145458.v5ceoy22izzxvthz@yuggoth.org> <8736rmldhv.fsf@meyer.lemoncheese.net> <29994b63-aef4-b6a2-52c4-23e5fa23000c@bluwirelesstechnology.com> Message-ID: <20181128140423.dw3iy54v4bvqba3k@yuggoth.org> On 2018-11-28 05:23:56 +0000 (+0000), Brendan Jackman wrote: [...] > In my scenario the check pipeline jobs don't put much pressure on > test resources - they don't take very long and often don't require > special HW (unit/module tests, compilation, formatting checks), > it's the sort of thing I'd want to run against every commit. I > guess I assumed that's how everyone uses the check/gate model. For OpenStack projects, as a counter-point, we generally run the exact same set of jobs in both check and gate pipelines. The check pipeline is more about providing immediate feedback to change authors and reviewers, while the gate pipeline is there to make sure that the jobs which ran in check who-knows-how-long-ago are still passing with the latest state of the target branch (and with any changes which happen to get approved ahead of it but haven't yet merged). Having a change pass check jobs but then fail in the gate is frustrating and time-consuming for both change authors and reviewers, so providing them with the earliest possible picture of whether a change is likely to merge has been taken as a priority in that workflow. With limited CI resources at your disposal I can see where you might make the compromise to reserve your more resource-intensive jobs only for approved changes, but it's definitely not how everyone uses Zuul and wasn't actually the reason for the check+gate design. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tdecacqu at redhat.com Fri Nov 30 06:36:04 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 30 Nov 2018 06:36:04 +0000 Subject: Projects config introspection Message-ID: <1543558641.271piektfn.tristanC@fedora> Hi all, I'd like to start the discussion around introspecting projects configs. The goal is to be able to efficiently answer the following questions: * Which projects use a specific job, * Which jobs use a specific nodeset, * Which nodesets or jobs use a specific label, and * Which jobs, labels or nodesets are not used. Putting aside the job variants model (which isn't well exposed in the current REST APIs), the easiest solution would be to add new endpoints, such as /api/job-per-project/{job_name}, or /api/label-per-project/{label_name}. 
But this feels sub-optimal and we need a new route for each type of query. Another solution would be to add a global toDict() method for the whole Tenant configuration object, which can be exposed at /api/config (or simply dumped using a zuul-scheduler command line argument). This would enable custom search to be done client-side. Another solution would be to implement the query using GraphQL custom resolver as started in https://review.openstack.org/574625 Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jaroslaw.lukow at codilime.com Fri Nov 30 17:20:32 2018 From: jaroslaw.lukow at codilime.com (=?UTF-8?B?SmFyZWsgxYF1a293?=) Date: Fri, 30 Nov 2018 18:20:32 +0100 Subject: Projects config introspection In-Reply-To: <1543558641.271piektfn.tristanC@fedora> References: <1543558641.271piektfn.tristanC@fedora> Message-ID: Hi, for me, being able to serialize the Tenant (or Layout) config into a plain JSON object using the API will be ok. Our current use cases are: 1) Finding unused configuration items (jobs, secrets) and in turn also unused playbooks. 2) Graphing the job inheritance structure. Right now we do it by reading the configuration files on our own [1], but I think this is not the best way to go forward. We need to check out the project repos manually, and if we wanted to do anything more advanced we could end up duplicating the Zuul parsing logic - unnecessary and error-prone work. [1] https://github.com/tungsten-infra/zuul-jobs-grapher Regards, Jarek On Fri, 30 Nov 2018 at 07:36, Tristan Cacqueray wrote: > > Hi all, > > I'd like to start the discussion around introspecting projects > configs. The goal is to be able to efficiently answer the > following questions: > > * Which projects use a specific job, > * Which jobs use a specific nodeset, > * Which nodesets or jobs use a specific label, and > * Which jobs, labels or nodesets are not used. > > Putting aside the job variants model (which isn't well exposed > in the current REST APIs), the easiest solution would be to add > new endpoints, such as /api/job-per-project/{job_name}, or > /api/label-per-project/{label_name}. But this feels sub-optimal > and we need a new route for each type of query. > > Another solution would be to add a global toDict() method for > the whole Tenant configuration object, which can be exposed at > /api/config (or simply dumped using a zuul-scheduler command line > argument). This would enable custom search to be done client-side. > > Another solution would be to implement the query using GraphQL > custom resolver as started in https://review.openstack.org/574625 > > Regards, > -Tristan > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -- Jarosław Łukow DevOps Engineer jaroslaw.lukow at codilime.com CodiLime Sp. z o.o. - Ltd. company with its registered office in Poland, 02-493 Warsaw, ul. Krancowa 5. Registered by The District Court for the Capital City of Warsaw, XII Commercial Department of the National Court Register. Entered into National Court Register under No. KRS 0000388871. Tax identification number (NIP) 5272657478. Statistical number (REGON) 142974628.
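[An illustrative aside added in editing, not part of the original messages. If the whole-tenant configuration dump proposed above (for example at /api/config, or produced by a scheduler command line option) were available as JSON, the client-side searches described in this thread become short scripts. The endpoint URL and the exact shape of the document below are assumptions made for the sake of the example, not an existing Zuul API.]

    import json
    import urllib.request

    def load_config(url="https://zuul.example.com/api/config"):
        """Fetch the (hypothetical) whole-tenant configuration dump as JSON."""
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def unused_jobs(config):
        """Find jobs that no project pipeline references (use case 1 above)."""
        defined = {job["name"] for job in config.get("jobs", [])}
        used = set()
        for project in config.get("projects", []):
            for pipeline in project.get("pipelines", []):
                used.update(job["name"] for job in pipeline.get("jobs", []))
        return sorted(defined - used)

    def projects_using_job(config, job_name):
        """Answer 'which projects use a specific job?' client-side."""
        hits = []
        for project in config.get("projects", []):
            for pipeline in project.get("pipelines", []):
                if any(job["name"] == job_name for job in pipeline.get("jobs", [])):
                    hits.append(project["name"])
                    break
        return hits

    if __name__ == "__main__":
        config = load_config()
        print("unused jobs:", unused_jobs(config))
        # "tox-py36" is just an arbitrary example job name.
        print("projects using tox-py36:", projects_using_job(config, "tox-py36"))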