From tdecacqu at redhat.com Sat Dec 1 09:29:15 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Sat, 01 Dec 2018 09:29:15 +0000 Subject: Projects config introspection In-Reply-To: References: <1543558641.271piektfn.tristanC@fedora> Message-ID: <1543655456.jq2hs5yndm.tristanC@fedora> Thanks Jarek for the prompt feedback. On November 30, 2018 5:20 pm, Jarek Łukow wrote: > Hi, > > for me, being able to serialize Tenant (or Layout) config into a plain > json object using the API > will be ok. > It seems like that is the path of least resistance to get $subject going. Though a formal JSON schema of the zuul data model needs lots of rigor and tinkering to efficiently represent the relation between all the objects and their source contexts... To get started, here is a simple /config endpoint that may be enough for most use-cases: https://review.openstack.org/621344 > Our current use cases are: > 1) Finding unused configuration items (jobs, secrets) and in turn also > unused playbooks. Regarding job playbooks, here is a change to add them to the job object: https://review.openstack.org/621343 > 2) Graphing the job inheritance structure. Right now we do it by > reading the configuration > files on our own [1], but I think it is not the best way to go forward. > We need to check out the > project repos manually and if we'd want to do anything more advanced > we can end up > duplicating the Zuul parsing logic - unnecessary and error-prone work. > > [1] https://github.com/tungsten-infra/zuul-jobs-grapher > A job inheritance graph was proposed here: https://review.openstack.org/537869 but it was rejected because it doesn't handle branches and variants... -Tristan > Regards, > Jarek > On Fri, 30 Nov 2018 at 07:36, Tristan Cacqueray > wrote: >> >> Hi all, >> >> I'd like to start the discussion around introspecting projects >> configs. The goal is to be able to efficiently answer the >> following questions: >> >> * Which projects use a specific job, >> * Which jobs use a specific nodeset, >> * Which nodesets or jobs use a specific label, and >> * Which jobs, labels or nodesets are not used. >> >> Putting aside the job variants model (which isn't well exposed >> in the current REST APIs), the easiest solution would be to add >> new endpoints, such as /api/job-per-project/{job_name}, or >> /api/label-per-project/{label_name}. But this feels sub-optimal >> and we need a new route for each type of query. >> >> Another solution would be to add a global toDict() method for >> the whole Tenant configuration object, which can be exposed at >> /api/config (or simply dumped using a zuul-scheduler command line >> argument). This would enable custom search to be done client-side. >> >> Another solution would be to implement the query using GraphQL >> custom resolver as started in https://review.openstack.org/574625 >> >> Regards, >> -Tristan >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > -- > > Jarosław Łukow > > DevOps Engineer > > > jaroslaw.lukow at codilime.com > > CodiLime Sp. z o.o. - Ltd. company with its registered office in > Poland, 02-493 Warsaw, ul. Krancowa 5. > Registered by The District Court for the Capital City of Warsaw, XII > Commercial Department of the National Court Register. > Entered into National Court Register under No. KRS 0000388871. Tax > identification number (NIP) 5272657478. Statistical number (REGON) > 142974628. 
-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Sat Dec 1 17:58:39 2018 From: corvus at inaugust.com (James E. Blair) Date: Sat, 01 Dec 2018 09:58:39 -0800 Subject: Projects config introspection In-Reply-To: <1543558641.271piektfn.tristanC@fedora> (Tristan Cacqueray's message of "Fri, 30 Nov 2018 06:36:04 +0000") References: <1543558641.271piektfn.tristanC@fedora> Message-ID: <87y3995dtc.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > Hi all, > > I'd like to start the discussion around introspecting projects > configs. The goal is to be able to efficiently answer the > following questions: > > * Which projects use a specific job, > * Which jobs use a specific nodeset, > * Which nodesets or jobs use a specific label, and > * Which jobs, labels or nodesets are not used. > > Putting aside the job variants model (which isn't well exposed > in the current REST APIs), the easiest solution would be to add > new endpoints, such as /api/job-per-project/{job_name}, or > /api/label-per-project/{label_name}. But this feels sub-optimal > and we need a new route for each type of query. On the contrary, the job variants are exposed in the current API. When you request a job, you get all the variants. I think that's about all one could ask for. :) But I do agree that it's not a concern here -- the goal here is to find out what *job* a project uses. We don't have to figure out what variants it would use. Once we identify that a job is used by a project, we can inspect the project itself to learn about what variants (project-pipeline and global) might be at play. This is bikeshedding, but I don't think we need a new rest endpoint for that; I think we can hang it off of the job endpoint (e.g., /api/job/{name}/projects or something). > Another solution would be to add a global toDict() method for > the whole Tenant configuration object, which can be exposed at > /api/config (or simply dumped using a zuul-scheduler command line > argument). This would enable custom search to be done client-side. I do not think we should do this. The interconnectedness of objects is complex and still changing, and I don't think we should spend time trying to create a schema. Furthermore, the result will be *huge* and I'd rather we send fewer 20MB data structures to clients rather than more. Finally, parsing and interrogating such a huge data structure on the client side will make for a lot of complex code. I'd much rather have a robust API that can answer any individual question conveniently. > Another solution would be to implement the query using GraphQL > custom resolver as started in https://review.openstack.org/574625 This is also promising. In comments on that change you offered to demonstrate what that would look like applied to non-SQL data. I think it would help to do that so we can evaluate it. 
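(A quick illustration of the kind of client-side query being weighed in this thread: the sketch below is hypothetical and assumes the per-tenant /api/tenant/{tenant}/projects and /api/tenant/{tenant}/project/{name} endpoints with a response layout similar to what zuul-web served at the time -- both may differ in detail.)

  # Hypothetical sketch: list the projects whose configuration mentions a given job.
  import requests

  ZUUL_API = "https://zuul.openstack.org/api"  # assumed base URL
  TENANT = "openstack"
  JOB = "devstack-tox-functional"

  def project_uses_job(project_name, job_name):
      # Fetch the per-project config; the exact JSON layout is an assumption here.
      url = "%s/tenant/%s/project/%s" % (ZUUL_API, TENANT, project_name)
      data = requests.get(url).json()
      for config in data.get("configs", []):
          for pipeline in config.get("pipelines", []):
              for entry in pipeline.get("jobs", []):
                  # An entry may be a single job dict or a list of variants.
                  variants = entry if isinstance(entry, list) else [entry]
                  if any(v.get("name") == job_name for v in variants):
                      return True
      return False

  projects = requests.get("%s/tenant/%s/projects" % (ZUUL_API, TENANT)).json()
  for project in projects:
      if project_uses_job(project["name"], JOB):
          print(project["name"])

Scaling this to every job, label and nodeset is exactly the many-small-requests pattern discussed later in the thread, which is why a dedicated endpoint or a query language keeps coming up.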
One further thing however -- I'm not sure how much more complexity we should undertake in the REST API before we begin work on the Zuul v4 effort -- moving the configuration data into Zookeeper. The current method of gearman queries is not very efficient, and as we add more complexity, it's only going to get worse and make the ultimate transition more difficult. I think we can start on the effort to move things into ZK soon, and that will make building these kinds of APIs much easier. -Jim From tdecacqu at redhat.com Sun Dec 2 09:32:09 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Sun, 02 Dec 2018 09:32:09 +0000 Subject: Projects config introspection In-Reply-To: <87y3995dtc.fsf@meyer.lemoncheese.net> References: <1543558641.271piektfn.tristanC@fedora> <87y3995dtc.fsf@meyer.lemoncheese.net> Message-ID: <1543737239.2wybml6hlq.tristanC@fedora> On December 1, 2018 5:58 pm, James E. Blair wrote: > Tristan Cacqueray writes: > >> Hi all, >> >> I'd like to start the discussion around introspecting projects >> configs. The goal is to be able to efficiently answer the >> following questions: >> >> * Which projects use a specific job, >> * Which jobs use a specific nodeset, >> * Which nodesets or jobs use a specific label, and >> * Which jobs, labels or nodesets are not used. >> >> Putting aside the job variants model (which isn't well exposed >> in the current REST APIs), the easiest solution would be to add >> new endpoints, such as /api/job-per-project/{job_name}, or >> /api/label-per-project/{label_name}. But this feels sub-optimal >> and we need a new route for each type of query. > > On the contrary, the job variants are exposed in the current API. When > you request a job, you get all the variants. I think that's about all > one could ask for. :) > Looking at: https://zuul.openstack.org/job/devstack-tox-functional What is the difference between the two stable/pike variants? How can you tell which of the parent's variants is the correct parent? Similarly, in the projects API, you cannot easily differentiate a project-template's job from a local variant or from non-modified jobs. Shouldn't there be a way to reference a job's or project config's variants uniquely? > But I do agree that it's not a concern here -- the goal here is to find > out what *job* a project uses. We don't have to figure out what > variants it would use. Once we identify that a job is used by a > project, we can inspect the project itself to learn about what variants > (project-pipeline and global) might be at play. > Regarding labels, you might want to know if a project uses a specific variant for some popular jobs. > This is bikeshedding, but I don't think we need a new rest endpoint for > that; I think we can hang it off of the job endpoint (e.g., > /api/job/{name}/projects or something). > Well, the routes can be attached to the existing endpoints, but it's still a very different type of query... >> Another solution would be to add a global toDict() method for >> the whole Tenant configuration object, which can be exposed at >> /api/config (or simply dumped using a zuul-scheduler command line >> argument). This would enable custom search to be done client-side. > > I do not think we should do this. The interconnectedness of objects is > complex and still changing, and I don't think we should spend time > trying to create a schema. Furthermore, the result will be *huge* and > I'd rather we send fewer 20MB data structures to clients rather than more. 
Finally, parsing and interrogating such a huge data structure on > the client side will make for a lot of complex code. I'd much rather > have a robust API that can answer any individual question conveniently. > Note that it's not something you would query frequently, but rather a periodic dataset for cleaning or refactoring purposes (e.g. when changing a nodeset or deleting branches). I've built another utility to query every endpoint to get the job<->project<->nodeset relationships. Having a raw dump of the config would be much more convenient to use... For what it's worth, here is the utility: https://softwarefactory-project.io/r/#/c/13135/4/tools/create_zuul_status_data.py >> Another solution would be to implement the query using GraphQL >> custom resolver as started in https://review.openstack.org/574625 > > This is also promising. In comments on that change you offered to > demonstrate what that would look like applied to non-SQL data. I think > it would help to do that so we can evaluate it. > The scheduler model is partially exposed over GraphQL since PS3; there is even a test to demonstrate the query language: https://review.openstack.org/#/c/574625/3/tests/unit/test_web.py Though I haven't looked at the UI part yet. > One further thing however -- I'm not sure how much more complexity we > should undertake in the REST API before we begin work on the Zuul v4 > effort -- moving the configuration data into Zookeeper. The current > method of gearman queries is not very efficient, and as we add more > complexity, it's only going to get worse and make the ultimate > transition more difficult. I think we can start on the effort to move > things into ZK soon, and that will make building these kinds of APIs > much easier. > Hum, I didn't realize the configuration data was being moved to zookeeper... What is the roadmap for Zuul v4? Before growing the zookeeper dependency, could we please consider etcd again? I recently found a year old issue about unauthorized connections being accepted, which is not very encouraging... https://issues.apache.org/jira/browse/ZOOKEEPER-2937 Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From Tobias.Henkel at bmw.de Mon Dec 3 06:49:23 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Mon, 3 Dec 2018 06:49:23 +0000 Subject: Projects config introspection In-Reply-To: <1543737239.2wybml6hlq.tristanC@fedora> References: <1543558641.271piektfn.tristanC@fedora> <87y3995dtc.fsf@meyer.lemoncheese.net> <1543737239.2wybml6hlq.tristanC@fedora> Message-ID: On 02.12.18, 10:32, "Tristan Cacqueray" wrote: On December 1, 2018 5:58 pm, James E. Blair wrote: [...] > One further thing however -- I'm not sure how much more complexity we > should undertake in the REST API before we begin work on the Zuul v4 > effort -- moving the configuration data into Zookeeper. The current > method of gearman queries is not very efficient, and as we add more > complexity, it's only going to get worse and make the ultimate > transition more difficult. I think we can start on the effort to move > things into ZK soon, and that will make building these kinds of APIs > much easier. > Hum, I didn't realize the configuration data was being moved to > zookeeper... What is the roadmap for Zuul v4? > Before growing the zookeeper dependency, could we please consider > etcd again? 
I recently found a year old issue about unauthorized connections being accepted, which is not very encouraging... https://issues.apache.org/jira/browse/ZOOKEEPER-2937 Zookeeper is already deeply baked into the whole system. I just looked into the differences from etcd, and I think the most important difference for us is the way locks work. In zookeeper the lock is an ephemeral node that exists as long as the session exists. This is checked with a heartbeat per session. In contrast etcd has no session concept. Locks are modeled via leases, where each lease needs to be validated every few seconds with a heartbeat. As we make massive use of locks we would need to handle potentially thousands of locks at the same time, which would translate into thousands of heartbeats in etcd every few seconds, compared to just one with zookeeper. Further, with kazoo we have a zk library with many ready-to-use recipes, while with etcd I only found the lock concept in the python-etcd3 library. So we would need to create our own recipes for caching, leader election,... So I'm not really sure if it's worth switching to etcd in the state we are in now. Exchanging this would be a large effort. However, the zk stuff is pretty well abstracted in zuul and nodepool, so if we consider a switch in the future I don't think the effort will be much different compared to switching now. Regarding Zuul v4, I think of that as covering all topics related to making the scheduler scale out ready. That also involves replacing gearman with zookeeper in most places. And by coincidence I've just pushed up my first thoughts about that as a spec: https://review.openstack.org/621479 -Tobias From Tobias.Henkel at bmw.de Mon Dec 3 07:02:06 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Mon, 3 Dec 2018 07:02:06 +0000 Subject: Road to fully scale out Zuul Message-ID: Hi Zuulers, currently the zuul-scheduler is the only component of zuul that is not scale out ready and thus a single point of failure. We always had the goal to make the scheduler scale out ready at some point in time. In order to make this more concrete so we can start to work on this I tried to transform my thoughts on this into a spec: https://review.openstack.org/621479 I'd be interested in what you think about this. Cheers, Tobias From tdecacqu at redhat.com Mon Dec 3 08:00:36 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Mon, 03 Dec 2018 08:00:36 +0000 Subject: Projects config introspection In-Reply-To: References: <1543558641.271piektfn.tristanC@fedora> <87y3995dtc.fsf@meyer.lemoncheese.net> <1543737239.2wybml6hlq.tristanC@fedora> Message-ID: <1543822882.o59b35efx0.tristanC@fedora> Thank you Tobias for looking into that again. Please find some comments inlined below. On December 3, 2018 6:49 am, Tobias.Henkel at bmw.de wrote: > On 02.12.18, 10:32, "Tristan Cacqueray" wrote: > > On December 1, 2018 5:58 pm, James E. Blair wrote: > [...] > > > One further thing however -- I'm not sure how much more complexity we > > should undertake in the REST API before we begin work on the Zuul v4 > > effort -- moving the configuration data into Zookeeper. The current > > method of gearman queries is not very efficient, and as we add more > > complexity, it's only going to get worse and make the ultimate > > transition more difficult. I think we can start on the effort to move > > things into ZK soon, and that will make building these kinds of APIs > > much easier. 
> > > Hum, I didn't realized the configuration data was being moved to > zookeeper... What is the roadmap for Zuul v4? > Before growing the zookeeper dependency, could we please consider > etcd again? I recently found a year old issue about unauthorized > connection being accepted, which is not very encouraging... > https://issues.apache.org/jira/browse/ZOOKEEPER-2937 > > Zookeeper is already deeply baked into the whole system. I just looked > into the differences to etcd and I think the for us most important difference > is the way locks work. In zookeeper the lock is an ephemeral node that > exists as long as the session exists. This is checked with a heartbeat per session. > In contrast etcd has no session concept. Locks are modeled via leases where > each lease which needs to be validated every few seconds with a heartbeat. > As we make a massive use of locks we would need to handle potentially thousands > of locks at the same time which would translate into thousands of heartbeats > in etcd every few seconds compared to just one with zookeeper. > It seems like you can attach many locks to a single lease. All the keys attached to a lease are deleted if the lease expires, which seems like a similar implementation of zookeeper's lock. > Further we have with kazoo a zk library with many ready to use recipes while with > etcd I only found the lock concept in the python-etcd3 library. So we would need > to create our own recipes for caching, leader election,... > So I'm not really sure if it's worth switching to etcd in the state we are now. > Exchanging this would be a large effort. However the zk stuff is pretty good > abstracted in zuul and nodepool so I think if we consider a switch in the future > I don't think the effort will be such a big difference compared to switching now. > If we are going to use kazoo recipes, isn't this going to make a switch later more difficult? > Regarding Zuul v4, I think of that for all topics that are related to make the > scheduler scale out ready. That also involves to replace gearman with zookeeper > in most places. And by coincidence I've just pushed up my first thoughts about > that as a spec: https://review.openstack.org/621479 > That is looking great :) -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Mon Dec 3 22:06:20 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 03 Dec 2018 14:06:20 -0800 Subject: Zuul project update 2018-12-03 Message-ID: <87h8fuxo2r.fsf@meyer.lemoncheese.net> Hi, It's been a while since I've sent one of these -- mostly due to all the conferences and holidays which seemed to all happen at once. There's a lot going on: I'm working on support for dynamically adjusting node request priorities. Because of the current heavy load on OpenStack's Zuul system, this should help us get results faster on changes to Zuul. I also have changes in progress to add support for storing information about artifacts (but not the artifacts themselves) in the SQL database. That will make it easier for jobs to store artifacts in arbitrary locations and have later jobs use those artifacts. In particular, this is in service of the idea of a "promotion" pipeline. Finally, I'll be proposing a formal governance structure for the project this week -- something I've been drafting for a while. Paul has been adding support for executor zones -- a long-standing feature request. 
This allows admins to locate executors near to the node resources they use. Shrews added support for generic node attributes via config file to nodepool (mainly supporting the executor zone feature). Monty is continuing work on refactoring of nodepool TaskManager across openstacksdk and keystoneauth libraries, and is also working with tristanC towards supporting both statsd and prometheus for metrics collection. There is a new project "promstats" as part of this effort. Tobias has been working on zk caching in nodepool, and per project and tenant usage statistics, and a spec for a scale-out scheduler. Tristan has proposed a roadmap for Openshift support, started discussion about config introspection and worked on a web ui to handle generic API errors. Clint is working on producing reliable unit test(s) for AWS/EC2 driver. Thanks everyone! -Jim From tdecacqu at redhat.com Wed Dec 5 10:35:48 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 05 Dec 2018 10:35:48 +0000 Subject: Software Factory 3.2 released Message-ID: <1544005978.lyzfl1ojv9.tristanC@fedora> We are pleased to announce the release of Software Factory version 3.2: https://www.softwarefactory-project.io/releases/3.2 This new release consolidates the 3.1 version with updated packages, minor fixes and some new features such as an improved update mechanism. Software Factory version 3.2 integrates Zuul and Nodepool version 3.3.1. Here is the list of features we picked in the Software Factory version of Zuul and Nodepool that are still under review upstream. We are confident these features will eventually land upstream, but they might evolve by then. They should therefore be considered a "tech preview", although we will do our best to integrate these in the least disruptive way possible: * Improved dashboard: https://review.openstack.org/#/q/topic:react-zuul * Openshift driver: https://review.openstack.org/#/q/topic:container-build-resources * RunC driver: https://review.openstack.org/535556 The Nodepool package also contains some backports to mitigate stability issues found in the (upstream) version 3.3.1: * Set pool for error'ed instances: https://review.openstack.org/621681 * Set type for error'ed instances: https://review.openstack.org/622101 * Log exceptions deleting ZK nodes: https://review.openstack.org/621301 * Fix race when deleting Node znodes: https://review.openstack.org/622403 * Count leaked nodes in unmanaged quota: https://review.openstack.org/621040 * Cleanup routine to delete empty nodes: https://review.openstack.org/622616 Best regards, The Software Factory Team -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From pabelanger at redhat.com Wed Dec 5 12:43:44 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 5 Dec 2018 07:43:44 -0500 Subject: Software Factory 3.2 released In-Reply-To: <1544005978.lyzfl1ojv9.tristanC@fedora> References: <1544005978.lyzfl1ojv9.tristanC@fedora> Message-ID: <20181205124344.GA15457@localhost.localdomain> On Wed, Dec 05, 2018 at 10:35:48AM +0000, Tristan Cacqueray wrote: > We are pleased to announce the release of Software Factory version 3.2: > https://www.softwarefactory-project.io/releases/3.2 > > This new release consolidates the 3.1 version with updated packages, minor > fixes and some new features such as an improved update mechanism. > > Software Factory version 3.2 integrates Zuul and Nodepool version 3.3.1. 
> Here is the list of features we picked in the Software Factory version of > Zuul and Nodepool that are still under review upstream. > We are confident these features will eventually land upstream, but they might > evolve by then. They should therefore be considered a "tech preview", > although we will do our best to integrate these in the least disruptive way > possible: > * Improved dashboard: https://review.openstack.org/#/q/topic:react-zuul > * Openshift driver: > https://review.openstack.org/#/q/topic:container-build-resources > * RunC driver: https://review.openstack.org/535556 > > The Nodepool package also contains some backports to mitigate stability issues > found in the (upstream) version 3.3.1: > * Set pool for error'ed instances: https://review.openstack.org/621681 > * Set type for error'ed instances: https://review.openstack.org/622101 > * Log exceptions deleting ZK nodes: https://review.openstack.org/621301 > * Fix race when deleting Node znodes: https://review.openstack.org/622403 > * Count leaked nodes in unmanaged quota: https://review.openstack.org/621040 This ^ patch has a bug in it, where quota will be not calculated properly. This was fixed by tobiash in https://review.openstack.org/621286/ > * Cleanup routine to delete empty nodes: https://review.openstack.org/622616 > > Best regards, > The Software Factory Team - Paul From claire at openstack.org Wed Dec 5 22:34:12 2018 From: claire at openstack.org (Claire Massey) Date: Wed, 5 Dec 2018 16:34:12 -0600 Subject: [Newsletter] What's Happening in the Open Infrastructure Community Message-ID: <24CC8FB0-A1BE-487D-9843-C983503C2D60@openstack.org> Hi everyone, This week, the OSF team distributed the first Open Infrastructure Community Newsletter. Our goal with the bi-weekly newsletter is to provide a digest of the latest developments and activities across open infrastructure projects, events and users. This week, the newsletter highlighted the Berlin Summit as well as brief updates from Zuul and other OSF projects - OpenStack, Airship, Kata Containers and StarlingX. You can checkout the full newsletter on Superuser [1], and if you are interested in receiving the upcoming newsletters, you can subscribe here [2]. If you would like to contribute to a future newsletter or have feedback, please reach out to community at openstack.org . Thanks, Claire [1] http://superuser.openstack.org/articles/inside-open-infrastructure-newsletter1 [2] https://www.openstack.org/community/email-signup -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Mon Dec 10 08:31:02 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Mon, 10 Dec 2018 08:31:02 +0000 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <1542533012.7xqkdv04pn.tristanC@fedora> References: <1542533012.7xqkdv04pn.tristanC@fedora> Message-ID: <1544413789.lziy2a2rs2.tristanC@fedora> On November 18, 2018 9:34 am, Tristan Cacqueray wrote: > Kube config integration in zuul-executor > ---------------------------------------- > As of now, the k8s and the openshift drivers need a setup function in > the exector module in order to be usable: https://review.openstack.org/570668 > [...] > Can we get an estimated release date for OpenShift users? > > Hi, It has been 20 days since the last reply and only one review has been made. Can we please make a tentative schedule for Openshift support? Drone.io[0] recently announced official support for k8s and there is argoproj[1] which is also building a ci/cd setup for k8s. 
I think we are only missing the zuul-executor patch to be able to implement similar jobs in Zuul. Regards, -Tristan [0]: https://blog.drone.io/drone-goes-kubernetes-native/ [1]: https://argoproj.github.io/ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From brendan.jackman at bluwirelesstechnology.com Mon Dec 10 09:28:15 2018 From: brendan.jackman at bluwirelesstechnology.com (Brendan Jackman) Date: Mon, 10 Dec 2018 09:28:15 +0000 Subject: Adding an unconditional cleanup job phase Message-ID: Hi all, Looking, for the moment, for lower-hanging fruit in my quest to reduce test resource pressure with Zuul. One unfortunate hack in my use-case is that we currently lock and release test resources which are shared with another system. This is handled by an ansible role that sends API calls to that other system. This means summarily killing a running job can prevent a resource from getting unlocked. To get around this I've had to disable pipeline.dequeue-on-new-patchset. I'd like to add a stage where playbooks can be added that still get run even if the job gets stopped. My guess is that this is also useful to other people who are using the static nodepool driver and would like to unconditionally clean up files left behind by jobs - does that sound correct? It generally seems that there would be value in having an unconditional cleanup stage, anyway. If you agree so far, can we talk semantics & implementation? I think it would be sensible that these playbooks are _never_ killed, but that obviously poses a risk that users can get stuck with an unkillable job running. I guess a simple solution is to have a distinct timeout from the job timeout, which covers just the running of these playbooks. We recommend setting this pretty short (default 30 secs, say), and strictly minimising the logic in these playbooks. If I'm still making sense then from what I've read so far, it seems all the complexity here can just go in AnsibleJob - the gearman communication would already support this. The client would just send the executor:stop TextJob as normal, and then the server can return its response asynchronously when the cleanup playbooks have finished? How does that sound? Cheers, Brendan From mordred at inaugust.com Mon Dec 10 15:47:51 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 10 Dec 2018 09:47:51 -0600 Subject: Projects config introspection In-Reply-To: <1543737239.2wybml6hlq.tristanC@fedora> Message-ID: An HTML attachment was scrubbed... URL: From corvus at inaugust.com Mon Dec 10 15:49:41 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 10 Dec 2018 07:49:41 -0800 Subject: Adding an unconditional cleanup job phase In-Reply-To: (Brendan Jackman's message of "Mon, 10 Dec 2018 09:28:15 +0000") References: Message-ID: <8736r5jsa2.fsf@meyer.lemoncheese.net> Brendan Jackman writes: > One unfortunate hack in my use-case is that we currently lock and > release test resources which are shared with another system. This > is handled by an ansible role that sends API calls to that other > system. This means summarily killing a running job can prevent a > resource from getting unlocked. To get around this I've had to > disable pipeline.dequeue-on-new-patchset. That is unfortunate -- that's probably causing a lot of extra resources to be consumed. > I'd like to add a stage where playbooks can be added that still > get run even if the job gets stopped. 
My guess is that this is > also useful to other people who are using the static nodepool > driver and would like to unconditionally clean up files left > behind by jobs - does that sound correct? It generally seems that > there would be value in having and unconditional cleanup stage, > anyway. We've also discussed having a cleanup *job* which runs after all other jobs are complete, though I think that adding support for pausing jobs reduced the need for that somewhat (since a job can run, pause, and continue after all other jobs regardless of whether they succeed). I don't think that directly addresses your use case because it can still be aborted. One downside of this approach is that Zuul can provide no guarantees that the playbook will run, only a best effort. If the Zuul executor is killed, there's nothing we can do about it. Because of that, the most reliable way to deal with resource locks is a lock that doesn't require explicit unlocking. The way that Zuul locks nodes in Nodepool accounts for this (if Zuul is killed, the locks are automatically released). Is it possible for you to encode the resources you need as static nodes in Nodepool and add them to your nodeset requests? Then you would get the benefit of Nodepool's locking. If not, we have discussed adding support for more resource types to Nodepool; perhaps that would help. > If you agree so far, can we talk semantics & implementation? > > I think it would be sensible that these playbooks are _never_ > killed, but that obviously poses a risk that users can get stuck > with an unkillable job running. I guess a simple solution is to > have a distinct timeout from the job timeout, which covers just > the running of these playbooks. We recommend setting this pretty > short (default 30 secs, say), and strictly minimising the logic > in these playbooks. > > If I'm still making sense then from what I've read so far, it > seems all the complexity here can just go in AnsibleJob - the > gearman communication would already support this. The client > would just send the executor:stop TextJob as normal, and then the > server can return its response asynchronously when the cleanup > playbooks have finished? > > How does that sound? If we still think a cleanup phase is the best approach despite the caveats (which we should document), the implementation you describe sounds good. A busy system would probably need more than 30 seconds (I'd still probably give it a few minutes). -Jim From corvus at inaugust.com Mon Dec 10 16:20:19 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 10 Dec 2018 08:20:19 -0800 Subject: Roadmap for running jobs on OpenShift In-Reply-To: <1544413789.lziy2a2rs2.tristanC@fedora> (Tristan Cacqueray's message of "Mon, 10 Dec 2018 08:31:02 +0000") References: <1542533012.7xqkdv04pn.tristanC@fedora> <1544413789.lziy2a2rs2.tristanC@fedora> Message-ID: <87o99ticak.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > On November 18, 2018 9:34 am, Tristan Cacqueray wrote: >> Kube config integration in zuul-executor >> ---------------------------------------- >> As of now, the k8s and the openshift drivers need a setup function in >> the exector module in order to be usable: https://review.openstack.org/570668 >> > [...] >> Can we get an estimated release date for OpenShift users? >> >> > > Hi, > > It has been 20 days since the last reply and only one review has been > made. Can we please make a tentative schedule for Openshift support? 
> > Drone.io[0] recently announced official support for k8s and there is > argoproj[1] which is also building a ci/cd setup for k8s. > I think we are only missing the zuul-executor patch to be able to > implement similar jobs in Zuul. I believe the next step on your roadmap is https://review.openstack.org/570668 -- I've reviewed that with only minor suggestions; I think we're pretty close. Thanks, Jim From mordred at inaugust.com Mon Dec 10 16:54:50 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 10 Dec 2018 10:54:50 -0600 Subject: Projects config introspection In-Reply-To: <1543822882.o59b35efx0.tristanC@fedora> Message-ID: <8f6191c0-2507-4dcd-a1a0-00a5d1a9ecf0@email.android.com> An HTML attachment was scrubbed... URL: From brendan.jackman at bluwirelesstechnology.com Tue Dec 11 04:49:32 2018 From: brendan.jackman at bluwirelesstechnology.com (Brendan Jackman) Date: Tue, 11 Dec 2018 04:49:32 +0000 Subject: Adding an unconditional cleanup job phase In-Reply-To: <8736r5jsa2.fsf@meyer.lemoncheese.net> References: <8736r5jsa2.fsf@meyer.lemoncheese.net> Message-ID: <7e997763-c40d-664c-938d-640407afa07c@bluwirelesstechnology.com> On 10/12/2018 22:49, James E. Blair wrote: > Brendan Jackman writes: > >> One unfortunate hack in my use-case is that we currently lock and >> release test resources which are shared with another system. This >> is handled by an ansible role that sends API calls to that other >> system. This means summarily killing a running job can prevent a >> resource from getting unlocked. To get around this I've had to >> disable pipeline.dequeue-on-new-patchset. > That is unfortunate -- that's probably causing a lot of extra resources > to be consumed. > >> I'd like to add stage where playbooks can be added that still >> gets run even if the job gets stopped. My guess is that this is >> also useful to other people who are using the static nodepool >> driver and would like to unconditionally clean up files left >> behind by jobs - does that sound correct? It generally seems that >> there would be value in having and unconditional cleanup stage, >> anyway. > We've also discussed having a cleanup *job* which runs after all other > jobs are complete, though I think that adding support for pausing jobs > reduced the need for that somewhat (since a job can run, pause, and > continue after all other jobs regardless of whether they succeed). I > don't think that directly addresses your use case because it can still > be aborted. > > One downside of this approach is that Zuul can provide no guarantees > that the playbook will run, only a best effort. If the Zuul executor is > killed, there's nothing we can do about it. Yeah, to be honest I'd resigned myself to the fact that if Zuul components get restarted I'm gona have to manually check for unreleased locks, and I put them on a 12 hour timeout for that reason too. (Luckily there's decent visibility on the locks). > Because of that, the most reliable way to deal with resource locks is a > lock that doesn't require explicit unlocking. The way that Zuul locks > nodes in Nodepool accounts for this (if Zuul is killed, the locks are > automatically released). Is it possible for you to encode the resources > you need as static nodes in Nodepool and add them to your nodeset > requests? Then you would get the benefit of Nodepool's locking. If > not, we have discussed adding support for more resource types to > Nodepool; perhaps that would help. Yeah it may be worth investigating a smarter locking scheme. 
For context the resources are pieces of specialised HW, and they are shared with a Jenkins system where they are arbitrated by the Lockable Resources Plugin. Support was added for sharing by creating a Jenkins job that when run, locks the requested resource then watches a given file until it is deleted, whereupon it releases the lock. So the Zuul job acquires the lock by triggering the Jenkins job, then releases the resource by deleting its own workspace (Jenkins can see the Zuul workspace over NFS). Perhaps a smarter scheme would be to have Jenkins connect to a TCP endpoint to report that the lock is acquired, then release it at soon as it can't communicate with the endpoint any more. I still think that an unconditional cleanup phase would be useful though, even if best-effort, because with dequeue-on-new-patchset we are gona end up with a lot of job workspaces to garbage-collect (with the best-effort cleanup phase I would expect it to be a small enough problem that our infra will change before we need to worry about it!). Making Zuul properly aware of the resources sounds cleaner but it actually brings the unfortunate complication that I want to strictly minimise the regions where the resources are locked: I don't want jobs to be holding locks while they compile code or upload logs, for example. >> If you agree so far, can we talk semantics & implementation? >> >> I think it would be sensible that these playbooks are _never_ >> killed, but that obviously poses a risk that users can get stuck >> with an unkillable job running. I guess a simple solution is to >> have a distinct timeout from the job timeout, which covers just >> the running of these playbooks. We recommend setting this pretty >> short (default 30 secs, say), and strictly minimising the logic >> in these playbooks. >> >> If I'm still making sense then from what I've read so far, it >> seems all the complexity here can just go in AnsibleJob - the >> gearman communication would already support this. The client >> would just send the executor:stop TextJob as normal, and then the >> server can return its response asynchronously when the cleanup >> playbooks have finished? >> >> How does that sound? > If we still think a cleanup phase is the best approach despite the > caveats (which we should document), the implementation you describe > sounds good. A busy system would probably need more than 30 seconds > (I'd still probably give it a few minutes). > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From tdecacqu at redhat.com Tue Dec 11 06:45:22 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 11 Dec 2018 06:45:22 +0000 Subject: Another etcd thread (was Re: Projects config introspection) In-Reply-To: <8f6191c0-2507-4dcd-a1a0-00a5d1a9ecf0@email.android.com> References: <8f6191c0-2507-4dcd-a1a0-00a5d1a9ecf0@email.android.com> Message-ID: <1544503922.b3l7gx2t9k.tristanC@fedora> Thank you, Monty for such an extensive feedback. Please find my replies inlined. On December 10, 2018 4:54 pm, Monty Taylor wrote: > I've since learned that it's considered a Bad Idea to re-use the etcd k8s uses for apps running in k8s. That means you need to run an etcd for your app anyway, which ultimately means the k8s use of etcd is really not as meaningful/beneficial as I originally thought, reducing any perceived gain we might have from aligning with what the k8s community is doing. 
Given that one of the benefits of using k8s to run applications is to make installing complicated dependencies like zookeeper easier (and plenty of things out there require much more complicated dependency chains than we do) - I don't think "need to run a zookeeper but I already know how to run etcd in k8s" is a very compelling argument. > > Another argument has to do with distro packaging, as the complexity of distro packaging for java applications is high and support for zk inside of the RH distros has been low. I know people were already working on getting it in to rh distros properly, but even with that, last time I asked internally at RH about use of software dependencies upstream that were not in distro and/or hard to properly package (this was related to javascript tooling and frameworks) the answer was "write the software the way it needs to be written, don't let the distro packaging situation override the correct technical decision". So I'd argue we've already gotten the feedback needed from the provider of the distro with the youngest/smallest support for zk that it's ok. > I think this is a single argument: etcd is becoming more popular with k8s (and apparently openstack). Using etcd for Zuul would make it simpler for operators since they are more likely to have the knowledge of running and maintaining an etcd cluster. Moreover, integrating Zookeeper in RH distro isn't really a technical issue; the dependencies are already packaged. However, the issue is the lack of engagement upstream which prevents proper support. [...] > The only reason I could imagine changing that in the future would be if zk upstream abandoned the project. Zookeeper itself is pretty much "done" - it's a simple system that's been around for ages and additional 'features' are mostly done via recipes on top of its fundamentals ... so even a mostly absent upstream wouldn't really be a problem as long as it remains active enough to keep up with security fixes and to keep up with Java versions. I'm pretty sure zk is used in enough things that many people have a vested interest in (like Kafka) that it's unlikely zk loses enough upstream support to be a dangerous thing to be based on. I find this argument to be the most compelling. Zookeeper was originally developed at Yahoo, and there have been no major releases since 2008. Facebook[0] seems to be replacing it with LAD. Kafka[1] also seems to be removing the Zookeeper dependency... I disagree that Zookeeper is a simple system. It's a complex stack and it seems to be a pain point for operations. I wouldn't say that it is a dangerous thing to be based on, but it's a problematic choice to make it such a central piece of Zuul services. > It is a little concerning that the auth issue you link to is still outstanding. However, if you look at the github PR linked: https://github.com/apache/zookeeper/pull/418 it's in merge conflict. I left a ping to the author and reviewer to see if it can be revived. That said - I would hope that zk auth isn't the only piece of the puzzle preventing unauthorized access. I'd expect defense in depth here, with at least some amount of additional restriction in place, such as VPN or iptables or similar. Even with that PR, the current system may not be enough as it doesn't support multiple auth and you only have a global username:password. For example, with the executor zoning, we may want to be able to revoke or change just one executor credential and not the whole system. 
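(For reference, per-client credentials are possible with ZooKeeper's digest scheme; the following is a rough kazoo sketch with made-up principal names and paths, not something Zuul or Nodepool do today.)

  from kazoo.client import KazooClient
  from kazoo.security import make_digest_acl

  # Each component authenticates with its own digest credential...
  zk = KazooClient(hosts="zk01:2181",
                   auth_data=[("digest", "executor-zone1:secret1")])
  zk.start()

  # ...and each znode carries an ACL listing the principals allowed to use it.
  acls = [
      make_digest_acl("scheduler", "secret0", all=True),
      make_digest_acl("executor-zone1", "secret1", read=True, write=True, create=True),
  ]
  zk.create("/zuul/executors/zone1", b"", acl=acls, makepath=True)

Rotating or revoking a single executor's credential then means updating the ACLs (zk.set_acls) on every znode that names it, since ZooKeeper ACLs are per-znode and are not applied recursively.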
> All of that is to say - I think the ship of pondering replacing zk with etcd has sailed and there's no real reason to think we might return to it. kazoo recpies help us use zk fully, and that's a good thing. :) Perhaps it's not such an impossible feat to rewrite the zk.py module with an etcd backend. Would such a contribution be considered? Regards, -Tristan [0]: https://code.fb.com/data-infrastructure/location-aware-distribution-configuring-servers-at-scale/ [1]: https://issues.apache.org/jira/browse/KAFKA-6598 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From pabelanger at redhat.com Tue Dec 11 12:34:13 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 11 Dec 2018 07:34:13 -0500 Subject: Another etcd thread (was Re: Projects config introspection) In-Reply-To: <1544503922.b3l7gx2t9k.tristanC@fedora> References: <8f6191c0-2507-4dcd-a1a0-00a5d1a9ecf0@email.android.com> <1544503922.b3l7gx2t9k.tristanC@fedora> Message-ID: <20181211123413.GA27849@localhost.localdomain> On Tue, Dec 11, 2018 at 06:45:22AM +0000, Tristan Cacqueray wrote: > Thank you, Monty for such an extensive feedback. > Please find my replies inlined. > > On December 10, 2018 4:54 pm, Monty Taylor wrote: > > I've since learned that it's considered a Bad Idea to re-use the etcd k8s uses for apps running in k8s. That means you need to run an etcd for your app anyway, which ultimately means the k8s use of etcd is really not as meaningful/beneficial as I originally thought, reducing any perceived gain we might have from aligning with what the k8s community is doing. Given that one of the benefits of using k8s to run applications is to make installing complicated dependencies like zookeeper easier (and plenty of things out there require much more complicated dependency chains than we do) - I don't think "need to run a zookeeper but I already know how to run etcd in k8s" is a very compelling argument. > > > > Another argument has to do with distro packaging, as the complexity of distro packaging for java applications is high and support for zk inside of the RH distros has been low. I know people were already working on getting it in to rh distros properly, but even with that, last time I asked internally at RH about use of software dependencies upstream that were not in distro and/or hard to properly package (this was related to javascript tooling and frameworks) the answer was "write the software the way it needs to be written, don't let the distro packaging situation override the correct technical decision". So I'd argue we've already gotten the feedback needed from the provider of the distro with the youngest/smallest support for zk that it's ok. > > > > I think this is a single argument: etcd is becoming more popular with > k8s (and apparantly openstack). Using etcd for Zuul would make it > simpler for operator since they are more likely to have the knowledge > of running and maintaining an etcd cluster. Moreover, integrating > Zookeeper in RH distro isn't really a technical issue; the dependencies > are already packaged. However, the issue is the lack of engagement > upstream which prevents proper support. > FWIW: I'd had the same concerns as you, when we first decided to use Zookeeper. Originally I thought, great another java app to maintain. However, in the last year of running Zookeeper for zuul.o.o, there hasn't been much maintainence that needed to be done. 
There was a few hiccups at the start around snapshots, but so far zero issues with memory leaks or broken functionality. We even migrated from a single ZK instances, to 3 node ZK cluster, and don't believe there was any problems. > [...] > > > The only reason I could imagine changing that in the future would be if zk upstream abandoned the project. Zookeeper itself is prety much "done" - it's a simple system that's been around for ages and additional 'features' are mostly done via recipes on top of its fundamentals ... so even a mostly absent upstream wouldn't really be a problem as long as it remains active enough to keep up with security fixes and to keep up with Java versions. I'm pretty sure zk is used in enough things that many people have a vested interest in (like Kafka) that it's unlikely zk loses enough upstream support to be a dangerous thing to be based on. > > I find this argument to be the most compeling. Zookeeper was > orginally developed at Yahoo, and there have been no major releases > since 2008. Facebook[0] seems to be replacing it by LAD. Kafka[1] > also seems to be removing the Zookeeper depencency... > > I disagree that Zookeeper is a simple system. It's a complex > stack and it seems to be a painpoint for operations. I wouldn't > say that it is a dangerous thing to be based on, but it's a > problematic choice to make it such a central piece of Zuul services. > As mentioned above, this so far hasn't been my case. Zookeeper just works and haven't needed to do anything. It is possible, as we leverage ZK more, this may changed, but we've done a good job keeping users out of zookeeper so far. > > It is a little concerning that the auth issue you link to is still outstanding. However, if you look at the github PR linked: https://github.com/apache/zookeeper/pull/418 it's in merge conflict. I left a ping to the author and reviewer to see if it can be revived. That said - I would hope that zk auth isn't the only piece of the puzzle preventing unauthorized access. I'd expect defense in depth here, with at least some amount of additional restriction in place, such as VPN or iptables or similar. > > Even with that PR, the current system may not be enough as > it doesn't support multiple auth and you only have a global > username:password. For example, with the executor zoning, > we may want to be able to revoke or change just one executor > credential and not the whole system. > > > All of that is to say - I think the ship of pondering replacing zk with etcd has sailed and there's no real reason to think we might return to it. kazoo recpies help us use zk fully, and that's a good thing. :) > > Perhaps it's not such an impossible feat to rewrite the zk.py module with > an etcd backend. Would such a contribution be considered? 
> > Regards, > -Tristan > > [0]: https://code.fb.com/data-infrastructure/location-aware-distribution-configuring-servers-at-scale/ > [1]: https://issues.apache.org/jira/browse/KAFKA-6598 > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From shrewsbury.dave at gmail.com Tue Dec 11 15:32:20 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 11 Dec 2018 10:32:20 -0500 Subject: Another etcd thread (was Re: Projects config introspection) In-Reply-To: <1544503922.b3l7gx2t9k.tristanC@fedora> References: <8f6191c0-2507-4dcd-a1a0-00a5d1a9ecf0@email.android.com> <1544503922.b3l7gx2t9k.tristanC@fedora> Message-ID: On Tue, Dec 11, 2018 at 1:44 AM Tristan Cacqueray wrote: [snip] > All of that is to say - I think the ship of pondering replacing zk with > etcd has sailed and there's no real reason to think we might return to it. > kazoo recpies help us use zk fully, and that's a good thing. :) > > Perhaps it's not such an impossible feat to rewrite the zk.py module with > an etcd backend. Would such a contribution be considered? > > I did write that module (trying to keep all zk interaction hidden behind an API of our own design) with the intent that it would make it easier for us to switch to a different backend if ZooKeeper didn't work out. However, even though the code may be easier to write, I think we have to consider all of the problem domain knowledge that we will lose, and will now have to reset with something else, if we do switch. A lot of that knowledge shows up in several other areas of code, too (how we deal with locks, lost connections, refreshing data, etc). I think that's a HUGE negative for us if we were to switch, basically setting us back months (maybe years), at least on a knowledge front. -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Dec 11 15:45:04 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 11 Dec 2018 07:45:04 -0800 Subject: Adding an unconditional cleanup job phase In-Reply-To: <7e997763-c40d-664c-938d-640407afa07c@bluwirelesstechnology.com> (Brendan Jackman's message of "Tue, 11 Dec 2018 04:49:32 +0000") References: <8736r5jsa2.fsf@meyer.lemoncheese.net> <7e997763-c40d-664c-938d-640407afa07c@bluwirelesstechnology.com> Message-ID: <8736r4f4ov.fsf@meyer.lemoncheese.net> Brendan Jackman writes: > On 10/12/2018 22:49, James E. Blair wrote: >> Brendan Jackman writes: >> >>> One unfortunate hack in my use-case is that we currently lock and >>> release test resources which are shared with another system. This >>> is handled by an ansible role that sends API calls to that other >>> system. This means summarily killing a running job can prevent a >>> resource from getting unlocked. To get around this I've had to >>> disable pipeline.dequeue-on-new-patchset. >> That is unfortunate -- that's probably causing a lot of extra resources >> to be consumed. >> >>> I'd like to add stage where playbooks can be added that still >>> gets run even if the job gets stopped. My guess is that this is >>> also useful to other people who are using the static nodepool >>> driver and would like to unconditionally clean up files left >>> behind by jobs - does that sound correct? It generally seems that >>> there would be value in having and unconditional cleanup stage, >>> anyway. 
>> We've also discussed having a cleanup *job* which runs after all other >> jobs are complete, though I think that adding support for pausing jobs >> reduced the need for that somewhat (since a job can run, pause, and >> continue after all other jobs regardless of whether they succeed). I >> don't think that directly addresses your use case because it can still >> be aborted. >> >> One downside of this approach is that Zuul can provide no guarantees >> that the playbook will run, only a best effort. If the Zuul executor is >> killed, there's nothing we can do about it. > Yeah, to be honest I'd resigned myself to the fact that if Zuul > components get restarted I'm gona have to manually check for > unreleased locks, and I put them on a 12 hour timeout for that > reason too. (Luckily there's decent visibility on the locks). >> Because of that, the most reliable way to deal with resource locks is a >> lock that doesn't require explicit unlocking. The way that Zuul locks >> nodes in Nodepool accounts for this (if Zuul is killed, the locks are >> automatically released). Is it possible for you to encode the resources >> you need as static nodes in Nodepool and add them to your nodeset >> requests? Then you would get the benefit of Nodepool's locking. If >> not, we have discussed adding support for more resource types to >> Nodepool; perhaps that would help. > Yeah it may be worth investigating a smarter locking scheme. For > context the resources are pieces of specialised HW, and they are > shared with a Jenkins system where they are arbitrated by the > Lockable Resources Plugin. Support was added for sharing by > creating a Jenkins job that when run, locks the requested > resource then watches a given file until it is deleted, whereupon > it releases the lock. So the Zuul job acquires the lock by > triggering the Jenkins job, then releases the resource by > deleting its own workspace (Jenkins can see the Zuul workspace > over NFS). Perhaps a smarter scheme would be to have Jenkins > connect to a TCP endpoint to report that the lock is acquired, > then release it at soon as it can't communicate with the endpoint > any more. > > I still think that an unconditional cleanup phase would be useful > though, even if best-effort, because with dequeue-on-new-patchset > we are gona end up with a lot of job workspaces to > garbage-collect (with the best-effort cleanup phase I would > expect it to be a small enough problem that our infra will change > before we need to worry about it!). > > Making Zuul properly aware of the resources sounds cleaner but it > actually brings the unfortunate complication that I want to > strictly minimise the regions where the resources are locked: I > don't want jobs to be holding locks while they compile code or > upload logs, for example. That all makes sense, thanks. We should make sure to put the caveats in the documentation so it doesn't surprise anyone. -Jim From cboylan at sapwetik.org Tue Dec 11 17:34:16 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 11 Dec 2018 09:34:16 -0800 Subject: Another etcd thread (was Re: Projects config introspection) In-Reply-To: <1544503922.b3l7gx2t9k.tristanC@fedora> References: <8f6191c0-2507-4dcd-a1a0-00a5d1a9ecf0@email.android.com> <1544503922.b3l7gx2t9k.tristanC@fedora> Message-ID: <1544549656.659334.1606085296.51432E1C@webmail.messagingengine.com> On Mon, Dec 10, 2018, at 10:45 PM, Tristan Cacqueray wrote: Snip > I find this argument to be the most compeling. 
Zookeeper was > orginally developed at Yahoo, and there have been no major releases > since 2008. Facebook[0] seems to be replacing it by LAD. Kafka[1] > also seems to be removing the Zookeeper depencency... Worth noting that Kafka has rejected etcd as well and appears to be implementing things internally themselves. At least according to this bug link. > Snip > > Even with that PR, the current system may not be enough as > it doesn't support multiple auth and you only have a global > username:password. For example, with the executor zoning, > we may want to be able to revoke or change just one executor > credential and not the whole system. I don't think this is true. ZooKeeper seems to support an auth scheme using Kerberos and SASL which would allow for every client and server to have its own principal. You can also apply ACLs to paths on top of the authenticated clients. This might not be the simplest setup but should allow for you to revoke a single client's (or server's) access to the cluster. > Snip > [1]: https://issues.apache.org/jira/browse/KAFKA-6598 From tdecacqu at redhat.com Wed Dec 12 01:53:14 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 12 Dec 2018 01:53:14 +0000 Subject: Another etcd thread (was Re: Projects config introspection) In-Reply-To: References: <8f6191c0-2507-4dcd-a1a0-00a5d1a9ecf0@email.android.com> <1544503922.b3l7gx2t9k.tristanC@fedora> Message-ID: <1544571915.xx9xgk859a.tristanC@fedora> On December 11, 2018 3:32 pm, David Shrewsbury wrote: > On Tue, Dec 11, 2018 at 1:44 AM Tristan Cacqueray > wrote: > > [snip] > >> All of that is to say - I think the ship of pondering replacing zk with >> etcd has sailed and there's no real reason to think we might return to it. >> kazoo recpies help us use zk fully, and that's a good thing. :) >> >> Perhaps it's not such an impossible feat to rewrite the zk.py module with >> an etcd backend. Would such a contribution be considered? >> >> > I did write that module (trying to keep all zk interaction hidden behind an > API of our > own design) with the intent that it would make it easier for us to switch > to a different > backend if ZooKeeper didn't work out. However, even though the code may be > easier > to write, I think we have to consider all of the problem domain knowledge > that we will > lose, and will now have to reset with something else, if we do switch. A > lot of that knowledge > shows up in several other areas of code, too (how we deal with locks, lost > connections, > refreshing data, etc). I think that's a HUGE negative for us if we were to > switch, basically > setting us back months (maybe years), at least on a knowledge front. > It seems like the domain knowledge may not be all lost, and another implementation would likely benefit from what you learned in the several other areas of the code. I would like to give etcd a shot before ZooKeeper takes over zuul configuration. But such effort requires your consent. Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tdecacqu at redhat.com Wed Dec 12 01:53:14 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 12 Dec 2018 01:53:14 +0000 Subject: Another etcd thread (was Re: Projects config introspection) In-Reply-To: References: <8f6191c0-2507-4dcd-a1a0-00a5d1a9ecf0@email.android.com> <1544503922.b3l7gx2t9k.tristanC@fedora> Message-ID: <1544571915.xx9xgk859a.tristanC@fedora> On December 11, 2018 3:32 pm, David Shrewsbury wrote: > On Tue, Dec 11, 2018 at 1:44 AM Tristan Cacqueray > wrote: > > [snip] > >> All of that is to say - I think the ship of pondering replacing zk with >> etcd has sailed and there's no real reason to think we might return to it. >> kazoo recpies help us use zk fully, and that's a good thing. :) >> >> Perhaps it's not such an impossible feat to rewrite the zk.py module with >> an etcd backend. Would such a contribution be considered? >> >> > I did write that module (trying to keep all zk interaction hidden behind an > API of our > own design) with the intent that it would make it easier for us to switch > to a different > backend if ZooKeeper didn't work out. However, even though the code may be > easier > to write, I think we have to consider all of the problem domain knowledge > that we will > lose, and will now have to reset with something else, if we do switch. A > lot of that knowledge > shows up in several other areas of code, too (how we deal with locks, lost > connections, > refreshing data, etc). I think that's a HUGE negative for us if we were to > switch, basically > setting us back months (maybe years), at least on a knowledge front. > It seems like the domain knowledge may not be all lost, and another implementation would likely benefit from what you learned in the several other areas of the code. I would like to give etcd a shot before ZooKeeper takes over zuul configuration. But such an effort requires your consent. Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tdecacqu at redhat.com Wed Dec 12 01:53:32 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 12 Dec 2018 01:53:32 +0000 Subject: Another etcd thread (was Re: Projects config introspection) In-Reply-To: <1544549656.659334.1606085296.51432E1C@webmail.messagingengine.com> References: <8f6191c0-2507-4dcd-a1a0-00a5d1a9ecf0@email.android.com> <1544503922.b3l7gx2t9k.tristanC@fedora> <1544549656.659334.1606085296.51432E1C@webmail.messagingengine.com> Message-ID: <1544575263.qnw479xmxg.tristanC@fedora> On December 11, 2018 5:34 pm, Clark Boylan wrote: > On Mon, Dec 10, 2018, at 10:45 PM, Tristan Cacqueray wrote: > > Snip > >> I find this argument to be the most compeling. Zookeeper was >> orginally developed at Yahoo, and there have been no major releases >> since 2008. Facebook[0] seems to be replacing it by LAD. Kafka[1] >> also seems to be removing the Zookeeper depencency... > > Worth noting that Kafka has rejected etcd as well and appears to be implementing things internally themselves. At least according to this bug link. Yes, it seems like they are moving away from configuration management systems altogether because it's not a good fit. Bottom line is, users seem to be moving away from ZooKeeper. >> > > Snip > >> >> Even with that PR, the current system may not be enough as >> it doesn't support multiple auth and you only have a global >> username:password. For example, with the executor zoning, >> we may want to be able to revoke or change just one executor >> credential and not the whole system. > > I don't think this is true. ZooKeeper seems to support an auth scheme using Kerberos and SASL which would allow for every client and server to have its own principal. You can also apply ACLs to paths on top of the authenticated clients. > > This might not be the simplest setup but should allow for you to revoke a single client's (or server's) access to the cluster. That seems correct, though it's unclear how you would add multiple principals to the znode's acl. Note that ZooKeeper permissions are not hierarchical, any change to the acl needs to be applied to all the nodes. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From claire at openstack.org Thu Dec 13 22:27:31 2018 From: claire at openstack.org (Claire Massey) Date: Thu, 13 Dec 2018 16:27:31 -0600 Subject: Creation of OSF Project Confirmation Guidelines Message-ID: <9A42F0A2-03C6-4AC1-925F-9DD554D78994@openstack.org> Hi everyone, As you know, Zuul is a pilot project supported by the OpenStack Foundation (OSF). As such, we wanted to make sure you’re aware of an effort recently kicked off by a sub group of the OSF Board of Directors to draft guidelines[1] for confirming pilot projects as full, top-level, open infrastructure projects of the Foundation. The current pilot projects that will be eligible for confirmation review in 2019-2020 are Airship, Kata Containers, StarlingX and Zuul. Currently the OpenStack project is the only confirmed project.
The discussion about drafting these guidelines is just getting started, and *we invite you to participate in this effort* by reviewing the working draft in this etherpad and adding in your feedback and thoughts starting on line 139: https://etherpad.openstack.org/p/BrainstormingOSFProjectConfirmationGuidelines There will most likely be an open call scheduled in early 2019 where additional feedback will also be welcomed. I will share that invitation with you once it’s schedule. The goal is to come up with a set of points that the Board can use when reviewing pilot projects for confirmation. This topic has previously been discussed at a very high-level at previous Board meetings, including the September 18th meeting[2] (starting around slide 38). 1. https://etherpad.openstack.org/p/ProjectConfirmationGuidelines 2. https://docs.google.com/presentation/d/10UyCpxkjPqC3kT-dYRpBxzNT39i2OhlggJvzGDosMz0/edit#slide=id.g4274351d5e_1_392 Thanks, Claire -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Mon Dec 17 22:57:28 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 17 Dec 2018 14:57:28 -0800 Subject: Zuul project update 2018-12-17 Message-ID: <87o99jahif.fsf@meyer.lemoncheese.net> Hi, The past two weeks in the Zuul project have seen: David discovered an issue in node deletion that was leaving empty ZooKeeper nodes and added code to nodepool to clean those up automatically. He also fixed a race in the test_handler_poll_session_expired nodepool test. Tobias worked on spec for scale out scheduler (what we've been informally calling Zuul v4 -- where we likely connect all components via Zookeeper and remove our use of Gearman). He also fixed an issue with github status checks. Paul added nodepool.node_id support for job inventory files (so that performance problems in jobs can be traced back to the hypervisor on which they ran). And I've continued the relative-priority work (including a change to group projects in check as they are grouped in gate), as well as a revision to our proposed governance document. A number of us are likely to be less active over the next few weeks. I don't plan to send another status update email until January 7. Please help write the next update email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email -Jim From pabelanger at redhat.com Tue Dec 18 15:43:53 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 18 Dec 2018 10:43:53 -0500 Subject: Zuul project update 2018-12-17 In-Reply-To: <87o99jahif.fsf@meyer.lemoncheese.net> References: <87o99jahif.fsf@meyer.lemoncheese.net> Message-ID: <20181218154353.GA9203@localhost.localdomain> On Mon, Dec 17, 2018 at 02:57:28PM -0800, James E. Blair wrote: > Hi, > > The past two weeks in the Zuul project have seen: > > David discovered an issue in node deletion that was leaving empty > ZooKeeper nodes and added code to nodepool to clean those up > automatically. He also fixed a race in the > test_handler_poll_session_expired nodepool test. > > Tobias worked on spec for scale out scheduler (what we've been > informally calling Zuul v4 -- where we likely connect all components via > Zookeeper and remove our use of Gearman). > > He also fixed an issue with github status checks. > > Paul added nodepool.node_id support for job inventory files (so that > performance problems in jobs can be traced back to the hypervisor on > which they ran). 
> > And I've continued the relative-priority work (including a change to > group projects in check as they are grouped in gate), as well as a > revision to our proposed governance document. > > A number of us are likely to be less active over the next few weeks. I > don't plan to send another status update email until January 7. > Do you think we'll be able to tag release of zuul / nodepool this week, or should we hold off until after the break? - Paul > Please help write the next update email by updating this etherpad: > > https://etherpad.openstack.org/p/zuul-update-email > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From mrhillsman at gmail.com Tue Dec 18 15:47:54 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Tue, 18 Dec 2018 09:47:54 -0600 Subject: Zuul project update 2018-12-17 In-Reply-To: <20181218154353.GA9203@localhost.localdomain> References: <87o99jahif.fsf@meyer.lemoncheese.net> <20181218154353.GA9203@localhost.localdomain> Message-ID: Thanks for the update Jim On Tue, Dec 18, 2018 at 9:44 AM Paul Belanger wrote: > On Mon, Dec 17, 2018 at 02:57:28PM -0800, James E. Blair wrote: > > Hi, > > > > The past two weeks in the Zuul project have seen: > > > > David discovered an issue in node deletion that was leaving empty > > ZooKeeper nodes and added code to nodepool to clean those up > > automatically. He also fixed a race in the > > test_handler_poll_session_expired nodepool test. > > > > Tobias worked on spec for scale out scheduler (what we've been > > informally calling Zuul v4 -- where we likely connect all components via > > Zookeeper and remove our use of Gearman). > > > > He also fixed an issue with github status checks. > > > > Paul added nodepool.node_id support for job inventory files (so that > > performance problems in jobs can be traced back to the hypervisor on > > which they ran). > > > > And I've continued the relative-priority work (including a change to > > group projects in check as they are grouped in gate), as well as a > > revision to our proposed governance document. > > > > A number of us are likely to be less active over the next few weeks. I > > don't plan to send another status update email until January 7. > > > Do you think we'll be able to tag release of zuul / nodepool this week, > or should we hold off until after the break? > > - Paul > > > Please help write the next update email by updating this etherpad: > > > > https://etherpad.openstack.org/p/zuul-update-email > > > > -Jim > > > > _______________________________________________ > > Zuul-discuss mailing list > > Zuul-discuss at lists.zuul-ci.org > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From claire at openstack.org Tue Dec 18 16:15:01 2018 From: claire at openstack.org (Claire Massey) Date: Tue, 18 Dec 2018 10:15:01 -0600 Subject: CFP Open Until January 23, Open Infrastructure Summit in Denver Message-ID: <22136D38-E6DF-4373-AE18-24DD35EAC544@openstack.org> Hi everyone, FYI - the CFP is now open for the first Open Infrastructure Summit (formerly the OpenStack Summit) which will be held in Denver, Colorado April 29 - May 1, 2019. Wednesday, *January 23* is the deadline to Submit presentations . The Open Infrastructure Summit is organized by OSF and designed to be a place where open source infrastructure communities can come together and collaborate in the open. Zuul will have a large and prominent presence at the event so please submit talks to the CFP and plan to attend! SUBMIT YOUR PRESENTATION Important info: Based on previous Program Committee and attendee feedback, we have added / updated three Tracks: Security, Getting Started, and Open Development (previously Open Source Community). You can find the Track descriptions here . All of the OSF pilot projects —including Airship, Kata Containers, StarlingX and Zuul — will be front and center alongside other open source communities like Ansible, Cloud Foundry, Docker, Kubernetes, and many more. The Open Infrastructure Summit (formerly the OpenStack Summit), has evolved to recognize our diverse audience, and to signal to the market that the event is relevant for all IT infrastructure decision makers. If you’re interested in influencing the Summit content, apply to be a Programming Committee member *, where you can also find a full list of time requirements and expectations. Nominations will close on January 4, 2019. The content submission process for the Forum and Project Teams Gathering will be managed separately in the upcoming months. *OSF Staff will serve as the Programming Committee for the Getting Started Track. Denver Summit registration and sponsor sales are currently open. Learn more and email summit at openstack.org with any questions. Please email speakersupport at openstack.org with any questions or feedback. Thanks, Claire -------------- next part -------------- An HTML attachment was scrubbed... URL: From akjayaku at cisco.com Wed Dec 19 04:50:07 2018 From: akjayaku at cisco.com (Akhil Jayakumar (akjayaku)) Date: Wed, 19 Dec 2018 04:50:07 +0000 Subject: How to define zuul.executor.inventory_file on a localhost Message-ID: Hi All, Our openstack cinder ci jobs are failing due to below error. Do you know where we need to define zuul.executor variable “inventory_file”? Appreciate your help. 2018-12-18 06:48:58.993454 | TASK [log-inventory : Copy ansible inventory to logs dir] 2018-12-18 06:48:59.045029 | primary | ERROR 2018-12-18 06:48:59.045544 | primary | { 2018-12-18 06:48:59.045637 | primary | "msg": "The task includes an option with an undefined variable. 
The error was: 'dict object' has no attribute 'inventory_file'\n\nThe error appears to have been in '/tmp/03e62a72bdbc4480937f15e45000abb3/trusted/project_1/git.zuul-ci.org/zuul-jobs/roles/log-inventory/tasks/main.yaml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# https://docs.ansible.com/ansible/latest/porting_guides/porting_guide_2.4.html#inventory\n- name: Copy ansible inventory to logs dir\n ^ here\n" 2018-12-18 06:48:59.045713 | primary | } 2018-12-18 06:48:59.048055 | 2018-12-18 06:48:59.048294 | PLAY RECAP 2018-12-18 06:48:59.048470 | primary | ok: 18 changed: 10 unreachable: 0 failed: 1 Thanks, Akhil. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Wed Dec 19 05:45:59 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 19 Dec 2018 05:45:59 +0000 Subject: How to define zuul.executor.inventory_file on a localhost In-Reply-To: References: Message-ID: <1545198244.z392e6me5d.tristanC@fedora> On December 19, 2018 4:50 am, Akhil Jayakumar (akjayaku) wrote: > Hi All, > > Our openstack cinder ci jobs are failing due to below error. > > Do you know where we need to define zuul.executor variable “inventory_file”? > > Appreciate your help. > > 2018-12-18 06:48:58.993454 | TASK [log-inventory : Copy ansible inventory to logs dir] > 2018-12-18 06:48:59.045029 | primary | ERROR > 2018-12-18 06:48:59.045544 | primary | { > 2018-12-18 06:48:59.045637 | primary | "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'inventory_file'\n\nThe error appears to have been in '/tmp/03e62a72bdbc4480937f15e45000abb3/trusted/project_1/git.zuul-ci.org/zuul-jobs/roles/log-inventory/tasks/main.yaml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# https://docs.ansible.com/ansible/latest/porting_guides/porting_guide_2.4.html#inventory\n- name: Copy ansible inventory to logs dir\n ^ here\n" > 2018-12-18 06:48:59.045713 | primary | } > 2018-12-18 06:48:59.048055 | > 2018-12-18 06:48:59.048294 | PLAY RECAP > 2018-12-18 06:48:59.048470 | primary | ok: 18 changed: 10 unreachable: 0 failed: 1 > Hi Akhil, The inventory_file is now set by the executor since this change: https://review.openstack.org/578235 Is your executor service up-to-date? Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From akjayaku at cisco.com Wed Dec 19 06:43:37 2018 From: akjayaku at cisco.com (Akhil Jayakumar (akjayaku)) Date: Wed, 19 Dec 2018 06:43:37 +0000 Subject: How to define zuul.executor.inventory_file on a localhost In-Reply-To: <1545198244.z392e6me5d.tristanC@fedora> References: <1545198244.z392e6me5d.tristanC@fedora> Message-ID: <8CB5925A-E016-436D-B49B-8C9810144931@cisco.com> Hi Tristan, Yes, it is up to date. This change works for localhost also? Thanks, Akhil On 19/12/18, 11:15 AM, "Tristan Cacqueray" wrote: On December 19, 2018 4:50 am, Akhil Jayakumar (akjayaku) wrote: > Hi All, > > Our openstack cinder ci jobs are failing due to below error. > > Do you know where we need to define zuul.executor variable “inventory_file”? > > Appreciate your help. 
> > 2018-12-18 06:48:58.993454 | TASK [log-inventory : Copy ansible inventory to logs dir] > 2018-12-18 06:48:59.045029 | primary | ERROR > 2018-12-18 06:48:59.045544 | primary | { > 2018-12-18 06:48:59.045637 | primary | "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'inventory_file'\n\nThe error appears to have been in '/tmp/03e62a72bdbc4480937f15e45000abb3/trusted/project_1/git.zuul-ci.org/zuul-jobs/roles/log-inventory/tasks/main.yaml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# https://docs.ansible.com/ansible/latest/porting_guides/porting_guide_2.4.html#inventory\n- name: Copy ansible inventory to logs dir\n ^ here\n" > 2018-12-18 06:48:59.045713 | primary | } > 2018-12-18 06:48:59.048055 | > 2018-12-18 06:48:59.048294 | PLAY RECAP > 2018-12-18 06:48:59.048470 | primary | ok: 18 changed: 10 unreachable: 0 failed: 1 > Hi Akhil, The inventory_file is now set by the executor since this change: https://review.openstack.org/578235 Is your executor service up-to-date? Regards, -Tristan From mike at mpagel.de Wed Dec 19 08:13:49 2018 From: mike at mpagel.de (Mike Pagel) Date: Wed, 19 Dec 2018 09:13:49 +0100 Subject: Different queues across branches? Message-ID: <007b01d49772$c57daa00$5078fe00$@mpagel.de> Hey zuulers, we recently had to create a maintenance branch across all repositories required to build the project. Since the two branches no longer affect each other in terms of a green build I intended to decouple the gating process of master and branch by changing the gate's queue name in the branch. I was expecting the branch gates to be executed in the new queue and the master gates being executed in the original one. However, it looks like all gates are still happening in the same original queue. Did I expect something wrong? In case it matters, we are using implied branch matching for our jobs. Thanks a lot, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Wed Dec 19 15:49:37 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 19 Dec 2018 07:49:37 -0800 Subject: Different queues across branches? In-Reply-To: <007b01d49772$c57daa00$5078fe00$@mpagel.de> (Mike Pagel's message of "Wed, 19 Dec 2018 09:13:49 +0100") References: <007b01d49772$c57daa00$5078fe00$@mpagel.de> Message-ID: <874lb98qjy.fsf@meyer.lemoncheese.net> "Mike Pagel" writes: > Hey zuulers, > > we recently had to create a maintenance branch across all repositories > required to build the project. Since the two branches no longer affect each > other in terms of a green build I intended to decouple the gating process of > master and branch by changing the gate's queue name in the branch. I was > expecting the branch gates to be executed in the new queue and the master > gates being executed in the original one. > > However, it looks like all gates are still happening in the same original > queue. > > Did I expect something wrong? In case it matters, we are using implied > branch matching for our jobs. Hi, Currently queues are shared at the project level (so all branches participate in the same queue). This behavior is inherited from OpenStack's use of branches where we perform upgrade testing between different branches, and so they do affect each other. More often than not, when I talk to people about this, they simply haven't considered that a change to a maintenance branch might break upgrades. 
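For illustration (the project, queue, and job names here are invented), queue membership is declared on the project, typically in its gate pipeline stanza, and it applies to the project as a whole rather than to any single branch:

  # Illustrative sketch of a project stanza (in-repo .zuul.yaml or a config-project).
  # Changes for this project gate together in the "integrated" queue, no matter
  # which branch they target.
  - project:
      name: example/server
      gate:
        queue: integrated
        jobs:
          - build
          - upgrade-test

That is also why changing the queue name on a single branch does not, by itself, move that branch's changes into a separate queue.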
So I think this is a good default behavior to keep, and it's worth highlighting the reason in the documentation. However, some projects may not be burdened by that requirement, and in those cases it would probably be safe to segregate the queues by branch. I think we'd need to think about how to do that (is it a boolean per-tenant, per-project, or should we be able to select it on a project-branch basis?). And it would be a little tricky to implement, as the assumption shows up a lot in the code. But I think the option would be welcome. -Jim From mike at mpagel.de Wed Dec 19 17:08:36 2018 From: mike at mpagel.de (Mike Pagel) Date: Wed, 19 Dec 2018 18:08:36 +0100 Subject: AW: Different queues across branches? In-Reply-To: <874lb98qjy.fsf@meyer.lemoncheese.net> References: <007b01d49772$c57daa00$5078fe00$@mpagel.de> <874lb98qjy.fsf@meyer.lemoncheese.net> Message-ID: <003501d497bd$7b4a0a50$71de1ef0$@mpagel.de> Hey, I'd like to understand this a little better. > Currently queues are shared at the project level (so all branches participate in the same queue). I guess I sort of expected incorrectly to have _multiple_ project definitions for a repository, if I have _multiple_ protected branches in there. I sort of extended the behavior from job definitions... Since that seems to be wrong, from which branch is a job definition then actually taken? Always from master? Why would master be special for Zuul? Because of project.default-branch? But if that is the case, what happens if I add a new job to the project's pipelines in master that does not exist on the branch (or change dependencies, project variables, project templates, etc.)? I think I don't understand how I can have a single project for a repository without forcing all branches to have identical or at least compatible jobs, templates etc. What am I missing? Can't I just completely reconfigure a project's pipeline (instance) in a branch? > where we perform upgrade testing between different branches, and so they do affect each other. That is also interesting. So you are checking a change on one branch for compatibility in another branch? How do you configure this elegantly? Do you have to rename a job "build" in a branch e.g. to "build-2", in order to launch its two versions "build" and "build-2" as part of the same pipeline? Thank you for your insights, Mike From corvus at inaugust.com Wed Dec 19 18:01:45 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 19 Dec 2018 10:01:45 -0800 Subject: AW: Different queues across branches? In-Reply-To: <003501d497bd$7b4a0a50$71de1ef0$@mpagel.de> (Mike Pagel's message of "Wed, 19 Dec 2018 18:08:36 +0100") References: <007b01d49772$c57daa00$5078fe00$@mpagel.de> <874lb98qjy.fsf@meyer.lemoncheese.net> <003501d497bd$7b4a0a50$71de1ef0$@mpagel.de> Message-ID: <871s6d75va.fsf@meyer.lemoncheese.net> "Mike Pagel" writes: > Hey, > > I'd like to understand this a little better. > >> Currently queues are shared at the project level (so all branches >> participate in the same queue). > > I guess I sort of expected incorrectly to have _multiple_ project > definitions for a repository, if I have _multiple_ protected branches > in there. I sort of extended the behavior from job definitions... It is very similar, in that all of the matching project definitions are combined; when running a job, all of the matching job definitions are combined. Both job and project definitions have branch matchers, causing them to sometimes be filtered.
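As a rough sketch (job names invented), an untrusted project can carry a different project stanza on each of its branches, and each stanza only applies to changes on the branch it lives on:

  # .zuul.yaml on the master branch of the untrusted project:
  - project:
      check:
        jobs:
          - build
          - unit-tests

  # .zuul.yaml on the stable/1.0 branch of the same project:
  - project:
      check:
        jobs:
          - build

A change on stable/1.0 would run only "build", while a change on master runs both jobs; anything the project gets from a config-project is combined on top of that.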
So a project definition in a config-project applies to all branches (it has no branch matcher), but one in a maintenance branch of a project will be combined with the one in the config-project only for changes on that branch. > Since that seems to be wrong, from which branch is a job definition > then actually taken? Always from master? Why would master be special > for zuul? Because of project.default-branch? The only way master is special is that it is processed first. > But if that is the case, what happens if I add a new job to the > project's pipelines in master that does not exist on the branch (or > change dependencies, project variables, project templates, etc.)? > > I think I don't understand, how I can have a single project for a > repository without forcing all branches to have identical or at least > compatible jobs, templates etc. What am I missing? Can't I just > completely reconfigure a project's pipeline (instance) in a branch? Hopefully the above makes it more clear -- you can have a single project definition outside of that repository (in a config project), or, within the untrusted-project itself, the definition applied to each branch lives within that branch. Plus, the two approaches can be combined. >> where we perform upgrade testing between different branches, and so >> they do affect each other. > > That is also interesting. So you are checking a change on one branch > for compatibility in another branch? How do you configure this > elegantly? Do you have to rename a job "build" in a branch e.g. to > "build-2", in order to launch its two versions "build" and "build-2" > as part of the same pipeline? We build upgrade jobs by having them checkout the old branch, install the software, then checkout the new branch and perform the upgrade. When zuul prepares the repo on disk, it prepares all of the branches it knows about, so checking out different branches on disk is okay to do (the repo contains the future states of all branches, not just the change under test). So we would have a job like "test-stable-to-master-upgrade" and that job is hard-coded (though we would use variables to make it easy to change) to test the stable and master branches. -Jim From fungi at yuggoth.org Wed Dec 19 18:04:18 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 Dec 2018 18:04:18 +0000 Subject: How to define zuul.executor.inventory_file on a localhost In-Reply-To: <1545198244.z392e6me5d.tristanC@fedora> References: <1545198244.z392e6me5d.tristanC@fedora> Message-ID: <20181219180418.xslzrgqxxwdewgka@yuggoth.org> On 2018-12-19 05:45:59 +0000 (+0000), Tristan Cacqueray wrote: > On December 19, 2018 4:50 am, Akhil Jayakumar (akjayaku) wrote: > > Our openstack cinder ci jobs are failing due to below error. > > > > Do you know where we need to define zuul.executor variable “inventory_file”? [...] > The inventory_file is now set by the executor since this change: > https://review.openstack.org/578235 > > Is your executor service up-to-date? We also just had someone pop into the #zuul IRC channel on Freenode asking the same question. The feature to have the executor start supplying an inventory_file value first appeared in Zuul 3.2.0 so if `pip3 freeze|grep zuul` indicates you're running a version earlier than 3.2.0 you'll need to upgrade (or switch to an older forked copy of the zuul-jobs repository). Also be sure to review the release notes at https://zuul-ci.org/docs/zuul/releasenotes.html between the version you're running and the version to which you're upgrading. 
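If neither upgrading nor pinning zuul-jobs is immediately possible, another stopgap would be to carry a local copy of the role with the task guarded so it is simply skipped when the executor does not provide the variable. An untested sketch (the exact destination path used by the real role may differ):

  - name: Copy ansible inventory to logs dir
    copy:
      src: "{{ zuul.executor.inventory_file }}"
      dest: "{{ zuul.executor.log_root }}/zuul-info/inventory.yaml"
    when: zuul.executor.inventory_file is defined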
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Dec 19 18:13:17 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 Dec 2018 18:13:17 +0000 Subject: AW: Different queues across branches? In-Reply-To: <871s6d75va.fsf@meyer.lemoncheese.net> References: <007b01d49772$c57daa00$5078fe00$@mpagel.de> <874lb98qjy.fsf@meyer.lemoncheese.net> <003501d497bd$7b4a0a50$71de1ef0$@mpagel.de> <871s6d75va.fsf@meyer.lemoncheese.net> Message-ID: <20181219181317.5x3umi7jbsqipx6g@yuggoth.org> On 2018-12-19 10:01:45 -0800 (-0800), James E. Blair wrote: [...] > We build upgrade jobs by having them checkout the old branch, install > the software, then checkout the new branch and perform the upgrade. > When zuul prepares the repo on disk, it prepares all of the branches it > knows about, so checking out different branches on disk is okay to do > (the repo contains the future states of all branches, not just the > change under test). > > So we would have a job like "test-stable-to-master-upgrade" and that job > is hard-coded (though we would use variables to make it easy to change) > to test the stable and master branches. The power this provides can't be overstated. That same job may test that the source code in your proposed future state of a stable branch across multiple projects can be successfully upgraded to that in the current state of your master branches, or the current state of stable to a proposed future state for master, or in some dependent series/queue it may even be testing that an upgrade from a proposed future state of stable to a proposed future state of master works as intended. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mike at mpagel.de Thu Dec 20 06:09:28 2018 From: mike at mpagel.de (Mike Pagel) Date: Thu, 20 Dec 2018 07:09:28 +0100 Subject: AW: AW: Different queues across branches? In-Reply-To: <871s6d75va.fsf@meyer.lemoncheese.net> References: <007b01d49772$c57daa00$5078fe00$@mpagel.de> <874lb98qjy.fsf@meyer.lemoncheese.net> <003501d497bd$7b4a0a50$71de1ef0$@mpagel.de> <871s6d75va.fsf@meyer.lemoncheese.net> Message-ID: <001e01d4982a$90e9b230$b2bd1690$@mpagel.de> Thanks Jim and Jeremy for your explanations. Regarding the future upgrade test, I think I do understand what you are doing. However, this is rather explicit as I think you are creating dedicated jobs and playbooks to perform such an upgrade test. I have to think about whether that makes sense in our cases as well. Now, back to the original question. Thanks a lot for outlining how the projects are processed, but then the question is not answered, or I should say I did not get the answer... :-) We are defining zuul projects only in untrusted repositories. Regarding those, Jim said in his last post: > within the untrusted-project itself, the definition applied to each branch lives within that branch. And I do see this behavior: I removed jobs from pipelines in the maintenance branch project and they were no longer executed for the branch, but still ran on master, as expected. Also Jim said in his first response: > Currently queues are shared at the project level (so all branches > participate in the same queue). Was this assuming project definition in trusted branches? 
If I have dedicated projects per branch in an untrusted repo project, then I do have dedicated pipeline job lists for these branches (as seen in practice). So why would I not have dedicated pipeline queues as well? Or in other words: Why is there a difference in being branch specific in an untrusted project for: * project.<pipeline>.jobs (branch specific) * project.<pipeline>.queue (_not_ branch specific?) Thanks a lot! From mike at mpagel.de Thu Dec 20 06:53:37 2018 From: mike at mpagel.de (Mike Pagel) Date: Thu, 20 Dec 2018 07:53:37 +0100 Subject: AW: AW: Different queues across branches? In-Reply-To: <001e01d4982a$90e9b230$b2bd1690$@mpagel.de> References: <007b01d49772$c57daa00$5078fe00$@mpagel.de> <874lb98qjy.fsf@meyer.lemoncheese.net> <003501d497bd$7b4a0a50$71de1ef0$@mpagel.de> <871s6d75va.fsf@meyer.lemoncheese.net> <001e01d4982a$90e9b230$b2bd1690$@mpagel.de> Message-ID: <004b01d49830$bc025a70$34070f50$@mpagel.de> Just browsed the source a bit and it seems that the code in DependentPipelineManager.buildChangeQueues() suggests that the _first_ queue name found is always used, searched across projects with identical names (e.g. from different branches?): https://github.com/openstack-infra/zuul/blob/ea3e777469a67fb3c55041df22f46f79a4f91f7c/zuul/manager/dependent.py#L48 This would explain why I see the current behavior. Did I spot the correct place? And if so, what was the reasoning for doing it this way? -----Original Message----- From: Mike Pagel [mailto:mike at mpagel.de] Sent: Thursday, 20 December 2018 07:09 To: zuul-discuss at lists.zuul-ci.org Subject: AW: AW: Different queues across branches? Thanks Jim and Jeremy for your explanations. Regarding the future upgrade test, I think I do understand what you are doing. However, this is rather explicit as I think you are creating dedicated jobs and playbooks to perform such an upgrade test. I have to think about whether that makes sense in our cases as well. Now, back to the original question. Thanks a lot for outlining how the projects are processed, but then the question is not answered, or I should say I did not get the answer... :-) We are defining zuul projects only in untrusted repositories. Regarding those, Jim said in his last post: > within the untrusted-project itself, the definition applied to each branch lives within that branch. And I do see this behavior: I removed jobs from pipelines in the maintenance branch project and they were no longer executed for the branch, but still ran on master, as expected. Also Jim said in his first response: > Currently queues are shared at the project level (so all branches > participate in the same queue). Was this assuming project definition in trusted branches? If I have dedicated projects per branch in an untrusted repo project, then I do have dedicated pipeline job lists for these branches (as seen in practice). So why would I not have dedicated pipeline queues as well? Or in other words: Why is there a difference in being branch specific in an untrusted project for: * project.<pipeline>.jobs (branch specific) * project.<pipeline>.queue (_not_ branch specific?) Thanks a lot! _______________________________________________ Zuul-discuss mailing list Zuul-discuss at lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Thu Dec 20 16:57:59 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 20 Dec 2018 08:57:59 -0800 Subject: AW: AW: Different queues across branches?
In-Reply-To: <001e01d4982a$90e9b230$b2bd1690$@mpagel.de> (Mike Pagel's message of "Thu, 20 Dec 2018 07:09:28 +0100") References: <007b01d49772$c57daa00$5078fe00$@mpagel.de> <874lb98qjy.fsf@meyer.lemoncheese.net> <003501d497bd$7b4a0a50$71de1ef0$@mpagel.de> <871s6d75va.fsf@meyer.lemoncheese.net> <001e01d4982a$90e9b230$b2bd1690$@mpagel.de> Message-ID: <87ftus3zl4.fsf@meyer.lemoncheese.net> "Mike Pagel" writes: > Just browsed the source a bit and it seems that the code in > DependentPipelineManager. buildChangeQueues() suggests that always the > _first_ found queue name is used, searched across projects with > identical names (e.g. from different branches?): > > https://github.com/openstack-infra/zuul/blob/ea3e777469a67fb3c55041df22f46f79a4f91f7c/zuul/manager/dependent.py#L48 > > This would explain why I see the current behavior. Did I spot the > correct place? And if so, what was the reasoning for doing it this > way? Yes -- when creating shared change queues from projects, the first appearance of a queue name is used, and this is true for any branch. So, unlike jobs definitions and invocations, a project's membership in a shared queue is global and immutable. It doesn't change based on branch, or the change under test. It is simply a characteristic of the project itself. There's some more documentation about that here: https://zuul-ci.org/docs/zuul/user/config.html#attr-project.%3Cpipeline%3E.queue -Jim From mike at mpagel.de Thu Dec 20 17:42:09 2018 From: mike at mpagel.de (Mike Pagel) Date: Thu, 20 Dec 2018 18:42:09 +0100 Subject: AW: AW: Different queues across branches? References: <007b01d49772$c57daa00$5078fe00$@mpagel.de> <874lb98qjy.fsf@meyer.lemoncheese.net> <003501d497bd$7b4a0a50$71de1ef0$@mpagel.de> <871s6d75va.fsf@meyer.lemoncheese.net> <001e01d4982a$90e9b230$b2bd1690$@mpagel.de> <87ftus3zl4.fsf@meyer.lemoncheese.net> Message-ID: <009101d4988b$554ef120$ffecd360$@mpagel.de> > Yes -- when creating shared change queues from projects, the first > appearance of a queue name is used, and this is true for any branch. > So, unlike jobs definitions and invocations, a project's membership in a shared queue is global and immutable. > It doesn't change based on branch, or the change under test. It is simply a characteristic of the project itself. I understand and I accept that this is the current state. In that case however, having multiple queue names is an erroneous condition. Depending on what "first" actually means I could imagine scenarios, where setting a different queue name in a branch can remove my project from the original queue used before on master and essentially break master's gating, because other (required) projects are still using the original queue name... Would you agree? Would it make sense to guard against this? Where would that kind of checking code go? What kind of exception should be thrown? Where would I find an example of such kind of configuration check code and associated tests? Mike From mike at mpagel.de Sat Dec 29 16:48:47 2018 From: mike at mpagel.de (Mike Pagel) Date: Sat, 29 Dec 2018 17:48:47 +0100 Subject: AW: AW: Different queues across branches? Message-ID: <002801d49f96$5ea799e0$1bf6cda0$@mpagel.de> Happy holidays to everyone! I wrote: > In that case however, having multiple queue names is an > erroneous condition. Just wanted to wrap this up. I realize that raising an exception (or similar behavior) to complain about this condition will break existing installations, which is not acceptable. In particular not for this kind of thing. 
Nevertheless the discussion helped me to understand the behavior in my setup. Thanks a lot, Mike -----Original Message----- From: Mike Pagel [mailto:mike at mpagel.de] Sent: Thursday, 20 December 2018 18:42 To: 'zuul-discuss at lists.zuul-ci.org' Subject: AW: AW: Different queues across branches? > Yes -- when creating shared change queues from projects, the first > appearance of a queue name is used, and this is true for any branch. > So, unlike jobs definitions and invocations, a project's membership in a shared queue is global and immutable. > It doesn't change based on branch, or the change under test. It is simply a characteristic of the project itself. I understand and I accept that this is the current state. In that case however, having multiple queue names is an erroneous condition. Depending on what "first" actually means I could imagine scenarios, where setting a different queue name in a branch can remove my project from the original queue used before on master and essentially break master's gating, because other (required) projects are still using the original queue name... Would you agree? Would it make sense to guard against this? Where would that kind of checking code go? What kind of exception should be thrown? Where would I find an example of such kind of configuration check code and associated tests? Mike