From corvus at inaugust.com Fri Jun 1 20:17:39 2018 From: corvus at inaugust.com (James E. Blair) Date: Fri, 01 Jun 2018 13:17:39 -0700 Subject: Behavior change for file matchers Message-ID: <871sdqntwc.fsf@meyer.lemoncheese.net> Hi, A while back we had a discussion about changing the behavior of file matchers to be more intuitive. The consensus seemed to be that treating them as normal overridable attributes would be the best solution. In particular, that would allow folks to more easily define a file matcher in a job, and then override it at the project-template or project level. See this thread for the full context: http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-May/000397.html I've implemented Proposal 2 from that message: https://review.openstack.org/571745 I think we should merge that, and, after we're satisfied with it, release version 3.1 with it, to signify that it's an end-user visible behavior change. It will cause changes in which jobs are run, in ways that would be difficult to enumerate ahead of time. But it should only do so in more complex situations, and hopefully the folks in those situations will be relieved that the solution is easier to comprehend than the status quo. Along with that change I have also proposed a followup which causes files and irrelevant-files to become mutually exclusive options: https://review.openstack.org/571804 If one is used in a parent and the other in a child, the attribute used in the child wins and clears that which was set in the parent. This makes them easier to reason about, though it does so largely by making them less useful. I suspect that the class of things that it lets us express (example in that commit message and in the previously linked thread) can be expressed today with a single regular expression. However, we're looking to move these matchers to use re2 which lacks negative lookaheads. That may mean that if we merge both this change and the re2 switch, we may not be able to express that configuration at all anymore. It may still be worth merging this change to save Zuul admins from pulling their hair out when people ask about the interaction of the two options. Or we may want to keep the ability to use both because, when used well, it does express something unique and succinct. I'd like for folks to consider this change carefully and provide as much feedback as possible before we decide what to do. If we do decide to merge it, it probably makes sense to do so in the same release (3.1) as the previous change. -Jim From tdecacqu at redhat.com Mon Jun 4 02:15:13 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Mon, 04 Jun 2018 02:15:13 +0000 Subject: Upgrade to Ansible 2.5 In-Reply-To: <18DF5544-92AC-4E07-A0D7-E720D1B8F6C4@bmw.de> References: <87muxhncpr.fsf@meyer.lemoncheese.net> <18DF5544-92AC-4E07-A0D7-E720D1B8F6C4@bmw.de> Message-ID: <1528078280.n541uivtxi.tristanC@fedora> On May 14, 2018 6:47 pm, Tobias.Henkel at bmw.de wrote: > On 02.05.18, 20:18, "James E. Blair" wrote: > > Hi, > > We have some patches ready to upgrade Zuul to use Ansible 2.5, but there > is some concern about whether there may be incompatibilities with > playbooks and roles in zuul-jobs. > > I know some folks are running with local patches to use later Ansible > versions -- if you are one of them, and you are using zuul-jobs, please > let me know so we have an idea of what to expect. > > How do folks feel about merging the upgrade-to-2.5 changes and releasing > those as Zuul 3.1.0 -- say, next Thursday, May 10? 
If we do that, and > we find that there are incompatibilities, we will need to begin merging > fixes to zuul-jobs immediately. Hopefully, however, those fixes can use > the "ansible_version" variable in order to be backwards compatible, so > zuul-jobs continues to work for folks who haven't upgraded yet. > > Finally, we hope to soon have a spec detailing multi-version support for > Zuul, so that future upgrades can be handled more gracefully. > > -Jim > > We did our flag day on Friday and things are looking normal after weekend and a full working day. > > So do we want to define a new flag day? > > -Tobias > Could we now plan another flag day? -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Mon Jun 4 14:12:32 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 04 Jun 2018 07:12:32 -0700 Subject: Upgrade to Ansible 2.5 In-Reply-To: <1528078280.n541uivtxi.tristanC@fedora> (Tristan Cacqueray's message of "Mon, 04 Jun 2018 02:15:13 +0000") References: <87muxhncpr.fsf@meyer.lemoncheese.net> <18DF5544-92AC-4E07-A0D7-E720D1B8F6C4@bmw.de> <1528078280.n541uivtxi.tristanC@fedora> Message-ID: <87muwahc8f.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > On May 14, 2018 6:47 pm, Tobias.Henkel at bmw.de wrote: >> On 02.05.18, 20:18, "James E. Blair" wrote: >> >> Hi, >> We have some patches ready to upgrade Zuul to use Ansible >> 2.5, but there >> is some concern about whether there may be incompatibilities with >> playbooks and roles in zuul-jobs. >> I know some folks are running with local patches to use >> later Ansible >> versions -- if you are one of them, and you are using zuul-jobs, please >> let me know so we have an idea of what to expect. >> How do folks feel about merging the upgrade-to-2.5 changes >> and releasing >> those as Zuul 3.1.0 -- say, next Thursday, May 10? If we do that, and >> we find that there are incompatibilities, we will need to begin merging >> fixes to zuul-jobs immediately. Hopefully, however, those fixes can use >> the "ansible_version" variable in order to be backwards compatible, so >> zuul-jobs continues to work for folks who haven't upgraded yet. >> Finally, we hope to soon have a spec detailing multi-version >> support for >> Zuul, so that future upgrades can be handled more gracefully. >> -Jim >> We did our flag day on Friday and things are looking normal after >> weekend and a full working day. >> >> So do we want to define a new flag day? >> >> -Tobias >> > > Could we now plan another flag day? Yes, how about we plan for Thursday, June 7? -Jim From corvus at inaugust.com Mon Jun 4 14:44:40 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 04 Jun 2018 07:44:40 -0700 Subject: Zuul project update 2018-06-04 Message-ID: <87vaayfw6f.fsf@meyer.lemoncheese.net> Hi, I skipped a few of these due to the Summit and related discombobulation, but it's back! The Summit and OpenDev were really good. We introduced Zuul to a lot of folks. A number of Zuul devs managed to find some time to talk in person about the container spec. I've pushed another revision of that; hopefully it's close to ready. Artem Goncharov (gtema) has started implementation of nodepool security group support. Tristan followed up on the summit discussion around containers and worked on a prototype implementation of an openshift resource provider[1].
He also started some howtos and recommended-practice documentation about the PTI[2], renamed the proposed OCI driver to RUNC, and added documentation to nodepool builder about setting diskimage env-vars in secure configuration. I finished the patches to convert zuul-web to cherrypy; those merged at the end of last week. I also pushed up two changes to implement features I've been promising for a while: overridable file matchers, and a new pipeline manager ("supercedent"). I'm hoping we merge the switch to Ansible 2.5 this week. Shrews has been working on refactoring the nodepool API to better support our different drivers, and especially the multi-label support needed by the static driver. Monty has been refreshing his angular v1 -> v5 migration and related patches. Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email Thanks, Jim [1] https://review.openstack.org/#/q/topic:openshift-zuul-build-resource [2] https://review.openstack.org/571420 From mnaser at vexxhost.com Mon Jun 4 18:09:18 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 4 Jun 2018 14:09:18 -0400 Subject: Multi-tenancy architecture Message-ID: Hi everyone: We're evaluating running Zuul in a multi-tenant environment by leveraging the following feature: https://docs.openstack.org/infra/zuul/admin/components.html#attr-scheduler.tenant_config_script Now, the idea is to make it as self-serve as possible. I've been toying around with a few ideas in terms of implementing tenants and I've come to the following simplistic idea: - Every 'user' has a 'tenant' inside this environment - All projects share the same `config-projects` which contains the base job and the artifact and log publishing roles (config-project is not controlled by the tenant) - `untrusted-projects` are all dynamically loaded, provided by the user This would mean that a user would supply a project using $method, and as long as they drop in a .zuul.yml file that is appropriate, things will get going automatically (implying that a basic check/gate/post pipeline is in place for all users). The only challenge with this at the moment is that multi-tenancy in Nodepool doesn't seem to exist, which means that everyone will be using the same pool of resources, rather than possibly having every tenant run their jobs inside their own project. Are there any plans to look into addressing the nodepool issue (or maybe ways to work around it?) and, as a more general question, is this an "ok" way of doing things? Your responses are appreciated! Thanks, Mohammed From corvus at inaugust.com Mon Jun 4 18:29:13 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 04 Jun 2018 11:29:13 -0700 Subject: Multi-tenancy architecture In-Reply-To: (Mohammed Naser's message of "Mon, 4 Jun 2018 14:09:18 -0400") References: Message-ID: <87vaaybe2u.fsf@meyer.lemoncheese.net> Mohammed Naser writes: > Hi everyone: > > We're evaluating running Zuul in a multi-tenant environment by > leveraging the following feature: > > https://docs.openstack.org/infra/zuul/admin/components.html#attr-scheduler.tenant_config_script > > Now, the idea is to make it as self-serve as possible.
I've been > toying around with a few ideas in terms of implementing tenants and > I've come to the following simplistic idea: > > - Every 'user' has a 'tenant' inside this environment > - All projects share the same `config-projects` which contains the > base job and publishing artifact and logs roles (config-project is not > controlled by tenant) > - `untrusted-projects` are all dynamically loaded, provided by the user This should work, but you may find it useful to have a second config-project for each tenant as well, depending on user need. We've also batted around the idea of a third type of project if we find that we want something that currently requires a config-project but don't want to hand over all the other permissions too. But we don't have a solid use case for that yet. Just something to keep in mind if you run into it. > This would mean that user would supply a project using $method, and as > long as they drop in a .zuul.yml file that is appropriate, things will (zuul.yaml) > get going automatically (implying that a basic check/gate/post > pipeline is in place for all users). The only challenge with this at > the moment is that multi-tenancy in Nodepool doesn't seem to exist, > which means that everyone will be using the same pool of resources, > rather than possibly having every tenant run their jobs inside their > own project. > > Is there any plans to look into addressing the nodepool issue (or > maybe ways to workaround it?) and as a more general question, is this > an "ok" way of doing things? Your responses are appreciated! Yes, that's come up, and my feeling from the conversations so far is there is probably consensus it would be desirable, but it wasn't designed in, so we'll need to think a little about how to do it. There's a stub story for it: 2001347. (Maybe it's as easy as adding a 'tenants:' attribute to pools in nodepool, having zuul send the tenant name in the request, and having nodepool ignore pools which don't match the tenant name?) -Jim From mnaser at vexxhost.com Mon Jun 4 18:42:32 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 4 Jun 2018 14:42:32 -0400 Subject: Multi-tenancy architecture In-Reply-To: <87vaaybe2u.fsf@meyer.lemoncheese.net> References: <87vaaybe2u.fsf@meyer.lemoncheese.net> Message-ID: Hi Jim, Thanks for your replies. I'm digesting all of this and hopefully coming up with a small PoC. On Mon, Jun 4, 2018 at 2:29 PM, James E. Blair wrote: > Mohammed Naser writes: > >> Hi everyone: >> >> We're evaluating running Zuul in a multi-tenant environment by >> leveraging the following feature: >> >> https://docs.openstack.org/infra/zuul/admin/components.html#attr-scheduler.tenant_config_script >> >> Now, the idea is to make it as self-serve as possible. I've been >> toying around with a few ideas in terms of implementing tenants and >> I've come to the following simplistic idea: >> >> - Every 'user' has a 'tenant' inside this environment >> - All projects share the same `config-projects` which contains the >> base job and publishing artifact and logs roles (config-project is not >> controlled by tenant) >> - `untrusted-projects` are all dynamically loaded, provided by the user > > This should work, but you may find it useful to have a second > config-project for each tenant as well, depending on user need. We've > also batted around the idea of a third type of project if we find that > we want something that currently requires a config-project but don't > want to hand over all the other permissions too. 
But we don't have a > solid use case for that yet. Just something to keep in mind if you run > into it. I'm looking for the very simple use case for now; I think exposing a project-config is something that would be a 'step #2' once this first part gets ironed out. >> This would mean that user would supply a project using $method, and as >> long as they drop in a .zuul.yml file that is appropriate, things will > > (zuul.yaml) > >> get going automatically (implying that a basic check/gate/post >> pipeline is in place for all users). The only challenge with this at >> the moment is that multi-tenancy in Nodepool doesn't seem to exist, >> which means that everyone will be using the same pool of resources, >> rather than possibly having every tenant run their jobs inside their >> own project. >> >> Is there any plans to look into addressing the nodepool issue (or >> maybe ways to workaround it?) and as a more general question, is this >> an "ok" way of doing things? Your responses are appreciated! > > Yes, that's come up, and my feeling from the conversations so far is > there is probably consensus it would be desirable, but it wasn't > designed in, so we'll need to think a little about how to do it. > > There's a stub story for it: 2001347. > > (Maybe it's as easy as adding a 'tenants:' attribute to pools in > nodepool, having zuul send the tenant name in the request, and having > nodepool ignore pools which don't match the tenant name?) That does sound pretty straightforward actually. I think this would be a really nice way of getting this done. Something to keep in mind would be that it would be nice if Nodepool published images and shared them with other tenants (now this is starting to be very OpenStack-specific) as it would help deduplicate the number of images when using multitenancy. > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From tdecacqu at redhat.com Mon Jun 4 23:50:56 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Mon, 04 Jun 2018 23:50:56 +0000 Subject: Multi-tenancy architecture In-Reply-To: References: Message-ID: <1528153259.1l7y7skhue.tristanC@fedora> On June 4, 2018 6:09 pm, Mohammed Naser wrote: > Hi everyone: > > We're evaluating running Zuul in a multi-tenant environment by > leveraging the following feature: > > https://docs.openstack.org/infra/zuul/admin/components.html#attr-scheduler.tenant_config_script > > Now, the idea is to make it as self-serve as possible. I've been > toying around with a few ideas in terms of implementing tenants and > I've come to the following simplistic idea: > > - Every 'user' has a 'tenant' inside this environment > - All projects share the same `config-projects` which contains the > base job and publishing artifact and logs roles (config-project is not > controlled by tenant) > - `untrusted-projects` are all dynamically loaded, provided by the user > IIRC, sharing a config-project across tenants may yield weird behavior, e.g. a change going through each tenant's gate pipeline causing merge errors. Though this may have been fixed since I first tried that. FWIW, the next version of Software Factory (3.1) will feature tenant deployment support.
Here are some implementation details: * each tenant has its own config project so that log URLs are tenant scoped. * the config project also defines the project list (zuul main.yaml) and the tenant_config_script loads the project list from every config project so that tenants' users can manage their own zuul main.yaml * each tenant's HTTP gateway has a whitelabel /zuul UI so that users can only see their pipelines, jobs and builds. Basically, the configuration playbook now supports deployment without zuul so that each tenant has its own software factory instance with all the services (e.g. code-search, repoxplorer, elk, etherpad, ...). Here is the architecture schema: http://www.softwarefactory-project.io/images/sf-master-tenant.png > This would mean that user would supply a project using $method, and as > long as they drop in a .zuul.yml file that is appropriate, things will > get going automatically (implying that a basic check/gate/post > pipeline is in place for all users). The only challenge with this at > the moment is that multi-tenancy in Nodepool doesn't seem to exist, > which means that everyone will be using the same pool of resources, > rather than possibly having every tenant run their jobs inside their > own project. > > Is there any plans to look into addressing the nodepool issue (or > maybe ways to workaround it?) and as a more general question, is this > an "ok" way of doing things? Your responses are appreciated! > > Thanks, > Mohammed Another challenge is to isolate executors for each tenant; there is another thread about that: http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-March/000081.html Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From Tobias.Henkel at bmw.de Tue Jun 5 05:13:33 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Tue, 5 Jun 2018 05:13:33 +0000 Subject: Upgrade to Ansible 2.5 In-Reply-To: <87muwahc8f.fsf@meyer.lemoncheese.net> References: <87muxhncpr.fsf@meyer.lemoncheese.net> <18DF5544-92AC-4E07-A0D7-E720D1B8F6C4@bmw.de> <1528078280.n541uivtxi.tristanC@fedora> <87muwahc8f.fsf@meyer.lemoncheese.net> Message-ID: <8D6FAE6A-1B69-41F8-8D72-98F2675DB13A@bmw.de> On 04.06.18, 16:13, "James E. Blair" wrote: Tristan Cacqueray writes: > On May 14, 2018 6:47 pm, Tobias.Henkel at bmw.de wrote: >> On 02.05.18, 20:18, "James E. Blair" wrote: >> >> Hi, >> We have some patches ready to upgrade Zuul to use Ansible >> 2.5, but there >> is some concern about whether there may be incompatibilities with >> playbooks and roles in zuul-jobs. >> I know some folks are running with local patches to use >> later Ansible >> versions -- if you are one of them, and you are using zuul-jobs, please >> let me know so we have an idea of what to expect. >> How do folks feel about merging the upgrade-to-2.5 changes >> and releasing >> those as Zuul 3.1.0 -- say, next Thursday, May 10? If we do that, and >> we find that there are incompatibilities, we will need to begin merging >> fixes to zuul-jobs immediately. Hopefully, however, those fixes can use >> the "ansible_version" variable in order to be backwards compatible, so >> zuul-jobs continues to work for folks who haven't upgraded yet. >> Finally, we hope to soon have a spec detailing multi-version >> support for >> Zuul, so that future upgrades can be handled more gracefully.
>> -Jim >> We did our flag day on Friday and things are looking normal after >> weekend and a full working day. >> >> So do we want to define a new flag day? >> >> -Tobias >> > > Could we now plan another flag day? Yes, how about we plan for Thursday, June 7? -Jim I'm fine with Thursday. -Tobias From corvus at inaugust.com Tue Jun 5 14:18:52 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 05 Jun 2018 07:18:52 -0700 Subject: Multi-tenancy architecture In-Reply-To: <1528153259.1l7y7skhue.tristanC@fedora> (Tristan Cacqueray's message of "Mon, 04 Jun 2018 23:50:56 +0000") References: <1528153259.1l7y7skhue.tristanC@fedora> Message-ID: <87r2ll8gfn.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > IIRC, sharing a config-projects across tenants may yield weird behavior, > e.g. change going through each tenant gate pipeline causing merge errors. > Though this may have been fixed since I first tried that. If that still happens, it's a bug. The system is definitely designed to share any type of project (config or untrusted) across tenants. > Another challenge is to isolate executors for each tenants, there is > another thread about that: > http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-March/000081.html That's a good thread to read for both of these issues. However, I'm not sure we're generally convinced that executor tenant isolation is necessary or desirable. My own feeling is that whatever hypothetical future vulnerability allows executor breakout is more than likely to be able to simply be repeated in another tenant anyway. Plus, once an executor is breached, it's possible to run arbitrary jobs on others. If isolation is important for security purposes, it might be better to run a Zuul for each tenant. The scheduler is not a big deal now; it's one of the least resource-intensive parts of Zuul. Sharing the executor system (and in some use-cases, nodepool) is a big advantage of Zuul multi-tenancy. -Jim From corvus at inaugust.com Tue Jun 12 20:14:27 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 12 Jun 2018 13:14:27 -0700 Subject: Zuul project update 2018-06-12 Message-ID: <87bmcfpxss.fsf@meyer.lemoncheese.net> Hi, We landed the changes to upgrade to Ansible 2.5. Things mostly seem to be going well, but we're currently working through a few issues described in story 2002528. Tristan has rebased the extra zuul-web routes with the new cherrypy interface: https://review.openstack.org/#/q/topic:zuul-web-routes and rebased the extra webui pages with the new angular6 interface: https://review.openstack.org/#/q/topic:zuul-ui-pages Monty seems to be updating the angular6 patches, which I guess prompted Tristan's update. Shrews reports that the multi-label changes have merged and he is ready to focus on static driver work now. He also reviewed and carried Jim's driver API changes through to merge. (Thanks!) Tobias has a number of bug fixes in review. Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email -Jim From tdecacqu at redhat.com Thu Jun 14 01:23:20 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 14 Jun 2018 01:23:20 +0000 Subject: Adding tags, extra-vars files and multiple run to job Message-ID: <1528938762.kc7685o6lg.tristanC@fedora> Hi, TripleO folks need some features to be able to re-use deployment playbooks natively from a Zuul job.
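For illustration, the kind of job definition this would enable might look like the following sketch (the attribute names simply mirror the proposals below and are not merged Zuul syntax; the job and playbook names are invented):

  - job:
      name: tripleo-deploy
      run:
        - playbooks/prepare.yaml
        - playbooks/deploy.yaml
      ansible-tags: deploy
      ansible-skip-tags: validation
      extra-vars: vars/deploy.yaml
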
Would it be possible to: * https://storyboard.openstack.org/#!/story/2001353 Add "tags" and "skip-tags" job attributes that would be used as "--tags" and "--skip-tags" ansible-playbook command line arguments. * https://storyboard.openstack.org/#!/story/2001354 Add an "extra-vars" job attribute that would be used as "--extra-vars @file.yaml" ansible-playbook command line arguments. * https://storyboard.openstack.org/#!/story/2002543 Make the "run" job attribute a list of playbooks and make the run phase chain ansible-playbook executions. It seems like those features are simple to implement, so if there are no objections I'd like to work on them. Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From pabelanger at redhat.com Thu Jun 14 12:40:49 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 14 Jun 2018 08:40:49 -0400 Subject: Adding tags, extra-vars files and multiple run to job In-Reply-To: <1528938762.kc7685o6lg.tristanC@fedora> References: <1528938762.kc7685o6lg.tristanC@fedora> Message-ID: <20180614124049.GA31722@localhost.localdomain> On Thu, Jun 14, 2018 at 01:23:20AM +0000, Tristan Cacqueray wrote: > Hi, > > TripleO folks need some features to be able to re-use deployment > playbook natively from a Zuul job. Would it be possible to: > > * https://storyboard.openstack.org/#!/story/2001353 > Adds "tags" and "skip-tags" job attributes that would be used as > "--tags" and "--skip-tags" ansible-playbook command line arguments.
> > * https://storyboard.openstack.org/#!/story/2001354 > Adds a "extra-vars" job attribute that would be used as > "--extra-vars @file.yaml" ansible-playbook command line arguments > I've started working on this: https://review.openstack.org/546474/ > * https://storyboard.openstack.org/#!/story/2002543 > Make the "run" job attribute a list of playbook and make the > run phase chain ansible-playbook executions. > I've already created this, but it was abandoned on the feature/zuulv3 branch: https://review.openstack.org/519596/ > It seems like those features are simple to implement, so if there are > no objections I'd like to work on them. > Feel free to continue on with them if you'd like. Paul From dmsimard at redhat.com Thu Jun 14 12:43:29 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Thu, 14 Jun 2018 08:43:29 -0400 Subject: Adding tags, extra-vars files and multiple run to job In-Reply-To: <1528938762.kc7685o6lg.tristanC@fedora> References: <1528938762.kc7685o6lg.tristanC@fedora> Message-ID: I wonder if we'd really end up having to re-implement an interface to all the command line arguments/flags? Is there a better way? Should we, for example, just provide a generic way to add any arguments to the ansible-playbook command line? David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Wed, Jun 13, 2018 at 9:23 PM, Tristan Cacqueray wrote: > Hi, > > TripleO folks need some features to be able to re-use deployment > playbook natively from a Zuul job. Would it be possible to: > > * https://storyboard.openstack.org/#!/story/2001353 > Adds "tags" and "skip-tags" job attributes that would be used as > "--tags" and "--skip-tags" ansible-playbook command line arguments. > > * https://storyboard.openstack.org/#!/story/2001354 > Adds a "extra-vars" job attribute that would be used as > "--extra-vars @file.yaml" ansible-playbook command line arguments > > * https://storyboard.openstack.org/#!/story/2002543 > Make the "run" job attribute a list of playbook and make the > run phase chain ansible-playbook executions. > > It seems like those features are simple to implement, so if there are > no objections I'd like to work on them. > > Regards, > -Tristan > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > From mordred at inaugust.com Thu Jun 14 12:52:02 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 14 Jun 2018 07:52:02 -0500 Subject: Adding tags, extra-vars files and multiple run to job In-Reply-To: <20180614124049.GA31722@localhost.localdomain> References: <1528938762.kc7685o6lg.tristanC@fedora> <20180614124049.GA31722@localhost.localdomain> Message-ID: <54dfc2f4-11ed-80cc-47f2-3f38f86c5804@inaugust.com> On 06/14/2018 07:40 AM, Paul Belanger wrote: > On Thu, Jun 14, 2018 at 01:23:20AM +0000, Tristan Cacqueray wrote: >> Hi, >> >> TripleO folks need some features to be able to re-use deployment >> playbook natively from a Zuul job. Would it be possible to: >> >> * https://storyboard.openstack.org/#!/story/2001353 >> Adds "tags" and "skip-tags" job attributes that would be used as >> "--tags" and "--skip-tags" ansible-playbook command line arguments. >> >> * https://storyboard.openstack.org/#!/story/2001354 >> Adds a "extra-vars" job attribute that would be used as >> "--extra-vars @file.yaml" ansible-playbook command line arguments >> > I've started working on this: > > https://review.openstack.org/546474/ > >> * https://storyboard.openstack.org/#!/story/2002543 >> Make the "run" job attribute a list of playbook and make the >> run phase chain ansible-playbook executions. >> > I've already created this, but was abandoned on feature/zuulv3 branch: > > https://review.openstack.org/519596/ I think before, we were wondering about use-cases (adding it is/was easy, but making sure multiple-run playbooks made sense was the sticking point). I think the 're-use the existing production playbooks' use-case is one we explicitly want to serve - so if existing production playbooks are intended to be run in sequence, then that sounds like a good argument for why such a thing is needed. >> It seems like those features are simple to implement, so if there are >> no objections I'd like to work on them. >> > Feel free to continue on with them if you'd like.
> > Paul > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > From tdecacqu at redhat.com Thu Jun 14 14:30:22 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 14 Jun 2018 14:30:22 +0000 Subject: Adding tags, extra-vars files and multiple run to job In-Reply-To: <54dfc2f4-11ed-80cc-47f2-3f38f86c5804@inaugust.com> References: <1528938762.kc7685o6lg.tristanC@fedora> <20180614124049.GA31722@localhost.localdomain> <54dfc2f4-11ed-80cc-47f2-3f38f86c5804@inaugust.com> Message-ID: <1528986415.ktewwpbm3x.tristanC@fedora> On June 14, 2018 12:52 pm, Monty Taylor wrote: > On 06/14/2018 07:40 AM, Paul Belanger wrote: >> On Thu, Jun 14, 2018 at 01:23:20AM +0000, Tristan Cacqueray wrote: >>> Hi, >>> >>> TripleO folks need some features to be able to re-use deployment >>> playbook natively from a Zuul job. Would it be possible to: >>> >>> * https://storyboard.openstack.org/#!/story/2001353 >>> Adds "tags" and "skip-tags" job attributes that would be used as >>> "--tags" and "--skip-tags" ansible-playbook command line arguments. >>> >>> * https://storyboard.openstack.org/#!/story/2001354 >>> Adds a "extra-vars" job attribute that would be used as >>> "--extra-vars @file.yaml" ansible-playbook command line arguments >>> >> I've started working on this: >> >> https://review.openstack.org/546474/ >> >>> * https://storyboard.openstack.org/#!/story/2002543 >>> Make the "run" job attribute a list of playbook and make the >>> run phase chain ansible-playbook executions. >>> >> I've already created this, but was abandoned on feature/zuulv3 branch: >> >> https://review.openstack.org/519596/ > > I think before we were wondering about use-cases (adding it is/was easy > - but making sure multiple-run playbooks made sense I think was the > sticking point) I think the 're-use the existing production playbooks' > use-case is one we explicitly want to serve - so if existing production > playbooks are intended to be run in sequence, then that sounds like a > good argument for why such a thing is needed. > That would avoid having to consolidate those into a single playbook. Though it seems like the biggest blocker is the lack of tags and extra-vars file support. >>> It seems like those features are simple to implement, so if there are >>> no objections I'd like to work on them. >>> >> Feel free to continue on with them if you'd like. >> >> Paul >> Thanks for the prompt feedback, Paul; I'll rebase and try to get those patches working soon. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tdecacqu at redhat.com Thu Jun 14 14:34:23 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 14 Jun 2018 14:34:23 +0000 Subject: Adding tags, extra-vars files and multiple run to job In-Reply-To: References: <1528938762.kc7685o6lg.tristanC@fedora> Message-ID: <1528986634.clbm54kzyf.tristanC@fedora> On June 14, 2018 12:43 pm, David Moreau Simard wrote: > I wonder if we'd really end up having to re-implement an interface to > all the command line arguments/flags ? Is there a better way ? > What other arguments would be useful? > Should we, for example, just provide a generic way to add any > arguments to the ansible-playbook command line ? > It sounds better to have a clear interface for jobs to use ansible-playbook command line features.
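For example, an explicit attribute gives Zuul something it can validate and document (a sketch; "ansible-args" below is hypothetical, not a real attribute):

  - job:
      name: deploy
      ansible-tags: deploy,facts

whereas a generic passthrough would accept anything:

  - job:
      name: deploy
      ansible-args: "--tags deploy --skip-tags validation --forks 50"
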
And there are security considerations if we enable arbitrary arguments. Thanks, -Tristan > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] > > > On Wed, Jun 13, 2018 at 9:23 PM, Tristan Cacqueray wrote: >> Hi, >> >> TripleO folks need some features to be able to re-use deployment >> playbook natively from a Zuul job. Would it be possible to: >> >> * https://storyboard.openstack.org/#!/story/2001353 >> Adds "tags" and "skip-tags" job attributes that would be used as >> "--tags" and "--skip-tags" ansible-playbook command line arguments. >> >> * https://storyboard.openstack.org/#!/story/2001354 >> Adds a "extra-vars" job attribute that would be used as >> "--extra-vars @file.yaml" ansible-playbook command line arguments >> >> * https://storyboard.openstack.org/#!/story/2002543 >> Make the "run" job attribute a list of playbook and make the >> run phase chain ansible-playbook executions. >> >> It seems like those features are simple to implement, so if there are >> no objections I'd like to work on them. >> >> Regards, >> -Tristan >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Thu Jun 14 14:46:50 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 14 Jun 2018 07:46:50 -0700 Subject: Adding tags, extra-vars files and multiple run to job In-Reply-To: <1528986415.ktewwpbm3x.tristanC@fedora> (Tristan Cacqueray's message of "Thu, 14 Jun 2018 14:30:22 +0000") References: <1528938762.kc7685o6lg.tristanC@fedora> <20180614124049.GA31722@localhost.localdomain> <54dfc2f4-11ed-80cc-47f2-3f38f86c5804@inaugust.com> <1528986415.ktewwpbm3x.tristanC@fedora> Message-ID: <87r2l9h1d1.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: [Regarding multiple run playbooks] >> I think before we were wondering about use-cases (adding it is/was >> easy - but making sure multiple-run playbooks made sense I think was >> the sticking point) I think the 're-use the existing production >> playbooks' use-case is one we explicitly want to serve - so if >> existing production playbooks are intended to be run in sequence, >> then that sounds like a good argument for why such a thing is >> needed. >> > > That would avoid having to consolidate those into a single playbook. > Though it seems like the biggest blocker is the lack of tags and extra-vars > file support. Yes, the other two sound straightforward; this is the only one I'm a little concerned about. I think we should understand it more first, as it's a little surprising. Can you describe why there are multiple playbooks involved in the run phase? How are these playbooks normally used? Why are they split in the first place? And then finally, we should consider import_playbook -- how does implementing this feature compare to suggesting the use of import_playbook? Is it too onerous to ask users to do that, or, instead, would it produce a playbook that's useful inside and outside of zuul while still maintaining the current organizational structure of the split playbooks? -Jim From corvus at inaugust.com Thu Jun 14 23:56:30 2018 From: corvus at inaugust.com (James E.
Blair) Date: Thu, 14 Jun 2018 16:56:30 -0700 Subject: September PTG attendance Message-ID: <877en0dis1.fsf@meyer.lemoncheese.net> Hi, So that folks can start planning for the next PTG, I'd like to find out if we think it's valuable to have a formal Zuul track there, and if folks are planning to attend. I've set up an etherpad to collect information: https://etherpad.openstack.org/p/zuul-denver-ptg-2018 Please fill it out and indicate whether you plan to attend, may attend if there's a standalone Zuul track, or won't be able to attend at all. Also, if you have a topic you would like to discuss in person at the PTG, please describe it there. I'll try to collect these within a couple days and we can have a better idea of whether we should make an effort to have a Zuul track, or skip this event in favor of working through our existing backlog. -Jim From corvus at inaugust.com Fri Jun 15 00:03:16 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 14 Jun 2018 17:03:16 -0700 Subject: Zuul participation in OpenCI Message-ID: <87muvwc3wb.fsf@meyer.lemoncheese.net> Hi, A number of us have been involved in the OpenCI effort: https://openci.io/ I think those of us who are, are bringing our experience with Zuul to the group, so it's fair to characterize the Zuul project as participating. It has been suggested that our logo could be added to the OpenCI web page under the "Practitioners from the following Communities" section. Does anyone think this is a good or bad idea? Please feel free to reply to me publicly or privately with any feedback. -Jim From gdk at redhat.com Fri Jun 15 06:08:49 2018 From: gdk at redhat.com (Greg Dekoenigsberg) Date: Fri, 15 Jun 2018 02:08:49 -0400 Subject: Zuul participation in OpenCI In-Reply-To: <87muvwc3wb.fsf@meyer.lemoncheese.net> References: <87muvwc3wb.fsf@meyer.lemoncheese.net> Message-ID: I think it's a good idea. :) On Thu, Jun 14, 2018, 8:03 PM James E. Blair wrote: > Hi, > > A number of us have been involved in the OpenCI effort: > > https://openci.io/ > > I think those of us who are, are bringing our experience with Zuul to > the group, so it's fair to characterize the Zuul project as > participating. It has been suggested that our logo could be added to > the OpenCI web page under the "Practitioners from the following > Communities" section. Does anyone think this is a good or bad idea? > Please feel free to reply to me publicly or privately with any > feedback. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Jun 15 13:06:44 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Jun 2018 13:06:44 +0000 Subject: Zuul participation in OpenCI In-Reply-To: <87muvwc3wb.fsf@meyer.lemoncheese.net> References: <87muvwc3wb.fsf@meyer.lemoncheese.net> Message-ID: <20180615130643.qj42pe7je6qoz7jh@yuggoth.org> On 2018-06-14 17:03:16 -0700 (-0700), James E. Blair wrote: [...] > It has been suggested that our logo could be added to the OpenCI > web page under the "Practitioners from the following Communities" > section. Does anyone think this is a good or bad idea? [...] I think it's accurate to characterize it as participation from the Zuul community, so including the logo there reflects at least some version of truth.
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mordred at inaugust.com Fri Jun 15 14:55:32 2018 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 15 Jun 2018 09:55:32 -0500 Subject: Zuul participation in OpenCI In-Reply-To: <20180615130643.qj42pe7je6qoz7jh@yuggoth.org> References: <87muvwc3wb.fsf@meyer.lemoncheese.net> <20180615130643.qj42pe7je6qoz7jh@yuggoth.org> Message-ID: On 06/15/2018 08:06 AM, Jeremy Stanley wrote: > On 2018-06-14 17:03:16 -0700 (-0700), James E. Blair wrote: > [...] >> It has been suggested that our logo could be added to the OpenCI >> web page under the "Practitioners from the following Communities" >> section. Does anyone think this is a good or bad idea? > [...] > > I think it's accurate to characterize it as participation from the > Zuul community, so including the logo there reflects at least some > version of truth. ++ And I think it's a good idea. From corvus at inaugust.com Mon Jun 18 13:47:27 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 18 Jun 2018 06:47:27 -0700 Subject: Zuul project update 2018-06-18 Message-ID: <87vaagtde8.fsf@meyer.lemoncheese.net> Hi, TristanC has started experimenting with GraphQL. The idea is to see how it can be used to simplify the REST APIs as well as enable powerful UIs using Apollo, in particular for pagination. Worked on extra-vars, ansible-tags and ansible-skip-tags job attributes. Found a couple more issues with the Angular6 patch when working on the new project and job page. I worked with Tobias and Clark on fixing a log streaming problem due to the Ansible 2.5 upgrade. This week I'm traveling to China to deliver a talk about zuul at OpenInfra Days. I won't be around in channel as much, but should be able to review changes while in transit. Shrews completed the static driver rewrite to pre-register nodes. Awaiting review/merge. Identified and fixed an issue where paused request handlers were not removed from active handlers if an exception from a provider operation occurred. This potentially caused leaked nodes that remained ready+locked until a launcher restart. Tobiash worked on the log streaming problem as mentioned above. Fixed ignored no_log caused by unreachable nodes (a security issue). Fixed caching of empty config. This week: work on improving tenant reconfigure behavior with github. Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email -Jim From fungi at yuggoth.org Tue Jun 19 12:21:59 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 19 Jun 2018 12:21:59 +0000 Subject: Unintended variable disclosure on Ansible unreachable tasks despite no_log In-Reply-To: <1B0719BD-0122-4D58-B662-42576CCA946B@bmw.de> References: <1B0719BD-0122-4D58-B662-42576CCA946B@bmw.de> Message-ID: <20180619122159.zpn3alnt7unigbzi@yuggoth.org> On 2018-06-18 13:47:24 +0000 (+0000), Tobias.Henkel at bmw.de wrote: > We discovered that if nodes become offline during the build the > no_log attribute of a task got ignored. If the unreachable error > occurred in a task used with a loop variable (ex: with_items), the > contents of the loop items would be printed in the console. This > could lead to accidentally leaking credentials or secrets. [...] MITRE has assigned CVE-2018-12557 to this vulnerability. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From iwienand at redhat.com Wed Jun 20 02:16:16 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 20 Jun 2018 12:16:16 +1000 Subject: Deprecation: zuul-info/inventory.yaml creation moved from validate-host to log-inventory role Message-ID: <95264cef-f47f-4d36-977f-271170745c58@redhat.com> Hello, The "validate-host" role [1] currently creates the "inventory.yaml" [2] file, which contains the Ansible inventory information for the job. For some history, the role was originally called "debug-ansible" but renamed to "validate-host". Since it collects other Ansible information, saving the inventory was added to the role shortly after the rename. As copying this inventory information requires access to the executor context, it must be run as a privileged job. However, the role also includes quite a number of other operations that do not have similar requirements. Because we do not currently have an automated CI testing framework for privileged jobs, changes to these unrelated parts (like [3]) are left with insufficient testing coverage. The creation of the inventory file has been moved into a separate role "log-inventory" [4]. We now wish to remove the inventory creation from the "validate-host" role [5]. This will lower the surface area of privileged jobs and also enable more straightforward testing of the rest of the "validate-host" role [6]. For the OpenStack ecosystem, the new role will be added to the base job [7] so this should be entirely transparent. However, if you build your own base templates, you will need to add the "log-inventory" role to continue creating the inventory.yaml. Note that you can do this *now*, and thus be unaffected when the creation is removed from "validate-host". Per the deprecation policy [8], but taking into consideration * reviews posted for several weeks * pending debugging of gate issues with "validate-host" * the informational/debugging nature of the output * the relative ease of adding the new role to maintain the status quo I would suggest we merge these changes at the start of next week, around June 25. Thank you -i [1] https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/validate-host [2] see any job results zuul-info/inventory.yaml [3] https://review.openstack.org/576351 [4] https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/log-inventory [5] https://review.openstack.org/563790 [6] https://review.openstack.org/563788 [7] https://review.openstack.org/563789 [8] https://zuul-ci.org/docs/zuul-jobs/deprecation.html From fungi at yuggoth.org Wed Jun 20 02:34:56 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 20 Jun 2018 02:34:56 +0000 Subject: Deprecation: zuul-info/inventory.yaml creation moved from validate-host to log-inventory role In-Reply-To: <95264cef-f47f-4d36-977f-271170745c58@redhat.com> References: <95264cef-f47f-4d36-977f-271170745c58@redhat.com> Message-ID: <20180620023456.5cqg54q6thovurjj@yuggoth.org> On 2018-06-20 12:16:16 +1000 (+1000), Ian Wienand wrote: [...] > For the OpenStack ecosystem, the new role will be added to the base > job [...] Should we also add it to the example base jobs in the zuul-base-jobs repository? We link to these from documentation as a minimal base job recommendation to reuse or copy as a starting point. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From iwienand at redhat.com Wed Jun 20 09:35:43 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 20 Jun 2018 19:35:43 +1000 Subject: Deprecation: zuul-info/inventory.yaml creation moved from validate-host to log-inventory role In-Reply-To: <20180620023456.5cqg54q6thovurjj@yuggoth.org> References: <95264cef-f47f-4d36-977f-271170745c58@redhat.com> <20180620023456.5cqg54q6thovurjj@yuggoth.org> Message-ID: On 06/20/2018 12:34 PM, Jeremy Stanley wrote: > On 2018-06-20 12:16:16 +1000 (+1000), Ian Wienand wrote: > [...] >> For the OpenStack ecosystem, the new role will be added to the base >> job > [...] > > Should we also add it to the example base jobs in the zuul-base-jobs > repository? We link to these from documentation as a minimal base > job recommendation to reuse or copy as a starting point. The validate-host role isn't part of that, but maybe it could be (or maybe that's what you meant)? We can discuss in https://review.openstack.org/576802 -i From tdecacqu at redhat.com Tue Jun 26 11:40:34 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 26 Jun 2018 11:40:34 +0000 Subject: Deprecation: zuul-info/inventory.yaml creation moved from validate-host to log-inventory role In-Reply-To: <95264cef-f47f-4d36-977f-271170745c58@redhat.com> References: <95264cef-f47f-4d36-977f-271170745c58@redhat.com> Message-ID: <1530013132.sv3z4q6tbq.tristanC@fedora> On June 20, 2018 2:16 am, Ian Wienand wrote: [...] > However, if you build your own base templates, you will need to add > the "log-inventory" role to continue creating the inventory.yaml. > Note that you can do this *now*, and thus be unaffected when the > creation is removed from "validate-host". [...] Hi Ian, the log-inventory role is currently unusable for me without https://review.openstack.org/577674 Also, for some other reason, when a job runs without a nodeset (on localhost), the inventory_file is undefined and needs to be set like this:

  - name: Define inventory_file fact
    set_fact:
      inventory_file: "/tmp/{{ zuul.build }}/ansible/inventory.yaml"

Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Tue Jun 26 20:16:49 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 26 Jun 2018 13:16:49 -0700 Subject: Zuul project update 2018-06-26 Message-ID: <874lhpl2vi.fsf@meyer.lemoncheese.net> Hi, I was away giving a talk on Zuul last week. It looks like it's been a week of bugfixes: Tristan found a bunch of issues after upgrading their deployment: * Zuul refresh_deps doesn't work well with gerrit 2.14 schema change (change/patchset numbers are now ints): https://review.openstack.org/577154 * Zuul configloader doesn't skip projects that set include: [] https://review.openstack.org/576016 * Nodepool repeatedly creates a new deleting node when the OpenStack provider fails to remove an instance: https://review.openstack.org/576288 He also split the initial connection loop out of the zk-retry patches: https://review.openstack.org/576047 and https://review.openstack.org/576048. This makes Zuul restart properly if the control plane reboots. Rebased the runC, k8s and openshift nodepool drivers in preparation for the upcoming release of Nodepool. Tobias worked on tenant reconfigure behavior[1] and on a possible multi-enqueue/missing-dequeue issue in the github driver.
[1] https://review.openstack.org/#/q/project:openstack-infra/zuul+branch:master+topic:tenant-reconfiguration Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email -Jim From iwienand at redhat.com Wed Jun 27 01:11:30 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 27 Jun 2018 11:11:30 +1000 Subject: Deprecation: zuul-info/inventory.yaml creation moved from validate-host to log-inventory role In-Reply-To: <1530013132.sv3z4q6tbq.tristanC@fedora> References: <95264cef-f47f-4d36-977f-271170745c58@redhat.com> <1530013132.sv3z4q6tbq.tristanC@fedora> Message-ID: On 06/26/2018 09:40 PM, Tristan Cacqueray wrote: > Then for some other reasons, when a job run without nodeset on localhost, > then the inventory_file is undefined and it needs to be set like that: >
> - name: Define inventory_file fact
>   set_fact:
>     inventory_file: "/tmp/{{ zuul.build }}/ansible/inventory.yaml"
We determined it was correct Ansible behaviour that inventory_file isn't set in this case [1]. The "/tmp" was the only kink in the above, as zuul didn't directly export the path. It is suggested that zuul could export this directly in [2]. The job can then pick that up in [3] and we should then have this case covered. -i [1] http://docs.ansible.com/ansible/latest/porting_guide_2.4.html#inventory [2] https://review.openstack.org/578235 [3] https://review.openstack.org/578234 From corvus at inaugust.com Fri Jun 29 16:24:20 2018 From: corvus at inaugust.com (James E. Blair) Date: Fri, 29 Jun 2018 09:24:20 -0700 Subject: Matching tags against branches Message-ID: <87fu15a7d7.fsf@meyer.lemoncheese.net> Hi, Recently a discussion came up in OpenStack about moving some job invocations out of our global config-project and into individual project repos. Some of the jobs in question run in our "release" pipeline, which is triggered by tag events. These pipelines have Tag items enqueued into them, rather than Change items. Tags have a ref (the name of the tag), but they don't have a branch. This is because, in git, a tag may point to a commit that is on one branch, many branches, or no branches at all. Moreover, because of this, none of the code review systems attempt to provide "branch" information for tags. The current behavior in Zuul is that explicit branch matchers only match branches or refs. So you can define a job with "branches: stable", and it will only run on changes to the "stable" branch. Or you can define a job with "branches: 3\..*" and it will only run on 3.x tags. Or you can say "branches: [stable, 3\..*]" and it will run on both. Implicit branch matchers match branches if the item has one (eg, a Change). But if the item does not (eg, a Tag), they always match. This means that a project can add a job to a tag-based pipeline in-repo, and it will work. The project stanza will get an implicit branch matcher, but that will always match the tag item, so the job will always run. But this means that every project stanza for the project matches. So if the stable branch says "run foo on release" and the master branch says "run bar on release", both foo and bar will run on release. There is no way to implicitly have different jobs running on tags "from different branches" (I put that in scare quotes because that's an artificial construct that we're placing on git). Of course this can be done with explicit branch matchers, but determining which version numbers correspond to which branches and building those regexes can be high maintenance.
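(For illustration, the explicit form in a config-project looks something like this sketch, where the job name and the version regex are invented:

  - project:
      name: example/project
      release:
        jobs:
          - release-example:
              branches:
                - stable
                - ^3\..*$

Keeping that version regex in line with what is actually on each branch is the maintenance burden in question.)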
If we want to make this a bit more intuitive, there are two approaches we could take. One would be to perform a dynamic reconfiguration for all tag items to determine the actual in-repo content for that tag, and use it to the exclusion of all other in-repo content from that project. That would produce the behavior of running whatever was defined in that commit. This is very intuitive for folks coming from other CI systems, however, it's rather resource intensive for Zuul, and could pose problems if, for example, a stable branch project-pipeline used a job defined on the master branch. The other approach would be to attempt to create the fiction that tags have branches. We could ask the mergers to run "git branch --contains" on the tagged commit and find out which branches it is in. We can then compare that set of branches to any explicit and implicit branch matchers. If any of the branches match, we'll consider the Tag item to have matched. This will probably do the expected thing in the typical case. There are still some edge cases, but they generally degrade to the situation we have today, in that multiple versions of a job will all be run. Of course, in any case (now or with this change) full control can always be achieved in a config-project with explicit branch matchers. I've begun implementing this idea, mostly as an exercise in exploration. I'm tentatively inclined to think that it's an improvement and we should consider proceeding with it. But I'd really like other folks to think about whether it's a good idea, and whether it might produce more problems than it solves. Here's the initial implementation; it's lacking only documentation at this point. Some of the edge cases are described in the commit message, so I refer you to that for further reading: https://review.openstack.org/578557 Thanks, Jim From doug at doughellmann.com Fri Jun 29 18:01:25 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 29 Jun 2018 14:01:25 -0400 Subject: Matching tags against branches In-Reply-To: <87fu15a7d7.fsf@meyer.lemoncheese.net> References: <87fu15a7d7.fsf@meyer.lemoncheese.net> Message-ID: <1530294886-sup-2648@lrrr.local> Excerpts from corvus's message of 2018-06-29 09:24:20 -0700: > Hi, > > Recently a discussion came up in OpenStack about moving some job > invocations out of our global config-project and into individual project > repos. Some of the jobs in question run in our "release" pipeline, > which is triggered by tag events. > > These pipelines have Tag items enqueued into them, rather than Change > items. Tags have a ref (the name of the tag), but they don't have a > branch. This is because, in git, a tag may point to a commit that is on > one branch, many branches, or no branches at all. Moreover, because of > this, none of the code review systems attempt to provide "branch" > information for tags. > > The current behavior in Zuul is that explicit branch matchers only match > branches or refs. So you can define a job with "branches: stable", and > it only run on changes to the "stable" branch. Or you can define a job > with "branches: 3\..*" and it will only run on 3.x tags. Or you can say > "branches: [stable, 3\..*]" and it will run on both. > > Implicit branch matchers match branches if the item has one (eg, a > Change). But if the item does not (eg, a Tag), they always match. This > means that a project can add a job to a tag-based pipeline in-repo, and > it will work. 
> The project stanza will get an implicit branch matcher,
> but that will always match the tag item, so the job will always run.
> But this means that every project stanza for the project matches.  So
> if the stable branch says "run foo on release" and the master branch
> says "run bar on release", both foo and bar will run on release.
>
> There is no way to implicitly have different jobs running on tags "from
> different branches" (I put that in scare quotes because that's an
> artificial construct that we're placing on git).  Of course this can be
> done with explicit branch matchers, but determining which version
> numbers correspond to which branches and building those regexes can be
> high maintenance.
>
> If we want to make this a bit more intuitive, there are two approaches
> we could take.  One would be to perform a dynamic reconfiguration for
> all tag items to determine the actual in-repo content for that tag, and
> use it to the exclusion of all other in-repo content from that project.
> That would produce the behavior of running whatever was defined in that
> commit.  This is very intuitive for folks coming from other CI systems;
> however, it's rather resource intensive for Zuul, and could pose
> problems if, for example, a stable branch project-pipeline used a job
> defined on the master branch.
>
> The other approach would be to attempt to create the fiction that tags
> have branches.  We could ask the mergers to run "git branch --contains"
> on the tagged commit and find out which branches it is in.  We can then
> compare that set of branches to any explicit and implicit branch
> matchers.  If any of the branches match, we'll consider the Tag item to
> have matched.
>
> This will probably do the expected thing in the typical case.  There are
> still some edge cases, but they generally degrade to the situation we
> have today, in that multiple versions of a job will all be run.  Of
> course, in any case (now or with this change) full control can always be
> achieved in a config-project with explicit branch matchers.

For the standard OpenStack release processes we do frequently tag
a commit that is only on one branch and then create another branch
at that same commit within a short period of time.  I wonder if the
race condition implied there would be a regular source of problems.
We could probably work out some way to ensure the branch only happens
after the tag is processed, but it sounds like this approach isn't
going to be an automatic win.

> I've begun implementing this idea, mostly as an exercise in
> exploration.  I'm tentatively inclined to think that it's an improvement
> and we should consider proceeding with it.  But I'd really like other
> folks to think about whether it's a good idea, and whether it might
> produce more problems than it solves.
>
> Here's the initial implementation; it's lacking only documentation at
> this point.  Some of the edge cases are described in the commit message,
> so I refer you to that for further reading:
>
> https://review.openstack.org/578557
>
> Thanks,
>
> Jim
>

From fungi at yuggoth.org Fri Jun 29 18:07:11 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 29 Jun 2018 18:07:11 +0000
Subject: Matching tags against branches
In-Reply-To: <1530294886-sup-2648@lrrr.local>
References: <87fu15a7d7.fsf@meyer.lemoncheese.net>
 <1530294886-sup-2648@lrrr.local>
Message-ID: <20180629180710.uv5fpcujqydqpmyi@yuggoth.org>

On 2018-06-29 14:01:25 -0400 (-0400), Doug Hellmann wrote:
[...]
> For the standard OpenStack release processes we do frequently tag
> a commit that is only on one branch and then create another branch
> at that same commit within a short period of time.  I wonder if the
> race condition implied there would be a regular source of problems.
> We could probably work out some way to ensure the branch only happens
> after the tag is processed, but it sounds like this approach isn't
> going to be an automatic win.
[...]

Unless I'm not thinking the situation through, it seems like
unwanted behavior would arise from a workflow like:

1. branch stable from master

2. merge new commits to master changing job behavior to be
   unsuitable for the stable branch

3. push a tag which refers to a commit present on both master and
   stable

Or is it enough to have a tag appearing "on" both branches when its
jobs run even if you haven't altered the job configuration on master
yet?
-- 
Jeremy Stanley

From doug at doughellmann.com Fri Jun 29 18:50:01 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 29 Jun 2018 14:50:01 -0400
Subject: Matching tags against branches
In-Reply-To: <20180629180710.uv5fpcujqydqpmyi@yuggoth.org>
References: <87fu15a7d7.fsf@meyer.lemoncheese.net>
 <1530294886-sup-2648@lrrr.local>
 <20180629180710.uv5fpcujqydqpmyi@yuggoth.org>
Message-ID: <1530297195-sup-6430@lrrr.local>

Excerpts from Jeremy Stanley's message of 2018-06-29 18:07:11 +0000:
> On 2018-06-29 14:01:25 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > For the standard OpenStack release processes we do frequently tag
> > a commit that is only on one branch and then create another branch
> > at that same commit within a short period of time.  I wonder if the
> > race condition implied there would be a regular source of problems.
> > We could probably work out some way to ensure the branch only happens
> > after the tag is processed, but it sounds like this approach isn't
> > going to be an automatic win.
> [...]
>
> Unless I'm not thinking the situation through, it seems like
> unwanted behavior would arise from a workflow like:
>
> 1. branch stable from master
>
> 2. merge new commits to master changing job behavior to be
>    unsuitable for the stable branch
>
> 3. push a tag which refers to a commit present on both master and
>    stable
>
> Or is it enough to have a tag appearing "on" both branches when its
> jobs run even if you haven't altered the job configuration on master
> yet?

I think there are 3 separate cases.  I'm not sure which are important
(or likely to trigger some wrong behavior) because I apparently
can't wrap my head around how zuul actually decides what jobs to
run (it seems I get it wrong in some subtle way every time I think
I have it right).

In the first case, I was pointing out that although we usually think
of tags as being on one branch, there's a period of time around the
RC1 phase of *every cycle* where that isn't necessarily the case,
because the RC1 tag is where we make the branch.  So, if there is
any reason for zuul to "get confused" by a tag being on more than
one branch (even if the configuration is the same), it's likely to
happen regularly.

The second case is where the race condition between when the tag
is applied and when zuul does the work to figure out what jobs to
run allows us to have different job configuration on the two branches.
We are unlikely to change any job settings on the new stable branch
(or master, for that matter) within the small time window before
the jobs triggered by the tag have a chance to run, but that doesn't
mean it won't ever happen.

And then there's the situation where we go back and tag some older
commit that happens to be on multiple branches.  We tend not to do
that, but don't have any particular rule against it.  We don't merge
branches together any more (and as far as I know all of the branches
where we did do that are now marked -eol), so merging will not be
a cause of a commit being on more than one branch.  There is a very
small chance that we would tag an existing commit that was already
the base of a branch off of master and that had a different
configuration.  If we're following our practices that's not very
likely, though, so I am not too worried about that case.

Doug

From corvus at inaugust.com Fri Jun 29 22:30:40 2018
From: corvus at inaugust.com (James E. Blair)
Date: Fri, 29 Jun 2018 15:30:40 -0700
Subject: Matching tags against branches
In-Reply-To: <1530297195-sup-6430@lrrr.local> (Doug Hellmann's message of
 "Fri, 29 Jun 2018 14:50:01 -0400")
References: <87fu15a7d7.fsf@meyer.lemoncheese.net>
 <1530294886-sup-2648@lrrr.local>
 <20180629180710.uv5fpcujqydqpmyi@yuggoth.org>
 <1530297195-sup-6430@lrrr.local>
Message-ID: <87a7rd8bu7.fsf@meyer.lemoncheese.net>

Doug Hellmann writes:

> Excerpts from Jeremy Stanley's message of 2018-06-29 18:07:11 +0000:
>> On 2018-06-29 14:01:25 -0400 (-0400), Doug Hellmann wrote:
>> [...]
>> > For the standard OpenStack release processes we do frequently tag
>> > a commit that is only on one branch and then create another branch
>> > at that same commit within a short period of time.  I wonder if the
>> > race condition implied there would be a regular source of problems.
>> > We could probably work out some way to ensure the branch only happens
>> > after the tag is processed, but it sounds like this approach isn't
>> > going to be an automatic win.
>> [...]
>>
>> Unless I'm not thinking the situation through, it seems like
>> unwanted behavior would arise from a workflow like:
>>
>> 1. branch stable from master
>>
>> 2. merge new commits to master changing job behavior to be
>>    unsuitable for the stable branch
>>
>> 3. push a tag which refers to a commit present on both master and
>>    stable
>>
>> Or is it enough to have a tag appearing "on" both branches when its
>> jobs run even if you haven't altered the job configuration on master
>> yet?
>
> I think there are 3 separate cases.  I'm not sure which are important
> (or likely to trigger some wrong behavior) because I apparently
> can't wrap my head around how zuul actually decides what jobs to
> run (it seems I get it wrong in some subtle way every time I think
> I have it right).
>
> In the first case, I was pointing out that although we usually think
> of tags as being on one branch, there's a period of time around the
> RC1 phase of *every cycle* where that isn't necessarily the case,
> because the RC1 tag is where we make the branch.  So, if there is
> any reason for zuul to "get confused" by a tag being on more than
> one branch (even if the configuration is the same), it's likely to
> happen regularly.

In this case, Zuul would run the jobs for both branches: if the
branches specify different job names, two jobs will run; if they have
different variants, only the one job will run, but both variants will
apply (ie, the configuration will be merged).
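To sketch the variant case with hypothetical names: if master and
stable each carry a variant of the same release job, say

  # .zuul.yaml on master
  - project:
      release:
        jobs:
          - publish-release:
              vars:
                publish_latest: true

  # .zuul.yaml on stable
  - project:
      release:
        jobs:
          - publish-release:
              vars:
                publish_stable: true

then a tag reachable from both branches would run publish-release
once, with both sets of vars merged in.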
In this specific case, I'm inclined to think that the configuration
will not have diverged between the two branches, so we wouldn't be
surprised by the behavior here.

> The second case is where the race condition between when the tag
> is applied and when zuul does the work to figure out what jobs to
> run allows us to have different job configuration on the two branches.
> We are unlikely to change any job settings on the new stable branch
> (or master, for that matter) within the small time window before
> the jobs triggered by the tag have a chance to run, but that doesn't
> mean it won't ever happen.

As soon as the tag is applied, Zuul starts running the jobs for that
tag.  Even if that isn't instantaneous, the event goes into the same
event queue as other trigger events, so it will certainly be processed
before any changes proposed to either branch.

However, if for some reason the jobs for the tag were delayed (or
simply ran long) long enough for someone to propose *and* land a
change adding or removing a job in one of the branches that tagged
commit showed up in, all before the jobs for the tag itself finished,
then Zuul will start running the new job (or cancel the removed job)
on the tag item in the pipeline.  It will not, however, alter the
configuration of a job it has already decided to run (even if it
hasn't started running).  In short, only job additions and removals
might be affected by this, and those only in rare cases.

> And then there's the situation where we go back and tag some older
> commit that happens to be on multiple branches.  We tend not to do
> that, but don't have any particular rule against it.  We don't merge
> branches together any more (and as far as I know all of the branches
> where we did do that are now marked -eol), so merging will not be
> a cause of a commit being on more than one branch.  There is a very
> small chance that we would tag an existing commit that was already
> the base of a branch off of master and that had a different
> configuration.  If we're following our practices that's not very
> likely, though, so I am not too worried about that case.

This would certainly cause both configurations to run, and by its
nature, is the most likely to have divergent configuration between the
branches.  However, it also falls into the category where the same
thing would happen today with in-repo config, so it doesn't strike me
as a regression.

Thanks for these detailed cases!

-Jim

From mordred at inaugust.com Sat Jun 30 13:38:18 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Sat, 30 Jun 2018 08:38:18 -0500
Subject: Removing the add_host exclusion
Message-ID: <20e0a37d-79da-da2a-0ad9-2e0686ebdb22@inaugust.com>

Hey everybody,

While working on the OpenStack Infra spec for updating our config
management to use Ansible, which involves some pieces of using Zuul
for CD, I ran into an issue related to add_host - which we currently
prevent people from using.

I'd like to use add_host in jobs in a non-trusted repo - but add_host
is on the list of things we block for security reasons.  I **think**
we're blocking it as a hold-over from the time before we had per-job
ssh keys (we were worried about a user using add_host to connect to
another job's nodes).

Since we do per-build keys now, that vector is gone - and I can't come
up with any more reasons why we should be blocking add_host from use.
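For concreteness, the sort of untrusted-playbook task this would
unblock looks something like the following (the address, group, and
user here are purely illustrative):

  - name: Add a server the job just created to the inventory
    add_host:
      name: 192.0.2.10
      groups: deploy_targets
      ansible_user: zuul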
It can't be used to attack anything that we don't already protect from
in a different place (like add_host with a connection_type: local
would still get blocked on usage by the 'can't execute on localhost'
exclusions).

So I'd like to relax the exclusion - unless someone has a reason we
should keep it?

Thanks!
Monty

From pabelanger at redhat.com Sat Jun 30 15:56:05 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Sat, 30 Jun 2018 11:56:05 -0400
Subject: Removing the add_host exclusion
In-Reply-To: <20e0a37d-79da-da2a-0ad9-2e0686ebdb22@inaugust.com>
References: <20e0a37d-79da-da2a-0ad9-2e0686ebdb22@inaugust.com>
Message-ID: <20180630155605.GA2665@localhost.localdomain>

On Sat, Jun 30, 2018 at 08:38:18AM -0500, Monty Taylor wrote:
> Hey everybody,
>
> While working on the OpenStack Infra spec for updating our config management
> to use Ansible, which involves some pieces of using Zuul for CD, I ran into
> an issue related to add_host - which we currently prevent people from using.
>
> I'd like to use add_host in jobs in a non-trusted repo - but add_host is on
> the list of things we block for security reasons.  I **think** we're blocking
> it as a hold-over from the time before we had per-job ssh keys (we were
> worried about a user using add_host to connect to another job's nodes).
>
> Since we do per-build keys now, that vector is gone - and I can't come up
> with any more reasons why we should be blocking add_host from use.  It can't
> be used to attack anything that we don't already protect from in a different
> place (like add_host with a connection_type: local would still get blocked
> on usage by the 'can't execute on localhost' exclusions).
>
> So I'd like to relax the exclusion - unless someone has a reason we should
> keep it?
>
Don't we also need to allow for remote code execution on the executor
to add the private keys of the host in add_host?  For example, today
we use ssh-add to set up the private key for the log server.  We'd
likely need to do the same thing for anything else we want to access
via add_host.  The same question applies to known_hosts; we'd also
have to whitelist it so the new SSH connection knows the host keys of
the new system.

In general, I think I am okay with the idea; there is nothing stopping
untrusted jobs from working with remote systems.  However, one concern
I do have is that jobs start to rely on external services.  That isn't
a problem for zuul to solve, but more for the operators of a zuul:
we'd just be saying it would be okay for them to do it directly from a
zuul-executor in our control plane.

As I type this out, maybe we should also consider some sort of ACL
system for add_host, where untrusted jobs could, say, add an IP range
of our control plane.

-Paul

From tdecacqu at redhat.com Sat Jun 30 16:19:37 2018
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Sat, 30 Jun 2018 16:19:37 +0000
Subject: Removing the add_host exclusion
In-Reply-To: <20e0a37d-79da-da2a-0ad9-2e0686ebdb22@inaugust.com>
References: <20e0a37d-79da-da2a-0ad9-2e0686ebdb22@inaugust.com>
Message-ID: <1530375372.c5w1desgox.tristanC@fedora>

On June 30, 2018 1:38 pm, Monty Taylor wrote:
[...]
> So I'd like to relax the exclusion - unless someone has a reason we
> should keep it?
>
That sounds reasonable.  I actually need this for the openshift driver
to add pods:

https://review.openstack.org/#/c/570668/3/zuul/ansible/action/add_host.py
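For illustration (the pod name and connection settings here are
assumptions, not a settled interface), that would let a job do
something like:

  - add_host:
      name: zuul-build-pod
      groups: pods
      ansible_connection: kubectl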
-Tristan