From mrhillsman at gmail.com Wed Jan 17 02:10:19 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Tue, 16 Jan 2018 20:10:19 -0600 Subject: Zuul Setup - VPN or Environment with No Public Access Message-ID: Hey everyone, I wanted to start a discussion here of something I would like to setup in at least two environments and also consolidate into a tutorial/blog for others to build on/integrate. Please see image of what I have setup as one of the environments. Unfortunately one of the things documentation (could be I just have not come across it) does not explain is the flow/dependencies upon each other of the different daemons. I know some folks tried to explain what could be done but I was still a little loss so hoping this thread can flush out the details.​ -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 52487 bytes Desc: not available URL: From fungi at yuggoth.org Wed Jan 17 16:48:53 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 17 Jan 2018 16:48:53 +0000 Subject: Zuul Setup - VPN or Environment with No Public Access In-Reply-To: References: Message-ID: <20180117164853.57fmt3bhrpuboy5v@yuggoth.org> On 2018-01-16 20:10:19 -0600 (-0600), Melvin Hillsman wrote: [...] > Unfortunately one of the things documentation (could be I just > have not come across it) does not explain is the flow/dependencies > upon each other of the different daemons. I know some folks tried > to explain what could be done but I was still a little loss so > hoping this thread can flush out the details. I agree, we could likely stand to include a diagram showing the traffic flow and default protocols/ports used for communication between the various components. VPNs aside, some users may want to design a more restrictive environment limiting communication between each service with a network firewall and so having all this information in one place in the documentation would be helpful. I know we've got gearman from the various (scheduler, mergers, executors) zuul daemons to the gearman server (which in many cases will still be on the same server as the scheduler due to running as a gear subprocess forked from it). Then we've got zookeeper from the various (scheduler, builder, launcher?) nodepool daemons to the zookeeper cluster members. There's also statsd from pretty much every one of the above to the statsd collector server. And OpenStack API calls from (all?) the nodepool daemons. Mergers (including the implicit ones run by executors) also need to be able to clone and fetch from git repositories listed in the configuration. And ansible (and rsync?) over ssh from the executors to the job nodes. Also the executors likely need to connect to other systems to perform archival/publication activities (logs, documentation, tarballs...). And of course the job nodes need access to whatever things they have to retrieve for testing. Oh, the zuul scheduler also needs to be able to make API calls to whatever systems its connection drivers are configured for (gerrit via ssh, github via https, et cetera). The zuul finger gateway daemon also needs to connect to the unprivileged finger high-ports on the executors. And the zuul web daemon needs access to... the finger gateway? Other things? And zuul (scheduler?) connects to an sql database for its db reporter. 
I guess zuul web needs to be able to query that too for the builds dashboard? What else am I forgetting? Or what did I get wrong above? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Wed Jan 17 17:08:05 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 17 Jan 2018 09:08:05 -0800 Subject: Zuul Setup - VPN or Environment with No Public Access In-Reply-To: <20180117164853.57fmt3bhrpuboy5v@yuggoth.org> (Jeremy Stanley's message of "Wed, 17 Jan 2018 16:48:53 +0000") References: <20180117164853.57fmt3bhrpuboy5v@yuggoth.org> Message-ID: <871sioquq2.fsf@meyer.lemoncheese.net> Jeremy Stanley writes: > On 2018-01-16 20:10:19 -0600 (-0600), Melvin Hillsman wrote: > [...] >> Unfortunately one of the things documentation (could be I just >> have not come across it) does not explain is the flow/dependencies >> upon each other of the different daemons. I know some folks tried >> to explain what could be done but I was still a little loss so >> hoping this thread can flush out the details. > > I agree, we could likely stand to include a diagram showing the > traffic flow and default protocols/ports used for communication > between the various components. VPNs aside, some users may want to > design a more restrictive environment limiting communication between > each service with a network firewall and so having all this > information in one place in the documentation would be helpful. Indeed. We have a place in the documentation for this, but it was missing. I've taken a stab at filling it in, then used your reply as an initial code review. :) Here's the change: https://review.openstack.org/534841 We also need to see if Nodepool's documentation should be updated and start deep-linking to it from Zuul -- or decide if we want to duplicate any content. > I know we've got gearman from the various (scheduler, mergers, > executors) zuul daemons to the gearman server (which in many cases > will still be on the same server as the scheduler due to running as > a gear subprocess forked from it). > > Then we've got zookeeper from the various (scheduler, builder, > launcher?) nodepool daemons to the zookeeper cluster members. In Nodepool v3 we just have builders and launchers, but yes, both need to talk to the ZK cluster. > There's also statsd from pretty much every one of the above to the > statsd collector server. Statsd is optional, but I'd certainly recommend it. > And OpenStack API calls from (all?) the nodepool daemons. > > Mergers (including the implicit ones run by executors) also need to > be able to clone and fetch from git repositories listed in the > configuration. > > And ansible (and rsync?) over ssh from the executors to the job > nodes. Ansible rsync module uses ssh in our case. In the docs though, I just said "connect" since the actual mechanism may be different for, say, Windows users. But yeah, whatever Ansible is going to need to do to connect to the nodepool nodes, the executors need to be able to do that. > Also the executors likely need to connect to other systems to > perform archival/publication activities (logs, documentation, > tarballs...). > > And of course the job nodes need access to whatever things they have > to retrieve for testing. 
> > Oh, the zuul scheduler also needs to be able to make API calls to > whatever systems its connection drivers are configured for (gerrit > via ssh, github via https, et cetera). > > The zuul finger gateway daemon also needs to connect to the > unprivileged finger high-ports on the executors. > > And the zuul web daemon needs access to... the finger gateway? Other > things? Zuul web doesn't use the finger gateway, it also connects to the high finger ports on the executors. > And zuul (scheduler?) connects to an sql database for its db > reporter. I guess zuul web needs to be able to query that too for > the builds dashboard? > > What else am I forgetting? Or what did I get wrong above? That certainly seems like a good start. :) To directly address the case where the cloud that nodepool is using is on a VPN, I think specifically you would need to make sure that the Nodepool launchers, builders, and Zuul executors are on the VPN as well. I think that's it. -Jim From corvus at inaugust.com Wed Jan 17 23:02:11 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 17 Jan 2018 15:02:11 -0800 Subject: Secret handling in branches Message-ID: <87fu74nl70.fsf@meyer.lemoncheese.net> Hi, Currently our handling of secrets in branches is less than ideal. There are two problems: 1) A secret may not be used by a job in another branch of the same project. 2) A secret may not be defined in more than one branch of the same project. The first issue is inconvenient for users which want to, for example, upload an image to dockerhub for all of the branches of their project. The second prohibits the obvious fix for the first issue, but it also means that as soon as a project with a secret definition branches, Zuul will have a configuration error. I believe the original intent was not to restrict secrets to a single branch. If we accept that a secret should be available for use by any branch of a project, the fix for that is simple. The fix for the second issue is also not too difficult -- we can simply permit duplicate definitions, but only on different branches of the same project and with the same content. That allows us to create a new branch from an existing one without a configuration error. I've implemented these fixes in this change: https://review.openstack.org/534956 In an IRC conversation, we briefly considered supporting distinct secrets on multiple branches -- that is, a secret with the same name on multiple branches of the same project but with potentially different values on each branch, but that's a bit more difficult to reason about, and ultimately doesn't make this more secure. An important thing to note about the change is that by permitting the use of secrets across branches, if a project has different group maintaining the different branches, it could allow a secret to leak from one group to the other. In other words, if a project uses secrets, you need to trust the maintainers of all the branches of that project equally. It's also worth noting that even in the current situation, where we (perhaps accidentally) only allowed the secret to be used within a single branch, it is equally trivial for a maintainer of another branch of the same project to simply copy the secret to their branch under a different name and expose it, since all branches of a project share a single private key. Preventing this would require different keys for different branches, which would make branching impossible. I think the proposed solution is the best way of handling this, given the constraints. 
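To make that concrete, here is a rough sketch (the project layout, job, and secret names are invented for illustration) of what the proposed behavior permits: the same secret stanza carried verbatim on master and on a stable branch, with a job on either branch able to use it. Because all branches of a project share a single key, the same ciphertext works on every branch, so carrying the identical copy across branches costs nothing:

  # .zuul.yaml -- identical copy on the master and stable/queens branches
  - secret:
      name: dockerhub_creds
      data:
        username: example-user
        password: !encrypted/pkcs1-oaep
          - <base64 ciphertext produced with the project's public key>

  - job:
      name: upload-image
      run: playbooks/upload.yaml
      secrets:
        - dockerhub_creds

Under the proposed change, Zuul accepts these duplicate definitions only because they appear on different branches of the same project and have identical content.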
I know some folks would like to use secrets in a production-promotion system, where distinct groups have differing levels of access control. I think in those cases, those secrets and jobs will have to be located in distinct projects, not branches. Please let me know your thoughts and if this affects your use of secrets. -Jim From fungi at yuggoth.org Wed Jan 17 23:10:08 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 17 Jan 2018 23:10:08 +0000 Subject: Secret handling in branches In-Reply-To: <87fu74nl70.fsf@meyer.lemoncheese.net> References: <87fu74nl70.fsf@meyer.lemoncheese.net> Message-ID: <20180117231008.salbt3t7wn7btk4j@yuggoth.org> On 2018-01-17 15:02:11 -0800 (-0800), James E. Blair wrote: [...] > The fix for the second issue is also not too difficult -- we can simply > permit duplicate definitions, but only on different branches of the same > project and with the same content. That allows us to create a new > branch from an existing one without a configuration error. [...] What's then the recommended workflow/sequence for changing a secret if it appears on more than one branch? First delete all but one so there are no remaining duplicates, then modify that one remaining copy? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Wed Jan 17 23:28:42 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 17 Jan 2018 15:28:42 -0800 Subject: Secret handling in branches In-Reply-To: <20180117231008.salbt3t7wn7btk4j@yuggoth.org> (Jeremy Stanley's message of "Wed, 17 Jan 2018 23:10:08 +0000") References: <87fu74nl70.fsf@meyer.lemoncheese.net> <20180117231008.salbt3t7wn7btk4j@yuggoth.org> Message-ID: <87r2qom5ed.fsf@meyer.lemoncheese.net> Jeremy Stanley writes: > On 2018-01-17 15:02:11 -0800 (-0800), James E. Blair wrote: > [...] >> The fix for the second issue is also not too difficult -- we can simply >> permit duplicate definitions, but only on different branches of the same >> project and with the same content. That allows us to create a new >> branch from an existing one without a configuration error. > [...] > > What's then the recommended workflow/sequence for changing a secret > if it appears on more than one branch? First delete all but one so > there are no remaining duplicates, then modify that one remaining > copy? I think that's the only thing that would work with the proposed change. It might be a good idea to delete secrets (and nodesets, and semaphores -- I'm working on similar changes for these) shortly after branch creation just to keep things tidy. Retiring a secret would be straightforward -- just delete it from master and let it continue to live on any branches where it's still relevant. Alternatively, if we decide this is too onerous, we could allow secrets to have differing values across branches, and simply only use the first one defined. Or we could allow and store distinct values across branches, matching them up with the appropriate job variants on those branches. It would make the workflow you describe here easier -- we'd just need to keep in mind it doesn't change the security posture at all -- the keys would be the same (as mentioned in the previous email). It would mean there's a bit more "magic" going on, and may lead people to mistake which branch-version of a secret is being used. 
If we're not sure which way to go, the proposed change is the more conservative of the two approaches and is forward compatible with the other -- if we require that any duplicate secrets have the same data today, then we can chose to permit different data later without breaking existing usage. -Jim From clint at fewbar.com Thu Jan 18 00:57:41 2018 From: clint at fewbar.com (Clint Byrum) Date: Wed, 17 Jan 2018 16:57:41 -0800 Subject: Secret handling in branches In-Reply-To: <87r2qom5ed.fsf@meyer.lemoncheese.net> References: <87fu74nl70.fsf@meyer.lemoncheese.net> <20180117231008.salbt3t7wn7btk4j@yuggoth.org> <87r2qom5ed.fsf@meyer.lemoncheese.net> Message-ID: <1516235439-sup-9125@fewbar.com> Excerpts from corvus's message of 2018-01-17 15:28:42 -0800: > Jeremy Stanley writes: > > > On 2018-01-17 15:02:11 -0800 (-0800), James E. Blair wrote: > > [...] > >> The fix for the second issue is also not too difficult -- we can simply > >> permit duplicate definitions, but only on different branches of the same > >> project and with the same content. That allows us to create a new > >> branch from an existing one without a configuration error. > > [...] > > > > What's then the recommended workflow/sequence for changing a secret > > if it appears on more than one branch? First delete all but one so > > there are no remaining duplicates, then modify that one remaining > > copy? > > I think that's the only thing that would work with the proposed change. > It might be a good idea to delete secrets (and nodesets, and > semaphores -- I'm working on similar changes for these) shortly after > branch creation just to keep things tidy. > > Retiring a secret would be straightforward -- just delete it from master > and let it continue to live on any branches where it's still relevant. > > Alternatively, if we decide this is too onerous, we could allow secrets > to have differing values across branches, and simply only use the first > one defined. > > Or we could allow and store distinct values across branches, matching > them up with the appropriate job variants on those branches. It would > make the workflow you describe here easier -- we'd just need to keep in > mind it doesn't change the security posture at all -- the keys would be > the same (as mentioned in the previous email). It would mean there's a > bit more "magic" going on, and may lead people to mistake which > branch-version of a secret is being used. > > If we're not sure which way to go, the proposed change is the more > conservative of the two approaches and is forward compatible with the > other -- if we require that any duplicate secrets have the same data > today, then we can chose to permit different data later without breaking > existing usage. This is thorny because without some kind of hierarchy/ordering on which branch is loaded first after master (I assume master is always the top?) it's hard to make any rules about varying. Could we just add a branch matcher to secrets the way we do with jobs? I haven't thought through this, just didn't see it as an alternative in your summary. Basically you can only have a different value if you have an explicit branch matcher that differs too? 
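For illustration, such a stanza might look something like the following -- to be clear, this is hypothetical syntax that Zuul does not accept today, since secrets currently have no branch matcher:

  # Hypothetical: a 'branches' attribute on secrets, mirroring jobs.
  - secret:
      name: dockerhub_creds
      branches: master
      data:
        password: !encrypted/pkcs1-oaep
          - <ciphertext used on master>

  - secret:
      name: dockerhub_creds
      branches: stable/queens
      data:
        password: !encrypted/pkcs1-oaep
          - <ciphertext used on stable/queens>

That would at least make it explicit to a reader which branch's value a given job ends up with.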
From tdecacqu at redhat.com Thu Jan 18 02:22:04 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 18 Jan 2018 02:22:04 +0000 Subject: Secret handling in branches In-Reply-To: <87fu74nl70.fsf@meyer.lemoncheese.net> References: <87fu74nl70.fsf@meyer.lemoncheese.net> Message-ID: <1516241463.bwi0kezn2s.tristanC@fedora> On January 17, 2018 11:02 pm, James E. Blair wrote: > Hi, > > Currently our handling of secrets in branches is less than ideal. There > are two problems: > > 1) A secret may not be used by a job in another branch of the same > project. > > 2) A secret may not be defined in more than one branch of the same > project. > Hi, it seems like there is an extra problem, not related to the branch use-case, but that could be taken into account if relevant: 3) A secret may not be overriden in a project pipeline. If this was possible, then a project could use the python-upload-pypi with their own account without having to create a new job. For example: - project: release: jobs: - python-upload-pypi: secrets: - name: pypi_info secret: project_pypi_account Note that Zuul doesn't prevent to merge such change, but the secret doesn't seems to be taken into account. Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 484 bytes Desc: not available URL: From mrhillsman at gmail.com Thu Jan 18 02:30:20 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 17 Jan 2018 20:30:20 -0600 Subject: Zuul Setup - VPN or Environment with No Public Access In-Reply-To: <871sioquq2.fsf@meyer.lemoncheese.net> References: <20180117164853.57fmt3bhrpuboy5v@yuggoth.org> <871sioquq2.fsf@meyer.lemoncheese.net> Message-ID: Awesome, thanks for getting this started so quickly. A quick look at the patch and I am excited to see the detail coming out. So I have a publicly accessible Zookeeper and Gearman with other services in the same environment working fine. I added a zuul-executor and nodepool (builder+launcher) to an environment behind a VPN and they can successfully talk to the Zookeeper and Gearman in the other public environment. I notice however that nodepool is not creating any images or instances within the VPN environment and also would like to know how to trigger a job for the specific executor I have running within the VPN environment. On Wed, Jan 17, 2018 at 11:08 AM, James E. Blair wrote: > Jeremy Stanley writes: > > > On 2018-01-16 20:10:19 -0600 (-0600), Melvin Hillsman wrote: > > [...] > >> Unfortunately one of the things documentation (could be I just > >> have not come across it) does not explain is the flow/dependencies > >> upon each other of the different daemons. I know some folks tried > >> to explain what could be done but I was still a little loss so > >> hoping this thread can flush out the details. > > > > I agree, we could likely stand to include a diagram showing the > > traffic flow and default protocols/ports used for communication > > between the various components. VPNs aside, some users may want to > > design a more restrictive environment limiting communication between > > each service with a network firewall and so having all this > > information in one place in the documentation would be helpful. > > Indeed. We have a place in the documentation for this, but it was > missing. I've taken a stab at filling it in, then used your reply as an > initial code review. 
:) > > Here's the change: https://review.openstack.org/534841 > > We also need to see if Nodepool's documentation should be updated and > start deep-linking to it from Zuul -- or decide if we want to duplicate > any content. > > > I know we've got gearman from the various (scheduler, mergers, > > executors) zuul daemons to the gearman server (which in many cases > > will still be on the same server as the scheduler due to running as > > a gear subprocess forked from it). > > > > Then we've got zookeeper from the various (scheduler, builder, > > launcher?) nodepool daemons to the zookeeper cluster members. > > In Nodepool v3 we just have builders and launchers, but yes, both need > to talk to the ZK cluster. > > > There's also statsd from pretty much every one of the above to the > > statsd collector server. > > Statsd is optional, but I'd certainly recommend it. > > > And OpenStack API calls from (all?) the nodepool daemons. > > > > Mergers (including the implicit ones run by executors) also need to > > be able to clone and fetch from git repositories listed in the > > configuration. > > > > And ansible (and rsync?) over ssh from the executors to the job > > nodes. > > Ansible rsync module uses ssh in our case. In the docs though, I just > said "connect" since the actual mechanism may be different for, say, > Windows users. But yeah, whatever Ansible is going to need to do to > connect to the nodepool nodes, the executors need to be able to do > that. > > > Also the executors likely need to connect to other systems to > > perform archival/publication activities (logs, documentation, > > tarballs...). > > > > And of course the job nodes need access to whatever things they have > > to retrieve for testing. > > > > Oh, the zuul scheduler also needs to be able to make API calls to > > whatever systems its connection drivers are configured for (gerrit > > via ssh, github via https, et cetera). > > > > The zuul finger gateway daemon also needs to connect to the > > unprivileged finger high-ports on the executors. > > > > And the zuul web daemon needs access to... the finger gateway? Other > > things? > > Zuul web doesn't use the finger gateway, it also connects to the high > finger ports on the executors. > > > And zuul (scheduler?) connects to an sql database for its db > > reporter. I guess zuul web needs to be able to query that too for > > the builds dashboard? > > > > What else am I forgetting? Or what did I get wrong above? > > That certainly seems like a good start. :) > > To directly address the case where the cloud that nodepool is using is > on a VPN, I think specifically you would need to make sure that the > Nodepool launchers, builders, and Zuul executors are on the VPN as well. > I think that's it. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Thu Jan 18 15:44:33 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Thu, 18 Jan 2018 07:44:33 -0800 Subject: Secret handling in branches In-Reply-To: <1516235439-sup-9125@fewbar.com> (Clint Byrum's message of "Wed, 17 Jan 2018 16:57:41 -0800") References: <87fu74nl70.fsf@meyer.lemoncheese.net> <20180117231008.salbt3t7wn7btk4j@yuggoth.org> <87r2qom5ed.fsf@meyer.lemoncheese.net> <1516235439-sup-9125@fewbar.com> Message-ID: <87shb3kw7y.fsf@meyer.lemoncheese.net> Clint Byrum writes: > Excerpts from corvus's message of 2018-01-17 15:28:42 -0800: >> Jeremy Stanley writes: >> >> > On 2018-01-17 15:02:11 -0800 (-0800), James E. Blair wrote: >> > [...] >> >> The fix for the second issue is also not too difficult -- we can simply >> >> permit duplicate definitions, but only on different branches of the same >> >> project and with the same content. That allows us to create a new >> >> branch from an existing one without a configuration error. >> > [...] >> > >> > What's then the recommended workflow/sequence for changing a secret >> > if it appears on more than one branch? First delete all but one so >> > there are no remaining duplicates, then modify that one remaining >> > copy? >> >> I think that's the only thing that would work with the proposed change. >> It might be a good idea to delete secrets (and nodesets, and >> semaphores -- I'm working on similar changes for these) shortly after >> branch creation just to keep things tidy. >> >> Retiring a secret would be straightforward -- just delete it from master >> and let it continue to live on any branches where it's still relevant. >> >> Alternatively, if we decide this is too onerous, we could allow secrets >> to have differing values across branches, and simply only use the first >> one defined. >> >> Or we could allow and store distinct values across branches, matching >> them up with the appropriate job variants on those branches. It would >> make the workflow you describe here easier -- we'd just need to keep in >> mind it doesn't change the security posture at all -- the keys would be >> the same (as mentioned in the previous email). It would mean there's a >> bit more "magic" going on, and may lead people to mistake which >> branch-version of a secret is being used. >> >> If we're not sure which way to go, the proposed change is the more >> conservative of the two approaches and is forward compatible with the >> other -- if we require that any duplicate secrets have the same data >> today, then we can chose to permit different data later without breaking >> existing usage. > > This is thorny because without some kind of hierarchy/ordering on which > branch is loaded first after master (I assume master is always the top?) > it's hard to make any rules about varying. The current order is master, then others alphabetically. There are probably some places where that needs to change to "master or then others alphabetically". But I think otherwise that's pretty close to stabilized. > Could we just add a branch matcher to secrets the way we do with jobs? I > haven't thought through this, just didn't see it as an alternative in > your summary. Basically you can only have a different value if you have > an explicit branch matcher that differs too? That's a possibility, or we could even make it a bit more automatic since secrets are bound tightly to job-playbooks. That means whenever a secret is used, we know exactly which job variant requested it, and could use that branch, or copy it's branch matcher. The same isn't true for nodesets or semaphores though, and they also have the same issues. 
So if we did support different values, it would probably be good to do
that in the same way for all 3 items, which probably leaves something
like branch matchers as you suggest.

I guess the main question is: is there a use case for unique values of
secrets or nodesets across branches?  (I'm fairly certain we never want
differing values of semaphores -- we don't even allow them to be changed
in speculative execution.)

-Jim

From corvus at inaugust.com  Thu Jan 18 16:07:32 2018
From: corvus at inaugust.com (James E. Blair)
Date: Thu, 18 Jan 2018 08:07:32 -0800
Subject: Secret handling in branches
In-Reply-To: <1516241463.bwi0kezn2s.tristanC@fedora> (Tristan Cacqueray's message of "Thu, 18 Jan 2018 02:22:04 +0000")
References: <87fu74nl70.fsf@meyer.lemoncheese.net> <1516241463.bwi0kezn2s.tristanC@fedora>
Message-ID: <87o9lrkv5n.fsf@meyer.lemoncheese.net>

Tristan Cacqueray writes:

> Hi, it seems like there is an extra problem, not related to the branch
> use-case, but that could be taken into account if relevant:
>
> 3) A secret may not be overriden in a project pipeline.
>
> If this was possible, then a project could use the python-upload-pypi
> with their own account without having to create a new job. For example:
>
>   - project:
>       release:
>         jobs:
>           - python-upload-pypi:
>               secrets:
>                 - name: pypi_info
>                   secret: project_pypi_account
>
> Note that Zuul doesn't prevent to merge such change, but the secret
> doesn't seems to be taken into account.

Secrets are bound to the playbooks attached to the job variant where
the secret is requested.  So if you have:

  - job:
      name: parent
      run: playbooks/upload.yaml
      secrets: upload_creds

  - job:
      name: child
      run: playbook/expose_secrets.yaml

And you run the child job, it will not have access to the upload_creds
secret when it runs the expose_secrets playbook.  Which is good. :)

The case you cite is the inverse of that, but it fails for the same
reason.  If you added a playbook to that project-pipeline-variant, then
it would be able to use project_pypi_account.

I could see wanting to make this a one-way system, where you still
can't do my example but could do yours.  If we can convince ourselves
that is safe in all cases, we could probably implement it.  But for the
moment, I think we settled on the idea that the same person(s)
controlling the playbook that uses the secret should be the same ones
controlling which secret is used.

-Jim

From corvus at inaugust.com  Thu Jan 18 16:13:29 2018
From: corvus at inaugust.com (James E. Blair)
Date: Thu, 18 Jan 2018 08:13:29 -0800
Subject: Zuul Setup - VPN or Environment with No Public Access
In-Reply-To: (Melvin Hillsman's message of "Wed, 17 Jan 2018 20:30:20 -0600")
References: <20180117164853.57fmt3bhrpuboy5v@yuggoth.org> <871sioquq2.fsf@meyer.lemoncheese.net>
Message-ID: <87k1wfkuvq.fsf@meyer.lemoncheese.net>

Melvin Hillsman writes:

> Awesome, thanks for getting this started so quickly. A quick look at the
> patch and I am excited to see the detail coming out. So I have a publicly
> accessible Zookeeper and Gearman with other services in the same
> environment working fine. I added a zuul-executor and nodepool
> (builder+launcher) to an environment behind a VPN and they can successfully
> talk to the Zookeeper and Gearman in the other public environment. I notice
> however that nodepool is not creating any images or instances within the
> VPN environment and also would like to know how to trigger a job for the
> specific executor I have running within the VPN environment.
It looks like in IRC you tracked the issue down to multiple nodepools sharing a zookeeper. Nodepool supports the zookeeper chroot option to facilitate that, but it looks like we neglected to add that option to Zuul. We should do that soon, then that case will be easier. When Zuul has multiple executors, there isn't a way to trigger a job on a specific executor. That's something we can support in the future -- here is a story with an implementation suggestion if someone wants to start working on it: https://storyboard.openstack.org/#!/story/2001125 In the mean time, since you're using VPNs, perhaps you could make all of your executors VPN clients for all of your environments? That way whichever executor gets a job with nodes in some environment, it can reach them. -Jim From Tobias.Henkel at bmw.de Thu Jan 18 17:51:02 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Thu, 18 Jan 2018 17:51:02 +0000 Subject: Secret handling in branches In-Reply-To: <87fu74nl70.fsf@meyer.lemoncheese.net> References: <87fu74nl70.fsf@meyer.lemoncheese.net> Message-ID: <92B05C15-DB71-4C2B-BB7B-B2D3D616E4CA@bmw.de> On 18.01.18, 00:02, "James E. Blair" wrote: Hi, Currently our handling of secrets in branches is less than ideal. There are two problems: 1) A secret may not be used by a job in another branch of the same project. 2) A secret may not be defined in more than one branch of the same project. The first issue is inconvenient for users which want to, for example, upload an image to dockerhub for all of the branches of their project. The second prohibits the obvious fix for the first issue, but it also means that as soon as a project with a secret definition branches, Zuul will have a configuration error. I believe the original intent was not to restrict secrets to a single branch. If we accept that a secret should be available for use by any branch of a project, the fix for that is simple. The fix for the second issue is also not too difficult -- we can simply permit duplicate definitions, but only on different branches of the same project and with the same content. That allows us to create a new branch from an existing one without a configuration error. I've implemented these fixes in this change: https://review.openstack.org/534956 In an IRC conversation, we briefly considered supporting distinct secrets on multiple branches -- that is, a secret with the same name on multiple branches of the same project but with potentially different values on each branch, but that's a bit more difficult to reason about, and ultimately doesn't make this more secure. An important thing to note about the change is that by permitting the use of secrets across branches, if a project has different group maintaining the different branches, it could allow a secret to leak from one group to the other. In other words, if a project uses secrets, you need to trust the maintainers of all the branches of that project equally. It's also worth noting that even in the current situation, where we (perhaps accidentally) only allowed the secret to be used within a single branch, it is equally trivial for a maintainer of another branch of the same project to simply copy the secret to their branch under a different name and expose it, since all branches of a project share a single private key. Preventing this would require different keys for different branches, which would make branching impossible. I think the proposed solution is the best way of handling this, given the constraints. 
I know some folks would like to use secrets in a production-promotion system, where distinct groups have differing levels of access control. I think in those cases, those secrets and jobs will have to be located in distinct projects, not branches. I fully agree on that. Please let me know your thoughts and if this affects your use of secrets. -Jim That all makes sense to me and I think this won’t affect our current use of secrets. Kind regards Tobi _______________________________________________ Zuul-discuss mailing list Zuul-discuss at lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Thu Jan 18 21:24:38 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 18 Jan 2018 13:24:38 -0800 Subject: feature/zuulv3 branches merged into master Message-ID: <878tcukgh5.fsf@meyer.lemoncheese.net> Hi, Today we merged the feature/zuulv3 branches of Nodepool and Zuul into master. In order to delete the feature branch, now that it's no longer necessary, we need to close all of the changes on that branch. In order to do that, I have marked all of them as abandoned. I'd like to stress that this is a technicality and of course the changes themselves are still very much welcome. If you have a pending change to that branch, please re-propose the change to the master branch. If, when doing so, you could include a link to the previous version of the change, especially if previous reviews had been completed, that would be very helpful to reviewers. I'm including a summary of all the changes which were open on feature/zuulv3 shortly before I abandoned them. All 113 changes in status:open branch:feature/zuulv3 Number Subject Project Branch Topic Owner Updated C V W 383437 Increase build worker main loop delay nodepool feature/zuulv3 nodepool-zk James E. 
Blair 2017-12-18 1 -1 408951 Add command list into server.py zuul feature/zuulv3 launcher Joshua Hesketh 2017-12-18 -1 -1 426861 WIP: Add reporter for Federated Message Bus (fedmsg) zuul feature/zuulv3 426861 Paul Belanger 2017-12-18 -1 -1 429850 WIP: Re-enable test_time_database test zuul feature/zuulv3 enable-tests Paul Belanger 2017-12-18 -1 -1 446172 Add reconfigure_handler for executor logging zuul feature/zuulv3 Paul Belanger 2017-12-18 1 -1 450897 Re-enable test_timer_sshkey zuul feature/zuulv3 test-conflict Clint 'SpamapS' B 2017-12-18 -1 454826 [WIP] Integration: Get static nodes from nodepool zuul feature/zuulv3 Cullen Taylor 2017-12-18 -1 456090 Set the git sshkey for the connection zuul feature/zuulv3 zuulv3 Joshua Hesketh 2017-12-18 -1 -1 465852 Add Dockerfile nodepool feature/zuulv3 Tobias Henkel 2017-05-23 -1 1 -1 465912 Add Dockerfile zuul feature/zuulv3 Tobias Henkel 2017-12-28 2 1 -1 466071 Switch from testrepository to stestr zuul feature/zuulv3 stestr Clint 'SpamapS' B 2017-09-21 1 -1 468670 Re-enable test_merge_conflict_reports zuul feature/zuulv3 re-enable-tests Clint 'SpamapS' B 2017-09-21 2 -1 471175 WIP: Status branch protection checking for github zuul feature/zuulv3 branch-protection Jamie Lennox 2017-12-22 1 473811 Add build.started state flag zuul feature/zuulv3 streamer_testing Monty Taylor 2017-12-18 1 -1 -1 473985 Only prepend hostname on multi-node plays zuul feature/zuulv3 streamer_testing Monty Taylor 2017-12-18 1 -1 478675 Add ability to auto-generate simple one-line shell playbooks zuul feature/zuulv3 Monty Taylor 2017-12-18 -1 -1 480843 Ensure build.start_time is defined zuul feature/zuulv3 Tristan Cacqueray 2017-09-21 -1 1 481134 Add v3 update slides zuul feature/zuulv3 480759 Monty Taylor 2017-12-18 1 -1 487538 Use yarn and webpack to manage zuul-web javascript zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 -1 528295 ├─Add babel transpiling enabling use of ES6 features zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 528296 │ └─Add StandardJS linting and analysis zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 528373 │ └─Fix source_url handling for jobs view zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 528297 │ └─Fix StandardJS warnings and turn them to errors zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 528298 │ └─Add bundle analysis to the lint target zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-22 2 1 529193 │ └─Inject url endpoint information zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 528374 │ └─Make bundle of build web content zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 -1 528437 │ └─Remove use strict zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 -1 529686 ├─Add nodeenv and plumb it in to tox zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 -1 529690 └─Add web server URL tests zuul feature/zuulv3 James E. 
Blair 2017-12-21 -1 494655 WIP Replace paste/webob with aiohttp zuul feature/zuulv3 aiohttp Monty Taylor 2017-09-21 -1 497962 Add zookeeper-server to bindep for Red Hat nodepool feature/zuulv3 Monty Taylor 2017-08-25 -1 -1 499224 Keep a cache of branches for projects and update on events zuul feature/zuulv3 github-ratelimit Monty Taylor 2017-09-21 -1 -1 499969 Add /node-list to the webapp nodepool feature/zuulv3 Tristan Cacqueray 2018-01-09 2 1 500254 └─Add /label-list to the webapp nodepool feature/zuulv3 Tristan Cacqueray 2018-01-09 2 1 531390 └─Refactor status functions, add web endpoints, allow params nodepool feature/zuulv3 500254 Matthieu Huin 2018-01-09 1 531703 └─Add a separate module for node management commands nodepool feature/zuulv3 500254 Matthieu Huin 2018-01-09 1 531718 └─webapp: add optional admin endpoint nodepool feature/zuulv3 500254 Matthieu Huin 2018-01-10 1 500159 Always generate the ARA report, even on failure zuul feature/zuulv3 zuulv3 David Moreau Sima 2017-09-21 1 -1 502468 Emit a message to the job log if ansible crashes zuul feature/zuulv3 Monty Taylor 2017-09-21 -1 503166 Don't gather host keys for non ssh connections nodepool feature/zuulv3 windows-support Tobias Henkel 05:42 AM -1 504112 └─Add connection-port to provider diskimage nodepool feature/zuulv3 windows-support Tobias Henkel 05:45 AM -1 504238 Delete IncludeRole object from result object for include_role tas zuul feature/zuulv3 David Moreau Sima 2018-01-09 1 -1 504267 Move github webhook from webapp to zuul-web zuul feature/zuulv3 zuul-web Jesse Keating 2017-12-11 1 1 504464 Consume server.id from shade create exception nodepool feature/zuulv3 Monty Taylor 2017-12-18 2 -1 504526 Add SELinux type enforcement zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-09-21 1 504809 WIP: add abstract job attribute zuul feature/zuulv3 James E. Blair 2017-09-21 -1 505419 Disable action and lookup plugins from 2.4 zuul feature/zuulv3 zuulv3 Monty Taylor 2017-09-21 2 -1 505430 └─Port in changes from ansible 2.4 command module zuul feature/zuulv3 zuulv3 Monty Taylor 2017-09-21 -1 506871 Add restricted-node-labels global limits zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-11-07 1 508774 Add noop job description zuul feature/zuulv3 zuul-web Tristan Cacqueray 05:11 PM 2 -2 1 508793 WIP add repl zuul feature/zuulv3 James E. Blair 2017-10-02 508960 Add memory awareness to system load governor zuul feature/zuulv3 Clint 'SpamapS' B 05:03 AM 2 1 509294 Add jinja filters for manipulating zuul.projects zuul feature/zuulv3 Monty Taylor 2017-10-03 -1 509484 Log exceptions from MODULE FAILURE more consistently zuul feature/zuulv3 Monty Taylor 2017-10-05 2 -1 509531 Provide resource_name in logging as structured data nodepool feature/zuulv3 zuulv3 Monty Taylor 2017-12-18 -1 -1 509903 Remove references to pipelines, queues, and layouts on dequeue zuul feature/zuulv3 zuulv3-fixes James E. Blair 2017-10-06 -1 -1 510301 Add a CRD multi-branch test zuul feature/zuulv3 James E. 
Blair 2017-10-07 -1 512130 Enabled ssh retries for ansible zuul feature/zuulv3 zuulv3-fixes Paul Belanger 2017-10-17 2 1 -1 513003 WIP: Add script for deterministic key generation zuul feature/zuulv3 Tobias Henkel 2017-10-18 -1 513368 Add regex support to project stanzas zuul feature/zuulv3 Tobias Henkel 2018-01-05 1 513387 Add node list to webapp nodepool feature/zuulv3 Tobias Henkel 2017-10-19 1 513885 Refactor Zuul job to use lightweight Vue components zuul feature/zuulv3 cleanup-status Mohammed Naser 2017-12-12 1 1 513886 └─Switch change list to use Vue.js zuul feature/zuulv3 cleanup-status Mohammed Naser 2017-10-21 1 515169 Support autoholding nodes for specific changes/refs zuul feature/zuulv3 autohold-changese Krzysztof Klimond 2017-12-14 1 516920 builder: do not cleanup image for driver not managing image nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 2 1 468624 └─Implement a static driver for Nodepool nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 -1 526325 └─Refactor run_handler to be generic nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 1 532450 └─Refactor NodeLauncher to be generic nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 1 468753 └─Implement an OpenContainer driver nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 -2 -1 521356 └─Implement a Kubernetes driver nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 1 528982 └─Implement an Amazon EC2 driver nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 1 517067 Add BaseSource.getProjectReadonly and refactor zuul feature/zuulv3 517078 Clint 'SpamapS' B 2017-11-03 -1 -1 517078 └─Do not add invalid projets via the /keys API zuul feature/zuulv3 517078 Clint 'SpamapS' B 2017-11-16 2 -1 518279 mqtt: add basic reporter zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-11-29 2 1 519222 Add client unit testing zuul feature/zuulv3 enqueue-ref Ian Wienand 2017-12-06 2 1 519596 Allow run to be list of playbooks zuul feature/zuulv3 Paul Belanger 2017-11-20 -1 1 519654 Override HOME environment variable in bubblewrap zuul feature/zuulv3 bwrap-set-home Clint 'SpamapS' B 2017-11-16 -1 -1 520657 Run image object autocleanup after uploading images nodepool feature/zuulv3 Monty Taylor 2017-11-29 -1 520664 WIP: Convert from legacy to native devstack job nodepool feature/zuulv3 devstack_native David Shrewsbury 2018-01-02 1 520855 Ensure the merger prunes on update zuul feature/zuulv3 fetch-prune Ian Wienand 2017-11-20 -1 1 521625 Add angular to fetch-dependencies.sh zuul feature/zuulv3 Monty Taylor 2017-11-21 -1 1 523640 zk: use kazoo retry facilities nodepool feature/zuulv3 zuulv3 Tristan Cacqueray 05:28 AM -2 1 523697 executor: add log_stream_port and log_stream_file settings zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-11-30 -1 1 524463 Remove zuul._projects zuul feature/zuulv3 project-transitio Ian Wienand 2017-12-01 -1 -1 524773 handler: fix support for handler without launch_manager nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 2 1 524620 └─Add a plugin interface for drivers nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 02:53 AM 1 525851 zk: automatically retry command when connection is lost zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-12-27 -2 1 527579 web: add /{tenant}/jobs/{job_name} route zuul feature/zuulv3 zuul-web Tristan Cacqueray 2017-12-13 -1 1 528588 Use bandit security scanner zuul feature/zuulv3 bandit Clint 'SpamapS' B 
2017-12-17 1 528729 requirements: remove paramiko <2.0 cap zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-05 2 1 528969 config: add statsd-server config parameter nodepool feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-15 2 -1 529013 web: add OpenAPI documentation zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-12-27 2 529060 Make Zuul able to start with a broken config zuul feature/zuulv3 529060 Fabien Boucher 2018-01-08 1 529293 Set remote url on every getRepo in merger zuul feature/zuulv3 remote-url Tobias Henkel 2018-01-08 1 529821 Add winrm certificate handling zuul feature/zuulv3 windows-support Tobias Henkel 2018-01-17 1 530521 Slack driver zuul feature/zuulv3 slack-reporter Clint 'SpamapS' B 2018-01-10 -2 1 531009 Allow Ansible 2.4 zuul feature/zuulv3 ansible24 David Shrewsbury 2018-01-11 -2 1 531057 license: remove dos line break nodepool feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-04 -1 531171 WIP Rework log streaming to use logging zuul feature/zuulv3 zuul-stream-rewor Monty Taylor 2018-01-04 -1 531510 Delete stale jobdirs at startup zuul feature/zuulv3 James E. Blair 2018-01-12 -1 -1 531742 Also prefix the indexes if needed zuul feature/zuulv3 prefix-indexes Tobias Henkel 2018-01-08 1 531934 Add job for testing against devel of ansible zuul feature/zuulv3 turn-on-ansible Monty Taylor 2018-01-16 1 532023 Fix sql reporting with postgres zuul feature/zuulv3 fix-postgres Tobias Henkel 2018-01-09 1 532310 Switch to python3-dev / python3-devel in bindep.txt zuul feature/zuulv3 Paul Belanger 2018-01-10 2 1 532718 scheduler: better handle format status error zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-11 1 533256 Add DIB faq to docs nodepool feature/zuulv3 dib-faq Clark Boylan 2018-01-12 1 533509 ansible: honor command no_log module attribute zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-15 1 534190 Use re2 for change_matcher zuul feature/zuulv3 fix-zuul-re Clint 'SpamapS' B 2018-01-16 -1 534588 pep8: ignore E124 rule zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-17 1 534956 Handle secrets in branches zuul feature/zuulv3 James E. Blair 2018-01-17 2 1 534974 └─Handle nodesets in branches zuul feature/zuulv3 James E. Blair 05:23 PM 2 1 534988 zk: check for client in properties nodepool feature/zuulv3 zuulv3 Tristan Cacqueray 02:20 AM 1 535423 WIP: Add provider info command nodepool feature/zuulv3 info_cmd David Shrewsbury 05:44 PM -1 Thanks for your understanding and your help, Jim From corvus at inaugust.com Mon Jan 22 15:44:48 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 22 Jan 2018 07:44:48 -0800 Subject: Job parent branches Message-ID: <87372x7v9r.fsf@meyer.lemoncheese.net> Hi, We've run into an issue a few times where a complex job defined in a project repo doesn't run because it can't find a parent job that matches the branch. In all the cases I've seen so far, it's children of OpenStack's devstack job which are at issue. That's because the devstack job is defined in the devstack repo (an untrusted project repo) with branches. That means that it has implied branch matchers attached to it. We could, of course, turn those off, however, the intention is for the devstack job to be branched. Running the Queens release is not the same as running Ocata, so the different branch variants should be used under different circumstances. The problem occurs when someone wants to inherit from the devstack job in a different repo, and there is not a devstack branch variant which matches the branch of the child job. 
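As a simplified illustration of the mismatch (the child job's name and the playbook paths are invented; the repos and branches are the ones discussed below):

  # In devstack's .zuul.yaml, on its master branch.  Because devstack is
  # an untrusted project repo with branches, this variant carries an
  # implied "master" branch matcher.
  - job:
      name: devstack
      run: playbooks/devstack.yaml

  # In a project whose changes live on a branch devstack will never
  # have, e.g. Ansible's "devel" branch.
  - job:
      name: example-devstack-integration
      parent: devstack
      run: playbooks/integration.yaml

A change on devel matches the child variant, but no devstack variant matches devel, so Zuul cannot run the job.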
The way Zuul pieces together the final job that is run is to take the name of the job specified in the project-pipeline config, and then collect all the variants of that job which match the change. Each time it finds a match, it looks at the parent of that variant, and recursively collects all the variants of the parent which match the change (eliminating duplicates). If it can't find at least one variant of the parent which matches the change, the job is not run. The first time this came to my attention was for the tempest job. During development, the devstack job has only been defined on the master branch of devstack. The tempest job is in a repo with no branches, so it doesn't have an implied branch matcher. When any project invokes it, it will match the branch, and then Zuul will look for its parent job, 'devstack', and as long as the change is on the master branch, it will find it and run. However, if the change is on stable/ocata, and there is no devstack job defined in the stable/ocata branch of devstack, then it won't run. I think the solution to this case is to backport the devstack job to devstack's stable branches. That's the situation we will ultimately be in after our natural branching progression anyway, and we should treat this as a one-time cost of the transition to Zuul v3. The other two instances where this has come up are for projects which should run a child of the devstack job on a branch which does not, and never will, exist in devstack. When nodepool had a feature/zuulv3 branch, we were unable to run the devstack integration test job on that branch because of this. Ansible's default branch is 'devel', which means in order for us to run the shade integration test job on changes to Ansible, Zuul looks for a devstack variant which matches the 'devel' branch. In all cases, these issues could technically be solved by creating devstack job variants which matched these other branches. However, it should not be incumbent on the devstack maintainers to anticipate all of the branch names of other projects which might want to use the devstack job. We could create a devstack job variant with an explicit branch matcher which matches all branches, and treat the stable branch variants as deltas from that, however, that is more difficult to manage and doesn't fit our branch creation model. I can think of two solutions to this problem: 1) Have Zuul use a default branch when searching for parents with no matching branch variants. Every project in Zuul has a default branch (which can be specified in the 'project' config stanza, but the default default is 'master'). If, when searching for the parents of a job, Zuul finds no matching variants, it could perform the search a second time with the default branch of the parent job's project (all variants of a job are defined in the same repo, so we know which project the job is in, so there's only one default branch to search for each parent). 2) Allow someone creating a child job to specify a fallback branch for Zuul to use when searching for parent jobs. Essentially, we would modify the algorithm in the same way as the previous suggestion, except instead of determining the fallback branch based on the default branch of the parent job, we would let the child job supply the fallback branch. At each level of recursion as we walk up the inheritance path, a new fallback branch could be provided. I'm inclined to implement option (1) at least. That is very similar to what already happens today with git repo checkouts. 
I.e., if you did manage to run one of the jobs in the examples I described, you would get the master branch of devstack checked out in the job because of the existing fallback behavior. It makes sense that the job matching algorithm would be similar. And I think in many of the cases where this issue comes up, it would do the right thing automatically (certainly it would have in all the examples thus far). I'm on the fence about option (2). I see no reason not to add it other than perhaps it's unnecessary complexity. Maybe we should avoid implementing it until we find a compelling case for it? Or do folks anticipate a need for that now? Thanks, Jim From shrewsbury.dave at gmail.com Mon Jan 22 16:08:20 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 22 Jan 2018 11:08:20 -0500 Subject: Job parent branches In-Reply-To: <87372x7v9r.fsf@meyer.lemoncheese.net> References: <87372x7v9r.fsf@meyer.lemoncheese.net> Message-ID: Hi! On Mon, Jan 22, 2018 at 10:44 AM, James E. Blair wrote: [snip] > 2) Allow someone creating a child job to specify a fallback branch for > Zuul to use when searching for parent jobs. > > Essentially, we would modify the algorithm in the same way as the > previous suggestion, except instead of determining the fallback branch > based on the default branch of the parent job, we would let the child > job supply the fallback branch. At each level of recursion as we walk > up the inheritance path, a new fallback branch could be provided. > > I'm inclined to implement option (1) at least. That is very similar to > what already happens today with git repo checkouts. I.e., if you did > manage to run one of the jobs in the examples I described, you would get > the master branch of devstack checked out in the job because of the > existing fallback behavior. It makes sense that the job matching > algorithm would be similar. And I think in many of the cases where this > issue comes up, it would do the right thing automatically (certainly it > would have in all the examples thus far). > > I'm on the fence about option (2). I see no reason not to add it other > than perhaps it's unnecessary complexity. Maybe we should avoid > implementing it until we find a compelling case for it? Or do folks > anticipate a need for that now? > Although I could see wanting the ability to do #2 (for example, matching a new nodepool:feature/zuulv4 branch against a shade:feature/zuulv4 branch), I cannot immediately find a need for it right this instant. -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From shrewsbury.dave at gmail.com Mon Jan 22 16:14:04 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 22 Jan 2018 11:14:04 -0500 Subject: Job parent branches In-Reply-To: References: <87372x7v9r.fsf@meyer.lemoncheese.net> Message-ID: On Mon, Jan 22, 2018 at 11:08 AM, David Shrewsbury < shrewsbury.dave at gmail.com> wrote: > Hi! > > On Mon, Jan 22, 2018 at 10:44 AM, James E. Blair > wrote: > > [snip] > > >> 2) Allow someone creating a child job to specify a fallback branch for >> Zuul to use when searching for parent jobs. >> >> Essentially, we would modify the algorithm in the same way as the >> previous suggestion, except instead of determining the fallback branch >> based on the default branch of the parent job, we would let the child >> job supply the fallback branch. At each level of recursion as we walk >> up the inheritance path, a new fallback branch could be provided. 
>> >> I'm inclined to implement option (1) at least. That is very similar to >> what already happens today with git repo checkouts. I.e., if you did >> manage to run one of the jobs in the examples I described, you would get >> the master branch of devstack checked out in the job because of the >> existing fallback behavior. It makes sense that the job matching >> algorithm would be similar. And I think in many of the cases where this >> issue comes up, it would do the right thing automatically (certainly it >> would have in all the examples thus far). >> >> I'm on the fence about option (2). I see no reason not to add it other >> than perhaps it's unnecessary complexity. Maybe we should avoid >> implementing it until we find a compelling case for it? Or do folks >> anticipate a need for that now? >> > > Although I could see wanting the ability to do #2 (for example, matching > a new nodepool:feature/zuulv4 branch against a shade:feature/zuulv4 > branch), I cannot immediately find a need for it right this instant. > > Sorry, bad example. I meant something like nodepool:feature/zuulv4 against shade:feature/version2 (different theoretical branch name). -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From joshua.hesketh at gmail.com Tue Jan 23 00:58:39 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Tue, 23 Jan 2018 11:58:39 +1100 Subject: Job parent branches In-Reply-To: References: <87372x7v9r.fsf@meyer.lemoncheese.net> Message-ID: It seems to me that option #1 has no config change and is therefore "free" (as far as deployers are concerned) to implement. We could start with option #1 and see if that solves all of our potential use cases. However if we find we still need more fine-tuning we could do option #2 where-by if no fallback branch is specified on the child job, it then tries option #1 (so no change in functionality in "upgrading" to option #2, just a new feature). On Tue, Jan 23, 2018 at 3:14 AM, David Shrewsbury wrote: > > On Mon, Jan 22, 2018 at 11:08 AM, David Shrewsbury < > shrewsbury.dave at gmail.com> wrote: > >> Hi! >> >> On Mon, Jan 22, 2018 at 10:44 AM, James E. Blair >> wrote: >> >> [snip] >> >> >>> 2) Allow someone creating a child job to specify a fallback branch for >>> Zuul to use when searching for parent jobs. >>> >>> Essentially, we would modify the algorithm in the same way as the >>> previous suggestion, except instead of determining the fallback branch >>> based on the default branch of the parent job, we would let the child >>> job supply the fallback branch. At each level of recursion as we walk >>> up the inheritance path, a new fallback branch could be provided. >>> >>> I'm inclined to implement option (1) at least. That is very similar to >>> what already happens today with git repo checkouts. I.e., if you did >>> manage to run one of the jobs in the examples I described, you would get >>> the master branch of devstack checked out in the job because of the >>> existing fallback behavior. It makes sense that the job matching >>> algorithm would be similar. And I think in many of the cases where this >>> issue comes up, it would do the right thing automatically (certainly it >>> would have in all the examples thus far). >>> >>> I'm on the fence about option (2). I see no reason not to add it other >>> than perhaps it's unnecessary complexity. Maybe we should avoid >>> implementing it until we find a compelling case for it? Or do folks >>> anticipate a need for that now? 
>>> >> >> Although I could see wanting the ability to do #2 (for example, matching >> a new nodepool:feature/zuulv4 branch against a shade:feature/zuulv4 >> branch), I cannot immediately find a need for it right this instant. >> >> Sorry, bad example. I meant something like nodepool:feature/zuulv4 against > shade:feature/version2 (different theoretical branch name). > > > -Dave > -- > David Shrewsbury (Shrews) > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Jan 23 23:07:29 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 23 Jan 2018 15:07:29 -0800 Subject: Job parent branches In-Reply-To: <87372x7v9r.fsf@meyer.lemoncheese.net> (James E. Blair's message of "Mon, 22 Jan 2018 07:44:48 -0800") References: <87372x7v9r.fsf@meyer.lemoncheese.net> Message-ID: <87607s2mz2.fsf@meyer.lemoncheese.net> corvus at inaugust.com (James E. Blair) writes: > Hi, > > We've run into an issue a few times where a complex job defined in a > project repo doesn't run because it can't find a parent job that matches > the branch. > > In all the cases I've seen so far, it's children of OpenStack's devstack > job which are at issue. That's because the devstack job is defined in > the devstack repo (an untrusted project repo) with branches. That means > that it has implied branch matchers attached to it. > > We could, of course, turn those off, however, the intention is for the > devstack job to be branched. Running the Queens release is not the same > as running Ocata, so the different branch variants should be used under > different circumstances. > > The problem occurs when someone wants to inherit from the devstack job > in a different repo, and there is not a devstack branch variant which > matches the branch of the child job. > > The way Zuul pieces together the final job that is run is to take the > name of the job specified in the project-pipeline config, and then > collect all the variants of that job which match the change. Each time > it finds a match, it looks at the parent of that variant, and > recursively collects all the variants of the parent which match the > change (eliminating duplicates). If it can't find at least one variant > of the parent which matches the change, the job is not run. > > The first time this came to my attention was for the tempest job. > During development, the devstack job has only been defined on the master > branch of devstack. The tempest job is in a repo with no branches, so > it doesn't have an implied branch matcher. When any project invokes it, > it will match the branch, and then Zuul will look for its parent job, > 'devstack', and as long as the change is on the master branch, it will > find it and run. However, if the change is on stable/ocata, and there > is no devstack job defined in the stable/ocata branch of devstack, then > it won't run. > > I think the solution to this case is to backport the devstack job to > devstack's stable branches. That's the situation we will ultimately be > in after our natural branching progression anyway, and we should treat > this as a one-time cost of the transition to Zuul v3. > > The other two instances where this has come up are for projects which > should run a child of the devstack job on a branch which does not, and > never will, exist in devstack. 
> > When nodepool had a feature/zuulv3 branch, we were unable to run the > devstack integration test job on that branch because of this. Ansible's > default branch is 'devel', which means in order for us to run the shade > integration test job on changes to Ansible, Zuul looks for a devstack > variant which matches the 'devel' branch. > > In all cases, these issues could technically be solved by creating > devstack job variants which matched these other branches. However, it > should not be incumbent on the devstack maintainers to anticipate all of > the branch names of other projects which might want to use the devstack > job. We could create a devstack job variant with an explicit branch > matcher which matches all branches, and treat the stable branch variants > as deltas from that, however, that is more difficult to manage and > doesn't fit our branch creation model. > > I can think of two solutions to this problem: > > 1) Have Zuul use a default branch when searching for parents with no > matching branch variants. > > Every project in Zuul has a default branch (which can be specified in > the 'project' config stanza, but the default default is 'master'). If, > when searching for the parents of a job, Zuul finds no matching > variants, it could perform the search a second time with the default > branch of the parent job's project (all variants of a job are defined in > the same repo, so we know which project the job is in, so there's only > one default branch to search for each parent). > > 2) Allow someone creating a child job to specify a fallback branch for > Zuul to use when searching for parent jobs. > > Essentially, we would modify the algorithm in the same way as the > previous suggestion, except instead of determining the fallback branch > based on the default branch of the parent job, we would let the child > job supply the fallback branch. At each level of recursion as we walk > up the inheritance path, a new fallback branch could be provided. > > I'm inclined to implement option (1) at least. That is very similar to > what already happens today with git repo checkouts. I.e., if you did > manage to run one of the jobs in the examples I described, you would get > the master branch of devstack checked out in the job because of the > existing fallback behavior. It makes sense that the job matching > algorithm would be similar. And I think in many of the cases where this > issue comes up, it would do the right thing automatically (certainly it > would have in all the examples thus far). > > I'm on the fence about option (2). I see no reason not to add it other > than perhaps it's unnecessary complexity. Maybe we should avoid > implementing it until we find a compelling case for it? Or do folks > anticipate a need for that now? I've implemented the change described in (1): https://review.openstack.org/536998 In doing so, I found that a few things about it feel weird to me. First, the fact that this only applies to parents may mean that in some circumstances, users may still not get their expected behavior without performing some extra steps. For example, the shade/Ansible integration job Monty wants to run is currently only defined in the master branch of shade. So we could not directly add it to Ansible, since changes to the devel branch would not match. In fact, it's only after Monty's change to the branch matcher of that job to add devel (https://review.openstack.org/536159) that we get to the point where we can solve the problem of the parent not matching with (1). 
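For concreteness, widening the branch matcher as in change 536159 amounts to something roughly like the following in the shade repo (a sketch only -- the job name and exact matcher below are assumed for illustration, not the actual definition):

- job:
    name: shade-ansible-functional-devstack
    parent: devstack
    # Explicitly match Ansible's default branch in addition to master,
    # so changes on 'devel' can find this variant.
    branches:
      - master
      - devel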
A solution to this dilemma would be to relax the branch matching constraints even further than in (1) -- we could, rather than only searching for default-branch variants of parent jobs, we could do so for all jobs as well. That means that if we say "run the integration job on changes to Ansible", as long as all the other matchers (files, etc) succeed, then something will run. In other words, we would no longer need to add the devel branch to the job definition in the shade repo. That's a two-edged blade -- it means that in many cases where we would not run a job now, we might in the future. Any time that someone added a job to a project-pipeline config on more branches than the job actually supported while counting on the job to simply not run on unsupported branches would see a behavior change -- the jobs would run where they did not before. But maybe that's okay? Maybe the fact that the project-pipeline variant of a job (i.e., where a user actually states "run this job for changes to this project") must always match is sufficient. Project-pipeline job invocations that are in in-repo configs are already tied to their branches. The biggest danger I see here would be for central configurations, where, for example, we might say "always run devstack on all branches of this project's changes". Today we will only run devstack on the branches where devstack is defined, but if we make this further change, we would always run devstack, and simply use the master version of devstack on branches which otherwise don't have a devstack job variant. I think the thing that bothers me the most about this is that once this is allowed, it's difficult to disallow. The only way to get the old behavior back would be to start adding more branch matchers to the project-pipeline configuration to further restrict when a job is run. On the other hand, I'm becoming more convinced that we should implement (2) from my message above. It seems arbitrary to only support falling back to the default branch of a project, when we know that branch mappings may be more complicated than that. Further, I'm starting to wonder if the approach in (2) might address my misgivings about (1). Perhaps if we made this kind of branch fallback always explicit, we can preserve the current conservative behavior, but still allow for the kind of branch mismatches we're running into. In the case of a job inheriting from another job (e.g., with Monty's change 536159), a user can say "I know this job is about to inherit from a job that may not match branches this job is designed to run on; in those cases, use this branch." Perhaps we can even say "use the default branch", and let folks explicitly opt-in to the implementation in (1). But also, in the case where there is no intermediate job (e.g., without Monty's change 536159), this attribute could still be set on the project-pipeline config. So where the shade job is added to Ansible, we could say "even though these changes are on devel, use the master shade job". I think I'd like to implement (2) on its own, without (1), and see what that looks like and if any other issues fall out of that. -Jim From corvus at inaugust.com Wed Jan 24 21:27:08 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 24 Jan 2018 13:27:08 -0800 Subject: Job parent branches In-Reply-To: <87607s2mz2.fsf@meyer.lemoncheese.net> (James E. 
Blair's message of "Tue, 23 Jan 2018 15:07:29 -0800") References: <87372x7v9r.fsf@meyer.lemoncheese.net> <87607s2mz2.fsf@meyer.lemoncheese.net> Message-ID: <87d11zx80j.fsf@meyer.lemoncheese.net> corvus at inaugust.com (James E. Blair) writes: > corvus at inaugust.com (James E. Blair) writes: > >> Hi, >> >> We've run into an issue a few times where a complex job defined in a >> project repo doesn't run because it can't find a parent job that matches >> the branch. >> >> In all the cases I've seen so far, it's children of OpenStack's devstack >> job which are at issue. That's because the devstack job is defined in >> the devstack repo (an untrusted project repo) with branches. That means >> that it has implied branch matchers attached to it. >> >> We could, of course, turn those off, however, the intention is for the >> devstack job to be branched. Running the Queens release is not the same >> as running Ocata, so the different branch variants should be used under >> different circumstances. >> >> The problem occurs when someone wants to inherit from the devstack job >> in a different repo, and there is not a devstack branch variant which >> matches the branch of the child job. >> >> The way Zuul pieces together the final job that is run is to take the >> name of the job specified in the project-pipeline config, and then >> collect all the variants of that job which match the change. Each time >> it finds a match, it looks at the parent of that variant, and >> recursively collects all the variants of the parent which match the >> change (eliminating duplicates). If it can't find at least one variant >> of the parent which matches the change, the job is not run. >> >> The first time this came to my attention was for the tempest job. >> During development, the devstack job has only been defined on the master >> branch of devstack. The tempest job is in a repo with no branches, so >> it doesn't have an implied branch matcher. When any project invokes it, >> it will match the branch, and then Zuul will look for its parent job, >> 'devstack', and as long as the change is on the master branch, it will >> find it and run. However, if the change is on stable/ocata, and there >> is no devstack job defined in the stable/ocata branch of devstack, then >> it won't run. >> >> I think the solution to this case is to backport the devstack job to >> devstack's stable branches. That's the situation we will ultimately be >> in after our natural branching progression anyway, and we should treat >> this as a one-time cost of the transition to Zuul v3. >> >> The other two instances where this has come up are for projects which >> should run a child of the devstack job on a branch which does not, and >> never will, exist in devstack. >> >> When nodepool had a feature/zuulv3 branch, we were unable to run the >> devstack integration test job on that branch because of this. Ansible's >> default branch is 'devel', which means in order for us to run the shade >> integration test job on changes to Ansible, Zuul looks for a devstack >> variant which matches the 'devel' branch. >> >> In all cases, these issues could technically be solved by creating >> devstack job variants which matched these other branches. However, it >> should not be incumbent on the devstack maintainers to anticipate all of >> the branch names of other projects which might want to use the devstack >> job. 
We could create a devstack job variant with an explicit branch >> matcher which matches all branches, and treat the stable branch variants >> as deltas from that, however, that is more difficult to manage and >> doesn't fit our branch creation model. >> >> I can think of two solutions to this problem: >> >> 1) Have Zuul use a default branch when searching for parents with no >> matching branch variants. >> >> Every project in Zuul has a default branch (which can be specified in >> the 'project' config stanza, but the default default is 'master'). If, >> when searching for the parents of a job, Zuul finds no matching >> variants, it could perform the search a second time with the default >> branch of the parent job's project (all variants of a job are defined in >> the same repo, so we know which project the job is in, so there's only >> one default branch to search for each parent). >> >> 2) Allow someone creating a child job to specify a fallback branch for >> Zuul to use when searching for parent jobs. >> >> Essentially, we would modify the algorithm in the same way as the >> previous suggestion, except instead of determining the fallback branch >> based on the default branch of the parent job, we would let the child >> job supply the fallback branch. At each level of recursion as we walk >> up the inheritance path, a new fallback branch could be provided. >> >> I'm inclined to implement option (1) at least. That is very similar to >> what already happens today with git repo checkouts. I.e., if you did >> manage to run one of the jobs in the examples I described, you would get >> the master branch of devstack checked out in the job because of the >> existing fallback behavior. It makes sense that the job matching >> algorithm would be similar. And I think in many of the cases where this >> issue comes up, it would do the right thing automatically (certainly it >> would have in all the examples thus far). >> >> I'm on the fence about option (2). I see no reason not to add it other >> than perhaps it's unnecessary complexity. Maybe we should avoid >> implementing it until we find a compelling case for it? Or do folks >> anticipate a need for that now? > > I've implemented the change described in (1): > https://review.openstack.org/536998 > > In doing so, I found that a few things about it feel weird to me. > First, the fact that this only applies to parents may mean that in some > circumstances, users may still not get their expected behavior without > performing some extra steps. For example, the shade/Ansible integration > job Monty wants to run is currently only defined in the master branch of > shade. So we could not directly add it to Ansible, since changes to the > devel branch would not match. In fact, it's only after Monty's change > to the branch matcher of that job to add devel > (https://review.openstack.org/536159) that we get to the point where we > can solve the problem of the parent not matching with (1). > > A solution to this dilemma would be to relax the branch matching > constraints even further than in (1) -- we could, rather than only > searching for default-branch variants of parent jobs, we could do so for > all jobs as well. That means that if we say "run the integration job on > changes to Ansible", as long as all the other matchers (files, etc) > succeed, then something will run. In other words, we would no longer > need to add the devel branch to the job definition in the shade repo. 
> > That's a two-edged blade -- it means that in many cases where we would > not run a job now, we might in the future. Any time that someone added > a job to a project-pipeline config on more branches than the job > actually supported while counting on the job to simply not run on > unsupported branches would see a behavior change -- the jobs would run > where they did not before. > > But maybe that's okay? Maybe the fact that the project-pipeline variant > of a job (i.e., where a user actually states "run this job for changes > to this project") must always match is sufficient. Project-pipeline job > invocations that are in in-repo configs are already tied to their > branches. The biggest danger I see here would be for central > configurations, where, for example, we might say "always run devstack on > all branches of this project's changes". Today we will only run > devstack on the branches where devstack is defined, but if we make this > further change, we would always run devstack, and simply use the master > version of devstack on branches which otherwise don't have a devstack > job variant. > > I think the thing that bothers me the most about this is that once this > is allowed, it's difficult to disallow. The only way to get the old > behavior back would be to start adding more branch matchers to the > project-pipeline configuration to further restrict when a job is run. > > On the other hand, I'm becoming more convinced that we should implement > (2) from my message above. It seems arbitrary to only support falling > back to the default branch of a project, when we know that branch > mappings may be more complicated than that. > > Further, I'm starting to wonder if the approach in (2) might address my > misgivings about (1). Perhaps if we made this kind of branch fallback > always explicit, we can preserve the current conservative behavior, but > still allow for the kind of branch mismatches we're running into. In > the case of a job inheriting from another job (e.g., with Monty's change > 536159), a user can say "I know this job is about to inherit from a job > that may not match branches this job is designed to run on; in those > cases, use this branch." Perhaps we can even say "use the default > branch", and let folks explicitly opt-in to the implementation in (1). > But also, in the case where there is no intermediate job (e.g., without > Monty's change 536159), this attribute could still be set on the > project-pipeline config. So where the shade job is added to Ansible, we > could say "even though these changes are on devel, use the master shade > job". > > I think I'd like to implement (2) on its own, without (1), and see what > that looks like and if any other issues fall out of that. I implemented a variant of (2) ... let's call it (3) ... which overloads the 'override-checkout' attributes (of both job, and job.required-projects) to additionally mean "pretend this change is on this branch". The implementation is in: https://review.openstack.org/537655 So a user could say: - job: name: foo parent: devstack override-checkout: master And if that job ran on a 'devel' branch change, when it went to look for a devstack job to inherit from, it would appear as if the change it was running for was on the master branch, rather than 'devel', and so it would match the devstack job defined on the 'master' branch, in addition to ensuring that when devstack was checked out, it got the 'master' branch. This is the part I like about this change. 
I think that most of the time when you are setting an override-checkout for a project, if you happen to also be using jobs defined for that project, you probably want those variants too. That example is general -- the same works for required-projects as well: - job: name: foo parent: devstack required-projects: - name: openstack-dev/devstack override-checkout: master This would only perform the branch masking described above when looking for devstack jobs -- when searching for any other jobs in the inheritance hierarchy, the actual branch would be used. The only misgiving I have about this is whether we might run into a problem where we don't want to affect the job variants collected, but do still want to affect the checkout. Or perhaps wanted to set those to different values. I know that some folks are using override-checkout to checkout tags rather than branches. I don't think this approach inhibits that -- I added a check that before overriding the branch matcher, we ensure it's an actual branch. So if you say "override-checkout: diablo-eol" that tag will still be checked out, but you'll still get the old branch matching behavior for the jobs. Basically, I don't think this hurts that use case, but it doesn't help it, in that there isn't a way to say "checkout diablo-eol, and use the pike job definition". Well, there is a way -- you just update the devstack job itself. It's unclear how far we should go in supporting this use case, which is effectively "using a job on a branch it wasn't intended for". Anyway, it would be very simple to split this out into a new variable and that should be safe. I think I like the approach of having it on both the job and job.required-projects -- it gives us a lot more flexibility and specificity, which I think could come in handy. So even if we do split it out, I think it should behave the same. -Jim
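For readability, the two examples above re-indented, with comments summarizing the behavior described in the message (the job definitions are as given; only the formatting and comments are added, assuming the semantics of change 537655):

# Job-level form: when a change on, say, Ansible's 'devel' branch runs this
# job, parent variants are matched as if the change were on 'master', and
# devstack is checked out at 'master'.
- job:
    name: foo
    parent: devstack
    override-checkout: master

# Per-project form: the same masking, but applied only when looking for
# devstack jobs and when checking out devstack; any other jobs in the
# inheritance path and any other projects still use the change's actual
# branch.
- job:
    name: foo
    parent: devstack
    required-projects:
      - name: openstack-dev/devstack
        override-checkout: master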
From corvus at inaugust.com Wed Jan 17 23:02:11 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 17 Jan 2018 15:02:11 -0800 Subject: Secret handling in branches Message-ID: <87fu74nl70.fsf@meyer.lemoncheese.net> Hi, Currently our handling of secrets in branches is less than ideal. There are two problems: 1) A secret may not be used by a job in another branch of the same project. 2) A secret may not be defined in more than one branch of the same project. The first issue is inconvenient for users which want to, for example, upload an image to dockerhub for all of the branches of their project.
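As a concrete illustration of the first problem, consider a project that defines a secret and an upload job on its master branch; all names and the ciphertext below are placeholders, not taken from the message:

# Defined in the project's master branch (.zuul.yaml)
- secret:
    name: dockerhub-credentials
    data:
      username: example-user
      password: !encrypted/pkcs1-oaep
        - "...ciphertext elided..."

- job:
    name: upload-to-dockerhub
    run: playbooks/dockerhub-upload.yaml
    secrets:
      - dockerhub-credentials

# Problem (1) above: a job on another branch of this same project could
# not use dockerhub-credentials.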
The second prohibits the obvious fix for the first issue, but it also means that as soon as a project with a secret definition branches, Zuul will have a configuration error. I believe the original intent was not to restrict secrets to a single branch. If we accept that a secret should be available for use by any branch of a project, the fix for that is simple. The fix for the second issue is also not too difficult -- we can simply permit duplicate definitions, but only on different branches of the same project and with the same content. That allows us to create a new branch from an existing one without a configuration error. I've implemented these fixes in this change: https://review.openstack.org/534956 In an IRC conversation, we briefly considered supporting distinct secrets on multiple branches -- that is, a secret with the same name on multiple branches of the same project but with potentially different values on each branch, but that's a bit more difficult to reason about, and ultimately doesn't make this more secure. An important thing to note about the change is that by permitting the use of secrets across branches, if a project has different group maintaining the different branches, it could allow a secret to leak from one group to the other. In other words, if a project uses secrets, you need to trust the maintainers of all the branches of that project equally. It's also worth noting that even in the current situation, where we (perhaps accidentally) only allowed the secret to be used within a single branch, it is equally trivial for a maintainer of another branch of the same project to simply copy the secret to their branch under a different name and expose it, since all branches of a project share a single private key. Preventing this would require different keys for different branches, which would make branching impossible. I think the proposed solution is the best way of handling this, given the constraints. I know some folks would like to use secrets in a production-promotion system, where distinct groups have differing levels of access control. I think in those cases, those secrets and jobs will have to be located in distinct projects, not branches. Please let me know your thoughts and if this affects your use of secrets. -Jim From fungi at yuggoth.org Wed Jan 17 23:10:08 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 17 Jan 2018 23:10:08 +0000 Subject: Secret handling in branches In-Reply-To: <87fu74nl70.fsf@meyer.lemoncheese.net> References: <87fu74nl70.fsf@meyer.lemoncheese.net> Message-ID: <20180117231008.salbt3t7wn7btk4j@yuggoth.org> On 2018-01-17 15:02:11 -0800 (-0800), James E. Blair wrote: [...] > The fix for the second issue is also not too difficult -- we can simply > permit duplicate definitions, but only on different branches of the same > project and with the same content. That allows us to create a new > branch from an existing one without a configuration error. [...] What's then the recommended workflow/sequence for changing a secret if it appears on more than one branch? First delete all but one so there are no remaining duplicates, then modify that one remaining copy? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Wed Jan 17 23:28:42 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Wed, 17 Jan 2018 15:28:42 -0800 Subject: Secret handling in branches In-Reply-To: <20180117231008.salbt3t7wn7btk4j@yuggoth.org> (Jeremy Stanley's message of "Wed, 17 Jan 2018 23:10:08 +0000") References: <87fu74nl70.fsf@meyer.lemoncheese.net> <20180117231008.salbt3t7wn7btk4j@yuggoth.org> Message-ID: <87r2qom5ed.fsf@meyer.lemoncheese.net> Jeremy Stanley writes: > On 2018-01-17 15:02:11 -0800 (-0800), James E. Blair wrote: > [...] >> The fix for the second issue is also not too difficult -- we can simply >> permit duplicate definitions, but only on different branches of the same >> project and with the same content. That allows us to create a new >> branch from an existing one without a configuration error. > [...] > > What's then the recommended workflow/sequence for changing a secret > if it appears on more than one branch? First delete all but one so > there are no remaining duplicates, then modify that one remaining > copy? I think that's the only thing that would work with the proposed change. It might be a good idea to delete secrets (and nodesets, and semaphores -- I'm working on similar changes for these) shortly after branch creation just to keep things tidy. Retiring a secret would be straightforward -- just delete it from master and let it continue to live on any branches where it's still relevant. Alternatively, if we decide this is too onerous, we could allow secrets to have differing values across branches, and simply only use the first one defined. Or we could allow and store distinct values across branches, matching them up with the appropriate job variants on those branches. It would make the workflow you describe here easier -- we'd just need to keep in mind it doesn't change the security posture at all -- the keys would be the same (as mentioned in the previous email). It would mean there's a bit more "magic" going on, and may lead people to mistake which branch-version of a secret is being used. If we're not sure which way to go, the proposed change is the more conservative of the two approaches and is forward compatible with the other -- if we require that any duplicate secrets have the same data today, then we can chose to permit different data later without breaking existing usage. -Jim From clint at fewbar.com Thu Jan 18 00:57:41 2018 From: clint at fewbar.com (Clint Byrum) Date: Wed, 17 Jan 2018 16:57:41 -0800 Subject: Secret handling in branches In-Reply-To: <87r2qom5ed.fsf@meyer.lemoncheese.net> References: <87fu74nl70.fsf@meyer.lemoncheese.net> <20180117231008.salbt3t7wn7btk4j@yuggoth.org> <87r2qom5ed.fsf@meyer.lemoncheese.net> Message-ID: <1516235439-sup-9125@fewbar.com> Excerpts from corvus's message of 2018-01-17 15:28:42 -0800: > Jeremy Stanley writes: > > > On 2018-01-17 15:02:11 -0800 (-0800), James E. Blair wrote: > > [...] > >> The fix for the second issue is also not too difficult -- we can simply > >> permit duplicate definitions, but only on different branches of the same > >> project and with the same content. That allows us to create a new > >> branch from an existing one without a configuration error. > > [...] > > > > What's then the recommended workflow/sequence for changing a secret > > if it appears on more than one branch? First delete all but one so > > there are no remaining duplicates, then modify that one remaining > > copy? > > I think that's the only thing that would work with the proposed change. 
> It might be a good idea to delete secrets (and nodesets, and
> semaphores -- I'm working on similar changes for these) shortly after
> branch creation just to keep things tidy.
>
> Retiring a secret would be straightforward -- just delete it from master
> and let it continue to live on any branches where it's still relevant.
>
> Alternatively, if we decide this is too onerous, we could allow secrets
> to have differing values across branches, and simply only use the first
> one defined.
>
> Or we could allow and store distinct values across branches, matching
> them up with the appropriate job variants on those branches. It would
> make the workflow you describe here easier -- we'd just need to keep in
> mind it doesn't change the security posture at all -- the keys would be
> the same (as mentioned in the previous email). It would mean there's a
> bit more "magic" going on, and may lead people to mistake which
> branch-version of a secret is being used.
>
> If we're not sure which way to go, the proposed change is the more
> conservative of the two approaches and is forward compatible with the
> other -- if we require that any duplicate secrets have the same data
> today, then we can chose to permit different data later without breaking
> existing usage.

This is thorny because without some kind of hierarchy/ordering on which
branch is loaded first after master (I assume master is always the top?)
it's hard to make any rules about varying.

Could we just add a branch matcher to secrets the way we do with jobs? I
haven't thought through this, just didn't see it as an alternative in
your summary. Basically you can only have a different value if you have
an explicit branch matcher that differs too?

From tdecacqu at redhat.com Thu Jan 18 02:22:04 2018
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Thu, 18 Jan 2018 02:22:04 +0000
Subject: Secret handling in branches
In-Reply-To: <87fu74nl70.fsf@meyer.lemoncheese.net>
References: <87fu74nl70.fsf@meyer.lemoncheese.net>
Message-ID: <1516241463.bwi0kezn2s.tristanC@fedora>

On January 17, 2018 11:02 pm, James E. Blair wrote:
> Hi,
>
> Currently our handling of secrets in branches is less than ideal. There
> are two problems:
>
> 1) A secret may not be used by a job in another branch of the same
> project.
>
> 2) A secret may not be defined in more than one branch of the same
> project.
>
Hi, it seems like there is an extra problem, not related to the branch
use-case, but that could be taken into account if relevant:

3) A secret may not be overridden in a project pipeline.

If this was possible, then a project could use the python-upload-pypi job
with its own account without having to create a new job. For example:

- project:
    release:
      jobs:
        - python-upload-pypi:
            secrets:
              - name: pypi_info
                secret: project_pypi_account

Note that Zuul doesn't prevent such a change from merging, but the secret
doesn't seem to be taken into account.

Regards,
-Tristan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 484 bytes
Desc: not available
URL: 

From mrhillsman at gmail.com Thu Jan 18 02:30:20 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Wed, 17 Jan 2018 20:30:20 -0600
Subject: Zuul Setup - VPN or Environment with No Public Access
In-Reply-To: <871sioquq2.fsf@meyer.lemoncheese.net>
References: <20180117164853.57fmt3bhrpuboy5v@yuggoth.org>
 <871sioquq2.fsf@meyer.lemoncheese.net>
Message-ID: 

Awesome, thanks for getting this started so quickly.
A quick look at the patch and I am excited to see the detail coming out. So I have a publicly accessible Zookeeper and Gearman with other services in the same environment working fine. I added a zuul-executor and nodepool (builder+launcher) to an environment behind a VPN and they can successfully talk to the Zookeeper and Gearman in the other public environment. I notice however that nodepool is not creating any images or instances within the VPN environment and also would like to know how to trigger a job for the specific executor I have running within the VPN environment. On Wed, Jan 17, 2018 at 11:08 AM, James E. Blair wrote: > Jeremy Stanley writes: > > > On 2018-01-16 20:10:19 -0600 (-0600), Melvin Hillsman wrote: > > [...] > >> Unfortunately one of the things documentation (could be I just > >> have not come across it) does not explain is the flow/dependencies > >> upon each other of the different daemons. I know some folks tried > >> to explain what could be done but I was still a little loss so > >> hoping this thread can flush out the details. > > > > I agree, we could likely stand to include a diagram showing the > > traffic flow and default protocols/ports used for communication > > between the various components. VPNs aside, some users may want to > > design a more restrictive environment limiting communication between > > each service with a network firewall and so having all this > > information in one place in the documentation would be helpful. > > Indeed. We have a place in the documentation for this, but it was > missing. I've taken a stab at filling it in, then used your reply as an > initial code review. :) > > Here's the change: https://review.openstack.org/534841 > > We also need to see if Nodepool's documentation should be updated and > start deep-linking to it from Zuul -- or decide if we want to duplicate > any content. > > > I know we've got gearman from the various (scheduler, mergers, > > executors) zuul daemons to the gearman server (which in many cases > > will still be on the same server as the scheduler due to running as > > a gear subprocess forked from it). > > > > Then we've got zookeeper from the various (scheduler, builder, > > launcher?) nodepool daemons to the zookeeper cluster members. > > In Nodepool v3 we just have builders and launchers, but yes, both need > to talk to the ZK cluster. > > > There's also statsd from pretty much every one of the above to the > > statsd collector server. > > Statsd is optional, but I'd certainly recommend it. > > > And OpenStack API calls from (all?) the nodepool daemons. > > > > Mergers (including the implicit ones run by executors) also need to > > be able to clone and fetch from git repositories listed in the > > configuration. > > > > And ansible (and rsync?) over ssh from the executors to the job > > nodes. > > Ansible rsync module uses ssh in our case. In the docs though, I just > said "connect" since the actual mechanism may be different for, say, > Windows users. But yeah, whatever Ansible is going to need to do to > connect to the nodepool nodes, the executors need to be able to do > that. > > > Also the executors likely need to connect to other systems to > > perform archival/publication activities (logs, documentation, > > tarballs...). > > > > And of course the job nodes need access to whatever things they have > > to retrieve for testing. 
> > > > Oh, the zuul scheduler also needs to be able to make API calls to > > whatever systems its connection drivers are configured for (gerrit > > via ssh, github via https, et cetera). > > > > The zuul finger gateway daemon also needs to connect to the > > unprivileged finger high-ports on the executors. > > > > And the zuul web daemon needs access to... the finger gateway? Other > > things? > > Zuul web doesn't use the finger gateway, it also connects to the high > finger ports on the executors. > > > And zuul (scheduler?) connects to an sql database for its db > > reporter. I guess zuul web needs to be able to query that too for > > the builds dashboard? > > > > What else am I forgetting? Or what did I get wrong above? > > That certainly seems like a good start. :) > > To directly address the case where the cloud that nodepool is using is > on a VPN, I think specifically you would need to make sure that the > Nodepool launchers, builders, and Zuul executors are on the VPN as well. > I think that's it. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Thu Jan 18 15:44:33 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 18 Jan 2018 07:44:33 -0800 Subject: Secret handling in branches In-Reply-To: <1516235439-sup-9125@fewbar.com> (Clint Byrum's message of "Wed, 17 Jan 2018 16:57:41 -0800") References: <87fu74nl70.fsf@meyer.lemoncheese.net> <20180117231008.salbt3t7wn7btk4j@yuggoth.org> <87r2qom5ed.fsf@meyer.lemoncheese.net> <1516235439-sup-9125@fewbar.com> Message-ID: <87shb3kw7y.fsf@meyer.lemoncheese.net> Clint Byrum writes: > Excerpts from corvus's message of 2018-01-17 15:28:42 -0800: >> Jeremy Stanley writes: >> >> > On 2018-01-17 15:02:11 -0800 (-0800), James E. Blair wrote: >> > [...] >> >> The fix for the second issue is also not too difficult -- we can simply >> >> permit duplicate definitions, but only on different branches of the same >> >> project and with the same content. That allows us to create a new >> >> branch from an existing one without a configuration error. >> > [...] >> > >> > What's then the recommended workflow/sequence for changing a secret >> > if it appears on more than one branch? First delete all but one so >> > there are no remaining duplicates, then modify that one remaining >> > copy? >> >> I think that's the only thing that would work with the proposed change. >> It might be a good idea to delete secrets (and nodesets, and >> semaphores -- I'm working on similar changes for these) shortly after >> branch creation just to keep things tidy. >> >> Retiring a secret would be straightforward -- just delete it from master >> and let it continue to live on any branches where it's still relevant. >> >> Alternatively, if we decide this is too onerous, we could allow secrets >> to have differing values across branches, and simply only use the first >> one defined. >> >> Or we could allow and store distinct values across branches, matching >> them up with the appropriate job variants on those branches. It would >> make the workflow you describe here easier -- we'd just need to keep in >> mind it doesn't change the security posture at all -- the keys would be >> the same (as mentioned in the previous email). 
It would mean there's a >> bit more "magic" going on, and may lead people to mistake which >> branch-version of a secret is being used. >> >> If we're not sure which way to go, the proposed change is the more >> conservative of the two approaches and is forward compatible with the >> other -- if we require that any duplicate secrets have the same data >> today, then we can chose to permit different data later without breaking >> existing usage. > > This is thorny because without some kind of hierarchy/ordering on which > branch is loaded first after master (I assume master is always the top?) > it's hard to make any rules about varying. The current order is master, then others alphabetically. There are probably some places where that needs to change to "master or then others alphabetically". But I think otherwise that's pretty close to stabilized. > Could we just add a branch matcher to secrets the way we do with jobs? I > haven't thought through this, just didn't see it as an alternative in > your summary. Basically you can only have a different value if you have > an explicit branch matcher that differs too? That's a possibility, or we could even make it a bit more automatic since secrets are bound tightly to job-playbooks. That means whenever a secret is used, we know exactly which job variant requested it, and could use that branch, or copy it's branch matcher. The same isn't true for nodesets or semaphores though, and they also have the same issues. So if we did support different values, it would probably be good to do that in the same way for all 3 items, which probably leaves something like branch matchers as you suggest. I guess the main question is: is there a use case for unique values of secrets or nodesets across branches? (I'm fairly certain we never want differing values of semaphores -- we don't even allow them to be changed in speculative execution.) -Jim From corvus at inaugust.com Thu Jan 18 16:07:32 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 18 Jan 2018 08:07:32 -0800 Subject: Secret handling in branches In-Reply-To: <1516241463.bwi0kezn2s.tristanC@fedora> (Tristan Cacqueray's message of "Thu, 18 Jan 2018 02:22:04 +0000") References: <87fu74nl70.fsf@meyer.lemoncheese.net> <1516241463.bwi0kezn2s.tristanC@fedora> Message-ID: <87o9lrkv5n.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > Hi, it seems like there is an extra problem, not related to the branch > use-case, but that could be taken into account if relevant: > > 3) A secret may not be overriden in a project pipeline. > > If this was possible, then a project could use the python-upload-pypi > with their own account without having to create a new job. For example: > > - project: > release: > jobs: > - python-upload-pypi: > secrets: > - name: pypi_info > secret: project_pypi_account > > > Note that Zuul doesn't prevent to merge such change, but the secret > doesn't seems to be taken into account. Secrets are bound to the playbooks attached to the job variant where the secret is requested. So if you have: - job: name: parent run: playbooks/upload.yaml secrets: upload_creds - job: name: child run: playbook/expose_secrets.yaml And you run the child job, it will not have access to the upload_creds secret when it runs the expose_secrets playbook. Which is good. :) The case you cite is the inverse of that, but it fails for the same reason. If you added a playbook to that project-pipeline-variant, then it would be able to use project_pypi_account. 
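The two job definitions in the message above, re-indented with the binding rule called out in comments (content as given; only the formatting and comments are added):

- job:
    name: parent
    run: playbooks/upload.yaml
    # upload_creds is available only to the playbooks attached to this
    # variant, i.e. playbooks/upload.yaml.
    secrets: upload_creds

- job:
    name: child
    # This playbook is not attached to the variant that requested the
    # secret, so it cannot use upload_creds.
    run: playbook/expose_secrets.yaml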
I could see wanting to make this a one-way system, where you still can't do my example but could do yours. If we can convince ourselves that is safe in all cases, we could probably implement it. But for the moment, I think we settled on the idea that the same person(s) controlling the playbook that uses the secret should be the some ones controlling which secret is used. -Jim From corvus at inaugust.com Thu Jan 18 16:13:29 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 18 Jan 2018 08:13:29 -0800 Subject: Zuul Setup - VPN or Environment with No Public Access In-Reply-To: (Melvin Hillsman's message of "Wed, 17 Jan 2018 20:30:20 -0600") References: <20180117164853.57fmt3bhrpuboy5v@yuggoth.org> <871sioquq2.fsf@meyer.lemoncheese.net> Message-ID: <87k1wfkuvq.fsf@meyer.lemoncheese.net> Melvin Hillsman writes: > Awesome, thanks for getting this started so quickly. A quick look at the > patch and I am excited to see the detail coming out. So I have a publicly > accessible Zookeeper and Gearman with other services in the same > environment working fine. I added a zuul-executor and nodepool > (builder+launcher) to an environment behind a VPN and they can successfully > talk to the Zookeeper and Gearman in the other public environment. I notice > however that nodepool is not creating any images or instances within the > VPN environment and also would like to know how to trigger a job for the > specific executor I have running within the VPN environment. It looks like in IRC you tracked the issue down to multiple nodepools sharing a zookeeper. Nodepool supports the zookeeper chroot option to facilitate that, but it looks like we neglected to add that option to Zuul. We should do that soon, then that case will be easier. When Zuul has multiple executors, there isn't a way to trigger a job on a specific executor. That's something we can support in the future -- here is a story with an implementation suggestion if someone wants to start working on it: https://storyboard.openstack.org/#!/story/2001125 In the mean time, since you're using VPNs, perhaps you could make all of your executors VPN clients for all of your environments? That way whichever executor gets a job with nodes in some environment, it can reach them. -Jim From Tobias.Henkel at bmw.de Thu Jan 18 17:51:02 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Thu, 18 Jan 2018 17:51:02 +0000 Subject: Secret handling in branches In-Reply-To: <87fu74nl70.fsf@meyer.lemoncheese.net> References: <87fu74nl70.fsf@meyer.lemoncheese.net> Message-ID: <92B05C15-DB71-4C2B-BB7B-B2D3D616E4CA@bmw.de> On 18.01.18, 00:02, "James E. Blair" wrote: Hi, Currently our handling of secrets in branches is less than ideal. There are two problems: 1) A secret may not be used by a job in another branch of the same project. 2) A secret may not be defined in more than one branch of the same project. The first issue is inconvenient for users which want to, for example, upload an image to dockerhub for all of the branches of their project. The second prohibits the obvious fix for the first issue, but it also means that as soon as a project with a secret definition branches, Zuul will have a configuration error. I believe the original intent was not to restrict secrets to a single branch. If we accept that a secret should be available for use by any branch of a project, the fix for that is simple. 
The fix for the second issue is also not too difficult -- we can simply permit duplicate definitions, but only on different branches of the same project and with the same content. That allows us to create a new branch from an existing one without a configuration error. I've implemented these fixes in this change: https://review.openstack.org/534956 In an IRC conversation, we briefly considered supporting distinct secrets on multiple branches -- that is, a secret with the same name on multiple branches of the same project but with potentially different values on each branch, but that's a bit more difficult to reason about, and ultimately doesn't make this more secure. An important thing to note about the change is that by permitting the use of secrets across branches, if a project has different group maintaining the different branches, it could allow a secret to leak from one group to the other. In other words, if a project uses secrets, you need to trust the maintainers of all the branches of that project equally. It's also worth noting that even in the current situation, where we (perhaps accidentally) only allowed the secret to be used within a single branch, it is equally trivial for a maintainer of another branch of the same project to simply copy the secret to their branch under a different name and expose it, since all branches of a project share a single private key. Preventing this would require different keys for different branches, which would make branching impossible. I think the proposed solution is the best way of handling this, given the constraints. I know some folks would like to use secrets in a production-promotion system, where distinct groups have differing levels of access control. I think in those cases, those secrets and jobs will have to be located in distinct projects, not branches. I fully agree on that. Please let me know your thoughts and if this affects your use of secrets. -Jim That all makes sense to me and I think this won’t affect our current use of secrets. Kind regards Tobi _______________________________________________ Zuul-discuss mailing list Zuul-discuss at lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Thu Jan 18 21:24:38 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 18 Jan 2018 13:24:38 -0800 Subject: feature/zuulv3 branches merged into master Message-ID: <878tcukgh5.fsf@meyer.lemoncheese.net> Hi, Today we merged the feature/zuulv3 branches of Nodepool and Zuul into master. In order to delete the feature branch, now that it's no longer necessary, we need to close all of the changes on that branch. In order to do that, I have marked all of them as abandoned. I'd like to stress that this is a technicality and of course the changes themselves are still very much welcome. If you have a pending change to that branch, please re-propose the change to the master branch. If, when doing so, you could include a link to the previous version of the change, especially if previous reviews had been completed, that would be very helpful to reviewers. I'm including a summary of all the changes which were open on feature/zuulv3 shortly before I abandoned them. All 113 changes in status:open branch:feature/zuulv3 Number Subject Project Branch Topic Owner Updated C V W 383437 Increase build worker main loop delay nodepool feature/zuulv3 nodepool-zk James E. 
Blair 2017-12-18 1 -1 408951 Add command list into server.py zuul feature/zuulv3 launcher Joshua Hesketh 2017-12-18 -1 -1 426861 WIP: Add reporter for Federated Message Bus (fedmsg) zuul feature/zuulv3 426861 Paul Belanger 2017-12-18 -1 -1 429850 WIP: Re-enable test_time_database test zuul feature/zuulv3 enable-tests Paul Belanger 2017-12-18 -1 -1 446172 Add reconfigure_handler for executor logging zuul feature/zuulv3 Paul Belanger 2017-12-18 1 -1 450897 Re-enable test_timer_sshkey zuul feature/zuulv3 test-conflict Clint 'SpamapS' B 2017-12-18 -1 454826 [WIP] Integration: Get static nodes from nodepool zuul feature/zuulv3 Cullen Taylor 2017-12-18 -1 456090 Set the git sshkey for the connection zuul feature/zuulv3 zuulv3 Joshua Hesketh 2017-12-18 -1 -1 465852 Add Dockerfile nodepool feature/zuulv3 Tobias Henkel 2017-05-23 -1 1 -1 465912 Add Dockerfile zuul feature/zuulv3 Tobias Henkel 2017-12-28 2 1 -1 466071 Switch from testrepository to stestr zuul feature/zuulv3 stestr Clint 'SpamapS' B 2017-09-21 1 -1 468670 Re-enable test_merge_conflict_reports zuul feature/zuulv3 re-enable-tests Clint 'SpamapS' B 2017-09-21 2 -1 471175 WIP: Status branch protection checking for github zuul feature/zuulv3 branch-protection Jamie Lennox 2017-12-22 1 473811 Add build.started state flag zuul feature/zuulv3 streamer_testing Monty Taylor 2017-12-18 1 -1 -1 473985 Only prepend hostname on multi-node plays zuul feature/zuulv3 streamer_testing Monty Taylor 2017-12-18 1 -1 478675 Add ability to auto-generate simple one-line shell playbooks zuul feature/zuulv3 Monty Taylor 2017-12-18 -1 -1 480843 Ensure build.start_time is defined zuul feature/zuulv3 Tristan Cacqueray 2017-09-21 -1 1 481134 Add v3 update slides zuul feature/zuulv3 480759 Monty Taylor 2017-12-18 1 -1 487538 Use yarn and webpack to manage zuul-web javascript zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 -1 528295 ├─Add babel transpiling enabling use of ES6 features zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 528296 │ └─Add StandardJS linting and analysis zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 528373 │ └─Fix source_url handling for jobs view zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 528297 │ └─Fix StandardJS warnings and turn them to errors zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 528298 │ └─Add bundle analysis to the lint target zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-22 2 1 529193 │ └─Inject url endpoint information zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 1 528374 │ └─Make bundle of build web content zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 -1 528437 │ └─Remove use strict zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 2 -1 529686 ├─Add nodeenv and plumb it in to tox zuul feature/zuulv3 zuul-javascript-t Monty Taylor 2017-12-21 -1 529690 └─Add web server URL tests zuul feature/zuulv3 James E. 
Blair 2017-12-21 -1 494655 WIP Replace paste/webob with aiohttp zuul feature/zuulv3 aiohttp Monty Taylor 2017-09-21 -1 497962 Add zookeeper-server to bindep for Red Hat nodepool feature/zuulv3 Monty Taylor 2017-08-25 -1 -1 499224 Keep a cache of branches for projects and update on events zuul feature/zuulv3 github-ratelimit Monty Taylor 2017-09-21 -1 -1 499969 Add /node-list to the webapp nodepool feature/zuulv3 Tristan Cacqueray 2018-01-09 2 1 500254 └─Add /label-list to the webapp nodepool feature/zuulv3 Tristan Cacqueray 2018-01-09 2 1 531390 └─Refactor status functions, add web endpoints, allow params nodepool feature/zuulv3 500254 Matthieu Huin 2018-01-09 1 531703 └─Add a separate module for node management commands nodepool feature/zuulv3 500254 Matthieu Huin 2018-01-09 1 531718 └─webapp: add optional admin endpoint nodepool feature/zuulv3 500254 Matthieu Huin 2018-01-10 1 500159 Always generate the ARA report, even on failure zuul feature/zuulv3 zuulv3 David Moreau Sima 2017-09-21 1 -1 502468 Emit a message to the job log if ansible crashes zuul feature/zuulv3 Monty Taylor 2017-09-21 -1 503166 Don't gather host keys for non ssh connections nodepool feature/zuulv3 windows-support Tobias Henkel 05:42 AM -1 504112 └─Add connection-port to provider diskimage nodepool feature/zuulv3 windows-support Tobias Henkel 05:45 AM -1 504238 Delete IncludeRole object from result object for include_role tas zuul feature/zuulv3 David Moreau Sima 2018-01-09 1 -1 504267 Move github webhook from webapp to zuul-web zuul feature/zuulv3 zuul-web Jesse Keating 2017-12-11 1 1 504464 Consume server.id from shade create exception nodepool feature/zuulv3 Monty Taylor 2017-12-18 2 -1 504526 Add SELinux type enforcement zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-09-21 1 504809 WIP: add abstract job attribute zuul feature/zuulv3 James E. Blair 2017-09-21 -1 505419 Disable action and lookup plugins from 2.4 zuul feature/zuulv3 zuulv3 Monty Taylor 2017-09-21 2 -1 505430 └─Port in changes from ansible 2.4 command module zuul feature/zuulv3 zuulv3 Monty Taylor 2017-09-21 -1 506871 Add restricted-node-labels global limits zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-11-07 1 508774 Add noop job description zuul feature/zuulv3 zuul-web Tristan Cacqueray 05:11 PM 2 -2 1 508793 WIP add repl zuul feature/zuulv3 James E. Blair 2017-10-02 508960 Add memory awareness to system load governor zuul feature/zuulv3 Clint 'SpamapS' B 05:03 AM 2 1 509294 Add jinja filters for manipulating zuul.projects zuul feature/zuulv3 Monty Taylor 2017-10-03 -1 509484 Log exceptions from MODULE FAILURE more consistently zuul feature/zuulv3 Monty Taylor 2017-10-05 2 -1 509531 Provide resource_name in logging as structured data nodepool feature/zuulv3 zuulv3 Monty Taylor 2017-12-18 -1 -1 509903 Remove references to pipelines, queues, and layouts on dequeue zuul feature/zuulv3 zuulv3-fixes James E. Blair 2017-10-06 -1 -1 510301 Add a CRD multi-branch test zuul feature/zuulv3 James E. 
Blair 2017-10-07 -1 512130 Enabled ssh retries for ansible zuul feature/zuulv3 zuulv3-fixes Paul Belanger 2017-10-17 2 1 -1 513003 WIP: Add script for deterministic key generation zuul feature/zuulv3 Tobias Henkel 2017-10-18 -1 513368 Add regex support to project stanzas zuul feature/zuulv3 Tobias Henkel 2018-01-05 1 513387 Add node list to webapp nodepool feature/zuulv3 Tobias Henkel 2017-10-19 1 513885 Refactor Zuul job to use lightweight Vue components zuul feature/zuulv3 cleanup-status Mohammed Naser 2017-12-12 1 1 513886 └─Switch change list to use Vue.js zuul feature/zuulv3 cleanup-status Mohammed Naser 2017-10-21 1 515169 Support autoholding nodes for specific changes/refs zuul feature/zuulv3 autohold-changese Krzysztof Klimond 2017-12-14 1 516920 builder: do not cleanup image for driver not managing image nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 2 1 468624 └─Implement a static driver for Nodepool nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 -1 526325 └─Refactor run_handler to be generic nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 1 532450 └─Refactor NodeLauncher to be generic nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 1 468753 └─Implement an OpenContainer driver nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 -2 -1 521356 └─Implement a Kubernetes driver nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 1 528982 └─Implement an Amazon EC2 driver nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 1 517067 Add BaseSource.getProjectReadonly and refactor zuul feature/zuulv3 517078 Clint 'SpamapS' B 2017-11-03 -1 -1 517078 └─Do not add invalid projets via the /keys API zuul feature/zuulv3 517078 Clint 'SpamapS' B 2017-11-16 2 -1 518279 mqtt: add basic reporter zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-11-29 2 1 519222 Add client unit testing zuul feature/zuulv3 enqueue-ref Ian Wienand 2017-12-06 2 1 519596 Allow run to be list of playbooks zuul feature/zuulv3 Paul Belanger 2017-11-20 -1 1 519654 Override HOME environment variable in bubblewrap zuul feature/zuulv3 bwrap-set-home Clint 'SpamapS' B 2017-11-16 -1 -1 520657 Run image object autocleanup after uploading images nodepool feature/zuulv3 Monty Taylor 2017-11-29 -1 520664 WIP: Convert from legacy to native devstack job nodepool feature/zuulv3 devstack_native David Shrewsbury 2018-01-02 1 520855 Ensure the merger prunes on update zuul feature/zuulv3 fetch-prune Ian Wienand 2017-11-20 -1 1 521625 Add angular to fetch-dependencies.sh zuul feature/zuulv3 Monty Taylor 2017-11-21 -1 1 523640 zk: use kazoo retry facilities nodepool feature/zuulv3 zuulv3 Tristan Cacqueray 05:28 AM -2 1 523697 executor: add log_stream_port and log_stream_file settings zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-11-30 -1 1 524463 Remove zuul._projects zuul feature/zuulv3 project-transitio Ian Wienand 2017-12-01 -1 -1 524773 handler: fix support for handler without launch_manager nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 2018-01-17 2 1 524620 └─Add a plugin interface for drivers nodepool feature/zuulv3 nodepool-drivers Tristan Cacqueray 02:53 AM 1 525851 zk: automatically retry command when connection is lost zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-12-27 -2 1 527579 web: add /{tenant}/jobs/{job_name} route zuul feature/zuulv3 zuul-web Tristan Cacqueray 2017-12-13 -1 1 528588 Use bandit security scanner zuul feature/zuulv3 bandit Clint 'SpamapS' B 
2017-12-17 1 528729 requirements: remove paramiko <2.0 cap zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-05 2 1 528969 config: add statsd-server config parameter nodepool feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-15 2 -1 529013 web: add OpenAPI documentation zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2017-12-27 2 529060 Make Zuul able to start with a broken config zuul feature/zuulv3 529060 Fabien Boucher 2018-01-08 1 529293 Set remote url on every getRepo in merger zuul feature/zuulv3 remote-url Tobias Henkel 2018-01-08 1 529821 Add winrm certificate handling zuul feature/zuulv3 windows-support Tobias Henkel 2018-01-17 1 530521 Slack driver zuul feature/zuulv3 slack-reporter Clint 'SpamapS' B 2018-01-10 -2 1 531009 Allow Ansible 2.4 zuul feature/zuulv3 ansible24 David Shrewsbury 2018-01-11 -2 1 531057 license: remove dos line break nodepool feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-04 -1 531171 WIP Rework log streaming to use logging zuul feature/zuulv3 zuul-stream-rewor Monty Taylor 2018-01-04 -1 531510 Delete stale jobdirs at startup zuul feature/zuulv3 James E. Blair 2018-01-12 -1 -1 531742 Also prefix the indexes if needed zuul feature/zuulv3 prefix-indexes Tobias Henkel 2018-01-08 1 531934 Add job for testing against devel of ansible zuul feature/zuulv3 turn-on-ansible Monty Taylor 2018-01-16 1 532023 Fix sql reporting with postgres zuul feature/zuulv3 fix-postgres Tobias Henkel 2018-01-09 1 532310 Switch to python3-dev / python3-devel in bindep.txt zuul feature/zuulv3 Paul Belanger 2018-01-10 2 1 532718 scheduler: better handle format status error zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-11 1 533256 Add DIB faq to docs nodepool feature/zuulv3 dib-faq Clark Boylan 2018-01-12 1 533509 ansible: honor command no_log module attribute zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-15 1 534190 Use re2 for change_matcher zuul feature/zuulv3 fix-zuul-re Clint 'SpamapS' B 2018-01-16 -1 534588 pep8: ignore E124 rule zuul feature/zuulv3 zuulv3 Tristan Cacqueray 2018-01-17 1 534956 Handle secrets in branches zuul feature/zuulv3 James E. Blair 2018-01-17 2 1 534974 └─Handle nodesets in branches zuul feature/zuulv3 James E. Blair 05:23 PM 2 1 534988 zk: check for client in properties nodepool feature/zuulv3 zuulv3 Tristan Cacqueray 02:20 AM 1 535423 WIP: Add provider info command nodepool feature/zuulv3 info_cmd David Shrewsbury 05:44 PM -1 Thanks for your understanding and your help, Jim From corvus at inaugust.com Mon Jan 22 15:44:48 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 22 Jan 2018 07:44:48 -0800 Subject: Job parent branches Message-ID: <87372x7v9r.fsf@meyer.lemoncheese.net> Hi, We've run into an issue a few times where a complex job defined in a project repo doesn't run because it can't find a parent job that matches the branch. In all the cases I've seen so far, it's children of OpenStack's devstack job which are at issue. That's because the devstack job is defined in the devstack repo (an untrusted project repo) with branches. That means that it has implied branch matchers attached to it. We could, of course, turn those off, however, the intention is for the devstack job to be branched. Running the Queens release is not the same as running Ocata, so the different branch variants should be used under different circumstances. The problem occurs when someone wants to inherit from the devstack job in a different repo, and there is not a devstack branch variant which matches the branch of the child job. 
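To illustrate the shape of the problem, a hedged sketch -- the child job name and the playbook path are assumptions for illustration, not the real definitions:

  # In openstack-dev/devstack (a branched, untrusted project repo), the
  # job is defined on the master branch and therefore carries an implied
  # "branches: master" matcher:
  - job:
      name: devstack
      run: playbooks/devstack.yaml

  # In another repo, on a branch such as 'devel' or 'feature/zuulv3':
  - job:
      name: my-devstack-child
      parent: devstack

When a change on 'devel' or 'feature/zuulv3' tries to run my-devstack-child, no devstack variant matches that branch, so the parent cannot be assembled and the job does not run.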
The way Zuul pieces together the final job that is run is to take the name of the job specified in the project-pipeline config, and then collect all the variants of that job which match the change. Each time it finds a match, it looks at the parent of that variant, and recursively collects all the variants of the parent which match the change (eliminating duplicates). If it can't find at least one variant of the parent which matches the change, the job is not run. The first time this came to my attention was for the tempest job. During development, the devstack job has only been defined on the master branch of devstack. The tempest job is in a repo with no branches, so it doesn't have an implied branch matcher. When any project invokes it, it will match the branch, and then Zuul will look for its parent job, 'devstack', and as long as the change is on the master branch, it will find it and run. However, if the change is on stable/ocata, and there is no devstack job defined in the stable/ocata branch of devstack, then it won't run. I think the solution to this case is to backport the devstack job to devstack's stable branches. That's the situation we will ultimately be in after our natural branching progression anyway, and we should treat this as a one-time cost of the transition to Zuul v3. The other two instances where this has come up are for projects which should run a child of the devstack job on a branch which does not, and never will, exist in devstack. When nodepool had a feature/zuulv3 branch, we were unable to run the devstack integration test job on that branch because of this. Ansible's default branch is 'devel', which means in order for us to run the shade integration test job on changes to Ansible, Zuul looks for a devstack variant which matches the 'devel' branch. In all cases, these issues could technically be solved by creating devstack job variants which matched these other branches. However, it should not be incumbent on the devstack maintainers to anticipate all of the branch names of other projects which might want to use the devstack job. We could create a devstack job variant with an explicit branch matcher which matches all branches, and treat the stable branch variants as deltas from that, however, that is more difficult to manage and doesn't fit our branch creation model. I can think of two solutions to this problem: 1) Have Zuul use a default branch when searching for parents with no matching branch variants. Every project in Zuul has a default branch (which can be specified in the 'project' config stanza, but the default default is 'master'). If, when searching for the parents of a job, Zuul finds no matching variants, it could perform the search a second time with the default branch of the parent job's project (all variants of a job are defined in the same repo, so we know which project the job is in, so there's only one default branch to search for each parent). 2) Allow someone creating a child job to specify a fallback branch for Zuul to use when searching for parent jobs. Essentially, we would modify the algorithm in the same way as the previous suggestion, except instead of determining the fallback branch based on the default branch of the parent job, we would let the child job supply the fallback branch. At each level of recursion as we walk up the inheritance path, a new fallback branch could be provided. I'm inclined to implement option (1) at least. That is very similar to what already happens today with git repo checkouts. 
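For reference, the default branch that option (1) would fall back to is already configurable on the project stanza; a minimal sketch, with the project name chosen purely for illustration:

  # Hypothetical example: declare that this project's default branch is
  # 'devel' rather than 'master'. Under option (1), Zuul would search
  # for parent job variants on this branch when none match the change.
  - project:
      name: ansible/ansible
      default-branch: devel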
I.e., if you did manage to run one of the jobs in the examples I described, you would get the master branch of devstack checked out in the job because of the existing fallback behavior. It makes sense that the job matching algorithm would be similar. And I think in many of the cases where this issue comes up, it would do the right thing automatically (certainly it would have in all the examples thus far). I'm on the fence about option (2). I see no reason not to add it other than perhaps it's unnecessary complexity. Maybe we should avoid implementing it until we find a compelling case for it? Or do folks anticipate a need for that now? Thanks, Jim From shrewsbury.dave at gmail.com Mon Jan 22 16:08:20 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 22 Jan 2018 11:08:20 -0500 Subject: Job parent branches In-Reply-To: <87372x7v9r.fsf@meyer.lemoncheese.net> References: <87372x7v9r.fsf@meyer.lemoncheese.net> Message-ID: Hi! On Mon, Jan 22, 2018 at 10:44 AM, James E. Blair wrote: [snip] > 2) Allow someone creating a child job to specify a fallback branch for > Zuul to use when searching for parent jobs. > > Essentially, we would modify the algorithm in the same way as the > previous suggestion, except instead of determining the fallback branch > based on the default branch of the parent job, we would let the child > job supply the fallback branch. At each level of recursion as we walk > up the inheritance path, a new fallback branch could be provided. > > I'm inclined to implement option (1) at least. That is very similar to > what already happens today with git repo checkouts. I.e., if you did > manage to run one of the jobs in the examples I described, you would get > the master branch of devstack checked out in the job because of the > existing fallback behavior. It makes sense that the job matching > algorithm would be similar. And I think in many of the cases where this > issue comes up, it would do the right thing automatically (certainly it > would have in all the examples thus far). > > I'm on the fence about option (2). I see no reason not to add it other > than perhaps it's unnecessary complexity. Maybe we should avoid > implementing it until we find a compelling case for it? Or do folks > anticipate a need for that now? > Although I could see wanting the ability to do #2 (for example, matching a new nodepool:feature/zuulv4 branch against a shade:feature/zuulv4 branch), I cannot immediately find a need for it right this instant. -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From shrewsbury.dave at gmail.com Mon Jan 22 16:14:04 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 22 Jan 2018 11:14:04 -0500 Subject: Job parent branches In-Reply-To: References: <87372x7v9r.fsf@meyer.lemoncheese.net> Message-ID: On Mon, Jan 22, 2018 at 11:08 AM, David Shrewsbury < shrewsbury.dave at gmail.com> wrote: > Hi! > > On Mon, Jan 22, 2018 at 10:44 AM, James E. Blair > wrote: > > [snip] > > >> 2) Allow someone creating a child job to specify a fallback branch for >> Zuul to use when searching for parent jobs. >> >> Essentially, we would modify the algorithm in the same way as the >> previous suggestion, except instead of determining the fallback branch >> based on the default branch of the parent job, we would let the child >> job supply the fallback branch. At each level of recursion as we walk >> up the inheritance path, a new fallback branch could be provided. 
>> >> I'm inclined to implement option (1) at least. That is very similar to >> what already happens today with git repo checkouts. I.e., if you did >> manage to run one of the jobs in the examples I described, you would get >> the master branch of devstack checked out in the job because of the >> existing fallback behavior. It makes sense that the job matching >> algorithm would be similar. And I think in many of the cases where this >> issue comes up, it would do the right thing automatically (certainly it >> would have in all the examples thus far). >> >> I'm on the fence about option (2). I see no reason not to add it other >> than perhaps it's unnecessary complexity. Maybe we should avoid >> implementing it until we find a compelling case for it? Or do folks >> anticipate a need for that now? >> > > Although I could see wanting the ability to do #2 (for example, matching > a new nodepool:feature/zuulv4 branch against a shade:feature/zuulv4 > branch), I cannot immediately find a need for it right this instant. > > Sorry, bad example. I meant something like nodepool:feature/zuulv4 against shade:feature/version2 (different theoretical branch name). -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From joshua.hesketh at gmail.com Tue Jan 23 00:58:39 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Tue, 23 Jan 2018 11:58:39 +1100 Subject: Job parent branches In-Reply-To: References: <87372x7v9r.fsf@meyer.lemoncheese.net> Message-ID: It seems to me that option #1 has no config change and is therefore "free" (as far as deployers are concerned) to implement. We could start with option #1 and see if that solves all of our potential use cases. However if we find we still need more fine-tuning we could do option #2 where-by if no fallback branch is specified on the child job, it then tries option #1 (so no change in functionality in "upgrading" to option #2, just a new feature). On Tue, Jan 23, 2018 at 3:14 AM, David Shrewsbury wrote: > > On Mon, Jan 22, 2018 at 11:08 AM, David Shrewsbury < > shrewsbury.dave at gmail.com> wrote: > >> Hi! >> >> On Mon, Jan 22, 2018 at 10:44 AM, James E. Blair >> wrote: >> >> [snip] >> >> >>> 2) Allow someone creating a child job to specify a fallback branch for >>> Zuul to use when searching for parent jobs. >>> >>> Essentially, we would modify the algorithm in the same way as the >>> previous suggestion, except instead of determining the fallback branch >>> based on the default branch of the parent job, we would let the child >>> job supply the fallback branch. At each level of recursion as we walk >>> up the inheritance path, a new fallback branch could be provided. >>> >>> I'm inclined to implement option (1) at least. That is very similar to >>> what already happens today with git repo checkouts. I.e., if you did >>> manage to run one of the jobs in the examples I described, you would get >>> the master branch of devstack checked out in the job because of the >>> existing fallback behavior. It makes sense that the job matching >>> algorithm would be similar. And I think in many of the cases where this >>> issue comes up, it would do the right thing automatically (certainly it >>> would have in all the examples thus far). >>> >>> I'm on the fence about option (2). I see no reason not to add it other >>> than perhaps it's unnecessary complexity. Maybe we should avoid >>> implementing it until we find a compelling case for it? Or do folks >>> anticipate a need for that now? 
>>> >> >> Although I could see wanting the ability to do #2 (for example, matching >> a new nodepool:feature/zuulv4 branch against a shade:feature/zuulv4 >> branch), I cannot immediately find a need for it right this instant. >> >> Sorry, bad example. I meant something like nodepool:feature/zuulv4 against > shade:feature/version2 (different theoretical branch name). > > > -Dave > -- > David Shrewsbury (Shrews) > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Jan 23 23:07:29 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 23 Jan 2018 15:07:29 -0800 Subject: Job parent branches In-Reply-To: <87372x7v9r.fsf@meyer.lemoncheese.net> (James E. Blair's message of "Mon, 22 Jan 2018 07:44:48 -0800") References: <87372x7v9r.fsf@meyer.lemoncheese.net> Message-ID: <87607s2mz2.fsf@meyer.lemoncheese.net> corvus at inaugust.com (James E. Blair) writes: > Hi, > > We've run into an issue a few times where a complex job defined in a > project repo doesn't run because it can't find a parent job that matches > the branch. > > In all the cases I've seen so far, it's children of OpenStack's devstack > job which are at issue. That's because the devstack job is defined in > the devstack repo (an untrusted project repo) with branches. That means > that it has implied branch matchers attached to it. > > We could, of course, turn those off, however, the intention is for the > devstack job to be branched. Running the Queens release is not the same > as running Ocata, so the different branch variants should be used under > different circumstances. > > The problem occurs when someone wants to inherit from the devstack job > in a different repo, and there is not a devstack branch variant which > matches the branch of the child job. > > The way Zuul pieces together the final job that is run is to take the > name of the job specified in the project-pipeline config, and then > collect all the variants of that job which match the change. Each time > it finds a match, it looks at the parent of that variant, and > recursively collects all the variants of the parent which match the > change (eliminating duplicates). If it can't find at least one variant > of the parent which matches the change, the job is not run. > > The first time this came to my attention was for the tempest job. > During development, the devstack job has only been defined on the master > branch of devstack. The tempest job is in a repo with no branches, so > it doesn't have an implied branch matcher. When any project invokes it, > it will match the branch, and then Zuul will look for its parent job, > 'devstack', and as long as the change is on the master branch, it will > find it and run. However, if the change is on stable/ocata, and there > is no devstack job defined in the stable/ocata branch of devstack, then > it won't run. > > I think the solution to this case is to backport the devstack job to > devstack's stable branches. That's the situation we will ultimately be > in after our natural branching progression anyway, and we should treat > this as a one-time cost of the transition to Zuul v3. > > The other two instances where this has come up are for projects which > should run a child of the devstack job on a branch which does not, and > never will, exist in devstack. 
> > When nodepool had a feature/zuulv3 branch, we were unable to run the > devstack integration test job on that branch because of this. Ansible's > default branch is 'devel', which means in order for us to run the shade > integration test job on changes to Ansible, Zuul looks for a devstack > variant which matches the 'devel' branch. > > In all cases, these issues could technically be solved by creating > devstack job variants which matched these other branches. However, it > should not be incumbent on the devstack maintainers to anticipate all of > the branch names of other projects which might want to use the devstack > job. We could create a devstack job variant with an explicit branch > matcher which matches all branches, and treat the stable branch variants > as deltas from that, however, that is more difficult to manage and > doesn't fit our branch creation model. > > I can think of two solutions to this problem: > > 1) Have Zuul use a default branch when searching for parents with no > matching branch variants. > > Every project in Zuul has a default branch (which can be specified in > the 'project' config stanza, but the default default is 'master'). If, > when searching for the parents of a job, Zuul finds no matching > variants, it could perform the search a second time with the default > branch of the parent job's project (all variants of a job are defined in > the same repo, so we know which project the job is in, so there's only > one default branch to search for each parent). > > 2) Allow someone creating a child job to specify a fallback branch for > Zuul to use when searching for parent jobs. > > Essentially, we would modify the algorithm in the same way as the > previous suggestion, except instead of determining the fallback branch > based on the default branch of the parent job, we would let the child > job supply the fallback branch. At each level of recursion as we walk > up the inheritance path, a new fallback branch could be provided. > > I'm inclined to implement option (1) at least. That is very similar to > what already happens today with git repo checkouts. I.e., if you did > manage to run one of the jobs in the examples I described, you would get > the master branch of devstack checked out in the job because of the > existing fallback behavior. It makes sense that the job matching > algorithm would be similar. And I think in many of the cases where this > issue comes up, it would do the right thing automatically (certainly it > would have in all the examples thus far). > > I'm on the fence about option (2). I see no reason not to add it other > than perhaps it's unnecessary complexity. Maybe we should avoid > implementing it until we find a compelling case for it? Or do folks > anticipate a need for that now? I've implemented the change described in (1): https://review.openstack.org/536998 In doing so, I found that a few things about it feel weird to me. First, the fact that this only applies to parents may mean that in some circumstances, users may still not get their expected behavior without performing some extra steps. For example, the shade/Ansible integration job Monty wants to run is currently only defined in the master branch of shade. So we could not directly add it to Ansible, since changes to the devel branch would not match. In fact, it's only after Monty's change to the branch matcher of that job to add devel (https://review.openstack.org/536159) that we get to the point where we can solve the problem of the parent not matching with (1). 
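For concreteness, a change like 536159 amounts to giving the job an explicit branch matcher in the shade repo; the job name and the exact matcher below are assumptions, not a quote of the actual change:

  # In the shade repo's .zuul.yaml (illustrative name):
  - job:
      name: shade-functional-devstack
      parent: devstack
      branches:
        - master
        - devel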
A solution to this dilemma would be to relax the branch matching constraints even further than in (1) -- we could, rather than only searching for default-branch variants of parent jobs, we could do so for all jobs as well. That means that if we say "run the integration job on changes to Ansible", as long as all the other matchers (files, etc) succeed, then something will run. In other words, we would no longer need to add the devel branch to the job definition in the shade repo. That's a two-edged blade -- it means that in many cases where we would not run a job now, we might in the future. Any time that someone added a job to a project-pipeline config on more branches than the job actually supported while counting on the job to simply not run on unsupported branches would see a behavior change -- the jobs would run where they did not before. But maybe that's okay? Maybe the fact that the project-pipeline variant of a job (i.e., where a user actually states "run this job for changes to this project") must always match is sufficient. Project-pipeline job invocations that are in in-repo configs are already tied to their branches. The biggest danger I see here would be for central configurations, where, for example, we might say "always run devstack on all branches of this project's changes". Today we will only run devstack on the branches where devstack is defined, but if we make this further change, we would always run devstack, and simply use the master version of devstack on branches which otherwise don't have a devstack job variant. I think the thing that bothers me the most about this is that once this is allowed, it's difficult to disallow. The only way to get the old behavior back would be to start adding more branch matchers to the project-pipeline configuration to further restrict when a job is run. On the other hand, I'm becoming more convinced that we should implement (2) from my message above. It seems arbitrary to only support falling back to the default branch of a project, when we know that branch mappings may be more complicated than that. Further, I'm starting to wonder if the approach in (2) might address my misgivings about (1). Perhaps if we made this kind of branch fallback always explicit, we can preserve the current conservative behavior, but still allow for the kind of branch mismatches we're running into. In the case of a job inheriting from another job (e.g., with Monty's change 536159), a user can say "I know this job is about to inherit from a job that may not match branches this job is designed to run on; in those cases, use this branch." Perhaps we can even say "use the default branch", and let folks explicitly opt-in to the implementation in (1). But also, in the case where there is no intermediate job (e.g., without Monty's change 536159), this attribute could still be set on the project-pipeline config. So where the shade job is added to Ansible, we could say "even though these changes are on devel, use the master shade job". I think I'd like to implement (2) on its own, without (1), and see what that looks like and if any other issues fall out of that. -Jim From corvus at inaugust.com Wed Jan 24 21:27:08 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 24 Jan 2018 13:27:08 -0800 Subject: Job parent branches In-Reply-To: <87607s2mz2.fsf@meyer.lemoncheese.net> (James E. 
Blair's message of "Tue, 23 Jan 2018 15:07:29 -0800") References: <87372x7v9r.fsf@meyer.lemoncheese.net> <87607s2mz2.fsf@meyer.lemoncheese.net> Message-ID: <87d11zx80j.fsf@meyer.lemoncheese.net> corvus at inaugust.com (James E. Blair) writes: > corvus at inaugust.com (James E. Blair) writes: > >> Hi, >> >> We've run into an issue a few times where a complex job defined in a >> project repo doesn't run because it can't find a parent job that matches >> the branch. >> >> In all the cases I've seen so far, it's children of OpenStack's devstack >> job which are at issue. That's because the devstack job is defined in >> the devstack repo (an untrusted project repo) with branches. That means >> that it has implied branch matchers attached to it. >> >> We could, of course, turn those off, however, the intention is for the >> devstack job to be branched. Running the Queens release is not the same >> as running Ocata, so the different branch variants should be used under >> different circumstances. >> >> The problem occurs when someone wants to inherit from the devstack job >> in a different repo, and there is not a devstack branch variant which >> matches the branch of the child job. >> >> The way Zuul pieces together the final job that is run is to take the >> name of the job specified in the project-pipeline config, and then >> collect all the variants of that job which match the change. Each time >> it finds a match, it looks at the parent of that variant, and >> recursively collects all the variants of the parent which match the >> change (eliminating duplicates). If it can't find at least one variant >> of the parent which matches the change, the job is not run. >> >> The first time this came to my attention was for the tempest job. >> During development, the devstack job has only been defined on the master >> branch of devstack. The tempest job is in a repo with no branches, so >> it doesn't have an implied branch matcher. When any project invokes it, >> it will match the branch, and then Zuul will look for its parent job, >> 'devstack', and as long as the change is on the master branch, it will >> find it and run. However, if the change is on stable/ocata, and there >> is no devstack job defined in the stable/ocata branch of devstack, then >> it won't run. >> >> I think the solution to this case is to backport the devstack job to >> devstack's stable branches. That's the situation we will ultimately be >> in after our natural branching progression anyway, and we should treat >> this as a one-time cost of the transition to Zuul v3. >> >> The other two instances where this has come up are for projects which >> should run a child of the devstack job on a branch which does not, and >> never will, exist in devstack. >> >> When nodepool had a feature/zuulv3 branch, we were unable to run the >> devstack integration test job on that branch because of this. Ansible's >> default branch is 'devel', which means in order for us to run the shade >> integration test job on changes to Ansible, Zuul looks for a devstack >> variant which matches the 'devel' branch. >> >> In all cases, these issues could technically be solved by creating >> devstack job variants which matched these other branches. However, it >> should not be incumbent on the devstack maintainers to anticipate all of >> the branch names of other projects which might want to use the devstack >> job. 
We could create a devstack job variant with an explicit branch >> matcher which matches all branches, and treat the stable branch variants >> as deltas from that, however, that is more difficult to manage and >> doesn't fit our branch creation model. >> >> I can think of two solutions to this problem: >> >> 1) Have Zuul use a default branch when searching for parents with no >> matching branch variants. >> >> Every project in Zuul has a default branch (which can be specified in >> the 'project' config stanza, but the default default is 'master'). If, >> when searching for the parents of a job, Zuul finds no matching >> variants, it could perform the search a second time with the default >> branch of the parent job's project (all variants of a job are defined in >> the same repo, so we know which project the job is in, so there's only >> one default branch to search for each parent). >> >> 2) Allow someone creating a child job to specify a fallback branch for >> Zuul to use when searching for parent jobs. >> >> Essentially, we would modify the algorithm in the same way as the >> previous suggestion, except instead of determining the fallback branch >> based on the default branch of the parent job, we would let the child >> job supply the fallback branch. At each level of recursion as we walk >> up the inheritance path, a new fallback branch could be provided. >> >> I'm inclined to implement option (1) at least. That is very similar to >> what already happens today with git repo checkouts. I.e., if you did >> manage to run one of the jobs in the examples I described, you would get >> the master branch of devstack checked out in the job because of the >> existing fallback behavior. It makes sense that the job matching >> algorithm would be similar. And I think in many of the cases where this >> issue comes up, it would do the right thing automatically (certainly it >> would have in all the examples thus far). >> >> I'm on the fence about option (2). I see no reason not to add it other >> than perhaps it's unnecessary complexity. Maybe we should avoid >> implementing it until we find a compelling case for it? Or do folks >> anticipate a need for that now? > > I've implemented the change described in (1): > https://review.openstack.org/536998 > > In doing so, I found that a few things about it feel weird to me. > First, the fact that this only applies to parents may mean that in some > circumstances, users may still not get their expected behavior without > performing some extra steps. For example, the shade/Ansible integration > job Monty wants to run is currently only defined in the master branch of > shade. So we could not directly add it to Ansible, since changes to the > devel branch would not match. In fact, it's only after Monty's change > to the branch matcher of that job to add devel > (https://review.openstack.org/536159) that we get to the point where we > can solve the problem of the parent not matching with (1). > > A solution to this dilemma would be to relax the branch matching > constraints even further than in (1) -- we could, rather than only > searching for default-branch variants of parent jobs, we could do so for > all jobs as well. That means that if we say "run the integration job on > changes to Ansible", as long as all the other matchers (files, etc) > succeed, then something will run. In other words, we would no longer > need to add the devel branch to the job definition in the shade repo. 
> > That's a two-edged blade -- it means that in many cases where we would > not run a job now, we might in the future. Any time that someone added > a job to a project-pipeline config on more branches than the job > actually supported while counting on the job to simply not run on > unsupported branches would see a behavior change -- the jobs would run > where they did not before. > > But maybe that's okay? Maybe the fact that the project-pipeline variant > of a job (i.e., where a user actually states "run this job for changes > to this project") must always match is sufficient. Project-pipeline job > invocations that are in in-repo configs are already tied to their > branches. The biggest danger I see here would be for central > configurations, where, for example, we might say "always run devstack on > all branches of this project's changes". Today we will only run > devstack on the branches where devstack is defined, but if we make this > further change, we would always run devstack, and simply use the master > version of devstack on branches which otherwise don't have a devstack > job variant. > > I think the thing that bothers me the most about this is that once this > is allowed, it's difficult to disallow. The only way to get the old > behavior back would be to start adding more branch matchers to the > project-pipeline configuration to further restrict when a job is run. > > On the other hand, I'm becoming more convinced that we should implement > (2) from my message above. It seems arbitrary to only support falling > back to the default branch of a project, when we know that branch > mappings may be more complicated than that. > > Further, I'm starting to wonder if the approach in (2) might address my > misgivings about (1). Perhaps if we made this kind of branch fallback > always explicit, we can preserve the current conservative behavior, but > still allow for the kind of branch mismatches we're running into. In > the case of a job inheriting from another job (e.g., with Monty's change > 536159), a user can say "I know this job is about to inherit from a job > that may not match branches this job is designed to run on; in those > cases, use this branch." Perhaps we can even say "use the default > branch", and let folks explicitly opt-in to the implementation in (1). > But also, in the case where there is no intermediate job (e.g., without > Monty's change 536159), this attribute could still be set on the > project-pipeline config. So where the shade job is added to Ansible, we > could say "even though these changes are on devel, use the master shade > job". > > I think I'd like to implement (2) on its own, without (1), and see what > that looks like and if any other issues fall out of that. I implemented a variant of (2) ... let's call it (3) ... which overloads the 'override-checkout' attributes (of both job, and job.required-projects) to additionally mean "pretend this change is on this branch". The implementation is in: https://review.openstack.org/537655 So a user could say: - job: name: foo parent: devstack override-checkout: master And if that job ran on a 'devel' branch change, when it went to look for a devstack job to inherit from, it would appear as if the change it was running for was on the master branch, rather than 'devel', and so it would match the devstack job defined on the 'master' branch, in addition to ensuring that when devstack was checked out, it got the 'master' branch. This is the part I like about this change. 
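Restated with conventional YAML indentation for readability (the names are the placeholders from the example above):

  - job:
      name: foo
      parent: devstack
      override-checkout: master

In other words, for a change on 'devel', both the search for parent job variants and the devstack checkout behave as though the change were on master.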
I think that most of the time when you are setting an override-checkout for a project, if you happen to also be using jobs defined for that project, you probably want those variants too. That example is general -- the same works for required-projects as well:

  - job:
      name: foo
      parent: devstack
      required-projects:
        - name: openstack-dev/devstack
          override-checkout: master

This would only perform the branch masking described above when looking for devstack jobs -- when searching for any other jobs in the inheritance hierarchy, the actual branch would be used. The only misgiving I have about this is whether we might run into a problem where we don't want to affect the job variants collected but do still want to affect the checkout, or perhaps want to set those to different values. I know that some folks are using override-checkout to check out tags rather than branches. I don't think this approach inhibits that -- I added a check that, before overriding the branch matcher, we ensure it's an actual branch. So if you say "override-checkout: diablo-eol" that tag will still be checked out, but you'll still get the old branch matching behavior for the jobs. Basically, I don't think this hurts that use case, but it doesn't help it, in that there isn't a way to say "checkout diablo-eol, and use the pike job definition". Well, there is a way -- you just update the devstack job itself. It's unclear how far we should go in supporting this use case, which is effectively "using a job on a branch it wasn't intended for". Anyway, it would be very simple to split this out into a new variable, and that should be safe. I think I like the approach of having it on both the job and job.required-projects -- it gives us a lot more flexibility and specificity, which I think could come in handy. So even if we do split it out, I think it should behave the same. -Jim
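Tying this back to the shade/Ansible case earlier in the thread: if the overloaded attribute can also be supplied where a job is attached to a project's pipelines (an assumption here, as is the job name), the devel-branch mismatch could be handled at the point of invocation:

  # Hypothetical .zuul.yaml on Ansible's devel branch; the job name is
  # illustrative. The variant search and checkouts for this invocation
  # would behave as if the change were on master.
  - project:
      check:
        jobs:
          - shade-functional-devstack:
              override-checkout: master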