From corvus at inaugust.com Thu Feb 1 18:15:00 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 01 Feb 2018 10:15:00 -0800 Subject: Feature branch merge update Message-ID: <87wozwd1az.fsf@meyer.lemoncheese.net> Hi, After merging the feature/zuulv3 branches of Nodepool and Zuul back into master, a number of open changes on the master branch remained from before the merge (and, in many cases, much older than that). Some of us have gone through to try to triage those changes. Some of them were implemented on the feature/zuulv3 branch or otherwise just don't apply anymore (a component was removed, or functions differently). These changes were abandoned with a short explanation of why, and an invitation to restore them if the, admittedly cursory, analysis was wrong. Some of them may still apply, but in those cases, almost certainly need to be rebased. These changes we marked with Workflow -1: Work in Progress, with the hope that the author will update them when time permits. At this point, all of the open changes on Zuul and Nodepool which are not marked WIP should be relevant to the current state of the tree, and should be reviewed. So if you've only been looking at a subset of open changes to avoid the older changes on master which may not be relevant, you should be able to remove that filter and consider all open, non-WIP changes. Thanks, Jim From corvus at inaugust.com Thu Feb 1 18:37:49 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 01 Feb 2018 10:37:49 -0800 Subject: Zuul website Message-ID: <87fu6kd08y.fsf@meyer.lemoncheese.net> Hi, We now have a website at https://zuul-ci.org/ The content is in git in the openstack-infra/zuul-website repo: https://git.openstack.org/cgit/openstack-infra/zuul-website/tree/ I put together the site using a CC-BY 3.0 template, and I expect that the content of that repo will be licensed CC-BY as well (as most of our docs-only repos are). I enthusiastically invite other folks to contribute to the website by proposing changes to that repo. Currently the zuul-core team has approval rights, but we can expand it as warranted -- I expect there's an overlapping group of folks who would like to contribute in this area. I am not emotionally attached to anything about the current website. Please feel free to propose changes to absolutely everything -- including replacing the site entirely, using a different template, title image, etc. What's there is a starting point, and an attempt to organize information and present it to folks in a way that helps them find the right resources quickly. I only ask that any changes try to keep this in mind: the site has many audiences, and we should think about how they all interact with it. The site itself is currently simply static HTML. You can view it on your workstation with no special tools. I looked into static site generators, and there are promising tools in this space, but ultimately I felt that until we get to the point where we want more than a handful of pages, the extra work isn't worth it. Some parts of the template are not in use in the current site. Most notable is the navigation bar. We can add those back in easily as warranted. You may want to visit the live demo of the template to see all of its offerings: https://html5up.net/arcana So as far as the infrastructure goes, I suggest that we continue with simple static pages for now, and when we get to the point where we know what kind of content we want to add that warrants more tooling, we adopt a static site generator then. 
I do think we may want to start publishing the zuul, nodepool, and zuul-jobs docs under this domain name as well. But we can do that in the same way we do with OpenStack -- publish the docs in a subdirectory, so they won't be part of the 'zuul-website' repo. We can start looking at adapting those jobs soon. Thanks, Jim From gdk at redhat.com Thu Feb 1 18:45:46 2018 From: gdk at redhat.com (Greg Dekoenigsberg) Date: Thu, 1 Feb 2018 13:45:46 -0500 Subject: Zuul website In-Reply-To: <87fu6kd08y.fsf@meyer.lemoncheese.net> References: <87fu6kd08y.fsf@meyer.lemoncheese.net> Message-ID: On Thu, Feb 1, 2018 at 1:37 PM, James E. Blair wrote: > Hi, > > We now have a website at https://zuul-ci.org/ > > The content is in git in the openstack-infra/zuul-website repo: > > https://git.openstack.org/cgit/openstack-infra/zuul-website/tree/ > > I put together the site using a CC-BY 3.0 template, and I expect that > the content of that repo will be licensed CC-BY as well (as most of our > docs-only repos are). > > I enthusiastically invite other folks to contribute to the website by > proposing changes to that repo. Currently the zuul-core team has > approval rights, but we can expand it as warranted -- I expect there's > an overlapping group of folks who would like to contribute in this area. > > I am not emotionally attached to anything about the current website. > Please feel free to propose changes to absolutely everything -- > including replacing the site entirely, using a different template, > title image, etc. > > What's there is a starting point, and an attempt to organize information > and present it to folks in a way that helps them find the right > resources quickly. I only ask that any changes try to keep this in > mind: the site has many audiences, and we should think about how they > all interact with it. > > The site itself is currently simply static HTML. You can view it on > your workstation with no special tools. I looked into static site > generators, and there are promising tools in this space, but ultimately > I felt that until we get to the point where we want more than a handful > of pages, the extra work isn't worth it. > > Some parts of the template are not in use in the current site. Most > notable is the navigation bar. We can add those back in easily as > warranted. You may want to visit the live demo of the template to see > all of its offerings: > > https://html5up.net/arcana > > So as far as the infrastructure goes, I suggest that we continue with > simple static pages for now, and when we get to the point where we know > what kind of content we want to add that warrants more tooling, we adopt > a static site generator then. > > I do think we may want to start publishing the zuul, nodepool, and > zuul-jobs docs under this domain name as well. But we can do that in > the same way we do with OpenStack -- publish the docs in a subdirectory, > so they won't be part of the 'zuul-website' repo. We can start looking > at adapting those jobs soon. > > Thanks, > > Jim It looks beautiful. :) > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Wed Feb 7 19:40:14 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Wed, 07 Feb 2018 11:40:14 -0800 Subject: How to determine what has changed in a test Message-ID: <87mv0kzizl.fsf@meyer.lemoncheese.net> Hi, The problem of how to determine what has changed in the change under test has come to my attention twice today, and it's still early, so I thought it'd be a good topic for a ML discussion. The two issues that have come to my attention represent two similar kinds of tests: a) Inspect the specific commit(s) under test (e.g., inspect the commit message). b) Inspect the delta of applying the patch (e.g., run coverage twice and calculate the difference). Simon Westphahl has proposed a patch focused on the first case, which adds a zuul.commits variable with the sha(s) of the patch(es) under test: https://review.openstack.org/539975 Meanwhile, here is an example of a test which relies on the existence of an origin remote: http://logs.openstack.org/58/473658/19/check/openstack-tox-cover/0f6849c/job-output.txt.gz#_2018-02-07_05_43_43_163295 We also have the minor example of pip emitting warnings if there is no origin remote; if we can fix that here, that's fine, but I don't think it's significant enough to drive our design. The patch to add zuul.commits makes case (a) fairly simple -- you can have ansible iterate over the values in the variable. I think that case (b) might be a bit harder with zuul.commits -- specialized tooling would be needed to create the alternative repo state (ie, a new branch) in order to perform the git checkout or diff operations. Because in (b) we are focused more on the previous state than the commit, we can't just checkout zuul.commit, we need to checkout the parent of zuul.commit. However, if we were to create an origin remote, and populate it with the "previous" git repo state, (that is, either the current state of the tree, or that with the changes ahead of this one applied), it seems we can accommodate both (a) and (b) in an intuitive manner. You could get the values that would be put in zuul.commits by performing 'git diff origin/master..master', which addresses (a), and you could address (b) with 'git checkout origin/master'. Among the reasons we don't have any remotes on the git checkouts are: * What would the URL be? * What would the content be? * Concern about jobs altering the git repo under test by performing naïve and incorrect git operations. We've addressed the content question, that leaves the other two. Ideally, if we did this, I'd like to have a remote with no URL. However, while that may be possible, some investigation suggests that we may want to avoid that as it looks like it may be a bit of an edge case that not all tools or libraries would support correctly. However, if we set it to a bogus URL, that may be sufficient to satisfy the tooling, as well as prevent git repo manipulation errors. I'm inclined to favor this approach, as I think it addresses both cases and is fairly intuitive. Do folks think it would work? Are there things we would be missing as compared to the zuul.commits approach? Thanks, Jim From Tobias.Henkel at bmw.de Wed Feb 7 20:32:43 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Wed, 7 Feb 2018 20:32:43 +0000 Subject: How to determine what has changed in a test In-Reply-To: <87mv0kzizl.fsf@meyer.lemoncheese.net> References: <87mv0kzizl.fsf@meyer.lemoncheese.net> Message-ID: Hi, These are interesting question to discuss. On 07.02.18, 20:40, "James E. 
Blair" wrote: Hi, The problem of how to determine what has changed in the change under test has come to my attention twice today, and it's still early, so I thought it'd be a good topic for a ML discussion. The two issues that have come to my attention represent two similar kinds of tests: a) Inspect the specific commit(s) under test (e.g., inspect the commit message). b) Inspect the delta of applying the patch (e.g., run coverage twice and calculate the difference). Simon Westphahl has proposed a patch focused on the first case, which adds a zuul.commits variable with the sha(s) of the patch(es) under test: https://review.openstack.org/539975 Meanwhile, here is an example of a test which relies on the existence of an origin remote: http://logs.openstack.org/58/473658/19/check/openstack-tox-cover/0f6849c/job-output.txt.gz#_2018-02-07_05_43_43_163295 We also have the minor example of pip emitting warnings if there is no origin remote; if we can fix that here, that's fine, but I don't think it's significant enough to drive our design. The patch to add zuul.commits makes case (a) fairly simple -- you can have ansible iterate over the values in the variable. I think that case (b) might be a bit harder with zuul.commits -- specialized tooling would be needed to create the alternative repo state (ie, a new branch) in order to perform the git checkout or diff operations. Because in (b) we are focused more on the previous state than the commit, we can't just checkout zuul.commit, we need to checkout the parent of zuul.commit. However, if we were to create an origin remote, and populate it with the "previous" git repo state, (that is, either the current state of the tree, or that with the changes ahead of this one applied), it seems we can accommodate both (a) and (b) in an intuitive manner. You could get the values that would be put in zuul.commits by performing 'git diff origin/master..master', which addresses (a), and you could address (b) with 'git checkout origin/master'. Among the reasons we don't have any remotes on the git checkouts are: * What would the URL be? * What would the content be? * Concern about jobs altering the git repo under test by performing naïve and incorrect git operations. We've addressed the content question, that leaves the other two. Ideally, if we did this, I'd like to have a remote with no URL. However, while that may be possible, some investigation suggests that we may want to avoid that as it looks like it may be a bit of an edge case that not all tools or libraries would support correctly. However, if we set it to a bogus URL, that may be sufficient to satisfy the tooling, as well as prevent git repo manipulation errors. I just tried that out and it seems while you can set an empty remote even git itself (git fetch origin) behaves differently. So if we go that way I'd suggest we use a bogus url with no network interaction like 'file:///dev/null'. I'm inclined to favor this approach, as I think it addresses both cases and is fairly intuitive. Do folks think it would work? Are there things we would be missing as compared to the zuul.commits approach? I think there should be no functional difference compared to what we can get via git log. However we need to cope with the merge commit zuul creates, but I think that should be no real problem. When using github we should think about how merges within the PR can be handled. In the github workflow it is very common to merge master into the PR to update it. 
In this case github lists the commits in an order which is not obvious at first glance. So I think in this case 'git log' might be even superior as it has some means to better filter/sort the commits compared with the commit list of github. Thanks, Jim Kind regards Tobias From cboylan at sapwetik.org Wed Feb 7 23:16:53 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 07 Feb 2018 15:16:53 -0800 Subject: How to determine what has changed in a test In-Reply-To: <87mv0kzizl.fsf@meyer.lemoncheese.net> References: <87mv0kzizl.fsf@meyer.lemoncheese.net> Message-ID: <1518045413.1507887.1263337672.58E1B018@webmail.messagingengine.com> On Wed, Feb 7, 2018, at 11:40 AM, James E. Blair wrote: > Hi, > > The problem of how to determine what has changed in the change under > test has come to my attention twice today, and it's still early, so I > thought it'd be a good topic for a ML discussion. > > The two issues that have come to my attention represent two similar > kinds of tests: > > a) Inspect the specific commit(s) under test (e.g., inspect the commit > message). > > b) Inspect the delta of applying the patch (e.g., run coverage twice and > calculate the difference). > > Simon Westphahl has proposed a patch focused on the first case, which > adds a zuul.commits variable with the sha(s) of the patch(es) under > test: https://review.openstack.org/539975 > > Meanwhile, here is an example of a test which relies on the existence of > an origin remote: > > > http://logs.openstack.org/58/473658/19/check/openstack-tox-cover/0f6849c/job-output.txt.gz#_2018-02-07_05_43_43_163295 > > We also have the minor example of pip emitting warnings if there is no > origin remote; if we can fix that here, that's fine, but I don't think > it's significant enough to drive our design. > > The patch to add zuul.commits makes case (a) fairly simple -- you can > have ansible iterate over the values in the variable. > > I think that case (b) might be a bit harder with zuul.commits -- > specialized tooling would be needed to create the alternative repo state > (ie, a new branch) in order to perform the git checkout or diff > operations. Because in (b) we are focused more on the previous state > than the commit, we can't just checkout zuul.commit, we need to checkout > the parent of zuul.commit. > > However, if we were to create an origin remote, and populate it with the > "previous" git repo state, (that is, either the current state of the > tree, or that with the changes ahead of this one applied), it seems we > can accommodate both (a) and (b) in an intuitive manner. > > You could get the values that would be put in zuul.commits by > performing 'git diff origin/master..master', which addresses (a), and > you could address (b) with 'git checkout origin/master'. > > Among the reasons we don't have any remotes on the git checkouts are: > > * What would the URL be? > * What would the content be? > * Concern about jobs altering the git repo under test by performing > naïve and incorrect git operations. > > We've addressed the content question, that leaves the other two. > Ideally, if we did this, I'd like to have a remote with no URL. > However, while that may be possible, some investigation suggests that we > may want to avoid that as it looks like it may be a bit of an edge case > that not all tools or libraries would support correctly. However, if we > set it to a bogus URL, that may be sufficient to satisfy the tooling, as > well as prevent git repo manipulation errors. 
It seems like the origin url should be the repo's canonical location url for correctness. This opens jobs up to bad git repo manipulation but nothing prevents users from doing it themselves in the current situation. If we really want to use a bogus url then avoiding "origin" might be best as the name for the remote. We could call it base or something like that. Another option may be to use a non remote ref and then you can git diff refs/zuul/base/master..master then continue to have no remotes. This avoids breaking tooling with bogus remote urls in unexpected ways but provides for jobs to find the ref that they need. Basically I think we should avoid using remotes if the remote isn't going to be valid and instead use some other pointer (or use a valid remote). > > I'm inclined to favor this approach, as I think it addresses both cases > and is fairly intuitive. Do folks think it would work? Are there > things we would be missing as compared to the zuul.commits approach? > > Thanks, > > Jim From corvus at inaugust.com Thu Feb 8 00:03:52 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 07 Feb 2018 16:03:52 -0800 Subject: How to determine what has changed in a test In-Reply-To: <1518045413.1507887.1263337672.58E1B018@webmail.messagingengine.com> (Clark Boylan's message of "Wed, 07 Feb 2018 15:16:53 -0800") References: <87mv0kzizl.fsf@meyer.lemoncheese.net> <1518045413.1507887.1263337672.58E1B018@webmail.messagingengine.com> Message-ID: <87o9l0xs7r.fsf@meyer.lemoncheese.net> Clark Boylan writes: > It seems like the origin url should be the repo's canonical location > url for correctness. This opens jobs up to bad git repo manipulation > but nothing prevents users from doing it themselves in the current > situation. If we really want to use a bogus url then avoiding "origin" > might be best as the name for the remote. We could call it base or > something like that. That was my first instinct as well, however, in many cases the URL that Zuul uses or the canonical location won't be accessible to the test node, or might have a username and credentials attached. Having a working URL might make it easy to grow a dependency on that and put those kinds of systems at a disadvantage. And we should be careful not to accidentally suggest or encourage the idea that the remote connection is valid -- tests are running in a speculative future state which can not be represented outside the test environment. Of course, some times you need to access a remote (for example, OpenStack's release and proposal jobs). I think it's okay to do extra work to set up that interaction when required, since it's really outside of the scope of Zuul. Using the 'origin' remote has the advantage of being intuitive to most git users, and it would allow tools like pip and https://github.com/Bachmann1234/diff-cover and pip to work as designed with no effort. Diff-cover can be used with an alternate branch, so it's not a requirement, it just seems like the most sensible default. There's no reason origin needs to be the canonical location (it is a DVCS after all), and in those cases, like the OpenStack release jobs where we do want to interact with a remote, we can make sure to update it appropriately (either by removing it or using set-url before use). > Another option may be to use a non remote ref and then you can git > diff refs/zuul/base/master..master then continue to have no > remotes. This avoids breaking tooling with bogus remote urls in > unexpected ways but provides for jobs to find the ref that they need. 
That's true. It solves the URL problem. It just makes things like diff-cover a little more difficult to use (and probably creates a difference between how a user would run diff-cover locally vs. in a test). -Jim From iwienand at redhat.com Mon Feb 19 03:25:34 2018 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 19 Feb 2018 14:25:34 +1100 Subject: [nodepool] Restricting images to specific nodepool builders Message-ID: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> Hi, How should we go about restricting certain image builds to specific nodepool builder instances? My immediate issue is with ARM64 image builds, which I only want to happen on a builder hosted in an ARM64 cloud. Currently, the builders go through the image list and check "is the existing image missing or too old, if so, build" [1]. Additionally, all builders share a configuration file [2]; so builders don't know "who they are". I'd propose we add an arbitrary tag/match system so that builders can pickup only those builds they mark themselves capable of building? e.g. diskimages would specify required builder tags similar to: --- diskimages: - name: arm64-ubuntu-xenial elements: - block-device-efi - vm - ubuntu-minimal ... env-vars: TMPDIR: /opt/dib_tmp DIB_CHECKSUM: '1' ... builder-requires: architecture: arm64 --- The nodepool.yaml would grow another section similar: --- builder-provides: architecture: arm64 something_else_unique_about_this_buidler: true --- For OpenStack, we would template this section in the config file via puppet in [2], ensuring above that only our theoretical ARM64 build machine had that section in it's config. The nodepool-buidler build loop can then check that its builder-provides section has all the tags specified in an image's "builder-requires" section before deciding to start building. Thoughts welcome :) -i [1] https://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/builder.py#n607 [2] https://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/nodepool.yaml From shrewsbury.dave at gmail.com Mon Feb 19 13:28:27 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 19 Feb 2018 08:28:27 -0500 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> Message-ID: Hi, On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand wrote: > Hi, > > How should we go about restricting certain image builds to specific > nodepool builder instances? My immediate issue is with ARM64 image > builds, which I only want to happen on a builder hosted in an ARM64 > cloud. > > Currently, the builders go through the image list and check "is the > existing image missing or too old, if so, build" [1]. Additionally, > all builders share a configuration file [2]; so builders don't know > "who they are". > > Why not just split the builder configuration file? I don't see a need to add code to do this. > I'd propose we add an arbitrary tag/match system so that builders can > pickup only those builds they mark themselves capable of building? > > e.g. diskimages would specify required builder tags similar to: > > --- > diskimages: > - name: arm64-ubuntu-xenial > elements: > - block-device-efi > - vm > - ubuntu-minimal > ... > env-vars: > TMPDIR: /opt/dib_tmp > DIB_CHECKSUM: '1' > ... 
> builder-requires: > architecture: arm64 > --- > > The nodepool.yaml would grow another section similar: > > --- > builder-provides: > architecture: arm64 > something_else_unique_about_this_buidler: true > --- > > For OpenStack, we would template this section in the config file via > puppet in [2], ensuring above that only our theoretical ARM64 build > machine had that section in it's config. > > The nodepool-buidler build loop can then check that its > builder-provides section has all the tags specified in an image's > "builder-requires" section before deciding to start building. > > Thoughts welcome :) > > -i > > [1] https://git.openstack.org/cgit/openstack-infra/nodepool/ > tree/nodepool/builder.py#n607 > [2] https://git.openstack.org/cgit/openstack-infra/project- > config/tree/nodepool/nodepool.yaml > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Mon Feb 19 15:23:28 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 19 Feb 2018 10:23:28 -0500 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> Message-ID: <20180219152328.GA7920@localhost.localdomain> On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote: > Hi, > > On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand wrote: > > > Hi, > > > > How should we go about restricting certain image builds to specific > > nodepool builder instances? My immediate issue is with ARM64 image > > builds, which I only want to happen on a builder hosted in an ARM64 > > cloud. > > > > Currently, the builders go through the image list and check "is the > > existing image missing or too old, if so, build" [1]. Additionally, > > all builders share a configuration file [2]; so builders don't know > > "who they are". > > > > > > Why not just split the builder configuration file? I don't see a need to > add code > to do this. > In our case (openstack-infra) this will require another change to puppet-nodepool to support this. Not that we cannot, but it will now mean we'll have 7[1] different nodepool configuration files to now manage. 4 x nodepool-launchers, 3 x nodepool-builders, since we have 7 services running. We could update puppet to start templating or add support for nodepool.d (like zuul.d) and better split our configs too. I just haven't found time to write that patch. I did submit support homing diskimage builds to specific builder[2] a while back, which is more inline with what ianw is asking. This allows us to assign images to builders, if set. [1] http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool [2] https://review.openstack.org/461239/ > > > > > I'd propose we add an arbitrary tag/match system so that builders can > > pickup only those builds they mark themselves capable of building? > > > > e.g. diskimages would specify required builder tags similar to: > > > > --- > > diskimages: > > - name: arm64-ubuntu-xenial > > elements: > > - block-device-efi > > - vm > > - ubuntu-minimal > > ... > > env-vars: > > TMPDIR: /opt/dib_tmp > > DIB_CHECKSUM: '1' > > ... 
> > builder-requires: > > architecture: arm64 > > --- > > > > The nodepool.yaml would grow another section similar: > > > > --- > > builder-provides: > > architecture: arm64 > > something_else_unique_about_this_buidler: true > > --- > > > > For OpenStack, we would template this section in the config file via > > puppet in [2], ensuring above that only our theoretical ARM64 build > > machine had that section in it's config. > > > > The nodepool-buidler build loop can then check that its > > builder-provides section has all the tags specified in an image's > > "builder-requires" section before deciding to start building. > > > > Thoughts welcome :) > > > > -i > > > > [1] https://git.openstack.org/cgit/openstack-infra/nodepool/ > > tree/nodepool/builder.py#n607 > > [2] https://git.openstack.org/cgit/openstack-infra/project- > > config/tree/nodepool/nodepool.yaml > > > > _______________________________________________ > > Zuul-discuss mailing list > > Zuul-discuss at lists.zuul-ci.org > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > > > -- > David Shrewsbury (Shrews) > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From mordred at inaugust.com Mon Feb 19 16:20:54 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 19 Feb 2018 10:20:54 -0600 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <20180219152328.GA7920@localhost.localdomain> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> Message-ID: <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> On 02/19/2018 09:23 AM, Paul Belanger wrote: > On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote: >> Hi, >> >> On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand wrote: >> >>> Hi, >>> >>> How should we go about restricting certain image builds to specific >>> nodepool builder instances? My immediate issue is with ARM64 image >>> builds, which I only want to happen on a builder hosted in an ARM64 >>> cloud. >>> >>> Currently, the builders go through the image list and check "is the >>> existing image missing or too old, if so, build" [1]. Additionally, >>> all builders share a configuration file [2]; so builders don't know >>> "who they are". >>> >>> >> >> Why not just split the builder configuration file? I don't see a need to >> add code >> to do this. >> > In our case (openstack-infra) this will require another change to > puppet-nodepool to support this. Not that we cannot, but it will now mean we'll > have 7[1] different nodepool configuration files to now manage. 4 x > nodepool-launchers, 3 x nodepool-builders, since we have 7 services running. > > We could update puppet to start templating or add support for nodepool.d (like > zuul.d) and better split our configs too. I just haven't found time to write > that patch. > > I did submit support homing diskimage builds to specific builder[2] a while > back, which is more inline with what ianw is asking. This allows us to assign > images to builders, if set. I'd honestly prefer if we got to a place where we were not splitting config files like we do today, but instead have a config file and we have nodepool figure out any splitting needed. I think that's a larger effort that needs some design ... but in general I personally consider the current need to split the config file by hand to be a bug, not a feature. 
If we consider this builder need alongside the discussion about cloud-region awareness for zuul executors, I think what we're coming to is that there are some actions we want to be able to be cloud-region aware and we might want to consider a holistic design for such awareness and/or tying. >> >> >> >>> I'd propose we add an arbitrary tag/match system so that builders can >>> pickup only those builds they mark themselves capable of building? >>> >>> e.g. diskimages would specify required builder tags similar to: >>> >>> --- >>> diskimages: >>> - name: arm64-ubuntu-xenial >>> elements: >>> - block-device-efi >>> - vm >>> - ubuntu-minimal >>> ... >>> env-vars: >>> TMPDIR: /opt/dib_tmp >>> DIB_CHECKSUM: '1' >>> ... >>> builder-requires: >>> architecture: arm64 >>> --- >>> >>> The nodepool.yaml would grow another section similar: >>> >>> --- >>> builder-provides: >>> architecture: arm64 >>> something_else_unique_about_this_buidler: true >>> --- >>> >>> For OpenStack, we would template this section in the config file via >>> puppet in [2], ensuring above that only our theoretical ARM64 build >>> machine had that section in it's config. >>> >>> The nodepool-buidler build loop can then check that its >>> builder-provides section has all the tags specified in an image's >>> "builder-requires" section before deciding to start building. >>> >>> Thoughts welcome :) >>> >>> -i >>> >>> [1] https://git.openstack.org/cgit/openstack-infra/nodepool/ >>> tree/nodepool/builder.py#n607 >>> [2] https://git.openstack.org/cgit/openstack-infra/project- >>> config/tree/nodepool/nodepool.yaml >>> >>> _______________________________________________ >>> Zuul-discuss mailing list >>> Zuul-discuss at lists.zuul-ci.org >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >>> >> >> >> >> -- >> David Shrewsbury (Shrews) > >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > From dmsimard at redhat.com Mon Feb 19 16:41:39 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Mon, 19 Feb 2018 11:41:39 -0500 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> Message-ID: On Mon, Feb 19, 2018 at 11:20 AM, Monty Taylor wrote: > I'd honestly prefer if we got to a place where we were not splitting config files > like we do today, but instead have a config file and we have nodepool figure > out any splitting needed. I think that's a larger effort that needs some design > ... but in general I personally consider the current need to split the config file > by hand to be a bug, not a feature. +1, I think we already have the necessary tooling in place to "shard" things appropriately between launchers (gearman, zookeeper, etc) without having to split configuration files. Splitting things manually per launcher is also prone to failure/SPoF if a particular launcher is not behaving well for some reason. 
> If we consider this builder need alongside the discussion about cloud-region > awareness for zuul executors, I think what we're coming to is that there are > some actions we want to be able to be cloud-region aware and we might want > to consider a holistic design for such awareness and/or tying. > +1, and for reference, the story about Zuul executor "zones" [1]. I think it would make sense to have a common logic around "zones" (or "clouds", or "providers", whatever) because really you might not just need an executor in a private network. In reality you'll also need a nodepool-launcher to launch VMs in that private cloud. I'm not intimate with the internals of nodepool-builder but what component ultimately takes care of uploading the image, is that nodepool-launcher ? Otherwise you also need a nodepool-builder instance in that private network... I see it roughly looking like this (I hope this renders correctly, otherwise see [2]):

       "LAN"             Firewall            "WAN"
                             +
+-----------------+          |          +--------------+
|Zuul Executor    |          |          |Zuul Scheduler|
|Nodepool builder +--------->           |Gearman       |
|Nodepool launcher|          |          |Zookeeper     |
+-------+---------+          |          +--------------+
        |                    |
        v                    |
  Private cloud              +

[1]: https://storyboard.openstack.org/#!/story/2001125 [2]: https://i.imgur.com/QVSFgd6.png David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From pabelanger at redhat.com Mon Feb 19 16:42:15 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 19 Feb 2018 11:42:15 -0500 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> Message-ID: <20180219164151.GA16043@localhost.localdomain> On Mon, Feb 19, 2018 at 10:20:54AM -0600, Monty Taylor wrote: > On 02/19/2018 09:23 AM, Paul Belanger wrote: > > On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote: > > > Hi, > > > > > > On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand wrote: > > > > > > > Hi, > > > > > > > > How should we go about restricting certain image builds to specific > > > > nodepool builder instances? My immediate issue is with ARM64 image > > > > builds, which I only want to happen on a builder hosted in an ARM64 > > > > cloud. > > > > > > > > Currently, the builders go through the image list and check "is the > > > > existing image missing or too old, if so, build" [1]. Additionally, > > > > all builders share a configuration file [2]; so builders don't know > > > > "who they are". > > > > > > > > > > > > > > Why not just split the builder configuration file? I don't see a need to > > > add code > > > to do this. > > > > > In our case (openstack-infra) this will require another change to > > puppet-nodepool to support this. Not that we cannot, but it will now mean we'll > > have 7[1] different nodepool configuration files to now manage. 4 x > > nodepool-launchers, 3 x nodepool-builders, since we have 7 services running.
> > I'd honestly prefer if we got to a place where we were not splitting config > files like we do today, but instead have a config file and we have nodepool > figure out any splitting needed. I think that's a larger effort that needs > some design ... but in general I personally consider the current need to > split the config file by hand to be a bug, not a feature. > +1 Agree (bug, not a feature), so far the split configuration was to work around issues of having more then 1 nodepool-launcher. > If we consider this builder need alongside the discussion about cloud-region > awareness for zuul executors, I think what we're coming to is that there are > some actions we want to be able to be cloud-region aware and we might want > to consider a holistic design for such awareness and/or tying. > Sounds like we should maybe spend a few minutes at PTG maybe discussion it. I'll add a topic to etherpad. > > > > > > > > > > > > > I'd propose we add an arbitrary tag/match system so that builders can > > > > pickup only those builds they mark themselves capable of building? > > > > > > > > e.g. diskimages would specify required builder tags similar to: > > > > > > > > --- > > > > diskimages: > > > > - name: arm64-ubuntu-xenial > > > > elements: > > > > - block-device-efi > > > > - vm > > > > - ubuntu-minimal > > > > ... > > > > env-vars: > > > > TMPDIR: /opt/dib_tmp > > > > DIB_CHECKSUM: '1' > > > > ... > > > > builder-requires: > > > > architecture: arm64 > > > > --- > > > > > > > > The nodepool.yaml would grow another section similar: > > > > > > > > --- > > > > builder-provides: > > > > architecture: arm64 > > > > something_else_unique_about_this_buidler: true > > > > --- > > > > > > > > For OpenStack, we would template this section in the config file via > > > > puppet in [2], ensuring above that only our theoretical ARM64 build > > > > machine had that section in it's config. > > > > > > > > The nodepool-buidler build loop can then check that its > > > > builder-provides section has all the tags specified in an image's > > > > "builder-requires" section before deciding to start building. > > > > > > > > Thoughts welcome :) > > > > > > > > -i > > > > > > > > [1] https://git.openstack.org/cgit/openstack-infra/nodepool/ > > > > tree/nodepool/builder.py#n607 > > > > [2] https://git.openstack.org/cgit/openstack-infra/project- > > > > config/tree/nodepool/nodepool.yaml > > > > > > > > _______________________________________________ > > > > Zuul-discuss mailing list > > > > Zuul-discuss at lists.zuul-ci.org > > > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > > > > > > > > > > > > > -- > > > David Shrewsbury (Shrews) > > > > > _______________________________________________ > > > Zuul-discuss mailing list > > > Zuul-discuss at lists.zuul-ci.org > > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > > > _______________________________________________ > > Zuul-discuss mailing list > > Zuul-discuss at lists.zuul-ci.org > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Mon Feb 19 16:50:03 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Mon, 19 Feb 2018 08:50:03 -0800 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <20180219152328.GA7920@localhost.localdomain> (Paul Belanger's message of "Mon, 19 Feb 2018 10:23:28 -0500") References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> Message-ID: <87371wyldw.fsf@meyer.lemoncheese.net> Paul Belanger writes: > On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote: >> Hi, >> >> On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand wrote: >> >> > Hi, >> > >> > How should we go about restricting certain image builds to specific >> > nodepool builder instances? My immediate issue is with ARM64 image >> > builds, which I only want to happen on a builder hosted in an ARM64 >> > cloud. >> > >> > Currently, the builders go through the image list and check "is the >> > existing image missing or too old, if so, build" [1]. Additionally, >> > all builders share a configuration file [2]; so builders don't know >> > "who they are". >> > >> > >> >> Why not just split the builder configuration file? I don't see a need to >> add code >> to do this. >> > In our case (openstack-infra) this will require another change to > puppet-nodepool to support this. Not that we cannot, but it will now mean we'll > have 7[1] different nodepool configuration files to now manage. 4 x > nodepool-launchers, 3 x nodepool-builders, since we have 7 services running. This seems like a pretty legitimate case to split the config. Very little of the config for the arm64 builder will be shared with any of the other builders, so perhaps unlike the case where one simply wants high-availability launchers, this seems like a very sensible use of a separate config file. At any rate, that's what we should do in openstack-infra to solve the issue Ian asked about. -Jim From shrewsbury.dave at gmail.com Mon Feb 19 16:56:22 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 19 Feb 2018 11:56:22 -0500 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <20180219152328.GA7920@localhost.localdomain> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> Message-ID: On Mon, Feb 19, 2018 at 10:23 AM, Paul Belanger wrote: > On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote: > > Hi, > > > > On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand > wrote: > > > > > Hi, > > > > > > How should we go about restricting certain image builds to specific > > > nodepool builder instances? My immediate issue is with ARM64 image > > > builds, which I only want to happen on a builder hosted in an ARM64 > > > cloud. > > > > > > Currently, the builders go through the image list and check "is the > > > existing image missing or too old, if so, build" [1]. Additionally, > > > all builders share a configuration file [2]; so builders don't know > > > "who they are". > > > > > > > > > > Why not just split the builder configuration file? I don't see a need to > > add code > > to do this. > > > In our case (openstack-infra) this will require another change to > puppet-nodepool to support this. Not that we cannot, but it will now mean > we'll > have 7[1] different nodepool configuration files to now manage. 4 x > nodepool-launchers, 3 x nodepool-builders, since we have 7 services > running. > We don't have to have 1 config file per builder. Just one additional file for the arm64 builder. The others can still share the config we have now. 
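To make that concrete, a separate config for the arm64 build host could be as small as something like the following. This is only a rough sketch that reuses Ian's example image: the zookeeper host is a placeholder and the provider/upload section is left out, so don't read it as the exact openstack-infra config.

---
# nodepool config used only by the arm64 nodepool-builder (sketch)
zookeeper-servers:
  - host: zk01.example.org
    port: 2181

diskimages:
  - name: arm64-ubuntu-xenial
    elements:
      - block-device-efi
      - vm
      - ubuntu-minimal
    env-vars:
      TMPDIR: /opt/dib_tmp
      DIB_CHECKSUM: '1'
---

The existing shared config stays as-is for everything else.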
-Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Mon Feb 19 17:05:22 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 19 Feb 2018 11:05:22 -0600 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <87371wyldw.fsf@meyer.lemoncheese.net> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> <87371wyldw.fsf@meyer.lemoncheese.net> Message-ID: <4c4ab136-f498-682a-1662-3a1f9708244c@inaugust.com> On 02/19/2018 10:50 AM, James E. Blair wrote: > Paul Belanger writes: > >> On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote: >>> Hi, >>> >>> On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand wrote: >>> >>>> Hi, >>>> >>>> How should we go about restricting certain image builds to specific >>>> nodepool builder instances? My immediate issue is with ARM64 image >>>> builds, which I only want to happen on a builder hosted in an ARM64 >>>> cloud. >>>> >>>> Currently, the builders go through the image list and check "is the >>>> existing image missing or too old, if so, build" [1]. Additionally, >>>> all builders share a configuration file [2]; so builders don't know >>>> "who they are". >>>> >>>> >>> >>> Why not just split the builder configuration file? I don't see a need to >>> add code >>> to do this. >>> >> In our case (openstack-infra) this will require another change to >> puppet-nodepool to support this. Not that we cannot, but it will now mean we'll >> have 7[1] different nodepool configuration files to now manage. 4 x >> nodepool-launchers, 3 x nodepool-builders, since we have 7 services running. > > This seems like a pretty legitimate case to split the config. Very > little of the config for the arm64 builder will be shared with any of > the other builders, so perhaps unlike the case where one simply wants > high-availability launchers, this seems like a very sensible use of a > separate config file. Yup, totally agree. My other response about split config was more in response to Paul's suggestion: On 02/19/2018 09:23 AM, Paul Belanger wrote: > We could update puppet to start templating or add support for > nodepool.d (like zuul.d) and better split our configs too. I just > haven't found time to write that patch. I don't think we should improve general handling of split configs - I think we should make us not need them. But I agree, in this case a builder-specific config for arm is the right choice. > At any rate, that's what we should do in openstack-infra to solve the > issue Ian asked about. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > From corvus at inaugust.com Mon Feb 19 17:20:14 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Mon, 19 Feb 2018 09:20:14 -0800 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> (Monty Taylor's message of "Mon, 19 Feb 2018 10:20:54 -0600") References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> Message-ID: <87y3jox5f5.fsf@meyer.lemoncheese.net> Monty Taylor writes: > I'd honestly prefer if we got to a place where we were not splitting > config files like we do today, but instead have a config file and we > have nodepool figure out any splitting needed. I think that's a larger > effort that needs some design ... but in general I personally consider > the current need to split the config file by hand to be a bug, not a > feature. I think there are certainly some cases where it's awkward, especially with launchers. I suspect this case, for builders, is going to be *less* awkward to have it as a separate config file (because of the lack of shared configuration). We should try it and bring that experience to the table when we talk about improvements. -Jim From iwienand at redhat.com Mon Feb 19 23:15:18 2018 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 20 Feb 2018 10:15:18 +1100 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <20180219152328.GA7920@localhost.localdomain> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> Message-ID: <787f7399-53b5-4d88-2a58-b6cc7f17f403@redhat.com> On 02/20/2018 02:23 AM, Paul Belanger wrote: >> Why not just split the builder configuration file? I don't see a >> need to add code to do this. I'm happy with this; I was just coming at it from an angle of not splitting the config file, but KISS :) > I did submit support homing diskimage builds to specific builder[2] a while > back, which is more inline with what ianw is asking. This allows us to assign > images to builders, if set. > [2] https://review.openstack.org/461239/ Only comment on this is that I think it might be better to avoid putting specific hostnames in there directly; but rather add meta-data to diskimage configurations describing the features they need on the builder, and have the builder then only choose those builds it knows it can do. Feels more natural for the message-queue/scale-out type environment where we can add/drop hosts at will. We've two real examples to inform design; needing the Xenial build host when all the others were trusty, and now the arm64 based ones. -i From shrewsbury.dave at gmail.com Tue Feb 27 13:57:22 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 27 Feb 2018 08:57:22 -0500 Subject: [nodepool] Webapp label_list Message-ID: Greetings! We have recently merged a change to add the label_list command to nodepool's webapp [1]. For that change, we briefly discussed in IRC [2] the meaning of the "count" parameter that was originally within the output of that command. The tl;dr from me in that discussion was that count without considering node state has no meaning. The final PS of that review left count out of the results. There is now another change proposing to add that count back in [3]. I think this functionality needs some clarification on what it is actually supposed to mean, because I am a bit confused. 
If it is supposed to be a list of all labels that are configured within nodepool, then it will never be a complete set unless there are nodes booted for each label. If it is supposed to be a list of labels that are immediately available for jobs, then we will have to take node state into account (as well as whether the node is locked or already allocated to another request). So I guess my question is (and I suppose this should have been clarified earlier) .... what is label_list supposed to mean for a user? [1] https://review.openstack.org/535563 [2] http://eavesdrop.openstack.org/irclogs/%23zuul/%23zuul.2018-02-07.log.html#t2018-02-07T19:12:32 [3] https://review.openstack.org/548134 -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Tue Feb 27 14:29:47 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 27 Feb 2018 14:29:47 +0000 Subject: [nodepool] Webapp label_list In-Reply-To: References: Message-ID: <1519741171.44y8fc6wez.tristanC@fedora> On February 27, 2018 1:57 pm, David Shrewsbury wrote: > Greetings! > > We have recently merged a change to add the label_list command to > nodepool's webapp [1]. > For that change, we briefly discussed in IRC [2] the meaning of the "count" > parameter that was > originally within the output of that command. The tl;dr from me in that > discussion was that > count without considering node state has no meaning. The final PS of that > review left count > out of the results. > > There is now another change proposing to add that count back in [3]. > > I think this functionality needs some clarification on what it is actually > supposed to mean, > because I am a bit confused. > > If it is supposed to be a list of all labels that are configured within > nodepool, then it will > never be a complete set unless there are nodes booted for each label. > > If it is supposed to be a list of labels that are immediately available for > jobs, then we > will have to take node state into account (as well as whether the node is > locked or > already allocated to another request). > > So I guess my question is (and I suppose this should have been clarified > earlier) > .... what is label_list supposed to mean for a user? > > Hello David, the label_list is supposed to tell the user what labels are available. It's currently not reporting labels without min-ready nor in-use node, and the controller can probably be rewritten to better get that information. Ideally we need a "labels" and/or "nodeset" menu next to the job list that the user could use to set project's configs, e.g. without having to look at nodepool configuration. -Tristan > [1] https://review.openstack.org/535563 > [2] > http://eavesdrop.openstack.org/irclogs/%23zuul/%23zuul.2018-02-07.log.html#t2018-02-07T19:12:32 > [3] https://review.openstack.org/548134 > > > -Dave > -- > David Shrewsbury (Shrews) > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From Tobias.Henkel at bmw.de Tue Feb 27 14:38:26 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Tue, 27 Feb 2018 14:38:26 +0000 Subject: [nodepool] Webapp label_list In-Reply-To: <1519741171.44y8fc6wez.tristanC@fedora> References: <1519741171.44y8fc6wez.tristanC@fedora> Message-ID: <67337022-B3E8-4F74-B93A-13F5337E7101@bmw.de> On 27.02.18, 14:29, "Tristan Cacqueray" wrote: (...) Hello David, the label_list is supposed to tell the user what labels are available. It's currently not reporting labels without min-ready nor in-use node, and the controller can probably be rewritten to better get that information. Ideally we need a "labels" and/or "nodeset" menu next to the job list that the user could use to set project's configs, e.g. without having to look at nodepool configuration. -Tristan In this case we should think about some other stuff as well. * First labels without min-ready are available since v3. * Further there are discussions on how to limit labels to tenants in zuul. So we need to think about if and how we want to limit the scope of labels to the zuul tenants. ** In case this stays in nodepool (which might be an admin tool) we might not care. ** Do we want to include that somehow into zuul-web? In this case this needs to obey the restrictions. -Tobias From tdecacqu at redhat.com Tue Feb 27 15:24:50 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 27 Feb 2018 15:24:50 +0000 Subject: [nodepool] Webapp label_list In-Reply-To: <67337022-B3E8-4F74-B93A-13F5337E7101@bmw.de> References: <1519741171.44y8fc6wez.tristanC@fedora> <67337022-B3E8-4F74-B93A-13F5337E7101@bmw.de> Message-ID: <1519744213.mvkftaqw92.tristanC@fedora> On February 27, 2018 2:38 pm, Tobias.Henkel at bmw.de wrote: > On 27.02.18, 14:29, "Tristan Cacqueray" wrote: > (...) > Hello David, > > the label_list is supposed to tell the user what labels are available. > It's currently not reporting labels without min-ready nor in-use node, > and the controller can probably be rewritten to better get that information. > > Ideally we need a "labels" and/or "nodeset" menu next to the job > list that the user could use to set project's configs, e.g. without > having to look at nodepool configuration. > > -Tristan > > In this case we should think about some other stuff as well. > * First labels without min-ready are available since v3. > * Further there are discussions on how to limit labels to tenants in zuul. So we need to think about if and how we want to limit the scope of labels to the zuul tenants. Even in a single tenant we may want to be able to restrict labels to specifics projects or jobs, as proposed here: https://review.openstack.org/535546 > ** In case this stays in nodepool (which might be an admin tool) we might not care. > ** Do we want to include that somehow into zuul-web? In this case this needs to obey the restrictions. > Perhaps we should somehow merge nodepool-webui into zuul-web? -Tristan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From shrewsbury.dave at gmail.com Tue Feb 27 15:26:39 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 27 Feb 2018 10:26:39 -0500 Subject: [nodepool] Webapp label_list In-Reply-To: <1519741171.44y8fc6wez.tristanC@fedora> References: <1519741171.44y8fc6wez.tristanC@fedora> Message-ID: On Tue, Feb 27, 2018 at 9:29 AM, Tristan Cacqueray wrote: [snip] > Hello David, > > the label_list is supposed to tell the user what labels are available. > It's currently not reporting labels without min-ready nor in-use node, > and the controller can probably be rewritten to better get that > information. > This doesn't seem too useful then, and perhaps we should consider an alternative approach? First idea that comes to mind is maybe we can make use of the registerLauncher() API call to record the launcher's known labels. Then label_list could just pull from that, rather than iterate through all nodes. > Ideally we need a "labels" and/or "nodeset" menu next to the job > list that the user could use to set project's configs, e.g. without > having to look at nodepool configuration. > > -Tristan > > [1] https://review.openstack.org/535563 >> [2] >> http://eavesdrop.openstack.org/irclogs/%23zuul/%23zuul.2018- >> 02-07.log.html#t2018-02-07T19:12:32 >> [3] https://review.openstack.org/548134 >> >> >> -Dave >> -- >> David Shrewsbury (Shrews) >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> >> -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From shrewsbury.dave at gmail.com Tue Feb 27 17:40:55 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 27 Feb 2018 12:40:55 -0500 Subject: [nodepool] Webapp label_list In-Reply-To: References: <1519741171.44y8fc6wez.tristanC@fedora> Message-ID: On Tue, Feb 27, 2018 at 10:26 AM, David Shrewsbury < shrewsbury.dave at gmail.com> wrote: > > > On Tue, Feb 27, 2018 at 9:29 AM, Tristan Cacqueray > wrote: > > [snip] > > >> Hello David, >> >> the label_list is supposed to tell the user what labels are available. >> It's currently not reporting labels without min-ready nor in-use node, >> and the controller can probably be rewritten to better get that >> information. >> > > This doesn't seem too useful then, and perhaps we should consider an > alternative approach? > First idea that comes to mind is maybe we can make use of the > registerLauncher() API call to > record the launcher's known labels. Then label_list could just pull from > that, rather than iterate > through all nodes. > > I was thinking of something along these lines: https://review.openstack.org/548376 Maybe that can be built on to provide more accurate information? -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Tue Feb 27 22:56:34 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 28 Feb 2018 09:56:34 +1100 Subject: [nodepool] Webapp label_list In-Reply-To: References: <1519741171.44y8fc6wez.tristanC@fedora> Message-ID: On 02/28/2018 02:26 AM, David Shrewsbury wrote: >> the label_list is supposed to tell the user what labels are available. >> It's currently not reporting labels without min-ready nor in-use node, >> and the controller can probably be rewritten to better get that >> information. 
> This doesn't seem too useful then, and perhaps we should consider an > alternative approach? Agree, maybe we should just take it out pending re-implementation? I'll propose that and we can discuss. My main issue is that every other view was returning a tuple of (ordered) dicts ([result, result, ...], header dict) where the header describes each key in the result dict with a human/pretty-print label. This makes it logical to a) build a single consistent function that translates to pretty-print or json [1] and b) allows for a common filter function to only show certain fields [2]. I feel like this is a win for consistency. One thing that I think will help is documenting the interfaces better as started in [2]. It's a little easier to ensure consistency when you're just comparing all the end-points without implementation details. --- Since we're talking about it; would we be better to remove the ".json" suffix checks, and just look at the mime type accept headers and return based on that? It seems to be the more common approach. -i [1] https://review.openstack.org/#/c/548134/ [2] https://review.openstack.org/#/c/548135/ From shrewsbury.dave at gmail.com Wed Feb 28 03:11:29 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 27 Feb 2018 22:11:29 -0500 Subject: [nodepool] Webapp label_list In-Reply-To: References: <1519741171.44y8fc6wez.tristanC@fedora> Message-ID: On Tue, Feb 27, 2018 at 5:56 PM, Ian Wienand wrote: > On 02/28/2018 02:26 AM, David Shrewsbury wrote: > >> the label_list is supposed to tell the user what labels are available. > >> It's currently not reporting labels without min-ready nor in-use node, > >> and the controller can probably be rewritten to better get that > >> information. > > > This doesn't seem too useful then, and perhaps we should consider an > > alternative approach? > > Agree, maybe we should just take it out pending re-implementation? > I'll propose that and we can discuss. > I don't think a full removal is necessary. If the change I proposed to add config info to launcher registration is deemed worthy, it's a simple change to the current code to pull that new data. > > My main issue is that every other view was returning a tuple of > (ordered) dicts > > ([result, result, ...], header dict) > > where the header describes each key in the result dict with a > human/pretty-print label. > > This makes it logical to a) build a single consistent function that > translates to pretty-print or json [1] and b) allows for a common > filter function to only show certain fields [2]. I feel like this is > a win for consistency. > Yeah, I like consistency here too. I don't see any reason why that can't be easily fixed when we make the change mentioned above. > > One thing that I think will help is documenting the interfaces better > as started in [2]. It's a little easier to ensure consistency when > you're just comparing all the end-points without implementation > details. > > --- > > Since we're talking about it; would we be better to remove the ".json" > suffix checks, and just look at the mime type accept headers and > return based on that? It seems to be the more common approach. > > I'm not the most strongly versed in web things, but on the surface, this seems pretty logical to me. > -i > > [1] https://review.openstack.org/#/c/548134/ > [2] https://review.openstack.org/#/c/548135/ > -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... 
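To make the interface sketched above concrete, here is a minimal illustration of the ([result, ...], header dict) convention with one shared formatter handling pretty-print versus JSON plus the field filter. The function names and fields are invented for the example and are not the actual nodepool webapp code.

    import json

    def node_list_view():
        # Hypothetical endpoint following the (results, headers) convention.
        results = [{'id': '0000001', 'label': 'ubuntu-xenial', 'state': 'ready'}]
        headers = {'id': 'ID', 'label': 'Label', 'state': 'State'}
        return results, headers

    def output(results, headers, fields=None, as_json=False):
        # One consistent place to apply the field filter and pick the format.
        if fields:
            results = [{k: r[k] for k in fields if k in r} for r in results]
            headers = {k: v for k, v in headers.items() if k in fields}
        if as_json:
            return json.dumps(results)
        lines = [' | '.join(headers.values())]
        for row in results:
            lines.append(' | '.join(str(row.get(k, '')) for k in headers))
        return '\n'.join(lines)

With something like this, honouring an Accept header rather than a ".json" suffix is just a matter of how the as_json flag gets set.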
From corvus at inaugust.com Wed Feb 7 19:40:14 2018 From: corvus at inaugust.com (James E.
Blair) Date: Wed, 07 Feb 2018 11:40:14 -0800 Subject: How to determine what has changed in a test Message-ID: <87mv0kzizl.fsf@meyer.lemoncheese.net> Hi, The problem of how to determine what has changed in the change under test has come to my attention twice today, and it's still early, so I thought it'd be a good topic for a ML discussion. The two issues that have come to my attention represent two similar kinds of tests: a) Inspect the specific commit(s) under test (e.g., inspect the commit message). b) Inspect the delta of applying the patch (e.g., run coverage twice and calculate the difference). Simon Westphahl has proposed a patch focused on the first case, which adds a zuul.commits variable with the sha(s) of the patch(es) under test: https://review.openstack.org/539975 Meanwhile, here is an example of a test which relies on the existence of an origin remote: http://logs.openstack.org/58/473658/19/check/openstack-tox-cover/0f6849c/job-output.txt.gz#_2018-02-07_05_43_43_163295 We also have the minor example of pip emitting warnings if there is no origin remote; if we can fix that here, that's fine, but I don't think it's significant enough to drive our design. The patch to add zuul.commits makes case (a) fairly simple -- you can have ansible iterate over the values in the variable. I think that case (b) might be a bit harder with zuul.commits -- specialized tooling would be needed to create the alternative repo state (ie, a new branch) in order to perform the git checkout or diff operations. Because in (b) we are focused more on the previous state than the commit, we can't just checkout zuul.commit, we need to checkout the parent of zuul.commit. However, if we were to create an origin remote, and populate it with the "previous" git repo state, (that is, either the current state of the tree, or that with the changes ahead of this one applied), it seems we can accommodate both (a) and (b) in an intuitive manner. You could get the values that would be put in zuul.commits by performing 'git diff origin/master..master', which addresses (a), and you could address (b) with 'git checkout origin/master'. Among the reasons we don't have any remotes on the git checkouts are: * What would the URL be? * What would the content be? * Concern about jobs altering the git repo under test by performing naïve and incorrect git operations. We've addressed the content question, that leaves the other two. Ideally, if we did this, I'd like to have a remote with no URL. However, while that may be possible, some investigation suggests that we may want to avoid that as it looks like it may be a bit of an edge case that not all tools or libraries would support correctly. However, if we set it to a bogus URL, that may be sufficient to satisfy the tooling, as well as prevent git repo manipulation errors. I'm inclined to favor this approach, as I think it addresses both cases and is fairly intuitive. Do folks think it would work? Are there things we would be missing as compared to the zuul.commits approach? Thanks, Jim From Tobias.Henkel at bmw.de Wed Feb 7 20:32:43 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Wed, 7 Feb 2018 20:32:43 +0000 Subject: How to determine what has changed in a test In-Reply-To: <87mv0kzizl.fsf@meyer.lemoncheese.net> References: <87mv0kzizl.fsf@meyer.lemoncheese.net> Message-ID: Hi, These are interesting question to discuss. On 07.02.18, 20:40, "James E. 
Blair" wrote: Hi, The problem of how to determine what has changed in the change under test has come to my attention twice today, and it's still early, so I thought it'd be a good topic for a ML discussion. The two issues that have come to my attention represent two similar kinds of tests: a) Inspect the specific commit(s) under test (e.g., inspect the commit message). b) Inspect the delta of applying the patch (e.g., run coverage twice and calculate the difference). Simon Westphahl has proposed a patch focused on the first case, which adds a zuul.commits variable with the sha(s) of the patch(es) under test: https://review.openstack.org/539975 Meanwhile, here is an example of a test which relies on the existence of an origin remote: http://logs.openstack.org/58/473658/19/check/openstack-tox-cover/0f6849c/job-output.txt.gz#_2018-02-07_05_43_43_163295 We also have the minor example of pip emitting warnings if there is no origin remote; if we can fix that here, that's fine, but I don't think it's significant enough to drive our design. The patch to add zuul.commits makes case (a) fairly simple -- you can have ansible iterate over the values in the variable. I think that case (b) might be a bit harder with zuul.commits -- specialized tooling would be needed to create the alternative repo state (ie, a new branch) in order to perform the git checkout or diff operations. Because in (b) we are focused more on the previous state than the commit, we can't just checkout zuul.commit, we need to checkout the parent of zuul.commit. However, if we were to create an origin remote, and populate it with the "previous" git repo state, (that is, either the current state of the tree, or that with the changes ahead of this one applied), it seems we can accommodate both (a) and (b) in an intuitive manner. You could get the values that would be put in zuul.commits by performing 'git diff origin/master..master', which addresses (a), and you could address (b) with 'git checkout origin/master'. Among the reasons we don't have any remotes on the git checkouts are: * What would the URL be? * What would the content be? * Concern about jobs altering the git repo under test by performing naïve and incorrect git operations. We've addressed the content question, that leaves the other two. Ideally, if we did this, I'd like to have a remote with no URL. However, while that may be possible, some investigation suggests that we may want to avoid that as it looks like it may be a bit of an edge case that not all tools or libraries would support correctly. However, if we set it to a bogus URL, that may be sufficient to satisfy the tooling, as well as prevent git repo manipulation errors. I just tried that out and it seems while you can set an empty remote even git itself (git fetch origin) behaves differently. So if we go that way I'd suggest we use a bogus url with no network interaction like 'file:///dev/null'. I'm inclined to favor this approach, as I think it addresses both cases and is fairly intuitive. Do folks think it would work? Are there things we would be missing as compared to the zuul.commits approach? I think there should be no functional difference compared to what we can get via git log. However we need to cope with the merge commit zuul creates, but I think that should be no real problem. When using github we should think about how merges within the PR can be handled. In the github workflow it is very common to merge master into the PR to update it. 
In this case github lists the commits in an order which is not obvious at first glance. So I think in this case 'git log' might be even superior as it has some means to better filter/sort the commits compared with the commit list of github. Thanks, Jim Kind regards Tobias From cboylan at sapwetik.org Wed Feb 7 23:16:53 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 07 Feb 2018 15:16:53 -0800 Subject: How to determine what has changed in a test In-Reply-To: <87mv0kzizl.fsf@meyer.lemoncheese.net> References: <87mv0kzizl.fsf@meyer.lemoncheese.net> Message-ID: <1518045413.1507887.1263337672.58E1B018@webmail.messagingengine.com> On Wed, Feb 7, 2018, at 11:40 AM, James E. Blair wrote: > Hi, > > The problem of how to determine what has changed in the change under > test has come to my attention twice today, and it's still early, so I > thought it'd be a good topic for a ML discussion. > > The two issues that have come to my attention represent two similar > kinds of tests: > > a) Inspect the specific commit(s) under test (e.g., inspect the commit > message). > > b) Inspect the delta of applying the patch (e.g., run coverage twice and > calculate the difference). > > Simon Westphahl has proposed a patch focused on the first case, which > adds a zuul.commits variable with the sha(s) of the patch(es) under > test: https://review.openstack.org/539975 > > Meanwhile, here is an example of a test which relies on the existence of > an origin remote: > > > http://logs.openstack.org/58/473658/19/check/openstack-tox-cover/0f6849c/job-output.txt.gz#_2018-02-07_05_43_43_163295 > > We also have the minor example of pip emitting warnings if there is no > origin remote; if we can fix that here, that's fine, but I don't think > it's significant enough to drive our design. > > The patch to add zuul.commits makes case (a) fairly simple -- you can > have ansible iterate over the values in the variable. > > I think that case (b) might be a bit harder with zuul.commits -- > specialized tooling would be needed to create the alternative repo state > (ie, a new branch) in order to perform the git checkout or diff > operations. Because in (b) we are focused more on the previous state > than the commit, we can't just checkout zuul.commit, we need to checkout > the parent of zuul.commit. > > However, if we were to create an origin remote, and populate it with the > "previous" git repo state, (that is, either the current state of the > tree, or that with the changes ahead of this one applied), it seems we > can accommodate both (a) and (b) in an intuitive manner. > > You could get the values that would be put in zuul.commits by > performing 'git diff origin/master..master', which addresses (a), and > you could address (b) with 'git checkout origin/master'. > > Among the reasons we don't have any remotes on the git checkouts are: > > * What would the URL be? > * What would the content be? > * Concern about jobs altering the git repo under test by performing > naïve and incorrect git operations. > > We've addressed the content question, that leaves the other two. > Ideally, if we did this, I'd like to have a remote with no URL. > However, while that may be possible, some investigation suggests that we > may want to avoid that as it looks like it may be a bit of an edge case > that not all tools or libraries would support correctly. However, if we > set it to a bogus URL, that may be sufficient to satisfy the tooling, as > well as prevent git repo manipulation errors. 
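To illustrate what a job could do with an 'origin' remote populated with the previous repo state -- a sketch of the proposal under discussion, not Zuul's implemented behaviour -- plain git calls cover both cases:

    import subprocess

    def git(*args, cwd='.'):
        return subprocess.run(['git'] + list(args), cwd=cwd, check=True,
                              capture_output=True, text=True).stdout

    # Case (a): enumerate the commits under test relative to the previous state.
    commits = git('rev-list', 'origin/master..master').split()

    # Case (b): measure against the previous state, e.g. run coverage twice
    # and diff the results.
    git('checkout', 'origin/master')
    # ... run the tool against the old state here ...
    git('checkout', 'master')

Pointing the remote at a bogus URL with no network behind it (such as the file:///dev/null idea above) keeps tools that expect an origin satisfied while preventing accidental fetches.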
It seems like the origin url should be the repo's canonical location url for correctness. This opens jobs up to bad git repo manipulation but nothing prevents users from doing it themselves in the current situation. If we really want to use a bogus url then avoiding "origin" might be best as the name for the remote. We could call it base or something like that. Another option may be to use a non remote ref and then you can git diff refs/zuul/base/master..master then continue to have no remotes. This avoids breaking tooling with bogus remote urls in unexpected ways but provides for jobs to find the ref that they need. Basically I think we should avoid using remotes if the remote isn't going to be valid and instead use some other pointer (or use a valid remote). > > I'm inclined to favor this approach, as I think it addresses both cases > and is fairly intuitive. Do folks think it would work? Are there > things we would be missing as compared to the zuul.commits approach? > > Thanks, > > Jim From corvus at inaugust.com Thu Feb 8 00:03:52 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 07 Feb 2018 16:03:52 -0800 Subject: How to determine what has changed in a test In-Reply-To: <1518045413.1507887.1263337672.58E1B018@webmail.messagingengine.com> (Clark Boylan's message of "Wed, 07 Feb 2018 15:16:53 -0800") References: <87mv0kzizl.fsf@meyer.lemoncheese.net> <1518045413.1507887.1263337672.58E1B018@webmail.messagingengine.com> Message-ID: <87o9l0xs7r.fsf@meyer.lemoncheese.net> Clark Boylan writes: > It seems like the origin url should be the repo's canonical location > url for correctness. This opens jobs up to bad git repo manipulation > but nothing prevents users from doing it themselves in the current > situation. If we really want to use a bogus url then avoiding "origin" > might be best as the name for the remote. We could call it base or > something like that. That was my first instinct as well, however, in many cases the URL that Zuul uses or the canonical location won't be accessible to the test node, or might have a username and credentials attached. Having a working URL might make it easy to grow a dependency on that and put those kinds of systems at a disadvantage. And we should be careful not to accidentally suggest or encourage the idea that the remote connection is valid -- tests are running in a speculative future state which can not be represented outside the test environment. Of course, sometimes you need to access a remote (for example, OpenStack's release and proposal jobs). I think it's okay to do extra work to set up that interaction when required, since it's really outside of the scope of Zuul. Using the 'origin' remote has the advantage of being intuitive to most git users, and it would allow tools like pip and https://github.com/Bachmann1234/diff-cover to work as designed with no effort. Diff-cover can be used with an alternate branch, so it's not a requirement, it just seems like the most sensible default. There's no reason origin needs to be the canonical location (it is a DVCS after all), and in those cases, like the OpenStack release jobs where we do want to interact with a remote, we can make sure to update it appropriately (either by removing it or using set-url before use). > Another option may be to use a non remote ref and then you can git > diff refs/zuul/base/master..master then continue to have no > remotes. This avoids breaking tooling with bogus remote urls in > unexpected ways but provides for jobs to find the ref that they need.
That's true. It solves the URL problem. It just makes things like diff-cover a little more difficult to use (and probably creates a difference between how a user would run diff-cover locally vs. in a test). -Jim From iwienand at redhat.com Mon Feb 19 03:25:34 2018 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 19 Feb 2018 14:25:34 +1100 Subject: [nodepool] Restricting images to specific nodepool builders Message-ID: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> Hi, How should we go about restricting certain image builds to specific nodepool builder instances? My immediate issue is with ARM64 image builds, which I only want to happen on a builder hosted in an ARM64 cloud. Currently, the builders go through the image list and check "is the existing image missing or too old, if so, build" [1]. Additionally, all builders share a configuration file [2]; so builders don't know "who they are". I'd propose we add an arbitrary tag/match system so that builders can pick up only those builds they mark themselves capable of building? e.g. diskimages would specify required builder tags similar to:

---
diskimages:
  - name: arm64-ubuntu-xenial
    elements:
      - block-device-efi
      - vm
      - ubuntu-minimal
      ...
    env-vars:
      TMPDIR: /opt/dib_tmp
      DIB_CHECKSUM: '1'
      ...
    builder-requires:
      architecture: arm64
---

The nodepool.yaml would grow another section similar:

---
builder-provides:
  architecture: arm64
  something_else_unique_about_this_builder: true
---

For OpenStack, we would template this section in the config file via puppet in [2], ensuring above that only our theoretical ARM64 build machine had that section in its config. The nodepool-builder build loop can then check that its builder-provides section has all the tags specified in an image's "builder-requires" section before deciding to start building. Thoughts welcome :) -i [1] https://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/builder.py#n607 [2] https://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/nodepool.yaml From shrewsbury.dave at gmail.com Mon Feb 19 13:28:27 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 19 Feb 2018 08:28:27 -0500 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> Message-ID: Hi, On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand wrote: > Hi, > > How should we go about restricting certain image builds to specific > nodepool builder instances? My immediate issue is with ARM64 image > builds, which I only want to happen on a builder hosted in an ARM64 > cloud. > > Currently, the builders go through the image list and check "is the > existing image missing or too old, if so, build" [1]. Additionally, > all builders share a configuration file [2]; so builders don't know > "who they are". > > Why not just split the builder configuration file? I don't see a need to add code to do this. > I'd propose we add an arbitrary tag/match system so that builders can > pickup only those builds they mark themselves capable of building? > > e.g. diskimages would specify required builder tags similar to: > > --- > diskimages: > - name: arm64-ubuntu-xenial > elements: > - block-device-efi > - vm > - ubuntu-minimal > ... > env-vars: > TMPDIR: /opt/dib_tmp > DIB_CHECKSUM: '1' > ...
> > builder-requires: > > architecture: arm64 > > --- > > > > The nodepool.yaml would grow another section similar: > > > > --- > > builder-provides: > > architecture: arm64 > > something_else_unique_about_this_buidler: true > > --- > > > > For OpenStack, we would template this section in the config file via > > puppet in [2], ensuring above that only our theoretical ARM64 build > > machine had that section in it's config. > > > > The nodepool-buidler build loop can then check that its > > builder-provides section has all the tags specified in an image's > > "builder-requires" section before deciding to start building. > > > > Thoughts welcome :) > > > > -i > > > > [1] https://git.openstack.org/cgit/openstack-infra/nodepool/ > > tree/nodepool/builder.py#n607 > > [2] https://git.openstack.org/cgit/openstack-infra/project- > > config/tree/nodepool/nodepool.yaml > > > > _______________________________________________ > > Zuul-discuss mailing list > > Zuul-discuss at lists.zuul-ci.org > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > > > -- > David Shrewsbury (Shrews) > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From mordred at inaugust.com Mon Feb 19 16:20:54 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 19 Feb 2018 10:20:54 -0600 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <20180219152328.GA7920@localhost.localdomain> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> Message-ID: <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> On 02/19/2018 09:23 AM, Paul Belanger wrote: > On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote: >> Hi, >> >> On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand wrote: >> >>> Hi, >>> >>> How should we go about restricting certain image builds to specific >>> nodepool builder instances? My immediate issue is with ARM64 image >>> builds, which I only want to happen on a builder hosted in an ARM64 >>> cloud. >>> >>> Currently, the builders go through the image list and check "is the >>> existing image missing or too old, if so, build" [1]. Additionally, >>> all builders share a configuration file [2]; so builders don't know >>> "who they are". >>> >>> >> >> Why not just split the builder configuration file? I don't see a need to >> add code >> to do this. >> > In our case (openstack-infra) this will require another change to > puppet-nodepool to support this. Not that we cannot, but it will now mean we'll > have 7[1] different nodepool configuration files to now manage. 4 x > nodepool-launchers, 3 x nodepool-builders, since we have 7 services running. > > We could update puppet to start templating or add support for nodepool.d (like > zuul.d) and better split our configs too. I just haven't found time to write > that patch. > > I did submit support homing diskimage builds to specific builder[2] a while > back, which is more inline with what ianw is asking. This allows us to assign > images to builders, if set. I'd honestly prefer if we got to a place where we were not splitting config files like we do today, but instead have a config file and we have nodepool figure out any splitting needed. I think that's a larger effort that needs some design ... but in general I personally consider the current need to split the config file by hand to be a bug, not a feature. 
If we consider this builder need alongside the discussion about cloud-region awareness for zuul executors, I think what we're coming to is that there are some actions we want to be able to be cloud-region aware and we might want to consider a holistic design for such awareness and/or tying. >> >> >> >>> I'd propose we add an arbitrary tag/match system so that builders can >>> pickup only those builds they mark themselves capable of building? >>> >>> e.g. diskimages would specify required builder tags similar to: >>> >>> --- >>> diskimages: >>> - name: arm64-ubuntu-xenial >>> elements: >>> - block-device-efi >>> - vm >>> - ubuntu-minimal >>> ... >>> env-vars: >>> TMPDIR: /opt/dib_tmp >>> DIB_CHECKSUM: '1' >>> ... >>> builder-requires: >>> architecture: arm64 >>> --- >>> >>> The nodepool.yaml would grow another section similar: >>> >>> --- >>> builder-provides: >>> architecture: arm64 >>> something_else_unique_about_this_buidler: true >>> --- >>> >>> For OpenStack, we would template this section in the config file via >>> puppet in [2], ensuring above that only our theoretical ARM64 build >>> machine had that section in it's config. >>> >>> The nodepool-buidler build loop can then check that its >>> builder-provides section has all the tags specified in an image's >>> "builder-requires" section before deciding to start building. >>> >>> Thoughts welcome :) >>> >>> -i >>> >>> [1] https://git.openstack.org/cgit/openstack-infra/nodepool/ >>> tree/nodepool/builder.py#n607 >>> [2] https://git.openstack.org/cgit/openstack-infra/project- >>> config/tree/nodepool/nodepool.yaml >>> >>> _______________________________________________ >>> Zuul-discuss mailing list >>> Zuul-discuss at lists.zuul-ci.org >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >>> >> >> >> >> -- >> David Shrewsbury (Shrews) > >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > From dmsimard at redhat.com Mon Feb 19 16:41:39 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Mon, 19 Feb 2018 11:41:39 -0500 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> Message-ID: On Mon, Feb 19, 2018 at 11:20 AM, Monty Taylor wrote: > I'd honestly prefer if we got to a place where we were not splitting config files > like we do today, but instead have a config file and we have nodepool figure > out any splitting needed. I think that's a larger effort that needs some design > ... but in general I personally consider the current need to split the config file > by hand to be a bug, not a feature. +1, I think we already have the necessary tooling in place to "shard" things appropriately between launchers (gearman, zookeeper, etc) without having to split configuration files. Splitting things manually per launcher is also prone to failure/SPoF if a particular launcher is not behaving well for some reason. 
> > If we consider this builder need alongside the discussion about cloud-region > awareness for zuul executors, I think what we're coming to is that there are > some actions we want to be able to be cloud-region aware and we might want > to consider a holistic design for such awareness and/or tying. > +1, and for reference, the story about Zuul executor "zones" [1]. I think it would make sense to have a common logic around "zones" (or "clouds", or "providers", whatever) because really you might not just need an executor in a private network. In reality you'll also need a nodepool-launcher to launch VMs in that private cloud. I'm not intimate with the internals of nodepool-builder but what component ultimately takes care of uploading the image, is that nodepool-launcher ? Otherwise you also need a nodepool-builder instance in that private network... I see it roughly looking like this (I hope this renders correctly, otherwise see [2]): "LAN" Firewall "WAN" + +-----------------+ | +--------------+ |Zuul Executor | | |Zuul Scheduler| |Nodepool builder +---------> |Gearman | |Nodepool launcher| | |Zookeeper | +-------+---------+ | +--------------+ | | v | Private cloud + [1]: https://storyboard.openstack.org/#!/story/2001125 [2]: https://i.imgur.com/QVSFgd6.png David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From pabelanger at redhat.com Mon Feb 19 16:42:15 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 19 Feb 2018 11:42:15 -0500 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> Message-ID: <20180219164151.GA16043@localhost.localdomain> On Mon, Feb 19, 2018 at 10:20:54AM -0600, Monty Taylor wrote: > On 02/19/2018 09:23 AM, Paul Belanger wrote: > > On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote: > > > Hi, > > > > > > On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand wrote: > > > > > > > Hi, > > > > > > > > How should we go about restricting certain image builds to specific > > > > nodepool builder instances? My immediate issue is with ARM64 image > > > > builds, which I only want to happen on a builder hosted in an ARM64 > > > > cloud. > > > > > > > > Currently, the builders go through the image list and check "is the > > > > existing image missing or too old, if so, build" [1]. Additionally, > > > > all builders share a configuration file [2]; so builders don't know > > > > "who they are". > > > > > > > > > > > > > > Why not just split the builder configuration file? I don't see a need to > > > add code > > > to do this. > > > > > In our case (openstack-infra) this will require another change to > > puppet-nodepool to support this. Not that we cannot, but it will now mean we'll > > have 7[1] different nodepool configuration files to now manage. 4 x > > nodepool-launchers, 3 x nodepool-builders, since we have 7 services running. > > > > We could update puppet to start templating or add support for nodepool.d (like > > zuul.d) and better split our configs too. I just haven't found time to write > > that patch. > > > > I did submit support homing diskimage builds to specific builder[2] a while > > back, which is more inline with what ianw is asking. This allows us to assign > > images to builders, if set. 
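To make that concrete with the builder-requires/builder-provides tags proposed earlier in the thread: every builder could read the same shared config and simply skip the builds it cannot satisfy. The helper names below are invented for illustration; this is not existing nodepool code.

    def can_build(diskimage, provides):
        # True if every tag the image requires is advertised by this builder.
        required = diskimage.get('builder-requires', {})
        return all(provides.get(key) == value for key, value in required.items())

    def builds_for(config, provides):
        # Filter one shared config down to this builder's share of the work.
        return [d for d in config.get('diskimages', []) if can_build(d, provides)]

An arm64 builder advertising {'architecture': 'arm64'} would pick up arm64-ubuntu-xenial from the shared file while the x86 builders skip it, with no per-host config files involved.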
> > I'd honestly prefer if we got to a place where we were not splitting config > files like we do today, but instead have a config file and we have nodepool > figure out any splitting needed. I think that's a larger effort that needs > some design ... but in general I personally consider the current need to > split the config file by hand to be a bug, not a feature. > +1 Agree (bug, not a feature), so far the split configuration was to work around issues of having more then 1 nodepool-launcher. > If we consider this builder need alongside the discussion about cloud-region > awareness for zuul executors, I think what we're coming to is that there are > some actions we want to be able to be cloud-region aware and we might want > to consider a holistic design for such awareness and/or tying. > Sounds like we should maybe spend a few minutes at PTG maybe discussion it. I'll add a topic to etherpad. > > > > > > > > > > > > > I'd propose we add an arbitrary tag/match system so that builders can > > > > pickup only those builds they mark themselves capable of building? > > > > > > > > e.g. diskimages would specify required builder tags similar to: > > > > > > > > --- > > > > diskimages: > > > > - name: arm64-ubuntu-xenial > > > > elements: > > > > - block-device-efi > > > > - vm > > > > - ubuntu-minimal > > > > ... > > > > env-vars: > > > > TMPDIR: /opt/dib_tmp > > > > DIB_CHECKSUM: '1' > > > > ... > > > > builder-requires: > > > > architecture: arm64 > > > > --- > > > > > > > > The nodepool.yaml would grow another section similar: > > > > > > > > --- > > > > builder-provides: > > > > architecture: arm64 > > > > something_else_unique_about_this_buidler: true > > > > --- > > > > > > > > For OpenStack, we would template this section in the config file via > > > > puppet in [2], ensuring above that only our theoretical ARM64 build > > > > machine had that section in it's config. > > > > > > > > The nodepool-buidler build loop can then check that its > > > > builder-provides section has all the tags specified in an image's > > > > "builder-requires" section before deciding to start building. > > > > > > > > Thoughts welcome :) > > > > > > > > -i > > > > > > > > [1] https://git.openstack.org/cgit/openstack-infra/nodepool/ > > > > tree/nodepool/builder.py#n607 > > > > [2] https://git.openstack.org/cgit/openstack-infra/project- > > > > config/tree/nodepool/nodepool.yaml > > > > > > > > _______________________________________________ > > > > Zuul-discuss mailing list > > > > Zuul-discuss at lists.zuul-ci.org > > > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > > > > > > > > > > > > > -- > > > David Shrewsbury (Shrews) > > > > > _______________________________________________ > > > Zuul-discuss mailing list > > > Zuul-discuss at lists.zuul-ci.org > > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > > > _______________________________________________ > > Zuul-discuss mailing list > > Zuul-discuss at lists.zuul-ci.org > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Mon Feb 19 16:50:03 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Mon, 19 Feb 2018 08:50:03 -0800 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <20180219152328.GA7920@localhost.localdomain> (Paul Belanger's message of "Mon, 19 Feb 2018 10:23:28 -0500") References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> Message-ID: <87371wyldw.fsf@meyer.lemoncheese.net> Paul Belanger writes: > On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote: >> Hi, >> >> On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand wrote: >> >> > Hi, >> > >> > How should we go about restricting certain image builds to specific >> > nodepool builder instances? My immediate issue is with ARM64 image >> > builds, which I only want to happen on a builder hosted in an ARM64 >> > cloud. >> > >> > Currently, the builders go through the image list and check "is the >> > existing image missing or too old, if so, build" [1]. Additionally, >> > all builders share a configuration file [2]; so builders don't know >> > "who they are". >> > >> > >> >> Why not just split the builder configuration file? I don't see a need to >> add code >> to do this. >> > In our case (openstack-infra) this will require another change to > puppet-nodepool to support this. Not that we cannot, but it will now mean we'll > have 7[1] different nodepool configuration files to now manage. 4 x > nodepool-launchers, 3 x nodepool-builders, since we have 7 services running. This seems like a pretty legitimate case to split the config. Very little of the config for the arm64 builder will be shared with any of the other builders, so perhaps unlike the case where one simply wants high-availability launchers, this seems like a very sensible use of a separate config file. At any rate, that's what we should do in openstack-infra to solve the issue Ian asked about. -Jim From shrewsbury.dave at gmail.com Mon Feb 19 16:56:22 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 19 Feb 2018 11:56:22 -0500 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <20180219152328.GA7920@localhost.localdomain> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> Message-ID: On Mon, Feb 19, 2018 at 10:23 AM, Paul Belanger wrote: > On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote: > > Hi, > > > > On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand > wrote: > > > > > Hi, > > > > > > How should we go about restricting certain image builds to specific > > > nodepool builder instances? My immediate issue is with ARM64 image > > > builds, which I only want to happen on a builder hosted in an ARM64 > > > cloud. > > > > > > Currently, the builders go through the image list and check "is the > > > existing image missing or too old, if so, build" [1]. Additionally, > > > all builders share a configuration file [2]; so builders don't know > > > "who they are". > > > > > > > > > > Why not just split the builder configuration file? I don't see a need to > > add code > > to do this. > > > In our case (openstack-infra) this will require another change to > puppet-nodepool to support this. Not that we cannot, but it will now mean > we'll > have 7[1] different nodepool configuration files to now manage. 4 x > nodepool-launchers, 3 x nodepool-builders, since we have 7 services > running. > We don't have to have 1 config file per builder. Just one additional file for the arm64 builder. The others can still share the config we have now. 
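Should the one-extra-file approach eventually grow into several fragments, the nodepool.d-style loading Paul mentioned earlier in the thread could look roughly like the sketch below; this is hypothetical, not an existing nodepool option.

    import glob
    import yaml

    def load_config_dir(path):
        # Merge every *.yaml fragment in a conf.d-style directory by
        # concatenating top-level list sections (labels, providers,
        # diskimages) so each service still sees one logical config.
        merged = {}
        for fragment in sorted(glob.glob(path + '/*.yaml')):
            with open(fragment) as f:
                data = yaml.safe_load(f) or {}
            for key, value in data.items():
                if isinstance(value, list):
                    merged.setdefault(key, []).extend(value)
                else:
                    merged[key] = value
        return merged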
-Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Mon Feb 19 17:05:22 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 19 Feb 2018 11:05:22 -0600 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <87371wyldw.fsf@meyer.lemoncheese.net> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> <87371wyldw.fsf@meyer.lemoncheese.net> Message-ID: <4c4ab136-f498-682a-1662-3a1f9708244c@inaugust.com> On 02/19/2018 10:50 AM, James E. Blair wrote: > Paul Belanger writes: > >> On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote: >>> Hi, >>> >>> On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand wrote: >>> >>>> Hi, >>>> >>>> How should we go about restricting certain image builds to specific >>>> nodepool builder instances? My immediate issue is with ARM64 image >>>> builds, which I only want to happen on a builder hosted in an ARM64 >>>> cloud. >>>> >>>> Currently, the builders go through the image list and check "is the >>>> existing image missing or too old, if so, build" [1]. Additionally, >>>> all builders share a configuration file [2]; so builders don't know >>>> "who they are". >>>> >>>> >>> >>> Why not just split the builder configuration file? I don't see a need to >>> add code >>> to do this. >>> >> In our case (openstack-infra) this will require another change to >> puppet-nodepool to support this. Not that we cannot, but it will now mean we'll >> have 7[1] different nodepool configuration files to now manage. 4 x >> nodepool-launchers, 3 x nodepool-builders, since we have 7 services running. > > This seems like a pretty legitimate case to split the config. Very > little of the config for the arm64 builder will be shared with any of > the other builders, so perhaps unlike the case where one simply wants > high-availability launchers, this seems like a very sensible use of a > separate config file. Yup, totally agree. My other response about split config was more in response to Paul's suggestion: On 02/19/2018 09:23 AM, Paul Belanger wrote: > We could update puppet to start templating or add support for > nodepool.d (like zuul.d) and better split our configs too. I just > haven't found time to write that patch. I don't think we should improve general handling of split configs - I think we should make us not need them. But I agree, in this case a builder-specific config for arm is the right choice. > At any rate, that's what we should do in openstack-infra to solve the > issue Ian asked about. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > From corvus at inaugust.com Mon Feb 19 17:20:14 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Mon, 19 Feb 2018 09:20:14 -0800 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> (Monty Taylor's message of "Mon, 19 Feb 2018 10:20:54 -0600") References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> <604239e7-d2f0-d9e7-f0ad-bd0db3565c3b@inaugust.com> Message-ID: <87y3jox5f5.fsf@meyer.lemoncheese.net> Monty Taylor writes: > I'd honestly prefer if we got to a place where we were not splitting > config files like we do today, but instead have a config file and we > have nodepool figure out any splitting needed. I think that's a larger > effort that needs some design ... but in general I personally consider > the current need to split the config file by hand to be a bug, not a > feature. I think there are certainly some cases where it's awkward, especially with launchers. I suspect this case, for builders, is going to be *less* awkward to have it as a separate config file (because of the lack of shared configuration). We should try it and bring that experience to the table when we talk about improvements. -Jim From iwienand at redhat.com Mon Feb 19 23:15:18 2018 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 20 Feb 2018 10:15:18 +1100 Subject: [nodepool] Restricting images to specific nodepool builders In-Reply-To: <20180219152328.GA7920@localhost.localdomain> References: <2a246157-8eff-169b-920b-d7f0ebee8580@redhat.com> <20180219152328.GA7920@localhost.localdomain> Message-ID: <787f7399-53b5-4d88-2a58-b6cc7f17f403@redhat.com> On 02/20/2018 02:23 AM, Paul Belanger wrote: >> Why not just split the builder configuration file? I don't see a >> need to add code to do this. I'm happy with this; I was just coming at it from an angle of not splitting the config file, but KISS :) > I did submit support homing diskimage builds to specific builder[2] a while > back, which is more inline with what ianw is asking. This allows us to assign > images to builders, if set. > [2] https://review.openstack.org/461239/ Only comment on this is that I think it might be better to avoid putting specific hostnames in there directly; but rather add meta-data to diskimage configurations describing the features they need on the builder, and have the builder then only choose those builds it knows it can do. Feels more natural for the message-queue/scale-out type environment where we can add/drop hosts at will. We've two real examples to inform design; needing the Xenial build host when all the others were trusty, and now the arm64 based ones. -i
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From shrewsbury.dave at gmail.com Tue Feb 27 15:26:39 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 27 Feb 2018 10:26:39 -0500 Subject: [nodepool] Webapp label_list In-Reply-To: <1519741171.44y8fc6wez.tristanC@fedora> References: <1519741171.44y8fc6wez.tristanC@fedora> Message-ID: On Tue, Feb 27, 2018 at 9:29 AM, Tristan Cacqueray wrote: [snip] > Hello David, > > the label_list is supposed to tell the user what labels are available. > It's currently not reporting labels without min-ready nor in-use node, > and the controller can probably be rewritten to better get that > information. > This doesn't seem too useful then, and perhaps we should consider an alternative approach? First idea that comes to mind is maybe we can make use of the registerLauncher() API call to record the launcher's known labels. Then label_list could just pull from that, rather than iterate through all nodes. > Ideally we need a "labels" and/or "nodeset" menu next to the job > list that the user could use to set project's configs, e.g. without > having to look at nodepool configuration. > > -Tristan > > [1] https://review.openstack.org/535563 >> [2] >> http://eavesdrop.openstack.org/irclogs/%23zuul/%23zuul.2018- >> 02-07.log.html#t2018-02-07T19:12:32 >> [3] https://review.openstack.org/548134 >> >> >> -Dave >> -- >> David Shrewsbury (Shrews) >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> >> -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From shrewsbury.dave at gmail.com Tue Feb 27 17:40:55 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 27 Feb 2018 12:40:55 -0500 Subject: [nodepool] Webapp label_list In-Reply-To: References: <1519741171.44y8fc6wez.tristanC@fedora> Message-ID: On Tue, Feb 27, 2018 at 10:26 AM, David Shrewsbury < shrewsbury.dave at gmail.com> wrote: > > > On Tue, Feb 27, 2018 at 9:29 AM, Tristan Cacqueray > wrote: > > [snip] > > >> Hello David, >> >> the label_list is supposed to tell the user what labels are available. >> It's currently not reporting labels without min-ready nor in-use node, >> and the controller can probably be rewritten to better get that >> information. >> > > This doesn't seem too useful then, and perhaps we should consider an > alternative approach? > First idea that comes to mind is maybe we can make use of the > registerLauncher() API call to > record the launcher's known labels. Then label_list could just pull from > that, rather than iterate > through all nodes. > > I was thinking of something along these lines: https://review.openstack.org/548376 Maybe that can be built on to provide more accurate information? -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Tue Feb 27 22:56:34 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 28 Feb 2018 09:56:34 +1100 Subject: [nodepool] Webapp label_list In-Reply-To: References: <1519741171.44y8fc6wez.tristanC@fedora> Message-ID: On 02/28/2018 02:26 AM, David Shrewsbury wrote: >> the label_list is supposed to tell the user what labels are available. >> It's currently not reporting labels without min-ready nor in-use node, >> and the controller can probably be rewritten to better get that >> information. 
> This doesn't seem too useful then, and perhaps we should consider an > alternative approach? Agree, maybe we should just take it out pending re-implementation? I'll propose that and we can discuss. My main issue is that every other view was returning a tuple of (ordered) dicts ([result, result, ...], header dict) where the header describes each key in the result dict with a human/pretty-print label. This makes it logical to a) build a single consistent function that translates to pretty-print or json [1] and b) allows for a common filter function to only show certain fields [2]. I feel like this is a win for consistency. One thing that I think will help is documenting the interfaces better as started in [2]. It's a little easier to ensure consistency when you're just comparing all the end-points without implementation details. --- Since we're talking about it; would we be better to remove the ".json" suffix checks, and just look at the mime type accept headers and return based on that? It seems to be the more common approach. -i [1] https://review.openstack.org/#/c/548134/ [2] https://review.openstack.org/#/c/548135/ From shrewsbury.dave at gmail.com Wed Feb 28 03:11:29 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 27 Feb 2018 22:11:29 -0500 Subject: [nodepool] Webapp label_list In-Reply-To: References: <1519741171.44y8fc6wez.tristanC@fedora> Message-ID: On Tue, Feb 27, 2018 at 5:56 PM, Ian Wienand wrote: > On 02/28/2018 02:26 AM, David Shrewsbury wrote: > >> the label_list is supposed to tell the user what labels are available. > >> It's currently not reporting labels without min-ready nor in-use node, > >> and the controller can probably be rewritten to better get that > >> information. > > > This doesn't seem too useful then, and perhaps we should consider an > > alternative approach? > > Agree, maybe we should just take it out pending re-implementation? > I'll propose that and we can discuss. > I don't think a full removal is necessary. If the change I proposed to add config info to launcher registration is deemed worthy, it's a simple change to the current code to pull that new data. > > My main issue is that every other view was returning a tuple of > (ordered) dicts > > ([result, result, ...], header dict) > > where the header describes each key in the result dict with a > human/pretty-print label. > > This makes it logical to a) build a single consistent function that > translates to pretty-print or json [1] and b) allows for a common > filter function to only show certain fields [2]. I feel like this is > a win for consistency. > Yeah, I like consistency here too. I don't see any reason why that can't be easily fixed when we make the change mentioned above. > > One thing that I think will help is documenting the interfaces better > as started in [2]. It's a little easier to ensure consistency when > you're just comparing all the end-points without implementation > details. > > --- > > Since we're talking about it; would we be better to remove the ".json" > suffix checks, and just look at the mime type accept headers and > return based on that? It seems to be the more common approach. > > I'm not the most strongly versed in web things, but on the surface, this seems pretty logical to me. > -i > > [1] https://review.openstack.org/#/c/548134/ > [2] https://review.openstack.org/#/c/548135/ > -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL:
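A minimal sketch of the two ideas raised in the last couple of messages -- rendering the common ([result, result, ...], header dict) view shape, and choosing JSON versus pretty-printed text from the request's Accept header instead of a ".json" URL suffix. This is illustrative only, not code from nodepool or from the reviews linked above; the render() helper, its argument names, and the example labels are hypothetical, and the Accept handling is deliberately simplified.

    import json


    def render(results, headers, accept="text/plain"):
        # results: list of dicts, one per row, e.g. [{"label": "ubuntu-xenial", ...}]
        # headers: dict mapping result keys to human-readable column labels
        # accept:  raw Accept header value from the incoming request
        if "application/json" in accept.lower():
            # Client asked for JSON: return the rows as-is.
            return json.dumps(results)

        # Otherwise build a pretty-printed table, one column per header key,
        # sized to the widest of the label and every value in that column.
        keys = list(headers)
        widths = {k: max([len(headers[k])] +
                         [len(str(row.get(k, ""))) for row in results])
                  for k in keys}
        lines = [" | ".join(headers[k].ljust(widths[k]) for k in keys)]
        for row in results:
            lines.append(" | ".join(str(row.get(k, "")).ljust(widths[k])
                                    for k in keys))
        return "\n".join(lines)


    # Example (hypothetical data):
    print(render([{"label": "ubuntu-xenial", "count": 3}],
                 {"label": "Label", "count": "Count"},
                 accept="application/json"))

A real implementation would parse the Accept header properly (quality values, wildcards) rather than doing a substring check, and would take the accept value from whatever request object the webapp framework provides.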