[nodepool] Restricting images to specific nodepool builders
Hi,

How should we go about restricting certain image builds to specific nodepool builder instances? My immediate issue is with ARM64 image builds, which I only want to happen on a builder hosted in an ARM64 cloud.

Currently, the builders go through the image list and check "is the existing image missing or too old; if so, build" [1]. Additionally, all builders share a configuration file [2], so builders don't know "who they are".

I'd propose we add an arbitrary tag/match system so that builders pick up only those builds they mark themselves capable of building. e.g. diskimages would specify required builder tags similar to:

---
diskimages:
  - name: arm64-ubuntu-xenial
    elements:
      - block-device-efi
      - vm
      - ubuntu-minimal
      ...
    env-vars:
      TMPDIR: /opt/dib_tmp
      DIB_CHECKSUM: '1'
      ...
    builder-requires:
      architecture: arm64
---

The nodepool.yaml would grow another section similar to:

---
builder-provides:
  architecture: arm64
  something_else_unique_about_this_builder: true
---

For OpenStack, we would template this section in the config file via puppet in [2], ensuring that only our theoretical ARM64 build machine had that section in its config.

The nodepool-builder build loop can then check that its builder-provides section has all the tags specified in an image's "builder-requires" section before deciding to start building.

Thoughts welcome :)

-i

[1] https://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/builder.py#n607
[2] https://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/nodepool.yaml
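As a rough illustration of the matching step the proposal describes, here is a minimal Python sketch of a builder filtering the shared diskimage list against its own capabilities. The names (builder_provides, builder-requires, images_to_build) follow the proposal above and are illustrative only, not the actual nodepool builder API:

# Illustrative sketch only: names follow the proposal above, not real nodepool code.

def builder_can_build(builder_provides, image_requires):
    """True if every tag the image requires is provided by this builder."""
    return all(builder_provides.get(k) == v for k, v in image_requires.items())


def images_to_build(diskimages, builder_provides):
    """Yield only the diskimages this builder marks itself capable of building."""
    for image in diskimages:
        if builder_can_build(builder_provides, image.get('builder-requires', {})):
            yield image


# Only a builder advertising architecture=arm64 picks up the arm64 image;
# images with no builder-requires section remain buildable everywhere.
builder_provides = {'architecture': 'arm64'}
diskimages = [
    {'name': 'arm64-ubuntu-xenial', 'builder-requires': {'architecture': 'arm64'}},
    {'name': 'ubuntu-xenial'},
]
print([img['name'] for img in images_to_build(diskimages, builder_provides)])
# ['arm64-ubuntu-xenial', 'ubuntu-xenial']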
Hi,

On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand <iwienand@redhat.com> wrote:
Hi,
How should we go about restricting certain image builds to specific nodepool builder instances? My immediate issue is with ARM64 image builds, which I only want to happen on a builder hosted in an ARM64 cloud.
Currently, the builders go through the image list and check "is the existing image missing or too old, if so, build" [1]. Additionally, all builders share a configuration file [2]; so builders don't know "who they are".
Why not just split the builder configuration file? I don't see a need to add code to do this.
-- David Shrewsbury (Shrews)
On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote:
Hi,
On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand <iwienand@redhat.com> wrote:
Hi,
How should we go about restricting certain image builds to specific nodepool builder instances? My immediate issue is with ARM64 image builds, which I only want to happen on a builder hosted in an ARM64 cloud.
Currently, the builders go through the image list and check "is the existing image missing or too old, if so, build" [1]. Additionally, all builders share a configuration file [2]; so builders don't know "who they are".
Why not just split the builder configuration file? I don't see a need to add code to do this.
In our case (openstack-infra) this will require another change to puppet-nodepool to support this. Not that we cannot, but it will mean we'll have 7 [1] different nodepool configuration files to manage (4 x nodepool-launchers, 3 x nodepool-builders), since we have 7 services running.

We could update puppet to start templating or add support for nodepool.d (like zuul.d) and better split our configs too. I just haven't found time to write that patch.

I did submit support for homing diskimage builds to a specific builder [2] a while back, which is more in line with what ianw is asking. This allows us to assign images to builders, if set.

[1] http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool
[2] https://review.openstack.org/461239/
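For comparison with the tag approach, a hypothetical sketch of what homing an image to a named builder might look like in nodepool.yaml; the option name, hostname, and syntax here are guesses for illustration and may not match what review 461239 actually proposes:

---
diskimages:
  - name: arm64-ubuntu-xenial
    # Hypothetical option: only the builder whose hostname matches builds this image.
    builder: arm64-builder01.example.com
    elements:
      - block-device-efi
      - vm
      - ubuntu-minimal
---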
On 02/19/2018 09:23 AM, Paul Belanger wrote:
On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote:
Hi,
On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand <iwienand@redhat.com> wrote:
Hi,
How should we go about restricting certain image builds to specific nodepool builder instances? My immediate issue is with ARM64 image builds, which I only want to happen on a builder hosted in an ARM64 cloud.
Currently, the builders go through the image list and check "is the existing image missing or too old, if so, build" [1]. Additionally, all builders share a configuration file [2]; so builders don't know "who they are".
Why not just split the builder configuration file? I don't see a need to add code to do this.
In our case (openstack-infra) this will require another change to puppet-nodepool to support this. Not that we cannot, but it will mean we'll have 7 [1] different nodepool configuration files to manage (4 x nodepool-launchers, 3 x nodepool-builders), since we have 7 services running.
We could update puppet to start templating or add support for nodepool.d (like zuul.d) and better split our configs too. I just haven't found time to write that patch.
I did submit support for homing diskimage builds to a specific builder [2] a while back, which is more in line with what ianw is asking. This allows us to assign images to builders, if set.
I'd honestly prefer if we got to a place where we were not splitting config files like we do today, but instead have a config file and we have nodepool figure out any splitting needed. I think that's a larger effort that needs some design ... but in general I personally consider the current need to split the config file by hand to be a bug, not a feature.

If we consider this builder need alongside the discussion about cloud-region awareness for zuul executors, I think what we're coming to is that there are some actions we want to be able to be cloud-region aware and we might want to consider a holistic design for such awareness and/or tying.
On Mon, Feb 19, 2018 at 11:20 AM, Monty Taylor <mordred@inaugust.com> wrote:
I'd honestly prefer if we got to a place where we were not splitting config files like we do today, but instead have a config file and we have nodepool figure out any splitting needed. I think that's a larger effort that needs some design ... but in general I personally consider the current need to split the config file by hand to be a bug, not a feature.
+1, I think we already have the necessary tooling in place to "shard" things appropriately between launchers (gearman, zookeeper, etc) without having to split configuration files. Splitting things manually per launcher is also prone to failure/SPoF if a particular launcher is not behaving well for some reason.
If we consider this builder need alongside the discussion about cloud-region awareness for zuul executors, I think what we're coming to is that there are some actions we want to be able to be cloud-region aware and we might want to consider a holistic design for such awareness and/or tying.
+1, and for reference, the story about Zuul executor "zones" [1].

I think it would make sense to have common logic around "zones" (or "clouds", or "providers", whatever), because you might not just need an executor in a private network. In reality you'll also need a nodepool-launcher to launch VMs in that private cloud. I'm not intimate with the internals of nodepool-builder, but which component ultimately takes care of uploading the image? Is that nodepool-launcher? Otherwise you also need a nodepool-builder instance in that private network...

I see it roughly looking like this (I hope this renders correctly, otherwise see [2]):

      "LAN"            Firewall       "WAN"
                          +
+-----------------+       |       +--------------+
|Zuul Executor    |       |       |Zuul Scheduler|
|Nodepool builder +-------------->|Gearman       |
|Nodepool launcher|       |       |Zookeeper     |
+-------+---------+       |       +--------------+
        |                 |
        v                 |
  Private cloud           +

[1]: https://storyboard.openstack.org/#!/story/2001125
[2]: https://i.imgur.com/QVSFgd6.png

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]
On Mon, Feb 19, 2018 at 10:20:54AM -0600, Monty Taylor wrote:
On 02/19/2018 09:23 AM, Paul Belanger wrote:
On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote:
Hi,
On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand <iwienand@redhat.com> wrote:
Hi,
How should we go about restricting certain image builds to specific nodepool builder instances? My immediate issue is with ARM64 image builds, which I only want to happen on a builder hosted in an ARM64 cloud.
Currently, the builders go through the image list and check "is the existing image missing or too old, if so, build" [1]. Additionally, all builders share a configuration file [2]; so builders don't know "who they are".
Why not just split the builder configuration file? I don't see a need to add code to do this.
In our case (openstack-infra) this will require another change to puppet-nodepool to support this. Not that we cannot, but it will mean we'll have 7 [1] different nodepool configuration files to manage (4 x nodepool-launchers, 3 x nodepool-builders), since we have 7 services running.
We could update puppet to start templating or add support for nodepool.d (like zuul.d) and better split our configs too. I just haven't found time to write that patch.
I did submit support for homing diskimage builds to a specific builder [2] a while back, which is more in line with what ianw is asking. This allows us to assign images to builders, if set.
I'd honestly prefer if we got to a place where we were not splitting config files like we do today, but instead have a config file and we have nodepool figure out any splitting needed. I think that's a larger effort that needs some design ... but in general I personally consider the current need to split the config file by hand to be a bug, not a feature.
+1, agree (bug, not a feature). So far the split configuration was to work around issues of having more than 1 nodepool-launcher.
If we consider this builder need alongside the discussion about cloud-region awareness for zuul executors, I think what we're coming to is that there are some actions we want to be able to be cloud-region aware and we might want to consider a holistic design for such awareness and/or tying.
Sounds like we should spend a few minutes at the PTG discussing it. I'll add a topic to the etherpad.
Monty Taylor <mordred@inaugust.com> writes:
I'd honestly prefer if we got to a place where we were not splitting config files like we do today, but instead have a config file and we have nodepool figure out any splitting needed. I think that's a larger effort that needs some design ... but in general I personally consider the current need to split the config file by hand to be a bug, not a feature.
I think there are certainly some cases where it's awkward, especially with launchers. I suspect this case, for builders, is going to be *less* awkward with a separate config file (because of the lack of shared configuration). We should try it and bring that experience to the table when we talk about improvements.

-Jim
Paul Belanger <pabelanger@redhat.com> writes:
On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote:
Hi,
On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand <iwienand@redhat.com> wrote:
Hi,
How should we go about restricting certain image builds to specific nodepool builder instances? My immediate issue is with ARM64 image builds, which I only want to happen on a builder hosted in an ARM64 cloud.
Currently, the builders go through the image list and check "is the existing image missing or too old, if so, build" [1]. Additionally, all builders share a configuration file [2]; so builders don't know "who they are".
Why not just split the builder configuration file? I don't see a need to add code to do this.
In our case (openstack-infra) this will require another change to puppet-nodepool to support this. Not that we cannot, but it will mean we'll have 7 [1] different nodepool configuration files to manage (4 x nodepool-launchers, 3 x nodepool-builders), since we have 7 services running.
This seems like a pretty legitimate case to split the config. Very little of the config for the arm64 builder will be shared with any of the other builders, so perhaps unlike the case where one simply wants high-availability launchers, this seems like a very sensible use of a separate config file.

At any rate, that's what we should do in openstack-infra to solve the issue Ian asked about.

-Jim
On 02/19/2018 10:50 AM, James E. Blair wrote:
Paul Belanger <pabelanger@redhat.com> writes:
On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote:
Hi,
On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand <iwienand@redhat.com> wrote:
Hi,
How should we go about restricting certain image builds to specific nodepool builder instances? My immediate issue is with ARM64 image builds, which I only want to happen on a builder hosted in an ARM64 cloud.
Currently, the builders go through the image list and check "is the existing image missing or too old, if so, build" [1]. Additionally, all builders share a configuration file [2]; so builders don't know "who they are".
Why not just split the builder configuration file? I don't see a need to add code to do this.
In our case (openstack-infra) this will require another change to puppet-nodepool to support this. Not that we cannot, but it will mean we'll have 7 [1] different nodepool configuration files to manage (4 x nodepool-launchers, 3 x nodepool-builders), since we have 7 services running.
This seems like a pretty legitimate case to split the config. Very little of the config for the arm64 builder will be shared with any of the other builders, so perhaps unlike the case where one simply wants high-availability launchers, this seems like a very sensible use of a separate config file.
Yup, totally agree. My other response about split config was more in response to Paul's suggestion:

On 02/19/2018 09:23 AM, Paul Belanger wrote:
We could update puppet to start templating or add support for nodepool.d (like zuul.d) and better split our configs too. I just haven't found time to write that patch.
I don't think we should improve general handling of split configs; I think we should make it so we don't need them. But I agree, in this case a builder-specific config for arm is the right choice.
At any rate, that's what we should do in openstack-infra to solve the issue Ian asked about.
-Jim
On Mon, Feb 19, 2018 at 10:23 AM, Paul Belanger <pabelanger@redhat.com> wrote:
On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote:
Hi,
On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand <iwienand@redhat.com> wrote:
Hi,
How should we go about restricting certain image builds to specific nodepool builder instances? My immediate issue is with ARM64 image builds, which I only want to happen on a builder hosted in an ARM64 cloud.
Currently, the builders go through the image list and check "is the existing image missing or too old, if so, build" [1]. Additionally, all builders share a configuration file [2]; so builders don't know "who they are".
Why not just split the builder configuration file? I don't see a need to add code to do this.
In our case (openstack-infra) this will require another change to puppet-nodepool to support this. Not that we cannot, but it will mean we'll have 7 [1] different nodepool configuration files to manage (4 x nodepool-launchers, 3 x nodepool-builders), since we have 7 services running.
We don't have to have one config file per builder, just one additional file for the arm64 builder. The others can still share the config we have now.

-Dave

-- David Shrewsbury (Shrews)
On 02/20/2018 02:23 AM, Paul Belanger wrote:
Why not just split the builder configuration file? I don't see a need to add code to do this.
I'm happy with this; I was just coming at it from an angle of not splitting the config file, but KISS :)
I did submit support for homing diskimage builds to a specific builder [2] a while back, which is more in line with what ianw is asking. This allows us to assign images to builders, if set.
My only comment on this is that it might be better to avoid putting specific hostnames in there directly, and instead add metadata to diskimage configurations describing the features they need on the builder, then have the builder only choose those builds it knows it can do. That feels more natural for a message-queue/scale-out type environment where we can add/drop hosts at will.

We have two real examples to inform the design: needing the Xenial build host when all the others were Trusty, and now the arm64-based ones.

-i
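To make that preference concrete, a hedged sketch of how those two real examples could be expressed with capability metadata rather than hostnames; the section names follow the builder-requires/builder-provides proposal from the start of the thread, and the build-host-distro key is purely illustrative:

---
# Fragment templated onto the arm64 builder's config:
builder-provides:
  architecture: arm64
---
# Fragment templated onto the (hypothetical) Xenial-based build host:
builder-provides:
  build-host-distro: xenial
---
# Shared diskimage definitions; each builder picks up only the builds
# whose builder-requires tags it can provide:
diskimages:
  - name: arm64-ubuntu-xenial
    builder-requires:
      architecture: arm64
  - name: ubuntu-xenial
    builder-requires:
      build-host-distro: xenial
---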
participants (6)

- corvus@inaugust.com
- David Moreau Simard
- David Shrewsbury
- Ian Wienand
- Monty Taylor
- Paul Belanger