Hi,
I'd like to propose some new features related to Nodepool and image building. The short version is that right now image management is tied fairly closely to diskimage-builder within the nodepool-builder process. This is problematic for folks who want Nodepool image management but can't use DIB (ie, Windows images). It's even problematic for those who do want to use DIB, in that it's difficult to debug image builds.
It's worth noting the original Zuulv3 spec had room for more than one nodepool-builder implementation (in an offhand comment, it mentioned that a snapshot-based nodepool-builder process could be created). However, that has not yet come to pass.
Over the years, we've speculated about the idea of using Zuul to build images, and I think it's time to see if we can put that into practice. It makes a lot of sense in that we have a ready-made execution environment with Zuul -- build logs and history are well defined and accessible to anyone. It's flexible and can accommodate any image building system (DIB or otherwise). The main downside is the inception aspect, but I'm not worried about that (more later).
I think the key requirements are:
* Able to create images using any tooling.
* Visibility into build history.
* Nodepool-launcher should automatically use the "latest" build.
Part of this solution is obvious, but I think there are open questions for other parts:
* Should the Zuul job be responsible for uploading the image to the cloud?
* Should Nodepool be responsible for deleting images?
About that downside: clearly there's an issue when bootstrapping a new Zuul system -- how can you run jobs without images? Most new Zuul installations start by using cloud images from the provider. Any Zuul user who wants custom images likely already has some kind of job running on some kind of image anyway. I think it's a natural progression to create a Zuul job to build a custom image and then begin using that custom image in "real" Zuul jobs. Long story short: bootstrap the system with cloud-images. As an OpenDev admin, I think I would be comfortable with a Zuul job that built our DIB images using cloud provider images -- I don't think the differences between those images are so great as to make a DIB job difficult to write. However, if we did use this system, I'm not sure we would ever have to. We have enough clouds with our own images in them already, we could probably use our images to make new ones. In this case: a chicken with no egg. :)
Let's work through how this might look. First the easy part, then I'm going to propose some alternate solutions.
1) Build the image:
A Zuul job ("build-nodepool-image-ubuntu-focal") is triggered by a timer. It has a mutex to avoid duplicate image builds. It runs DIB in this case but could just as easily run packer or even snapshot the system it's running on (we can add roles in zuul-jobs for common image building systems).
2) Get the image into the cloud(s):
a) Our first option is to do this work in Zuul. Have the job (or a dependent job) upload the image to the cloud(s) (if this was a snapshot build, that's a no-op -- it's already there). Advantage: the upload log is visible along with the build log. Disadvantage: if the upload fails, we won't retry like Nodepool does.
b) An alternative would be to have Nodepool do this. Nodepool could even query the Zuul API to find the latest build of build-nodepool-image-ubuntu-focal, check the artifacts list, and download the image file from there. After that, it looks a lot like the current nodepool-builder process right after DIB runs (it has a file on disk and it starts to upload it). Advantage: we use all of the Nodepool image lifecycle management. Disadvantage: upload logs are hidden from users (like today). To integrate snapshot builds, we would have to download the snapshot and re-upload it.
3) Get nodepool-launcher to use the new image:
a) If we use 2.a, then we have an image in the cloud without a record in Zookeeper.
i) We could have nodepool-builder (or something like it) scan the cloud for images matching a certain pattern and create ZK records for them.
ii) Or we could just have nodepool-launcher look for that pattern and use the latest image.
b) If we use 2b, then launcher will work as it does today with nodepool-builder; no changes are needed.
4) Delete the image when it's no longer needed
We should use the n-2 rule we have today so there's always at least 2 images in the cloud for easy rollback.
a) If we used 2.a + 3.a.ii then we don't have ZK records. It would probably make sense to handle image deletes in Zuul jobs with this setup. Presumably the jobs would query the cloud API, list images, and delete them according to the age rules. The disadvantage here is we'd be re-implementing a good chunk of nodepool-builder in Ansible modules. It's doable, but may be a bit clunky.
b) If we use 2.a + 3.a.i or 2.b + 3.b then we have records in ZK, so nodepool-builder (or something like it) can delete the images more or less how it does today.
I think those are most of the choices.
Now that I've typed all of that out, I think I really like the system with "b" for all the choices. I'm not too worried about the upload logs being hidden. And the snapshot system is workable, if not ideal.
If we really wanted to support snapshots better, then 2.a + 3.a.i + 4.b would probably be best, but at the cost of moving the upload code into zuul-jobs roles.
Or we could do the same thing we did last time and implement the "b" solution ("nodepool-zuul-builder") and if someone really wants to do snapshots, then we implement 2.a[snapshot] + 3.a.i + 4.b as ("nodepool-snapshot-builder"). That would probably be fairly efficient as far as code goes.
Finally, a word about DIB and the current nodepool-builder: none of this requires that we change anything about the current nodepool-builder process. If we like it, we can keep it exactly as it is. Indeed, in OpenDev's case, we rely very heavily on the cache on the build host, so it's not at all clear that doing builds on throwaway nodes is ideal. Of course, we could do builds on static Zuul nodes and get the benefit of the cache. I think the way to approach this would be to implement the new system, then try using it with DIB, and if we love it and find no significant drawbacks, consider deprecating the current nodepool-builder. If we don't love it, we'll keep it.
I'd appreciate feedback on this. I know this is a bit free-form; I'll take the feedback I get and synthesize it into a proper spec.
-Jim
PS: there's a fairly well-known hack for making non-DIB image builds by replacing DIB with a script that accepts the same arguments. The "b" plan could actually be implemented today with no changes to Nodepool using that same hack. A final implementation probably wouldn't look much different structurally.
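To make step 1 concrete, a job definition along these lines would probably be enough. This is only a sketch: the job, playbook, semaphore, and label names are hypothetical, a timer-triggered "periodic" pipeline is assumed to exist, and the "mutex" is expressed as a Zuul semaphore with max 1.
  # Sketch of a periodic image-build job; all names here are hypothetical.
  - semaphore:
      name: image-build-ubuntu-focal
      max: 1

  - job:
      name: build-nodepool-image-ubuntu-focal
      description: Build the ubuntu-focal image (DIB here, but any tool works).
      semaphore: image-build-ubuntu-focal
      run: playbooks/build-image.yaml
      timeout: 3600
      nodeset:
        nodes:
          - name: builder
            label: cloud-image-ubuntu-focal

  - project:
      periodic:
        jobs:
          - build-nodepool-image-ubuntu-focal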
Thank you, Jim, for starting that important discussion.
On Sat, Feb 06, 2021 at 15:48 James E. Blair wrote:
Hi,
I'd like to propose some new features related to Nodepool and image building. The short version is that right now image management is tied fairly closely to diskimage-builder within the nodepool-builder process. This is problematic for folks who want Nodepool image management but can't use DIB (ie, Windows images). It's even problematic for those who do want to use DIB, in that it's difficult to debug image builds.
It's worth noting the original Zuulv3 spec had room for more than one nodepool-builder implementation (in an offhand comment, it mentioned that a snapshot-based nodepool-builder process could be created). However, that has not yet come to pass.
Over the years, we've speculated about the idea of using Zuul to build images, and I think it's time to see if we can put that into practice. It makes a lot of sense in that we have a ready-made execution environment with Zuul -- build logs and history are well defined and accessible to anyone. It's flexible and can accommodate any image building system (DIB or otherwise). The main downside is the inception aspect, but I'm not worried about that (more later).
I think the key requirements are:
* Able to create images using any tooling.
* Visibility into build history.
* Nodepool-launcher should automatically use the "latest" build.
Could we also add the ability to sanity check the image before enabling it?
Part of this solution is obvious, but I think there are open questions for other parts:
* Should the Zuul job be responsible for uploading the image to the cloud?
* Should Nodepool be responsible for deleting images?
About that downside: clearly there's an issue when bootstrapping a new Zuul system -- how can you run jobs without images? Most new Zuul installations start by using cloud images from the provider. Any Zuul user who wants custom images likely already has some kind of job running on some kind of image anyway. I think it's a natural progression to create a Zuul job to build a custom image and then begin using that custom image in "real" Zuul jobs. Long story short: bootstrap the system with cloud-images. As an OpenDev admin, I think I would be comfortable with a Zuul job that built our DIB images using cloud provider images -- I don't think the differences between those images are so great as to make a DIB job difficult to write. However, if we did use this system, I'm not sure we would ever have to. We have enough clouds with our own images in them already, we could probably use our images to make new ones. In this case: a chicken with no egg. :)
Let's work through how this might look. First the easy part, then I'm going to propose some alternate solutions.
1) Build the image:
A Zuul job ("build-nodepool-image-ubuntu-focal") is triggered by a timer. It has a mutex to avoid duplicate image builds. It runs DIB in this case but could just as easily run packer or even snapshot the system it's running on (we can add roles in zuul-jobs for common image building systems).
2) Get the image into the cloud(s):
a) Our first option is to do this work in Zuul. Have the job (or a dependent job) upload the image to the cloud(s) (if this was a snapshot build, that's a no-op -- it's already there). Advantage: the upload log is visible along with the build log. Disadvantage: if the upload fails, we won't retry like Nodepool does.
b) An alternative would be to have Nodepool do this. Nodepool could even query the Zuul API to find the latest build of build-nodepool-image-ubuntu-focal, check the artifacts list, and download the image file from there. After that, it looks a lot like the current nodepool-builder process right after DIB runs (it has a file on disk and it starts to upload it). Advantage: we use all of the Nodepool image lifecycle management. Disadvantage: upload logs are hidden from users (like today). To integrate snapshot builds, we would have to download the snapshot and re-upload it.
3) Get nodepool-launcher to use the new image:
a) If we use 2.a, then we have an image in the cloud without a record in Zookeeper.
i) We could have nodepool-builder (or something like it) scan the cloud for images matching a certain pattern and create ZK records for them.
ii) Or we could just have nodepool-launcher look for that pattern and use the latest image.
Isn't (ii) already the case when using cloud-image?
b) If we use 2b, then launcher will work as it does today with nodepool-builder; no changes are needed.
4) Delete the image when it's no longer needed
We should use the n-2 rule we have today so there's always at least 2 images in the cloud for easy rollback.
a) If we used 2.a + 3.a.ii then we don't have ZK records. It would probably make sense to handle image deletes in Zuul jobs with this setup. Presumably the jobs would query the cloud API, list images, and delete them according to the age rules. The disadvantage here is we'd be re-implementing a good chunk of nodepool-builder in Ansible modules. It's doable, but may be a bit clunky.
b) If we use 2.a + 3.a.i or 2.b + 3.b then we have records in ZK, so nodepool-builder (or something like it) can delete the images more or less how it does today.
I think those are most of the choices.
Now that I've typed all of that out, I think I really like the system with "b" for all the choices. I'm not too worried about the upload logs being hidden. And the snapshot system is workable, if not ideal.
If we really wanted to support snapshots better, then 2.a + 3.a.i + 4.b would probably be best, but at the cost of moving the upload code into zuul-jobs roles.
Or we could do the same thing we did last time and implement the "b" solution ("nodepool-zuul-builder") and if someone really wants to do snapshots, then we implement 2.a[snapshot] + 3.a.i + 4.b as ("nodepool-snapshot-builder"). That would probably be fairly efficient as far as code goes.
Finally, a word about DIB and the current nodepool-builder: none of this requires that we change anything about the current nodepool-builder process. If we like it, we can keep it exactly as it is. Indeed, in OpenDev's case, we rely very heavily on the cache on the build host, so it's not at all clear that doing builds on throwaway nodes is ideal. Of course, we could do builds on static Zuul nodes and get the benefit of the cache. I think the way to approach this would be to implement the new system, then try using it with DIB, and if we love it and find no significant drawbacks, consider deprecating the current nodepool-builder. If we don't love it, we'll keep it.
I'd appreciate feedback on this. I know this is a bit free-form; I'll take the feedback I get and synthesize it into a proper spec.
-Jim
PS: there's a fairly well-known hack for making non-DIB image builds by replacing DIB with a script that accepts the same arguments. The "b" plan could actually be implemented today with no changes to Nodepool using that same hack. A final implementation probably wouldn't look much different structurally.
I prefer the system with "a" for all the choices, and 3.a.ii because it doesn't require an extra service or information to be stored in ZK. Also everything is defined in Zuul without hidden pieces. This is actually the system we implemented to manage container updates:
We have been working on a Zuul periodic pipeline to update container images. The key features of this pipeline are:
- Gate image definition change, by trying to build and sanity check before merging a change.
- Save resources by updating the image instead of doing a clean rebuild. When the image definition did not change, and when there is enough room for more layers, a `dnf update` task is performed to produce a single extra layer. When nothing gets updated, the promotion is skipped.
- Sanity check the image before promotion, by running some commands such as `tox --version` or `twine --version`
You can find the zuul-worker-* images published at https://quay.io/organization/software-factory
The build history is at: https://softwarefactory-project.io/zuul/t/containers-update/builds
The image update jobs are in: https://softwarefactory-project.io/cgit/containers/tree/zuul.d
Note that there is still a bit of work to be done before we release and document that use-case.
Regards,
-Tristan
Tristan Cacqueray <tdecacqu@redhat.com> writes:
On Sat, Feb 06, 2021 at 15:48 James E. Blair wrote:
I think the key requirements are:
* Able to create images using any tooling.
* Visibility into build history.
* Nodepool-launcher should automatically use the "latest" build.
Could we also add the ability to sanity check the image before enabling it?
Sure, we can add that as a requirement. Note that this is currently implemented in Nodepool with:
https://zuul-ci.org/docs/nodepool/openstack.html#attr-providers.[openstack].post-upload-hook
The options in the plan below where we have Nodepool uploading the image could use that as well, and additionally, if Zuul builds the image, there may be more options for validating it as part of the build process.
Based on the discussion below, I probably should have included "have a managed lifecycle for older images" as a requirement as well.
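For reference, that hook is configured per provider; a rough sketch of what it looks like in nodepool.yaml (the provider name and script path are made up):
  # Sketch only -- provider name and script path are hypothetical.
  providers:
    - name: example-openstack
      cloud: example
      # Run a validation script after each image upload; a failing hook
      # keeps the upload from being put into use.
      post-upload-hook: /usr/local/bin/validate-image.sh
      diskimages:
        - name: ubuntu-focal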
Part of this solution is obvious, but I think there are open questions for other parts:
* Should the Zuul job be responsible for uploading the image to the cloud?
* Should Nodepool be responsible for deleting images?
About that downside: clearly there's an issue when bootstrapping a new Zuul system -- how can you run jobs without images? Most new Zuul installations start by using cloud images from the provider. Any Zuul user who wants custom images likely already has some kind of job running on some kind of image anyway. I think it's a natural progression to create a Zuul job to build a custom image and then begin using that custom image in "real" Zuul jobs. Long story short: bootstrap the system with cloud-images. As an OpenDev admin, I think I would be comfortable with a Zuul job that built our DIB images using cloud provider images -- I don't think the differences between those images are so great as to make a DIB job difficult to write. However, if we did use this system, I'm not sure we would ever have to. We have enough clouds with our own images in them already, we could probably use our images to make new ones. In this case: a chicken with no egg. :)
Let's work through how this might look. First the easy part, then I'm going to propose some alternate solutions.
1) Build the image:
A Zuul job ("build-nodepool-image-ubuntu-focal") is triggered by a timer. It has a mutex to avoid duplicate image builds. It runs DIB in this case but could just as easily run packer or even snapshot the system it's running on (we can add roles in zuul-jobs for common image building systems).
2) Get the image into the cloud(s):
a) Our first option is to do this work in Zuul. Have the job (or a dependent job) upload the image to the cloud(s) (if this was a snapshot build, that's a no-op -- it's already there). Advantage: the upload log is visible along with the build log. Disadvantage: if the upload fails, we won't retry like Nodepool does.
b) An alternative would be to have Nodepool do this. Nodepool could even query the Zuul API to find the latest build of build-nodepool-image-ubuntu-focal, check the artifacts list, and download the image file from there. After that, it looks a lot like the current nodepool-builder process right after DIB runs (it has a file on disk and it starts to upload it). Advantage: we use all of the Nodepool image lifecycle management. Disadvantage: upload logs are hidden from users (like today). To integrate snapshot builds, we would have to download the snapshot and re-upload it.
3) Get nodepool-launcher to use the new image:
a) If we use 2.a, then we have an image in the cloud without a record in Zookeeper.
i) We could have nodepool-builder (or something like it) scan the cloud for images matching a certain pattern and create ZK records for them.
ii) Or we could just have nodepool-launcher look for that pattern and use the latest image.
Isn't (ii) already the case when using cloud-image?
Cloud-image refers to a single image, so yes, you can replace that and Nodepool should pick up the new uuid. But it doesn't pattern match on names, so you can't easily have the same kind of "just delete the latest" rollback strategy we do with nodepool-builder and dib. I think we would need to add some kind of pattern matching support in order to support keeping more than one image in the cloud (latest and previous at least).
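Purely to illustrate the idea (no such option exists in Nodepool today), pattern matching for cloud-images might end up looking something like this:
  # Hypothetical -- this attribute does not exist in Nodepool today.
  cloud-images:
    - name: ubuntu-focal-built
      username: zuul
      # Match every upload from the build job; the launcher would always
      # pick the newest matching image.
      image-name-filter: "zuul-built-ubuntu-focal-*"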
b) If we use 2b, then launcher will work as it does today with nodepool-builder; no changes are needed.
4) Delete the image when it's no longer needed
We should use the n-2 rule we have today so there's always at least 2 images in the cloud for easy rollback.
a) If we used 2.a + 3.a.ii then we don't have ZK records. It would probably make sense to handle image deletes in Zuul jobs with this setup. Presumably the jobs would query the cloud API, list images, and delete them according to the age rules. The disadvantage here is we'd be re-implementing a good chunk of nodepool-builder in Ansible modules. It's doable, but may be a bit clunky.
b) If we use 2.a + 3.a.i or 2.b + 3.b then we have records in ZK, so nodepool-builder (or something like it) can delete the images more or less how it does today.
I think those are most of the choices.
Now that I've typed all of that out, I think I really like the system with "b" for all the choices. I'm not too worried about the upload logs being hidden. And the snapshot system is workable, if not ideal.
If we really wanted to support snapshots better, then 2.a + 3.a.i + 4.b would probably be best, but at the cost of moving the upload code into zuul-jobs roles.
Or we could do the same thing we did last time and implement the "b" solution ("nodepool-zuul-builder") and if someone really wants to do snapshots, then we implement 2.a[snapshot] + 3.a.i + 4.b as ("nodepool-snapshot-builder"). That would probably be fairly efficient as far as code goes.
Finally, a word about DIB and the current nodepool-builder: none of this requires that we change anything about the current nodepool-builder process. If we like it, we can keep it exactly as it is. Indeed, in OpenDev's case, we rely very heavily on the cache on the build host, so it's not at all clear that doing builds on throwaway nodes is ideal. Of course, we could do builds on static Zuul nodes and get the benefit of the cache. I think the way to approach this would be to implement the new system, then try using it with DIB, and if we love it and find no significant drawbacks, consider deprecating the current nodepool-builder. If we don't love it, we'll keep it.
I'd appreciate feedback on this. I know this is a bit free-form; I'll take the feedback I get and synthesize it into a proper spec.
-Jim
PS: there's a fairly well-known hack for making non-DIB image builds by replacing DIB with a script that accepts the same arguments. The "b" plan could actually be implemented today with no changes to Nodepool using that same hack. A final implementation probably wouldn't look much different structurally.
I prefer the system with "a" for all the choices, and 3.a.ii because it doesn't require an extra service or information to be stored in ZK. Also everything is defined in Zuul without hidden pieces. This is actually the system we implemented to manage container updates:
Okay, so that's:
1) Build the image in Zuul.
2.a) Upload the image in Zuul.
3.a.ii) Have nodepool-launcher use cloud-images with pattern matching to determine the latest image.
4.a) Write new Zuul jobs to manage deletion.
We have been working on a Zuul periodic pipeline to update container images. The key features of this pipeline are:
- Gate image definition change, by trying to build and sanity check before merging a change.
- Save resources by updating the image instead of doing a clean rebuild. When the image definition did not change, and when there is enough room for more layers, a `dnf update` task is performed to produce a single extra layer. When nothing gets updated, the promotion is skipped.
This advantage probably accrues more easily to container images versus VM images.
- Sanity check the image before promotion, by running some commands such as `tox --version` or `twine --version`
You can find the zuul-worker-* images published at https://quay.io/organization/software-factory
The build history is at: https://softwarefactory-project.io/zuul/t/containers-update/builds
The image update jobs are in: https://softwarefactory-project.io/cgit/containers/tree/zuul.d
Note that there is still a bit of work to be done before we release and document that use-case.
I don't see a rollback strategy with that. I think one of the requirements that we have (and I should have explicitly listed) is to maintain the image lifecycle management that we have with nodepool-builder. It doesn't have to be implemented in nodepool-builder; it could be in a Zuul job (I described this in 4.a).
But we've found that, at least with heavyweight images, it's very difficult to thoroughly validate images before use. There's only one way to find out if something esoteric is going to break devstack: run devstack for an hour. Multiply that by all the different jobs OpenDev has, and the only way to thoroughly vet an image is to run every job on it. The only practical way to do that is to put it into production. We're moving to lighter and lighter images to reduce the impact of image updates, but the problem still exists. And when there's a problem, we need to roll back quickly.
At any rate, I think that rollback ability is important to preserve even if we build images in Zuul. That's where this gets complicated. Step 4.a implements that in this path, but to me, that looks like quite a bit of tricky Ansible. Essentially, we would need roles that scan the cloud image list, figure out which is the latest image (mirroring the Python code that we would have implemented for 3.a.ii), then delete images older than the last two. It's not impossible; we have something similar in zuul-jobs already to delete old container image tags. But it feels a little more Rube Goldberg.
The (a) options remind me of the Jenkins jobs which used to do this before we wrote Nodepool -- they broke a lot because clouds aren't reliable, and it's tricky doing image management with no central state. There's a reason OpenDev's nodepool-builder spends most of its time retrying image upload attempts. I think that's why I lean toward the (b) options -- they generally use as much of the nodepool-builder image handling framework as possible and move only the build process to Zuul.
I do think the (a) options are workable, but if I'm reading your reply right, I may not have communicated the importance and complexity of image lifecycle management before. With that in mind now, if we went with (a), do we think the system would be robust enough that it could (if we wanted to) be used in OpenDev?
-Jim
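To give a sense of the Ansible that 4.a implies, the deletion role would boil down to something like the sketch below. The image name prefix is made up, and this glosses over all of the retry and error handling that nodepool-builder gives us for free.
  # Rough sketch of the 4.a cleanup logic; assumes the "openstack" CLI is
  # available and images are named "zuul-built-ubuntu-focal-<serial>".
  - name: List all images in the cloud
    command: openstack image list -f value -c Name
    register: image_list

  - name: Delete everything except the two newest matching builds
    command: "openstack image delete {{ item }}"
    loop: >-
      {{ (image_list.stdout_lines
          | select('match', '^zuul-built-ubuntu-focal-')
          | sort(reverse=true)
          | list)[2:] }}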
On Sat, Feb 6, 2021, at 3:48 PM, James E. Blair wrote:
Hi,
I'd like to propose some new features related to Nodepool and image building. The short version is that right now image management is tied fairly closely to diskimage-builder within the nodepool-builder process. This is problematic for folks who want Nodepool image management but can't use DIB (ie, Windows images). It's even problematic for those who do want to use DIB, in that it's difficult to debug image builds.
In my experience, image builds are usually hard to debug because the images fail due to a network setup problem or a systemd unit ordering issue. I don't think that moving image builds to Zuul or using something other than DIB solves the fundamental issues with debugging image builds.
It's worth noting the original Zuulv3 spec had room for more than one nodepool-builder implementation (in an offhand comment, it mentioned that a snapshot-based nodepool-builder process could be created). However, that has not yet come to pass.
Over the years, we've speculated about the idea of using Zuul to build images, and I think it's time to see if we can put that into practice. It makes a lot of sense in that we have a ready-made execution environment with Zuul -- build logs and history are well defined and accessible to anyone. It's flexible and can accommodate any image building system (DIB or otherwise). The main downside is the inception aspect, but I'm not worried about that (more later).
I think the key requirements are:
* Able to create images using any tooling.
* Visibility into build history.
* Nodepool-launcher should automatically use the "latest" build.
In the hard-to-debug scenarios I've listed above, one of the helpful tools the current setup gives us is exposing the built-but-broken image to people willing to debug it. They can boot it locally and check console logs or attach to a tty directly and debug from there. I would say that continuing to expose images should probably be considered a key requirement if we are interested in making debugging easier.
Part of this solution is obvious, but I think there are open questions for other parts:
* Should the Zuul job be responsible for uploading the image to the cloud?
* Should Nodepool be responsible for deleting images?
About that downside: clearly there's an issue when bootstrapping a new Zuul system -- how can you run jobs without images? Most new Zuul installations start by using cloud images from the provider. Any Zuul user who wants custom images likely already has some kind of job running on some kind of image anyway. I think it's a natural progression to create a Zuul job to build a custom image and then begin using that custom image in "real" Zuul jobs. Long story short: bootstrap the system with cloud-images. As an OpenDev admin, I think I would be comfortable with a Zuul job that built our DIB images using cloud provider images -- I don't think the differences between those images are so great as to make a DIB job difficult to write. However, if we did use this system, I'm not sure we would ever have to. We have enough clouds with our own images in them already, we could probably use our images to make new ones. In this case: a chicken with no egg. :)
Let's work through how this might look. First the easy part, then I'm going to propose some alternate solutions.
1) Build the image:
A Zuul job ("build-nodepool-image-ubuntu-focal") is triggered by a timer. It has a mutex to avoid duplicate image builds. It runs DIB in this case but could just as easily run packer or even snapshot the system it's running on (we can add roles in zuul-jobs for common image building systems).
2) Get the image into the cloud(s):
a) Our first option is to do this work in Zuul. Have the job (or a dependent job) upload the image to the cloud(s) (if this was a snapshot build, that's a no-op -- it's already there). Advantage: the upload log is visible along with the build log. Disadvantage: if the upload fails, we won't retry like Nodepool does.
b) An alternative would be to have Nodepool do this. Nodepool could even query the Zuul API to find the latest build of build-nodepool-image-ubuntu-focal, check the artifacts list, and download the image file from there. After that, it looks a lot like the current nodepool-builder process right after DIB runs (it has a file on disk and it starts to upload it). Advantage: we use all of the Nodepool image lifecycle management. Disadvantage: upload logs are hidden from users (like today). To integrate snapshot builds, we would have to download the snapshot and re-upload it.
This option's reliance on artifacts ensures that we'd expose the image to people doing debugging, which is a nice feature.
3) Get nodepool-launcher to use the new image:
a) If we use 2.a, then we have an image in the cloud without a record in Zookeeper.
i) We could have nodepool-builder (or something like it) scan the cloud for images matching a certain pattern and create ZK records for them.
ii) Or we could just have nodepool-launcher look for that pattern and use the latest image.
b) If we use 2b, then launcher will work as it does today with nodepool-builder; no changes are needed.
4) Delete the image when it's no longer needed
We should use the n-2 rule we have today so there's always at least 2 images in the cloud for easy rollback.
a) If we used 2.a + 3.a.ii then we don't have ZK records. It would probably make sense to handle image deletes in Zuul jobs with this setup. Presumably the jobs would query the cloud API, list images, and delete them according to the age rules. The disadvantage here is we'd be re-implementing a good chunk of nodepool-builder in Ansible modules. It's doable, but may be a bit clunky.
b) If we use 2.a + 3.a.i or 2.b + 3.b then we have records in ZK, so nodepool-builder (or something like it) can delete the images more or less how it does today.
I think those are most of the choices.
Now that I've typed all of that out, I think I really like the system with "b" for all the choices. I'm not too worried about the upload logs being hidden. And the snapshot system is workable, if not ideal.
If we really wanted to support snapshots better, then 2.a + 3.a.i + 4.b would probably be best, but at the cost of moving the upload code into zuul-jobs roles.
Or we could do the same thing we did last time and implement the "b" solution ("nodepool-zuul-builder") and if someone really wants to do snapshots, then we implement 2.a[snapshot] + 3.a.i + 4.b as ("nodepool-snapshot-builder"). That would probably be fairly efficient as far as code goes.
Finally, a word about DIB and the current nodepool-builder: none of this requires that we change anything about the current nodepool-builder process. If we like it, we can keep it exactly as it is. Indeed, in OpenDev's case, we rely very heavily on the cache on the build host, so it's not at all clear that doing builds on throwaway nodes is ideal. Of course, we could do builds on static Zuul nodes and get the benefit of the cache. I think the way to approach this would be to implement the new system, then try using it with DIB, and if we love it and find no significant drawbacks, consider deprecating the current nodepool-builder. If we don't love it, we'll keep it.
One of the major reasons we rely on the cache is that we want to ensure that our images have up to date git repos for all of the active git repos we host. Using a local cache reduces the overhead when updating those git repos and keeps the image builds running quickly. I suspect that we can rely on the zuul executor git caches in a similar way to help bootstrap builds on single use nodes. Though perhaps with a bit more network traffic if we aren't starting the image builds on a node that already caches the git repos.
I'd appreciate feedback on this. I know this is a bit free-form; I'll take the feedback I get and synthesize it into a proper spec.
I worry that we could find ourselves in a situation where an image update is required to run Zuul jobs (Python package changes affecting Ansible, ssh server settings, something something theoretical), and Zuul operators would be forced to run their image builder out of band, manually upload images, update nodepool configs to use that image, then switch back over to automated builds once everything settles again. The n-2 images rule should guard against this, but we have had situations where nodepool decided it should delete all images (due to misconfiguration). An unlikely situation that ops could work through, but with the current builder you'd just keep it running and wait an hour.
More generally, the more we create cycles in the dependencies of the system, the more I worry that we could find ourselves in less fun bootstrapping or fix-required situations.
-Jim
PS: there's a fairly well-known hack for making non-DIB image builds by replacing DIB with a script that accepts the same arguments. The "b" plan could actually be implemented today with no changes to Nodepool using that same hack. A final implementation probably wouldn't look much different structurally.
"Clark Boylan" <cboylan@sapwetik.org> writes:
On Sat, Feb 6, 2021, at 3:48 PM, James E. Blair wrote:
Hi,
I'd like to propose some new features related to Nodepool and image building. The short version is that right now image management is tied fairly closely to diskimage-builder within the nodepool-builder process. This is problematic for folks who want Nodepool image management but can't use DIB (ie, Windows images). It's even problematic for those who do want to use DIB, in that it's difficult to debug image builds.
In my experience, image builds are usually hard to debug because the images fail due to a network setup problem or a systemd unit ordering issue. I don't think that moving image builds to Zuul or using something other than DIB solves the fundamental issues with debugging image builds.
Oh, sure, I just meant that there are opportunities for more involvement / visibility. It's easier to push up a patch to a Zuul job and watch it run than it is to go through the nodepool-builder cycle. And to be clear, I'm not at all suggesting using anything other than DIB for building Linux images. Hypothetically, if OpenDev were to adopt this, I imagine we would run DIB in our Zuul jobs. I am suggesting using something other than DIB to build Windows images, because DIB doesn't support Windows.
It's worth noting the original Zuulv3 spec had room for more than one nodepool-builder implementation (in an offhand comment, it mentioned that a snapshot-based nodepool-builder process could be created). However, that has not yet come to pass.
Over the years, we've speculated about the idea of using Zuul to build images, and I think it's time to see if we can put that into practice. It makes a lot of sense in that we have a ready-made execution environment with Zuul -- build logs and history are well defined and accessible to anyone. It's flexible and can accommodate any image building system (DIB or otherwise). The main downside is the inception aspect, but I'm not worried about that (more later).
I think the key requirements are:
* Able to create images using any tooling.
* Visibility into build history.
* Nodepool-launcher should automatically use the "latest" build.
In the hard-to-debug scenarios I've listed above, one of the helpful tools the current setup gives us is exposing the built-but-broken image to people willing to debug it. They can boot it locally and check console logs or attach to a tty directly and debug from there. I would say that continuing to expose images should probably be considered a key requirement if we are interested in making debugging easier.
Agree. If a Zuul job produces an artifact, done! [edit -- as you point out below :)]
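For example, the build job only has to report the image location with zuul_return for it to show up in the build's artifact list; the URL and metadata below are illustrative.
  # Illustrative: expose the built image as a Zuul artifact.
  - name: Return image artifact to Zuul
    zuul_return:
      data:
        zuul:
          artifacts:
            - name: ubuntu-focal qcow2 image
              url: "https://logs.example.org/images/ubuntu-focal-0000001.qcow2"
              metadata:
                type: qcow2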
Part of this solution is obvious, but I think there are open questions for other parts:
* Should the Zuul job be responsible for uploading the image to the cloud?
* Should Nodepool be responsible for deleting images?
About that downside: clearly there's an issue when bootstrapping a new Zuul system -- how can you run jobs without images? Most new Zuul installations start by using cloud images from the provider. Any Zuul user who wants custom images likely already has some kind of job running on some kind of image anyway. I think it's a natural progression to create a Zuul job to build a custom image and then begin using that custom image in "real" Zuul jobs. Long story short: bootstrap the system with cloud-images. As an OpenDev admin, I think I would be comfortable with a Zuul job that built our DIB images using cloud provider images -- I don't think the differences between those images are so great as to make a DIB job difficult to write. However, if we did use this system, I'm not sure we would ever have to. We have enough clouds with our own images in them already, we could probably use our images to make new ones. In this case: a chicken with no egg. :)
Let's work through how this might look. First the easy part, then I'm going to propose some alternate solutions.
1) Build the image:
A Zuul job ("build-nodepool-image-ubuntu-focal") is triggered by a timer. It has a mutex to avoid duplicate image builds. It runs DIB in this case but could just as easily run packer or even snapshot the system it's running on (we can add roles in zuul-jobs for common image building systems).
2) Get the image into the cloud(s):
a) Our first option is to do this work in Zuul. Have the job (or a dependent job) upload the image to the cloud(s) (if this was a snapshot build, that's a no-op -- it's already there). Advantage: the upload log is visible along with the build log. Disadvantage: if the upload fails, we won't retry like Nodepool does.
b) An alternative would be to have Nodepool do this. Nodepool could even query the Zuul API to find the latest build of build-nodepool-image-ubuntu-focal, check the artifacts list, and download the image file from there. After that, it looks a lot like the current nodepool-builder process right after DIB runs (it has a file on disk and it starts to upload it). Advantage: we use all of the Nodepool image lifecycle management. Disadvantage: upload logs are hidden from users (like today). To integrate snapshot builds, we would have to download the snapshot and re-upload it.
This option's reliance on artifacts ensures that we'd expose the image to people doing debugging, which is a nice feature.
3) Get nodepool-launcher to use the new image:
a) If we use 2.a, then we have an image in the cloud without a record in Zookeeper.
i) We could have nodepool-builder (or something like it) scan the cloud for images matching a certain pattern and create ZK records for them.
ii) Or we could just have nodepool-launcher look for that pattern and use the latest image.
b) If we use 2b, then launcher will work as it does today with nodepool-builder; no changes are needed.
4) Delete the image when it's no longer needed
We should use the n-2 rule we have today so there's always at least 2 images in the cloud for easy rollback.
a) If we used 2.a + 3.a.ii then we don't have ZK records. It would probably make sense to handle image deletes in Zuul jobs with this setup. Presumably the jobs would query the cloud API, list images, and delete them according to the age rules. The disadvantage here is we'd be re-implementing a good chunk of nodepool-builder in Ansible modules. It's doable, but may be a bit clunky.
b) If we use 2.a + 3.a.i or 2.b + 3.b then we have records in ZK, so nodepool-builder (or something like it) can delete the images more or less how it does today.
I think those are most of the choices.
Now that I've typed all of that out, I think I really like the system with "b" for all the choices. I'm not too worried about the upload logs being hidden. And the snapshot system is workable, if not ideal.
If we really wanted to support snapshots better, then 2.a + 3.a.i + 4.b would probably be best, but at the cost of moving the upload code into zuul-jobs roles.
Or we could do the same thing we did last time and implement the "b" solution ("nodepool-zuul-builder") and if someone really wants to do snapshots, then we implement 2.a[snapshot] + 3.a.i + 4.b as ("nodepool-snapshot-builder"). That would probably be fairly efficient as far as code goes.
Finally, a word about DIB and the current nodepool-builder: none of this requires that we change anything about the current nodepool-builder process. If we like it, we can keep it exactly as it is. Indeed, in OpenDev's case, we rely very heavily on the cache on the build host, so it's not at all clear that doing builds on throwaway nodes is ideal. Of course, we could do builds on static Zuul nodes and get the benefit of the cache. I think the way to approach this would be to implement the new system, then try using it with DIB, and if we love it and find no significant drawbacks, consider deprecating the current nodepool-builder. If we don't love it, we'll keep it.
One of the major reasons we rely on the cache is that we want to ensure that our images have up to date git repos for all of the active git repos we host. Using a local cache reduces the overhead when updating those git repos and keeps the image builds running quickly. I suspect that we can rely on the zuul executor git caches in a similar way to help bootstrap builds on single use nodes. Though perhaps with a bit more network traffic if we aren't starting the image builds on a node that already caches the git repos.
Yeah, the build could get really interesting if we do required-projects for the whole system. :) We could perhaps expose the executor cache as an RO mount in bwrap and bypass Zuul. That would probably be okay for OpenDev, but not ideal for others. Another thing we could do is be more selective in our cache: cache the largest 20% of projects only.
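One way to be selective is to have the build job list only the repos it wants prepared from the executor's cache via required-projects; a sketch (the project names here are only examples):
  # Sketch: prepare a subset of repos on the build node from Zuul's cache.
  - job:
      name: build-nodepool-image-ubuntu-focal
      required-projects:
        - opendev/system-config
        - openstack/devstack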
I'd appreciate feedback on this. I know this is a bit free-form; I'll take the feedback I get and synthesize it into a proper spec.
I worry that we could find ourselves in a situation where an image update is required to run Zuul jobs (Python package changes affecting Ansible, ssh server settings, something something theoretical), and Zuul operators would be forced to run their image builder out of band, manually upload images, update nodepool configs to use that image, then switch back over to automated builds once everything settles again. The n-2 images rule should guard against this, but we have had situations where nodepool decided it should delete all images (due to misconfiguration). An unlikely situation that ops could work through, but with the current builder you'd just keep it running and wait an hour.
More generally, the more we create cycles in the dependencies of the system, the more I worry that we could find ourselves in less fun bootstrapping or fix-required situations.
That's a good point. One way to avoid this for any particular operator would be to never use Zuul-built images to build images for Zuul. To continue with OpenDev as an example, we could choose to only run image build jobs on provider-supplied images. It's a bit more work (we'd have to handle the differences between those images) but the problem is also very limited in scope -- there's only one job that needs to work across those images: the "build-*-image" job. Moreover, we could even limit the number of providers we use for that (ie, if it works in 2 or 3 of our providers, that's probably enough to give us resiliency).
-Jim
I think I like the "b" solution the most as it looks right now, since it would mean a lot of functionality would be re-used. But I have a feeling that the "a" solution is better in the long term and would be easier to extend, since it exposes everything in Zuul. However, that could also be harder to get right, since it's also constrained by any limitations in Zuul. I also have a concern about leaking nodes when jobs are aborted, for example if packer is used to start a new instance. But maybe that can be solved by just tagging the instances in a certain way and scanning for leaked instances periodically (or in cleanup, but that could still leak nodes if it's interrupted).
3) Get nodepool-launcher to use the new image:
a) If we use 2.a, then we have an image in the cloud without a record in Zookeeper.
i) We could have nodepool-builder (or something like it) scan the cloud for images matching a certain pattern and create ZK records for them.
ii) Or we could just have nodepool-launcher look for that pattern and use the latest image.
The ec2 driver already supports 3.a.ii today with image-filters:
https://zuul-ci.org/docs/nodepool/aws.html#attr-providers.[aws].cloud-images.image-filters
Regards,
Albin
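For reference, that existing AWS support looks roughly like this (the provider and image names are examples; as I read the docs, the newest image matching the filters is used):
  # Sketch of the documented AWS cloud-images image-filters support.
  providers:
    - name: example-aws
      driver: aws
      region-name: us-east-1
      cloud-images:
        - name: ubuntu-focal-built
          username: ubuntu
          image-filters:
            - name: name
              values:
                - "zuul-built-ubuntu-focal-*"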
On 2/6/21 3:48 PM, James E. Blair wrote:
I'd appreciate feedback on this. I know this is a bit free-form; I'll take the feedback I get and synthesize it into a proper spec.
Thanks to everyone for the feedback so far. I've digested all of this into a short spec here:
https://review.opendev.org/c/zuul/zuul/+/775042
-Jim