Nodepool image building

Tristan Cacqueray tdecacqu at redhat.com
Sun Feb 7 19:47:15 UTC 2021


Thank you, Jim, for starting this important discussion.

On Sat, Feb 06, 2021 at 15:48 James E. Blair wrote:
> Hi,
>
> I'd like to propose some new features related to Nodepool and image
> building.  The short version is that right now image management is tied
> fairly closely to diskimage-builder within the nodepool-builder process.
> This is problematic for folks who want Nodepool image management but
> can't use DIB (i.e., Windows images).  It's even problematic for those who
> do want to use DIB, in that it's difficult to debug image builds.
>
> It's worth noting the original Zuulv3 spec had room for more than one
> nodepool-builder implementation (in an offhand comment, it mentioned
> that a snapshot-based nodepool-builder process could be created).
> However, that has not yet come to pass.
>
> Over the years, we've speculated about the idea of using Zuul to build
> images, and I think it's time to see if we can put that into practice.
> It makes a lot of sense in that we have a ready-made execution
> environment with Zuul -- build logs and history are well defined and
> accessible to anyone.  It's flexible and can accommodate any image
> building system (DIB or otherwise).  The main downside is the inception
> aspect, but I'm not worried about that (more later).
>
> I think the key requirements are:
>
> * Able to create images using any tooling.
> * Visibility into build history.
> * Nodepool-launcher should automatically use the "latest" build.

Could we also add the ability to sanity check the image before enabling
it?

> Part of this solution is obvious, but I think there are open questions
> for other parts:
>
> * Should the Zuul job be responsible for uploading the image to the
>   cloud?
> * Should Nodepool be responsible for deleting images?
>
> About that downside: clearly there's an issue when bootstrapping a new
> Zuul system -- how can you run jobs without images?  Most new Zuul
> installations start by using cloud images from the provider.  Any Zuul
> user who wants custom images likely already has some kind of job running
> on some kind of image anyway.  I think it's a natural progression to
> create a Zuul job to build a custom image and then begin using that
> custom image in "real" Zuul jobs.  Long story short: bootstrap the
> system with cloud-images.  As an OpenDev admin, I think I would be
> comfortable with a Zuul job that built our DIB images using cloud
> provider images -- I don't think the differences between those images
> are so great as to make a DIB job difficult to write.  However, if we
> did use this system, I'm not sure we would ever have to.  We have enough
> clouds with our own images in them already, we could probably use our
> images to make new ones.  In this case: a chicken with no egg.  :)
>
> Let's work through how this might look.  First the easy part, then I'm
> going to propose some alternate solutions.
>
> 1) Build the image:
>
> A Zuul job ("build-nodepool-image-ubuntu-focal") is triggered by a
> timer.  It has a mutex to avoid duplicate image builds.  It runs DIB in
> this case but could just as easily run packer or even snapshot the
> system it's running on (we can add roles in zuul-jobs for common image
> building systems).
>
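
For illustration, a periodic pipeline plus a semaphore could look roughly
like this (all names below are made up):

  - pipeline:
      name: periodic-image-build
      manager: independent
      trigger:
        timer:
          - time: '0 2 * * *'

  - semaphore:
      name: image-build
      max: 1

  - job:
      name: build-nodepool-image-ubuntu-focal
      semaphore: image-build
      run: playbooks/build-image.yaml
      nodeset:
        nodes:
          - name: builder
            label: cloud-ubuntu-focal

  - project:
      periodic-image-build:
        jobs:
          - build-nodepool-image-ubuntu-focal
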
> 2) Get the image into the cloud(s):
>
>    a) Our first option is to do this work in Zuul.  Have the job (or a
>       dependent job) upload the image to the cloud(s) (if this was a
>       snapshot build, that's a no-op -- it's already there).  Advantage:
>       the upload log is visible along with the build log.  Disadvantage:
>       if the upload fails, we won't retry like Nodepool does.
>
>    b) An alternative would be to have Nodepool do this.  Nodepool could
>       even query the Zuul API to find the latest build of
>       build-nodepool-image-ubuntu-focal, check the artifacts list, and
>       download the image file from there.  After that, it looks a lot
>       like the current nodepool-builder process right after DIB runs (it
>       has a file on disk and it starts to upload it).
>       Advantage: we use all of the Nodepool image lifecycle management.
>       Disadvantage: upload logs are hidden from users (like today).
>                     to integrate snapshot builds, we would have to
>                     download the snapshot and re-upload it.
>
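
To illustrate option 2.a, the upload step in the job could be a single
task using the openstack.cloud Ansible collection, roughly like this
(the cloud name, image name, and file path are made up):

  - name: Upload the freshly built image to the cloud
    openstack.cloud.image:
      cloud: my-cloud
      name: "ubuntu-focal-{{ zuul.build }}"
      filename: ubuntu-focal.qcow2
      disk_format: qcow2
      container_format: bare
      state: present
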
> 3) Get nodepool-launcher to use the new image:
>
>    a) If we use 2.a, then we have an image in the cloud without a record
>       in Zookeeper.
>
>       i) We could have nodepool-builder (or something like it) scan the
>          cloud for images matching a certain pattern and create ZK
>          records for them.
>
>       ii) Or we could just have nodepool-launcher look for that pattern
>           and use the latest image.

Isn't (ii) already the case when using cloud-image?
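
For context, this is roughly how a label consumes a provider-side image
through cloud-image today (provider and label names are made up); the
piece that does not exist yet is resolving the newest image matching a
pattern instead of a fixed image-name:

  labels:
    - name: ubuntu-focal
      min-ready: 1

  providers:
    - name: my-cloud
      driver: openstack
      cloud: my-cloud
      cloud-images:
        - name: ubuntu-focal
          image-name: ubuntu-focal-nodepool
      pools:
        - name: main
          max-servers: 10
          labels:
            - name: ubuntu-focal
              cloud-image: ubuntu-focal
              flavor-name: m1.small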

>    b) If we use 2b, then launcher will work as it does today with
>       nodepool-builder; no changes are needed.
>
> 4) Delete the image when it's no longer needed
>
> We should use the n-2 rule we have today so there's always at least 2
> images in the cloud for easy rollback.
>
>    a) If we used 2.a + 3.a.ii then we don't have ZK records.  It would
>       probably make sense to handle image deletes in Zuul jobs with this
>       setup.  Presumably the jobs would query the cloud API, list
>       images, and delete them according to the age rules.  The
>       disadvantage here is we'd be re-implementing a good chunk of
>       nodepool-builder in Ansible modules.  It's doable, but may be a
>       bit clunky.
>
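
As a sketch of what the 4.a cleanup job could look like with the
openstack CLI (assuming the image names embed a build serial so that a
lexical sort matches age; the name pattern and cloud auth are made up):

  - hosts: localhost
    tasks:
      - name: List all images in the cloud
        command: openstack image list -f json
        environment:
          OS_CLOUD: my-cloud
        register: image_list

      - name: Delete all but the two newest ubuntu-focal builds
        command: "openstack image delete {{ item.ID }}"
        loop: "{{ ((image_list.stdout | from_json)
                   | selectattr('Name', 'match', '^ubuntu-focal-')
                   | sort(attribute='Name')
                   | list)[:-2] }}"
        loop_control:
          label: "{{ item.Name }}"
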
>    b) If we use 2.a + 3.a.i or 2.b + 3.b then we have records in ZK, so
>       nodepool-builder (or something like it) can delete the images more
>       or less how it does today.
>
>
> I think those are most of the choices.
>
> Now that I've typed all of that out, I think I really like the system
> with "b" for all the choices.  I'm not too worried about the upload logs
> being hidden.  And the snapshot system is workable, if not ideal.
>
> If we really wanted to support snapshots better, then 2.a + 3.a.i + 4.b
> would probably be best, but at the cost of moving the upload code into
> zuul-jobs roles.
>
> Or we could do the same thing we did last time and implement the "b"
> solution ("nodepool-zuul-builder") and if someone really wants to do
> snapshots, then we implement 2.a[snapshot] + 3.a.i + 4.b as
> ("nodepool-snapshot-builder").  That would probably be fairly efficient
> as far as code goes.
>
> Finally, a word about DIB and the current nodepool-builder: none of this
> requires that we change anything about the current nodepool-builder
> process.  If we like it, we can keep it exactly as it is.  Indeed, in
> OpenDev's case, we rely very heavily on the cache on the build host, so
> it's not at all clear that doing builds on throwaway nodes is ideal.  Of
> course, we could do builds on static Zuul nodes and get the benefit of
> the cache.  I think the way to approach this would be to implement the
> new system, then try using it with DIB, and if we love it and find no
> significant drawbacks, consider deprecating the current
> nodepool-builder.  If we don't love it, we'll keep it.
>
> I'd appreciate feedback on this.  I know this is a bit free-form; I'll
> take the feedback I get and synthesize it into a proper spec.
>
> -Jim
>
> PS: there's a fairly well-known hack for making non-DIB image builds by
> replacing DIB with a script that accepts the same arguments.  The "b"
> plan could actually be implemented today with no changes to Nodepool
> using that same hack.  A final implementation probably wouldn't look
> much different structurally.

I prefer the system with "a" for all the choices, and 3.a.ii, because it
doesn't require an extra service or information to be stored in ZK.
Also, everything is defined in Zuul without hidden pieces.
This is actually the system we implemented to manage container updates:

We have been working on a Zuul periodic pipeline to update
container images. The key features of this pipeline are:

- Gate image definition changes, by building and sanity checking the
  image before merging a change.

- Save resources by updating the image instead of doing a clean rebuild.
  When the image definition did not change, and when there is enough
  room for more layers, a `dnf update` task is performed to produce a
  single extra layer. When nothing gets updated, the promotion is skipped.

- Sanity check the image before promotion, by running some commands such
  as `tox --version` or `twine --version`.

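The sanity check itself boils down to running the expected tools inside
the candidate image before promotion; roughly along these lines (the
image reference and the use of podman are illustrative):

  - name: Check that the expected tools work in the candidate image
    command: "podman run --rm {{ candidate_image }} {{ item }}"
    loop:
      - tox --version
      - twine --version
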
You can find the zuul-worker-* images published at
  https://quay.io/organization/software-factory

The build history is at:
  https://softwarefactory-project.io/zuul/t/containers-update/builds

The image update jobs are in:
  https://softwarefactory-project.io/cgit/containers/tree/zuul.d

Note that there is still a bit of work to be done before we release and
document that use case.

Regards,
-Tristan