Nodepool image building

James E. Blair jim at acmegating.com
Mon Feb 8 17:56:09 UTC 2021


"Clark Boylan" <cboylan at sapwetik.org> writes:

> On Sat, Feb 6, 2021, at 3:48 PM, James E. Blair wrote:
>> Hi,
>> 
>> I'd like to propose some new features related to Nodepool and image
>> building.  The short version is that right now image management is tied
>> fairly closely to diskimage-builder within the nodepool-builder process.
>> This is problematic for folks who want Nodepool image management but
>> can't use DIB (i.e., Windows images).  It's even problematic for those who
>> do want to use DIB, in that it's difficult to debug image builds.
>
> In my experience the image debugging problem is usually because the
> images are failing due to a network setup problem or a systemd unit
> ordering issue. I don't think that moving image builds to Zuul or
> using something other than DIB solves the fundamental issues with
> debugging image builds.

Oh, sure, I just meant that there are opportunities for more involvement /
visibility.  It's easier to push up a patch to a Zuul job and watch it
run than it is to go through the nodepool-builder cycle.

And to be clear, I'm not at all suggesting using anything other than DIB
for building Linux images.  Hypothetically, if OpenDev were to adopt
this, I imagine we would run DIB in our Zuul jobs.

I am suggesting using something other than DIB to build Windows images
because DIB doesn't support Windows.

>> It's worth noting the original Zuulv3 spec had room for more than one
>> nodepool-builder implementation (in an offhand comment, it mentioned
>> that a snapshot-based nodepool-builder process could be created).
>> However, that has not yet come to pass.
>> 
>> Over the years, we've speculated about the idea of using Zuul to build
>> images, and I think it's time to see if we can put that into practice.
>> It makes a lot of sense in that we have a ready-made execution
>> environment with Zuul -- build logs and history are well defined and
>> accessible to anyone.  It's flexible and can accommodate any image
>> building system (DIB or otherwise).  The main downside is the inception
>> aspect, but I'm not worried about that (more later).
>> 
>> I think the key requirements are:
>> 
>> * Able to create images using any tooling.
>> * Visibility into build history.
>> * Nodepool-launcher should automatically use the "latest" build.
>
> In the hard-to-debug scenarios I've listed above, one of the helpful
> tools the current setup gives us is exposing the built but broken
> image to people willing to debug it. They can boot it locally and
> check console logs or attach to a tty directly and debug from there. I
> would say that continuing to expose images should probably be
> considered a key requirement if we are interested in making debugging
> easier.

Agree.  If a Zuul job produces an artifact, done!
[edit -- as you point out below :)]
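
In fact, if the job returns the image as an artifact, anyone who wants to
debug a broken build can pull the newest one straight out of the Zuul API.
Very roughly (the API URL, tenant, job, and artifact names below are all
made up, and real code would want error handling):

    import requests

    ZUUL_API = "https://zuul.example.com/api/tenant/example-tenant"
    JOB = "build-nodepool-image-ubuntu-focal"

    # Newest successful run of the image job; the builds endpoint returns
    # the most recent builds first.
    build = requests.get(
        f"{ZUUL_API}/builds",
        params={"job_name": JOB, "result": "SUCCESS", "limit": 1},
    ).json()[0]

    # The build record lists the artifacts the job returned; assume the job
    # tagged its image artifact with a recognizable name.
    detail = requests.get(f"{ZUUL_API}/build/{build['uuid']}").json()
    image = next(a for a in detail["artifacts"] if a["name"] == "qcow2 image")
    print(image["url"])  # fetch this to boot and debug the image locally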

>> Part of this solution is obvious, but I think there are open questions
>> for other parts:
>> 
>> * Should the Zuul job be responsible for uploading the image to the
>>   cloud?
>> * Should Nodepool be responsible for deleting images?
>> 
>> About that downside: clearly there's an issue when bootstrapping a new
>> Zuul system -- how can you run jobs without images?  Most new Zuul
>> installations start by using cloud images from the provider.  Any Zuul
>> user who wants custom images likely already has some kind of job running
>> on some kind of image anyway.  I think it's a natural progression to
>> create a Zuul job to build a custom image and then begin using that
>> custom image in "real" Zuul jobs.  Long story short: bootstrap the
>> system with cloud-images.  As an OpenDev admin, I think I would be
>> comfortable with a Zuul job that built our DIB images using cloud
>> provider images -- I don't think the differences between those images
>> are so great as to make a DIB job difficult to write.  However, if we
>> did use this system, I'm not sure we would ever have to.  We have enough
>> clouds with our own images in them already; we could probably use our
>> images to make new ones.  In this case: a chicken with no egg.  :)
>> 
>> Let's work through how this might look.  First the easy part, then I'm
>> going to propose some alternate solutions.
>> 
>> 1) Build the image:
>> 
>> A Zuul job ("build-nodepool-image-ubuntu-focal") is triggered by a
>> timer.  It has a mutex to avoid duplicate image builds.  It runs DIB in
>> this case but could just as easily run packer or even snapshot the
>> system it's running on (we can add roles in zuul-jobs for common image
>> building systems).
>> 
>> 2) Get the image into the cloud(s):
>> 
>>    a) Our first option is to do this work in Zuul.  Have the job (or a
>>       dependent job) upload the image to the cloud(s) (if this was a
>>       snapshot build, that's a no-op -- it's already there).  Advantage:
>>       the upload log is visible along with the build log.  Disadvantage:
>>       if the upload fails, we won't retry like Nodepool does.
>> 
>>    b) An alternative would be to have Nodepool do this.  Nodepool could
>>       even query the Zuul API to find the latest build of
>>       build-nodepool-image-ubuntu-focal, check the artifacts list, and
>>       download the image file from there.  After that, it looks a lot
>>       like the current nodepool-builder process right after DIB runs (it
>>       has a file on disk and it starts to upload it).
>>       Advantage: we use all of the Nodepool image lifecycle management.
>>       Disadvantage: upload logs are hidden from users (like today).
>>                     To integrate snapshot builds, we would have to
>>                     download the snapshot and re-upload it.
>
> This option's reliance on artifacts ensures that we'd expose the image
> to people doing debugging which is a nice feature.
>
>> 
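Whichever of 2.a or 2.b we pick, the upload itself is only a few lines of
code; the real difference is who retries on failure and where the log ends
up.  A minimal openstacksdk sketch (the cloud name, image name, and file
name are placeholders):

    import openstack

    conn = openstack.connect(cloud="examplecloud")
    image = conn.create_image(
        "ubuntu-focal-0000000001",      # name encodes the build serial
        filename="ubuntu-focal.qcow2",
        disk_format="qcow2",
        container_format="bare",
        wait=True,                      # block until the cloud marks it active
        timeout=3600,
    )
    print(image.id)
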
>> 3) Get nodepool-launcher to use the new image:
>> 
>>    a) If we use 2.a, then we have an image in the cloud without a record
>>       in Zookeeper.
>> 
>>       i) We could have nodepool-builder (or something like it) scan the
>>          cloud for images matching a certain pattern and create ZK
>>          records for them.
>> 
>>       ii) Or we could just have nodepool-launcher look for that pattern
>>           and use the latest image.
>> 
>>    b) If we use 2b, then launcher will work as it does today with
>>       nodepool-builder; no changes are needed.
>> 
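For 3.a.ii, the launcher-side lookup is not much more than "newest image
whose name matches the pattern we agreed on".  A sketch (the pattern and
cloud name are made up):

    import re
    import openstack

    conn = openstack.connect(cloud="examplecloud")
    pattern = re.compile(r"^ubuntu-focal-\d+$")

    candidates = [i for i in conn.image.images() if pattern.match(i.name)]
    # ISO 8601 timestamps sort lexically, so max() picks the newest build.
    latest = max(candidates, key=lambda i: i.created_at)
    print(latest.id)  # what the launcher would boot from
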
>> 4) Delete the image when it's no longer needed
>> 
>> We should use the n-2 rule we have today so there's always at least 2
>> images in the cloud for easy rollback.
>> 
>>    a) If we used 2.a + 3.a.ii then we don't have ZK records.  It would
>>       probably make sense to handle image deletes in Zuul jobs with this
>>       setup.  Presumably the jobs would query the cloud API, list
>>       images, and delete them according to the age rules.  The
>>       disadvantage here is we'd be re-implementing a good chunk of
>>       nodepool-builder in Ansible modules.  It's doable, but may be a
>>       bit clunky.
>> 
>>    b) If we use 2.a + 3.a.i or 2.b + 3.b then we have records in ZK, so
>>       nodepool-builder (or something like it) can delete the images more
>>       or less how it does today.
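
For what it's worth, the age-rule delete in 4.a is the piece we would be
rewriting as Ansible tasks.  The logic itself is small (again, the pattern
and cloud name are made up); what Nodepool really adds is the bookkeeping
and retries around it:

    import re
    import openstack

    conn = openstack.connect(cloud="examplecloud")
    pattern = re.compile(r"^ubuntu-focal-\d+$")

    images = sorted(
        (i for i in conn.image.images() if pattern.match(i.name)),
        key=lambda i: i.created_at,
        reverse=True,
    )
    for old in images[2:]:          # keep the two newest for easy rollback
        conn.delete_image(old.id)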
>> 
>> 
>> I think those are most of the choices.
>> 
>> Now that I've typed all of that out, I think I really like the system
>> with "b" for all the choices.  I'm not too worried about the upload logs
>> being hidden.  And the snapshot system is workable, if not ideal.
>> 
>> If we really wanted to support snapshots better, then 2.a + 3.a.i + 4.b
>> would probably be best, but at the cost of moving the upload code into
>> zuul-jobs roles.
>> 
>> Or we could do the same thing we did last time and implement the "b"
>> solution ("nodepool-zuul-builder") and if someone really wants to do
>> snapshots, then we implement 2.a[snapshot] + 3.a.i + 4.b as
>> ("nodepool-snapshot-builder").  That would probably be fairly efficient
>> as far as code goes.
>> 
>> Finally, a word about DIB and the current nodepool-builder: none of this
>> requires that we change anything about the current nodepool-builder
>> process.  If we like it, we can keep it exactly as it is.  Indeed, in
>> OpenDev's case, we rely very heavily on the cache on the build host, so
>> it's not at all clear that doing builds on throwaway nodes is ideal.  Of
>> course, we could do builds on static Zuul nodes and get the benefit of
>> the cache.  I think the way to approach this would be to implement the
>> new system, then try using it with DIB, and if we love it and find no
>> significant drawbacks, consider deprecating the current
>> nodepool-builder.  If we don't love it, we'll keep it.
>
> One of the major reasons we rely on the cache is that we want to
> ensure that our images have up to date git repos for all of the active
> git repos we host. Using a local cache reduces the overhead when
> updating those git repos and keeps the image builds running quickly. I
> suspect that we can rely on the zuul executor git caches in a similar
> way to help bootstrap builds on single use nodes. Though perhaps with
> a bit more network traffic if we aren't starting the image builds on a
> node that already caches the git repos.

Yeah, the build could get really interesting if we do required-projects for
the whole system.  :)

We could perhaps expose the executor cache as an RO mount in bwrap and
bypass Zuul.  That would probably be okay for OpenDev, but not ideal for
others.

Another thing we could do is be more selective in our cache.  Cache the
largest 20% of projects only.
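
Picking them could be as simple as measuring the repos on disk and keeping
the top fifth (the path below is made up):

    from pathlib import Path

    def repo_size(path: Path) -> int:
        return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

    repos = [p for p in Path("/var/lib/zuul/git").iterdir() if p.is_dir()]
    repos.sort(key=repo_size, reverse=True)
    cache = repos[: max(1, len(repos) // 5)]    # the largest 20%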

>> I'd appreciate feedback on this.  I know this is a bit free-form; I'll
>> take the feedback I get and synthesize it into a proper spec.
>
> I worry that we could find ourselves in a situation where an image
> update is required to run zuul jobs (python package changes affecting
> ansible, ssh server settings, something something theoretical), and
> Zuul operators would be forced to run their image builder out of band,
> manually upload images, update nodepool configs to use that image,
> then switch back over to automated builds once everything settles
> again. The n-2 images rule should guard against this, but we have had
> situations where nodepool decided it should delete all images (due to
> misconfiguration). An unlikely situation that ops could work through,
> but with the current builder you'd just keep it running and wait an
> hour.
>
> More generally, the more cycles we create in the system's dependencies,
> the more I worry that we could find ourselves in less fun bootstrapping
> or fix-required situations.

That's a good point.  One way to avoid this for any particular operator
would be to never use Zuul-built images to build images for Zuul.  To
continue with OpenDev as an example, we could choose to only run image
build jobs on provider-supplied images.  It's a bit more work (we'd have
to handle the differences between those images) but the problem is also
very limited in scope -- there's only one job that needs to work across
those images: the "build-*-image" job.  Moreover, we could even limit
the number of providers we use for that (i.e., if it works in 2 or 3 of our
providers, that's probably enough to give us resiliency).

-Jim


