Nodepool image building

James E. Blair jim at acmegating.com
Mon Feb 8 15:08:02 UTC 2021


Tristan Cacqueray <tdecacqu at redhat.com> writes:

> On Sat, Feb 06, 2021 at 15:48 James E. Blair wrote:
>> I think the key requirements are:
>>
>> * Able to create images using any tooling.
>> * Visibility into build history.
>> * Nodepool-launcher should automatically use the "latest" build.
>
> Could we also add the ability to sanity check the image before enabling
> it?

Sure, we can add that as a requirement.  Note that this is currently
implemented in Nodepool with:

https://zuul-ci.org/docs/nodepool/openstack.html#attr-providers.[openstack].post-upload-hook

The options in the plan below where we have nodepool uploading the image
could use that as well, and additionally, if Zuul builds the image,
there may be more options for validating it as part of the build
process.
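
For reference, a minimal sketch of wiring that hook into a provider
definition (the cloud name and script path are illustrative, not from
any real deployment):

  providers:
    - name: example-cloud
      driver: openstack
      cloud: example
      # Runs after each image upload; a non-zero exit marks the upload
      # as failed, so the image is never put into service.
      post-upload-hook: /usr/local/bin/sanity-check-image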

Based on the discussion below, I probably should have included "have a
managed lifecycle for older images" as a requirement as well.

>> Part of this solution is obvious, but I think there are open questions
>> for other parts:
>>
>> * Should the Zuul job be responsible for uploading the image to the
>>   cloud?
>> * Should Nodepool be responsible for deleting images?
>>
>> About that downside: clearly there's an issue when bootstrapping a new
>> Zuul system -- how can you run jobs without images?  Most new Zuul
>> installations start by using cloud images from the provider.  Any Zuul
>> user who wants custom images likely already has some kind of job running
>> on some kind of image anyway.  I think it's a natural progression to
>> create a Zuul job to build a custom image and then begin using that
>> custom image in "real" Zuul jobs.  Long story short: bootstrap the
>> system with cloud-images.  As an OpenDev admin, I think I would be
>> comfortable with a Zuul job that built our DIB images using cloud
>> provider images -- I don't think the differences between those images
>> are so great as to make a DIB job difficult to write.  However, if we
>> did use this system, I'm not sure we would ever have to.  We have enough
>> clouds with our own images in them already, we could probably use our
>> images to make new ones.  In this case: a chicken with no egg.  :)
>>
>> Let's work through how this might look.  First the easy part, then I'm
>> going to propose some alternate solutions.
>>
>> 1) Build the image:
>>
>> A Zuul job ("build-nodepool-image-ubuntu-focal") is triggered by a
>> timer.  It has a mutex to avoid duplicate image builds.  It runs DIB in
>> this case but could just as easily run packer or even snapshot the
>> system it's running on (we can add roles in zuul-jobs for common image
>> building systems).
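
A minimal sketch of such a job, assuming a periodic pipeline with a
timer trigger already exists; the playbook, semaphore, and pipeline
names here are illustrative:

  - semaphore:
      name: image-build-ubuntu-focal
      max: 1    # the mutex: at most one build at a time

  - job:
      name: build-nodepool-image-ubuntu-focal
      run: playbooks/build-dib-image.yaml
      semaphore: image-build-ubuntu-focal

  - project:
      periodic:
        jobs:
          - build-nodepool-image-ubuntu-focal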
>>
>> 2) Get the image into the cloud(s):
>>
>>    a) Our first option is to do this work in Zuul.  Have the job (or a
>>       dependent job) upload the image to the cloud(s) (if this was a
>>       snapshot build, that's a no-op -- it's already there).  Advantage:
>>       the upload log is visible along with the build log.  Disadvantage:
>>       if the upload fails, we won't retry like Nodepool does.
>>
>>    b) An alternative would be to have Nodepool do this.  Nodepool could
>>       even query the Zuul API to find the latest build of
>>       build-nodepool-image-ubuntu-focal, check the artifacts list, and
>>       download the image file from there.  After that, it looks a lot
>>       like the current nodepool-builder process right after DIB runs (it
>>       has a file on disk and it starts to upload it).
>>       Advantage: we use all of the Nodepool image lifecycle management.
>>       Disadvantage: upload logs are hidden from users (like today).
>>                     Also, to integrate snapshot builds, we would have
>>                     to download the snapshot and re-upload it.
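
To make either option concrete, the build job's final tasks could
upload the file directly (2.a) and advertise it as a Zuul artifact so
that a consumer like 2.b can find it through the API.  A sketch,
assuming the openstack.cloud Ansible collection; every name, path, and
URL below is illustrative:

  - name: Upload the built image to the cloud (option 2.a)
    openstack.cloud.image:
      cloud: example-cloud
      name: "ubuntu-focal-{{ zuul.build }}"
      filename: /var/tmp/dib/ubuntu-focal.qcow2
      state: present

  - name: Advertise the image for consumers such as option 2.b
    zuul_return:
      data:
        zuul:
          artifacts:
            - name: ubuntu-focal image
              url: "https://logs.example.com/{{ zuul.build }}/image.qcow2"
              metadata:
                type: qcow2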
>>
>> 3) Get nodepool-launcher to use the new image:
>>
>>    a) If we use 2.a, then we have an image in the cloud without a record
>>       in Zookeeper.
>>
>>       i) We could have nodepool-builder (or something like it) scan the
>>          cloud for images matching a certain pattern and create ZK
>>          records for them.
>>
>>       ii) Or we could just have nodepool-launcher look for that pattern
>>           and use the latest image.
>
> Isn't (ii) already the case when using cloud-image?

Cloud-image refers to a single image, so yes, you can replace that and
Nodepool should pick up the new uuid.  But it doesn't pattern match on
names, so you can't easily have the same kind of "just delete the
latest" rollback strategy we do with nodepool-builder and dib.

I think we would need to add some kind of pattern matching support in
order to support keeping more than one image in the cloud (latest and
previous at least).
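
Purely as a sketch of what that might look like (this attribute does
not exist in Nodepool today; the name is invented):

  cloud-images:
    - name: ubuntu-focal
      # Hypothetical attribute: instead of a fixed image-name, match a
      # pattern and always launch from the newest image whose name
      # matches it.
      image-name-pattern: '^ubuntu-focal-.*$'

Rolling back would then be a matter of deleting the newest matching
image from the cloud.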

>>    b) If we use 2.b, then launcher will work as it does today with
>>       nodepool-builder; no changes are needed.
>>
>> 4) Delete the image when it's no longer needed
>>
>> We should use the n-2 rule we have today so there's always at least 2
>> images in the cloud for easy rollback.
>>
>>    a) If we used 2.a + 3.a.ii then we don't have ZK records.  It would
>>       probably make sense to handle image deletes in Zuul jobs with this
>>       setup.  Presumably the jobs would query the cloud API, list
>>       images, and delete them according to the age rules.  The
>>       disadvantage here is we'd be re-implementing a good chunk of
>>       nodepool-builder in Ansible modules.  It's doable, but may be a
>>       bit clunky.
>>
>>    b) If we use 2.a + 3.a.i or 2.b + 3.b then we have records in ZK, so
>>       nodepool-builder (or something like it) can delete the images more
>>       or less how it does today.
>>
>>
>> I think those are most of the choices.
>>
>> Now that I've typed all of that out, I think I really like the system
>> with "b" for all the choices.  I'm not too worried about the upload logs
>> being hidden.  And the snapshot system is workable, if not ideal.
>>
>> If we really wanted to support snapshots better, then 2.a + 3.a.i + 4.b
>> would probably be best, but at the cost of moving the upload code into
>> zuul-jobs roles.
>>
>> Or we could do the same thing we did last time and implement the "b"
>> solution ("nodepool-zuul-builder") and if someone really wants to do
>> snapshots, then we implement 2.a[snapshot] + 3.a.i + 4.b as
>> ("nodepool-snapshot-builder").  That would probably be fairly efficient
>> as far as code goes.
>>
>> Finally, a word about DIB and the current nodepool-builder: none of this
>> requires that we change anything about the current nodepool-builder
>> process.  If we like it, we can keep it exactly as it is.  Indeed, in
>> OpenDev's case, we rely very heavily on the cache on the build host, so
>> it's not at all clear that doing builds on throwaway nodes is ideal.  Of
>> course, we could do builds on static Zuul nodes and get the benefit of
>> the cache.  I think the way to approach this would be to implement the
>> new system, then try using it with DIB, and if we love it and find no
>> significant drawbacks, consider deprecating the current
>> nodepool-builder.  If we don't love it, we'll keep it.
>>
>> I'd appreciate feedback on this.  I know this is a bit free-form; I'll
>> take the feedback I get and synthesize it into a proper spec.
>>
>> -Jim
>>
>> PS: there's a fairly well-known hack for making non-DIB image builds by
>> replacing DIB with a script that accepts the same arguments.  The "b"
>> plan could actually be implemented today with no changes to Nodepool
>> using that same hack.  A final implementation probably wouldn't look
>> much different structurally.
>
> I prefer the system with "a" for all the choices, and 3.a.ii because it
> doesn't require an extra service or information to be stored in ZK.
> Also everything is defined in Zuul without hidden pieces.
> This is actually the system we implemented to manage container updates:

Okay, so that's:

1)      Build the image in Zuul.
2.a)    Upload the image in Zuul.
3.a.ii) Have nodepool-launcher use cloud-images with pattern matching to
        determine the latest image.
4.a)    Write new Zuul jobs to manage deletion.

> We have been working on a Zuul periodic pipeline to update
> container images. The key features of this pipeline are:
>
> - Gate image definition changes, by trying to build and sanity check
>   them before merging a change.
>
> - Save resources by updating the image instead of doing a clean rebuild.
>   When the image definition has not changed, and when there is enough
>   room for more layers, a `dnf update` task is performed to produce a
>   single extra layer. When nothing gets updated, the promotion is skipped.

This advantage probably accrues more easily to container images than to
VM images.

> - Sanity check the image before promotion, by running some commands such
>   as `tox --version` or `twine --version`
>
> You can find the zuul-worker-* images published at
>   https://quay.io/organization/software-factory
>
> The build history is at:
>   https://softwarefactory-project.io/zuul/t/containers-update/builds
>
> The image update jobs are in:
>   https://softwarefactory-project.io/cgit/containers/tree/zuul.d
>
> Note that there is still a bit of work to be done before we release and
> document that use-case.

I don't see a rollback strategy with that.  I think one of the
requirements that we have (and I should have explicitly listed) is to
maintain the image lifecycle management that we have with
nodepool-builder.  It doesn't have to be implemented in n-b, it could be
in a Zuul job (I described this as 4.a).  But we've found that, at least
with heavyweight images, it's very difficult to thoroughly validate
images before use.  There's only one way to find out if something esoteric
is going to break devstack: run devstack for an hour.  Multiply that by
all the different jobs OpenDev has, and the only way to thoroughly vet
an image is to run every job on it.  The only practical way to do that
is to put it into production.  We're moving to lighter and lighter
images to reduce the impact of image updates, but the problem still
exists.  And when there's a problem, we need to roll back quickly.

At any rate, I think that rollback ability is important to preserve even
if we build images in Zuul.  That's where this gets complicated.  Step
4.a implements that in this path, but to me, that looks like quite a bit
of tricky Ansible.  Essentially, we would need roles that scanned the
cloud image list, figured out which is the latest image (mirroring the
Python code that we would have implemented for 3.a.ii), then deleted
images older than the last two.  It's not impossible; we have something
similar in zuul-jobs already to delete old container image tags.  But it
feels a little more Rube Goldberg.
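
For illustration, the core of such a role might look like the sketch
below, assuming the openstack.cloud collection (the cloud name, the
image name prefix, and the exact shape of the module's return value
would all need checking):

  - name: List images in the cloud
    openstack.cloud.image_info:
      cloud: example-cloud
    register: image_list

  - name: Delete all but the two newest matching images
    openstack.cloud.image:
      cloud: example-cloud
      name: "{{ item.name }}"
      state: absent
    loop: "{{ (image_list.images
               | selectattr('name', 'match', '^ubuntu-focal-')
               | sort(attribute='created_at', reverse=true)
               | list)[2:] }}"

And that is before adding any retry handling.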

The (a) options remind me of the Jenkins jobs which used to do this
before we wrote Nodepool -- they broke a lot because clouds aren't
reliable, and it's tricky doing image management with no central state.
There's a reason OpenDev's nodepool-builder spends most of its time
retrying image upload attempts.

I think that's why I lean toward (b) options -- they generally use as
much of the nodepool-builder image handling framework as possible and
move only the build process to Zuul.

I do think the (a) options are workable, but if I'm reading your reply
right, I may not have communicated the importance and complexity of
image lifecycle management before.  With that in mind now, if we went
with (a), do we think the system would be robust enough that it could
(if we wanted to) be used in OpenDev?

-Jim


