Speculative docker images
James E. Blair
corvus at inaugust.com
Fri Jan 25 19:37:56 UTC 2019
Hi,
As you know, with respect to git repositories, Zuul frequently operates
well into the future, allowing jobs to build software with changes that
haven't landed yet under the assurance that they will. As more of us
have incorporated container image building into our processes, we find
that we want the same thing there -- we want not only our git
repositories to represent the future that Zuul sees, but our container
image registries as well.
As a concrete and simple example, we have an integration job called
'zuul-quick-start' which uses both Nodepool and Zuul container images.
Those images themselves depend on builder images created from the
openstack-infra/system-config repository, which they use via "FROM"
lines in their Dockerfiles. Extending the zuul-quick-start job itself
to rebuild and use those builder images whenever a Depends-On change
touches them is too complex to be practical. The "FROM" line in
particular is problematic: the only realistic way to use a speculative
image as a base image is to alter the Dockerfile or to make the image
available in a registry.
Therefore, this is a proposal for a system for building speculative
images which can then be used by further speculative images.
Every item in a Zuul pipeline has its own future state which is related
to, but separate from, that of all the other items. It needs to be able
to build
an image for a project and store it in a registry under the final
published name and tag, without affecting any of the changes ahead of
it. Therefore, each item needs its own image registry. In the case
of a single, simple job such as zuul-quick-start, we can just run a
docker registry on the node on which the job is running. In the case of
a more complicated system where we want to build an image once and use
it in multiple dependent jobs in a job graph, we will start an image
registry at the root of the job graph, pause the job, and the dependent
jobs will use it.
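As a rough sketch, the playbook for such a root job might start a
registry container and then pause so its children can use it. The
module invocations, port, and the "registry_host" variable below are
all illustrative placeholders, not the final roles:

  - name: Start a registry to hold this change's speculative images
    command: docker run -d -p 5000:5000 --name speculative-registry registry:2

  - name: Tell Zuul where the registry is and pause for the child jobs
    zuul_return:
      data:
        # "registry_host" is a hypothetical variable consumed by the
        # dependent jobs in this job graph.
        registry_host: "{{ ansible_host }}"
        zuul:
          pause: true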
Now that we have the registry for the change, we need to have that
registry reflect the state of any changes ahead. We need to know when
the change ahead has finished building its images so that we can then
add those images into our change's registry.
To accomplish this, we can extend Zuul so that jobs have a
provides/requires interface. If our job requires docker images produced
by a job in a dependent change, we can say:
- job:
    name: zuul-quick-start
    requires: docker-images

- job:
    name: system-config-build-image-python-builder
    provides: docker-images
Zuul will see that zuul-quick-start should not start until all of the
jobs for all of the changes ahead which provide "docker-images" have
completed. Then zuul-quick-start will be able to retrieve images from
the preceding changes' jobs and add them to its own registry.
In the gate, the behavior of this is obvious: if a change has a job with
a requirement, iterate over all changes ahead of it in the shared queue
and search for jobs on those changes which provide that requirement. If
there are none, or all of them have completed or paused, then that job
is ready to start. Otherwise, wait.
In an independent pipeline, the changes we depend on do appear ahead of
us in the queue, but they are not live -- they don't run any actual
jobs. So in that case, we're going to have to peek outside our own
queue and see if those changes show up anywhere else in the same
pipeline, and if they do, wait for their jobs. If they don't show up,
then we know they completed earlier, and it's safe to start.
There's one more thing about this change that will help us later: a job
that "provides" something should be able to pass data to a job which
"requires" it. So our image build jobs can use zuul_return to provide
information about which images they built so that it's easier for later
jobs to import those images into their registries. Once again, the
independent pipelines make this tricky: since those jobs may have run
some time in the past, the scheduler may have forgotten the data they
returned. However, once the SQL database is required, we can rely on
that to fill in any missing data.
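For example, an image build job might end with something like this; the
shape of the returned data is only a guess at this point, and the exact
schema would be part of this work:

  - name: Return information about the images this job built
    zuul_return:
      data:
        # Hypothetical structure; a job which "requires" docker-images
        # would receive this and know what to fetch and retag.
        docker_images:
          - repository: zuul/python-builder
            tag: latest
            # plus whatever location information the chosen transfer
            # approach (discussed below) needs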
That's the only change we need to make to Zuul itself to support this
system.
The rest is all playbooks and roles. That brings us to the challenge of
how to actually transfer these artifacts between items. I can think of
two approaches. They are not mutually exclusive -- we can implement
both if we want. They both have pros and cons. But I would like input
from Zuul operators (including openstack-infra/opendev) as to which they
would prefer, and I'll implement that approach in OpenDev for Zuul's own
upstream use. If we have a split decision, someone else can implement
the second approach. I think we can support both in zuul-jobs, but I'd
like us to run the most widely applicable approach upstream.
Alternative 1: Use the Logserver
================================
At the end of a simple job -- or, in a complex job graph, in a child
job which depends only on the completion of the image builds (so we
don't have to wait for every job to finish) -- we will export the
images to the filesystem and upload their layers to the logserver,
omitting any layers which are already on Docker Hub in order to save
space.
This next bit is a little complicated, but remember that this will all
be encapsulated in roles so we only have to implement it once; the user
interface will resemble "roles: [fetch-dependent-images]".
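In other words, a consuming job might end up looking something like
this; the playbook path and role name here are placeholders:

  - job:
      name: zuul-quick-start
      requires: docker-images
      pre-run: playbooks/quick-start/pre.yaml

  # playbooks/quick-start/pre.yaml
  - hosts: all
    roles:
      - fetch-dependent-images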
At the end of the image build job, for every image that we built:

  Export it to a directory on the filesystem.
  For each layer in the image:
    Perform a HEAD request against the Docker Hub registry to see if
    the layer exists in the target repository.
    If it does, remove the layer from the filesystem (it can be
    fetched from Docker Hub again later).

Rsync the image directories to the logserver.
Use zuul_return to report the image locations on the logserver.
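Sketched as Ansible tasks, the upload side might look roughly like the
following. Looping over images and layers is omitted, the Docker Hub
auth token handling is glossed over, and all variable names are
placeholders:

  - name: Check whether this layer already exists on Docker Hub
    uri:
      url: "https://registry-1.docker.io/v2/{{ repository }}/blobs/{{ layer_digest }}"
      method: HEAD
      # A real implementation needs a pull token first; 401 is what
      # we'd get without one.
      status_code: [200, 401, 404]
    register: layer_head

  - name: Remove the layer from the export directory if Docker Hub has it
    file:
      path: "{{ export_dir }}/{{ layer_digest }}"
      state: absent
    when: layer_head.status == 200

  - name: Upload the trimmed image directories to the logserver
    synchronize:
      src: "{{ export_dir }}/"
      dest: "{{ logserver }}:{{ log_path }}/docker-images/"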
At the start of a job which requires images:

  For each previous image (as provided by zuul_return):
    Rsync the image directory from the logserver to the filesystem.
    For each layer in the image manifest:
      If the layer is missing from the filesystem:
        Get the layer from Docker Hub.
    Import the image into the local registry.
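The consuming side could use skopeo's "dir" transport as one possible
way to load the exported image into the per-change registry; again this
is only a sketch with placeholder names:

  - name: Fetch the exported image directory from the logserver
    synchronize:
      mode: pull
      src: "{{ logserver }}:{{ image_log_path }}/"
      dest: "/tmp/images/{{ repository }}/"

  # Any layers we pruned would have to be fetched from Docker Hub and
  # placed back into the directory before this step.
  - name: Import the image into the local registry
    command: >-
      skopeo copy --dest-tls-verify=false
      dir:/tmp/images/{{ repository }}
      docker://localhost:5000/{{ repository }}:{{ tag }}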
The advantages of this approach:
* No external operational dependencies (i.e., you don't have to run a
  central docker registry).
The disadvantages of this approach:
* Images can't easily be fetched and used for debugging.
* Implementation is slightly more complex.
Alternative 2: Use a Central Registry
=====================================
In this approach, we would run a central registry to store speculative
images. It would not store final images at all, and every image
uploaded to it would be tagged only with a tag like
"pipeline_check_change_1234_original_tag" to uniquely identify its build
context.
At the end of the image build job, for every image that we built:

  Copy it from the per-change registry to the central registry, and
  tag it with the pipeline and change id.
  Use zuul_return to report the image location and tag.

At the start of a job which requires images:

  For each previous image (as provided by zuul_return):
    Copy it from the central registry to the local registry, and tag
    it with the original tag.
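With standard tooling the copies could reduce to something like the
following; the hosts, image names, and tags are placeholders:

  # In the image build job: push to the central registry under a
  # pipeline/change-scoped tag.
  - name: Copy the image from the per-change registry to the central registry
    command: >-
      skopeo copy --src-tls-verify=false
      docker://localhost:5000/zuul/python-builder:latest
      docker://registry.example.org/zuul/python-builder:check_1234_latest

  # In a job which requires the image: pull it back under its original tag.
  - name: Copy the image from the central registry to the local registry
    command: >-
      skopeo copy --dest-tls-verify=false
      docker://registry.example.org/zuul/python-builder:check_1234_latest
      docker://localhost:5000/zuul/python-builder:latest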
The advantages of this approach:
* Developers can fetch images from the central registry by pulling
  them with the crazy tag.
* Implementation is a little easier, and uses more standardized tools.
The disadvantages of this approach:
* An external service is required.
* We will need to prune old images from the registry.
Conclusion
==========
I don't think this system will take long to implement with either of the
alternatives. Please let me know what you think of the system, and of
the storage/transfer alternatives.
-Jim