Hi,

As you know, with respect to git repositories, Zuul frequently operates well into the future, allowing jobs to build software with changes that haven't landed yet under the assurance that they will. As more of us have incorporated container image building into our processes, we find that we want the same thing there -- that not only would our git repositories represent the future that Zuul sees, but also our container image registries.

As a concrete and simple example, we have an integration job called 'zuul-quick-start' which uses both Nodepool and Zuul container images. Those images themselves depend on build images created from the openstack-infra/system-config repository, which they use via a "FROM" line in Dockerfiles, and extending the zuul-quick-start job to be able to rebuild and use those builder images in the case of a Depends-On change is too complex to consider. In particular, the "FROM" line is problematic, as the only way to realistically use a speculative image as a base image is to alter the Dockerfile or make the image available in a registry.

Therefore, this is a proposal for a system for building speculative images which can then be used by further speculative images.

Every item in a Zuul pipeline has its own future state which is related to, but separate from, all the other items. It needs to be able to build an image for a project and store it in a registry under the final published name and tag, without affecting any of the changes ahead of it. Therefore, each item needs its own image registry. In the case of a single, simple job such as zuul-quick-start, we can just run a docker registry on the node on which the job is running. In the case of a more complicated system where we want to build an image once and use it in multiple dependent jobs in a job graph, we will start an image registry at the root of the job graph, pause the job, and the dependent jobs will use it (a rough sketch of such a root job appears a few paragraphs below).

Now that we have the registry for the change, we need to have that registry reflect the state of any changes ahead. We need to know when the change ahead has finished building its images so that we can then add those images into our change's registry.

To accomplish this, we can extend Zuul so that jobs have a provides/requires interface. If our job requires docker images produced by a job in a dependent change, we can say:

- job:
    name: zuul-quick-start
    requires: docker-images

- job:
    name: system-config-build-image-python-builder
    provides: docker-images

Zuul will see that zuul-quick-start should not start until all of the jobs for all of the changes ahead which provide "docker-images" have completed. Then zuul-quick-start will be able to retrieve images from the preceding changes' jobs and add them to its own registry.

In the gate, the behavior of this is obvious: if a change has a job with a requirement, iterate over all changes ahead of it in the shared queue and search for jobs on those changes which provide that requirement. If there are none, or all of them have completed or paused, then that job is ready to start. Otherwise, wait.

In an independent pipeline, we do include dependent changes ahead in the shared queue, but they are not live -- they don't run any actual jobs. So in that case, we're going to have to peek outside our shared queue and see if our dependent changes show up anywhere else in the same pipeline, and if they do, wait for their jobs. If they don't show up, then we know they completed earlier, and it's safe to start.
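[Editorial sketch] To make the "registry at the root of the job graph" idea above a little more concrete, here is a minimal sketch of what such a root job's run playbook might look like. It is not part of the proposal itself; the registry image, port, and the name of the returned variable are all assumptions made up for illustration.

- hosts: all
  tasks:
    - name: Start a per-change docker registry on the job node
      command: docker run -d --name speculative-registry -p 5000:5000 registry:2

- hosts: localhost
  tasks:
    - name: Pause this job so the registry stays up for dependent jobs
      zuul_return:
        data:
          zuul:
            pause: true
          # "speculative_registry" is an assumed variable name for dependent
          # jobs to consume; it is not an established interface.
          speculative_registry:
            # Assume a single-node job; pass along that node's address.
            host: "{{ hostvars[groups['all'][0]].ansible_host }}"
            port: 5000

Dependent jobs would then point their image tooling at that host and port; when the last dependent job finishes, Zuul resumes and completes the paused root job, tearing down the registry with it.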
There's one more thing about this change that will help us later: a job that "provides" something should be able to pass data to a job which "requires" it. So our image build jobs can use zuul_return to provide information about which images they built so that it's easier for later jobs to import those images into their registries.

Once again, the independent pipelines make this tricky, since those jobs may have run some time in the past and the scheduler may have forgotten the data they returned. However, once the SQL database is required, we can rely on that to fill in any missing data.

That's the only change we need to Zuul itself to support this system. The rest is all playbooks and roles.

That brings us to the challenge of how to actually transfer these artifacts between items. I can think of two approaches. They are not mutually exclusive -- we can implement both if we want. They both have pros and cons. But I would like input from Zuul operators (including openstack-infra/opendev) as to which they would prefer, and I'll implement that approach in OpenDev for Zuul's own upstream use. If we have a split decision, someone else can implement the second approach. I think we can support both in zuul-jobs, but I'd like us to run the most widely applicable approach upstream.

Alternative 1: Use the Logserver
================================

In the case of a simple job, at the end of the job, or in the case of a complex job graph, in a child job dependent on the completion of the image build process (so we don't have to wait for all the jobs to complete), we will export the images to the filesystem and upload their layers (minus any layers which are already on Docker Hub, in order to save space) to the logserver.

This next bit is a little complicated, but remember that this will all be encapsulated in roles so we only have to implement it once; the user interface will resemble "roles: [fetch-dependent-images]".

At the end of the image build job, for every image that we built:

  Export it to a directory on the filesystem.
  For each layer in the image:
    Perform a HEAD request against the Docker Hub registry to see if the layer exists in the target repository.
    If it does, remove the layer from the filesystem.
  Rsync the image directories to the logserver.
  zuul_return the image location on the logserver.

At the start of a job which requires images:

  For each previous image (as provided by zuul_return):
    Rsync the image directory from the logserver to the filesystem.
    For each layer in the image manifest:
      If the layer is missing from the filesystem:
        Get the layer from Docker Hub.
    Import the image into the local registry.

The advantages of this approach:

* No external operational dependencies (i.e., you don't have to run a docker registry).

The disadvantages of this approach:

* Images can't easily be fetched and used for debugging.
* Implementation is slightly more complex.

Alternative 2: Use a Central Registry
=====================================

In this approach, we would run a central registry to store speculative images. It would not store final images at all, and every image uploaded to it would be tagged only with a tag like "pipeline_check_change_1234_original_tag" to uniquely identify its build context.

At the end of the image build job, for every image that we built:

  Copy it from the per-change registry to the central registry, and tag it with pipeline and change id.
  zuul_return the image location and tag.
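[Editorial sketch] As a rough illustration of those two steps (and of the zuul_return data-passing described earlier), this is what the end of an image build job's role tasks could look like. The registry locations, the "built_images" variable, and the returned key names are all assumptions made up for this example, authentication against the central registry is omitted, and skopeo is used only as one convenient way to copy between registries.

- hosts: all
  tasks:
    - name: Copy each built image to the central registry with a speculative tag
      # The per-change registry is assumed to be a plain, unauthenticated
      # registry running on the job node (hence --src-tls-verify=false).
      command: >
        skopeo copy --src-tls-verify=false
        docker://{{ per_change_registry }}/{{ item.repository }}:{{ item.tag }}
        docker://{{ central_registry }}/{{ item.repository }}:{{ zuul.pipeline }}_{{ zuul.change }}_{{ item.tag }}
      loop: "{{ built_images }}"

- hosts: localhost
  tasks:
    - name: Tell later jobs where to find the speculative images
      zuul_return:
        data:
          # Key names here are illustrative only, not an established interface.
          speculative_images:
            registry: "{{ central_registry }}"
            tag_prefix: "{{ zuul.pipeline }}_{{ zuul.change }}"
            images: "{{ built_images }}"

A job which requires the images would do the mirror image of this: pull each image from the central registry by its speculative tag and retag it with the original tag in its own per-change registry.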
At the start of a job which requires images:

  For each previous image (as provided by zuul_return):
    Copy it from the central registry to the local registry, and tag it with the original tag.

The advantages of this approach:

* Developers can fetch images from the central registry by pulling them with the crazy tag.
* Implementation is a little easier, and uses more standardized tools.

The disadvantages of this approach:

* An external service is required.
* We will need to prune old images from the registry.

Conclusion
==========

I don't think this system will take long to implement with either of the alternatives. Please let me know what you think of the system, and of the storage/transfer alternatives.

-Jim
On Fri, Jan 25, 2019 at 2:38 PM James E. Blair <corvus@inaugust.com> wrote:
Alternative 2: Use a Central Registry
=====================================

In this approach, we would run a central registry to store speculative images. It would not store final images at all, and every image uploaded to it would be tagged only with a tag like "pipeline_check_change_1234_original_tag" to uniquely identify its build context.

At the end of the image build job, for every image that we built:

  Copy it from the per-change registry to the central registry, and tag it with pipeline and change id.
  zuul_return the image location and tag.

At the start of a job which requires images:

  For each previous image (as provided by zuul_return):
    Copy it from the central registry to the local registry, and tag it with the original tag.

The advantages of this approach:

* Developers can fetch images from the central registry by pulling them with the crazy tag.
* Implementation is a little easier, and uses more standardized tools.

The disadvantages of this approach:

* An external service is required.
* We will need to prune old images from the registry.
This is more or less what we are doing today for building and promoting container images for RDO. In our periodic pipeline, we have a parent job that builds container images and pushes them to an OpenShift standalone registry [1]. These images have a specific tag format that dependent jobs are able to use in order to pull and test them. We have an out-of-band process for promoting the images based on the results of the following jobs -- if the criteria for promotion are met, the images are pushed to DockerHub under an appropriate tag.

The container images on the registry are configured to be public and anyone can pull them, but pushing requires a token from a service account in the namespace.

We prune images using a fairly generic script [2] which prunes tags older than N days that do not match a supplied whitelist. This deletes the tags but does not delete the container image blobs from the filesystem. To delete the (now orphaned) blobs from the filesystem, OpenShift provides an "oadm prune images" command [3] which we run after deleting the tags.

We initially chose OpenShift as our registry because it made authentication and pruning simple when we compared it to docker-registry and docker-distribution. We experimented with Atomic Registry, which was a simplified implementation of the OpenShift standalone registry, but Atomic Registry was deprecated [4]. The OpenShift deployment lives on a single node and contains only the necessary bits to expose the registry publicly. It is not meant to host applications.

The registry has been in use for over a year now and is currently hosting nearly 10 000 tags across 878 images in 4 different namespaces, and it prunes around 1000 tags every day. It has been fairly stable with 4 cores, 8GB of RAM and ~500GB of disk.

[1]: https://docs.openshift.com/container-platform/3.9/install_config/install/sta...
[2]: https://github.com/rdo-infra/rdo-infra-playbooks/blob/master/roles/rdo-infra...
[3]: https://docs.openshift.com/container-platform/3.9/admin_guide/pruning_resour...
[4]: https://www.projectatomic.io/blog/2017/05/oo-standalone-registry/

David Moreau Simard
dmsimard = [irc, github, twitter]
On 25.01.19, 20:38, "James E. Blair" <corvus@inaugust.com> wrote:

<snip>

I like this and I'd vote for the central registry variant. I think using standardized tools and the ability for the users to fetch the images outweigh the advantages of variant one.

-Tobias
Hi,

While still on zuul v2, we're currently building docker, libvirt, & vmware images.

On Fri, 25 Jan 2019 at 19:38, James E. Blair <corvus@inaugust.com> wrote:
Hi,
<snip>
To accomplish this, we can extend Zuul so that jobs have a
provides/requires interface. If our job requires docker images produced by a job in a dependent change, we can say:
- job:
    name: zuul-quick-start
    requires: docker-images

- job:
    name: system-config-build-image-python-builder
    provides: docker-images
Zuul will see that zuul-quick-start should not start until all of the jobs for all of the changes ahead which provide "docker-images" have completed. Then zuul-quick-start will be able to retrieve images from the preceding changes' jobs and add them to its own registry.
This is fantastic, and it's great to see zuul extending to provide fine control of a pipeline.
Alternative 1: Use the Logserver
================================
The advantages of this approach:
* No external operational dependencies (i.e., you don't have to run a docker registry).
Might be worth considering that users may want to publish more items than just docker images, so this could be used as a generic storage location to get started.
The disadvantages of this approach:

* Images can't easily be fetched and used for debugging.
* Implementation is slightly more complex.

Alternative 2: Use a Central Registry
=====================================

The advantages of this approach:

* Developers can fetch images from the central registry by pulling them with the crazy tag.
* Implementation is a little easier, and uses more standardized tools.

The disadvantages of this approach:

* An external service is required.
* We will need to prune old images from the registry.
I expect most will want to use a dedicated service for hosting artifacts once they are past the getting-started phase, so this would be the preferred approach for us, though we'd swap in Artifactory to replace the registry here, as it would also cover publishing/consuming the other artifact types we're using.
Conclusion
==========
I don't think this system will take long to implement with either of the alternatives. Please let me know what you think of the system, and of the storage/transfer alternatives.
-Jim
It seems alternative 1 would be good for getting started because it could easily cover any type of artifact, and so gives a quicker path to getting going by not needing an external dependency. But I'm expecting that most people will subsequently want to move to alternative 2, because once they have been running for a while it's likely they will have brought up services to handle hosting their artifacts, and these likely provide additional tooling/tracking that would be useful. So making it easy to swap in what's needed for a particular deployment would be most desirable from my view, whichever approach you go with.

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"
corvus@inaugust.com (James E. Blair) writes:
Hi,
As you know, with respect to git repositories, Zuul frequently operates well into the future, allowing jobs to build software with changes that haven't landed yet under the assurance that they will. As more of us have incorporated container image building into our processes, we find that we want the same thing there -- that not only would our git repositories represent the future that Zuul sees, but also our container image registries.
I've completed the work described in this email, and it is now in production use in the OpenDev project infrastructure. Documentation describing how to set the system up is published at:

  https://zuul-ci.org/docs/zuul-jobs/docker-image.html

-Jim
participants (4)

- corvus@inaugust.com
- Darragh Bailey
- David Moreau Simard
- Tobias.Henkel@bmw.de