Building container images in nodepool
Hi,

I think the container spec is just about ready, but one part of it recently prompted a discussion that I think could use some wider input.

Specifically, this is regarding the "container-as-machine" use-case. So think "run pep8 in a container" when you're thinking about this. There's a whole bunch of stuff in the spec about container-native workloads. This isn't it.

The issue is whether or not Nodepool should be able to build container images, much as we do today for machines.

Note that regardless, Nodepool will be able to use "upstream" container images from the local Kubernetes registry or Docker Hub. So this would be an optional feature (just as Nodepool can use extant cloud images).

There are a few reasons to build local images (of any kind):

* To provide a git cache.
* To have more up-to-date packages than those provided externally.
* To have a backup of a working image if something upstream changes.
* To have a standardized image.
* To add local requirements.

To be fair, the last two might be the same thing said in different ways.

In an offline discussion prompted by the spec, Monty has suggested that running a git mirror in Kubernetes may obviate the need for using custom images for a git cache [related: applying this to VM images as well could reduce image sizes there; he's planning on writing up his thoughts on this more fully later]. So if we take that out of the equation, the question becomes:

Are the remaining reasons for building an image locally sufficient to build this functionality into Nodepool?

A quick clarification: we're obviously not going to write a new image builder either way. We're talking about having Nodepool run some existing container image build process on a periodic schedule and upload the result to the local registry. The main benefits of Nodepool doing this are using the same images across different registries, having a daemon performing the image builds automatically, and the ability to roll back to previous versions.

If you have thoughts on whether this is useful, or whether it should be omitted in favor of relying more heavily on base images or external systems, please let me know. I'll try to collect feedback and update the spec early next week. Then we can merge it. :)

-Jim
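[For concreteness, here is a rough sketch of what such an optional feature might look like in a Nodepool config, by analogy with today's diskimages section. This is purely illustrative: the spec defines no such syntax, and every name below is made up.]

  # Hypothetical sketch only -- the spec does not define this syntax.
  # The shape mirrors nodepool's existing "diskimages" section: nodepool
  # would drive an existing build process on a schedule and upload the
  # result to the local registry; it would not implement a builder itself.
  container-images:
    - name: local-python
      # an existing builder input, e.g. a Dockerfile in a config repo
      dockerfile: /etc/nodepool/container-images/local-python/Dockerfile
      rebuild-age: 86400   # seconds; rebuild roughly daily
      # upload target; retaining prior tags is what enables roll-back
      registry: registry.example.org:5000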
Hey,

I'm not against nodepool building container images, as it's within the scope of how it currently builds disk images.

However, if you had a zuul job build and push images to a registry, would that be a more suitable path forward than having nodepool do what is effectively a type of job? Just something to compare to/consider anyway.

Cheers,
Josh
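[As a sketch of this alternative, assuming a site-defined "periodic" pipeline and a pre-provisioned Zuul secret holding registry credentials; the job, playbook, registry, and image names are all made up:]

  # Sketch of the zuul-job approach; every name here is illustrative.
  # --- zuul.yaml: a periodic job holding registry credentials as a secret
  - job:
      name: build-local-container-image
      run: playbooks/build-image.yaml
      secrets:
        - registry_credentials

  - project:
      periodic:
        jobs:
          - build-local-container-image

  # --- playbooks/build-image.yaml: build and push with plain docker
  - hosts: all
    tasks:
      - name: Build the image from the project's Dockerfile
        command: docker build -t registry.example.org:5000/local-python:latest .
        args:
          chdir: "{{ zuul.project.src_dir }}"

      - name: Push to the local registry nodepool pulls from
        shell: |
          docker login -u {{ registry_credentials.username }} \
                       -p "{{ registry_credentials.password }}" registry.example.org:5000
          docker push registry.example.org:5000/local-python:latest

Because the build is an ordinary job, proposed image changes could also be exercised in CI before anything is published.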
On Sat, Jul 14, 2018 at 03:23:21PM +1000, Joshua Hesketh wrote:
> However, if you had a zuul job build and push images to a registry, would that be a more suitable path forward than having nodepool do what is effectively a type of job?
Something I heard a while back, in the context of openstack-infra, was the idea of giving jobs access to the control plane to upload an image (in that case it was a VM image, but I think the same would apply to a private registry). While zuul jobs are now able to do that with secrets, do we want to recommend that people do it? I've long wanted to have either nodepool or, in this case, a zuul job upload DIB images into our control plane so we could boot servers like mirror, gerrit, etc. from them. But in the past there was some concern about how to secure it. I think it would be interesting to drive image builds from a zuul job, but I'm also interested in how we'd solve the chicken/egg issue with that first job if no image exists :)
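[One possible answer to the chicken/egg question, sketched loosely after the kubernetes driver shape the spec discusses, with made-up names: back the image-build job's label with an upstream image, so the first build never depends on a locally built one.]

  # Hypothetical bootstrap: the image-build job runs on a label backed
  # by an upstream image that always exists. Names are illustrative.
  labels:
    - name: image-builder

  providers:
    - name: local-k8s
      driver: kubernetes
      pools:
        - name: main
          labels:
            - name: image-builder
              type: pod
              image: docker.io/library/python:3   # upstream; no local build required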
On 07/13/2018 01:56 PM, James E. Blair wrote:
> Are the remaining reasons for building an image locally sufficient to build this functionality into Nodepool?
I don't think it's particularly useful to have nodepool build container images - although I think it should be pretty easy to accomplish, so I'm not personally *opposed* to us having the functionality.

The reason I think it's not useful to build container images but still do think it's useful to build VM images is that one of the things dockerhub has provided to the world (I'd possibly argue this is the primary value of docker itself) is a global shared set of images that end users can select.

In OpenStack, images are a vendor/operator concern. The 'ubuntu' or 'fedora' image put into a cloud doesn't have consistent or predictable content across clouds because it's literally a different image. So we HAVE to make images for the multi-cloud story to work.

With docker/kubernetes, the image 'ubuntu:latest' is the same everywhere, because it comes from dockerhub, and even if people have a local registry, it's going to pull ubuntu from dockerhub. My understanding of best practice is that if someone wants a custom ubuntu image in a local registry, they'd most likely publish it under a different name - and possibly base it on the upstream 'ubuntu' image. Obviously people are free to violate that if they want in their private environments, but from a nodepool perspective it actually is reasonable for a user to connect a nodepool to 10 different kubernetes provided by 10 different public clouds and expect 'ubuntu:latest' to be the same bits.

SO - all of that to say ... I don't think it's a *terrible* idea to have a 'have nodepool build container images for me' feature, but I'm skeptical that it would provide value or even be used by anyone. I do not think we should use it in OpenStack if it existed.

Also, as Josh suggests, if someone did want local images, having a zuul job that built them and then published them to a registry that the nodepool is pulling from should work fine ... and would also serve the use-case of "I want to test my images before pushing them to nodepool" that Yolanda has been wanting for years.

Monty
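[To illustrate the multi-cloud expectation Monty describes - a sketch loosely following the kubernetes driver shape under discussion, with made-up provider names: the same label resolves to the same dockerhub bits in every cluster, so no local image build is needed.]

  # Illustration only: one label, many clusters, identical 'ubuntu:latest'
  # bits everywhere, because each cluster resolves it through dockerhub.
  labels:
    - name: ubuntu-pod

  providers:
    - name: cloud-a-k8s
      driver: kubernetes
      pools:
        - name: main
          labels:
            - name: ubuntu-pod
              type: pod
              image: ubuntu:latest
    - name: cloud-b-k8s
      driver: kubernetes
      pools:
        - name: main
          labels:
            - name: ubuntu-pod
              type: pod
              image: ubuntu:latest   # same bits as cloud-a, no local build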
On 16.07.18, 17:08, "Monty Taylor" <mordred@inaugust.com> wrote:

> I don't think it's a *terrible* idea to have a 'have nodepool build container images for me' feature, but I'm skeptical that it would provide value or even be used by anyone.

I agree with Monty. We have been running containers on top of images built by nodepool for a long time, and I've always looked at those container images as artifacts produced by the CI system. We try to treat the current nodepool like infrastructure (can I call that some kind of IaaS?) that provides a generic image to the projects, which then run their workload within a container on the node supplied by nodepool. One of the major advantages is that the container images fall directly out of the project's CI process, while changing images that are built by nodepool is much more coupled to operations, as that is clearly something that should be at least reviewed by the operations team. Further, by treating the container images as a CI artifact, the project has much more control over when to update the image and when not to, instead of the decoupled regular image rebuilds currently used in nodepool.

So with container support added to nodepool, I see this use case simplified to just ripping out the VM layer while keeping the focus on the project-supplied container images, which has been very useful to us in the past. It might not be entirely accurate, but I see that use case as moving from IaaS to some kind of PaaS, which simplifies working with project-supplied container images.

So I personally don't think we need this, but I'm also not opposed to it if someone has a different view and needs container image build functionality in nodepool.

Tobias
On July 16, 2018 4:27 pm, Tobias.Henkel@bmw.de wrote:
> So I personally don't think we need this, but I'm also not opposed to it if someone has a different view and needs container image build functionality in nodepool.
I agree with Monty and Tobias.

While we could add an image building and uploading interface to the Nodepool driver API - for example, to be able to use virt-customize or upload a disk file to a libvirt host - that seems like a lot of work. Using Zuul jobs to manage image building as Josh suggests sounds like a better implementation. I'm actually looking forward to building snapshot-based images that way.

-Tristan
On Fri, Jul 13, 2018 at 11:56:13AM -0700, James E. Blair wrote:
> If you have thoughts on whether this is useful, or whether it should be omitted in favor of relying more heavily on base images or external systems, please let me know.
I'd be in favor of having nodepool also build containers, alongside VMs. However, if nodepool also learns how to use containers from a registry, that is nice too. The main thing I like is having a single place to see both VM and container image builds.

- Paul
participants (6)
- corvus@inaugust.com
- Joshua Hesketh
- Monty Taylor
- Paul Belanger
- Tobias.Henkel@bmw.de
- Tristan Cacqueray