Building container images in nodepool
Tobias.Henkel at bmw.de
Mon Jul 16 16:27:25 UTC 2018
On 16.07.18, 17:08, "Monty Taylor" <mordred at inaugust.com> wrote:
On 07/13/2018 01:56 PM, James E. Blair wrote:
> Hi,
>
> I think the container spec is just about ready, but one part of it
> recently prompted a discussion that I think could use some wider input.
>
> Specifically, this is regarding the "container-as-machine" use-case. So
> think "run pep8 in a container" when you're thinking about this.
> There's a whole bunch of stuff in the spec about container-native
> workloads. This isn't it.
>
> The issue is whether or not Nodepool should be able to build container
> images, much as we do today for machines.
>
> Note that regardless, Nodepool will be able to use "upstream" container
> images from the local Kubernetes registry or Docker Hub. So this would
> be an optional feature (just as Nodepool can use extant cloud images).
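> 
> For illustration only (the provider and label names here are invented,
> and the exact schema is up to the spec), pointing a kubernetes provider
> at an upstream image might look roughly like:
> 
>   providers:
>     - name: kube-cluster
>       driver: kubernetes
>       pools:
>         - name: main
>           labels:
>             - name: pod-fedora
>               type: pod
>               image: docker.io/fedora:28
> 
> Nothing is built locally here; 'image' just names an extant upstream image.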
>
> There are a few reasons to build local images (of any kind):
>
> * To provide a git cache.
> * To have more up to date packages than those provided externally.
> * To have a backup of a working image if something upstream changes.
> * To have a standardized image.
> * To add local requirements.
>
> To be fair, the last two might be the same thing said different ways.
>
> In an offline discussion prompted by the spec, Monty has suggested that
> running a git mirror in Kubernetes may obviate the need for using custom
> images for a git cache [related: applying this to VM images as well
> could reduce image sizes there; he's planning on writing up thoughts
> on this more fully later]. So if we take that out of the equation, the
> question becomes:
>
> Are the remaining reasons for building an image locally sufficient to
> build this functionality into Nodepool?
>
> A quick clarification: we're obviously not going to write a new image
> builder either way. We're talking about having Nodepool run some
> existing container image process on a periodic schedule and upload the
> result to the local registry. The main benefits of Nodepool doing this
> are using the same images across different registries, having a daemon
> perform the image builds automatically, and being able to roll back
> to previous versions.
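> 
> Purely as a hypothetical sketch (none of these configuration keys exist
> and the names are invented), such a section might sit alongside the
> existing diskimages config:
> 
>   container-images:
>     - name: bionic-pep8
>       # delegate the build to an existing tool, e.g. docker build
>       dockerfile: images/bionic-pep8/Dockerfile
>       rebuild-age: 86400   # periodic rebuild, as with diskimages
>       registries:
>         - local-registry   # upload the result here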
>
> If you have thoughts on whether this is useful, or if it should be
> omitted in favor of relying more heavily on base images or external
> systems, please let me know. I'll try to collect feedback and update
> the spec early next week. Then we can merge it. :)
I don't think it's particularly useful to have nodepool build container
images - although I think it should be pretty easy to accomplish, so I'm
not personally *opposed* to us having the functionality.
The reason I think it's not useful to build container images, but still
useful to build VM images, is that one of the things
dockerhub has provided to the world (I'd possibly argue this is the
primary value of docker itself) is a global shared set of images that
end users can select.
In OpenStack, images are a vendor/operator concern. The 'ubuntu' or
'fedora' image put into a cloud doesn't have consistent or predictable
content across clouds because it's literally a different image. So we
HAVE to make images for the multi-cloud story to work.
With docker/kubernetes, the image 'ubuntu:latest' is the same
everywhere, because it comes from dockerhub, and even if people have a
local registry, it's going to pull ubuntu from dockerhub. My
understanding of best practices is that if someone wants a custom
ubuntu image in a local registry, they'd most likely publish it under a
different name - and possibly base it on the upstream 'ubuntu' image.
Obviously people are free to violate that if they want in their private
environments, but from a nodepool perspective it actually is reasonable
for a user to connect a nodepool to 10 different kubernetes provided by
10 different public clouds and expect 'ubuntu:latest' to be the same bits.
SO - all of that to say ...
I don't think it's a *terrible* idea to have a 'have nodepool build
container images for me' feature, but I'm skeptical that it would
provide value or even be used by anyone. I do not think we should use it
in OpenStack if it existed.
Also, as Josh suggests, if someone did want local images, having a zuul
job that built them and then published them to a registry that the
nodepool is pulling from should work fine ... and would also serve the
use-case of "I want to test my images before pushing them to nodepool"
that Yolanda has been wanting for years.
Monty
I agree with Monty. We have been using containers on top of images built by nodepool
for a long time, and I've always looked at them as artifacts produced by the CI system.
We try to treat the current nodepool like infrastructure (can I call that some kind of IaaS?)
that provides a generic image to the projects, which then run their workload within a
container on the node supplied by nodepool. One of the major advantages is that
the container images fall directly out of the CI process of the project, while changing
images that are built by nodepool is much more coupled to operations, as it is clearly
something that should at least be reviewed by the operations team. Further, by
treating the container images as a CI artifact, the project has much more control
over when to update the image and when not to, instead of relying on the decoupled
regular image rebuilds currently used in nodepool.
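
As a rough sketch of that pattern (the job, playbook, and secret names here
are made up, and the playbooks would run e.g. docker build and docker push),
a project could carry something like:

  - job:
      name: build-container-image
      # build the image from the project's Dockerfile
      run: playbooks/build-image.yaml
      # push the result to the registry nodepool pulls from
      post-run: playbooks/push-image.yaml
      secrets:
        - registry-credentials

That way a change to the image is reviewed and tested like any other change
to the project.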
So with container support added to nodepool, I see this use case simplified
to just ripping out the VM layer while keeping the focus on the
project-supplied container images, which has been very useful to us in the past.
It might not be entirely accurate, but I see that use case as moving from IaaS
to some kind of PaaS, which simplifies working with project-supplied container
images.
So I personally don't think we need this, but I'm also not opposed to it if
someone has a different view and needs container image build functionality
in nodepool.
Tobias