On 12/12/22 19:45, Ian Wienand wrote:
Hello,
The ensure-kubernetes role in zuul-jobs sets up a k8s environment on the test node so we can do some API testing against it and run small containers. I think it's fair to say this isn't aimed at being a production installation, but rather at providing the minimal parts of k8s that basic functional testing can validate against.
Currently the ensure-kubernetes role deploys k8s via minikube on bionic/focal/jammy, with a choice of two backends: cri-o and docker.
I worry about this whole toolchain. It took some effort to get it working on Jammy, and it's not very elegant. Some of the problems:
- kubectl installs from a repo called "https://apt.kubernetes.io/ kubernetes-xenial main" [1] (see the sketch after this list). It does seem to get updates, and maybe the name just refers to the lowest supported version; I don't know, but it's not reassuring.
- minikube isn't packaged and we download a blob directly. This is probably neither here-nor-there; but it is less appealing than using something from a distro.
- everything is written to use the minikube "none" driver, which avoids having to worry about nested VMs.
- minikube with the "none" driver only supports a CRI compatible backend and has a lot of caveats [2]
- cri-o is CRI compatible, but is not packaged upstream for Ubuntu. We rely on packaging of the tools from the SUSE kubic program, which has now been retired and is in a sort of best-effort community maintenance [3]. What prompted me to look at this was an expired GPG key in that project which broke testing. It was fixed; of course, bugs happen to everyone. But we are 100% dependent on this repo (I could not get the docker back-end to work without some of the runtime tools from it, so it is not completely independent).
- docker is packaged externally, but widely used (by OpenDev in production, even). But to use it with minikube, we need to run a CRI API -> dockerd shim that we download as a binary blob [4].
- With all these disparate parts, we get conflicts. For example, bits of podman are packaged on Jammy, so now "containernetworking-plugins" comes from the distro, which of course installs things in a different location to the kubic versions. That means we have to monkey-patch over the top of the patching that minikube does [5].
- I don't think we actually got any of this working for Focal; we just had jobs on bionic and skipped straight to jammy [6]. So we're not keeping up.
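To make the first point concrete, here is roughly what getting kubectl onto the node involves today. This is a sketch in Ansible terms rather than the role's actual tasks, and it omits the signing-key setup:

    # Sketch only: the real role's tasks differ and key management is
    # omitted, but the repository string is the one from [1]. Note the
    # frozen "kubernetes-xenial" suite regardless of the Ubuntu release.
    - name: Add the upstream Kubernetes apt repository
      ansible.builtin.apt_repository:
        repo: "deb https://apt.kubernetes.io/ kubernetes-xenial main"
        state: present

    - name: Install kubectl from that repository
      ansible.builtin.apt:
        name: kubectl
        state: present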
I think that's probably evidence enough that setting up a test k8s environment with minikube + <cri-o|docker> on Ubuntu is not very compelling. I've personally spent much more time debugging all this than anything to do with actual k8s support in tools.
minikube seems to have a docker driver now [7], which I'm sure did not exist when any of this was written and might suit us as it says it does not need nested virt. However, I would propose that we are not making good use of our limited resources by continuing along this path.
Ubuntu's preferred method for deploying k8s is via microk8s, which they package as snap images [8]. I am not a huge fan of snap images for distribution, but I am a fan of not trying to maintain a bunch of parts that don't really want to work together.
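For comparison, the microk8s path is essentially just the snap. A minimal sketch of what the installation boils down to (the module choice here is illustrative; the actual tasks in the changes may differ):

    # Illustrative only; the proposed ensure-kubernetes changes define
    # the real tasks.
    - name: Install microk8s from the snap store
      community.general.snap:
        name: microk8s
        classic: true

    - name: Wait until microk8s reports ready
      ansible.builtin.command: microk8s status --wait-ready
      changed_when: false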
I have written changes in
https://review.opendev.org/q/topic:ensure-kubernetes-microk8s
to set up microk8s in ensure-kubernetes, and to make it so that, when using use-buildset-repository with microk8s, k8s will pull through the buildset repository. I have converted the nodepool job to use this.
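As an illustration only, a consuming job's pre-run playbook might end up looking something like the following; the variable name here is my shorthand for whichever selection knob the changes above actually define:

    # Hypothetical playbook; see the changes under the topic above for
    # the real interface.
    - hosts: all
      vars:
        # assumed name for the knob that selects the microk8s path
        ensure_kubernetes_type: microk8s
      roles:
        - ensure-kubernetes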
What I would propose is that we deprecate the other paths in ensure-kubernetes and focus our efforts on microk8s on jammy (and future distros going forward). This role is not heavily used, and I don't see our time as well spent supporting multiple installation methods across multiple releases (and it's never been cross-distro -- which I also think is fine).
I see this having a greater role in the future as part of work such as the nodepool->zuul conversion, where it may get a lot more exposure.
Thoughts are welcome :)
-i
[1] https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/instal...
[2] https://minikube.sigs.k8s.io/docs/drivers/none/
[3] https://kubic.opensuse.org/blog/2022-06-10-kubic-retired/
[4] https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-kubernetes...
[5] https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-kubernetes...
[6] https://review.opendev.org/c/zuul/zuul-operator/+/863948/3/.zuul.yaml
[7] https://minikube.sigs.k8s.io/docs/drivers/docker/
[8] https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s#1-over...
Thanks Ian!

The situation with k8s under dockerd has continued to deteriorate to the point where we have now removed testing of the "docker" runtime from the ensure-kubernetes role and are switching Zuul project jobs (e.g. nodepool, operator, etc.) to use microk8s for testing. As a result, I believe at a minimum we should update the default for that role to use "crio" instead of dockerd, since that path still has testing, and declare the (apparently not working and certainly not tested) "docker" path deprecated. That is implemented in this change:

https://review.opendev.org/869411

Ian suggests above that we should deprecate the minikube path entirely (both docker and crio). I think that would be okay too; I'm probably just a bit more conservative, since it's difficult for us to tell who might be using what.

If you are interested in helping to fix and/or maintain minikube + docker or crio, please chime in. Or, if folks agree that we should pro-actively drop minikube entirely, let's update 869411 to do that. If I don't hear anything further, I'll proceed with the conservative route in 869411 and send a 2-week notice about the default change in a few days.

-Jim
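For context, the change in 869411 only moves the default; a job that wants to keep today's behaviour can select the runtime explicitly. A sketch, assuming the role's existing kubernetes_runtime variable is the knob in question:

    # Sketch: pin the runtime explicitly so the default flip to "crio"
    # does not change the job. "kubernetes_runtime" is assumed here to
    # be the role's runtime-selection variable.
    - hosts: all
      vars:
        kubernetes_runtime: docker
      roles:
        - ensure-kubernetes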