Hello,
The ensure-kubernetes role in zuul-jobs sets up a k8s environment on
the test node so that we can do some API testing against it and run
small containers. I think it's fair to say this isn't aimed at being
a production installation, but rather the minimal parts of k8s that
basic functional testing can validate against.
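To give a concrete idea of the scale I mean, "basic functional
testing" here is roughly a validation playbook along the lines of the
sketch below (purely illustrative, not something the role ships; the
pod name and image are made up):

  - hosts: all
    tasks:
      # Prove the API answers and the node has registered
      - name: Wait for the node to be Ready
        command: kubectl wait --for=condition=Ready node --all --timeout=120s

      # Prove we can actually run a small container to completion
      - name: Run a short-lived test pod
        command: kubectl run test-pod --image=busybox --restart=Never --command -- echo ok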
Currently the ensure-kubernetes role deploys k8s via minikube on
bionic/focal/jammy with a choice of two backends: cri-o and docker.
I worry about this whole toolchain. It took some effort to get
it working on Jammy, and it's not very elegant. Some of the
problems:
- kubectl installs from an apt repo whose sources entry is
"https://apt.kubernetes.io/ kubernetes-xenial main" [1]. It does seem
to get updates, and maybe "xenial" just refers to the lowest
supported release; I don't know, but it's not reassuring.
- minikube isn't packaged and we download a blob directly. This is
probably neither here nor there, but it is less appealing than using
something from a distro.
- everything is written to use the minikube "none" driver, which
avoids having to worry about nesting VMs.
- minikube with the "none" driver only supports a CRI compatible
backend and has a lot of caveats [2]
- cri-o is CRI compatible, but is not packaged upstream for Ubuntu.
We rely on packaging of the tools from the SUSE kubic program, which
has now been retired and is in a sort of best-effort community
maintenance [3]. What prompted me to look at this was an expired GPG
key in that project which broke testing. This was fixed, and of
course bugs happen to everyone, but we are 100% dependent on this
repo (I could not get the docker backend to work without some of the
runtime tools from it, so it is not completely independent).
- docker is packaged externally, but widely used (by OpenDev in
production, even). But to use it with minikube, we need to run a CRI
API -> dockerd shim that we download as a binary blob [4]
- With all these disparate parts, we get conflicts. For example,
bits of podman are packaged on Jammy, so now
"containernetworking-plugins" comes from the distro, which of course
installs things in a different location to the kubic versions, which
means we have to monkey-patch over the top of the patching that
minikube does [5].
- I don't think we actually got any of this working for Focal; we just
had jobs on bionic and skipped straight to jammy [6]. So we're not
keeping up.
I think that's probably evidence enough that setting up a test k8s
environment with minikube + <cri-o|docker> on Ubuntu is not very
compelling. I've personally spent much more time debugging all this
than anything to do with actual k8s support in tools.
minikube seems to have a docker driver now [7], which I'm sure did not
exist when any of this was written and might suit us as it says it
does not need nested virt. However, I would propose that we are not
making good use of our limited resources by continuing along this
path.
Ubuntu's preferred method for deploying k8s is via microk8s, which
they package as snap images [8]. I am not a huge fan of snap images
for distribution, but I am a fan of not trying to maintain a bunch of
parts that don't really want to work together.
I have written changes in
https://review.opendev.org/q/topic:ensure-kubernetes-microk8s
to set up microk8s in ensure-kubernetes, and to make it so that, when
using use-buildset-repository with microk8s, k8s will pull through
the buildset repository. I have converted the nodepool job to use
this.
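For a rough idea of the shape this takes, a job's pre-run playbook
ends up looking something like the sketch below (the variable name is
illustrative only; see the changes on the topic above for the role's
actual interface):

  - hosts: all
    roles:
      # Hypothetical variable name; the real knob is defined in the
      # ensure-kubernetes-microk8s changes linked above
      - role: ensure-kubernetes
        vars:
          ensure_kubernetes_type: microk8s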
What I would propose is that we deprecate the other paths in
ensure-kubernetes and focus our efforts on microk8s on jammy (and
future distros going forward). This role is not heavily used, and I
don't see our time well spent supporting multiple installation
methods across multiple releases (and it's never been cross-distro --
which I also think is fine).
I see this having a greater role in the future as part of work such
as the nodepool->zuul conversion, where it may get a lot more
exposure.
Thoughts are welcome :)
-i
[1] https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/insta…
[2] https://minikube.sigs.k8s.io/docs/drivers/none/
[3] https://kubic.opensuse.org/blog/2022-06-10-kubic-retired/
[4] https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-kubernete…
[5] https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-kubernete…
[6] https://review.opendev.org/c/zuul/zuul-operator/+/863948/3/.zuul.yaml
[7] https://minikube.sigs.k8s.io/docs/drivers/docker/
[8] https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s#1-ove…
Hi,
The position of Zuul Project Lead is an elected position described in
our governance document[1].
I currently hold the position, and would be happy to continue in that
position for another year.
If elected, I plan to continue my current hands-off approach. I believe
that all of the Zuul Maintainers are empowered to collectively decide on
and execute our vision for the project. My purpose is to help
articulate that vision when necessary, drive the process of attaining
consensus, and be a point of contact and representative for the project
in circumstances where a single person is needed (but in those cases, I
promise to share as much of that work as possible with the rest of the
team).
I have uploaded a change[2] to update the expiration date of my term.
If any other Zuul Maintainers would like to nominate themselves for the
position, please upload a similar change and send a reply to this
message to this list.
The nomination period will close on January 16, 2023. If there are
other candidates, we will hold a Condorcet election at that time.
-Jim
[1] https://zuul-ci.org/docs/zuul/governance.html#zuul-project-lead
[2] https://review.opendev.org/869609
This is a reminder that the Diversity and Inclusion WG will be meeting
tomorrow at 14:00 UTC in the #openinfra-diversity channel on OFTC. We hope
members of all OpenInfra projects join us as we continue working on
planning for the OpenInfra Summit as well as Foundation-wide surveys.
Thanks,
Amy (spotz)
0 - https://etherpad.opendev.org/p/diversity-wg-agenda
Hello Everyone!
The next PTG on the calendar will be our usual virtual format and it will
take place March 27-31. The project signup survey is already ready to go!
If your team is interested in participating, sign them up here[1]!
Registration is also open now[2].
We have also heard many requests from the contributor community to
bring back in-person events, and also heard that with global travel
reduced, it would be better travel-wise for the community at large to
combine it with the OpenInfra Summit[3]. So, an in-person PTG will
take place Wednesday and Thursday, June 14-15. OpenInfra Summit
registration will be required to participate, and contributor pricing
and travel support[4] will be available.
We will also be planning another virtual PTG in the latter half of the
year. The exact date is still being finalized.
This will give you three opportunities to get your team together in 2023:
two virtual and one in-person.
- Virtual: March 27-31
- In-person: June 14-15, collocated with the OpenInfra Summit
- Virtual: TBD, but second half of the year
As with any project in our community, you are free to take advantage
of all or some of the PTGs this coming year. I, for one, hope to see
you and your project at as many as make sense for your team!
To make sure we are as inclusive as possible, I will continue to
encourage team moderators to write summaries of their discussions so
that we can gather and promote them after the events as well.
Stay tuned for more info as we get closer, but the team signup survey[1]
has opened for the first virtual PTG and is ready for you!
-Kendall (diablo_rojo)
[1] https://openinfrafoundation.formstack.com/forms/march2023_vptg_survey
[2] https://openinfra-ptg.eventbrite.com
[3] https://openinfra.dev/summit/vancouver-2023
[4] https://openinfrafoundation.formstack.com/forms/openinfra_tsp
Hello,
At the end of every year the Foundation puts together an Annual Report. Included in this report is a section for each of the Foundation's top level projects. I have been asked to ensure the Zuul portion of the report gets written. I have started with a draft here: https://etherpad.opendev.org/p/2022-zuul-annual-report and would love your input/feedback/content if you have time to help.
I want to make sure that this document reflects the Zuul project as a whole and not just my personal involvement with Zuul. In the etherpad I've put the 2022 report draft at the top, then included ideas for content below that as well as the 2021 report for some historical context. Please feel free to add ideas to the content list or edit the report text directly.
Clark
Hi!
The OpenDev team has upgraded the software running on lists.zuul-ci.org.
I'm sending this email partly to let folks know about that if they
aren't aware. There's a new interface for managing options and
interacting with the list, and if you create an account, that account
will work for all of the OpenDev lists (once the rest are moved over).
You can read more about it here:
https://lists.opendev.org/archives/list/service-announce@lists.opendev.org/…
Here's a link to the site: https://lists.zuul-ci.org/
I'm also sending this email just to make sure everything is still
working. ;)
Finally, this seems like a great time to remind folks about the Zuul
user survey -- please fill it out if you haven't already!
https://www.surveymonkey.com/r/K2B2MWL
Thanks,
Jim
Hi,
I just wanted to share a short video I made showing some of the new
features in Zuul that work with Gerrit's submitWholeTopic setting to
support testing and merging simultaneous dependencies. Take a look if
you haven't seen the new features.
https://www.youtube.com/watch?v=B64RbNwkIa4
-Jim
Hi Team,
We at the Western Digital IT team are using Zuul version R4.1.0.
We have come across an issue whereby gearman becomes unresponsive while the scheduler service itself shows no problems.
We are using the default gearman server that comes bundled with the zuul scheduler.
We have the scheduler and executor/merger services running on separate VMs.
We have come across the issue twice in three months' time. Restarting the zuul-scheduler service fixes the issue and zuul works as usual.
Please let us know if this is a known issue and if you have any solution for it.
We will be happy to provide additional information if required.
Gearman log:
2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet: <gear.Packet 0x7f58146fb860 type: UNKNOWN>
2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet: <gear.Packet 0x7f58146d87b8 type: UNKNOWN>
2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet: <gear.Packet 0x7f58146fb860 type: UNKNOWN>
2022-10-27 03:21:22,155 ERROR gear.Server: Received unknown packet: <gear.Packet 0x7f58146d87b8 type: UNKNOWN>
We were able to telnet to the gearman port on the scheduler host while the issue was occurring:
[root@uls-op-zuul4 zuul]# telnet 127.0.0.1 4730
Trying 127.0.0.1...
Connected to 127.0.0.1.
The zuul web service becomes unresponsive with the following error in the log:
2022-10-27 05:27:22,387 DEBUG zuul.RPCClient: Submitting job zuul:tenant_list with data {}
2022-10-27 05:32:22,388 ERROR gear.Client.b'Zuul Web Server': Connection <gear.Connection 0x7f9f9a14be80 host: uls-op-zuul4.wdc.com port: 4730> timed out waiting for a response to a submit job request: <gear.Job 0x7f9f88730fd0 handle: None name: zuul:tenant_list unique: 1666848442.3878057>
We can see the following error in the scheduler log even though the scheduler service is up and running:
2022-10-27 06:35:16,589 ERROR gear.Client.b'Zuul Merge Client': Connection <gear.Connection 0x7f58104de128 host: uls-op-zuul4.wdc.com port: 4730> timed out waiting for a response to a submit job request: <gear.Job 0x7f578d950e80 handle: None name: merger:merge unique: 29be9ce958ef4d19b0e75e274c6dd741>
Regards,
Mithun Mistry
IT Engineering, Western Digital
Hello Everyone!
Congratulations on a great virtual PTG last week :)
The OpenInfra Foundation is hosting the OpenInfra Summit in Vancouver [1],
June 13-15, 2023.
We are trying to determine the level of interest among contributors
to OpenInfra projects in attending a co-located PTG in Vancouver.
*The exact format and dates are still being determined.* At this
time, we are evaluating the level of interest from both an attendee
and an employer perspective.
Please complete the following poll so we can measure the level of interest
and plan accordingly. Future updates will be distributed to the project
mailing lists as well as previous PTG attendees.
Poll: https://openinfrafoundation.formstack.com/forms/openinfra_ptg_2023
As a reminder, we are also gathering feedback for the virtual PTG here:
https://etherpad.opendev.org/p/Oct2022_PTGFeedback
-Kendall Nelson (diablo_rojo)
[1] https://openinfra.dev/summit/