Team Meeting Agenda for September 23, 2025
We will meet on September 23, 2025 at 19:00 UTC in #opendev-meeting, with the following agenda:

== Agenda for next meeting ==

* Announcements
** clarkb will be out Thursday the 25th.
* Actions from last meeting
* Specs Review
* Topics
** Gerrit 3.11 Upgrade Planning (clarkb 20250401)
*** https://www.gerritcodereview.com/3.11.html
*** Please check this for any concerns with the way we use Gerrit.
*** New held nodes for Gerrit 3.10.8 and 3.11.5 are here:
**** https://zuul.opendev.org/t/openstack/build/54f6629a3041466ca2b1cc6bf17886c4
**** https://zuul.opendev.org/t/openstack/build/c9051c435bf7414b986c37256f71538e
*** There is discussion about replication and reindexing bugs in 3.11.4/3.12.1 that don't affect us yet because we are on 3.10. We should try to reproduce these problems to see if they affect us.
**** clarkb tested this with held nodes on admittedly trivial test data sets but was unable to reproduce the problems.
*** Shutting down Gerrit can race the indexing of new changes. If this happens we can have new changes created that are not indexed, leading to multiple changes with the same Change-Id later. To mitigate this we can/should reindex changes on Gerrit startup (a sketch follows the agenda).
*** Gerrit shutdowns remain plagued by large h2 cache files. It seems removing our long timeout was not enough to prevent shutdowns from timing out.
*** https://etherpad.opendev.org/p/gerrit-upgrade-3.11 Planning document for the eventual upgrade
*** We will need to select a date once we know more about the upgrade itself.
** Upgrading Old Servers (clarkb 20230627)
*** https://etherpad.opendev.org/p/opendev-bionic-server-upgrades
*** https://etherpad.opendev.org/p/opendev-focal-server-upgrades
*** https://etherpad.opendev.org/p/opendev-server-replacement-sprint
**** wiki.openstack.org: https://etherpad.opendev.org/p/opendev-mediawiki-upgrade
**** tonyb looking at cacti after wiki
*** Next on the list are the graphite and backup servers.
*** We can probably spin up new backup servers alongside the old ones, migrate the old volumes off the old servers to the new ones, and finally delete the old servers. We just need to double check the borg version support matrix and what adding new backup servers will do to our backup cron job setups.
*** Remember to use launch-node's --config-drive flag when booting new Noble nodes in Rax Classic.
** AFS mirror content cleanup (clarkb 20250916)
*** Debian stretch, bullseye backports, and openEuler content has been cleaned up.
*** Can we automate cleanups for reprepro content? In other words, are the cleanup steps cheap enough in the noop case to run as part of our regular reprepro update script?
*** Additional volumes to consider cleaning up include the ceph, docker, and puppetlabs package mirrors, as well as our python wheel caches.
*** CentOS 9 Stream grows at a rate that requires regular quota bumps. Is there anything we can do about that?
*** With these cleanups and the server upgrades completed we can probably start to consider adding new mirrors again.
** Lists Server Slowness (clarkb 20250909)
*** Mailman's data is now backed by a dedicated volume capable of many more IOPS than the root disk. Let us know if you see new occurrences of unexpected slowness.
** Deleting ze11 (clarkb 20250923)
*** The ze11 zuul executor has been shut down for some time due to network slowness cloning the nova repository.
*** We have a 10 minute timeout for that clone; most nodes finish in ~4 minutes, but ze11 takes about 12-13 minutes (a timing sketch follows the agenda).
*** This slowness may impact job runtimes as well.
*** At this point 11 or even 10 executors are probably sufficient for our needs, so the simplest solution here may be to delete ze11 entirely (and ze12 if we find its existence confusing).
** Zuul Launcher Updates (clarkb 20250923)
*** We've run into weirdness with rax flex clouds booting instances with two network interfaces when networks are configured via the zuul-providers config. This does not occur when networks are configured via clouds.yaml instead. This will need debugging.
*** All existing nodepool nodes have been deleted out of all cloud regions except for rax-ord. There are a number of nodepool instances there in an ERROR state that we cannot delete ourselves.
*** I have sent email to Rackspace asking them to clean those nodes up, as I believe they do count against our quota.
*** We have also disabled rax-dfw as it started having API errors similar to what we saw in IAD and ORD previously. My email included questions about this situation too.
** Moving OpenDev Synchronous Communication to Matrix (clarkb 20250520)
*** The spec (954826) has merged.
*** clarkb will start working on this as soon as time permits.
** Pre PTG Planning (clarkb 20250624)
*** https://etherpad.opendev.org/p/opendev-preptg-october-2025 Planning is happening in this document
*** clarkb has added a number of topic ideas. Please add more.
*** Times: Tuesday October 7 1800-2000 UTC, Wednesday October 8 1500-1700 UTC, Thursday October 9 1500-1700 UTC
*** The Tuesday time block would effectively replace our weekly IRC meeting.
** Etherpad 2.5.0 Upgrade (clarkb 20250805)
*** https://github.com/ether/etherpad-lite/blob/v2.5.0/CHANGELOG.md
*** This release is reported to have fixed our CSS problems.
*** https://review.opendev.org/c/opendev/system-config/+/956593/
*** 104.130.127.119 is a held node for testing. You need to edit /etc/hosts to point etherpad.opendev.org at that IP (an example entry follows the agenda).
* Open discussion
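A minimal sketch of the reindex-on-startup mitigation mentioned under the Gerrit 3.11 topic: run Gerrit's offline reindex of the changes index before starting the service, so changes created just before a shutdown are not left unindexed. The site path and the start command here are assumptions for illustration, not a description of our actual deployment.

  #!/usr/bin/env python3
  # Sketch only: offline-reindex Gerrit's changes index, then start Gerrit.
  import subprocess

  GERRIT_SITE = "/home/gerrit2/review_site"  # assumption: not our real path

  # Gerrit's documented offline reindex program; --index limits the run to
  # the changes index rather than rebuilding every index.
  subprocess.run(
      ["java", "-jar", f"{GERRIT_SITE}/bin/gerrit.war",
       "reindex", "--index", "changes", "-d", GERRIT_SITE],
      check=True,
  )

  # Start Gerrit only after the reindex completes successfully.
  # Assumption: a docker compose managed service named "gerrit".
  subprocess.run(["docker", "compose", "up", "-d", "gerrit"], check=True)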
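To make the ze11 comparison concrete, here is a hedged sketch of the measurement: time a full clone of openstack/nova under the same 10 minute ceiling the executors use. Only the repository URL and the timeout come from the agenda item; the rest is illustrative.

  #!/usr/bin/env python3
  # Sketch: a healthy executor finishes this clone in ~4 minutes; ze11 was
  # taking 12-13 minutes and would trip the timeout below.
  import subprocess
  import tempfile
  import time

  start = time.monotonic()
  with tempfile.TemporaryDirectory() as tmp:
      subprocess.run(
          ["git", "clone", "https://opendev.org/openstack/nova", tmp],
          check=True,
          timeout=600,  # the 10 minute clone timeout mentioned above
      )
  print(f"clone took {time.monotonic() - start:.0f}s")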
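For the Etherpad held node, testing means overriding DNS locally; using the IP from the agenda item, the /etc/hosts entry would look like:

  104.130.127.119 etherpad.opendev.org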