Hello Everyone :)
We want to get your input on tooling for the PTG. We have a general idea
of the requirements, but wanted to get the community's input, i.e. make
sure we aren't missing any.
Here is the list of requirements we have right now[1].
Also, if anyone has suggestions of tools that meet these criteria, we would
love to hear them! We are gathering and vetting options for the event and
want to make sure nothing gets overlooked.
-the Kendalls (diablo_rojo & wendallkaters)
[1] https://etherpad.opendev.org/p/virt-ptg-requirements
Hello!
==Virtual PTG Signup==
Wanted to take a second to remind teams about the process for signing up
for virtual PTG slots.
0. Look at the slots available and talk to your contributors and team
members to find out the best times to sign up for.
1. Sign up in the ethercalc [1] for the time slots your team needs. You can
sign up for up to 4 hours/day and 16 hours/week (restrictions in place
until *April 28th at 7:00 UTC*). If you are a SIG or WG, try to keep your
conversations at the start of the week wherever possible. Vertical teams,
try to focus your conversations towards the latter part of the week
(Wednesday-Friday).
*AND*
2. Fill out the survey [2] to give us the rest of the team information we
need to help facilitate the event.
We NEED you to do both steps! I can see a number of teams that have signed
up for time but haven't filled out the survey, so please do so ASAP.
There are still a lot of slots left!
Original thread[3].
==Registration==
After a lot of discussion, we have decided to have registration. It is
free, but it's still extremely important that you register: this is
primarily so we have a way to contact attendees with information about the
event as we figure out the rest of the details. Registration will go live
next week.
-the Kendalls (diablo_rojo & wendallkaters)
[1] Ethercalc: https://ethercalc.openstack.org/126u8ek25noy
[2] Survey:
https://openstackfoundation.formstack.com/forms/june2020_virtual_ptg_survey
[3] PTG Layout & Signup Thread:
http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014126.ht…
Greetings!
I am sure you have all been anxiously awaiting more details on how we are
going to virtualize the PTG :) After the community meetings we had last
week/the week before to understand the key challenges and ideate on
solutions, we have a plan!
The virtual event will be held from Monday June 1 to Friday June 5. We set
up available times in three time windows, where each window is a crossover
time between at least two regions (Window A - Americas/Europe, Window B -
Europe/APAC, and Window C - APAC/Americas).
Here is an ethercalc[1]. We ask that the PTL/SIG Chair/Team lead sign up
for time to hold their team's discussions, following four rules/guidelines:
1. Cross project discussions (i.e. SIGs and horizontal teams): try to
schedule yourselves towards the start of the week so that any discussions
that might shape those of vertical teams can happen first.
2. Vertical teams (ex. Nova, Cyborg, Manila) should wait till April 6th at
7:00 UTC to start signing up for slots to help prioritize the scheduling of
cross project conversations.
3. No team (SIG, horizontal project, vertical project, etc.) can sign up
for more than 4 hours per UTC day, to help keep participants actively
engaged.
4. No team (SIG, horizontal project, vertical project, etc.) can sign up
for more than 16 hours across all time slots, to avoid burning out our
contributors and to enable participation in multiple teams' discussions.
Once every team has had a chance to sign up (let's say two weeks from
today, April 28th at 7:00 UTC), it will be possible to sign up for more
time if that makes sense.
We want to encourage ANY team to sign up, even those that had originally
said no to the initial outreach when the PTG was supposed to be in person.
The more the merrier!
Once you have signed up for time slots, please also fill out this survey[2]
with your team information. We need this additional information to help
shape other decisions down the road and coordinate other details closer to
the event.
If you have any issues with signing up your team, due to conflicts or
otherwise, please let me know! While we are trying to empower you to make
your own decisions as to when you meet and for how long (after all, you
know your needs and your teams' timezones better than we do), we are here
to help!
Continue to check back for updates at openstack.org/ptg.
-the Kendalls (diablo_rojo & wendallkaters)
[1] https://ethercalc.openstack.org/126u8ek25noy
[2]
https://openstackfoundation.formstack.com/forms/june2020_virtual_ptg_survey
Hi,
I'm looking at possible ways of showing how a change in Gerrit affects code
coverage. In the end, I would like to show this info for two kinds of
projects:
- a Python project that is already using the stock tox-py* jobs,
- our custom C++ jobs, which just call a simple playbook that includes a
role which uses Ansible's `command` for execution [1] (and we're only
using the existing jobs for basic preparation and log upload).
Note that "difference of coverage" is something else than "coverage of the
actual diff". Coverage of the diff can be generated by e.g. diff_cover [2].
It shows which lines of the change (patch) were executed. This won't catch
this example where all diff lines are covered, but suddenly there's nothing
left to execute other_func():
def my_func():
    return 666

def other_func():
    return 123

def test_foo():
    assert(my_func() == 666)
-   assert(other_func() == 123)
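For contrast, "coverage of the diff" as diff_cover reports it is basically
the regular coverage data filtered down to the changed lines, roughly like
this (the branch name and report file name are only examples):

  # run the test suite with coverage enabled, then report on the diff only
  coverage run -m pytest && coverage xml
  diff-cover coverage.xml --compare-branch origin/master --html-report diff-cover.html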
It looks to me like pycobertura [3] can catch this by comparing "previous"
and "current" coverage. The catch is that this comparison requires building
and running the test suite twice -- once in the usual manner, and then once
more for the "previous version". How to determine the previous version? It
might not be HEAD~1 because of Zuul's merges into the target branch. Is
there a Zuul ref that I can rely on from the context of a job? I think I
can use something like HEAD^1 plus a merge of that into the target branch,
but it would be awesome if I could get it from Zuul and not have to
duplicate its merge settings in my job.
Here's how a simple ad-hoc job handling my use case for our "custom C++
jobs" might look:

  # original content goes here
  ...

  # build and test the "previous version": HEAD^1 re-merged into the
  # target branch
  git-new-workdir ...
  pushd workdir-of-previous-version
  git reset --hard HEAD^1
  git merge {{ zuul.branch }}
  make all
  make test
  popd

  # compare the two reports (previous first, current second)
  pycobertura diff \
      workdir-of-previous-version/coverage.xml coverage.xml \
      --source1 workdir-of-previous-version --source2 .
However, this is something that is totally non-reusable. Given that I'm
already using standard tox-* jobs for our Python jobs, I would prefer to
create a reusable job for coverage changes. Ideally, this job would run
after my "main" build job, so I think I'm looking for a child job [4]. This
job can look at two different artifacts, "previous-coverage" and
"current-coverage", run a comparison, generate a nice HTML report, and
(possibly) fail if a certain threshold is not met.
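In rough terms, the interesting part of that child job could be as small
as this (the directory names are placeholders; if I read the pycobertura
docs right, its exit status already signals whether coverage got worse,
which could double as the threshold check):

  # compare the two coverage artifacts fetched by the child job
  pycobertura diff \
      previous-coverage/coverage.xml current-coverage/coverage.xml \
      --source1 previous-src --source2 current-src
  # a non-zero exit status here means coverage regressed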
However, I am not sure that current Zuul artifacts fully get me there. If
this is a "first change in the check pipeline with no parents", i.e. the
head of the queue, there's no previous change to fetch results from, so I
have to consult some external artifact provider for the "latest merged"
version of the coverage data. This brings some requirements:
- Use of the "gate" pipeline. When there's no "gate", just "check",
there's no guarantee that the current state of my destination branch was
built as-is. (Possible workaround: a fast-forward-only merge policy in
Gerrit and a "promote" pipeline which picks the latest jobs results from
"check", see [5] and [6] for how it might look in a slightly different use
case...)
- A custom artifact registry, or a well-known naming scheme for
artifacts, or Zuul keeping artifact locations persistent when a change
merges. Since I don't want to maintain ad-hoc code, I don't want to run my
own artifact registry per site. Let the gate job upload artifacts with
storage URLs such as:
/t/{{ zuul.tenant }}/artifacts/{{ zuul.project }}/{{ zuul.newrev }}
Suppose my "check" job has no preceding items (no Depends-On, and it's
based on something that has been already merged into the target branch).
The job.artifacts[] variable is therefore empty, so I have to infer my
artifact location. The easieast way is just looking at the current SHA1 of
the destination branch, and wget that. Alternative: let Zuul pass artifact
location from the "promote" pipeline to "gate" jobs, somehow.
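To make the SHA1-based fallback concrete, I imagine something like this in
the child job (the artifact host and the coverage file name are made up;
the URL layout is the hypothetical scheme from above):

  # zuul.artifacts is empty -> fall back to whatever the target branch
  # currently points at
  branch_tip=$(git rev-parse "origin/{{ zuul.branch }}")
  wget -O previous-coverage.xml \
      "https://artifacts.example.org/t/{{ zuul.tenant }}/artifacts/{{ zuul.project }}/${branch_tip}/coverage.xml"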
That's not all. Suppose there are two proposed changes, where #2 depends
on #1. The child job of #2 depends on the build job of #2, and is correctly
executed only after #2's build job succeeds. However, #2's child job also
has to wait for artifacts from #1's build job. Can I hold the child job of
the second change until not only its own parent job completes, but also the
build job of the preceding change in the queue?
With kind regards,
Jan
[1]
https://gerrit.cesnet.cz/plugins/gitiles/ci/zuul-jobs-cesnet/+/refs/heads/m…
[2] https://github.com/Bachmann1234/diff_cover
[3] https://pypi.org/project/pycobertura/#command-diff
[4] https://zuul-ci.org/docs/zuul/reference/jobs.html#var-zuul.child_jobs
[5]
https://gerrit.cesnet.cz/plugins/gitiles/ci/zuul-jobs-cesnet/+/refs/heads/m…
[6]
https://gerrit.cesnet.cz/plugins/gitiles/CzechLight/netconf-cli/+/refs/head…
Hi,
Tomorrow, April 10, 2020, the OpenDev team will be performing minor
updates to two services.
* Gerrit restart for https://review.opendev.org/ at 16:00 UTC.
This should be a momentary outage to restart the service in a
container.
We will maintain the same version in the same configuration; this is a
preparatory step in order to perform future version upgrades more
easily.
Other than a short outage of about a minute, there should be no
service impact or visible changes, but since the environment is
changing, we wanted to announce this in case any problems are
detected.
* Etherpad migration and domain change at 17:00 UTC.
We will migrate the data currently hosted on etherpad.openstack.org to
a new server, also running in a container, on etherpad.opendev.org.
We will set up redirects so that on completion of the migration, all
etherpad.openstack.org URLs will redirect to the same pages on
etherpad.opendev.org. https://etherpad.opendev.org/ will become the
new canonical service URL.
This also includes an upgrade to the latest release of Etherpad Lite
(1.8.0).
This involves a database migration as well. The service will be
unavailable for the duration of this migration (expected to be about
30 minutes).
Please report any issues in #opendev on Freenode or to the
service-discuss@lists.opendev.org mailing list.
Note that this message is being sent to service-announce and several
other lists. Future announcements of this type will only be sent to
service-announce, so please sign up[1] for that list if you haven't
already.
[1] http://lists.opendev.org/cgi-bin/mailman/listinfo/service-announce
-Jim
Hi,
Related to that thread we were pulling on in IRC this morning, I think
there's some refactoring we can do for setting up git repos on remote
nodes. We currently have the following roles:
* mirror-workspace-git-repos
  Uses git over ssh or kubectl to push repos onto remote VMs or containers.
  No support for cached repos. Trusted playbooks only.

* prepare-workspace
  Starts the Zuul console daemon.
  Uses synchronize to push the repos onto remote VMs.
  No support for cached repos. Can run in untrusted playbooks.

* prepare-workspace-git
  Clones cached repos into place.
  Includes the mirror-workspace-git-repos role to use git over ssh or
  kubectl to push repos onto remote VMs or containers.
  Trusted playbooks only.
  OpenDev is using this role.

* prepare-workspace-openshift
  Uses "oc rsync" to push repos onto remote containers.
  No support for cached repos. Can run in untrusted playbooks.
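For reference, the difference mostly comes down to the transport each
family of roles uses. Very roughly, and with all hostnames, pod names and
paths below invented for illustration:

  # git-based roles: start from a node-local cache of the repo, then let
  # the executor push the speculatively merged state over ssh (or kubectl)
  ssh node 'git clone /opt/git/opendev.org/zuul/zuul-jobs ~/src/opendev.org/zuul/zuul-jobs'

  # rsync-based roles: copy the already prepared repos from the executor
  rsync -a ~/src/opendev.org/zuul/zuul-jobs/ node:src/opendev.org/zuul/zuul-jobs/
  oc rsync ~/src/opendev.org/zuul/zuul-jobs/ builder-pod:/home/zuul/src/opendev.org/zuul/zuul-jobs/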
It seems to me that if we set aside the requirement to be able to run in
untrusted playbooks, then we effectively already have a single role that
can perform the necessary tasks in all conditions. My understanding is
that the untrusted forms are used to support a split base job
construction, where a tenant's default parent job is actually defined in
an untrusted repo and performs many setup tasks in that job, inheriting
very few (or no) tasks from a parent in a trusted repo. That makes it
easier to test changes to the default parent job speculatively.
The question I have for the community is whether, at this point, that is
still worth the trade-off in complexity of using rsync to set up git
repos.
We now have much better testing of changes to roles used in base jobs,
including a test which currently covers prepare-workspace-git[1]. Also,
these roles change a lot less often than they used to.
If we're okay dropping that requirement, I'd like to suggest that we
incorporate 'mirror-workspace-git-repos' into 'prepare-workspace-git',
and then deprecate all the other roles, and eventually remove them.
That would mean we could have a very clear suggested base job, and we'd
all be using the same role.
What do folks think about that?
[1] https://opendev.org/zuul/zuul-jobs/src/branch/master/zuul-tests.d/general-r…
-Jim
Hi,
Earlier today in IRC several of us started pulling on a thread related
to base pre- and post-run playbooks and containers. In short, we use
the synchronize module a lot, and it only works on VMs. The
fetch-output role is designed to address this partially, by reducing the
number of independent synchronize tasks, but in order to synchronize
files from a remote container, it needs to run "oc rsync" on the
executor, which means that trick will only work in a trusted playbook.
That may be okay for that role, but it still leaves any other role or
playbook that needs to synchronize something without an important tool.
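For those who haven't used it, "oc rsync" is essentially rsync with a pod
as one endpoint, which is why it has to be driven from the executor side;
something like this (pod name and paths invented):

  # pull collected output out of a remote container, roughly what
  # fetch-output needs to do when the node is a pod
  oc rsync builder-pod:/home/zuul/zuul-output/logs/ ./logs/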
We came up with a plan to improve the situation which should fit within
the following constraints:
* Not a Zuul-specific solution (playbooks can still run outside of
Zuul).
* Usable in untrusted playbooks.
* Makes VM or container transparent to the role (works the same on
both).
The general idea is to make a new Ansible module using the collections
framework which behaves the same as the current 'synchronize' module,
but transparently supports containers by using "oc rsync". This module
will be available automatically in Zuul, and users will be able to
install the module on their own, external to Zuul, using the Ansible
collections feature.
Here are the steps to achieve this:
1) Create a new module in Zuul which is a fork of synchronize[1] with
Tristan's oc rsync patch[2].
This module will be named "zuul.posix.synchronize" to indicate that
it is related to the upstream "ansible.posix.synchronize" module.
2) Start using that in zuul-jobs roles that use synchronize.
3) Create a new repo named "zuul/ansible-collection-posix" and copy the
module into that.
4) Publish that repo as an Ansible collection.
5) Install that collection in the Zuul Ansible virtualenvs for versions
which support collections and use that in favor of the built-in
module.
6) When all Zuul Ansible versions support collections, remove the
built-in module.
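For users outside of Zuul, steps 4 and 5 should boil down to the standard
collections workflow, e.g. (assuming the collection ends up published
under the name "zuul.posix"):

  # install the collection, then refer to the module by its fully
  # qualified name (zuul.posix.synchronize) in playbooks
  ansible-galaxy collection install zuul.posix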
I don't think it's our job to make *everything* about VMs and containers
transparent in Zuul, but for something so basic as getting data to and
from remote nodes, I think it should be easy to handle both cases, and
will make it easier for us to keep a lot of roles simple.
How does this sound to folks?
[1] https://github.com/ansible-collections/ansible.posix/blob/master/plugins/mo…
[2] https://review.opendev.org/681553
-Jim
P.S. Monty also suggests that we can similarly move the Zuul-specific
modules such as zuul-console and zuul-return into a standalone collections
repo. That seems like a good idea too, as it may aid local development and
testing of playbooks.
Hi everyone,
Thank you to those who attended any of the community meetings last week!
If you missed one of the calls last week, below are links to the
presentation [1], recording in English [2], and recording in Mandarin
[3]. A lot of links were discussed and you can find those in the meeting
section of the notes as well as the chat from the recordings.
We are planning to do this format with project and community updates and
OSF event updates quarterly. Stay tuned for the next meeting, and let me
know if you have any questions in the meantime.
In addition to the quarterly meetings, we would like to hear from the
project communities on topics you would like to cover in community
meetings or have covered. If you have an idea you would like to share,
please drop it in this etherpad [4]. If you see a topic already shared
that you would like to collaborate on, please share your name, email and
IRC nick next to the topic.
Cheers,
Jimmy
[1]
https://docs.google.com/presentation/d/1l05skj_BCfF8fgYWu4n0b1rQmbNhHp8sMeY…
[2]
https://zoom.us/rec/share/7vVXdIvopzxIYbPztF7SVpAKXYnbX6a82iMaqfZfmEl1b0Fqb…
[3]
https://zoom.us/rec/share/zudNaIrQ6mNLR9Lvr2Xxf5QQIL7geaa8hCcf_vsIxEwoHkn0b…
[4] https://etherpad.openstack.org/p/OSF_Community_Meetings
Hello All,
Recently, we've transitioned to using #opendev on Freenode for our synchronous IRC communications. Since then we've spun up a new service-discuss@lists.opendev.org mailing list [1] where we'll plan changes to services, notify of meetings, answer questions about services and usage, and otherwise communicate about OpenDev. service-announce@lists.opendev.org [2] will remain for important announcements. If you're interested in the developer infrastructure we run, please join us on these mailing lists.
We encourage every developer to at least subscribe to the service-announce[2] list. It will be very low traffic, and used only for important announcements with wide impact.
We are also going to start having our weekly team meeting in #opendev-meeting. The time will continue to be 19:00 UTC Tuesdays.
See you there,
Clark
[1] http://lists.opendev.org/cgi-bin/mailman/listinfo/service-discuss
[2] http://lists.opendev.org/cgi-bin/mailman/listinfo/service-announce
Hi,
There was some discussion on IRC yesterday about getting better role name
consistency, more specifically install-$foo vs ensure-$foo.
The idea is to be more consistent when naming roles that make sure some
software is installed on a node.
A question from me was whether there could be any difference between
install-$foo and ensure-$foo; my guess is that there isn't, since they
should be idempotent.
I've pushed a couple of changes with the topic role-naming-consistency
that rename all roles to the ensure-$foo naming standard and should mostly
work by now.
https://review.opendev.org/#/q/status:open+project:zuul/zuul-jobs+branch:master+topic:role-naming-consistency
At the moment the old install-$foo roles are kept and include the new
ensure-$foo roles to make sure nothing breaks.
Regards
Albin