From jim at acmegating.com Fri Apr 9 23:33:51 2021 From: jim at acmegating.com (James E. Blair) Date: Fri, 09 Apr 2021 16:33:51 -0700 Subject: Proposed behavior change on reconfiguration Message-ID: <87zgy7kp7k.fsf@fuligin> Hi, We recently merged some changes which improve the consistency of the git repos in different jobs within the same buildset. While it's always the case that every job in a buildset has the same view of the projects involved in its shared queue, projects outside of that queue (including projects involved in providing Zuul roles, playbooks and job required-projects) have not been synchronized, meaning that if a change merged in one of those projects between the start of two jobs in a buildset, the earlier job would not see the change and the later job would. It seems more natural for Zuul to fix the state of all of the repos involved in all of the jobs of a buildset, and so the change that we recently merged did this. While this seems like a straightforward change, it had an unanticipated effect on reconfiguration. When Zuul performs a reconfiguration, any jobs that are added to a project are added to currently running buildsets and started; likewise, jobs may be removed from buildsets. Any jobs which are modified but have already run will retain their previous results, but jobs which have not yet run will do so with the new configuration. It turns out that the idea of fixing all of the repos involved in a buildset and also being able to dynamically add, remove, and alter jobs are fundamentally at odds. The simplest example of this is a reconfiguration that adds a playbook to the zuul-jobs repo, while a job on an unrelated repo is running. If Zuul has frozen the state of all the repos involved (including zuul-jobs) then the updated version of the job will reference a playbook which will not appear in the previously frozen repo state. In other words, we can freeze both the configuration and the repos, or neither. Earlier in Zuul's development, we felt that the ability to add, remove, or alter jobs while they were running was fairly important. However, Zuul and its usage have evolved considerably since then. I'd like to propose that we continue down the path of freezing both the repo state and the job configuration within a buildset. This means that in the future, if a job is added, removed, or altered during a reconfiguration, it will not take effect for any currently running buildsets, but will take effect for any buildsets started after the reconfiguration (that includes new buildsets created on gate resets). Given the potentially significant change to a long-standing Zuul behavior, I ask that anyone relying on this or who feels that they may be affected by this please let us know. Thanks, Jim From jim at acmegating.com Wed Apr 14 18:36:30 2021 From: jim at acmegating.com (James E. Blair) Date: Wed, 14 Apr 2021 11:36:30 -0700 Subject: Proposed behavior change on reconfiguration In-Reply-To: <87zgy7kp7k.fsf@fuligin> (James E. Blair's message of "Fri, 09 Apr 2021 16:33:51 -0700") References: <87zgy7kp7k.fsf@fuligin> Message-ID: <87im4olnm9.fsf@fuligin> "James E. Blair" writes: > Hi, > > We recently merged some changes which improve the consistency of the git > repos in different jobs within the same buildset.
While it's always the > case that every job in a buildset has the same view of the projects > involved in its shared queue, projects outside of that queue (including > projects involved in providing Zuul roles, playbooks and job > required-projects) have not been synchronized, meaning that if a change > merged in one of those projects between two jobs in a buildset starting, > the earlier job would not see the change and the later job would. > > It seems more natural for Zuul to fix the state of all of the repos > involved in all of the jobs of a buildset, and so the change that we > recently merged did this. While this seems like a straightforward > change, it had an unanticipated effect on reconfiguration. > > When Zuul performs a reconfiguration, any jobs that are added to a > project are added to currently running buildsets and started; likewise, > jobs may be removed from buildsets. Any jobs which are modified but > have already run will retain their previous results, but jobs which have > not yet run will do so with the new configuration. > > It turns out that the idea of fixing all of the repos involved in a > buildset and also being able to dynamically add, remove, and alter jobs > are fundamentally at odds. The simplest example of this is a > reconfiguration that adds a playbook to the zuul-jobs repo, while a job > on an unrelated repo is running If Zuul has frozen the state of all the > repos involved (including zuul-jobs) then the updated version of the job > will reference a playbook which will not appear in the previously frozen > repo state. > > In other words, we can freeze both the configuration and the repos, or > neither. > > Earlier in Zuul's development, we felt that the ability to add, remove, > or alter jobs while they were running was fairly important. However, > Zuul and its usage has evolved considerably since then. > > I'd like to propose that we continue down the path of freezing both the > repo state and the job configuration within a buildset. This means than > in the future, if a job is added, removed, or altered during a > reconfiguration, it will not take effect for any currently running > buildsets, but will take effect for any buildsets started after the > reconfiguration (that includes new buildsets created on gate resets). > > Given the potentially significant change to a long-standing Zuul > behavior, I ask that anyone relying on this or who feels that they may > be affected by this to please let us know. > > Thanks, > > Jim Just in case anyone missed it since I sent it late on a Friday, I wanted to raise this again. :) There is a work in progress change implementing this: https://review.opendev.org/785536 It is only failing tests that I would expect it to fail at this point. I should be able to get it in shape for review soon. -Jim From amy at demarco.com Fri Apr 16 17:07:52 2021 From: amy at demarco.com (Amy Marrich) Date: Fri, 16 Apr 2021 12:07:52 -0500 Subject: [Diversity] D&I WG Office Hours at the PTG Message-ID: The Diversity and Inclusion WG has grabbed our usual PTG slot Monday at 16:00UTC and we will be holding office hours to help any project with Inclusive Naming or other D&I related topics. If you'd like us to attend any of your sessions please let me know. If you've been interested in learning more about the WG please come join us! Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mikeyang at andestech.com Fri Apr 16 18:49:50 2021 From: mikeyang at andestech.com (mikeyang at andestech.com) Date: Fri, 16 Apr 2021 18:49:50 +0000 Subject: Newbie question(s) for eval Message-ID: Hi, I’m doing an eval of Zuul-CI for my team. I’ve previously used Jenkins and internally/purpose-built CI systems. So Ansible is new to me. I’m using the quick-start tutorial so this is all in the Docker containers and images from the tutorial. I’m running this on a Ubuntu bare-metal machine. And please let me know if there is a more appropriate forum to ask this question or get eval support. So what I tried to do was use a simple job from zuul-jobs. I thought `markdownlint` would be straight-forward. Here’s the rabbit hole that I went down. 1. The quick-start tutorial is using `Ubuntu Focal` images for the `node` 2. I added a second `node` using `Ubuntu Groovy` 3. I can run `testjob` and a custom job (playbook with `shell`s) on the Groovy node 4. I tried to run the `markdownlint` Zuul job on the Groovy node but it fails on the `ensure-nodejs` role * It tries to add `https://deb.nodesource.com/node_6.x groovy main` as a repo * This fails because `dists/groovy` only exists in the 10/12/14/15.x repos * I tried many different ways to assign `node_version` to 15, but the job always fails trying to add `node_6.x` From the Ansible documentation, I see that there are many places in precedence to overwrite a role’s default value, but I have not gotten any of these to work. And I’m unclear about how the precedence extends into Zuul’s `job.vars` and `jobs.extra-vars` (are these equivalent to passing on Ansible’s command line?). Thanks, - mike. CONFIDENTIALITY NOTICE: This e-mail (and its attachments) may contain confidential and legally privileged information or information protected from disclosure. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution, or use of the information contained herein is strictly prohibited. In this case, please immediately notify the sender by return e-mail, delete the message (and any accompanying documents) and destroy all printed hard copies. Thank you for your cooperation. Copyright ANDES TECHNOLOGY CORPORATION - All Rights Reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at acmegating.com Fri Apr 16 19:09:35 2021 From: jim at acmegating.com (James E. Blair) Date: Fri, 16 Apr 2021 12:09:35 -0700 Subject: Newbie question(s) for eval In-Reply-To: (mikeyang@andestech.com's message of "Fri, 16 Apr 2021 18:49:50 +0000") References: Message-ID: <87blaekpw0.fsf@fuligin> writes: > 1. The quick-start tutorial is using `Ubuntu Focal` images for the `node` > 2. I added a second `node` using `Ubuntu Groovy` > 3. I can run `testjob` and a custom job (playbook with `shell`s) on the Groovy node > 4. I tried to run the `markdownlint` Zuul job on the Groovy node but it fails on the `ensure-nodejs` role > * It tries to add `https://deb.nodesource.com/node_6.x groovy main` as a repo > * This fails because `dists/groovy` only exists in the 10/12/14/15.x repos > * I tried many different ways to assign `node_version` to 15, but the job always fails trying to add `node_6.x` > > From the Ansible documentation, I see that there are many places in precedence to overwrite a role’s default value, but I have not gotten any of these to work. 
And I’m unclear about how the precedence extends into Zuul’s `job.vars` and `jobs.extra-vars` (are these equivalent to passing on Ansible’s command line?). I think you're on the right track. It should be as simple as just setting what node version you want to use. And in that role, we set the version variable as an Ansible "default" which has the lowest level of precedence, so almost anything you do in Zuul will override it. So it's mostly a matter of how to do it. The really short version of this is to set this on your job: vars: node_version: 15 But I'm going to give a longer answer because there's some cool stuff worth knowing here. First, a direct answer to your variable precedence question: https://zuul-ci.org/docs/zuul/reference/jobs.html#variables Items 1-3 on that list are passed on the command line to Ansible, and items 4-6 are inventory variables. But as I said earlier, all of those override Ansible defaults, so we can use any of them. The most straightforward way to do what you want would, as I said above, be to just add a variable to the job definition. Here are the docs for that: https://zuul-ci.org/docs/zuul/reference/job_def.html#attr-job.vars Note that anything you can add to a job you can also add to a project-template, or the invocation of that job in a project-pipeline definition. Here's an example of OpenStack setting that variable in a version-specific project-template: https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/project-templates.yaml#L2164-L2187 Basically, that's a template that bundles together some jobs for projects that use node 10. And finally, for this particular variable, and others like it -- that is to say, a variable that is really a characteristic of a *project* rather than an individual *job* (i.e., "this *project* is written for node 10", not "I want to use node 10 in this specific job"), you can set the variable at the project level and it will be passed to every job it runs. Here are the docs for that: https://zuul-ci.org/docs/zuul/reference/project_def.html#attr-project.vars And that is actually what we do in the Zuul project itself (since we have some nodejs jobs): https://opendev.org/zuul/zuul/src/branch/master/.zuul.yaml#L271 > And please let me know if there is a more appropriate forum to ask this question or get eval support. This is a great place for questions like this; it helps everyone (new users with similar problems and maintainers who might otherwise have a blind spot for what problems users actually have). -Jim P.S. Thanks to Clark who had some suggestions of additional info to include here. From fungi at yuggoth.org Fri Apr 16 19:14:47 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Apr 2021 19:14:47 +0000 Subject: Newbie question(s) for eval In-Reply-To: References: Message-ID: <20210416191447.q2hr5gzvt3knkadd@yuggoth.org> On 2021-04-16 18:49:50 +0000 (+0000), mikeyang at andestech.com wrote: [...] > And please let me know if there is a more appropriate forum to ask > this question or get eval support. [...] Nope, this is it. Or the #zuul channel on the Freenode IRC network if you enjoy a sometimes more synchronous discussion. > 4. 
I tried to run the `markdownlint` Zuul job on the Groovy > node but it fails on the `ensure-nodejs` role > * It tries to add `https://deb.nodesource.com/node_6.x > groovy main` as a repo > * This fails because `dists/groovy` only exists in the > 10/12/14/15.x repos We could probably address this in a user-friendly way by adding some platform detection logic in the role defaults, but right now it's just always 6 unless overridden: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-nodejs/defaults/main.yaml > * I tried many different ways to assign `node_version` to > 15, but the job always fails trying to add `node_6.x` Some examples of how we set the node_version var can be found in the js-jobs.yaml of Zuul's standard library: https://opendev.org/zuul/zuul-jobs/src/branch/master/zuul.d/js-jobs.yaml > From the Ansible documentation, I see that there are many places > in precedence to overwrite a role’s default value, but I have not > gotten any of these to work. And I’m unclear about how the > precedence extends into Zuul’s `job.vars` and `jobs.extra-vars` > (are these equivalent to passing on Ansible’s command line?). Yes, exactly, also explained here: https://zuul-ci.org/docs/zuul/reference/job_def.html#attr-job.extra-vars Hope that helps, and feel free to post more questions to the list! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mikeyang at andestech.com Fri Apr 16 19:21:19 2021 From: mikeyang at andestech.com (mikeyang at andestech.com) Date: Fri, 16 Apr 2021 19:21:19 +0000 Subject: Newbie question(s) for eval In-Reply-To: <87blaekpw0.fsf@fuligin> References: <87blaekpw0.fsf@fuligin> Message-ID: Thanks for the quick and detailed reply. I swear I have tried several variations of these suggestions to no avail, but I will try again with your specific examples (when I can get back to my computer). Following up to your additional details of how variables work. I had also tried to make Ansible more verbose by running `zuul-executor verbose` (by connecting a bash shell to the `zuul-tutorial-scheduler_1` container) but I did not see any difference in the output in the Zuul web logs. Am I not enabling that correctly or am I looking in the wrong place to see the additional verbosity? - mike. On 4/16/21, 12:09 PM, "corvus at inaugust.com on behalf of James E. Blair" wrote: writes: > 1. The quick-start tutorial is using `Ubuntu Focal` images for the `node` > 2. I added a second `node` using `Ubuntu Groovy` > 3. I can run `testjob` and a custom job (playbook with `shell`s) on the Groovy node > 4. I tried to run the `markdownlint` Zuul job on the Groovy node but it fails on the `ensure-nodejs` role > * It tries to add `https://deb.nodesource.com/node_6.x groovy main` as a repo > * This fails because `dists/groovy` only exists in the 10/12/14/15.x repos > * I tried many different ways to assign `node_version` to 15, but the job always fails trying to add `node_6.x` > > From the Ansible documentation, I see that there are many places in precedence to overwrite a role’s default value, but I have not gotten any of these to work. And I’m unclear about how the precedence extends into Zuul’s `job.vars` and `jobs.extra-vars` (are these equivalent to passing on Ansible’s command line?). I think you're on the right track. It should be as simple as just setting what node version you want to use. 
And in that role, we set the version variable as an Ansible "default" which has the lowest level of precedence, so almost anything you do in Zuul will override it. So it's mostly a matter of how to do it. The really short version of this is to set this on your job: vars: node_version: 15 But I'm going to give a longer answer because there's some cool stuff worth knowing here. First, a direct answer to your variable precedence question: https://zuul-ci.org/docs/zuul/reference/jobs.html#variables Items 1-3 on that list are passed on the command line to Ansible, and items 4-6 are inventory variables. But as I said earlier, all of those override Ansible defaults, so we can use any of them. The most straightforward way to do what you want would, as I said above, be to just add a variable to the job definition. Here are the docs for that: https://zuul-ci.org/docs/zuul/reference/job_def.html#attr-job.vars Note that anything you can add to a job you can also add to a project-template, or the invocation of that job in a project-pipeline definition. Here's an example of OpenStack setting that variable in a version-specific project-template: https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/project-templates.yaml#L2164-L2187 Basically, that's a template that bundles together some jobs for projects that use node 10. And finally, for this particular variable, and others like it -- that is to say, a variable that is really a characteristic of a *project* rather than an individual *job* (i.e., "this *project* is written for node 10", not "I want to use node 10 in this specific job"), you can set the variable at the project level and it will be passed to every job it runs. Here are the docs for that: https://zuul-ci.org/docs/zuul/reference/project_def.html#attr-project.vars And that is actually what we do in the Zuul project itself (since we have some nodejs jobs): https://opendev.org/zuul/zuul/src/branch/master/.zuul.yaml#L271 > And please let me know if there is a more appropriate forum to ask this question or get eval support. This is a great place for questions like this; it helps everyone (new users with similar problems and maintainers who might otherwise have a blind spot for what problems users actually have). -Jim P.S. Thanks to Clark who had some suggestions of additional info to include here. CONFIDENTIALITY NOTICE: This e-mail (and its attachments) may contain confidential and legally privileged information or information protected from disclosure. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution, or use of the information contained herein is strictly prohibited. In this case, please immediately notify the sender by return e-mail, delete the message (and any accompanying documents) and destroy all printed hard copies. Thank you for your cooperation. Copyright ANDES TECHNOLOGY CORPORATION - All Rights Reserved. From jim at acmegating.com Fri Apr 16 19:52:40 2021 From: jim at acmegating.com (James E. Blair) Date: Fri, 16 Apr 2021 12:52:40 -0700 Subject: Newbie question(s) for eval In-Reply-To: (mikeyang@andestech.com's message of "Fri, 16 Apr 2021 19:21:19 +0000") References: <87blaekpw0.fsf@fuligin> Message-ID: <877dl2knw7.fsf@fuligin> writes: > Thanks for the quick and detailed reply. I swear I have tried several > variations of these suggestions to no avail, but I will try again with > your specific examples (when I can get back to my computer). > > Following up to your additional details of how variables work. 
I had > also tried to make Ansible more verbose by running `zuul-executor > verbose` (by connecting a bash shell to the > `zuul-tutorial-scheduler_1` container) but I did not see any > difference in the output in the Zuul web logs. Am I not enabling that > correctly or am I looking in the wrong place to see the additional > verbosity? Oops, that could use more clarity in the docs. You'll find the output in the container logs. And when it says verbose, it means it. Every build has a UUID, you'll be best served by grepping the logs for that uuid. It's in the container logs because the Ansible output with -vvv is more likely to contain sensitive information that users perhaps shouldn't have access to; it's mostly intended as an admin debugging tool. Note that there is quite a lot of information in the build console which is available to all users; more than even appears in the streaming text log. For example, here is the Ansible task that installs node 14 in one of Zuul's builds; you can see all of the module arguments are there for inspection (even though they aren't printed out in the streaming log): https://zuul.opendev.org/t/zuul/build/90087de040e94f32ae92b5d749a884af/console#2/0/11/ubuntu-bionic The other useful user-level debugging tool for a situation like this is the inventory that Zuul provided to Ansible; here's how we can confirm that node_version was set to 14 for this build. https://zuul.opendev.org/t/zuul/build/90087de040e94f32ae92b5d749a884af/log/zuul-info/inventory.yaml#25 -Jim From y2kenny at gmail.com Mon Apr 19 01:16:22 2021 From: y2kenny at gmail.com (Kenny Ho) Date: Sun, 18 Apr 2021 21:16:22 -0400 Subject: Proposed behavior change on reconfiguration In-Reply-To: <87im4olnm9.fsf@fuligin> References: <87zgy7kp7k.fsf@fuligin> <87im4olnm9.fsf@fuligin> Message-ID: I just want to acknowledge that I have read the email but I did not reply because I am not sure if I fully understand the change. On Wed, Apr 14, 2021 at 2:37 PM James E. Blair wrote: > "James E. Blair" writes: > > in the future, if a job is added, removed, or altered during a > > reconfiguration, it will not take effect for any currently running > > buildsets, but will take effect for any buildsets started after the > > reconfiguration (that includes new buildsets created on gate resets). So let say I have a job config called job M stored in repo C that is triggered on changes from repo X, and currently I have a buildset 1 running job M triggered by changes to repo X. So if I made a change to job M and push to repo C on gerrit, nothing change. But when I submit the change to job M in repo C, you are saying the current behaviour is that buildset 1 will run the new job M (M') if job M hasn't started for buildset 1? And the proposed change to Zuul is that in the future, buildset1 will always run job M and only buildset 2 (triggered after M' existed) will run job M'. Is that a correct understanding? Regards, Kenny From jim at acmegating.com Mon Apr 19 02:41:26 2021 From: jim at acmegating.com (James E. Blair) Date: Sun, 18 Apr 2021 19:41:26 -0700 Subject: Proposed behavior change on reconfiguration In-Reply-To: References: <87zgy7kp7k.fsf@fuligin> <87im4olnm9.fsf@fuligin> Message-ID: <08e6ba1f-2867-0e6d-0b9f-721f38c6b148@acmegating.com> On 4/18/21 6:16 PM, Kenny Ho wrote: > I just want to acknowledge that I have read the email but I did not > reply because I am not sure if I fully understand the change. > > On Wed, Apr 14, 2021 at 2:37 PM James E. Blair wrote: >> "James E. 
Blair" writes: >>> in the future, if a job is added, removed, or altered during a >>> reconfiguration, it will not take effect for any currently running >>> buildsets, but will take effect for any buildsets started after the >>> reconfiguration (that includes new buildsets created on gate resets). > > So let say I have a job config called job M stored in repo C that is > triggered on changes from repo X, and currently I have a buildset 1 > running job M triggered by changes to repo X. So if I made a change > to job M and push to repo C on gerrit, nothing change. But when I > submit the change to job M in repo C, you are saying the current > behaviour is that buildset 1 will run the new job M (M') if job M > hasn't started for buildset 1? And the proposed change to Zuul is > that in the future, buildset1 will always run job M and only buildset > 2 (triggered after M' existed) will run job M'. Is that a correct > understanding? > Exactly so. -Jim From y2kenny at gmail.com Mon Apr 19 03:03:22 2021 From: y2kenny at gmail.com (Kenny Ho) Date: Sun, 18 Apr 2021 23:03:22 -0400 Subject: Proposed behavior change on reconfiguration In-Reply-To: <08e6ba1f-2867-0e6d-0b9f-721f38c6b148@acmegating.com> References: <87zgy7kp7k.fsf@fuligin> <87im4olnm9.fsf@fuligin> <08e6ba1f-2867-0e6d-0b9f-721f38c6b148@acmegating.com> Message-ID: Cool. This sounds good to me since I was not aware of the existing behaviour as a feature to begin with :). Kenny On Sun, Apr 18, 2021 at 10:41 PM James E. Blair wrote: > > On 4/18/21 6:16 PM, Kenny Ho wrote: > > I just want to acknowledge that I have read the email but I did not > > reply because I am not sure if I fully understand the change. > > > > On Wed, Apr 14, 2021 at 2:37 PM James E. Blair wrote: > >> "James E. Blair" writes: > >>> in the future, if a job is added, removed, or altered during a > >>> reconfiguration, it will not take effect for any currently running > >>> buildsets, but will take effect for any buildsets started after the > >>> reconfiguration (that includes new buildsets created on gate resets). > > > > So let say I have a job config called job M stored in repo C that is > > triggered on changes from repo X, and currently I have a buildset 1 > > running job M triggered by changes to repo X. So if I made a change > > to job M and push to repo C on gerrit, nothing change. But when I > > submit the change to job M in repo C, you are saying the current > > behaviour is that buildset 1 will run the new job M (M') if job M > > hasn't started for buildset 1? And the proposed change to Zuul is > > that in the future, buildset1 will always run job M and only buildset > > 2 (triggered after M' existed) will run job M'. Is that a correct > > understanding? > > > > Exactly so. > > -Jim From jim at acmegating.com Mon Apr 19 23:42:26 2021 From: jim at acmegating.com (James E. Blair) Date: Mon, 19 Apr 2021 16:42:26 -0700 Subject: Nested repo structures Message-ID: <8735vllu3h.fsf@fuligin> Hi, We were recently presented with a case of a user who uses Gerrit's support for arbitrary repo hierarchies in an unexpected way. Essentially, they have both of the following projects defined in gerrit: * project * project/subproject That is to say, project is not just a pathname, it is a full-blown Gerrit repo. Further, this has nothing to do with git submodules or anything. There are several potential issues in Zuul related to nested repos: * Stats * Secret key storage * Time database * Internal merger repos * Workspace checkouts Stats are good. 
We already flatten project names there (eg, translate / to _). The secret key storage we are about to move to ZK. We can handle that right now. The time database will be eliminated as part of the v5 work as well, and replaced with SQL. Not a problem. That leaves the git repos. The main issue is that git doesn't deal well with git repos that are cloned into other git repos. We need to keep these separate. The internal merger repos (that is, the ones that users never see) will need a flattened namespace. That should be relatively straightforward since we can take care of that without a user-visible impact. Finally, that leaves the subject of this mail: the user-visible workspace checkouts. Once we get Zuul to a point where it doesn't error-out on a nested repo structure, we still have to decide how to lay it down on disk in the executor workspace. I can see three options: 1) Attempt to lay the repos out in the current fashion, with project/subproject cloned on top of project. I do not think we should do this. While we might be able to do this in the limited context of a workspace checkout (as compared to the internal merger repos), it would still be fraught with problems if, for example, anyone ever did add a "subproject" directory to "project". 2) Allow users to map projects to arbitrary repo checkouts. For this we might add a new mapping to a job like this: - job: checkouts: project: project project/subproject: subproject This would allow users to write out the repos wherever it suits them, whether that's a flat structure as above, or something more complex. They could also move just a single troublesome repo. In fact, you'd really only need the second line of the map above, the first is redundant. 3) Allow users to indicate they want a flat checkout. Given that this is the first use-case for this which we have seen, it's instructive to learn how the users who prompted this discussion are actually using their repos. They want a flat checkout. So they essentially only want the last component of the path to appear in their workspace. This option is a specialized version of the above that's less flexible, but probably simpler to use for this case: - job: flat-checkout: true That would map each project to its last path component. We would need to decide what to do if *that* caused a collision. Presumably we would error out. In this case, the user would have no recourse with Zuul (they would need to change a repo name or stop using a repo in that job). Options 2 and 3 are not mutually exclusive. We could do both so that the typical use would be #3 with #2 as a backup (or for someone with really creative ideas about how to lay down repos). We could also implement one and leave the other as an idea for future expansion. I suspect the internal implementation will be about the same for both. I think I'm inclined to start with #3 just because it concisely enables a specific pattern that may be useful in other contexts. Does anyone have feedback or a preference? -Jim P.S. I suspect we're going to find cases in zuul-jobs where we have assumed project.src_dir == 'src/'+project.canonical_name, but we can and should fix those. We made that a variable for a reason. From iwienand at redhat.com Tue Apr 20 01:03:10 2021 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 20 Apr 2021 11:03:10 +1000 Subject: Nested repo structures In-Reply-To: <8735vllu3h.fsf@fuligin> References: <8735vllu3h.fsf@fuligin> Message-ID: On Mon, Apr 19, 2021 at 04:42:26PM -0700, James E. 
Blair wrote: > 2) Allow users to map projects to arbitrary repo checkouts. > > For this we might add a new mapping to a job like this: > > - job: > checkouts: > project: project > project/subproject: subproject I was going to say "I'm pretty sure in places we iterate, etc. we use zuul.projects.src_dir so should basically be isolated from such a change" then noted your PS > P.S. I suspect we're going to find cases in zuul-jobs where we have > assumed project.src_dir == 'src/'+project.canonical_name, but we can and > should fix those. We made that a variable for a reason. so yeah, I'd agree anything failing there would be in the class of a bug anyway. > 3) Allow users to indicate they want a flat checkout. > > Given that this is the first use-case for this which we have seen, > it's instructive to learn how the users who prompted this discussion > are actually using their repos. They want a flat checkout. So they > essentially only want the last component of the path to appear in > their workspace. > Options 2 and 3 are not mutually exclusive. We could do both so > that the typical use would be #3 with #2 as a backup (or for someone > with really creative ideas about how to lay down repos). Something like website/ <-- keep common stuff (?) here website/website <-- website code website/project <-- a plugin feels like it might be possible and not too "creative" :) But we also got this far without it being a problem, and I don't think you could get yourself into this position using github/gitlab etc., further limiting the problem space. Starting with #3 sounds good, with #2 as a backup. -i From ssbarnea at redhat.com Tue Apr 20 06:31:55 2021 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Mon, 19 Apr 2021 23:31:55 -0700 Subject: Nested repo structures In-Reply-To: References: <8735vllu3h.fsf@fuligin> Message-ID: How bad would it be if we sanitized the project name into a flat directory name all the time? (no configuration option). I am thinking of converting any "/" in the project path into a "_" (or dash) by default. I would personally be against using only the last bit of the project path because this will cause conflicts. Sooner or later we will have two checkouts with "/web". In addition to that, this would also diverge from a development environment. I do have folders with >40-50 repo clones under opendev/openstack, and many of us use the assumption that sibling repositories are cloned at the same level as the current repository. Sometimes we also have code to clone them if they were not already cloned. I would really want to keep the zuul workspace as close as possible to a developer "workspace"; diverging on CI from a local development environment would only make it harder to test our software both locally and in CI. If we can get by with an implicit flattened layout, we may avoid adding some extra complexity to zuul, something that could slow us down later. Does this make sense? -- /zbr On 20 Apr 2021 at 02:03:10, Ian Wienand wrote: > On Mon, Apr 19, 2021 at 04:42:26PM -0700, James E. Blair wrote: > 2) Allow users to map projects to arbitrary repo checkouts. > > > For this we might add a new mapping to a job like this: > > > - job: > > checkouts: > > project: project > > project/subproject: subproject > > > I was going to say "I'm pretty sure in places we iterate, etc. we use > zuul.projects.src_dir so should basically be isolated from such a > change" then noted your PS > > P.S.
I suspect we're going to find cases in zuul-jobs where we have > > assumed project.src_dir == 'src/'+project.canonical_name, but we can and > > should fix those. We made that a variable for a reason. > > > so yeah, I'd agree anything failing there would be in the class of a > bug anyway. > > 3) Allow users to indicate they want a flat checkout. > > > Given that this is the first use-case for this which we have seen, > > it's instructive to learn how the users who prompted this discussion > > are actually using their repos. They want a flat checkout. So they > > essentially only want the last component of the path to appear in > > their workspace. > > > Options 2 and 3 are not mutually exclusive. We could do both so > > that the typical use would be #3 with #2 as a backup (or for someone > > with really creative ideas about how to lay down repos). > > > Something like > > website/ <-- keep common stuff (?) here > website/website <-- website code > website/project <-- a plugin > > feels like it might be possible and not too "creative" :) But we also > got this far without it being a problem, and I don't think you could > get yourself into this position using github/gitlab etc., further > limiting the problem space. > > Starting with #3 sounds good, with #2 as a backup. > > -i > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Apr 20 13:47:30 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 20 Apr 2021 13:47:30 +0000 Subject: Nested repo structures In-Reply-To: References: <8735vllu3h.fsf@fuligin> Message-ID: <20210420134730.xipytiuhbkruc53f@yuggoth.org> On 2021-04-19 23:31:55 -0700 (-0700), Sorin Sbarnea wrote: > Hpw bad it would be if we would sanitize project name into flat > directory name all the time? (no configuration option). The current flattening option, as proposed, relies only on the part after the final / in the repository name. If it were on all the time you couldn't include opendev/project-config, openstack/project-config and zuul/project-config in the same tree for example. Maybe it's rare to want to use two repositories with the same short name in a job at the same time, but then again the situation which gave rise to this proposal is probably even more rare. > I am thinking of converting any "/" in the project path into a "_" > (or dash) by default. I would personally be against using only the > last bit of project path because this will cause conflicts. Sooner > or later we will have two checkouts with "/web". [...] I think the idea is that your job either needs a flat tree (in order to work around project names conflicting with the parent directories of other projects) or it doesn't (so allows projects with the same short name to not collide). If we replace / with _ we create a new potential conflict, projects named like foo/bar and foo_bar can't coexist since _ is also an allowed character. There is little hope of a collision-free 1:1 replacement solution. (This is why the proposal for storing them in Zookeeper relies on RFC 1738 §2.2 URL encoding, as % itself is also re-encoded in that scheme.) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jim at acmegating.com Tue Apr 20 13:58:22 2021 From: jim at acmegating.com (James E. Blair) Date: Tue, 20 Apr 2021 06:58:22 -0700 Subject: Nested repo structures In-Reply-To: <20210420134730.xipytiuhbkruc53f@yuggoth.org> (Jeremy Stanley's message of "Tue, 20 Apr 2021 13:47:30 +0000") References: <8735vllu3h.fsf@fuligin> <20210420134730.xipytiuhbkruc53f@yuggoth.org> Message-ID: <87y2ddjbwh.fsf@fuligin> Jeremy Stanley writes: > On 2021-04-19 23:31:55 -0700 (-0700), Sorin Sbarnea wrote: >> Hpw bad it would be if we would sanitize project name into flat >> directory name all the time? (no configuration option). > > The current flattening option, as proposed, relies only on the part > after the final / in the repository name. If it were on all the time > you couldn't include opendev/project-config, > openstack/project-config and zuul/project-config in the same tree > for example. Maybe it's rare to want to use two repositories with > the same short name in a job at the same time, but then again the > situation which gave rise to this proposal is probably even more > rare. > >> I am thinking of converting any "/" in the project path into a "_" >> (or dash) by default. I would personally be against using only the >> last bit of project path because this will cause conflicts. Sooner >> or later we will have two checkouts with "/web". > [...] > > I think the idea is that your job either needs a flat tree (in order > to work around project names conflicting with the parent directories > of other projects) or it doesn't (so allows projects with the same > short name to not collide). If we replace / with _ we create a new > potential conflict, projects named like foo/bar and foo_bar can't > coexist since _ is also an allowed character. There is little hope > of a collision-free 1:1 replacement solution. (This is why the > proposal for storing them in Zookeeper relies on RFC 1738 §2.2 URL > encoding, as % itself is also re-encoded in that scheme.) Worth noting also that the current behavior is not accidental, it was a deliberate choice to use golang-style layouts, which is about as close to a standard for repo layouts as there is. I don't think we should change that; it seems to be working quite well in most cases. -Jim From mikeyang at andestech.com Wed Apr 21 04:18:54 2021 From: mikeyang at andestech.com (mikeyang at andestech.com) Date: Wed, 21 Apr 2021 04:18:54 +0000 Subject: Newbie question(s) for eval In-Reply-To: <877dl2knw7.fsf@fuligin> References: <87blaekpw0.fsf@fuligin> , <877dl2knw7.fsf@fuligin> Message-ID: Sorry for the slow reply. I finally got around to trying this again. I still can't figure out what is going on and now the results are weirder. On my running instance of the quick-start tutorial, I added 2 things: 1. `markdownlint` job in the `check` pipeline, with `node_version: 15` in `test1` (running on the original Focal nodeset) 2. `log-inventory` in the `pre.yaml` in `zuul-config` This worked. It even flagged an error in a `README.md` file I had put in the repo. And I could run it successfully on the Focal node or my new Groovy node. Then I was added some other bells and whistles (`dstat-graph`) on a different job ... at some point, this quick-start instance hit an error and the whole quick-start instance died. So I cleaned up and started a new quick-start instance. And now I can't reproduce the above results. I'm running into the original problem that I mentioned last week ... 
``` Failed to update apt cache: W:Updating from such a repository can't be done securely, and is therefore disabled by default., W:See apt-secure(8) manpage for repository creation and user configuration details., E:The repository 'https://deb.nodesource.com/node_6.x focal Release' does not have a Release file. ``` This is what my `zuul.yaml` contains ... ``` - job: name: testjob run: playbooks/testjob.yaml - project: vars: node_version: 14 check: jobs: - testjob - noop - markdownlint: vars: node_version: 15 gate: jobs: - testjob ``` But now I have the `inventory.yaml` which shows (which looks correct) ... ``` all: hosts: ubuntu-focal: ansible_connection: ssh ansible_host: node ansible_port: 22 ansible_python_interpreter: /usr/bin/python3 ansible_user: root nodepool: az: null cloud: null host_id: null interface_ip: node label: ubuntu-focal private_ipv4: null provider: static-vms public_ipv4: 172.18.0.3 public_ipv6: null region: null vars: node_version: 15 zuul: _inheritance_path: - '' - '' - '' attempts: 3 branch: master build: 74401968d018454ab49e530a2a219fe7 buildset: c776db4d164d419b9a51857a3ff3d1d3 change: '2' change_url: http://gerrit:8080/2 child_jobs: [] event_id: 8321b5dee7624811b53c19e732caab96 executor: hostname: 3a4c19244fa2 inventory_file: /var/lib/zuul/builds/74401968d018454ab49e530a2a219fe7/ansible/inventory.yaml log_root: /var/lib/zuul/builds/74401968d018454ab49e530a2a219fe7/work/logs result_data_file: /var/lib/zuul/builds/74401968d018454ab49e530a2a219fe7/work/results.json src_root: /var/lib/zuul/builds/74401968d018454ab49e530a2a219fe7/work/src work_root: /var/lib/zuul/builds/74401968d018454ab49e530a2a219fe7/work items: - branch: master change: '2' change_url: http://gerrit:8080/2 patchset: '5' project: canonical_hostname: gerrit canonical_name: gerrit/test1 name: test1 short_name: test1 src_dir: src/gerrit/test1 job: markdownlint jobtags: [] message: SW5pdGlhbCBqb2IKCkNoYW5nZS1JZDogSTVkYTA1NzdkMjk3ZTk3MDcyZmMxMmNiNGZhMDYyNzE2NGY1OGM0Y2MK patchset: '5' pipeline: check post_review: false project: canonical_hostname: gerrit canonical_name: gerrit/test1 name: test1 short_name: test1 src_dir: src/gerrit/test1 projects: gerrit/test1: canonical_hostname: gerrit canonical_name: gerrit/test1 checkout: master name: test1 required: false short_name: test1 src_dir: src/gerrit/test1 ref: refs/changes/02/2/5 resources: {} tenant: example-tenant timeout: 1800 voting: true ``` In the console, the failure occurs while executing `opendev.org/zuul/zuul-jobs/playbooks/markdownlint/pre.yaml` on the `Play: Ensure Markdownlint is installed via NPM` task. - mike. ________________________________________ From: corvus at inaugust.com [corvus at inaugust.com] on behalf of James E. Blair [jim at acmegating.com] Sent: Friday, April 16, 2021 12:52 PM To: Mike Yang Cc: zuul-discuss at lists.zuul-ci.org Subject: Re: Newbie question(s) for eval writes: > Thanks for the quick and detailed reply. I swear I have tried several > variations of these suggestions to no avail, but I will try again with > your specific examples (when I can get back to my computer). > > Following up to your additional details of how variables work. I had > also tried to make Ansible more verbose by running `zuul-executor > verbose` (by connecting a bash shell to the > `zuul-tutorial-scheduler_1` container) but I did not see any > difference in the output in the Zuul web logs. Am I not enabling that > correctly or am I looking in the wrong place to see the additional > verbosity? Oops, that could use more clarity in the docs. 
You'll find the output in the container logs. And when it says verbose, it means it. Every build has a UUID, you'll be best served by grepping the logs for that uuid. It's in the container logs because the Ansible output with -vvv is more likely to contain sensitive information that users perhaps shouldn't have access to; it's mostly intended as an admin debugging tool. Note that there is quite a lot of information in the build console which is available to all users; more than even appears in the streaming text log. For example, here is the Ansible task that installs node 14 in one of Zuul's builds; you can see all of the module arguments are there for inspection (even though they aren't printed out in the streaming log): https://zuul.opendev.org/t/zuul/build/90087de040e94f32ae92b5d749a884af/console#2/0/11/ubuntu-bionic The other useful user-level debugging tool for a situation like this is the inventory that Zuul provided to Ansible; here's how we can confirm that node_version was set to 14 for this build. https://zuul.opendev.org/t/zuul/build/90087de040e94f32ae92b5d749a884af/log/zuul-info/inventory.yaml#25 -Jim CONFIDENTIALITY NOTICE: This e-mail (and its attachments) may contain confidential and legally privileged information or information protected from disclosure. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution, or use of the information contained herein is strictly prohibited. In this case, please immediately notify the sender by return e-mail, delete the message (and any accompanying documents) and destroy all printed hard copies. Thank you for your cooperation. Copyright ANDES TECHNOLOGY CORPORATION - All Rights Reserved. From artem.goncharov at gmail.com Wed Apr 21 08:54:19 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 21 Apr 2021 10:54:19 +0200 Subject: GitHub issues with 4.2.0 Message-ID: Hi all, I had unfortunately no time to investigate it deeper and rolled back to 4.1 (since we deploy zuul in k8 there are currently no old logs). What I have however noted: - gating for GitHub driver doesn’t work for us. The event is empty (https://opendev.org/zuul/zuul/src/branch/master/zuul/driver/github/githubmodel.py#L384 tells event.label is empty). The same was also happening for reviews (https://opendev.org/zuul/zuul/src/branch/master/zuul/driver/github/githubmodel.py#L194 - "because Reviews [{'type': 'approved', 'permission': 'write'}] does not match dict_values([])”) - merger was failing: https://opendev.org/zuul/zuul/src/branch/master/zuul/merger/merger.py#L1032 - ‘branch’ key is missing. Here I might be giving wrong line reference, since this is purely from memory, but it was definitely complaining to ‘branch' Rollback to 4.1 immediately "fixed" both issues, therefore I do not think this is a problem in the configuration on its own. I would try to deploy some test instance to play with it. This is currently just a feedback to 4.2.0 and potential warning to the ones using Zuul with GitHub Regards, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From albinvass at gmail.com Wed Apr 21 15:50:58 2021 From: albinvass at gmail.com (albin vass) Date: Wed, 21 Apr 2021 17:50:58 +0200 Subject: GitHub issues with 4.2.0 In-Reply-To: References: Message-ID: Hi! I'm currently running the tip of the master branch (4.2.1.dev8 4f3f973a) and not seeing these issues, both labels and reviews work as they should. 
Maybe there's some strange combination of a change breaking but only with specific configuration in github? On Wed, Apr 21, 2021 at 10:55 AM Artem Goncharov wrote: > > Hi all, > > I had unfortunately no time to investigate it deeper and rolled back to 4.1 (since we deploy zuul in k8 there are currently no old logs). What I have however noted: > > - gating for GitHub driver doesn’t work for us. The event is empty (https://opendev.org/zuul/zuul/src/branch/master/zuul/driver/github/githubmodel.py#L384 tells event.label is empty). The same was also happening for reviews (https://opendev.org/zuul/zuul/src/branch/master/zuul/driver/github/githubmodel.py#L194 - "because Reviews [{'type': 'approved', 'permission': 'write'}] does not match dict_values([])”) > - merger was failing: https://opendev.org/zuul/zuul/src/branch/master/zuul/merger/merger.py#L1032 - ‘branch’ key is missing. Here I might be giving wrong line reference, since this is purely from memory, but it was definitely complaining to ‘branch' > > Rollback to 4.1 immediately "fixed" both issues, therefore I do not think this is a problem in the configuration on its own. > > I would try to deploy some test instance to play with it. This is currently just a feedback to 4.2.0 and potential warning to the ones using Zuul with GitHub > > Regards, > Artem > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From mikeyang at andestech.com Tue Apr 27 20:04:14 2021 From: mikeyang at andestech.com (mikeyang at andestech.com) Date: Tue, 27 Apr 2021 20:04:14 +0000 Subject: questions about locally running zuul repo tests Message-ID: Hi, As part of my eval, I'm looking at the dev process. And we're interested in a Gitea driver (it looks like there was a WIP change<726391:%20more%20work%20towards%20a%20working%20gitea%20driver%20|%20https://review.opendev.org/c/zuul/zuul/+/726391> last year). When I try to run the tests locally with "tox", following the instructions in TESTING.rst, it looks like the Docker Zookeeper instance does not start up properly and the majority of tests fail with a timeout. I tried modifying "tox.ini" in order to open/map port 2181 and I can see that "tox" starts up the zookeeper container, but the tests still timeout. Looking at the code, I see a couple of issues: 1. hostname ... how does tox know the hostname/ip-address of the container (with default bridge networking) 2. certificates ... are these required and how is tox configuring this? Or am I on the wrong path? I see that tools/test-setup-docker.sh seems to have a different way of running the tests. Thanks, - mike. CONFIDENTIALITY NOTICE: This e-mail (and its attachments) may contain confidential and legally privileged information or information protected from disclosure. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution, or use of the information contained herein is strictly prohibited. In this case, please immediately notify the sender by return e-mail, delete the message (and any accompanying documents) and destroy all printed hard copies. Thank you for your cooperation. Copyright ANDES TECHNOLOGY CORPORATION - All Rights Reserved. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mikeyang at andestech.com Tue Apr 27 22:28:16 2021 From: mikeyang at andestech.com (mikeyang at andestech.com) Date: Tue, 27 Apr 2021 22:28:16 +0000 Subject: questions about locally running zuul repo tests In-Reply-To: References: Message-ID: Okay, answering my own question. It looks like the procedure for running tests locally requires ... 1. install docker-compose 2. run `tools/test-setup-docker.sh` 3. change envlist in tox.ini from "linters,py3{-docker}" to just "linters,py3" 4. run `tox` I see ~100 failures. I tried rerunning one of the failures and it passed in < 3 seconds. I'm not sure if there is some non-determinism due to resource contraints or races/timing. - mike. ________________________________ From: mikeyang at andestech.com [mikeyang at andestech.com] Sent: Tuesday, April 27, 2021 1:04 PM To: Zuul-discuss at lists.zuul-ci.org Subject: questions about locally running zuul repo tests Hi, As part of my eval, I'm looking at the dev process. And we're interested in a Gitea driver (it looks like there was a WIP change last year). When I try to run the tests locally with "tox", following the instructions in TESTING.rst, it looks like the Docker Zookeeper instance does not start up properly and the majority of tests fail with a timeout. I tried modifying "tox.ini" in order to open/map port 2181 and I can see that "tox" starts up the zookeeper container, but the tests still timeout. Looking at the code, I see a couple of issues: 1. hostname ... how does tox know the hostname/ip-address of the container (with default bridge networking) 2. certificates ... are these required and how is tox configuring this? Or am I on the wrong path? I see that tools/test-setup-docker.sh seems to have a different way of running the tests. Thanks, - mike. CONFIDENTIALITY NOTICE: This e-mail (and its attachments) may contain confidential and legally privileged information or information protected from disclosure. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution, or use of the information contained herein is strictly prohibited. In this case, please immediately notify the sender by return e-mail, delete the message (and any accompanying documents) and destroy all printed hard copies. Thank you for your cooperation. Copyright ANDES TECHNOLOGY CORPORATION - All Rights Reserved. CONFIDENTIALITY NOTICE: This e-mail (and its attachments) may contain confidential and legally privileged information or information protected from disclosure. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution, or use of the information contained herein is strictly prohibited. In this case, please immediately notify the sender by return e-mail, delete the message (and any accompanying documents) and destroy all printed hard copies. Thank you for your cooperation. Copyright ANDES TECHNOLOGY CORPORATION - All Rights Reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Apr 27 22:33:25 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 27 Apr 2021 15:33:25 -0700 Subject: questions about locally running zuul repo tests In-Reply-To: References: Message-ID: <5f6e7649-52f7-4b11-a33c-0f56df4648b9@www.fastmail.com> On Tue, Apr 27, 2021, at 3:28 PM, mikeyang at andestech.com wrote: > > Okay, answering my own question. > > It looks like the procedure for running tests locally requires ... > > 1. install docker-compose > 2. 
run `tools/test-setup-docker.sh` > 3. change envlist in tox.ini from "linters,py3{-docker}" to just "linters,py3" > 4. run `tox` > > I see ~100 failures. I tried rerunning one of the failures and it > passed in < 3 seconds. I'm not sure if there is some non-determinism > due to resource contraints or races/timing. The test suite will run in parallel based on the number of CPUs you have. That means conflicts are possible. Our typical CI runner will run with 8 way parallelism. If you've got more CPUs than that you can limit to 8 and see if you get a different result with: `tox -epy38 -- --concurrency=8` I would expect this to be fairly robust since we do run it frequently. I do wonder if there is some env/setup item that is still missed (maybe mysql or build of the js artifacts)? > > - mike. > > *From:* mikeyang at andestech.com [mikeyang at andestech.com] > *Sent:* Tuesday, April 27, 2021 1:04 PM > *To:* Zuul-discuss at lists.zuul-ci.org > *Subject:* questions about locally running zuul repo tests > > Hi, > > As part of my eval, I'm looking at the dev process. And we're > interested in a Gitea driver (it looks like there was a WIP change > last year). > > When I try to run the tests locally with "tox", following the > instructions in TESTING.rst, it looks like the Docker Zookeeper > instance does not start up properly and the majority of tests fail with > a timeout. I tried modifying "tox.ini" in order to open/map port 2181 > and I can see that "tox" starts up the zookeeper container, but the > tests still timeout. > > Looking at the code, I see a couple of issues: > 1. hostname ... how does tox know the hostname/ip-address of the > container (with default bridge networking) > 2. certificates ... are these required and how is tox configuring this? > > Or am I on the wrong path? I see that tools/test-setup-docker.sh seems > to have a different way of running the tests. > > Thanks, > > - mike. > CONFIDENTIALITY NOTICE: > This e-mail (and its attachments) may contain confidential and legally > privileged information or information protected from disclosure. If you > are not the intended recipient, you are hereby notified that any > disclosure, copying, distribution, or use of the information contained > herein is strictly prohibited. In this case, please immediately notify > the sender by return e-mail, delete the message (and any accompanying > documents) and destroy all printed hard copies. Thank you for your > cooperation. > > Copyright ANDES TECHNOLOGY CORPORATION - All Rights Reserved. > CONFIDENTIALITY NOTICE: > This e-mail (and its attachments) may contain confidential and legally > privileged information or information protected from disclosure. If you > are not the intended recipient, you are hereby notified that any > disclosure, copying, distribution, or use of the information contained > herein is strictly prohibited. In this case, please immediately notify > the sender by return e-mail, delete the message (and any accompanying > documents) and destroy all printed hard copies. Thank you for your > cooperation. > > Copyright ANDES TECHNOLOGY CORPORATION - All Rights Reserved. 
From mikeyang at andestech.com  Tue Apr 27 23:49:39 2021
From: mikeyang at andestech.com (mikeyang at andestech.com)
Date: Tue, 27 Apr 2021 23:49:39 +0000
Subject: questions about locally running zuul repo tests
In-Reply-To: <5f6e7649-52f7-4b11-a33c-0f56df4648b9@www.fastmail.com>
References: , <5f6e7649-52f7-4b11-a33c-0f56df4648b9@www.fastmail.com>
Message-ID: 

I have 12 (old) vCPUs, so running

  $ python3 -c "import multiprocessing; print(int(multiprocessing.cpu_count()/2))"

prints "6".  I had some other things running on my machine while the
tests were running, and I saw that CPU usage had maxed out.

I just kicked off ...

  $ stestr run --failing

(no concurrency).  I'll update with my new results once this finishes
(this time, ~64 failures), to see whether the remaining failures suggest
something specific that is missing.

Thanks,

- mike.

________________________________________
From: Clark Boylan [cboylan at sapwetik.org]
Sent: Tuesday, April 27, 2021 3:33 PM
To: zuul-discuss at lists.zuul-ci.org
Subject: Re: questions about locally running zuul repo tests

On Tue, Apr 27, 2021, at 3:28 PM, mikeyang at andestech.com wrote:
>
> Okay, answering my own question.
>
> It looks like the procedure for running tests locally requires ...
>
> 1. install docker-compose
> 2. run `tools/test-setup-docker.sh`
> 3. change envlist in tox.ini from "linters,py3{-docker}" to just "linters,py3"
> 4. run `tox`
>
> I see ~100 failures.  I tried rerunning one of the failures and it
> passed in < 3 seconds.  I'm not sure if there is some non-determinism
> due to resource constraints or races/timing.

The test suite will run in parallel based on the number of CPUs you
have.  That means conflicts are possible.  Our typical CI runner will
run with 8-way parallelism.  If you've got more CPUs than that, you can
limit to 8 and see if you get a different result with:
`tox -epy38 -- --concurrency=8`

I would expect this to be fairly robust since we do run it frequently.
I do wonder if there is some env/setup item that is still missed
(maybe mysql or the build of the js artifacts)?
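Re-running the failures serially like that is a good way to separate
genuine breakage from contention between workers.  A rough sketch of the
iteration loop, assuming stestr was installed into the py38 tox
virtualenv and that the test name at the end is only a placeholder:

  # Re-run only the tests that failed in the previous run, one at a time.
  .tox/py38/bin/stestr run --failing --concurrency=1

  # List what is still failing after the re-run.
  .tox/py38/bin/stestr failing

  # Re-run a single suspect module by itself.
  .tox/py38/bin/stestr run --concurrency=1 zuul.tests.unit.test_scheduler
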
From asmundo at gmail.com  Wed Apr 28 10:49:38 2021
From: asmundo at gmail.com (Åsmund Østvold)
Date: Wed, 28 Apr 2021 12:49:38 +0200
Subject: help to getting speculative merged testing fixed
Message-ID: 

hi

Probably a newbie mistake.  I have:

- job:
    name: build-integration-tests
    parent: base
    run: playbooks/build_and_integrationtest.yaml
    required-projects:
      - resoptima/irma
      - name: resoptima/tools
        override-checkout: main
      - zuul/irma-common-jobs
      - resoptima/irma.fileimporter

In the gate pipeline I get parallel, unrelated merged executions; see the
attached image.

I am sure I did something wrong.  Any help appreciated!

Cheers,
Åsmund

[image: image.png]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 33633 bytes
Desc: not available
URL: 

From jim at acmegating.com  Wed Apr 28 17:57:47 2021
From: jim at acmegating.com (James E. Blair)
Date: Wed, 28 Apr 2021 10:57:47 -0700
Subject: help to getting speculative merged testing fixed
In-Reply-To: 
References: 
Message-ID: <48f498bc-e3c4-d774-3703-80a077f2bfa6@acmegating.com>

On 4/28/21 3:49 AM, Åsmund Østvold wrote:
> hi
>
> Probably a newbie mistake.  I have:
>
> - job:
>     name: build-integration-tests
>     parent: base
>     run: playbooks/build_and_integrationtest.yaml
>     required-projects:
>       - resoptima/irma
>       - name: resoptima/tools
>         override-checkout: main
>       - zuul/irma-common-jobs
>       - resoptima/irma.fileimporter
>
> In the gate pipeline I get parallel, unrelated merged executions; see the
> attached image.
>
> I am sure I did something wrong.  Any help appreciated!

Ah, you need to tell Zuul that those two projects share a queue (it
probably could figure that out automatically based on jobs and required
projects, but it doesn't).

Set the 'queue' attribute with the same value on the project definition
of both projects:

https://zuul-ci.org/docs/zuul/reference/project_def.html#attr-project.queue

-Jim
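Concretely, that usually means adding the same queue name to the project
stanza in each repo's Zuul configuration.  A sketch only: the queue name
"irma" and the use of an in-repo .zuul.yaml are placeholders, not the
actual layout of these projects.

  # .zuul.yaml in resoptima/irma
  - project:
      queue: irma
      gate:
        jobs:
          - build-integration-tests

  # .zuul.yaml in resoptima/irma.fileimporter (same queue value)
  - project:
      queue: irma
      gate:
        jobs:
          - build-integration-tests

With both projects in the same named queue, Zuul gates their changes in
one shared dependent queue instead of running them independently.
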
From asmundo at gmail.com  Fri Apr 30 11:55:23 2021
From: asmundo at gmail.com (Åsmund Østvold)
Date: Fri, 30 Apr 2021 13:55:23 +0200
Subject: help to getting speculative merged testing fixed
In-Reply-To: <48f498bc-e3c4-d774-3703-80a077f2bfa6@acmegating.com>
References: <48f498bc-e3c4-d774-3703-80a077f2bfa6@acmegating.com>
Message-ID: 

Thank you so much for your help James!

On Wed, Apr 28, 2021 at 7:57 PM James E. Blair wrote:

> Ah, you need to tell Zuul that those two projects share a queue (it
> probably could figure that out automatically based on jobs and required
> projects, but it doesn't).
>
> Set the 'queue' attribute with the same value on the project definition
> of both projects:
>
> https://zuul-ci.org/docs/zuul/reference/project_def.html#attr-project.queue
>
> -Jim

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amy at demarco.com  Fri Apr 30 22:14:59 2021
From: amy at demarco.com (Amy Marrich)
Date: Fri, 30 Apr 2021 17:14:59 -0500
Subject: [Diversity] Diversity & Inclusion WG Meeting reminder
Message-ID: 

The Diversity & Inclusion WG invites members of all OSF projects to our
meeting Monday, May 3rd, at 17:00 UTC in the #openinfra-diversity channel.
The agenda can be found at
https://etherpad.openstack.org/p/diversity-wg-agenda.  Please feel free to
add any topics you wish to discuss at the meeting.

Thanks,

Amy (spotz)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 