From corvus at inaugust.com Mon Jan 7 22:46:54 2019 From: corvus at inaugust.com (James E. Blair) Date: Mon, 07 Jan 2019 14:46:54 -0800 Subject: Zuul project update 2019-01-07 Message-ID: <87r2dorspt.fsf@meyer.lemoncheese.net> Hi, Welcome back from the holidays! Fabian has been working on a driver for Pagure (https://pagure.io/ ) and now has a usable implementation here: https://review.openstack.org/#/c/604404/ I have continued work on artifact URL storage in the db in service of building a 'promote' pipeline; I should be able to have jobs and roles demonstrating it soon. I'd like to call everyone's attention to three items in review with wide-ranging impact; it would be good for as many folks as possible to review these: Governance document: https://review.openstack.org/622439 Ansible multi-version support spec: https://review.openstack.org/623927 Scheduler scale-out spec: https://review.openstack.org/621479 Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email -Jim From pabelanger at redhat.com Tue Jan 8 12:03:00 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 8 Jan 2019 07:03:00 -0500 Subject: Zuul project update 2019-01-07 In-Reply-To: <87r2dorspt.fsf@meyer.lemoncheese.net> References: <87r2dorspt.fsf@meyer.lemoncheese.net> Message-ID: <20190108120300.GA10454@localhost.localdomain> On Mon, Jan 07, 2019 at 02:46:54PM -0800, James E. Blair wrote: > Hi, > > Welcome back from the holidays! > > Fabian has been working on a driver for Pagure (https://pagure.io/ ) > and now has a usable implementation here: https://review.openstack.org/#/c/604404/ > > I have continued work on artifact URL storage in the db in service of > building a 'promote' pipeline; I should be able to have jobs and roles > demonstrating it soon. > > I'd like to call everyone's attention to three items in review with > wide-ranging impact; it would be good for as many folks as possible to > review these: > > Governance document: https://review.openstack.org/622439 > > Ansible multi-version support spec: https://review.openstack.org/623927 > > Scheduler scale-out spec: https://review.openstack.org/621479 > > Please help write next week's email by updating this etherpad: > > https://etherpad.openstack.org/p/zuul-update-email > Where do we stand on getting releases of nodepool and zuul this week? I know there was some issue with zuul.o.o last week, and _think_ everything was fixed? If not, anything specific to look at before we can tag? Thanks! Paul From feng.3.yang at nokia-sbell.com Wed Jan 9 07:51:37 2019 From: feng.3.yang at nokia-sbell.com (Yang, Feng 3. (NSB - CN/Hangzhou)) Date: Wed, 9 Jan 2019 07:51:37 +0000 Subject: Is it possible to support merge-mode: rebase in zuul merger? Message-ID: Hello Zuul experts, Happy New Year! I didn’t find clues in the historical mail list. So please allow me to ask here. We are using zuul v2 for ~2 years, recently we encountered a special case: merged codes in zuul merger is NOT the same with gerrit rebase codes. On zuul side, we are using default mode ‘git merge -s resolve’ On Gerrit repo side, we are using “rebase if necessary” strategy. Then we are thinking if it is possible to add a new merge-mode: rebase into zuul merger. Did you plan to add rebase strategy into zuul before? Or do you have any suggestions? I really appreciate it. Thanks. Looking forward your comments. Thank you in advance. 😊 Thanks Br, Finn -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From corvus at inaugust.com Wed Jan 9 17:13:52 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 09 Jan 2019 09:13:52 -0800 Subject: Is it possible to support merge-mode: rebase in zuul merger? In-Reply-To: (Feng 3. Yang's message of "Wed, 9 Jan 2019 07:51:37 +0000") References: Message-ID: <87muo9pxdb.fsf@meyer.lemoncheese.net> "Yang, Feng 3. (NSB - CN/Hangzhou)" writes: > Hello Zuul experts, > > Happy New Year! > > I didn’t find clues in the historical mail list. So please allow me to ask here. > > We are using zuul v2 for ~2 years, recently we encountered a special > case: merged codes in zuul merger is NOT the same with gerrit rebase > codes. > On zuul side, we are using default mode ‘git merge -s resolve’ > On Gerrit repo side, we are using “rebase if necessary” strategy. > > Then we are thinking if it is possible to add a new merge-mode: rebase into zuul merger. > Did you plan to add rebase strategy into zuul before? Or do you have > any suggestions? I really appreciate it. Thanks. > > Looking forward your comments. Thank you in advance. 😊 Currently the supported merge modes[1] in Zuul are "merge", "merge-resolve", and "cherry-pick". In Gerrit, "rebase if necessary" attempts a fast-forward and if that fails, rebases the change, "rebase always" simply always rebases the change. The differences between "rebase always" and "cherry pick" in Gerrit only relate to Gerrit's own dependency handling ("cherry pick" ignores dependencies and "rebase always" does not). That doesn't apply to Zuul, so if you use "cherry-pick" as the merge mode for a project, you should get behavior similar to Gerrit's "rebase always". That's not the same as "rebase if necessary", but perhaps it's closer than the merge strategies. It should be possible to implement "rebase if necessary" in Zuul -- I think it would just attempt a fast forward, and, if that fails, cherry-pick (a rebase of a single patch is the same as a cherry-pick). [1] https://zuul-ci.org/docs/zuul/user/config.html#attr-project.merge-mode -Jim From fungi at yuggoth.org Wed Jan 9 17:34:30 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 9 Jan 2019 17:34:30 +0000 Subject: Is it possible to support merge-mode: rebase in zuul merger? In-Reply-To: <87muo9pxdb.fsf@meyer.lemoncheese.net> References: <87muo9pxdb.fsf@meyer.lemoncheese.net> Message-ID: <20190109173430.3537hzbmprob4zzo@yuggoth.org> On 2019-01-09 09:13:52 -0800 (-0800), James E. Blair wrote: [...] > In Gerrit, "rebase if necessary" attempts a fast-forward and if that > fails, rebases the change, "rebase always" simply always rebases the > change. The differences between "rebase always" and "cherry pick" in > Gerrit only relate to Gerrit's own dependency handling ("cherry pick" > ignores dependencies and "rebase always" does not). That doesn't apply > to Zuul, so if you use "cherry-pick" as the merge mode for a project, > you should get behavior similar to Gerrit's "rebase always". That's not > the same as "rebase if necessary", but perhaps it's closer than the > merge strategies. [...] Confirmed through local experimentation, a cherry-pick of a fast-forwardable commit seems to only modify the committer (name/address and timestamp) as well as removing or replacing the OpenPGP signature (if any), and so alters the commit hash accordingly. It is effectively identical to a pull of the commit followed by a `git commit --amend`. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From hashar at free.fr Fri Jan 11 21:25:52 2019 From: hashar at free.fr (Antoine Musso) Date: Fri, 11 Jan 2019 22:25:52 +0100 Subject: Is it possible to support merge-mode: rebase in zuul merger? In-Reply-To: <87muo9pxdb.fsf@meyer.lemoncheese.net> References: <87muo9pxdb.fsf@meyer.lemoncheese.net> Message-ID: On 09/01/2019 18:13, James E. Blair wrote: > Currently the supported merge modes[1] in Zuul are "merge", > "merge-resolve", and "cherry-pick". > > In Gerrit, "rebase if necessary" attempts a fast-forward and if that > fails, rebases the change, "rebase always" simply always rebases the > change. The differences between "rebase always" and "cherry pick" in > Gerrit only relate to Gerrit's own dependency handling ("cherry pick" > ignores dependencies and "rebase always" does not). That doesn't apply > to Zuul, so if you use "cherry-pick" as the merge mode for a project, > you should get behavior similar to Gerrit's "rebase always". That's not > the same as "rebase if necessary", but perhaps it's closer than the > merge strategies. > > It should be possible to implement "rebase if necessary" in Zuul -- I > think it would just attempt a fast forward, and, if that fails, > cherry-pick (a rebase of a single patch is the same as a cherry-pick). > > [1]https://zuul-ci.org/docs/zuul/user/config.html#attr-project.merge-mode Hello, TLDR: switch to -s recursive zuul-merger defaults to use the resolve strategy which would handle a trivial conflict in a file. However Gerrit behavior is slightly different. The code is the mergeStrategyName function in gerrit-server/src/main/java/com/google/gerrit/server/git/MergeUtil.java Since Gerrit 2.10, the default is to use JGit recursive merge (controlled by core.useRecursiveMerge defaulting to true). Most probably zuul-merger should be switched to use the recursive strategy. In practice I don't think it changes much. Gerrit also have the per project option "Allow Content Merge" which in project.config corresponds to: [submit] mergeContent = false When true, Gerrit uses the recursive strategy. However when it is false, Gerrit uses JGit SIMPLE_TWO_WAY_IN_CORE strategy: "Simple strategy to merge paths, without simultaneous edits" The option would cause Gerrit to reject the submit of a change if it has to do a conflict resolution, even if it is a trivial one. To elaborate, on a repository that does not allow content merge and given the topology: (master) | B C | / A With B and C both altering the same file. The zuul-merger will attempt to merge C in master and succeeds. It uses the resolve strategy which would most probably solve the conflict between B and C. Then Zuul asks Gerrit to submit the change, the merge get rejected due to the SIMPLE_TWO_WAY_IN_CORE and the change fails to merge. The change did get the Verified +2 and gate success message, but it is not submitted and marked in merge conflict. Unfortunately, git does not seem to have the equivalent of jgit SIMPLE_TWO_WAY_IN_CORE strategy. Though maybe the octopus strategy would reject it: > octopus > This resolves cases with more than two heads, but refuses to do a complex merge that needs manual resolution. It is primarily meant to be used for bundling topic branch heads together. This is the default merge strategy when pulling or merging more than one branch. 
A last barrier would be to have Zuul to detect the Gerrit reporter is willing to submit, then check the change mergeability and reject the reporting action when Gerrit says the change is not submittable. But that seems prone to errors/race conditions. Forcing octopus might be working, though I haven't tested that hypothesis. More food, Zuul could query Gerrit for the project submit type and thus adjust automatically between cherry-pick/rebase only etc :-) Related: https://phabricator.wikimedia.org/T210442#4776011 -- Antoine "hashar" Musso From Felix.FS.Schmidt at bmw.de Mon Jan 14 14:36:51 2019 From: Felix.FS.Schmidt at bmw.de (Felix.FS.Schmidt at bmw.de) Date: Mon, 14 Jan 2019 14:36:51 +0000 Subject: Zubbi 1.0.0 released Message-ID: Hello, With the start of the new year we are happy to announce that Zubbi 1.0.0 is now available on PyPI: https://pypi.org/project/zubbi/. The new Zubbi version provides support for Gerrit repositories. As opposed to GitHub, Zubbi yet doesn't support event handling for Gerrit. Thus, Gerrit repositories can only be scraped periodically. During the OpenStack summit, the question for a test instance came up. So we deployed a Zubbi demo showing the jobs and roles from the openstack-infra/zuul-jobs repo at http://ad43b5505180211e99b0002e3f6a3815-573606705.eu-west-1.elb.amazonaws.com/. Feel free to explore Zubbi :-) We hope that you like the demo and are interested in setting up a Zubbi on your own. For this case, we've added a quickstart guide in our README, explaining how to set up Zubbi with a minimal set of configuration (the same as the demo). In case you have any further questions or need help for setting up Zubbi, don't hesitate to contact us or create an issue in the GitHub repository: https://github.com/bmwcarit/zubbi. We would be happy to see a Zubbi for your Zuul setups :-) Best regards Felix -------------- next part -------------- An HTML attachment was scrubbed... URL: From lauren at openstack.org Mon Jan 14 18:07:01 2019 From: lauren at openstack.org (Lauren Sell) Date: Mon, 14 Jan 2019 12:07:01 -0600 Subject: Zuul 2019 planning Message-ID: Hi everyone, We’d like to work with the Zuul community on 2019 planning so we can best support and promote the project this year. If the mailing list is the best way to start the conversation, we can use the below etherpad, or have a discussion on this thread. We’re also happy to host a quick call if folks want to brainstorm in a high bandwidth setting. The main things we’re looking to document are the goals for the community this year (for example, attract more contributors or document more users in production or make Zuul easier to try out and use…) along with any key milestones, features, topics, etc. that we’ll want to communicate. Please take a look at the etherpad and add your thoughts over the next week: https://etherpad.openstack.org/p/zuul-2019-planning Thanks! Lauren -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Mon Jan 14 23:28:54 2019 From: corvus at inaugust.com (James E. Blair) Date: Mon, 14 Jan 2019 15:28:54 -0800 Subject: Zuul project update 2019-01-14 Message-ID: <87tviaddjd.fsf@meyer.lemoncheese.net> Hi, Today I approved our formal governance document[1], after receiving votes from all of the current maintainers. Thanks, everyone, for helping to create that! David is working on nodepool image build timeouts[2]. We made a Zuul release last week, and are about to make a Nodepool release. 
Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email [1] https://zuul-ci.org/docs/zuul/governance.html [2] https://review.openstack.org/629923 -Jim From feng.3.yang at nokia-sbell.com Tue Jan 15 06:24:59 2019 From: feng.3.yang at nokia-sbell.com (Yang, Feng 3. (NSB - CN/Hangzhou)) Date: Tue, 15 Jan 2019 06:24:59 +0000 Subject: Is it possible to support merge-mode: rebase in zuul merger? In-Reply-To: <87muo9pxdb.fsf@meyer.lemoncheese.net> References: <87muo9pxdb.fsf@meyer.lemoncheese.net> Message-ID: Hello Jim, Thanks a lot for your clarification. That's valuable for us. 😊 Thanks Br, Finn -----Original Message----- From: James E. Blair Sent: Thursday, January 10, 2019 1:14 AM To: Yang, Feng 3. (NSB - CN/Hangzhou) Cc: zuul-discuss at lists.zuul-ci.org Subject: Re: Is it possible to support merge-mode: rebase in zuul merger? "Yang, Feng 3. (NSB - CN/Hangzhou)" writes: > Hello Zuul experts, > > Happy New Year! > > I didn’t find clues in the historical mail list. So please allow me to ask here. > > We are using zuul v2 for ~2 years, recently we encountered a special > case: merged codes in zuul merger is NOT the same with gerrit rebase > codes. > On zuul side, we are using default mode ‘git merge -s resolve’ > On Gerrit repo side, we are using “rebase if necessary” strategy. > > Then we are thinking if it is possible to add a new merge-mode: rebase into zuul merger. > Did you plan to add rebase strategy into zuul before? Or do you have > any suggestions? I really appreciate it. Thanks. > > Looking forward your comments. Thank you in advance. 😊 Currently the supported merge modes[1] in Zuul are "merge", "merge-resolve", and "cherry-pick". In Gerrit, "rebase if necessary" attempts a fast-forward and if that fails, rebases the change, "rebase always" simply always rebases the change. The differences between "rebase always" and "cherry pick" in Gerrit only relate to Gerrit's own dependency handling ("cherry pick" ignores dependencies and "rebase always" does not). That doesn't apply to Zuul, so if you use "cherry-pick" as the merge mode for a project, you should get behavior similar to Gerrit's "rebase always". That's not the same as "rebase if necessary", but perhaps it's closer than the merge strategies. It should be possible to implement "rebase if necessary" in Zuul -- I think it would just attempt a fast forward, and, if that fails, cherry-pick (a rebase of a single patch is the same as a cherry-pick). [1] https://zuul-ci.org/docs/zuul/user/config.html#attr-project.merge-mode -Jim From mhuin at redhat.com Tue Jan 15 18:25:36 2019 From: mhuin at redhat.com (Matthieu Huin) Date: Tue, 15 Jan 2019 19:25:36 +0100 Subject: Discrepancies in zuul-jobs Message-ID: Hello, There seems to be a few discrepancies in the way some roles are implemented in zuul-jobs. The main problems I have encountered are: [1] discrepancies in dependencies management: for example the upload-to-pypi job assumes twine is installed on the build host, and will fail if twine is missing; whereas the npm-related jobs install nodejs and npm in the playbooks. [2] discrepancies in OS support: some of the npm-related jobs assume they're running on Debian or Ubuntu (explicit call to apt to install dependencies), other jobs go with bindep. Concerning [1], maybe guidelines for writing jobs and the kind of assumptions that can be made about execution environments should be set; I don't remember seeing any in the documentation but let me know if I am wrong. 
Anyhow, should we assume that jobs will handle pre-requisites and dependencies or should this be left up to the node image maintainers? Concerning [2], the Software Factory is working on expanding third party CI on zuul-jobs by adding coverage on CentOS and Fedora. This is actually how I encountered the problem with the npm jobs. This will help detect these kinds of problems. As with [1], guidelines should be issued about OS support of zuul-jobs. If you think this is the way to go I can open a review to start a section on guidelines for writing jobs in the documentation. WDYT? Cheers, MHU -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Jan 15 18:37:16 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 15 Jan 2019 18:37:16 +0000 Subject: Discrepancies in zuul-jobs In-Reply-To: References: Message-ID: <20190115183716.q5wwxvkocmgjierb@yuggoth.org> On 2019-01-15 19:25:36 +0100 (+0100), Matthieu Huin wrote: [...] > some of the npm-related jobs assume they're running on Debian or > Ubuntu (explicit call to apt to install dependencies), other jobs > go with bindep. [...] Bindep can be a useful shortcut at times, but we should remember that its primary use case is in setting up targeted dev environments for folks working on modifying and testing software locally. Leveraging an existing bindep.txt (or a specific named profile within one) for certain CI jobs may be convenient but this is not really the point of the tool. I don't personally think bindep lists should be the preferred means of communicating job-specific (rather than project-specific) dependencies, especially since the job definition language we use (Ansible) provides its own package management interface. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pabelanger at redhat.com Tue Jan 15 18:45:34 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 15 Jan 2019 13:45:34 -0500 Subject: Discrepancies in zuul-jobs In-Reply-To: References: Message-ID: <20190115184534.GA3422@localhost.localdomain> On Tue, Jan 15, 2019 at 07:25:36PM +0100, Matthieu Huin wrote: > Hello, > > There seems to be a few discrepancies in the way some roles are implemented > in zuul-jobs. The main problems I have encountered are: > > [1] discrepancies in dependencies management: for example the > upload-to-pypi job assumes twine is installed on the build host, and will > fail if twine is missing; whereas the npm-related jobs install nodejs and > npm in the playbooks. > [2] discrepancies in OS support: some of the npm-related jobs assume > they're running on Debian or Ubuntu (explicit call to apt to install > dependencies), other jobs go with bindep. > I would prefer if we didn't call out to apt / yum directly in jobs and use the package task. Then, we can just define package dependencies per OS. I also prefer to keep specific repos outside of the job, and handle that setup via a pre-run. That way, jobs are not hardcoded to specific versions or repos. Personally, I like the idea of placing job dependencies in bindep, then parent the job to unittests and things should just work. The downside to this approach is in the case of a large number of repos.
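For illustration, a minimal sketch of the pattern Paul describes above -- using Ansible's generic package module together with per-OS variable files instead of calling apt or yum directly -- might look like this (the role layout and variable names here are assumptions made up for the example, not existing zuul-jobs content):

# roles/install-nodejs/vars/Debian.yaml would define, for example:
#   nodejs_packages: [nodejs, npm]
# roles/install-nodejs/vars/RedHat.yaml would define, for example:
#   nodejs_packages: [nodejs]

# roles/install-nodejs/tasks/main.yaml
- name: Load the per-distribution package names
  include_vars: "{{ ansible_os_family }}.yaml"

- name: Install job dependencies with the distribution's package manager
  become: yes
  package:
    name: "{{ nodejs_packages }}"
    state: present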
> Concerning [1], maybe guidelines for writing jobs and the kind of > assumptions that can be made about execution environments should be set; I > don't remember seeing any in the documentation but let me know if I am > wrong. Anyhow, should we assume that jobs will handle pre-requisites and > dependencies or should this be left up to the node images maintainers? > > Concerning [2], the Software Factory is working on expanding third party CI > on zuul-jobs by adding coverage on CentOS and Fedora. This is actually how > I encountered the problem with the npm jobs. This will help detect these > kinds of problems. As with [1], guidelines should be issued about OS > support of zuul-jobs. > > If you think this is the way to go I can open a review to start a section > on guidelines for writing jobs in the documentation. > > WDYT ? > > Cheers, > > MHU > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From dmsimard at redhat.com Tue Jan 15 18:53:05 2019 From: dmsimard at redhat.com (David Moreau Simard) Date: Tue, 15 Jan 2019 13:53:05 -0500 Subject: Discrepancies in zuul-jobs In-Reply-To: References: Message-ID: David Moreau Simard dmsimard = [irc, github, twitter] On Tue, Jan 15, 2019 at 1:26 PM Matthieu Huin wrote: > > discrepancies in OS support: some of the npm-related jobs assume they're running on Debian or Ubuntu (explicit call to apt to install dependencies), other jobs go with bindep. > It's hard to maintain every distro for every role -- there needs to be interest for it and contributors need to show up. For this particular case, I actually happen to have sent a patch for it [1] although I still want to write integration tests before removing my WIP. I'm thinking about this again and I guess one thing we could do is to make better use of Ansible's meta/main.yml file for describing what is supported and what isn't [2]. [1]: https://review.openstack.org/#/c/629554/ [2]: https://galaxy.ansible.com/docs/contributing/creating_role.html#role-metadata From corvus at inaugust.com Tue Jan 15 23:00:17 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 15 Jan 2019 15:00:17 -0800 Subject: Discrepancies in zuul-jobs In-Reply-To: (Matthieu Huin's message of "Tue, 15 Jan 2019 19:25:36 +0100") References: Message-ID: <87bm4hbk72.fsf@meyer.lemoncheese.net> Matthieu Huin writes: > Hello, > > There seems to be a few discrepancies in the way some roles are implemented > in zuul-jobs. The main problems I have encountered are: > > [1] discrepancies in dependencies management: for example the > upload-to-pypi job assumes twine is installed on the build host, and will > fail if twine is missing; whereas the npm-related jobs install nodejs and > npm in the playbooks. > [2] discrepancies in OS support: some of the npm-related jobs assume > they're running on Debian or Ubuntu (explicit call to apt to install > dependencies), other jobs go with bindep. > > Concerning [1], maybe guidelines for writing jobs and the kind of > assumptions that can be made about execution environments should be set; I > don't remember seeing any in the documentation but let me know if I am > wrong. Anyhow, should we assume that jobs will handle pre-requisites and > dependencies or should this be left up to the node images maintainers? > > Concerning [2], the Software Factory is working on expanding third party CI > on zuul-jobs by adding coverage on CentOS and Fedora. 
This is actually how > I encountered the problem with the npm jobs. This will help detect these > kinds of problems. As with [1], guidelines should be issued about OS > support of zuul-jobs. > > If you think this is the way to go I can open a review to start a section > on guidelines for writing jobs in the documentation. I think addressing both issues and documenting the outcome in the zuul-jobs repo is great. Other folks have responded with specific and sometimes contradictory suggestions. I think that's good too, because it clearly indicates the need. :) -Jim From mhuin at redhat.com Tue Jan 15 23:07:06 2019 From: mhuin at redhat.com (Matthieu Huin) Date: Wed, 16 Jan 2019 00:07:06 +0100 Subject: Discrepancies in zuul-jobs In-Reply-To: <87bm4hbk72.fsf@meyer.lemoncheese.net> References: <87bm4hbk72.fsf@meyer.lemoncheese.net> Message-ID: On Wed, Jan 16, 2019 at 12:00 AM James E. Blair wrote: > Matthieu Huin writes: > > > Hello, > > > > There seems to be a few discrepancies in the way some roles are > implemented > > in zuul-jobs. The main problems I have encountered are: > > > > [1] discrepancies in dependencies management: for example the > > upload-to-pypi job assumes twine is installed on the build host, and will > > fail if twine is missing; whereas the npm-related jobs install nodejs and > > npm in the playbooks. > > [2] discrepancies in OS support: some of the npm-related jobs assume > > they're running on Debian or Ubuntu (explicit call to apt to install > > dependencies), other jobs go with bindep. > > > > Concerning [1], maybe guidelines for writing jobs and the kind of > > assumptions that can be made about execution environments should be set; > I > > don't remember seeing any in the documentation but let me know if I am > > wrong. Anyhow, should we assume that jobs will handle pre-requisites and > > dependencies or should this be left up to the node images maintainers? > > > > Concerning [2], the Software Factory is working on expanding third party > CI > > on zuul-jobs by adding coverage on CentOS and Fedora. This is actually > how > > I encountered the problem with the npm jobs. This will help detect these > > kinds of problems. As with [1], guidelines should be issued about OS > > support of zuul-jobs. > > > > If you think this is the way to go I can open a review to start a section > > on guidelines for writing jobs in the documentation. > > I think addressing both issues and documenting the outcome in the > zuul-jobs repo is great. > > Other folks have responded with specific and sometimes contradictory > suggestions. I think that's good too, because it clearly indicates > the need. :) > > -Jim > > Then I'll open a review in the morning, unless someone else would like to own this. Looking forward to y'all's input! -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Thu Jan 17 00:49:56 2019 From: claire at openstack.org (Claire Massey) Date: Wed, 16 Jan 2019 18:49:56 -0600 Subject: Zuul featured in Open Infrastructure Community Newsletter Message-ID: <0B917E70-2BC5-4AA6-A63D-D209F509DF6E@openstack.org> Hi everyone, Today, OSF distributed the next Open Infrastructure Community Newsletter. The bi-weekly newsletters provide a digest of the latest developments and activities across open infrastructure projects, events and users. This week, we highlighted the latest with *Zuul* as well as a brief update about the Open Infrastructure Summit and additional OSF project news. 
You can checkout the full newsletter on Superuser [1], and if you are interested in receiving the upcoming newsletters, you can subscribe here [2]. If you would like to contribute or have feedback, please reach out to community at openstack.org . Thanks, Claire [1] http://superuser.openstack.org/articles/inside-open-infrastructure-1-16/ [2] https://www.openstack.org/community/email-signup -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhuin at redhat.com Thu Jan 17 14:03:07 2019 From: mhuin at redhat.com (Matthieu Huin) Date: Thu, 17 Jan 2019 15:03:07 +0100 Subject: Discrepancies in zuul-jobs In-Reply-To: References: <87bm4hbk72.fsf@meyer.lemoncheese.net> Message-ID: Hello, I have opened this review to further the discussion: https://review.openstack.org/631507 The gist of it is suggesting that roles should be self-sufficient (ie handle dependencies), OS-agnostic as much as possible, and "host type"-agnostic (ie VM or Container) as much as possible. On Wed, Jan 16, 2019 at 12:07 AM Matthieu Huin wrote: > > > On Wed, Jan 16, 2019 at 12:00 AM James E. Blair > wrote: > >> Matthieu Huin writes: >> >> > Hello, >> > >> > There seems to be a few discrepancies in the way some roles are >> implemented >> > in zuul-jobs. The main problems I have encountered are: >> > >> > [1] discrepancies in dependencies management: for example the >> > upload-to-pypi job assumes twine is installed on the build host, and >> will >> > fail if twine is missing; whereas the npm-related jobs install nodejs >> and >> > npm in the playbooks. >> > [2] discrepancies in OS support: some of the npm-related jobs assume >> > they're running on Debian or Ubuntu (explicit call to apt to install >> > dependencies), other jobs go with bindep. >> > >> > Concerning [1], maybe guidelines for writing jobs and the kind of >> > assumptions that can be made about execution environments should be >> set; I >> > don't remember seeing any in the documentation but let me know if I am >> > wrong. Anyhow, should we assume that jobs will handle pre-requisites and >> > dependencies or should this be left up to the node images maintainers? >> > >> > Concerning [2], the Software Factory is working on expanding third >> party CI >> > on zuul-jobs by adding coverage on CentOS and Fedora. This is actually >> how >> > I encountered the problem with the npm jobs. This will help detect these >> > kinds of problems. As with [1], guidelines should be issued about OS >> > support of zuul-jobs. >> > >> > If you think this is the way to go I can open a review to start a >> section >> > on guidelines for writing jobs in the documentation. >> >> I think addressing both issues and documenting the outcome in the >> zuul-jobs repo is great. >> >> Other folks have responded with specific and sometimes contradictory >> suggestions. I think that's good too, because it clearly indicates >> the need. :) >> >> -Jim >> >> > Then I'll open a review in the morning, unless someone else would like to > own this. Looking forward to y'all's input! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Thu Jan 17 17:37:45 2019 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 17 Jan 2019 17:37:45 +0000 Subject: Two new Zuul ideas pass-to-parent: true and named project groups Message-ID: Hey all! I had an idea for a zuul feature to make the containers jobs (and jobs like them) easier. 
And then one that also came to mind to make some of our central jobs easier to deal with in openstack. First idea: Right now, secrets, for good reason, are only usable by jobs in their own repo. However, for use-cases like the containers job, or publishing to pypi, or readthedocs even - it's hard to have a job in zuul-jobs that is usable for doing those tasks because they all need secrets. The idea is to add a flag to the secret request, like this: - job: name: system-config-build-image parent: build-container-image abstract: true allowed-projects: openstack-infra/system-config - job: name: system-config-upload-image parent: system-config-build-image secrets: name: credentials secret: system-config-dockerhub allowed-projects: openstack-infra/system-config secrets: name: credentials secret: system-config-dockerhub pass-to-parent: true By passing the flag, we tell zuul to pass the secret variable file to every playbook in the inheritance chain above the job (but still not child jobs). Obviously this requires that the user of the job trust the parent job. In fact, maybe we even add a "warning" - so that if someone submits a job config to zuul with a pass-to-parent flag, it leaves an inline comment on the change (without a negative vote) warning about the implications of choosing to do that - that way reviewers can be sure they understand that they're trusting someone upstream from them. Second idea: Somewhere - maybe in config repo - but maybe in main.yaml - or maybe it's even just a normal global object - we allow for the creation of arbitrary named groups of projects. So - for instance, if we go for config-projects, something like: - project-group: name: openstack projects: - openstack/nova - openstack/swift - project-group: name: opendev projects: - openstack-infra/system-config - openstack-infra/bindep Then, we allow project-group entries to be used in allowed-projects, like this: - job: name: publish-openstack-sphinx-docs parent: publish-openstack-sphinx-docs-base final: true pre-run: post-run: - playbooks/sphinx/post.yaml allowed-projects: - project-group: openstack This would allow people (say AJaeger, or whoever) to maintain a list of "official" projects that can publish docs to docs.openstack.org ... but let any of those projects add the job, so that we didn't have to put the jobs into project pipelines in project-config. We could do a similar thing for container jobs. Say put this in system-config: - job: name: opendev-upload-image parent: opendev-build-image secrets: name: credentials secret: opendev-dockerhub allowed-projects: - project-group: opendev Then in bindep we have: - job: name: bindep-upload-image parent: opendev-upload-image final: true vars: &jinja-init_vars images: - target: bindepo repository: opendevorg/bindep The more I type things, the more I think there's no need to restrict the creation of named project groups. I could see openstack projects, like say kolla, wanting to create a project-group in one of their repos, so that they could put a secret into a job in a central repo for their dockerhub org, but use the job they make in each of their individual repos. Some of this could be done with tenants - but those are a bit more heavy-weight - and we already see that inside of OpenStack there are teams with many repos. There's also some similarity in problem solved between the two - both make for better reuse of jobs that needs privileges. But I think it's two different use-cases. 
One is about facilitating writing jobs in zuul-jobs (or other generic job locations) that need secrets to be passed to them but are otherwise generic and parameterizable. The second is about having discreet jobs that have specific secrets that should only be used in a subset of projects - but where it's possible that a given subset of projects might get access to a group of discreet credentialed jobs and managing membership in the group makes more logical sense than managing project-pipeline assignment. It also allows for having child-jobs of discreet credentialed jobs (like the container example) Anywho - what do people think? Monty From corvus at inaugust.com Thu Jan 17 18:11:43 2019 From: corvus at inaugust.com (James E. Blair) Date: Thu, 17 Jan 2019 10:11:43 -0800 Subject: Two new Zuul ideas pass-to-parent: true and named project groups In-Reply-To: (Monty Taylor's message of "Thu, 17 Jan 2019 17:37:45 +0000") References: Message-ID: <874la7w3vk.fsf@meyer.lemoncheese.net> Monty Taylor writes: > Hey all! > > I had an idea for a zuul feature to make the containers jobs (and jobs > like them) easier. And then one that also came to mind to make some of > our central jobs easier to deal with in openstack. > > First idea: > > Right now, secrets, for good reason, are only usable by jobs in their > own repo. However, for use-cases like the containers job, or > publishing to pypi, or readthedocs even - it's hard to have a job in > zuul-jobs that is usable for doing those tasks because they all need > secrets. > > The idea is to add a flag to the secret request, like this: > > - job: > name: system-config-build-image > parent: build-container-image > abstract: true > allowed-projects: openstack-infra/system-config > > - job: > name: system-config-upload-image > parent: system-config-build-image > secrets: > name: credentials > secret: system-config-dockerhub > allowed-projects: openstack-infra/system-config > secrets: > name: credentials > secret: system-config-dockerhub > pass-to-parent: true > > By passing the flag, we tell zuul to pass the secret variable file to > every playbook in the inheritance chain above the job (but still not > child jobs). Obviously this requires that the user of the job trust > the parent job. > > In fact, maybe we even add a "warning" - so that if someone submits a > job config to zuul with a pass-to-parent flag, it leaves an inline > comment on the change (without a negative vote) warning about the > implications of choosing to do that - that way reviewers can be sure > they understand that they're trusting someone upstream from them. I was having similar thoughts, unsurprisingly, as I was working on the container jobs as well. Basically, we have to ask folks to create boilerplate jobs in each project because of this, which isn't terrible, but it's not as nice as, say, our pypi publication jobs. Moreover, this would even let us move our pypi publication jobs into zuul-jobs, and then folks can set up site-local publication jobs which inherit from them and just add secrets in their project-config repos. I was about to send out an email about how to use the container publication roles that we just wrote, however, if this idea works out, I think it would significantly change how I would recommend those jobs be constructed, so I'll delay further work on the container roles and jobs while I start looking into this. 
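To make that concrete: under the proposed pass-to-parent behaviour, a site-local publication job might end up looking something like the sketch below. The parent job name, secret name, and playbook variable are made up for illustration, and pass-to-parent itself is still only a proposal at this point, so none of this exists as shown.

- secret:
    name: our-pypi-credentials
    data:
      username: our-pypi-user
      # the site-specific !encrypted/pkcs1-oaep ciphertext for the
      # password would go here

- job:
    name: our-upload-pypi
    parent: upload-pypi          # hypothetical generic publication job in zuul-jobs
    secrets:
      name: pypi_info            # variable name the parent's playbooks would read
      secret: our-pypi-credentials
      pass-to-parent: true       # hand the secret to the parent job's playbooks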
> Second idea: > > Somewhere - maybe in config repo - but maybe in main.yaml - or maybe > it's even just a normal global object - we allow for the creation of > arbitrary named groups of projects. So - for instance, if we go for > config-projects, something like: > > - project-group: > name: openstack > projects: > - openstack/nova > - openstack/swift > > - project-group: > name: opendev > projects: > - openstack-infra/system-config > - openstack-infra/bindep > > Then, we allow project-group entries to be used in allowed-projects, > like this: > > - job: > name: publish-openstack-sphinx-docs > parent: publish-openstack-sphinx-docs-base > final: true > pre-run: > post-run: > - playbooks/sphinx/post.yaml > allowed-projects: > - project-group: openstack > > This would allow people (say AJaeger, or whoever) to maintain a list > of "official" projects that can publish docs to docs.openstack.org > ... but let any of those projects add the job, so that we didn't have > to put the jobs into project pipelines in project-config. We could do > a similar thing for container jobs. Say put this in system-config: > > - job: > name: opendev-upload-image > parent: opendev-build-image > secrets: > name: credentials > secret: opendev-dockerhub > allowed-projects: > - project-group: opendev > > Then in bindep we have: > > - job: > name: bindep-upload-image > parent: opendev-upload-image > final: true > vars: &jinja-init_vars > images: > - target: bindepo > repository: opendevorg/bindep > > The more I type things, the more I think there's no need to restrict > the creation of named project groups. I could see openstack projects, > like say kolla, wanting to create a project-group in one of their > repos, so that they could put a secret into a job in a central repo > for their dockerhub org, but use the job they make in each of their > individual repos. > > Some of this could be done with tenants - but those are a bit more > heavy-weight - and we already see that inside of OpenStack there are > teams with many repos. > > There's also some similarity in problem solved between the two - both > make for better reuse of jobs that needs privileges. But I think it's > two different use-cases. One is about facilitating writing jobs in > zuul-jobs (or other generic job locations) that need secrets to be > passed to them but are otherwise generic and parameterizable. The > second is about having discreet jobs that have specific secrets that > should only be used in a subset of projects - but where it's possible > that a given subset of projects might get access to a group of > discreet credentialed jobs and managing membership in the group makes > more logical sense than managing project-pipeline assignment. It also > allows for having child-jobs of discreet credentialed jobs (like the > container example) I lean toward plain config object. This could also be useful for "required-projects". -Jim From cboylan at sapwetik.org Thu Jan 17 18:57:30 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 17 Jan 2019 10:57:30 -0800 Subject: Two new Zuul ideas pass-to-parent: true and named project groups In-Reply-To: References: Message-ID: <1547751450.2203850.1637355592.0876B07B@webmail.messagingengine.com> On Thu, Jan 17, 2019, at 9:37 AM, Monty Taylor wrote: > Hey all! > > I had an idea for a zuul feature to make the containers jobs (and jobs > like them) easier. And then one that also came to mind to make some of > our central jobs easier to deal with in openstack. 
> > First idea: > > Right now, secrets, for good reason, are only usable by jobs in their > own repo. However, for use-cases like the containers job, or publishing > to pypi, or readthedocs even - it's hard to have a job in zuul-jobs that > is usable for doing those tasks because they all need secrets. > > The idea is to add a flag to the secret request, like this: > > > - job: > name: system-config-build-image > parent: build-container-image > abstract: true > allowed-projects: openstack-infra/system-config > > - job: > name: system-config-upload-image > parent: system-config-build-image > secrets: > name: credentials > secret: system-config-dockerhub > allowed-projects: openstack-infra/system-config > secrets: > name: credentials > secret: system-config-dockerhub > pass-to-parent: true > > By passing the flag, we tell zuul to pass the secret variable file to > every playbook in the inheritance chain above the job (but still not > child jobs). Obviously this requires that the user of the job trust the > parent job. > > In fact, maybe we even add a "warning" - so that if someone submits a > job config to zuul with a pass-to-parent flag, it leaves an inline > comment on the change (without a negative vote) warning about the > implications of choosing to do that - that way reviewers can be sure > they understand that they're trusting someone upstream from them. > It would also be helpful to clearly list (or illustrate if a graph is better) the list of playbooks that would get access and links to their content. To give people an idea of what their risk surface is. Clark From corvus at inaugust.com Thu Jan 17 19:20:28 2019 From: corvus at inaugust.com (James E. Blair) Date: Thu, 17 Jan 2019 11:20:28 -0800 Subject: Two new Zuul ideas pass-to-parent: true and named project groups In-Reply-To: <1547751450.2203850.1637355592.0876B07B@webmail.messagingengine.com> (Clark Boylan's message of "Thu, 17 Jan 2019 10:57:30 -0800") References: <1547751450.2203850.1637355592.0876B07B@webmail.messagingengine.com> Message-ID: <874la7um4j.fsf@meyer.lemoncheese.net> Clark Boylan writes: > It would also be helpful to clearly list (or illustrate if a graph is > better) the list of playbooks that would get access and links to their > content. To give people an idea of what their risk surface is. Due to the late-binding nature of Zuul jobs, I don't think that will be practical pre-review (there may be a lot of conditionals for different branches). We could try it, but it could get out of hand quickly. It should be possible to see what playbooks saw the secret after a build is completed. It will also be possible to explore the different branch conditions using the web dashboard. And the forthcoming freeze API endpoint would be able to indicate it as well. I'm planning to omit the warning message from my initial implementation of this anyway, because I think the idea of leaving warning messages for potential security gotchas warrants separate consideration (if we do it, we should plan on doing it for more things, so we should identify what those are and the criteria for future inclusion). Implementing it partially raises the risk of people becoming complacent elsewhere. Instead I think we should, at least initially, rely on the current practice of "safe by default" and warnings in documentation. 
-Jim From cboylan at sapwetik.org Thu Jan 17 21:03:01 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 17 Jan 2019 13:03:01 -0800 Subject: Infra team upgrading review.openstack.org Gerrit from 2.13.9 to 2.13.12 January 18 at about 1700UTC Message-ID: <1547758981.2237058.1637449632.37C152F2@webmail.messagingengine.com> We will be performing a minor Gerrit upgrade to version 2.13.12 tomorrow (January 18, 2019) at 1700UTC. We've tested this upgrade on our dev server, https://review-dev.openstack.org, and expect it to be a quick upgrade. Any outage shouldn't last more than 10 minutes. We will let our configuration management tooling manage the upgrade so we won't have an exact time, but will try to get it as close to 1700UTC as possible. Feel free to test out the new version on the dev server. We are happy to answer any questions you might have as well. Sorry for the short notice and the cross post, Clark From iwienand at redhat.com Mon Jan 21 00:46:27 2019 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 21 Jan 2019 11:46:27 +1100 Subject: Two new Zuul ideas pass-to-parent: true and named project groups In-Reply-To: References: Message-ID: <20190121004627.GA23913@fedora19.localdomain> On Thu, Jan 17, 2019 at 05:37:45PM +0000, Monty Taylor wrote: > Hey all! > Right now, secrets, for good reason, are only usable by jobs in their own > repo. However, for use-cases like the containers job, or publishing to pypi, > or readthedocs even - it's hard to have a job in zuul-jobs that is usable > for doing those tasks because they all need secrets. Thanks; this sounds good. This was a major consideration in readthedocs, as they offer project-token based authentication; however that would have required each project to encrypt their own secret and then build the job themselves, rather than simply include a template. If that secret could be passed up to a centralised job via template arguments, that would be helpful. Especially since currently RTD publishing via the centralised user is broken thanks to [1] (further discussion in [2]). One minor concern is that generally these parent jobs won't be logging anything particularly useful (to avoid blasting the secret out). From a self-service POV, this can make it quite opaque as to what's going wrong, especially since it's quite easy to accidentally drop some extra whitespace in when encoding your secret that can mess things up. -i [1] https://github.com/rtfd/readthedocs.org/issues/4986 [2] https://github.com/rtfd/readthedocs.org/pull/5009 From tdecacqu at redhat.com Tue Jan 22 06:44:50 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 22 Jan 2019 15:44:50 +0900 Subject: Parameterized Build Message-ID: <877eex2ntp.tristanC@fedora> Hello, It seems like it would help the migration of Jenkins users if they could have a similar feature[0] in Zuul to manually trigger a job with custom parameters. One solution is to create a fake change or branch to trigger a job, but perhaps Zuul can offer a better user experience. In particular, the Jenkins implementation offers a nice form to set the parameters. I started to implement this feature with a jobtrigger driver[1] along with a web page[2] to expose the job's variables but James asked to describe the use case first. Our current use case is for QA/QE engineers who would like to run a single (usualy long running) job with custom parameters such as the product version, the system version, ... 
With Jenkins, we can have lots of jobs grouped as multijob running in different phases, and the parameterized build is a very convenient way to run a single job, sometime using a parameter to add a breakpoint and get access to the environment. Another solution would be to build on top of the zuul-runner to provide a similar workflow, but it seems like more efficient if we could re-use the zuul-scheduler and let it request/cleanup the nodes. In summary, could we adjust the architecture to support executing an individual job instead of an entire job graph and could we support a zuul-web trigger? Regards, -Tristan [0]: https://wiki.jenkins.io/display/JENKINS/Parameterized+Build [1]: https://review.openstack.org/555153 [2]: https://review.openstack.org/554839 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From claire at openstack.org Tue Jan 22 14:31:01 2019 From: claire at openstack.org (Claire Massey) Date: Tue, 22 Jan 2019 06:31:01 -0800 Subject: CFP Open Until January 23, Open Infrastructure Summit in Denver In-Reply-To: <22136D38-E6DF-4373-AE18-24DD35EAC544@openstack.org> References: <22136D38-E6DF-4373-AE18-24DD35EAC544@openstack.org> Message-ID: <4A952D2A-929B-407D-95C1-EF7556952D0B@openstack.org> Friendly reminder - *Tomorrow, Jan 23*, is the CFP deadline for the Denver Open Infrastructure Summit (formerly the OpenStack Summit). Submit talks here: https://www.openstack.org/summit/denver-2019/. > On Dec 18, 2018, at 8:15 AM, Claire Massey wrote: > > Hi everyone, > > FYI - the CFP is now open for the first Open Infrastructure Summit (formerly the OpenStack Summit) which will be held in Denver, Colorado April 29 - May 1, 2019. > > Wednesday, *January 23* is the deadline to Submit presentations . > > The Open Infrastructure Summit is organized by OSF and designed to be a place where open source infrastructure communities can come together and collaborate in the open. Zuul will have a large and prominent presence at the event so please submit talks to the CFP and plan to attend! > > SUBMIT YOUR PRESENTATION > Important info: > Based on previous Program Committee and attendee feedback, we have added / updated three Tracks: Security, Getting Started, and Open Development (previously Open Source Community). You can find the Track descriptions here . > All of the OSF pilot projects —including Airship, Kata Containers, StarlingX and Zuul — will be front and center alongside other open source communities like Ansible, Cloud Foundry, Docker, Kubernetes, and many more. > The Open Infrastructure Summit (formerly the OpenStack Summit), has evolved to recognize our diverse audience, and to signal to the market that the event is relevant for all IT infrastructure decision makers. > If you’re interested in influencing the Summit content, apply to be a Programming Committee member *, where you can also find a full list of time requirements and expectations. Nominations will close on January 4, 2019. The content submission process for the Forum and Project Teams Gathering will be managed separately in the upcoming months. > > *OSF Staff will serve as the Programming Committee for the Getting Started Track. > > Denver Summit registration and sponsor sales are currently open. Learn more and email summit at openstack.org with any questions. > > Please email speakersupport at openstack.org with any questions or feedback. 
> > Thanks, > Claire -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at mpagel.de Tue Jan 22 18:30:34 2019 From: mike at mpagel.de (Mike Pagel) Date: Tue, 22 Jan 2019 19:30:34 +0100 Subject: Parameterized Build In-Reply-To: <877eex2ntp.tristanC@fedora> Message-ID: That would be a great enhancement! We would use it for various workflows often related to release management and explicit build promotion. Today we are using release pipelines where the tag name sort of encodes parameters like versions, which are then parsed in the build. Works but certainly is a workaround. In fact, being able to trigger a job via a web page, even without parameters would be great from my point of view. Mike Sent from Nine ________________________________ Von: Tristan Cacqueray Gesendet: Dienstag, 22. Januar 2019 07:44 An: zuul-discuss at lists.zuul-ci.org Betreff: Parameterized Build Hello, It seems like it would help the migration of Jenkins users if they could have a similar feature[0] in Zuul to manually trigger a job with custom parameters. One solution is to create a fake change or branch to trigger a job, but perhaps Zuul can offer a better user experience. In particular, the Jenkins implementation offers a nice form to set the parameters. I started to implement this feature with a jobtrigger driver[1] along with a web page[2] to expose the job's variables but James asked to describe the use case first. Our current use case is for QA/QE engineers who would like to run a single (usualy long running) job with custom parameters such as the product version, the system version, ... With Jenkins, we can have lots of jobs grouped as multijob running in different phases, and the parameterized build is a very convenient way to run a single job, sometime using a parameter to add a breakpoint and get access to the environment. Another solution would be to build on top of the zuul-runner to provide a similar workflow, but it seems like more efficient if we could re-use the zuul-scheduler and let it request/cleanup the nodes. In summary, could we adjust the architecture to support executing an individual job instead of an entire job graph and could we support a zuul-web trigger? Regards, -Tristan [0]: https://wiki.jenkins.io/display/JENKINS/Parameterized+Build [1]: https://review.openstack.org/555153 [2]: https://review.openstack.org/554839 _______________________________________________ Zuul-discuss mailing list Zuul-discuss at lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Jan 22 21:41:35 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 22 Jan 2019 13:41:35 -0800 Subject: Parameterized Build In-Reply-To: <877eex2ntp.tristanC@fedora> (Tristan Cacqueray's message of "Tue, 22 Jan 2019 15:44:50 +0900") References: <877eex2ntp.tristanC@fedora> Message-ID: <87imygjrow.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > Hello, > > It seems like it would help the migration of Jenkins users if they > could have a similar feature[0] in Zuul to manually trigger a job > with custom parameters. > > One solution is to create a fake change or branch to trigger a > job, but perhaps Zuul can offer a better user experience. In > particular, the Jenkins implementation offers a nice form to > set the parameters. 
> > I started to implement this feature with a jobtrigger driver[1] > along with a web page[2] to expose the job's variables but James > asked to describe the use case first. > > Our current use case is for QA/QE engineers who would like to > run a single (usualy long running) job with custom parameters > such as the product version, the system version, ... > With Jenkins, we can have lots of jobs grouped as multijob running > in different phases, and the parameterized build is a very convenient > way to run a single job, sometime using a parameter to add a breakpoint > and get access to the environment. It is an explicit design choice that a Job in Zuul is completely defined in Zuul's configuration and that is stored in Git. I recognize that in other systems there are ephemeral ways of triggering jobs, but Zuul is designed to facilitate git-driven development and deployment, so any configuration, including job parameters, need to be able to be represented in git. The first thing you mentioned, a "fake" change to run a job with certain parameters. In the case of a truly ad-hoc need -- as in, an engineer needs to alter a job to run in a particular way to debug a single issue -- I, and others, have found that to be a great experience. Zuul's in-repo job definition allows any job to be quickly customized to run with any parameters or breakpoints, etc. And all of that done in a way that is easily auditable, visible and shareable with other folks, enabling collaboration. In the case where more permanent parameters are needed, such as running jobs with a fixed set of parametric values such as product or system versions, I think that with job inheritance and the various kinds of variables, there are a lot of tools to facilitate this. I would be interested in seeing cases where those become insufficient, and perhaps we could improve them. > In summary, could we adjust the architecture to support executing > an individual job instead of an entire job graph and could we > support a zuul-web trigger? I'm open to exploring the idea of a filtered job graph; I don't think it's necessarily in opposition to Zuul's design, though there are some tricky questions to answer in implementation. While I'm not convinced that triggers should be able to pass parameters to jobs, as input, I'm not opposed to the web trigger itself; I think that's a fine thing to add and should be relatively straightforward. -Jim From tdecacqu at redhat.com Wed Jan 23 05:32:50 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 23 Jan 2019 14:32:50 +0900 Subject: Parameterized Build In-Reply-To: <87imygjrow.fsf@meyer.lemoncheese.net> References: <877eex2ntp.tristanC@fedora> <87imygjrow.fsf@meyer.lemoncheese.net> Message-ID: <87womwkkfx.tristanC@fedora> On Tue, Jan 22, 2019 at 13:41 James E. Blair wrote: > Tristan Cacqueray writes: > >> Hello, >> >> It seems like it would help the migration of Jenkins users if they >> could have a similar feature[0] in Zuul to manually trigger a job >> with custom parameters. >> >> One solution is to create a fake change or branch to trigger a >> job, but perhaps Zuul can offer a better user experience. In >> particular, the Jenkins implementation offers a nice form to >> set the parameters. >> >> I started to implement this feature with a jobtrigger driver[1] >> along with a web page[2] to expose the job's variables but James >> asked to describe the use case first. 
>> >> Our current use case is for QA/QE engineers who would like to >> run a single (usualy long running) job with custom parameters >> such as the product version, the system version, ... >> With Jenkins, we can have lots of jobs grouped as multijob running >> in different phases, and the parameterized build is a very convenient >> way to run a single job, sometime using a parameter to add a breakpoint >> and get access to the environment. > > It is an explicit design choice that a Job in Zuul is completely defined > in Zuul's configuration and that is stored in Git. I recognize that in > other systems there are ephemeral ways of triggering jobs, but Zuul is > designed to facilitate git-driven development and deployment, so any > configuration, including job parameters, need to be able to be > represented in git. > > The first thing you mentioned, a "fake" change to run a job with certain > parameters. In the case of a truly ad-hoc need -- as in, an engineer > needs to alter a job to run in a particular way to debug a single issue > -- I, and others, have found that to be a great experience. Zuul's > in-repo job definition allows any job to be quickly customized to run > with any parameters or breakpoints, etc. And all of that done in a way > that is easily auditable, visible and shareable with other folks, > enabling collaboration. > > In the case where more permanent parameters are needed, such as > running jobs with a fixed set of parametric values such as product or > system versions, I think that with job inheritance and the various kinds > of variables, there are a lot of tools to facilitate this. I would be > interested in seeing cases where those become insufficient, and perhaps > we could improve them. > Considering a matrix of 5 products, 3 systems and 4 types of jobs, we are already at 60 different jobs. Using a "fake" change you could set any parameters to debug or validate a specific combinaison, but that is not a good user experience, especially if you are not familiar with Zuul git-driver development: the user has to manually create a project pipeline configuration and then figure out what are the available parameters' name by looking at possibly multiple parent jobs definition. Perhaps the creation of that "fake" change could be facilitated by the proposed web form, but this is also an odd user experience because once the job is completed, then the "fake" change has to be cleaned-up too. >> In summary, could we adjust the architecture to support executing >> an individual job instead of an entire job graph and could we >> support a zuul-web trigger? > > I'm open to exploring the idea of a filtered job graph; I don't think > it's necessarily in opposition to Zuul's design, though there are some > tricky questions to answer in implementation. > > While I'm not convinced that triggers should be able to pass parameters > to jobs, as input, I'm not opposed to the web trigger itself; I think > that's a fine thing to add and should be relatively straightforward. > That's good to hear, then if it's ok I'd like to rework the initial implementation. Thanks, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tdecacqu at redhat.com Wed Jan 23 05:37:29 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 23 Jan 2019 14:37:29 +0900 Subject: Parameterized Build In-Reply-To: References: Message-ID: <87tvi0kk86.tristanC@fedora> On Tue, Jan 22, 2019 at 19:30 Mike Pagel wrote: > That would be a great enhancement! We would use it for various workflows often related to release management and explicit build promotion. > > Today we are using release pipelines where the tag name sort of encodes parameters like versions, which are then parsed in the build. Works but certainly is a workaround. > > In fact, being able to trigger a job via a web page, even without parameters would be great from my point of view. > Hi Mike, Note that there is already a spec to enable re-enqueue a change from the REST API: https://review.openstack.org/562321 But that would trigger for the whole job graph configured for the pipeline, not just a single job. -Tristan > Mike > > Sent from Nine > ________________________________ > Von: Tristan Cacqueray > Gesendet: Dienstag, 22. Januar 2019 07:44 > An: zuul-discuss at lists.zuul-ci.org > Betreff: Parameterized Build > > > Hello, > > It seems like it would help the migration of Jenkins users if they > could have a similar feature[0] in Zuul to manually trigger a job > with custom parameters. > > One solution is to create a fake change or branch to trigger a > job, but perhaps Zuul can offer a better user experience. In > particular, the Jenkins implementation offers a nice form to > set the parameters. > > I started to implement this feature with a jobtrigger driver[1] > along with a web page[2] to expose the job's variables but James > asked to describe the use case first. > > Our current use case is for QA/QE engineers who would like to > run a single (usualy long running) job with custom parameters > such as the product version, the system version, ... > With Jenkins, we can have lots of jobs grouped as multijob running > in different phases, and the parameterized build is a very convenient > way to run a single job, sometime using a parameter to add a breakpoint > and get access to the environment. > > Another solution would be to build on top of the zuul-runner to > provide a similar workflow, but it seems like more efficient if > we could re-use the zuul-scheduler and let it request/cleanup > the nodes. > > In summary, could we adjust the architecture to support executing > an individual job instead of an entire job graph and could we > support a zuul-web trigger? > > Regards, > -Tristan > > [0]: https://wiki.jenkins.io/display/JENKINS/Parameterized+Build > [1]: https://review.openstack.org/555153 > [2]: https://review.openstack.org/554839 > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From gcerami at redhat.com Wed Jan 23 12:17:41 2019 From: gcerami at redhat.com (Gabriele Cerami) Date: Wed, 23 Jan 2019 12:17:41 +0000 Subject: Configurable autohold list of allowed build results. Message-ID: <20190123121741.onb6a4nacjus2oi2@localhost> Hi, autohold currently allows users to examine a live environment to proceed with post-mortem examination. 
On some occasions, trying to find the root cause of a bug by just examining the logs and the timeline of the failed environment may prove too difficult, and the preferable way may be to catch the failure at its root exactly when the root cause happens. In a mechanism similar to breakpoints, we may want to stop the build at a certain point, then proceed manually to prod for weak spots. The only way to get this behaviour from zuul currently is to request an autohold, and add a step that injects a fake failure into the build.

With [1] I'd like to suggest an alternative method: the list of build results that are allowed to put a node in hold state can be passed as an argument, rather than being hardcoded. In this way, the build may be configured to be brought forward up to a certain known-to-work point, without having to expressly fail, and the step-by-step manual debugging can then begin on the live environment.

It may be best suited to non-voting, experimental jobs, or during the development of new jobs, but it is a useful option nonetheless.

Any feedback welcome. Thanks.

[1] https://review.openstack.org/632498

From corvus at inaugust.com  Wed Jan 23 23:46:53 2019
From: corvus at inaugust.com (James E. Blair)
Date: Wed, 23 Jan 2019 15:46:53 -0800
Subject: Zuul jobs for container publishing
Message-ID: <87tvhz6ioi.fsf@meyer.lemoncheese.net>

Hi,

We recently added a new pipeline to OpenStack's Zuul, and three new jobs and roles to the zuul-jobs repo to support container publishing. Zuul users may be interested in adopting both the pipeline and the jobs into their own workflows.

The new pipeline is the "promote" pipeline. It runs jobs on changes after they merge, but unlike the "post" pipeline, it does not run with the resulting branch state after the merge -- it runs in the context of the change. This means that it's not suitable for building release artifacts, but that's not its purpose. Its purpose is to promote artifacts previously built in the gate pipeline to production. That is safe to do because anything built in the gate pipeline is built with the full future state of in-flight changes to all related projects and branches (even if it's built before they land). We build a lot of artifacts in the gate pipeline now, and then throw them away only to build them again in post. The promote pipeline is an attempt to avoid that waste and instead preserve artifacts built in gate and directly publish them -- but only if and when the changes for which they were built are merged.

The first functional use of this system is in building container images and publishing them on Docker Hub. We have begun using this in openstack-infra to build container images of third-party applications, but this will work just as well and as easily for building images of applications in our infrastructure (for example, we are also starting to use this to publish Zuul images).

To implement this, we wrote three jobs which are available now from zuul-jobs (and therefore can be used in any project or job):

* build-docker-image [1]
* upload-docker-image [2]
* promote-docker-image [3]

The first job should run in the check pipeline and simply runs "docker build" on the arguments passed to it. It is used to verify that the build succeeds.

The second job should run in the gate pipeline. It uploads the image to Docker Hub into the final repository location, but it only does so with a single tag with the form "change_123456" where '123456' is the Gerrit change number for the change under consideration.
This will mean that docker pulls won't accidentally fetch the image yet. But the image is staged in Docker Hub and is ready to be promoted if the change passes all its tests and merges. The third job runs in the promote pipeline. This job, unlike the others, only performs Docker Hub API calls and requires no build resources. Therefore it is a zero-node job in Zuul -- a job which only runs on the Zuul executor. This job finds a previously uploaded image with a "change_" tag, and re-tags it with any specified tags ("latest" by default, but you can also tag version numbers or anything else). It also cleans up unused "change_" tags so they don't clutter up the repository. The nice thing about zero-node jobs is that they start and finish very quickly (since they don't have to deal with resource allocation and contention). With this system, an image will be fully published within about a minute after the change merges. A complete example .zuul.yaml might look like this: - secret: name: myproject-dockerhub data: username: dockerhubuser password: !encrypted/pkcs1-oaep - DFlbrDM5eUMptMGIVMXV1g455xOJLi92UYF08Z2/JlIGu3t6v052o9FKlVyj1ZmpXs5+2 JTa5jHkLTvTsYs9fCaNcQc2nmViCyWNlbOMzjB17uiZOaYFNs1sMqZcUZbGEz7Y8ds6Qq NBXI10jWFPTah4QxUuBvUbT3vmjnUToCzexl5ZGhKgijcnROWfUsnlCdugpgoNIcPsUki zty5FotDihnrC8n8vIomVK6EClY38ty97pLrADzFDd+Cos/OUlvi2xooUhzx8Bn020rJA lqEU5v8LGXp5QkHx0MSDx6JY6KppJ/4p/yM+4By6l+A20zdcimxmgiNc9rMWPwDj7xsao m7NAZWmWqOO0Xkhgt6WOfugwgt9X46sgs2+yDEfbnI5ok8uRbAB/4FWj/KdpyXwhcf+O2 wEfhxLwDbAoGONQPjb4YcZmCXtmR7Qe5t+n2jyczWXvrbaBDUQP5a+YtVNN/xhmQ7D740 POlxv7bLxJAixzqaQ3d8Rz9ZEv6zzRuhWph32UQtZ1JxSNww+EvmXm2eEi2Q2z6pT1Cx/ j2OrFyA2GL/UJOVb15VHKF6bgHPHWJtpjPFhqdcvBhVute4BWB+KPcWH+y+apHN1enK3H tNJO9iqm34nKwSuj5ExmFw50LtwR5/9FyRuRPq/vBL+8y82v8FDmeYsBeobn5M= - job: name: myproject-build-image parent: build-docker-image description: Build Docker images. vars: &image_vars docker_images: - context: . repository: myproject/imagename - job: name: myproject-upload-image parent: upload-docker-image description: Build Docker images and upload to Docker Hub. secrets: - name: docker_credentials secret: myproject-dockerhub pass-to-parent: true vars: *image_vars - job: name: myproject-promote-image parent: promote-docker-image description: Promote previously uploaded Docker images. secrets: - name: docker_credentials secret: myproject-dockerhub pass-to-parent: true vars: *image_vars - project: check: jobs: - myproject-build-image gate: jobs: - myproject-upload-image promote: jobs: - myproject-promote-image And the pipeline definition looks like: - pipeline: name: promote description: | This pipeline runs jobs that operate after each change is merged in order to promote artifacts generated in the gate pipeline. manager: supercedent precedence: low post-review: True trigger: gerrit: - event: change-merged success: gerrit: {} mysql: failure: gerrit: {} mysql: If you find that you need jobs which behave slightly differently, you may be able to inherit from these jobs and add pre or post playbooks. Or if you need something particularly complicated, you can re-implement the jobs using the underlying roles in zuul-jobs, which have the same names an the jobs which invoke them. 
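As an illustration of that inheritance pattern, a variant of the upload job above which adds an extra setup step might look roughly like the sketch below. The pre-run playbook path is hypothetical; the other attributes simply mirror the example configuration earlier in this message:

  - job:
      name: myproject-upload-image
      parent: upload-docker-image
      description: Build Docker images and upload to Docker Hub, with extra setup.
      # Hypothetical playbook; pre-run playbooks run before the job's main (run) phase.
      pre-run: playbooks/docker/prepare-build-context.yaml
      secrets:
        - name: docker_credentials
          secret: myproject-dockerhub
          pass-to-parent: true
      vars:
        docker_images:
          - context: .
            repository: myproject/imagename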
[1] https://zuul-ci.org/docs/zuul-jobs/jobs.html#job-build-docker-image [2] https://zuul-ci.org/docs/zuul-jobs/jobs.html#job-upload-docker-image [3] https://zuul-ci.org/docs/zuul-jobs/jobs.html#job-promote-docker-image [4] https://zuul-ci.org/docs/zuul/user/encryption.html -Jim From shrewsbury.dave at gmail.com Thu Jan 24 13:55:30 2019 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Thu, 24 Jan 2019 08:55:30 -0500 Subject: Configurable autohold list of allowed build results. In-Reply-To: <20190123121741.onb6a4nacjus2oi2@localhost> References: <20190123121741.onb6a4nacjus2oi2@localhost> Message-ID: Hi, On Wed, Jan 23, 2019 at 7:17 AM Gabriele Cerami wrote: > Hi, > autohold currently allows users to examine a live environment to proceed > with post-mortem examination. > > In some occasions, trying to find the root cause of a bug by just > examining the logs and the timeline of the failed environment may prove > too difficult, and the preferable way may be to catch the failure at its > root exactly when the root cause happens. > In a mechanism similar to breakpoints, we may want to stop the build at > a certain point, the proceed manually to prod for weak spots. > The only way to get this behaviour from zuul currently is to request an > autohold, and add a step that injects a fake failure to the build. > So how does [1] help you achieve this scenario above (stopping the build at a certain point to examine from there)? > With [1] I'd like to suggest an alternative method, the list of build > results that are allowed to put a node in hold state can be passed as > argument, and not just hardcoded. > In this way, the build may be configured to be brought forward up to a > certain known to work point, without having to expressly fail, then begin > the step by step manually debugging on the live environment. > > I don't really like having a CLI command that exposes/uses internal values we may use, personally. But I guess my main issue with this is that I don't understand how this is getting you any more functionality that what we currently have. Are you hoping to use a successful job result as a hold condition? If so, I don't see how that is getting you the "stop at a certain point" functionality you express in the previous paragraph, so I think I'm missing something. Can you use more words to help me understand what I'm missing? It may be best suitable for non voting, experimental jobs, or during the > development of new jobs, but a useful option nonetheless. > > Any feedback welcome. > Thanks. > > [1] https://review.openstack.org/632498 > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Fri Jan 25 19:37:56 2019 From: corvus at inaugust.com (James E. Blair) Date: Fri, 25 Jan 2019 11:37:56 -0800 Subject: Speculative docker images Message-ID: <87imyc34vf.fsf@meyer.lemoncheese.net> Hi, As you know, with respect to git repositories, Zuul frequently operates well into the future, allowing jobs to build software with changes that haven't landed yet under the assurance that they will. As more of us have incorporated container image building into our processes, we find that we want the same thing there -- that not only would our git repositories represent the future that Zuul sees, but also our container image registries. 
As a concrete and simple example, we have an integration job called 'zuul-quick-start' which uses both Nodepool and Zuul container images. Those images themselves depend on build images created from the openstack-infra/system-config repository, which they use via a "FROM" line in Dockerfiles, and extending the zuul-quick-start job to be able to rebuild and use those builder images in the case of a Depends-On change is too complex to consider. In particular, the "FROM" line is problematic, as the only way to realistically use a speculative image as a base image is to alter the Dockerfile or make the image available in a registry. Therefore, this is a proposal for a system for building speculative images which can then be used by further speculative images. Every item in a Zuul pipeline has its own future state which is related to, but separate from all the other items. It needs to be able to build an image for a project and store it in a registry under the final published name and tag, without affecting any of the changes ahead of it. Therefore, each item needs its own image registry. In the case of a single, simple job such as zuul-quick-start, we can just run a docker registry on the node on which the job is running. In the case of a more complicated system where we want to build an image once and use it in multiple dependent jobs in a job graph, we will start an image registry at the root of the job graph, pause the job, and the dependent jobs will use it. Now that we have the registry for the change, we need to have that registry reflect the state of any changes ahead. We need to know when the change ahead has finished building its images so that we can then add those images into our change's registry. To accomplish this, we can extend Zuul so that jobs have a provides/requires interface. If our job requires docker images produced by a job in a dependent change, we can say: - job: name: zuul-quick-start requires: docker-images - job: name: system-config-build-image-python-builder provides: docker-images Zuul will see that zuul-quick-start should not start until all of the jobs for all of the changes ahead which provide "docker-images" have completed. Then zuul-quick-start will be able to retrieve images from the preceding changes' jobs and add them to its own registry. In the gate, the behavior of this is obvious: if a change has a job with a requirement, iterate over all changes ahead of it in the shared queue and search for jobs on those changes which provide that requirement. If there are none, or all of them have completed or paused, then that job is ready to start. Otherwise, wait. In an independent pipeline, we do include dependent changes ahead in the shared queue, but they are not live -- they don't run any actual jobs. So in that case, we're going to have to peek outside our shared queue and see if our dependent changes show up anywhere else in the same pipeline, and if they do, wait for their jobs. If they don't show up, then we know they completed earlier, and it's safe to start. There's one more thing about this change that will help us later: a job that "provides" something should be able to pass data to a job which "requires" it. So our image build jobs can use zuul_return to provide information about which images they built so that it's easier for later jobs to import those images into their registries. Once again, the independent pipelines make this tricky, as since those jobs may have run some time in the past, the scheduler may have forgotten the data they returned. 
However, once the SQL database is required, we can rely on that to fill in any missing data. That's the only change we need to Zuul itself to support this system. The rest is all playbooks and roles. That brings us to the challenge of how to actually transfer these artifacts between items. I can think of two approaches. They are not mutually exclusive -- we can implement both if we want. They both have pros and cons. But I would like input from Zuul operators (including openstack-infra/opendev) as to which they would prefer, and I'll implement that approach in OpenDev for Zuul's own upstream use. If we have a split decision, someone else can implement the second approach. I think we can support both in zuul-jobs, but I'd like us to run the most widely applicable approach upstream. Alternative 1: Use the Logserver ================================ In the case of a simple job, at the end of the job, or in the case of a complex job graph, a child job dependent on the completion of the image build process (so we don't have to wait for all the jobs to complete), we will export the images to the filesystem and upload their layers (minus any layers which are already on Docker Hub, in order to save space) to the logserver. This next bit is a little complicated, but remember that this will all be encapsulated in roles so we only have to implement it once; the user interface will resemble "roles: [fetch-dependent-images]". At the end of the image build job, for every image that we built: Export it to a directory on the filesystem. For each layer in the image: Perform a HEAD request against the Docker Hub registry to see if the layer exists in the target repository. If that exists, layer exists in docker, remove from filesystem. rsync image directories to logserver. zuul_return image location on logserver. At the start of a job which requires images: For each previous image (as provided by zuul_return): Rsync image directory from logserver to filesystem. For each layer in image manifest: If layer is missing from filesystem: Get the layer from Docker Hub. Import image into local registry. The advantages of this approach: * No external operational dependencies (i.e., you don't have to run a docker registry). The disadvantages of this approach: * Images can't easily be fetched and used for debugging. * Implementation is slightly more complex. Alternative 2: Use a Central Registry ===================================== In this approach, we would run a central registry to store speculative images. It would not store final images at all, and every image uploaded to it would be tagged only with a tag like "pipeline_check_change_1234_original_tag" to uniquely identify its build context. At the end of the image build job, for every image that we built: Copy it from the per-change registry to the central registry, and tag it with pipeline and change id. zuul_return image location and tag. At the start of a job which requires images: For each previous image (as provided by zuul_return): Copy it from the central registry to the local registry, and tag it with the original tag. The advantages of this approach: * Developers can fetch images from the central registry by pulling them with the crazy tag. * Implementation is a little easier, and uses more standardized tools. The disadvantages of this approach: * An external service is required. * We will need to prune old images from the registry. Conclusion ========== I don't think this system will take long to implement with either of the alternatives. 
Please let me know what you think of the system, and of the storage/transfer alternatives. -Jim From dmsimard at redhat.com Fri Jan 25 20:25:51 2019 From: dmsimard at redhat.com (David Moreau Simard) Date: Fri, 25 Jan 2019 15:25:51 -0500 Subject: Speculative docker images In-Reply-To: <87imyc34vf.fsf@meyer.lemoncheese.net> References: <87imyc34vf.fsf@meyer.lemoncheese.net> Message-ID: On Fri, Jan 25, 2019 at 2:38 PM James E. Blair wrote: > > Alternative 2: Use a Central Registry > ===================================== > > In this approach, we would run a central registry to store speculative > images. It would not store final images at all, and every image > uploaded to it would be tagged only with a tag like > "pipeline_check_change_1234_original_tag" to uniquely identify its build > context. > > At the end of the image build job, for every image that we built: > Copy it from the per-change registry to the central registry, and > tag it with pipeline and change id. > zuul_return image location and tag. > > At the start of a job which requires images: > For each previous image (as provided by zuul_return): > Copy it from the central registry to the local registry, and tag it > with the original tag. > > The advantages of this approach: > * Developers can fetch images from the central registry by pulling > them with the crazy tag. > * Implementation is a little easier, and uses more standardized tools. > > The disadvantages of this approach: > * An external service is required. > * We will need to prune old images from the registry. > This is more or less what we are doing today for building and promoting container images for RDO. In our periodic pipeline, we have a parent job that builds container images and pushes them to an OpenShift standalone registry [1]. These images have a specific tag format that dependant jobs are able to use in order to pull and test them. We have an out-of-band process for promoting the images based on the results of the following jobs -- if the criteria for promotion is met, the images are pushed to DockerHub under an appropriate tag. The container images on the registry are configured to be public and anyone can pull them but pushing requires a token from a service account in the namespace. We prune images using a fairly generic script [2] which prunes tags older than N days that do not match a supplied whitelist. This deletes the tags but does not delete the container image blobs from the filesystem. To delete the (now orphaned) blobs from the filesystem, OpenShift provides a "oadm prune images" command [3] which we run after deleting the tags. We initially chose OpenShift as our registry because it made authentication and pruning simple when we compared it to docker-registry and docker-distribution. We experimented with Atomic Registry which was a simplified implementation of the OpenShift standalone registry but Atomic Registry was deprecated [4]. The OpenShift deployment lives on a single node and contains only the necessary bits to expose the registry publicly. It is not meant to host applications. The registry has been in use for over a year now and is currently hosting near 10 000 tags across 878 images in 4 different namespaces and it prunes around 1000 tags every day. It has been fairly stable with 4 cores, 8GB of RAM and ~500GB of disk. 
[1]: https://docs.openshift.com/container-platform/3.9/install_config/install/stand_alone_registry.html [2]: https://github.com/rdo-infra/rdo-infra-playbooks/blob/master/roles/rdo-infra/registry-image-pruning/files/openshift_tag_pruner.py [3]: https://docs.openshift.com/container-platform/3.9/admin_guide/pruning_resources.html#pruning-images [4]: https://www.projectatomic.io/blog/2017/05/oo-standalone-registry/ David Moreau Simard dmsimard = [irc, github, twitter] From corvus at inaugust.com Mon Jan 28 17:14:57 2019 From: corvus at inaugust.com (James E. Blair) Date: Mon, 28 Jan 2019 09:14:57 -0800 Subject: Zuul project update 2019-01-28 Message-ID: <87d0ogwvou.fsf@meyer.lemoncheese.net> Hi, Over the past two weeks, Tristan has been working on the topic:freeze_job and got the "zuul-runner execute" command to run a zuul-job locally, as well as a buildset endpoint and a buildset page to show all the build result for a change. I've been working on image building jobs, and will continue to work on improvements to those this week (see two recent emails to list). Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email -Jim From Tobias.Henkel at bmw.de Mon Jan 28 18:11:44 2019 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Mon, 28 Jan 2019 18:11:44 +0000 Subject: Speculative docker images In-Reply-To: <87imyc34vf.fsf@meyer.lemoncheese.net> References: <87imyc34vf.fsf@meyer.lemoncheese.net> Message-ID: <178BAA06-D688-413F-AA42-EFA4376E4BDE@bmw.de> On 25.01.19, 20:38, "James E. Blair" wrote: Hi, As you know, with respect to git repositories, Zuul frequently operates well into the future, allowing jobs to build software with changes that haven't landed yet under the assurance that they will. As more of us have incorporated container image building into our processes, we find that we want the same thing there -- that not only would our git repositories represent the future that Zuul sees, but also our container image registries. As a concrete and simple example, we have an integration job called 'zuul-quick-start' which uses both Nodepool and Zuul container images. Those images themselves depend on build images created from the openstack-infra/system-config repository, which they use via a "FROM" line in Dockerfiles, and extending the zuul-quick-start job to be able to rebuild and use those builder images in the case of a Depends-On change is too complex to consider. In particular, the "FROM" line is problematic, as the only way to realistically use a speculative image as a base image is to alter the Dockerfile or make the image available in a registry. Therefore, this is a proposal for a system for building speculative images which can then be used by further speculative images. Every item in a Zuul pipeline has its own future state which is related to, but separate from all the other items. It needs to be able to build an image for a project and store it in a registry under the final published name and tag, without affecting any of the changes ahead of it. Therefore, each item needs its own image registry. In the case of a single, simple job such as zuul-quick-start, we can just run a docker registry on the node on which the job is running. In the case of a more complicated system where we want to build an image once and use it in multiple dependent jobs in a job graph, we will start an image registry at the root of the job graph, pause the job, and the dependent jobs will use it. 
Now that we have the registry for the change, we need to have that registry reflect the state of any changes ahead. We need to know when the change ahead has finished building its images so that we can then add those images into our change's registry. To accomplish this, we can extend Zuul so that jobs have a provides/requires interface. If our job requires docker images produced by a job in a dependent change, we can say: - job: name: zuul-quick-start requires: docker-images - job: name: system-config-build-image-python-builder provides: docker-images Zuul will see that zuul-quick-start should not start until all of the jobs for all of the changes ahead which provide "docker-images" have completed. Then zuul-quick-start will be able to retrieve images from the preceding changes' jobs and add them to its own registry. In the gate, the behavior of this is obvious: if a change has a job with a requirement, iterate over all changes ahead of it in the shared queue and search for jobs on those changes which provide that requirement. If there are none, or all of them have completed or paused, then that job is ready to start. Otherwise, wait. In an independent pipeline, we do include dependent changes ahead in the shared queue, but they are not live -- they don't run any actual jobs. So in that case, we're going to have to peek outside our shared queue and see if our dependent changes show up anywhere else in the same pipeline, and if they do, wait for their jobs. If they don't show up, then we know they completed earlier, and it's safe to start. There's one more thing about this change that will help us later: a job that "provides" something should be able to pass data to a job which "requires" it. So our image build jobs can use zuul_return to provide information about which images they built so that it's easier for later jobs to import those images into their registries. Once again, the independent pipelines make this tricky, as since those jobs may have run some time in the past, the scheduler may have forgotten the data they returned. However, once the SQL database is required, we can rely on that to fill in any missing data. That's the only change we need to Zuul itself to support this system. The rest is all playbooks and roles. That brings us to the challenge of how to actually transfer these artifacts between items. I can think of two approaches. They are not mutually exclusive -- we can implement both if we want. They both have pros and cons. But I would like input from Zuul operators (including openstack-infra/opendev) as to which they would prefer, and I'll implement that approach in OpenDev for Zuul's own upstream use. If we have a split decision, someone else can implement the second approach. I think we can support both in zuul-jobs, but I'd like us to run the most widely applicable approach upstream. Alternative 1: Use the Logserver ================================ In the case of a simple job, at the end of the job, or in the case of a complex job graph, a child job dependent on the completion of the image build process (so we don't have to wait for all the jobs to complete), we will export the images to the filesystem and upload their layers (minus any layers which are already on Docker Hub, in order to save space) to the logserver. This next bit is a little complicated, but remember that this will all be encapsulated in roles so we only have to implement it once; the user interface will resemble "roles: [fetch-dependent-images]". 
At the end of the image build job, for every image that we built: Export it to a directory on the filesystem. For each layer in the image: Perform a HEAD request against the Docker Hub registry to see if the layer exists in the target repository. If that exists, layer exists in docker, remove from filesystem. rsync image directories to logserver. zuul_return image location on logserver. At the start of a job which requires images: For each previous image (as provided by zuul_return): Rsync image directory from logserver to filesystem. For each layer in image manifest: If layer is missing from filesystem: Get the layer from Docker Hub. Import image into local registry. The advantages of this approach: * No external operational dependencies (i.e., you don't have to run a docker registry). The disadvantages of this approach: * Images can't easily be fetched and used for debugging. * Implementation is slightly more complex. Alternative 2: Use a Central Registry ===================================== In this approach, we would run a central registry to store speculative images. It would not store final images at all, and every image uploaded to it would be tagged only with a tag like "pipeline_check_change_1234_original_tag" to uniquely identify its build context. At the end of the image build job, for every image that we built: Copy it from the per-change registry to the central registry, and tag it with pipeline and change id. zuul_return image location and tag. At the start of a job which requires images: For each previous image (as provided by zuul_return): Copy it from the central registry to the local registry, and tag it with the original tag. The advantages of this approach: * Developers can fetch images from the central registry by pulling them with the crazy tag. * Implementation is a little easier, and uses more standardized tools. The disadvantages of this approach: * An external service is required. * We will need to prune old images from the registry. Conclusion ========== I don't think this system will take long to implement with either of the alternatives. Please let me know what you think of the system, and of the storage/transfer alternatives. -Jim I like this and I'd vote for the central registry variant. I think using standardized tools and the ability for the users to fetch the images outweigh the advantages of variant one. -Tobias From daragh.bailey at gmail.com Tue Jan 29 11:32:41 2019 From: daragh.bailey at gmail.com (Darragh Bailey) Date: Tue, 29 Jan 2019 11:32:41 +0000 Subject: Using nodepool v3 with Jenkins Message-ID: Hi, Hoping that we can make use of nodepool v3 with Jenkins to solve some of our existing problems, and at the same time move a bit closer to upgrading from zuul v2 -> v3. Going to outline our current problem, my initial thinking that resulted in some of my questions last night in IRC and some further thinking on it that may provide a way forward for us. Hoping that someone can chime in and identify if this is realistic or if we're heading down the wrong path before getting to far. Keep in mind that I'm only starting with nodepool, so I'm unsure if I even understand some of the concepts. e.g. does nodepool get a request for a particular type of node or is it infered by there being none of that type available in the queues in zookeeper? Current problem: We have some resources that are limited and are not directly accessible as workers for Jenkins, and so need a way to manage the allocation of resources to certain jobs. 
Some of these resources are provided by teams and some are from a set of resources managed by the CI team, so we probably have ~4 sets of these types of resources. At the moment this is done using some (too much) hackery with groovy, the Global Event Listener Plugin (GELP), and the Lockable Resources plugin. The GELP allows us to hook into various points in Jenkins, such as when the job is scheduled but not yet running in Jenkins (QUEUED). Unfortunately there doesn't appear to be any way with the gearman protocol to inspect which job is going to be assigned before it is picked up, so we can only do manipulation after the job has been scheduled and before it starts running. The data required to provide the job with the details needed to use the limited resource is retrieved from vault and injected right before the job gets fully launched on Jenkins. This is done by creating lockable resource entries dynamically based on the number of resources available, and then catching the event when Jenkins allows the job needing this to start and injecting the required data into the environment.

Unfortunately this means the queuing for the jobs requiring these limited resources occurs within Jenkins instead of Zuul, because the job has still been scheduled even if it is not yet running. We've run into a number of problems resulting in workers getting allocated for jobs that don't get to start, taking them away from jobs that could run, and effectively lightly DOSing the system.

Initial thoughts:

So enter nodepool, and the hope that if we move the management of both these resources and the workers out of Jenkins and into something a little more flexible, it can prevent the job from getting scheduled in the first place if not enough of the limited resources are available, and allow jobs that do not need these to be scheduled because a worker isn't getting assigned to a job that is queued waiting.

Initially I was thinking of having two resources, making the one representing the limited resource a fake resource of some kind that Jenkins doesn't connect to, but as it appears to be a requirement of the Jenkins Nodepool agent that nodes returned must be accessible to Jenkins, that doesn't seem to fly.

The next idea was whether it would be possible to have a static pool and another driver that could grab a node from the static pool if a request for the limited resource node type came in, which prompted some of my questions in IRC last night.

Further thinking done:

After spending a bit more time with the static nodepool driver, and depending on how it works (there are lots of gaps in my understanding), I'm wondering if what we need is to create a custom driver and place all of our static nodes in an internal pool inside of this driver. Additionally, query our vault instance for a list of these limited resource groups and build some pools of those internally as well, and assign labels as part of the requirements for the jobs using these limited resources.

Then, instead of immediately registering all of the nodes with zookeeper, depending on the node label request that comes in, populate zookeeper with an entry containing suitable labels. If it is one requiring a limited resource identified by a name such as teamA-resources, the driver code can test if there are any not in use and if so create a node entry in zookeeper that contains the labels "teamA-resources" and "teamA-resource-5".
The second label can be used in Jenkins by our GELP code to determine which set of data to inject into the environment of the job scheduled on that node.

Obviously there is still some hackery in there, but does it make sense to build a custom driver that combines some of the ideas of the static driver with those of the dynamic drivers, where instead of launching a new node it simply creates a node entry on demand using both a static node and details of which resource is required?

Possibly I'm misunderstanding how nodepool works in this case; maybe it just populates a queue in zookeeper and tries to keep some entries in there, which may mean that what needs to happen is that one entry allocating a node to each of the sets of limited resources needs to be added to the zookeeper queues by the driver, with further entries added, if there are enough resources available, as those queues are consumed? That is a bit different, but it seems like it would still work to limit the amount of damage where multiple jobs were launched and required more of the limited resource than was available. Currently we can end up seeing 3 queued instances for a single limited resource in Jenkins, so as we migrate another case where this will be needed we could end up seeing a queue of 9-10. It seems nodepool might limit it to only losing one per team, until we can migrate to zuulv3 and use some of the mutex capabilities to handle this a bit better.

Thoughts? Is this the best way to dig up, as not digging isn't an option here...

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From corvus at inaugust.com  Tue Jan 29 23:59:01 2019
From: corvus at inaugust.com (James E. Blair)
Date: Tue, 29 Jan 2019 15:59:01 -0800
Subject: Using nodepool v3 with Jenkins
In-Reply-To: (Darragh Bailey's message of "Tue, 29 Jan 2019 11:32:41 +0000")
References: 
Message-ID: <87fttbkoca.fsf@meyer.lemoncheese.net>

First, a disclaimer:

I mention this mostly because of the subject line, which might cause people to jump to conclusions -- our goal with Nodepool is not to explicitly support Jenkins or any system other than Zuul at this point. We are currently keeping our options open and trying to limit explicit ties to Zuul so that, in the future, it might be generically useful. But at the moment, we don't guarantee the stability of the API, and we don't intend to merge or support features that aren't in service of Zuul.

Having said that, I think you and the rest of the team are on the same page about this, having discussed it previously. This may be a use case that applies equally well to Zuul, and if so, I think there may be options that could help in your situation and improve things for Zuul as well.

Darragh Bailey writes:

...

> We have some resources that are limited and are not directly accessible as
> workers for Jenkins, and so need a way to manage the allocation of
> resources to certain jobs. Some of these resources are provided by teams
> and some are from a set of resources managed by the CI team. So we probably
> have ~4 sets of these types of resources. At the moment this is done using
> some (too much) hackery with groovy, the Global Event Listener Plugin
> (GELP), and Lockable Resources plugin.

...

There's one thing I don't understand here: considering that you go and get credentials to access the resources from vault, why is it that it's not permissible for Jenkins (or Zuul) to access it?
Put another way, are these nodes that you will SSH into from within the job? If not, why not? Generally what Nodepool will do is expect a list of all of the required resources for a job, do what is needed to obtain those resources (whether that's locking a static node or creating a dynamic node), and then when they are all present, return the data to Zuul. Ultimately, any solution will involve Nodepool having an explicit understanding of each of the resources. But first I think I need to understand why we can't just tell it this is a static node running SSH or winrm. -Jim From daragh.bailey at gmail.com Wed Jan 30 11:11:27 2019 From: daragh.bailey at gmail.com (Darragh Bailey) Date: Wed, 30 Jan 2019 11:11:27 +0000 Subject: Speculative docker images In-Reply-To: <87imyc34vf.fsf@meyer.lemoncheese.net> References: <87imyc34vf.fsf@meyer.lemoncheese.net> Message-ID: Hi, While still on zuul v2, we're currently building docker, libvirt, & vmware images, On Fri, 25 Jan 2019 at 19:38, James E. Blair wrote: > Hi, > > To accomplish this, we can extend Zuul so that jobs have a > provides/requires interface. If our job requires docker images produced > by a job in a dependent change, we can say: > > - job: > name: zuul-quick-start > requires: docker-images > > - job: > name: system-config-build-image-python-builder > provides: docker-images > > Zuul will see that zuul-quick-start should not start until all of the > jobs for all of the changes ahead which provide "docker-images" have > completed. Then zuul-quick-start will be able to retrieve images from > the preceding changes' jobs and add them to its own registry. > This is fantastic, and is great to see zuul extending to provide fine control of a pipeline. > > Alternative 1: Use the Logserver > ================================ > > The advantages of this approach: > * No external operational dependencies (i.e., you don't have to run a > docker registry). > Might be worth considering that users may want to publish more items than just docker images, so this could be used as a generic non specific storage location to get started. > > The disadvantages of this approach: > * Images can't easily be fetched and used for debugging. > * Implementation is slightly more complex. > > Alternative 2: Use a Central Registry > ===================================== > > The advantages of this approach: > * Developers can fetch images from the central registry by pulling > them with the crazy tag. > * Implementation is a little easier, and uses more standardized tools. > > The disadvantages of this approach: > * An external service is required. > * We will need to prune old images from the registry. > I expect most will want to use a dedicated service for hosting of artifacts once they are past getting started phase, so this would be the preferred for us, though we'd swap in Artifactory to replace the registry here as it also would cover publishing/consuming of other artifact types we're using as well. > Conclusion > ========== > > I don't think this system will take long to implement with either of the > alternatives. Please let me know what you think of the system, and of > the storage/transfer alternatives. > > -Jim > It seems alternative 1 would be good for getting started because it could easily cover any type of artifact, and so gives a quicker get going path by not needing an external dependency. 
But I'm expecting that most people will want to subsequently move to alternative 2, because once they have been running for a while it's likely they will have brought up services to handle hosting their artifacts, and those likely provide additional tooling/tracking that would be useful. So making it easy to swap in what's needed for a particular deployment would be the most desirable thing from my view, irrespective of whichever you go with.

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From daragh.bailey at gmail.com  Wed Jan 30 12:01:40 2019
From: daragh.bailey at gmail.com (Darragh Bailey)
Date: Wed, 30 Jan 2019 12:01:40 +0000
Subject: Using nodepool v3 with Jenkins
In-Reply-To: <87fttbkoca.fsf@meyer.lemoncheese.net>
References: <87fttbkoca.fsf@meyer.lemoncheese.net>
Message-ID: 

Hi,

On Tue, 29 Jan 2019 at 23:59, James E. Blair wrote:

> First, a disclaimer:
>
> I mention this mostly because of the subject line, which might cause
> people to jump to conclusions -- our goal with Nodepool is not to
> explicitly support Jenkins or any system other than Zuul at this point.
> We are currently keeping our options open and trying to limit explicit
> ties to Zuul so that, in the future, it might be generically useful.
> But at the moment, we don't guarantee the stability of the API, and we
> don't intend to merge or support features that aren't in service of
> Zuul.
>

Understood, and in case anyone else is reading: the goal here is to try and do a phased transition to zuul v3; by sharing the resources it becomes easier to run both zuul v2 & v3 side by side until ready to complete a switch over.

> Having said that, I think you and the rest of the team are on the same
> page about this, having discussed it previously. This may be a use case
> that applies equally well to Zuul, and if so, I think there may be
> options that could help in your situation and improve things for Zuul as
> well.
>
> Darragh Bailey writes:
>
> ...
>
> > We have some resources that are limited and are not directly accessible
> > as workers for Jenkins, and so need a way to manage the allocation of
> > resources to certain jobs. Some of these resources are provided by teams
> > and some are from a set of resources managed by the CI team. So we
> > probably have ~4 sets of these types of resources. At the moment this is
> > done using some (too much) hackery with groovy, the Global Event Listener
> > Plugin (GELP), and Lockable Resources plugin.
>
> ...
>
> There's one thing I don't understand here: considering that you go and
> get credentials to access the resources from vault, why is it that it's
> not permissible for Jenkins (or Zuul) to access it?
>
> Put another way, are these nodes that you will SSH into from within the
> job? If not, why not?
>

The resources are k8s systems, and there isn't an issue with providing the credentials to access them, but rather that there appears to be a limitation in the current nodepool agent plugin for Jenkins, which expects the node to be attached as an ssh agent, and I can't see these working in that way.

I'm hoping to avoid needing to do extensive work on the nodepool agent plugin to allow it to handle other node types, given that this is very much a stepping stone to migrating; once on zuul v3, based on some of the feedback in the IRC channel, it seems likely that we could use the k8s driver to subsequently manage these.
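For anyone curious what that could look like, nodepool's kubernetes driver takes a provider definition roughly along the lines sketched below. Treat this as an illustrative sketch rather than a definitive configuration (the provider name, context and image are made up; check the nodepool driver docs for the exact schema):

  labels:
    - name: k8s-namespace
    - name: pod-fedora

  providers:
    - name: my-k8s                     # illustrative provider name
      driver: kubernetes
      context: admin-cluster.local     # kubeconfig context to use (assumed)
      pools:
        - name: main
          labels:
            - name: k8s-namespace
              type: namespace
            - name: pod-fedora
              type: pod
              image: docker.io/fedora:28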
> Generally what Nodepool will do is expect a list of all of the required
> resources for a job, do what is needed to obtain those resources
> (whether that's locking a static node or creating a dynamic node), and
> then when they are all present, return the data to Zuul.
>
> Ultimately, any solution will involve Nodepool having an explicit
> understanding of each of the resources. But first I think I need to
> understand why we can't just tell it this is a static node running SSH
> or winrm.
>
> -Jim
>

I think this is a case of some limitations due to how Jenkins understands nodes. It expects that each node is an agent that can run some job. In this case the k8s resources are being used by a job that cannot run directly on the k8s cluster but rather is updating it and then exercising it. It's a test system; there are a limited number available to be shared across multiple teams, and they need to have somewhere to run the code that will deploy the changes under test to it.

My hope was that we could use a pool of machines that can be provided to Jenkins as standard slaves, and when asked for one of these k8s environments take both the k8s system and one of the static machines and provide it to Jenkins, but it would have to be done in a way that Jenkins only sees the machine that it can run the job on and doesn't try to launch an agent on the k8s instance.

One solution around this is to do a 1-to-1 mapping, in that we reserve some machines or VMs that are managed by nodepool under a static label for when one of these resources is needed.

However there is another hitch. One of the reasons we're using vault is that some teams are providing a system built and managed by themselves, and currently they can add this to be used by their jobs only by adding an entry under a set path in vault. We grant permissions for them to write to a specific location and then pick it up dynamically. This allows them to add a replacement system to be used on the next run, and stop the current k8s cluster from being used without any interaction with ourselves. This is something that we'll need to continue to do, so some dynamic adding/removing of nodes from being used would be highly desirable to avoid unnecessary requests for us to make changes.

So if we have to do a custom driver to support that, it might be useful to have the system be relatively efficient and apply the same dynamic adding/removing to all of our static nodes, so that instead of needing to reserve 8+ for these k8s systems which are only used some of the time, we can reuse the nodes used to run the jobs by only injecting the creds needed for the job that is supposed to have access to the allocated k8s cluster. Though it was mentioned that such dynamic behaviour causes issues when listing nodes, if that doesn't impact the Jenkins nodepool agent, I think we could live with it for a few months.

Some of this may be unnecessary, so I'm trying to balance complexity to support the base requirements:

- Some teams can run jobs on a k8s cluster they provide in an exclusive manner
- These can remain self managed
- Avoid jobs needing such resources queuing in Jenkins if the cluster is not yet available
- Avoid unnecessarily blocking other jobs due to no available machines to run jobs on (even though a number are sitting idle)
- Make it easier to upgrade to zuul v3

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From corvus at inaugust.com Wed Jan 30 22:53:24 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 30 Jan 2019 14:53:24 -0800 Subject: Using nodepool v3 with Jenkins In-Reply-To: (Darragh Bailey's message of "Wed, 30 Jan 2019 12:01:40 +0000") References: <87fttbkoca.fsf@meyer.lemoncheese.net> Message-ID: <87r2ctg3kr.fsf@meyer.lemoncheese.net> Darragh Bailey writes: > The resources are k8s systems, and there isn't an issue with providing the > credentials to access it, but rather than there appears to be a limitation > in the current nodepool agent plugin for Jenkins that expects the node to > be attached as a ssh-agent, which I can't see these working in that way. > > I'm hoping to avoid needing to extensive work on the nodepool agent plugin > to allow it to handle other node types given that this is very much a > stepping stone to migrating and once in zuulv3 I think based on some of the > feedback in the IRC channel it seems likely that we could use the k8s > driver to subsequently manage these. If the issue is that Nodepool collects host keys for static nodes... well, here is the line of code you'd have to change: http://git.zuul-ci.org/cgit/nodepool/tree/nodepool/driver/static/provider.py#n40 If, however, the issue is related to providing data to Jenkins, I'm not sure changes to Nodepool will help. >> Generally what Nodepool will do is expect a list of all of the required >> resources for a job, do what is needed to obtain those resources >> (whether that's locking a static node or creating a dynamic node), and >> then when they are all present, return the data to Zuul. >> >> Ultimately, any solution will involve Nodepool having an explicit >> understanding of each of the resources. But first I think I need to >> understand why we can't just tell it this is a static node running SSH >> or winrm. >> >> -Jim >> > > I think this is a case of some limitations due to how Jenkins understands > nodes. It expects that each node is an agent that can run some job. In this > case the k8s resources are being used by a job that cannot run directly on > the k8s cluster but rather is updating it and then exercising it. It's a > test system, there are a limited number available to be used to shared > across multiple teams, and need to have somewhere to run the code that will > deploy the changes under test to it. > > My hope was that we could use a pool of machines that can be provided to > Jenkins as standard slaves, and when asked for one of these k8s > environments take both the k8s system and one of the static machines and > provide it to Jenkins, but it would have to be done in a way that Jenkins > only sees the machine that it can run the job on and doesn't try to launch > an agent on the k8s instance. > > One solution around this is to do a 1-to-1 mapping in that we reserve some > machines or VMs that are managed by nodepool under a static label and when > one of these resources are needed. > > > However there is another hitch. One of the reasons we're using vault is > that some teams are providing a system built and managed by themselves, and > currently they can add this to be used by their jobs only by adding an > entry under a set path in vault. We grant permissions for them to write to > a specific location and then pick it up dynamically. This allows them to > add a replacement system to be used on the next run, and stop the current > k8s cluster from being used without any interaction with ourselves. 
> This is something we'll need to continue to do, so some dynamic
> adding/removing of nodes would be highly desirable to avoid unnecessary
> requests for us to make changes.

You could probably accomplish this in a self-service fashion today in
Zuul with a combination of semaphores and secrets (with no involvement
from Nodepool).

> So if we have to write a custom driver to support that, it might be
> worth making the system relatively efficient and applying the same
> dynamic adding/removing to all of our static nodes, so that instead of
> needing to reserve 8+ machines for these k8s systems, which are only
> used some of the time, we can reuse the nodes that run the jobs by only
> injecting the creds needed for the job that is supposed to have access
> to the allocated k8s cluster. It was mentioned that such dynamic
> behaviour causes issues when listing nodes, but if that doesn't impact
> the Jenkins nodepool agent, I think we could live with it for a few
> months.
>
> Some of this may be unnecessary, so we're trying to balance complexity
> against the base requirements:
>
>   - Some teams can run jobs on a k8s cluster they provide, in an
>     exclusive manner
>   - These clusters can remain self-managed
>   - Avoid jobs needing such resources queuing in Jenkins if the cluster
>     is not yet available
>   - Avoid unnecessarily blocking other jobs because no machines are
>     available to run jobs on (even though a number are sitting idle)
>   - Make it easier to upgrade to zuul v3

Based on the above, I don't think you will need a custom Nodepool
driver.

-Jim

From claire at openstack.org  Thu Jan 31 00:03:05 2019
From: claire at openstack.org (Claire Massey)
Date: Wed, 30 Jan 2019 18:03:05 -0600
Subject: [summit] Open Infrastructure Summit Shanghai - Dates and Location
Message-ID:

Hello Zuul team -

We are excited to announce that, following Denver, the Open
Infrastructure Summit will be traveling to Shanghai, China the week of
November 4, 2019. The Summit will be held at the Shanghai Expo Centre.

Registration will be available soon. Sign up for updates[1] and reach
out to summit at openstack.org with any questions.

[1] https://www.openstack.org/summit/shanghai-2019

Cheers,
Claire

From daragh.bailey at gmail.com  Thu Jan 31 10:30:20 2019
From: daragh.bailey at gmail.com (Darragh Bailey)
Date: Thu, 31 Jan 2019 10:30:20 +0000
Subject: Zuul jobs for container publishing
In-Reply-To: <87tvhz6ioi.fsf@meyer.lemoncheese.net>
References: <87tvhz6ioi.fsf@meyer.lemoncheese.net>
Message-ID:

On Wed, 23 Jan 2019 at 23:47, James E. Blair wrote:

> Hi,
>
> Its purpose is to promote artifacts previously built in the gate
> pipeline to production. That is safe to do because anything built in
> the gate pipeline is built with the full future state of in-flight
> changes to all related projects and branches (even if it's built before
> they land). We build a lot of artifacts in the gate pipeline now, and
> then throw them away only to build them again in post. The promote
> pipeline is an attempt to avoid that waste and instead preserve
> artifacts built in gate and directly publish them -- but only if and
> when the changes for which they were built are merged.

We do something similar locally, but we need a full build node because
we also want to code sign the image, and we do some minor manipulation
to ensure that the label metadata on the image contains the merged
SHA1, so it is possible to connect the image back to the git code used
to build it (a rough sketch of that step is below).
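To give a flavour of what that manipulation amounts to, here is a
minimal sketch written as an Ansible task purely for illustration. It
is not our exact implementation: the registry, image name, and label
key are invented, the code-signing step is omitted, and it assumes a
post-merge pipeline where the zuul.newrev variable is available.

  # Hypothetical task: rebuild the gate-built candidate image with a
  # label recording the merged commit SHA reported by Zuul.
  - name: Stamp the image with the merged change SHA
    shell: |
      echo "FROM registry.example.com/myteam/myimage:candidate" | \
        docker build --label org.example.git-sha={{ zuul.newrev }} \
          -t registry.example.com/myteam/myimage:latest -

Building from a one-line Dockerfile on stdin like this reuses all of
the candidate image's layers, so only the image metadata changes.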
Hence a previous conversation in IRC about whether it would be possible
in the future for zuul to push back to the git repos on success, so
that the final SHA1 used during the build matches what is recorded.

However, I thought it was worth mentioning that there is another reason
to follow this pattern besides efficiency, and that is consistency: it
guarantees that the artefact that is published is the one that passed
the gate testing. Artefacts are often built with transitive
dependencies which are not fully pinned. Consequently, while an
artefact built in the post pipeline *should* match what you tested,
depending on how long your gate takes to complete, how frequently your
dependencies are updated, and whether your build uses fully pinned
references to them, getting the same result in post is not truly
guaranteed, and you can end up with something that is subtly different.
We've opted not to require that all users of the build infra know how
to manage their deps perfectly, and instead try to provide something
they can rely on being exactly what was automatically tested.

Just thought it might be a useful extra benefit for anyone trying to
persuade others as to why this is worthwhile.

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From daragh.bailey at gmail.com  Thu Jan 31 10:36:29 2019
From: daragh.bailey at gmail.com (Darragh Bailey)
Date: Thu, 31 Jan 2019 10:36:29 +0000
Subject: Using nodepool v3 with Jenkins
In-Reply-To: <87r2ctg3kr.fsf@meyer.lemoncheese.net>
References: <87fttbkoca.fsf@meyer.lemoncheese.net> <87r2ctg3kr.fsf@meyer.lemoncheese.net>
Message-ID:

On Wed, 30 Jan 2019 at 22:53, James E. Blair wrote:

> If the issue is that Nodepool collects host keys for static
> nodes... well, here is the line of code you'd have to change:
>
>   http://git.zuul-ci.org/cgit/nodepool/tree/nodepool/driver/static/provider.py#n40
>
> If, however, the issue is related to providing data to Jenkins, I'm not
> sure changes to Nodepool will help.

It is unfortunately the latter :-(

>> However, there is another hitch. One of the reasons we're using vault
>> is that some teams provide a system built and managed by themselves,
>> and currently they can make it available to their jobs simply by
>> adding an entry under a set path in vault. We grant them permission
>> to write to a specific location and then pick it up dynamically. This
>> allows them to add a replacement system to be used on the next run,
>> and to stop the current k8s cluster from being used, without any
>> interaction with ourselves. This is something we'll need to continue
>> to do, so some dynamic adding/removing of nodes would be highly
>> desirable to avoid unnecessary requests for us to make changes.
>
> You could probably accomplish this in a self-service fashion today in
> Zuul with a combination of semaphores and secrets (with no involvement
> from Nodepool).

That's very good to know. It may mean we aim for a temporary solution
using the 1-to-1 mapping to get onto nodepool v3, and then solve this
properly when switching to zuul v3.

So thanks, that's definitely something I'll share to see whether the
rest of the team think it's a reasonable approach.
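To check my understanding, a rough sketch of the sort of configuration
I think is being suggested is below. Every name here is invented for
illustration, and the encrypted value is elided:

  # Only one job at a time may hold the team's cluster.
  - semaphore:
      name: team-a-k8s-cluster
      max: 1

  # Credentials for the team's self-managed cluster, encrypted with the
  # project's public key so only that project's jobs can decrypt them.
  - secret:
      name: team-a-k8s-credentials
      data:
        kubeconfig: !encrypted/pkcs1-oaep |
          ...

  # The deploy-and-test job runs on an ordinary node, takes the
  # semaphore, and is handed the kubeconfig to reach the cluster.
  - job:
      name: team-a-deploy-test
      semaphore: team-a-k8s-cluster
      secrets:
        - secret: team-a-k8s-credentials
          name: k8s_credentials
      run: playbooks/deploy-and-test.yaml

If I'm reading the suggestion correctly, a team could then rotate their
cluster themselves by proposing a change to the secret, with no
involvement from us or from Nodepool.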
>> So if we have to write a custom driver to support that, it might be
>> worth making the system relatively efficient and applying the same
>> dynamic adding/removing to all of our static nodes, so that instead
>> of needing to reserve 8+ machines for these k8s systems, which are
>> only used some of the time, we can reuse the nodes that run the jobs
>> by only injecting the creds needed for the job that is supposed to
>> have access to the allocated k8s cluster. It was mentioned that such
>> dynamic behaviour causes issues when listing nodes, but if that
>> doesn't impact the Jenkins nodepool agent, I think we could live with
>> it for a few months.
>>
>> Some of this may be unnecessary, so we're trying to balance
>> complexity against the base requirements:
>>
>>   - Some teams can run jobs on a k8s cluster they provide, in an
>>     exclusive manner
>>   - These clusters can remain self-managed
>>   - Avoid jobs needing such resources queuing in Jenkins if the
>>     cluster is not yet available
>>   - Avoid unnecessarily blocking other jobs because no machines are
>>     available to run jobs on (even though a number are sitting idle)
>>   - Make it easier to upgrade to zuul v3
>
> Based on the above, I don't think you will need a custom Nodepool
> driver.
>
> -Jim

My hope is that your suggestion on using semaphores in zuul v3 means we
can live with a less efficient solution in the meantime.

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From claire at openstack.org  Thu Jan 31 18:37:02 2019
From: claire at openstack.org (Claire Massey)
Date: Thu, 31 Jan 2019 12:37:02 -0600
Subject: Open Infrastructure Summit Denver - Community Voting Open
Message-ID:

Hi everyone,

Community voting for the Open Infrastructure Summit Denver CFP sessions
is open! You can VOTE HERE, but what does that mean? Now that the Call
for Presentations (CFP) has closed, all submissions are available for
community vote and input. After community voting closes, the volunteer
Programming Committee members will receive the presentations to review
and determine the final selections for the Summit schedule. While
community votes are meant to help inform the decision, Programming
Committee members are expected to exercise judgment in their area of
expertise and help ensure diversity of sessions and speakers. View full
details of the session selection process here.

In order to vote, you need an OSF community membership. If you do not
have an account, please create one by going to openstack.org/join. If
you need to reset your password, you can do that here.

Hurry, voting closes Monday, February 4 at 11:59pm Pacific Time
(Tuesday, February 5 at 7:59am UTC).

Continue to visit https://www.openstack.org/summit/denver-2019 for all
Summit-related information.

REGISTER
Register for the Summit before prices increase in late February!

VISA APPLICATION PROCESS
Make sure to secure your Visa soon. More information about the Visa
application process is available.

TRAVEL SUPPORT PROGRAM
February 27 is the last day to submit applications. Please submit your
applications by 11:59pm Pacific Time (February 28 at 7:59am UTC).

If you have any questions, please email summit at openstack.org.

Cheers,
Claire
-------------- next part --------------
An HTML attachment was scrubbed...
URL: