From iwienand at redhat.com Mon Jul 2 04:17:57 2018 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 2 Jul 2018 14:17:57 +1000 Subject: Deprecation: zuul-info/inventory.yaml creation moved from validate-host to log-inventory role (v2) Message-ID: Hello (again), This is a follow-up to a prior mail http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-June/000446.html --- The "validate-host" role [1] currently creates the "inventory.yaml" [2] file, which contains the ansible inventory information for the job. For some history, the role was originally called "debug-ansible" but renamed to "validate-host". Since it collects other ansible information, saving the inventory was added to the role shortly after the rename. As copying this inventory information requires access to the executor context, it must be run as a privileged job. However, the role also includes quite a number of other operations that do not have similar requirements. Due to not currently having an automated CI testing framework for privileged jobs, this leaves changes to these unrelated parts (like [3]) with insufficient testing coverage. The creation of the inventory file has been moved into a separate role "log-inventory" [4]. We now wish to remove the inventory creation from the "validate-host" role [5]. This will lower the surface-area of privileged jobs and also enable more straight-forward testing of the rest of the "validate-host" role [6]. For the OpenStack ecosystem, the new role will be added to the base job [7] so this should be entirely transparent. If you build your own base templates you will need to add the "log-inventory" role to continue collecting the inventory.yaml. You can do this *now*, and thus be unaffected when the creation is removed from "validate-host", with one caveat: Note that testing uncovered an issue with the original role -- the "inventory_file" variable is not set by Ansible for localhost only jobs. Thus "log-inventory" will not break any jobs where "validate-host" was working, but you should be careful not to additionally introduce it into localhost (that is to say, executor-only) jobs without considering this. Longer term, the issue has been fixed and "log-inventory" is safe to use in all contexts. A separate variable for the inventory file has been added to the executor context with [8] and is now conditionally used in "log-inventory" with [9] (thus if you are using zuul from master, you are unaffected). Per the deprecation policy for requiring new zuul features [10], this new variable will be used unconditionally to copy the file with [11] around 4 weeks after the next zuul release. Taking into consideration: * reviews posted for several weeks * pending debugging of gate issues with "validate-host" * the informational/debugging nature of the output * the relative ease of adding the new role to maintain the status quo * interaction with current users of the roles I would suggest we merge [7] - addition of "log-inventory" to base jobs in project-config [5] - removal of inventory copy from "validate-host" To complete the initial part of this transition. 
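For deployers who maintain their own base job, a minimal sketch of the addition might look like the following post-run playbook; the playbook path and layout are illustrative assumptions and will differ between deployments:

# Hypothetical playbooks/base/post.yaml in a trusted config project.
# The key point is that log-inventory runs in the executor (localhost)
# context, where the inventory file is available.
- hosts: localhost
  roles:
    # Copies the build's Ansible inventory into the job logs as
    # zuul-info/inventory.yaml, replacing the copy formerly done by
    # validate-host.
    - log-inventory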
Thank you -i [1] https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/validate-host [2] see any job results zuul-info/inventory.yaml [3] https://review.openstack.org/576351 [4] https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/log-inventory [5] https://review.openstack.org/563790 [6] https://review.openstack.org/563788 [7] https://review.openstack.org/563789 [8] https://review.openstack.org/578235 [9] https://review.openstack.org/578234 [10] https://zuul-ci.org/docs/zuul-jobs/deprecation.html [11] https://review.openstack.org/578238 From corvus at inaugust.com Mon Jul 2 21:52:51 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 02 Jul 2018 14:52:51 -0700 Subject: Zuul project update 2018-07-02 Message-ID: <87k1qd1f0s.fsf@meyer.lemoncheese.net> Hi, This past week has seen: Monty upgrade the zuul dashboard to Angular 6 and typescript. It seems like that's mostly working now, though there may still be an issue in the multi-tenant hosting configuration. Mohammed fixed the status page to not show empty headers when filtering for a specific project (a long-requested feature)! Tristan found more issues with the latest Nodepool master version regarding multiple pools using different labels (fixed in [1] by David), worked on changes to the log-inventory role[2], and contributed to the Angular 6 work. He investigated a couple of exceptions being raised in the github driver, and started to work on improving logging by promoting important log messages from debug to info[3]. He also started to work on selenium tests[4]. I started looking into adding support for leaving in-line comments. I'll continue to work a bit on that this week, but mostly focus on finishing up the container spec and hopefully getting it merged. Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email -Jim [1] https://review.openstack.org/578831 [2] https://review.openstack.org/#/q/topic:log-inventory-fixes [3] https://review.openstack.org/#/q/topic:improve-logging [4] https://review.openstack.org/579039 From tdecacqu at redhat.com Tue Jul 3 01:36:20 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 03 Jul 2018 01:36:20 +0000 Subject: Dashboard routing proposals Message-ID: <1530580529.10r9cnlrwg.tristanC@fedora> Hi, with the recent angular6 refactor, the routing of the web interface components changed slightly, as it is now managed by the navigation javascript component. Unfortunately this broke the multi-tenant setup and we need extra fixes to support it. Here are some options: A) Make the navigation component call a relative 'api/info' endpoint onInit to be able to introspect the deployment mode: the returned json may contain a 'tenant' value to indicate this is a white-label setup. B) Always assume white-label and document that multi-tenant needs to redirect the entrypoint to 't/tenants.html' C) Always assume multi-tenant and document that white-label needs a rewrite rule to redirect the entry-point to 'status.html' D) Rewrite the navigation endpoint to rely on a '#!' anchor; this would be a mix of the first option (A) to introspect the setup mode, but using a single 'index.html' file instead. I would prefer D because the current design makes the multi-tenant setup look special with the needed '/t/' path and it feels like a second-class citizen. On the other hand, B or C sound simple enough and wouldn't delay the next release much. Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mordred at inaugust.com Tue Jul 3 20:21:49 2018 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 3 Jul 2018 15:21:49 -0500 Subject: Dashboard routing proposals In-Reply-To: <1530580529.10r9cnlrwg.tristanC@fedora> References: <1530580529.10r9cnlrwg.tristanC@fedora> Message-ID: <754b194c-4d99-9def-131c-25455d0c1c19@inaugust.com> On 07/02/2018 08:36 PM, Tristan Cacqueray wrote: > Hi, > > with the recent angular6 refactor, the web interface components routing > slightly changed as it is now managed by the navigation javascript > component. Unfortunately this broke multi-tenant setup and we need an > extra fixes to support it. Whoops. Sorry. > Here are some options: > > A) Make the navigation component call a relative 'api/info' endpoint > onInit to be able to introspect the deployment mode: the returned json > may contains a 'tenant' value to indicate this is a white label setup. This is actually what's been in my brain-hole all along. It's a little strange because we need to do it before we pass the routing list to the decorator - so I'm not 100% sure if we can use angular http stuff will be set up and we might have to use lower-level stuff. Incidentally, we could actually just have 2 routing lists - one for single and one for multi - and return the right one based on the results of the api/info call so that we don't have both t/:tenant/status.html and status.html I'll work up a patch for this if you don't beat me to it. > B) Always assume white-label and document that multi-tenant needs to > redirect the entrypoint to 't/tenants.html' > > C) Always assume multi-tenant and document that white-label needs a > rewrite rule to redirect the entry-point to 'status.html' Both of these seem suboptimal. > D) Rewrite the navigation endpoint to rely on a '#!' anchor, this would > be a mix of the first option (A) to introspect the setup mode, but > using a single 'index.html' file instead. Well - we could do this -and don't even need to do #! (angular supports "html5 mode" by default which will do single-page-apps without the need for the #1. However, I think as you mention we'll still need to do A - so I think we should consider "just be a single-page-app" independently. I don't think this will wind up being better though. We're actually already using the mechanics that angular uses to do #! - it's the routing table that's currently giving us problems. > > I would prefer D because the current design makes multi-tenant setup > looks special with the needed '/t/' path and it feels like a > second-class citizen. I certainly don't want multi-tenant to be second-class. but I think we need to do the important parts of A anyway. I don't think we'll be able to get rid of the /t/ paths - or, rather, I don't think getting rid of them would help much. We could make the routing go from 't/:tenant/status.html' to just ':tenant/status.html' - but even if we used #! in the url (which we don't need to do) we'd still want the tenant to be in the URL so that people can bookmark locations, yeah? So let's maybe start with A, make sure we get things working properly - and then we can circle back and discuss how much we care about multi-page vs. single-page. > On the other hand, B or C sounds simple enough and this wouldn't delay > much the next release. 
> > > Regards, > -Tristan > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > From tdecacqu at redhat.com Thu Jul 5 22:32:47 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 05 Jul 2018 22:32:47 +0000 Subject: Dashboard routing proposals In-Reply-To: <754b194c-4d99-9def-131c-25455d0c1c19@inaugust.com> References: <1530580529.10r9cnlrwg.tristanC@fedora> <754b194c-4d99-9def-131c-25455d0c1c19@inaugust.com> Message-ID: <1530826956.wtm92a6lt0.tristanC@fedora> On July 3, 2018 8:21 pm, Monty Taylor wrote: > On 07/02/2018 08:36 PM, Tristan Cacqueray wrote: >> Hi, >> >> with the recent angular6 refactor, the web interface components routing >> slightly changed as it is now managed by the navigation javascript >> component. Unfortunately this broke multi-tenant setup and we need an >> extra fixes to support it. > > Whoops. Sorry. > >> Here are some options: >> >> A) Make the navigation component call a relative 'api/info' endpoint >> onInit to be able to introspect the deployment mode: the returned json >> may contains a 'tenant' value to indicate this is a white label setup. > > This is actually what's been in my brain-hole all along. It's a little > strange because we need to do it before we pass the routing list to the > decorator - so I'm not 100% sure if we can use angular http stuff will > be set up and we might have to use lower-level stuff. > > Incidentally, we could actually just have 2 routing lists - one for > single and one for multi - and return the right one based on the results > of the api/info call so that we don't have both t/:tenant/status.html > and status.html > > I'll work up a patch for this if you don't beat me to it. > The 2 routing lists proposal sounds good, though I think you over-estimate my understanding of that angular6 stack :-) Can we call 'api/info' right before doing the NgModule import? And what's the best practice to make the info structure global so that the other component can access it? >> B) Always assume white-label and document that multi-tenant needs to >> redirect the entrypoint to 't/tenants.html' >> >> C) Always assume multi-tenant and document that white-label needs a >> rewrite rule to redirect the entry-point to 'status.html' > > Both of these seem suboptimal. > >> D) Rewrite the navigation endpoint to rely on a '#!' anchor, this would >> be a mix of the first option (A) to introspect the setup mode, but >> using a single 'index.html' file instead. > > Well - we could do this -and don't even need to do #! (angular supports > "html5 mode" by default which will do single-page-apps without the need > for the #1. However, I think as you mention we'll still need to do A - > so I think we should consider "just be a single-page-app" independently. > > I don't think this will wind up being better though. We're actually > already using the mechanics that angular uses to do #! - it's the > routing table that's currently giving us problems. > I actually just meant single-page. >> >> I would prefer D because the current design makes multi-tenant setup >> looks special with the needed '/t/' path and it feels like a >> second-class citizen. > > I certainly don't want multi-tenant to be second-class. > Then maybe we could re-think the UX in the future... On one hand we have a white-label view which is great for a tenant user. 
Otoh we have the multi-tenant view with the tenant list and its many statuses, jobs and builds page that are not very practical when you have more than 2 or 3 tenants. In most cases, shouldn't this multi-tenant view be restricted to admin users anyway? > but I think we need to do the important parts of A anyway. > > I don't think we'll be able to get rid of the /t/ paths - or, rather, I > don't think getting rid of them would help much. We could make the > routing go from 't/:tenant/status.html' to just ':tenant/status.html' - > but even if we used #! in the url (which we don't need to do) we'd still > want the tenant to be in the URL so that people can bookmark locations, > yeah? > Yes, I was hoping that a single-page interface would let us use simpler http route like zuul.example.com/{tenant-name}/status and zuul.white-label.com/status. Having a single index.html file would also simplify caching, right now the browser may fetch the whole bundle if you start from a different view... > So let's maybe start with A, make sure we get things working properly - > and then we can circle back and discuss how much we care about > multi-page vs. single-page. > Sounds like a good plan. The option A seems relevant no matter what directions we take. We are already waiting for a new release of Zuul so I can find some time to work on this if needed. Thanks, -Tristan >> On the other hand, B or C sounds simple enough and this wouldn't delay >> much the next release. >> >> >> Regards, >> -Tristan >> >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From artem.goncharov at gmail.com Fri Jul 6 09:29:22 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 6 Jul 2018 11:29:22 +0200 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker Message-ID: Hi all, as promised in the IRC I am publishing POC work on preparing zuul-apb. It is intended to help (currently only myself) to install Zuul on top of Openshift (Kubernetes should be supported also, but with limited functionality and not tried so far). The project started to be initially used in my company, where after running zuul on VMs I decided to shift it to using Openshift. It is currently located in my personal repo in github with intention to be moved to corporate repo shortly. But before I do this I wanted to offer it to Zuul community for discussion. If there is enough interest in adopting it, I am open and will be glad to shift development under the Openstack hat. So here are some details: - Repo: https://github.com/gtema/zuul-apb - project is built to make use of source-to-image to build own images. This is, however, only possible if you use Openshift (not the S2I tool itself, but presence of BuildConfigs). Example repo with s2i image scripts. For plain Kubernetes this might be later solved by separate pod invoking s2i inside. - regular project-config repo is also mandatory, since it is used to fetch initial components configuration. It should contain zuul/{zuul.conf,main.yaml}, nodepool{nodepool.yaml} files. 
It is also planned to watch project-config changes and redeploy configMaps. - gearman certificates and all other secrets are passed as APB parameters (unless you use the `dev` plan and bootstrap a special variables file inside for dev purposes) and stored as a Secret - a zookeeper cluster is also created as part of the project (an external cluster might be used, but I decided to initially provide my own) - the project is not yet used in production, therefore not everything is how it should be. I intend to work on this further. There are also some issues with AnsibleServiceBroker itself, which caused some delays and a few iterations with upstream. So for example the Dockerfile is based on my upstream fork due to issues with ansible 2.7 (some bugs in `lookup`, causing line breaks to be broken). When changes upstream are published I will rebase back. - work is based on Tobias's shared Openshift templates - thanks Tobias - currently 2 components (nodepool-builder and zuul-executor) require privileged accounts to work. This is kind of a problem with the APB approach (and generally with using containers), since there is currently no way to deliver this nicely and it poses a big security risk to the whole cluster. Therefore there is a manual step required for deployment to grant those privileges. On the other hand nodepool-builder is poorly suited to running inside a container due to the many required privileges, security concerns, and bugs in `elements` with respect to e.g. SELinux handling ( https://bugs.launchpad.net/diskimage-builder/+bug/1779273), and potential problems with cross-platform builds. Tristan had an idea of using nodepool-launcher to start a VM with nodepool-builder (I personally like the approach). Otherwise it is funny anyway - a VM is running a container, which is building a VM image ;-). Additionally, Tristan volunteered to also work on making zuul-executor not require a privileged account. - together with the work on containerization of zuul jobs this might become a full solution for running zuul under K8/Openshift and complement Windmill with a VM installation approach. I would be glad if you share your thoughts, especially whether the community is interested in having this as an official project. Regards and have a nice weekend, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From shrewsbury.dave at gmail.com Fri Jul 6 13:23:58 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Fri, 6 Jul 2018 09:23:58 -0400 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: References: Message-ID: Hi, On Fri, Jul 6, 2018 at 5:29 AM Artem Goncharov wrote: [snip] > > - work is based on Tobias's shared Openshift templates - thanks Tobias > > NB: I'm not entirely sure what's going to happen with this driver given the proposed container spec: https://review.openstack.org/560136 > > - currently 2 components (nodepool-builder and zuul-executor) are > requiring privileged accounts to work. This is kind of problem with APB > approach (and generally using containers), since there is currently no way > to deliver this nicely and it poses big security risk to the whole cluster. > Therefore there is a manual step required for deployment to grant those > privileges. On the other hand nodepool-builder is very poor on running > inside of container due to lots of required privileges, security concerns, > and bugs in `elements` with respect to i.e. SELinux handling ( > https://bugs.launchpad.net/diskimage-builder/+bug/1779273), potential > problems with cross-platform builds.
Tristian had an idea of using > nodepool-launcher to start VM with nodepool-builder (I personally like the > approach). Otherwise it is anyway funny - VM is running container, which is > building VM image ;-). Additionally Tristian volunteered to work also on > zuul-executor not to required privileged account. > > You have the option of using pre-existing cloud images, rather than building your own with nodepool-builder. Not sure how that fits into your workflow, though. See: https://zuul-ci.org/docs/nodepool/configuration.html#cloud-images. -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL:
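To illustrate the cloud-images suggestion above, a minimal nodepool.yaml sketch might look like the following; the provider, image, label and flavor names are illustrative assumptions, and the linked configuration documentation remains the authoritative reference:

# Hypothetical nodepool.yaml fragment using a pre-existing cloud image
# instead of a nodepool-builder diskimage; all names are examples.
labels:
  - name: bionic-vm

providers:
  - name: my-cloud              # assumed provider name
    cloud: my-cloud             # matches an entry in clouds.yaml
    cloud-images:
      - name: bionic            # image already present in the cloud
        username: ubuntu        # account used for SSH
    pools:
      - name: main
        max-servers: 4
        labels:
          - name: bionic-vm
            cloud-image: bionic
            flavor-name: m1.medium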
From corvus at inaugust.com Mon Jul 9 18:15:01 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 09 Jul 2018 11:15:01 -0700 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: (Artem Goncharov's message of "Fri, 6 Jul 2018 11:29:22 +0200") References: Message-ID: <87muv0ff8a.fsf@meyer.lemoncheese.net> Artem Goncharov writes: > Hi all, > > as promised in the IRC I am publishing POC work on preparing zuul-apb. It > is intended to help (currently only myself) to install Zuul on top of > Openshift (Kubernetes should be supported also, but with limited > functionality and not tried so far). > > The project started to be initially used in my company, where after running > zuul on VMs I decided to shift it to using Openshift. It is currently > located in my personal repo in github with intention to be moved to > corporate repo shortly. But before I do this I wanted to offer it to Zuul > community for discussion. If there is enough interest in adopting it, I am > open and will be glad to shift development under the Openstack hat. So here > are some details: Thanks! First, I'd like to point out that Monty is working on some jobs to automatically build and publish Zuul container images. Perhaps once those are working, you may be able to use those with this project. Generally speaking, I think we should take the approach of being fairly conservative in adding to the official Zuul project. I think things that we adopt in that context should be very widely applicable. If we adopt deployment tooling, then I think it should be very fundamental and provide building blocks that can be used in more complex systems. By way of examples, I could see some Ansible roles, or perhaps a helm chart for a simple all-in-one system (so that we can use it in our own CI tests and document it as a quickstart deployment method), but more complex systems may not be a good fit. Of course, those of us who have been working on OpenStack for a while will recognize this question. :) I think right now zuul-apb would be best served as being a friendly affiliated project, but not one that we officially adopt. Within the OpenStack-Infra project, we're working on an approach for hosting related projects without having to deal with issues of OpenStack (or even Zuul) branding. I think this question would be much easier to answer if we were further along with that. This is a very tricky question, and we may want to periodically re-evaluate it, or even change our minds. I'd like to hear from other folks too as to what direction we should take with regards to deployment tooling. Thanks again for writing the tool and offering it. Deployment systems are important and they strengthen the community overall. -Jim From corvus at inaugust.com Mon Jul 9 21:05:58 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 09 Jul 2018 14:05:58 -0700 Subject: Zuul project update 2018-07-09 Message-ID: <87sh4sdsqx.fsf@meyer.lemoncheese.net> Hi, Last week Shrews tracked down and fixed a Nodepool bug preventing the launcher from finding unmanaged images by ID[1]. Paul is working on a series of changes to enable jobs to specify that specific child jobs should be skipped[2][3]. I have been "helping" Tobias fix a bug related to dynamic configuration. Discussion is continuing on the container spec[4], but I think it's getting very close to approval. Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email -Jim [1] https://review.openstack.org/579664 [2] https://review.openstack.org/578181 [3] https://review.openstack.org/578230 [4] https://review.openstack.org/560136 From mordred at inaugust.com Tue Jul 10 01:17:32 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 9 Jul 2018 21:17:32 -0400 Subject: Exciting opportunity to bikeshed over a naming question! Message-ID: <52e0e492-a4a5-ee7a-7602-66db457bad78@inaugust.com> Hey everybody, I've been working on jobs to build zuul container images. The obvious next step, once the build job works - is to publish them to dockerhub. This is where the bikeshed comes in. A while back, I registered dockerhub orgs for both "zuul" and "zuulci". Which org should we publish container images to? A part of me wants to use the 'zuul' one, since we have it. OTOH, the website is zuul-ci - so the other part of me thinks we should just use that. The end result of this will be docker images that look like: zuul/zuul-base zuul/zuul-executor zuul/zuul-scheduler zuul/zuul-web etc or zuulci/zuul-base zuulci/zuul-executor zuulci/zuul-scheduler zuulci/zuul-web etc Thoughts? Monty From mrhillsman at gmail.com Tue Jul 10 02:50:15 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 9 Jul 2018 21:50:15 -0500 Subject: Exciting opportunity to bikeshed over a naming question! In-Reply-To: <52e0e492-a4a5-ee7a-7602-66db457bad78@inaugust.com> References: <52e0e492-a4a5-ee7a-7602-66db457bad78@inaugust.com> Message-ID: I think since the website uses zuul-ci, zuulci is reasonable, maybe even zuul-ci if dockerhub allows dashes in names :) On Mon, Jul 9, 2018 at 8:17 PM, Monty Taylor wrote: > Hey everybody, > > I've been working on jobs to build zuul container images. The obvious next > step, once the build job works - is to publish them to dockerhub. > > This is where the bikeshed comes in. > > A while back, I registered dockerhub orgs for both "zuul" and "zuulci". > > Which org should we publish container images to? > > A part of me wants to use the 'zuul' one, since we have it. > > OTOH, the website is zuul-ci - so the other part of me thinks we should > just use that. > > The end result of this will be docker images that look like: > > zuul/zuul-base > zuul/zuul-executor > zuul/zuul-scheduler > zuul/zuul-web > etc > > or > > zuulci/zuul-base > zuulci/zuul-executor > zuulci/zuul-scheduler > zuulci/zuul-web > etc > > Thoughts? > > Monty > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gdk at redhat.com Tue Jul 10 03:12:09 2018 From: gdk at redhat.com (Greg Dekoenigsberg) Date: Mon, 9 Jul 2018 23:12:09 -0400 Subject: Exciting opportunity to bikeshed over a naming question! In-Reply-To: References: <52e0e492-a4a5-ee7a-7602-66db457bad78@inaugust.com> Message-ID: +1 to zuul-ci if you can get it, zuulci otherwise. On Mon, Jul 9, 2018, 10:51 PM Melvin Hillsman wrote: > I think since the website uses zuul-ci zuulci is reasonable, maybe even > zuul-ci if dockerhub allows dashes in names :) > > On Mon, Jul 9, 2018 at 8:17 PM, Monty Taylor wrote: > >> Hey everybody, >> >> I've been working on jobs to build zuul container images. The obvious >> next step, once the build job works - is to publish them to dockerhub. >> >> This is where the bikeshed comes in. >> >> A while back, I registered dockerhub orgs for both "zuul" and "zuulci". >> >> Which org should we publish container images to? >> >> A part of me wants to use the 'zuul' one, since we have it. >> >> OTOH, the website is zuul-ci - so the other part of me thinks we should >> just use that. >> >> The end result of this will be docker images that look like: >> >> zuul/zuul-base >> zuul/zuul-executor >> zuul/zuul-scheduler >> zuul/zuul-web >> etc >> >> or >> >> zuulci/zuul-base >> zuulci/zuul-executor >> zuulci/zuul-scheduler >> zuulci/zuul-web >> etc >> >> Thoughts? >> >> Monty >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Jul 10 03:35:24 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 09 Jul 2018 20:35:24 -0700 Subject: Exciting opportunity to bikeshed over a naming question! In-Reply-To: <52e0e492-a4a5-ee7a-7602-66db457bad78@inaugust.com> (Monty Taylor's message of "Mon, 9 Jul 2018 21:17:32 -0400") References: <52e0e492-a4a5-ee7a-7602-66db457bad78@inaugust.com> Message-ID: <87efgbbw5f.fsf@meyer.lemoncheese.net> Monty Taylor writes: > Hey everybody, > > I've been working on jobs to build zuul container images. The obvious > next step, once the build job works - is to publish them to dockerhub. > > This is where the bikeshed comes in. > > A while back, I registered dockerhub orgs for both "zuul" and "zuulci". > > Which org should we publish container images to? > > A part of me wants to use the 'zuul' one, since we have it. > > OTOH, the website is zuul-ci - so the other part of me thinks we > should just use that. I believe this has come up before, and we decided that our brand was Zuul and we should use it where possible. Among the reasons to avoid using "zuul ci" is the fact that it's much more than a CI system. So "zuul" has my vote. -Jim From fungi at yuggoth.org Tue Jul 10 03:38:14 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 10 Jul 2018 03:38:14 +0000 Subject: Exciting opportunity to bikeshed over a naming question! 
In-Reply-To: <87efgbbw5f.fsf@meyer.lemoncheese.net> References: <52e0e492-a4a5-ee7a-7602-66db457bad78@inaugust.com> <87efgbbw5f.fsf@meyer.lemoncheese.net> Message-ID: <20180710033814.p2sfk6f3rwqe4pfz@yuggoth.org> On 2018-07-09 20:35:24 -0700 (-0700), James E. Blair wrote: [...] > I believe this has come up before, and we decided that our brand was > Zuul and we should use it where possible. Among the reasons to avoid > using "zuul ci" is the fact that it's much more than a CI system. > > So "zuul" has my vote. I too prefer "zuul" (in part due to the above stated reasons, but also because "zuulci" seems like it should be a pasta dish). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mrhillsman at gmail.com Tue Jul 10 03:58:29 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 9 Jul 2018 22:58:29 -0500 Subject: Exciting opportunity to bikeshed over a naming question! In-Reply-To: <20180710033814.p2sfk6f3rwqe4pfz@yuggoth.org> References: <52e0e492-a4a5-ee7a-7602-66db457bad78@inaugust.com> <87efgbbw5f.fsf@meyer.lemoncheese.net> <20180710033814.p2sfk6f3rwqe4pfz@yuggoth.org> Message-ID: I also prefer zuul for the reasons you mentioned Jim but figured that with the domain being zuul-ci that the decision had been made to use that branding. I was going to say in my original message that I thought zuul was more about gating or not specifically ci but again, domain name, so I figured not to rock the boat. I much rather prefer zuul but does that mean the domain name would change (even plans to do so are in the mix) or what makes zuul-ci for the domain ok but rather zuul for dockerhub? Just some thoughts that if zuul-ci will stay the domain name it may make it easier for folks to use zuul-ci for dockerhub and any other such things but I agree if it can be just zuul it should be. On Mon, Jul 9, 2018 at 10:38 PM, Jeremy Stanley wrote: > On 2018-07-09 20:35:24 -0700 (-0700), James E. Blair wrote: > [...] > > I believe this has come up before, and we decided that our brand was > > Zuul and we should use it where possible. Among the reasons to avoid > > using "zuul ci" is the fact that it's much more than a CI system. > > > > So "zuul" has my vote. > > I too prefer "zuul" (in part due to the above stated reasons, but > also because "zuulci" seems like it should be a pasta dish). > -- > Jeremy Stanley > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Jul 10 04:21:07 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 09 Jul 2018 21:21:07 -0700 Subject: Exciting opportunity to bikeshed over a naming question! In-Reply-To: (Melvin Hillsman's message of "Mon, 9 Jul 2018 22:58:29 -0500") References: <52e0e492-a4a5-ee7a-7602-66db457bad78@inaugust.com> <87efgbbw5f.fsf@meyer.lemoncheese.net> <20180710033814.p2sfk6f3rwqe4pfz@yuggoth.org> Message-ID: <87muuzafgs.fsf@meyer.lemoncheese.net> Melvin Hillsman writes: > I also prefer zuul for the reasons you mentioned Jim but figured that with > the domain being zuul-ci that the decision had been made to use that > branding. 
I was going to say in my original message that I thought zuul was > more about gating or not specifically ci but again, domain name, so I > figured not to rock the boat. I much rather prefer zuul but does that mean > the domain name would change (even plans to do so are in the mix) or what > makes zuul-ci for the domain ok but rather zuul for dockerhub? Just some > thoughts that if zuul-ci will stay the domain name it may make it easier > for folks to use zuul-ci for dockerhub and any other such things but I > agree if it can be just zuul it should be. If zuul.org were available, I'd love to use it, but it wasn't. This was the best we could do at the time. It's a compromise rather than a branding choice. We should probably collect all of this info into a branding guide in the docs. -Jim From artem.goncharov at gmail.com Tue Jul 10 07:10:43 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Tue, 10 Jul 2018 09:10:43 +0200 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: <87muv0ff8a.fsf@meyer.lemoncheese.net> References: <87muv0ff8a.fsf@meyer.lemoncheese.net> Message-ID: Hi James, thanks a lot, I really appreciate you thoughts. > Thanks! > > First, I'd like to point out that Monty is working on some jobs to > automatically build and publish Zuul container images. Perhaps once > those are working, you may be able to use those with this project. > I am glad this is happening, but I don't think I will use those directly. I am building them with S2I, and there are absolutely no problem with that. Moreover, on each Openshift meetup a question arise: how do we maintain security of our system using 3rdparty images. RedHat came with a tool in Openshift to inspect and alert about vulnerabilities in used images. When you take a ready image from somewhere, you are stuck with hoping it will be updated promptly. Let's be fair, in the last month critical vulnerabilities are being discovered nearly on a weekly basis. Are we going to patch our images or monitor this in any way? Using S2I I do not have this constraint. Whenever OS image is updated I am ready to build my image. In addition to that I have flexibility of just specifying the git ref to be used in images. I think operating your own installation of any SW you would want to have a more granular control over components versions. I even have a feeling some here are also basing on specific gerrit changes for their own installations. > Generally speaking, I think we should take the approach of being fairly > conservative in adding to the official Zuul project. I think things > that we adopt in that context should be very widely applicable. If we > adopt deployment tooling, then I think it should be very fundamental and > provide building blocks that can be used in more complex systems. > By way of examples, I could see some Ansible roles, or perhaps a helm > chart for a simple all-in-one system (so that we can use it in our own > CI tests and document it as a quickstart deployment method), but more > complex systems may not be a good fit. > > Of course, those of us who have been working on OpenStack for a while > will recognize this question. :) > > I think right now zuul-apb would be best served as being a friendly > affiliated project, but not one that we officially adopt. Within the > OpenStack-Infra project, we're working on an approach for hosting > related projects without having to deal with issues of OpenStack (or > even Zuul) branding. 
I think this question would be much easier to > answer if we were further along with that. > Official or not official. This is not a point. I offered to make it known and optionally develop on the openstack infra the same way if there is interest in community, as i.e. windmill is. So far I do not see any interest in contradict to what I was expecting. No problem. > This is a very tricky question, and we may want to periodically > re-evaluate it, or even change our minds. I'd like to hear from other > folks too as to what direction we should take with regards to deployment > tooling. > Strictly personal strong opinion: Think about a box, and not about what is inside of a box. When you try to make your product attractive, you work not only on a product, but on it's package also. I personally would not buy a product if I need to organize transport packaging for it myself or when I am not able to adapt it in my home. It is not sufficient that the product is great if I can't deploy it. To tell the truth I am tired of those SWs, which put usability and deployment on the second place. You can't expect people will use it, if you did nothing to help them deploy SW and get a functional configuration. I guess this is exactly the same discussion, that happened couple of days ago in openstack-dev about Openstack deployment as a whole, where some were mentioning that K8s is much easier to operate. I personally see this problem also. Openstack is great, but is incredibly complex and there is way too less work done on easing it's operations. Even when you take RDO it is not much better. When I first started looking at Openstack and tried deploying it it took me around 2 weeks even having step-by-step manual and packages ready. When I first started looking at Zuul it took me over 2 weeks with zuul install guides. It was not hard to install it, but as soon as you try to get more or less reasonable configuration you stuck for weeks. Windmill tries to make it easier, but is there any reference, any hint on it's existence? To tell the truth, assuming not being affiliated with Openstack, if I would have been asked to evaluate usage of Zuul in company, I would gave it up after a week. Documentation is there, but it is not something you need when you try to start using Zuul outside of Openstack. It focuses on the technical aspects, and a "tutorial"-like guides, administration guides are missing as such. Short doc on manual installing Zuul on Fedora is absolutely insufficient for productive use. So overall my suggestion: think about Openstack and affiliated SW also as anyone outside of Openstack would think. Only this way you will be able to see challenges they have. I personally see that Zuul community is focused mostly inside (while still seeking to become successful outside of Openstack). Reminder: it is a strictly personal opinion and I am not trying to blame anyone. > Thanks again for writing the tool and offering it. Deployment systems > are important and they strengthen the community overall. You are welcome Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Tue Jul 10 10:26:50 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 10 Jul 2018 10:26:50 +0000 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: References: Message-ID: <1531216046.820khvt23x.tristanC@fedora> On July 6, 2018 9:29 am, Artem Goncharov wrote: [...] 
> Additionally Tristian volunteered to work also on > zuul-executor not to required privileged account. Hi, I would love to work on making Zuul a "cloud-native" application, but Zuul devs told me that this is too early and that we'd better work on the container spec first... Since this would have a positive impact on what Artem is doing, here are the details of my proposal: First we extend the ExecutionContext driver to take over workspace preparation so that we can implement a k8s driver to use in place of the current bubblewrap implementation. Then we extend the zuul tenant configuration to be able to associate git projects with k8s namespaces. We may also define the available container images to be used for the job's localhost, e.g. ansible-2.5-python, ansible-2.6-golang, ... Finally, when the zuul executor executes a job, it picks the target namespace based on the change's project, then spawns the correct container image, uploads the speculative source, and runs the ansible-playbook command through the kubernetes exec interface. The pros of this approach are that it: * greatly simplifies deployment, as a basic setup does not need zookeeper or nodepool, * is compatible with the legacy architecture, * fits k8s nicely since build resources are dispatched to each individual project, and * reduces job cost if we relax localhost usage. After all, if the whole execution context is throwable, then the job could simply exec on the localhost. This shouldn't be too difficult to implement, though it's quite an important refactor, so I am waiting for comments and/or alternative solutions from the Zuul core team before doing the legwork. Best regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tdecacqu at redhat.com Tue Jul 10 11:23:41 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 10 Jul 2018 11:23:41 +0000 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: <87muv0ff8a.fsf@meyer.lemoncheese.net> References: <87muv0ff8a.fsf@meyer.lemoncheese.net> Message-ID: <1531220916.cwtafzif43.tristanC@fedora> On July 9, 2018 6:15 pm, James E. Blair wrote: > Artem Goncharov writes: > >> Hi all, >> >> as promised in the IRC I am publishing POC work on preparing zuul-apb. It >> is intended to help (currently only myself) to install Zuul on top of >> Openshift (Kubernetes should be supported also, but with limited >> functionality and not tried so far). >> >> The project started to be initially used in my company, where after running >> zuul on VMs I decided to shift it to using Openshift. It is currently >> located in my personal repo in github with intention to be moved to >> corporate repo shortly. But before I do this I wanted to offer it to Zuul >> community for discussion. If there is enough interest in adopting it, I am >> open and will be glad to shift development under the Openstack hat. So here >> are some details: > > Thanks! > > First, I'd like to point out that Monty is working on some jobs to > automatically build and publish Zuul container images. Perhaps once > those are working, you may be able to use those with this project. > > Generally speaking, I think we should take the approach of being fairly > conservative in adding to the official Zuul project. I think things > that we adopt in that context should be very widely applicable.
If we > adopt deployment tooling, then I think it should be very fundamental and > provide building blocks that can be used in more complex systems. > > By way of examples, I could see some Ansible roles, or perhaps a helm > chart for a simple all-in-one system (so that we can use it in our own > CI tests and document it as a quickstart deployment method), but more > complex systems may not be a good fit. > > Of course, those of us who have been working on OpenStack for a while > will recognize this question. :) > > I think right now zuul-apb would be best served as being a friendly > affiliated project, but not one that we officially adopt. Within the > OpenStack-Infra project, we're working on an approach for hosting > related projects without having to deal with issues of OpenStack (or > even Zuul) branding. I think this question would be much easier to > answer if we were further along with that. > > This is a very tricky question, and we may want to periodically > re-evaluate it, or even change our minds. I'd like to hear from other > folks too as to what direction we should take with regards to deployment > tooling. > It seems like most users has it's own craft to operate a Zuul service. In particular, those tasks are pretty difficult: * upgrade zuul, * scale up/down executors, and * backup/restore the service. With K8/Openshift, we should be able to agree on common tooling and collaborate to automate those actions. Artem PoC looks like a great starting point. Thanks, -Tristan > Thanks again for writing the tool and offering it. Deployment systems > are important and they strengthen the community overall. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From artem.goncharov at gmail.com Tue Jul 10 11:35:27 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Tue, 10 Jul 2018 13:35:27 +0200 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: <1531220916.cwtafzif43.tristanC@fedora> References: <87muv0ff8a.fsf@meyer.lemoncheese.net> <1531220916.cwtafzif43.tristanC@fedora> Message-ID: Thanks Tristian, totally agree with you. In my case I am this Zuul operator (alone). And while this is only one of low prio responsibilities, it becomes incredibly time and effort costly. But then sometimes you need to migrate Zuul (or Openshift) to other HW (VM) and you need to start the whole job again (especially tricky are keys for secrets used in project-config). Regards, Artem On Tue, Jul 10, 2018 at 1:23 PM, Tristan Cacqueray wrote: > On July 9, 2018 6:15 pm, James E. Blair wrote: > >> Artem Goncharov writes: >> >> Hi all, >>> >>> as promised in the IRC I am publishing POC work on preparing zuul-apb. It >>> is intended to help (currently only myself) to install Zuul on top of >>> Openshift (Kubernetes should be supported also, but with limited >>> functionality and not tried so far). >>> >>> The project started to be initially used in my company, where after >>> running >>> zuul on VMs I decided to shift it to using Openshift. It is currently >>> located in my personal repo in github with intention to be moved to >>> corporate repo shortly. But before I do this I wanted to offer it to Zuul >>> community for discussion. 
If there is enough interest in adopting it, I >>> am >>> open and will be glad to shift development under the Openstack hat. So >>> here >>> are some details: >>> >> >> Thanks! >> >> First, I'd like to point out that Monty is working on some jobs to >> automatically build and publish Zuul container images. Perhaps once >> those are working, you may be able to use those with this project. >> >> Generally speaking, I think we should take the approach of being fairly >> conservative in adding to the official Zuul project. I think things >> that we adopt in that context should be very widely applicable. If we >> adopt deployment tooling, then I think it should be very fundamental and >> provide building blocks that can be used in more complex systems. >> >> By way of examples, I could see some Ansible roles, or perhaps a helm >> chart for a simple all-in-one system (so that we can use it in our own >> CI tests and document it as a quickstart deployment method), but more >> complex systems may not be a good fit. >> >> Of course, those of us who have been working on OpenStack for a while >> will recognize this question. :) >> >> I think right now zuul-apb would be best served as being a friendly >> affiliated project, but not one that we officially adopt. Within the >> OpenStack-Infra project, we're working on an approach for hosting >> related projects without having to deal with issues of OpenStack (or >> even Zuul) branding. I think this question would be much easier to >> answer if we were further along with that. >> >> This is a very tricky question, and we may want to periodically >> re-evaluate it, or even change our minds. I'd like to hear from other >> folks too as to what direction we should take with regards to deployment >> tooling. >> >> It seems like most users has it's own craft to operate a Zuul service. > In particular, those tasks are pretty difficult: > * upgrade zuul, > * scale up/down executors, and > * backup/restore the service. > > With K8/Openshift, we should be able to agree on common tooling and > collaborate to automate those actions. Artem PoC looks like a great > starting point. > > Thanks, > -Tristan > > > > Thanks again for writing the tool and offering it. Deployment systems >> are important and they strengthen the community overall. >> >> -Jim >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> >> > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhuin at redhat.com Tue Jul 10 13:29:46 2018 From: mhuin at redhat.com (Matthieu Huin) Date: Tue, 10 Jul 2018 15:29:46 +0200 Subject: Canceling a running job Message-ID: Hello, Yesterday we ran into a case where a periodic job had to be stopped in the middle of its execution. We ended up doing the following: * check the console stream to grab the nodes' IP addresses, from the ansible role that sets up firewall rules (this was a multi-nodes build) * SSH into each node and manually reboot the nodes This is obviously suboptimal, and possibly not pullable on single-node builds. Is there any better way to do this, or at least to get the nodes' IP addresses for any given build? 
Generally speaking, this use case is rather generic, as one might want to be able to stop a running job for a variety of reasons (free up resources, prevent "something bad" from happening accidentally, etc). Is this something we could add into Zuul? Is there a spec for this or should we start one? Cheers, MHU From pabelanger at redhat.com Tue Jul 10 13:37:15 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 10 Jul 2018 09:37:15 -0400 Subject: Canceling a running job In-Reply-To: References: Message-ID: <20180710133715.GA24312@localhost.localdomain> On Tue, Jul 10, 2018 at 03:29:46PM +0200, Matthieu Huin wrote: > Hello, > > Yesterday we ran into a case where a periodic job had to be stopped in > the middle of its execution. We ended up doing the following: > > * check the console stream to grab the nodes' IP addresses, from the > ansible role that sets up firewall rules (this was a multi-nodes > build) > * SSH into each node and manually reboot the nodes > > This is obviously suboptimal, and possibly not pullable on single-node > builds. Is there any better way to do this, or at least to get the > nodes' IP addresses for any given build? > > Generally speaking, this use case is rather generic, as one might want > to be able to stop a running job for a variety of reasons (free up > resources, prevent "something bad" from happening accidentally, etc). > Is this something we could add into Zuul? Is there a spec for this or > should we start one? > Long standing feature request[1], allows operator to dequeue a change from a pipeline. I believe this is the first step to helping here. [1] https://review.openstack.org/95035/ From fungi at yuggoth.org Tue Jul 10 13:42:20 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 10 Jul 2018 13:42:20 +0000 Subject: Canceling a running job In-Reply-To: References: Message-ID: <20180710134219.qjq7xgjkdhfw26ul@yuggoth.org> On 2018-07-10 15:29:46 +0200 (+0200), Matthieu Huin wrote: > Yesterday we ran into a case where a periodic job had to be > stopped in the middle of its execution. [...] > Generally speaking, this use case is rather generic, as one might > want to be able to stop a running job for a variety of reasons > (free up resources, prevent "something bad" from happening > accidentally, etc). Is this something we could add into Zuul? Is > there a spec for this or should we start one? For v2 there was a partly-completed implementation of a `zuul dequeue` RPC client subcommand as a counterpart to `zuul enqueue` but I'm not aware of any work restarted for this under v3. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Tue Jul 10 14:34:22 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 10 Jul 2018 07:34:22 -0700 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: (Artem Goncharov's message of "Tue, 10 Jul 2018 09:10:43 +0200") References: <87muv0ff8a.fsf@meyer.lemoncheese.net> Message-ID: <877em39n2p.fsf@meyer.lemoncheese.net> Artem Goncharov writes: > Think about a box, and not about what is inside of a box. When you try to > make your product attractive, you work not only on a product, but on it's > package also. I personally would not buy a product if I need to organize > transport packaging for it myself or when I am not able to adapt it in my > home. 
It is not sufficient that the product is great if I can't deploy it. > To tell the truth I am tired of those SWs, which put usability and > deployment on the second place. You can't expect people will use it, if you > did nothing to help them deploy SW and get a functional configuration. I agree with this fully, but in order for this to be effective, we can't support every possible way of deploying a piece of software. I think we will be able to build images, and have a simple method to deploy those images (eg, a helm chart). When that is in place, we will have a very short "quickstart" install guide. The zuul-apb system is much more complicated and while I'm sure it's great for your use case, it's not going to be suitable for everyone. There's an opportunity for folks to pitch in on getting upstream container images built, and then a helm chart, followed by building jobs that apply that helm chart to a k8s to test all of this in upstream CI. I think that will go a long way toward your goal and provide a building block for folks to extend to more complex deployments. -Jim From corvus at inaugust.com Tue Jul 10 14:57:01 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 10 Jul 2018 07:57:01 -0700 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: <1531216046.820khvt23x.tristanC@fedora> (Tristan Cacqueray's message of "Tue, 10 Jul 2018 10:26:50 +0000") References: <1531216046.820khvt23x.tristanC@fedora> Message-ID: <8736wr9m0y.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > On July 6, 2018 9:29 am, Artem Goncharov wrote: > [...] >> Additionally Tristian volunteered to work also on >> zuul-executor not to required privileged account. My understanding is that in later operating systems, may be able to set the necessary capabilities for bubblewrap to work in a container? Can you elaborate on what would be required for the existing executor process to work in a container without admin privileges? > I would love to work on making Zuul a "cloud-native" application, but > Zuul devs told me that this is too early and we better work on the > container spec first... Since this would have a positive impact on what > Artem is doing, here are the details of my proposal: There are some good ideas in what your proposal, but I believe that if we implement something like this, it may have some considerable differences. I would caution against beginning work on an implementation of this for now, and, if anyone reading this is interested in the area, please help with the current container spec first. There are a lot of subtle complexities about how Zuul will interact with containers which we *think* we have ideas for written down in the container spec, but we really need experience with them before we go out on a limb. And, depending on the answer above, we may decide this additional complexity is not required at all. Also, I'd like to emphasize that the current system is not "legacy". Even if we do eventually make a container executor interface, Zuul's ability to use virtual (or real) machines as build resources is a key feature. It shouldn't be required for use, but it will always remain a first-class feature. We've solved some very hard problems to make it happen and it's one of the things that sets us apart from other systems. -Jim From corvus at inaugust.com Tue Jul 10 14:59:01 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Tue, 10 Jul 2018 07:59:01 -0700 Subject: Canceling a running job In-Reply-To: <20180710133715.GA24312@localhost.localdomain> (Paul Belanger's message of "Tue, 10 Jul 2018 09:37:15 -0400") References: <20180710133715.GA24312@localhost.localdomain> Message-ID: <87y3ej87d6.fsf@meyer.lemoncheese.net> Paul Belanger writes: > On Tue, Jul 10, 2018 at 03:29:46PM +0200, Matthieu Huin wrote: >> Hello, >> >> Yesterday we ran into a case where a periodic job had to be stopped in >> the middle of its execution. We ended up doing the following: >> >> * check the console stream to grab the nodes' IP addresses, from the >> ansible role that sets up firewall rules (this was a multi-nodes >> build) >> * SSH into each node and manually reboot the nodes >> >> This is obviously suboptimal, and possibly not pullable on single-node >> builds. Is there any better way to do this, or at least to get the >> nodes' IP addresses for any given build? >> >> Generally speaking, this use case is rather generic, as one might want >> to be able to stop a running job for a variety of reasons (free up >> resources, prevent "something bad" from happening accidentally, etc). >> Is this something we could add into Zuul? Is there a spec for this or >> should we start one? >> > Long standing feature request[1], allows operator to dequeue a change from a > pipeline. I believe this is the first step to helping here. > > [1] https://review.openstack.org/95035/ Indeed. To be clear, this would be welcome, someone just needs to finish it. -Jim From tdecacqu at redhat.com Tue Jul 10 15:43:54 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 10 Jul 2018 15:43:54 +0000 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: <8736wr9m0y.fsf@meyer.lemoncheese.net> References: <1531216046.820khvt23x.tristanC@fedora> <8736wr9m0y.fsf@meyer.lemoncheese.net> Message-ID: <1531236153.bnkfi2smx9.tristanC@fedora> On July 10, 2018 2:57 pm, James E. Blair wrote: > Tristan Cacqueray writes: > >> On July 6, 2018 9:29 am, Artem Goncharov wrote: >> [...] >>> Additionally Tristian volunteered to work also on >>> zuul-executor not to required privileged account. > > My understanding is that in later operating systems, may be able to set > the necessary capabilities for bubblewrap to work in a container? Can > you elaborate on what would be required for the existing executor > process to work in a container without admin privileges? > Beside the security concern, it is considered a bad practice to mix different execution contexts (e.g. different tenant running different job) in the same pod. >> I would love to work on making Zuul a "cloud-native" application, but >> Zuul devs told me that this is too early and we better work on the >> container spec first... Since this would have a positive impact on what >> Artem is doing, here are the details of my proposal: > > There are some good ideas in what your proposal, but I believe that if > we implement something like this, it may have some considerable > differences. I would caution against beginning work on an > implementation of this for now, and, if anyone reading this is > interested in the area, please help with the current container spec > first. There are a lot of subtle complexities about how Zuul will > interact with containers which we *think* we have ideas for written down > in the container spec, but we really need experience with them before we > go out on a limb. 
And, depending on the answer above, we may decide > this additional complexity is not required at all. > One important detail is the ability to have build affinity to the projects. Another solution would be to add the project name in the NodeRequest so that Nodepool can create the resources in the user's namespace. For example, Openshift.io creates and uses a special ${user}-ci project for each user. > Also, I'd like to emphasize that the current system is not "legacy". > Even if we do eventually make a container executor interface, Zuul's > ability to use virtual (or real) machines as build resources is a key > feature. It shouldn't be required for use, but it will always remain a > first-class feature. We've solved some very hard problems to make it > happen and it's one of the things that sets us apart from other systems. > Sorry, I chose the wrong word, I meant "existing" setup. Supporting OpenStack or other resource providers is absolutely a key feature. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mordred at inaugust.com Tue Jul 10 19:04:05 2018 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 10 Jul 2018 14:04:05 -0500 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: References: <87muv0ff8a.fsf@meyer.lemoncheese.net> Message-ID: On 07/10/2018 02:10 AM, Artem Goncharov wrote: > Hi James, > > thanks a lot, I really appreciate you thoughts. > > > Thanks! > > First, I'd like to point out that Monty is working on some jobs to > automatically build and publish Zuul container images. Perhaps once > those are working, you may be able to use those with this project. > > > I am glad this is happening, but I don't think I will use those > directly. I am building them with S2I, and there are absolutely no > problem with that. Moreover, on each Openshift meetup a question arise: > how do we maintain security of our system using 3rdparty images. RedHat > came with a tool in Openshift to inspect and alert about vulnerabilities > in used images. When you take a ready image from somewhere, you are > stuck with hoping it will be updated promptly. Let's be fair, in the > last month critical vulnerabilities are being discovered nearly on a > weekly basis. Are we going to patch our images or monitor this in any > way? Using S2I I do not have this constraint. Whenever OS image is > updated I am ready to build my image. > > In addition to that I have flexibility of just specifying the git ref to > be used in images. I think operating your own installation of any SW you > would want to have a more granular control over components versions. I > even have a feeling some here are also basing on specific gerrit changes > for their own installations. Totally. A few thoughts here, for what they're worth. One of the things that will be involved in the upstream jobs to build and publish images would be a job that can build and publish images. Since zuul can, of course, share job config, I would hope we could make a job that would be potentially usable by people with downstream installations like you to build images based on local copies of things (because as you say, people run with patches applied locally, or tied to a specific upstream review, or whatever). Of course, you should never be forced to do that, but I would like whatever job/jobs/scripts we put together for upstream to be re-usable.
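To make that idea a little more concrete, here is a rough, purely illustrative sketch of what such a re-usable "build images from local checkouts and push them" step might boil down to. The registry name and repo path are made up, it assumes each checkout carries whatever Dockerfile or build entry point you use downstream, and the real version would presumably be shared Zuul job configuration (Ansible roles) rather than a standalone script:

    #!/usr/bin/env python3
    # Illustrative sketch only (not an existing Zuul job): build container
    # images from local checkouts -- which may carry downstream patches or a
    # checked-out upstream review -- and push them to a private registry.
    # The registry and path below are hypothetical placeholders.
    import subprocess

    def build_and_push(repos, registry="registry.example.com"):
        for name, path in repos.items():
            tag = "{}/{}:latest".format(registry, name)
            subprocess.run(["docker", "build", "-t", tag, path], check=True)
            subprocess.run(["docker", "push", tag], check=True)

    if __name__ == "__main__":
        # e.g. a local zuul checkout with downstream patches applied
        build_and_push({"zuul": "src/git.zuul-ci.org/zuul/zuul"})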
In my brain, I've been thinking about that sort of like this: - "I want to install and run Zuul on my k8s/openshift" - Run $something which installs Zuul from the upstream images (since they're published) - Update zuul config to start building/publishing images - Update zuul deployment to use the zuul-built images But that's just me, and it may be a terrible idea. Relying on the openshift built-in s2i *service* to build the zuul images feels weird to me, since zuul is, itself, a build automation system. Using the s2i software in the image building process on the other hand might be a fine idea. The pattern pbrx is using right now is *very* similar to what s2i does - in that it uses a builder pattern - and I've got some initial thoughts on how the pbrx code could be restructured to take advantage of s2i. What I don't want to do though is maintain s2i scripts or Dockerfiles for each zuul service - mainly because it's almost _completely_ redundant - and it's only useful for people with container workflows. Pretty much every python project that uses pbr and bindep has every piece of information needed for building appropriate images already expressed in files that are also useful for installing the software without containers. The general steps for this, that apply without modification to every python project using bindep and pbr: - Get a base image (python:slim is the current choice, but python:alpine would work too. python:slim is the current choice because it's a sane python environment. I am **VERY** uninterested in otherwise 'choosing' a distro winner. It's a python project - it should use the python base image) - Update the packages in the base image (for the security reasons you mention above) - Make a bindep builder image, based on the base image, that has bindep installed. - Boot a bindep wheel builder container based on the bindep builder image - In the bindep wheel builder container: - run "bindep -b" to get the list of needed distro packages for the final install - install all of the needed distro packages needed to make wheels of the package and its dependencies (returned by 'bindep -b compile') - install the python project - Collect *all* of the wheels built as a result of installing the python project - Make a project-base image for the project based on the base image: - Install the distro packages collected earlier from bindep -b - Install the wheels collected from the builder image step - For each console-script listed in setup.cfg: - Make an image named for the console-script based on the project-base image with ENTRYPOINT set to the console-script There are a few details - but that process can be followed for everything using bindep/pbr without expressing any duplicate information about dependencies or process anywhere. Now - as we think about using the s2i tool in the context of the above process, we can do the following (assume that '.' is the source repo) (Incidentally, even though this is a python app, I think following https://github.com/openshift/source-to-image/blob/master/docs/runtime_image.md is a good plan, because there are C dependencies that have to get built, so the build toolchain, build depends and intermediate artifacts are much larger than the final container needs to be) - Make and publish a bindep builder image It's a really simple image to make. 
This Dockerfile:

    FROM python:slim
    RUN pip install bindep
    ENTRYPOINT /usr/local/bin/bindep

But to make it an s2i builder image and not just a builder image, we need to add some scripts to it that are named appropriately and in the right place. Let's assume we've done that. Then we do:

    s2i . bindep-builder package-base --runtime-image python:slim --runtime-artifact /root/.cache/pip/wheels

That's actually pretty great, and it gets us the 'package-base' container mentioned above. Perhaps that's ok and all we need - after all, one could do:

    docker run -it --rm package-base -- console-script-command

For any of the commands. BUT - using zuul as a conveniently complex example, two gotchas arise.

First - zuul-executor needs a couple of distro packages that are not needed elsewhere. One of them, git, should be fine to just add to bindep. But the other, bubblewrap, is more complicated, since it needs to come from a package repository containing a newer package than what is in the python:slim image. We could just make a zuul-bindep builder image that is based on the bindep builder image but additionally just adds the additional repo - and then add bubblewrap to the normal bindep depends. In fact, writing it down, I think that's likely the way to go. So let's assume that's a good solution there.

Second - zuul-dashboard is a javascript application that is co-located with the python zuul project. It needs to be different - and wants to be in a runtime image based on nginx (probably). We can make a second builder image based on node:8 and then run s2i a second time. It also has a build step that produces a final artifact (compiled html/js) so we'll use the same runtime image pattern:

    s2i . bindep-javascript-builder zuul-dashboard --runtime-image node:8 --runtime-artifact /src/zuul/web/static

Now we've got 2 s2i commands - and wind up with two images, zuul-base and zuul-dashboard. Finally, it would be nice to get entrypoint images for each of the other console scripts - but we can do that in a very simple script to read the setup.cfg and just do essentially a simple:

    for console_script in get_scripts_from_setup_cfg:
        FROM zuul-base
        CMD ["/usr/local/bin/{console_script}"]

In any case - I think it's worth exploring - but I also think there is a real opportunity for general work here, rather than needing automation specific to the Zuul repo itself. I'm not 100% sure, after all of that, if I think we should change pbrx to use s2i behind the scenes, adopt s2i instead of pbrx or something else. It's mostly trying to share thoughts and think out loud. We're still pretty early on this particular task, so I think switching gears a few times shouldn't be upsetting.

There are **very** few details of the above, or any of this I feel strongly about - all of the rest are implementation details. Those details are:

- Use python:slim or python:alpine for the base image of the python content.
- Make the resulting container images as much 'single-process' designed images as possible.
- Avoid build depends or source code repositories in the final images.
- Put the built javascript dashboard into its own container image.
- Avoid duplicating information about distro-depends, python-depends or entrypoints.

As you can probably tell, the last point is particularly important to me. I am VERY against expressing the information in bindep.txt or requirements.txt in a second container-build-specific place. The middle points are also important - because we can actually make some very nice small images here with a little bit of work.
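To make the entrypoint-image step above a bit more concrete, here is a minimal sketch of the kind of setup.cfg-reading script described - just an illustration, not Zuul's actual build tooling: it assumes a 'zuul-base' image already exists, and the output directory and image naming are arbitrary:

    #!/usr/bin/env python3
    # Minimal sketch: read console_scripts from setup.cfg and emit one
    # Dockerfile per entrypoint, each based on an assumed 'zuul-base' image.
    import configparser
    import pathlib

    def console_scripts(setup_cfg="setup.cfg"):
        cfg = configparser.ConfigParser()
        cfg.read(setup_cfg)
        # pbr projects keep entry points in [entry_points]; plain setuptools
        # projects may use [options.entry_points] instead.
        for section in ("entry_points", "options.entry_points"):
            if cfg.has_option(section, "console_scripts"):
                lines = cfg.get(section, "console_scripts").strip().splitlines()
                return [line.split("=", 1)[0].strip()
                        for line in lines if "=" in line]
        return []

    def write_dockerfiles(base_image="zuul-base", outdir="entrypoint-images"):
        for script in console_scripts():
            target = pathlib.Path(outdir) / script
            target.mkdir(parents=True, exist_ok=True)
            (target / "Dockerfile").write_text(
                'FROM {}\nCMD ["/usr/local/bin/{}"]\n'.format(base_image, script))

    if __name__ == "__main__":
        write_dockerfiles()

Whether something like that lives in pbrx, in an s2i script, or in a small standalone helper is exactly the kind of implementation detail mentioned above.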
I think it's important to have per-entrypoint images rather than a single image with multiple potential commands in it that we invoke differently - because it is possible that in the future we reimplement one of the components in another language and if we're just publishing a single 'zuul' container, if we then rewrote zuul-web in C++, it would get awkward. The first point I explained a bit above - but to expand, given the way the world has changed over the last few years, I would prefer to focus on the fact that we are shipping apps based on a language. I don't think there is a benefit in 'picking' a linux distro by our choice of base image, nor do I think there is benefit in writing the automation to be able to create images based on different base linux distros. These are application containers, not machine containers. For zuul, it's a python app, so it should use the python base image. For the dashboard, it's a static html/js/css application so it should go into an nginx or apache base image. It occurs to me that we should probably write up a spec or a design document or something with thoughts like the above - but I guess let's let the email thread be a first-draft with comments and whatnot and we can circle back around and write up something as a followup. > > Generally speaking, I think we should take the approach of being fairly > conservative in adding to the official Zuul project.  I think things > that we adopt in that context should be very widely applicable.  If we > adopt deployment tooling, then I think it should be very fundamental and > provide building blocks that can be used in more complex systems. > > By way of examples, I could see some Ansible roles, or perhaps a helm > chart for a simple all-in-one system (so that we can use it in our own > CI tests and document it as a quickstart deployment method), but more > complex systems may not be a good fit. > > Of course, those of us who have been working on OpenStack for a while > will recognize this question.  :) > > I think right now zuul-apb would be best served as being a friendly > affiliated project, but not one that we officially adopt.  Within the > OpenStack-Infra project, we're working on an approach for hosting > related projects without having to deal with issues of OpenStack (or > even Zuul) branding.  I think this question would be much easier to > answer if we were further along with that. > > > Official or not official. This is not a point. I offered to make it > known and optionally develop on the openstack infra the same way if > there is interest in community, as i.e. windmill is. So far I do not see > any interest in contradict to what I was expecting. No problem. I think we're trying to look a bit ahead and be careful that we don't repeat organizational difficulties that OpenStack found itself in. I think (and I imagine Jim does too) that windmill and your zuul-apb code are awesome things to exist. Similarly software factory. In terms of the wider zuul ecosystem it helps humans be successful, which is yay! Today we're not at much risk of people getting confused by anything (or any more than they are already) because of the co-location with the 1000s of OpenStack repos. However, if you consider the canonical location of zuul repos to be git.zuul-ci.org - then we start running in to the issue that OpenStack ran in to with the Big Tent- which is that there are people who infer a support association due to adjacency of git repos. 
(these people are obviously crazy - but we've gone through this once so are mostly trying to learn the lesson from it) As the zuul team, if we associate with a project such as zuul-apb or windmill, then people are going to assume that we're taking care of it. If we aren't actually collectively taking care of it, then we're potentially communicating something to people that could confuse them. Past the software maint itself, We list things in the documentation tab at https://zuul-ci.org/ - and we have install guide in-work at https://zuul-ci.org/docs/zuul/admin/zuul-from-scratch.html ... how we add things to the main documentation or how we communicate to people how to install things is a thing we need to manage as a community with limited resources. zuul-from-scratch doesn't even use ansible playbooks for now - and that's on purpose, we wanted to make sure there is documentation that doesn't assume any specific automation worldview. We've said that once we're happy that the main docs are good (actually get you a working system) that then providing automated versions of them is a desirable thing. But how many automation options do we want to provide? Ansible seems like a no-brainer, given Zuul's relationship with Ansible. But would we want to provide puppet/chef/salt too? Then we get to container-based deployments. Do we want to cover k8s and openshift? What about docker swarm? What about mesos? APB is a combo of ansible and kubernetes - how do we collectively think about that? With a core team of 13, we clearly can't do all of the options - which I don't think anyone is suggesting. The question is- which ones do we think we can deal with, which ones do we feel we can or should recommend, and how do we navigate that? That's really long winded. I think what I'm trying to say is that I find deployment things essential to our wider ecosystem - but we're _today_ still navigating the openstack/zuul split as well as how to organize ourselves and communicate who we are and what's in/out of scope for us as an upstream. So I think for right now it's about being conservative with how we proceed. It's easier to add zuul-apb *LATER* as a deployment option to our docs later - or as the way we test the containers we build in the gate - than it is to add/adopt today and then walk back later. (what if you get sick of us and stop caring about zuul-apb? what if apb itself goes away or changes or turns in to something new?) All of that is related to listing somethign in the zuul docs, or adding it to zuul-ci.org in some way. However, to make things even more complicated - zuul is hosted in the openstack infrastructure which is pretty darned accepting of projects. Adding zuul-apb to git.openstack.org (but not git.zuul-ci.org) is a thing that doesn't carry the same connotations. (similarly, openstack-infra/puppet-zuul exists, and I do not think that the Zuul team feels it is on the hook for it) I would certainly find it easier to interact with zuul-apb if it was on git.openstack.org than over in github, and I can think of no reason from an openstack-infra point of view why it wouldn't be an acceptable repo to add. > This is a very tricky question, and we may want to periodically > re-evaluate it, or even change our minds.  I'd like to hear from other > folks too as to what direction we should take with regards to deployment > tooling. > > > Strictly personal strong opinion: > > Think about a box, and not about what is inside of a box. 
When you try > to make your product attractive, you work not only on a product, but on > it's package also. I personally would not buy a product if I need to > organize transport packaging for it myself or when I am not able to > adapt it in my home. It is not sufficient that the product is great if I > can't deploy it. To tell the truth I am tired of those SWs, which put > usability and deployment on the second place. You can't expect people > will use it, if you did nothing to help them deploy SW and get a > functional configuration. > > I guess this is exactly the same discussion, that happened couple of > days ago in openstack-dev about Openstack deployment as a whole, where > some were mentioning that K8s is much easier to operate. I personally > see this problem also. Openstack is great, but is incredibly complex and > there is way too less work done on easing it's operations. Even when you > take RDO it is not much better. When I first started looking at > Openstack and tried deploying it it took me around 2 weeks even having > step-by-step manual and packages ready. When I first started looking at > Zuul it took me over 2 weeks with zuul install guides. It was not hard > to install it, but as soon as you try to get more or less reasonable > configuration you stuck for weeks. Windmill tries to make it easier, but > is there any reference, any hint on it's existence? > To tell the truth, assuming not being affiliated with Openstack, if I > would have been asked to evaluate usage of Zuul in company, I would gave > it up after a week. Documentation is there, but it is not something you > need when you try to start using Zuul outside of Openstack. It focuses > on the technical aspects, and a "tutorial"-like guides, administration > guides are missing as such. Short doc on manual installing Zuul on > Fedora is absolutely insufficient for productive use. Yes, absolutely. I touched on this a little bit above, but I'll say more words here. We ABSOLUTELY care about deployment and operational ease of use for our end users - we're just not there yet. We've been focused on making the documentation complete first- so that we'd actually accomplish that step. If we started with some ansible playbooks, we were worried that we'd never find the time to go back and actually document the end to end. When that documentation is done - or at least complete enough that it can be followed and the user end up with a working system - then providing easy-buttons is a thing I believe we've talked about being quite interested in as a community. I personally think that a simple "install zuul" ansible playbook - perhaps windmill - and a simple helm chart would be great entry points. Maybe zuul-apb wants to be an option too (that's a harder one for me personally as a non-openshift user, as the idea of having kubernetes run my ansible rather than ansible run my kubernetes is alien to me. Not saying I think it's bad - it's just the opposite of how my brain would think of it so there is a learning curve for me on that topic I'll need to deal with) > So overall my suggestion: think about Openstack and affiliated SW also > as anyone outside of Openstack would think. Only this way you will be > able to see challenges they have. I personally see that Zuul community > is focused mostly inside (while still seeking to become successful > outside of Openstack). Yes, I totally agree. 
We are absolutely focused on folks outside of OpenStack - but we're not there yet due to bandwidth, so I think it's totally fair feedback that to a new community member we might not yet seem like it. Just know it's not for lack of desire. :) > Reminder: it is a strictly personal opinion and I am not trying to blame > anyone. > > > Thanks again for writing the tool and offering it.  Deployment systems > are important and they strengthen the community overall. > > > You are welcome > > Artem > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > From corvus at inaugust.com Tue Jul 10 19:10:13 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 10 Jul 2018 12:10:13 -0700 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: <1531236153.bnkfi2smx9.tristanC@fedora> (Tristan Cacqueray's message of "Tue, 10 Jul 2018 15:43:54 +0000") References: <1531216046.820khvt23x.tristanC@fedora> <8736wr9m0y.fsf@meyer.lemoncheese.net> <1531236153.bnkfi2smx9.tristanC@fedora> Message-ID: <87lgai6h62.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > On July 10, 2018 2:57 pm, James E. Blair wrote: >> Tristan Cacqueray writes: >> >>> On July 6, 2018 9:29 am, Artem Goncharov wrote: >>> [...] >>>> Additionally Tristian volunteered to work also on >>>> zuul-executor not to required privileged account. >> >> My understanding is that in later operating systems, may be able to set >> the necessary capabilities for bubblewrap to work in a container? Can >> you elaborate on what would be required for the existing executor >> process to work in a container without admin privileges? >> > Beside the security concern, it is considered a bad practice to > mix different execution contexts (e.g. different tenant running > different job) in the same pod. That would be the case if Zuul were a native k8s app, but it's not. We need to work on running Zuul as it's currently designed in k8s, and it's currently designed to support its own firewall between tenants. Can you describe exactly what is currently needed in order to run a Zuul executor (as written today) in a container under k8s or openshift? I think someone suggested that newer kernel versions may make alleviate that concern? Can you speak to that? > One important detail is the ability to have build affinity to the > projects. Another solution would be to add the project name > in the NodeRequest so that Nodepool can create the resources in the > user's namespace. For example, Openshift.io creates and uses a > special ${user}-ci project for each user. I think this is an example of something that we don't want. Zuul is designed for cross-project testing -- reducing that to something where a job has to be allocated to a specific project is not compatible. Tenants, yes, but not projects. Anyway, I don't think we're ready to discuss this yet. Let's focus on making what we have today work better. -Jim From corvus at inaugust.com Tue Jul 10 19:15:27 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 10 Jul 2018 12:15:27 -0700 Subject: Improving new user experience Message-ID: <878t6i6gxc.fsf@meyer.lemoncheese.net> Hi, Recently Artem has rightly pointed out that the experience for new users wanting to run Zuul is not good. We have some work in progress to improve that, and some efforts which are stalled and could use help. 
Here's what I think we should do: 1) Continue work on the Zuul From Scratch guide. This part of the documentation ends right after the Zuul services start up, but without telling anyone how to actually get Zuul to do anything useful. It needs to continue to explain: * Configuring a pipeline * What a base job needs to do * How to create a base job from scratch * How to use or copy the existing base jobs in zuul-base-jobs * How to use jobs from zuul-jobs * Configuring a project * Actually reporting on a change or pull request I'll note that the original Zuul From Scratch document focused only on GitHub and Fedora. Obviously there's a lot of options, and folks have added sections on Gerrit and other operating systems. The most important aspect of this document, however, is that it walk someone through the entire process of setting up Zuul for at least *one* configuration. If we can handle more, that's fine. Ideall, we can keep divergent steps to a minimum. For example, we may want to describe how to push a configuration change up once (for Github, and once for Gerrit) and then subsequently refer to that process generally when we discuss how to build pipelines, jobs, etc. Otherwise we'll have to write the entire document twice. 2) Separately, we should make the process of bootstrapping a running system easier. While Zuul From Scratch should describe everything needed to build an entire Zuul system, well, "from scratch", we should also have a system that makes it easy to deploy a working Zuul so someone can at least start playing with it, and perhaps even continue to run it in production. I'll call this the "quickstart", and it should replace the current erroneously titled "quickstart" in our docs. Monty is working on jobs which will automatically build and upload container images of Zuul to Dockerhub. Once those are in place, we can write a helm chart, or Ansible playbook, or something similar to deploy those with a very minimal configuration. The goal should be to get something up and running with the minimum amount of work and also the minimum amount of documentation. If you want to learn how a Zuul works, read Zuul From Scratch. If you want to just start breaking it, use the quickstart. I think any progress we make on these two items will drastically improve the experience for new users. -Jim From mhuin at redhat.com Tue Jul 10 20:37:37 2018 From: mhuin at redhat.com (Matthieu Huin) Date: Tue, 10 Jul 2018 22:37:37 +0200 Subject: Improving new user experience In-Reply-To: <878t6i6gxc.fsf@meyer.lemoncheese.net> References: <878t6i6gxc.fsf@meyer.lemoncheese.net> Message-ID: Hello, On Tue, Jul 10, 2018 at 9:15 PM, James E. Blair wrote: > Hi, > > Recently Artem has rightly pointed out that the experience for new users > wanting to run Zuul is not good. We have some work in progress to > improve that, and some efforts which are stalled and could use help. > Here's what I think we should do: > > 1) Continue work on the Zuul From Scratch guide. This part of the > documentation ends right after the Zuul services start up, but without > telling anyone how to actually get Zuul to do anything useful. 
It needs > to continue to explain: > > * Configuring a pipeline > * What a base job needs to do > * How to create a base job from scratch > * How to use or copy the existing base jobs in zuul-base-jobs > * How to use jobs from zuul-jobs > * Configuring a project > * Actually reporting on a change or pull request Tristan, Fabien, Nicolas and I have been working on some hands-on articles for using Zuul within Software Factory. These cover some of the items you list, so we'll gladly rework them as upstream documentation patches if this is something you'd be interested in. https://softwarefactory-project.io/r/#/c/12475/ https://softwarefactory-project.io/r/#/c/12476/ https://softwarefactory-project.io/r/#/c/12590/ We have also published articles about setting up Zuul as a 3rd party CI, and as the gating system for Github PRs, which could be a nice complement to the upstream doc as well. > I'll note that the original Zuul From Scratch document focused only on > GitHub and Fedora. Obviously there's a lot of options, and folks have > added sections on Gerrit and other operating systems. The most > important aspect of this document, however, is that it walk someone > through the entire process of setting up Zuul for at least *one* > configuration. If we can handle more, that's fine. Ideall, we can keep > divergent steps to a minimum. For example, we may want to describe how > to push a configuration change up once (for Github, and once for Gerrit) > and then subsequently refer to that process generally when we discuss > how to build pipelines, jobs, etc. Otherwise we'll have to write the > entire document twice. > > 2) Separately, we should make the process of bootstrapping a running > system easier. While Zuul From Scratch should describe everything > needed to build an entire Zuul system, well, "from scratch", we should > also have a system that makes it easy to deploy a working Zuul so > someone can at least start playing with it, and perhaps even continue to > run it in production. I'll call this the "quickstart", and it should > replace the current erroneously titled "quickstart" in our docs. > > Monty is working on jobs which will automatically build and upload > container images of Zuul to Dockerhub. Once those are in place, > we can write a helm chart, or Ansible playbook, or something similar to > deploy those with a very minimal configuration. The goal should be to > get something up and running with the minimum amount of work and also > the minimum amount of documentation. If you want to learn how a Zuul > works, read Zuul From Scratch. If you want to just start breaking it, > use the quickstart. Another way to get a sandbox environment is to follow this documentation: https://softwarefactory-project.io/docs/contributor/how_to_setup_sf_sandbox.html Granted, you need to deploy on CentOS and it installs more services than just Zuul itself, but it allows anyone to have Zuul up and running in 4 commands (3 of which are yum related). > I think any progress we make on these two items will drastically improve > the experience for new users. > Absolutely. We've had great feedback on Zuul from various people experimenting with it using our "sandbox". > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Tue Jul 10 20:54:41 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Tue, 10 Jul 2018 13:54:41 -0700 Subject: Improving new user experience In-Reply-To: (Matthieu Huin's message of "Tue, 10 Jul 2018 22:37:37 +0200") References: <878t6i6gxc.fsf@meyer.lemoncheese.net> Message-ID: <871sca4xri.fsf@meyer.lemoncheese.net> Matthieu Huin writes: > Tristan, Fabien, Nicolas and I have been working on some hands-on > articles for using Zuul within Software Factory. These cover some of > the items you list, so we'll gladly rework them as upstream > documentation patches if this is something you'd be interested in. > > https://softwarefactory-project.io/r/#/c/12475/ > https://softwarefactory-project.io/r/#/c/12476/ > https://softwarefactory-project.io/r/#/c/12590/ You are experts at deploying Zuul, so that makes a lot of sense! One thing that may add difficulty to porting your current work directly upstream is that Zuul's documentation can't really recommend installing softwarefactory RPMs. But, happily, most of the Zuul From Scratch document is already written past that point. The work you're doing on how to start using it once the software is actually running may be easier to port, and that's the part we're missing upstream. > We have also published articles about setting up Zuul as a 3rd party > CI, and as the gating system for Github PRs, which could be a nice > complement to the upstream doc as well. Also, I believe Tristan and I discussed adding a HOWTO section which may be particularly suited to articles like these, once we have the basics in zuul-from-scratch. I just went and looked for it and found that the change to add it fell off of my review dashboard because of a false test failure; I've rechecked it and will re-review it soon. It's at: https://review.openstack.org/571420 -Jim From tdecacqu at redhat.com Wed Jul 11 00:46:49 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 11 Jul 2018 00:46:49 +0000 Subject: zuul-apb to install Zuul on K8/openshift using ansible-service-broker In-Reply-To: <87lgai6h62.fsf@meyer.lemoncheese.net> References: <1531216046.820khvt23x.tristanC@fedora> <8736wr9m0y.fsf@meyer.lemoncheese.net> <1531236153.bnkfi2smx9.tristanC@fedora> <87lgai6h62.fsf@meyer.lemoncheese.net> Message-ID: <1531266001.8df9h7wwd8.tristanC@fedora> On July 10, 2018 7:10 pm, James E. Blair wrote: > Tristan Cacqueray writes: > >> On July 10, 2018 2:57 pm, James E. Blair wrote: >>> Tristan Cacqueray writes: >>> >>>> On July 6, 2018 9:29 am, Artem Goncharov wrote: >>>> [...] >>>>> Additionally Tristian volunteered to work also on >>>>> zuul-executor not to required privileged account. >>> >>> My understanding is that in later operating systems, may be able to set >>> the necessary capabilities for bubblewrap to work in a container? Can >>> you elaborate on what would be required for the existing executor >>> process to work in a container without admin privileges? >>> >> Beside the security concern, it is considered a bad practice to >> mix different execution contexts (e.g. different tenant running >> different job) in the same pod. > > That would be the case if Zuul were a native k8s app, but it's not. We > need to work on running Zuul as it's currently designed in k8s, and it's > currently designed to support its own firewall between tenants. > > Can you describe exactly what is currently needed in order to run a Zuul > executor (as written today) in a container under k8s or openshift? 
> For OpenShift, you need an administrator to set up this security context: https://github.com/gtema/zuul-apb/blob/master/zuul_scc.yaml#L21-L23 > I think someone suggested that newer kernel versions may make alleviate > that concern? Can you speak to that? > Basically, bwrap needs to do what K8 does natively: create namespaces. This is similar to nested-virt in OpenStack. And it seems like this greatly extends the attack surface, similar to allowing chroot to be used inside a chroot, which may give the pod a way to escape the container and escalate to the host. I don't know the full kernel implications, but I suspect it will take time before bwrap usage is allowed inside a non-privileged pod. And even if it does, it may still be considered bad practice to mix execution contexts. >> One important detail is the ability to have build affinity to the >> projects. Another solution would be to add the project name >> in the NodeRequest so that Nodepool can create the resources in the >> user's namespace. For example, Openshift.io creates and uses a >> special ${user}-ci project for each user. > I think this is an example of something that we don't want. Zuul is > designed for cross-project testing -- reducing that to something where a > job has to be allocated to a specific project is not compatible. > > Tenants, yes, but not projects. > What if there is a tenant per project? For example, in the context of a project bringing its own cloud, shouldn't we be able to spawn its jobs on the resources it provides? > Anyway, I don't think we're ready to discuss this yet. Let's focus on > making what we have today work better. > I agree we should focus on what we have today, and I have already written Nodepool drivers that implement both features of the Container Spec:
- K8 (containers that behave like a machine)[0] and
- Openshift (native container workflow)[1]
Unfortunately, OpenShift users are wary of using them because you need a VM to run Zuul. Best Regards, -Tristan [0] https://review.openstack.org/535557 [1] https://review.openstack.org/570667 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From joshua.hesketh at gmail.com Wed Jul 11 04:47:48 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Wed, 11 Jul 2018 14:47:48 +1000 Subject: Improving new user experience In-Reply-To: <878t6i6gxc.fsf@meyer.lemoncheese.net> References: <878t6i6gxc.fsf@meyer.lemoncheese.net> Message-ID: On Wed, Jul 11, 2018 at 5:15 AM, James E. Blair wrote: > Hi, > > Recently Artem has rightly pointed out that the experience for new users > wanting to run Zuul is not good. We have some work in progress to > improve that, and some efforts which are stalled and could use help. > Here's what I think we should do: > > 1) Continue work on the Zuul From Scratch guide. This part of the > documentation ends right after the Zuul services start up, but without > telling anyone how to actually get Zuul to do anything useful. It needs > to continue to explain: > > * Configuring a pipeline > * What a base job needs to do > Specifically something about how to handle logs and artifacts. This, in my opinion, is currently a poor experience for new users. I would expect to be able to install zuul and tell it to run python jobs and be done with it if I was just playing/evaluating.
Having to set up some kind of log server (or any other service for that matter) is something I would only care about once I go into production or scale. Additionally I would add: * How to set up apache/reverse proxy. It's documented elsewhere but needs to be in the from scratch guide. (This could incorporate a log server). * How to set up database results. These are just documentation improvements for new users. I also think we could make a few improvements in general. Some of them may require being more opinioned, but that could just be by selecting reasonable defaults. For example: * Make merge-check'ing a first class citizen/feature that zuul does automatically, not by configuring complex pipeline relationships. * Fix the process for testing base jobs. Perhaps when a change to a base job is made it is ran as an aliased job somehow. * Logs and artifacts should be a first class citizen for zuul. We should expose them via the web ui and allow some kind of backable storage. This means we'd also have a common interface on how to receive jobs (say to some kinda of agent that then puts them into the appropriate backend, creates indexes, and links etc). We could then improve our base job definitions. * Perhaps database results should also be a first class citizen. Rather than implementing it through pipeline reporters, zuul could take a more direct approach at storing the information it knows. * Don't require a trigger for a pipeline: at the moment the config schema requires at least one trigger per pipeline. Instead this should be optional but still triggerable via the CLI or web api (eventually). These are just ideas off the top of my head and clearly have pros/cons that would need fleshing out (and may be bad ideas altogether). Cheers, Josh > * How to create a base job from scratch > * How to use or copy the existing base jobs in zuul-base-jobs > * How to use jobs from zuul-jobs > * Configuring a project > * Actually reporting on a change or pull request > > I'll note that the original Zuul From Scratch document focused only on > GitHub and Fedora. Obviously there's a lot of options, and folks have > added sections on Gerrit and other operating systems. The most > important aspect of this document, however, is that it walk someone > through the entire process of setting up Zuul for at least *one* > configuration. If we can handle more, that's fine. Ideall, we can keep > divergent steps to a minimum. For example, we may want to describe how > to push a configuration change up once (for Github, and once for Gerrit) > and then subsequently refer to that process generally when we discuss > how to build pipelines, jobs, etc. Otherwise we'll have to write the > entire document twice. > > 2) Separately, we should make the process of bootstrapping a running > system easier. While Zuul From Scratch should describe everything > needed to build an entire Zuul system, well, "from scratch", we should > also have a system that makes it easy to deploy a working Zuul so > someone can at least start playing with it, and perhaps even continue to > run it in production. I'll call this the "quickstart", and it should > replace the current erroneously titled "quickstart" in our docs. > > Monty is working on jobs which will automatically build and upload > container images of Zuul to Dockerhub. Once those are in place, > we can write a helm chart, or Ansible playbook, or something similar to > deploy those with a very minimal configuration. 
The goal should be to > get something up and running with the minimum amount of work and also > the minimum amount of documentation. If you want to learn how a Zuul > works, read Zuul From Scratch. If you want to just start breaking it, > use the quickstart. > > I think any progress we make on these two items will drastically improve > the experience for new users. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Wed Jul 11 14:49:40 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 11 Jul 2018 07:49:40 -0700 Subject: Improving new user experience In-Reply-To: (Joshua Hesketh's message of "Wed, 11 Jul 2018 14:47:48 +1000") References: <878t6i6gxc.fsf@meyer.lemoncheese.net> Message-ID: <87h8l5zv23.fsf@meyer.lemoncheese.net> Joshua Hesketh writes: > On Wed, Jul 11, 2018 at 5:15 AM, James E. Blair wrote: > >> Hi, >> >> Recently Artem has rightly pointed out that the experience for new users >> wanting to run Zuul is not good. We have some work in progress to >> improve that, and some efforts which are stalled and could use help. >> Here's what I think we should do: >> >> 1) Continue work on the Zuul From Scratch guide. This part of the >> documentation ends right after the Zuul services start up, but without >> telling anyone how to actually get Zuul to do anything useful. It needs >> to continue to explain: >> >> * Configuring a pipeline >> * What a base job needs to do >> > > Specifically something about how to handle logs and artifacts. This, in my > opinion, is currently a poor experience for new users. I would expect to be > able to install zuul and tell it to run python jobs and be done with it if > I was just playing/evaluating. Having to set up some kind of log server (or > any other service for that matter) is something I would only care about > once I go into production or scale. > > Additionally I would add: > > * How to set up apache/reverse proxy. It's documented elsewhere but needs > to be in the from scratch guide. (This could incorporate a log server). > * How to set up database results. Yes these are great things to document. We want the base job in zuul-base-jobs to be usable out of the box (either by reference, or by coping) to put logs either on a remote server, or just in a directory on the executor, so that no additional setup is required for a new user. That's not quite working yet, but Paul is picking up some work Monty started to rework and standardize log and artifact publishing interfaces[1], which will hopefully feed into this. Any help on the items you listed (including that) would be great. [1] https://review.openstack.org/#/q/topic:zuulv3-output -Jim From corvus at inaugust.com Fri Jul 13 18:56:13 2018 From: corvus at inaugust.com (James E. Blair) Date: Fri, 13 Jul 2018 11:56:13 -0700 Subject: Building container images in nodepool Message-ID: <874lh3q81e.fsf@meyer.lemoncheese.net> Hi, I think the container spec is just about ready, but one part of it recently prompted a discussion that I think could use some wider input. Specifically, this is regarding the "container-as-machine" use-case. So think "run pep8 in a container" when you're thinking about this. There's a whole bunch of stuff in the spec about container-native workloads. This isn't it. 
The issue is whether or not Nodepool should be able to build container images, much as we do today for machines. Note that regardless, Nodepool will be able to use "upstream" container images from the local Kubernetes registry or Docker Hub. So this would be an optional feature (just as Nodepool can use extant cloud images). There are a few reasons to build local images (of any kind): * To provide a git cache. * To have more up to date packages than those provided externally. * To have a backup of a working image if something upstream changes. * To have a standardized image. * To add local requirements. To be fair, the last two might be the same thing said different ways. In an offline discussion prompted by the spec, Monty has suggested that running a git mirror in Kubernetes may obviate the need for using custom images for a git cache [related: applying this to VM images as well could reduce image sizes there; he's planning on writing up thoughts on this more fully later]. So if we take that out of the equation, the question becomes: Are the remaining reasons for building an image locally sufficient to build this functionality into Nodepool? A quick clarification: we're obviously not going to write a new image builder either way. We're talking about having Nodepool run some existing container image process on a periodic schedule and upload the result to the local registry. The main benefit to Nodepool doing this is using the same images across different registries, having a daemon performing the image builds automatically, and the ability to roll-back to previous versions. If you have thoughts on whether this is useful, or if it should be omitted in favor of relying more heavily on base images or external systems, please let me know. I'll try to collect feedback and update the spec early next week. Then we can merge it. :) -Jim From joshua.hesketh at gmail.com Sat Jul 14 05:23:21 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Sat, 14 Jul 2018 15:23:21 +1000 Subject: Building container images in nodepool In-Reply-To: <874lh3q81e.fsf@meyer.lemoncheese.net> References: <874lh3q81e.fsf@meyer.lemoncheese.net> Message-ID: Hey, I'm not against nodepool building container images as it's within scope of how it currently builds disk images. However, if you had a zuul job build and push images to a registry, would this be a more suitable pathway forward than having nodepool do what is effectively a type of job? Just something to compare to/consider anyway. Cheers, Josh On Sat, Jul 14, 2018 at 4:56 AM, James E. Blair wrote: > Hi, > > I think the container spec is just about ready, but one part of it > recently prompted a discussion that I think could use some wider input. > > Specifically, this is regarding the "container-as-machine" use-case. So > think "run pep8 in a container" when you're thinking about this. > There's a whole bunch of stuff in the spec about container-native > workloads. This isn't it. > > The issue is whether or not Nodepool should be able to build container > images, much as we do today for machines. > > Note that regardless, Nodepool will be able to use "upstream" container > images from the local Kubernetes registry or Docker Hub. So this would > be an optional feature (just as Nodepool can use extant cloud images). > > There are a few reasons to build local images (of any kind): > > * To provide a git cache. > * To have more up to date packages than those provided externally. > * To have a backup of a working image if something upstream changes. 
> * To have a standardized image. > * To add local requirements. > > To be fair, the last two might be the same thing said different ways. > > In an offline discussion prompted by the spec, Monty has suggested that > running a git mirror in Kubernetes may obviate the need for using custom > images for a git cache [related: applying this to VM images as well > could reduce image sizes there; he's planning on writing up thoughts > on this more fully later]. So if we take that out of the equation, the > question becomes: > > Are the remaining reasons for building an image locally sufficient to > build this functionality into Nodepool? > > A quick clarification: we're obviously not going to write a new image > builder either way. We're talking about having Nodepool run some > existing container image process on a periodic schedule and upload the > result to the local registry. The main benefit to Nodepool doing this > is using the same images across different registries, having a daemon > performing the image builds automatically, and the ability to roll-back > to previous versions. > > If you have thoughts on whether this is useful, or if it should be > omitted in favor of relying more heavily on base images or external > systems, please let me know. I'll try to collect feedback and update > the spec early next week. Then we can merge it. :) > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Mon Jul 16 15:08:12 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 16 Jul 2018 10:08:12 -0500 Subject: Building container images in nodepool In-Reply-To: <874lh3q81e.fsf@meyer.lemoncheese.net> References: <874lh3q81e.fsf@meyer.lemoncheese.net> Message-ID: <4b117fcb-ee69-a7f7-ed03-f01172ccfd76@inaugust.com> On 07/13/2018 01:56 PM, James E. Blair wrote: > Hi, > > I think the container spec is just about ready, but one part of it > recently prompted a discussion that I think could use some wider input. > > Specifically, this is regarding the "container-as-machine" use-case. So > think "run pep8 in a container" when you're thinking about this. > There's a whole bunch of stuff in the spec about container-native > workloads. This isn't it. > > The issue is whether or not Nodepool should be able to build container > images, much as we do today for machines. > > Note that regardless, Nodepool will be able to use "upstream" container > images from the local Kubernetes registry or Docker Hub. So this would > be an optional feature (just as Nodepool can use extant cloud images). > > There are a few reasons to build local images (of any kind): > > * To provide a git cache. > * To have more up to date packages than those provided externally. > * To have a backup of a working image if something upstream changes. > * To have a standardized image. > * To add local requirements. > > To be fair, the last two might be the same thing said different ways. > > In an offline discussion prompted by the spec, Monty has suggested that > running a git mirror in Kubernetes may obviate the need for using custom > images for a git cache [related: applying this to VM images as well > could reduce image sizes there; he's planning on writing up thoughts > on this more fully later]. 
So if we take that out of the equation, the > question becomes: > > Are the remaining reasons for building an image locally sufficient to > build this functionality into Nodepool? > > A quick clarification: we're obviously not going to write a new image > builder either way. We're talking about having Nodepool run some > existing container image process on a periodic schedule and upload the > result to the local registry. The main benefit to Nodepool doing this > is using the same images across different registries, having a daemon > performing the image builds automatically, and the ability to roll-back > to previous versions. > > If you have thoughts on whether this is useful, or if it should be > omitted in favor of relying more heavily on base images or external > systems, please let me know. I'll try to collect feedback and update > the spec early next week. Then we can merge it. :) I don't think it's particularly useful to have nodepool build container images - although I think it should be pretty easy to accomplish so I'm not personally *opposed* to us having the functionality. The reason I think it's not useful to build container images but still do think it's useful to build VM images is that one of the things dockerhub has provided to the world (I'd possibly argue this is the primary value of docker itself) is a global shared set of images that end users can select. In OpenStack, images are a vendor/operator concern. The 'ubuntu' or 'fedora' image put into a cloud doesn't have consistent or predictable content across clouds because it's literally a different image. So we HAVE to make images for the multi-cloud story to work. With docker/kubernetes, the image 'ubuntu:latest' is the same everywhere, because it comes from dockerhub, and even if people have a local registry, it's going to pull ubuntu from dockerhub. My understanding of best practices are that if someone wants a custom ubuntu image in a local registry, they'd most likely publish it under a different name - and possibly base it on the upstream 'ubuntu' image. Obviously people are free to violate that if they want in their private environments, but from a nodepool persepctive it actually is reasonable for a user to connect a nodepool to 10 different kubernetes provided by 10 different public clouds and expect 'ubuntu:latest' to be the same bits. SO - all of that to say ... I don't think it's a *terrible* idea to have a 'have nodepool build container images for me' feature, but I'm skeptical that it would provide value or even be used by anyone. I do not think we should use it in OpenStack if it existed. Also, as Josh suggests, if someone did want local images, having a zuul job that built then and then published them to a registry that the nodepool is pulling from should work fine ... and would also serve the use-case of "I want to test my images before pushing them to nodepool" that Yolanda has been wanting for years. Monty From Tobias.Henkel at bmw.de Mon Jul 16 16:27:25 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Mon, 16 Jul 2018 16:27:25 +0000 Subject: Building container images in nodepool In-Reply-To: <4b117fcb-ee69-a7f7-ed03-f01172ccfd76@inaugust.com> References: <874lh3q81e.fsf@meyer.lemoncheese.net> <4b117fcb-ee69-a7f7-ed03-f01172ccfd76@inaugust.com> Message-ID: On 16.07.18, 17:08, "Monty Taylor" wrote: On 07/13/2018 01:56 PM, James E. 
Blair wrote: > Hi, > > I think the container spec is just about ready, but one part of it > recently prompted a discussion that I think could use some wider input. > > Specifically, this is regarding the "container-as-machine" use-case. So > think "run pep8 in a container" when you're thinking about this. > There's a whole bunch of stuff in the spec about container-native > workloads. This isn't it. > > The issue is whether or not Nodepool should be able to build container > images, much as we do today for machines. > > Note that regardless, Nodepool will be able to use "upstream" container > images from the local Kubernetes registry or Docker Hub. So this would > be an optional feature (just as Nodepool can use extant cloud images). > > There are a few reasons to build local images (of any kind): > > * To provide a git cache. > * To have more up to date packages than those provided externally. > * To have a backup of a working image if something upstream changes. > * To have a standardized image. > * To add local requirements. > > To be fair, the last two might be the same thing said different ways. > > In an offline discussion prompted by the spec, Monty has suggested that > running a git mirror in Kubernetes may obviate the need for using custom > images for a git cache [related: applying this to VM images as well > could reduce image sizes there; he's planning on writing up thoughts > on this more fully later]. So if we take that out of the equation, the > question becomes: > > Are the remaining reasons for building an image locally sufficient to > build this functionality into Nodepool? > > A quick clarification: we're obviously not going to write a new image > builder either way. We're talking about having Nodepool run some > existing container image process on a periodic schedule and upload the > result to the local registry. The main benefit to Nodepool doing this > is using the same images across different registries, having a daemon > performing the image builds automatically, and the ability to roll-back > to previous versions. > > If you have thoughts on whether this is useful, or if it should be > omitted in favor of relying more heavily on base images or external > systems, please let me know. I'll try to collect feedback and update > the spec early next week. Then we can merge it. :) I don't think it's particularly useful to have nodepool build container images - although I think it should be pretty easy to accomplish so I'm not personally *opposed* to us having the functionality. The reason I think it's not useful to build container images but still do think it's useful to build VM images is that one of the things dockerhub has provided to the world (I'd possibly argue this is the primary value of docker itself) is a global shared set of images that end users can select. In OpenStack, images are a vendor/operator concern. The 'ubuntu' or 'fedora' image put into a cloud doesn't have consistent or predictable content across clouds because it's literally a different image. So we HAVE to make images for the multi-cloud story to work. With docker/kubernetes, the image 'ubuntu:latest' is the same everywhere, because it comes from dockerhub, and even if people have a local registry, it's going to pull ubuntu from dockerhub. My understanding of best practices are that if someone wants a custom ubuntu image in a local registry, they'd most likely publish it under a different name - and possibly base it on the upstream 'ubuntu' image. 
Obviously people are free to violate that if they want in their private environments, but from a nodepool persepctive it actually is reasonable for a user to connect a nodepool to 10 different kubernetes provided by 10 different public clouds and expect 'ubuntu:latest' to be the same bits. SO - all of that to say ... I don't think it's a *terrible* idea to have a 'have nodepool build container images for me' feature, but I'm skeptical that it would provide value or even be used by anyone. I do not think we should use it in OpenStack if it existed. Also, as Josh suggests, if someone did want local images, having a zuul job that built then and then published them to a registry that the nodepool is pulling from should work fine ... and would also serve the use-case of "I want to test my images before pushing them to nodepool" that Yolanda has been wanting for years. Monty I agree with Monty. We use containers on top of images built by nodepool since a long time and I've always looked at them as artifacts produced by the ci system. We try to treat the current nodepool like infrastructure (can I call that some kind of Iaas?) that provides some generic image to the that should run their workload within a container on the node supplied by nodepool. One of the major advantages is that the container images directly fall out of the ci process of the project while changing images that are built by nodepool is much more coupled to operations as this is clearly something that should be at least reviewed by the operations team. Further by treating the container images as a CI artifact the project has much more control about the timing when to update the image and when not instead of the decoupled regular image rebuilds currently used in nodepool. So with adding container support to nodepool I see this use case simplified like just ripping out the VM layer while keeping the focus on the project supplied container images which has been very useful to us in the past. It might not be really correct but I see that use case like moving from IaaS to some kind of PaaS which simplifies working with project supplied container images. So I personally don't think we need this but I'm also not opposed to this if someone has a different view and needs container image build functionality in nodepool. Tobias From pabelanger at redhat.com Mon Jul 16 19:44:35 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 16 Jul 2018 15:44:35 -0400 Subject: Building container images in nodepool In-Reply-To: <874lh3q81e.fsf@meyer.lemoncheese.net> References: <874lh3q81e.fsf@meyer.lemoncheese.net> Message-ID: <20180716194435.GA24135@localhost.localdomain> On Fri, Jul 13, 2018 at 11:56:13AM -0700, James E. Blair wrote: > Hi, > > I think the container spec is just about ready, but one part of it > recently prompted a discussion that I think could use some wider input. > > Specifically, this is regarding the "container-as-machine" use-case. So > think "run pep8 in a container" when you're thinking about this. > There's a whole bunch of stuff in the spec about container-native > workloads. This isn't it. > > The issue is whether or not Nodepool should be able to build container > images, much as we do today for machines. > > Note that regardless, Nodepool will be able to use "upstream" container > images from the local Kubernetes registry or Docker Hub. So this would > be an optional feature (just as Nodepool can use extant cloud images). 
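To make the alternative that Monty and Tobias describe above concrete -- building the image in an ordinary Zuul job and publishing it to the registry that Nodepool pulls from -- a minimal sketch could look like the following. All of the job, secret, playbook, and pipeline names here are hypothetical; this only illustrates the workflow being discussed, not an existing role:

```yaml
# Trusted config project: credentials for the registry stay in a secret.
- secret:
    name: registry-credentials
    data:
      username: image-builder
      # In a real config project this value would be a Zuul-encrypted blob.
      password: REPLACE_WITH_ENCRYPTED_VALUE

# A job that builds a customized image and pushes it under its own name,
# e.g. example.org/ci-base:latest, rather than overriding 'ubuntu:latest'.
- job:
    name: build-ci-container-image
    description: Build and publish the container image used by CI jobs.
    run: playbooks/build-ci-container-image.yaml
    secrets:
      - registry-credentials

# Assuming the operator has defined a 'periodic' pipeline, the image can be
# rebuilt on a schedule, much like Nodepool's periodic diskimage builds.
- project:
    periodic:
      jobs:
        - build-ci-container-image
```

The playbook behind such a job can use whatever image build tooling the operator prefers; Nodepool (or the jobs themselves) then only ever consume the published image, and a change to the build playbook gets the usual speculative testing before it merges.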
> > There are a few reasons to build local images (of any kind): > > * To provide a git cache. > * To have more up to date packages than those provided externally. > * To have a backup of a working image if something upstream changes. > * To have a standardized image. > * To add local requirements. > > To be fair, the last two might be the same thing said different ways. > > In an offline discussion prompted by the spec, Monty has suggested that > running a git mirror in Kubernetes may obviate the need for using custom > images for a git cache [related: applying this to VM images as well > could reduce image sizes there; he's planning on writing up thoughts > on this more fully later]. So if we take that out of the equation, the > question becomes: > > Are the remaining reasons for building an image locally sufficient to > build this functionality into Nodepool? > > A quick clarification: we're obviously not going to write a new image > builder either way. We're talking about having Nodepool run some > existing container image process on a periodic schedule and upload the > result to the local registry. The main benefit to Nodepool doing this > is using the same images across different registries, having a daemon > performing the image builds automatically, and the ability to roll-back > to previous versions. > > If you have thoughts on whether this is useful, or if it should be > omitted in favor of relying more heavily on base images or external > systems, please let me know. I'll try to collect feedback and update > the spec early next week. Then we can merge it. :) > I'd be in favor of having nodepool also build containers, along side VMs. However, if nodepool also learn how to use containers from a registry, that is nice too. The main thing I like, is single place to see VMs and container image builds. - Paul From pabelanger at redhat.com Mon Jul 16 19:53:08 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 16 Jul 2018 15:53:08 -0400 Subject: Building container images in nodepool In-Reply-To: References: <874lh3q81e.fsf@meyer.lemoncheese.net> Message-ID: <20180716195308.GB24135@localhost.localdomain> On Sat, Jul 14, 2018 at 03:23:21PM +1000, Joshua Hesketh wrote: > Hey, > > I'm not against nodepool building container images as it's within scope of > how it currently builds disk images. > > However, if you had a zuul job build and push images to a registry, would > this be a more suitable pathway forward than having nodepool do what is > effectively a type of job? Just something to compare to/consider anyway. > Something I've heard a while back, in the context of openstack-infra, was giving jobs access to control plane to upload an image (in this case it was an image for VM, but think same would apply for private registry). While zuul jobs are now able to do it with secrets, to we want to recommend that for people to do? I've long had the want to either have nodepool or zuul job in this case, upload DIBs into our control plane so servers like mirror, gerrit, etc and boot servers from them. But in the past there was some concern about how to secure it. I think it would be interesting to drive image builds from a zuul job, but also intereted how we'd solve the chicken / egg issue with that first job if no image exists :) > Cheers, > Josh > From corvus at inaugust.com Mon Jul 16 22:27:53 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Mon, 16 Jul 2018 15:27:53 -0700 Subject: Potential enhancements to logging Message-ID: <87fu0in7di.fsf@meyer.lemoncheese.net> Hi, Some recent (and not-so-recent) issues with log hosting in OpenStack Infra have caused a renewed interest in reworking our log storing infrastructure. Some of the motivation and much of the initial work is OpenStack-specific, so I have written a post to the openstack-infra list describing that. Technically, none of what is in there affects the wider Zuul community, but folks reading this may want to read that message for background (or, if you follow our patterns closely, it may affect you as well). That message is here: http://lists.openstack.org/pipermail/openstack-infra/2018-July/006020.html Also, like that message, this one is the result of several conversations, especially with Monty; many of these ideas are his. Essentially, that message describes a process for us to begin hosting our logs in swift instead of on a file server. I'll note here that Zuul is (and will continue to be, at least for the foreseeable future) agnostic about where logs are hosted. Log storage is implemented as roles in base jobs, so any site can store logs where and how they wish. We can help each other by sharing the roles which perform these tasks. OpenStack moving its log storage to swift doesn't mean anyone else needs to -- storing logs on a sever (or even the executor) will still be supported. As would storing in S3, or any other system for which someone is motivated to write a role. That message also describes decommissioning our log-hosting server which happens to provide a number of useful services today. Among the these are dynamic generation of directory indexes, and HTMLification of text logs. Our plan is to pre-generate that data, however, that's not ideal, as it expands and duplicates the amount of data stored, and it also creates more work for the executors. We can expand the role of the Zuul dashboard to take on some of those duties, thereby reducing the storage requirements, and offloading the computation to the browser of the user requesting it. In the process, we can make a better experience for users navigating log files, all without compromising Zuul's independence from backend storage mechanisms. All of what follows should apply equally to file or swift-based storage. Let's start by creating a per-build page in the dashboard. Currently, we show a list of builds, and link to the log locations for those builds. In the future, we should link to a new page with details about the build. Not only so that we can show more info (which we already collect) than can be shown on one line, but also so that we have a place for the additional functionality which follows. If we have the uploader also create and upload a file manifest (say, zuul-manifest.json), then the build page can fetch[1] this file from the log storage and display an index of artifacts for the build. This can allow for access to artifacts directly within the Zuul web user interface, without the discontinuity of hitting an artifact server for the index. Since swift (and other object storage systems) may not be capable of dynamically creating indexes, this allows the functionality to work in all cases and removes the need to pre-generate index pages when using swift. We can also extend Zuul to store additional artifact information about a build. For example, in our docs build jobs, we override the success-url to link to the generated doc content. 
If we return the build location as additional information about the build, we can highlight that in a prominent place on the build page. In the future, we may want to leave the success-url alone so that users visit the build page and then follow a link to the preview. It would be an extra click compared to what we do today, but it may be a better experience in the long run. Either way would still be an option, of course, according to operator preference. Another important feature we currently have in OpenStack is a piece of middleware we call "os-loganalyze" (or OSLA) which HTMLifies text logs. It dynamically parses log files and creates anchors for each line (for easy hyperlink references) and also highlights and allows filtering by severity. We could implement this in javascript, so that when a user browses from the build page to a text file, the javascript component could fetch[1] the file and perform the HTMLification as OSLA does today. If it can't parse the file, it will just pass it through unchanged. And in the virtual file listing on the build page, we can add a "raw" link for direct download of the file without going through the HTMLification. This would eliminate the need for us to pre-render content when we upload to swift, and by implementing this in a generic manner in javascript in Zuul, more non-OpenStack Zuul users would benefit from this functionality. Any unknown files, or existing .html files, should just be direct links which leave the Zuul dashboard interface (we don't want to attempt to proxy or embed everything). Finally, even today Zuul creates a job-output.json, with the idea that we would eventually use it as a substitute for job-output.txt when reviewing logs. The build page would be a natural launching point for a page which read this json file and created a more interactive and detailed version of the streaming job output. In summary, by moving some of the OpenStack-specific log processing into Zuul-dashboard javascript, we can make it more generic and widely applicable, and at the same time provide better support for multiple log storage systems. -Jim [1] Swift does support specifying CORS headers. Users would need to do so to support this, and users of static file servers would need to do something similar. From corvus at inaugust.com Mon Jul 16 22:49:21 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 16 Jul 2018 15:49:21 -0700 Subject: Zuul project update 2018-07-16 Message-ID: <87h8kylrta.fsf@meyer.lemoncheese.net> Hi Last week Tristan fixed issues with direct enqueueing of changes with ambiguous project names, Improved timer trigger to skip un-needed projects, and improving Github Event filter debugging. Tobias fixed a potentially serious bug with contamination of configuration. He's working on improving job freeze performance, and parent jobs which pause and continue to run while their children run (for resources and artifact exchange during a buildset). This week I plan on finishing the container spec. I've also written up some ideas for log storage which I may do some work on. We're waiting on a resolution to an issue with multi-tenancy in the new Javascript code. Once that's resolved, or we revert the new JS code, we should be in position to make a new release. 
Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email -Jim From tdecacqu at redhat.com Mon Jul 16 23:16:29 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Mon, 16 Jul 2018 23:16:29 +0000 Subject: Building container images in nodepool In-Reply-To: References: <874lh3q81e.fsf@meyer.lemoncheese.net> <4b117fcb-ee69-a7f7-ed03-f01172ccfd76@inaugust.com> Message-ID: <1531782302.twvt7vh977.tristanC@fedora> On July 16, 2018 4:27 pm, Tobias.Henkel at bmw.de wrote: > > > On 16.07.18, 17:08, "Monty Taylor" wrote: > > On 07/13/2018 01:56 PM, James E. Blair wrote: > > Hi, > > > > I think the container spec is just about ready, but one part of it > > recently prompted a discussion that I think could use some wider input. > > > > Specifically, this is regarding the "container-as-machine" use-case. So > > think "run pep8 in a container" when you're thinking about this. > > There's a whole bunch of stuff in the spec about container-native > > workloads. This isn't it. > > > > The issue is whether or not Nodepool should be able to build container > > images, much as we do today for machines. > > > > Note that regardless, Nodepool will be able to use "upstream" container > > images from the local Kubernetes registry or Docker Hub. So this would > > be an optional feature (just as Nodepool can use extant cloud images). > > > > There are a few reasons to build local images (of any kind): > > > > * To provide a git cache. > > * To have more up to date packages than those provided externally. > > * To have a backup of a working image if something upstream changes. > > * To have a standardized image. > > * To add local requirements. > > > > To be fair, the last two might be the same thing said different ways. > > > > In an offline discussion prompted by the spec, Monty has suggested that > > running a git mirror in Kubernetes may obviate the need for using custom > > images for a git cache [related: applying this to VM images as well > > could reduce image sizes there; he's planning on writing up thoughts > > on this more fully later]. So if we take that out of the equation, the > > question becomes: > > > > Are the remaining reasons for building an image locally sufficient to > > build this functionality into Nodepool? > > > > A quick clarification: we're obviously not going to write a new image > > builder either way. We're talking about having Nodepool run some > > existing container image process on a periodic schedule and upload the > > result to the local registry. The main benefit to Nodepool doing this > > is using the same images across different registries, having a daemon > > performing the image builds automatically, and the ability to roll-back > > to previous versions. > > > > If you have thoughts on whether this is useful, or if it should be > > omitted in favor of relying more heavily on base images or external > > systems, please let me know. I'll try to collect feedback and update > > the spec early next week. Then we can merge it. :) > > I don't think it's particularly useful to have nodepool build container > images - although I think it should be pretty easy to accomplish so I'm > not personally *opposed* to us having the functionality. 
> > The reason I think it's not useful to build container images but still > do think it's useful to build VM images is that one of the things > dockerhub has provided to the world (I'd possibly argue this is the > primary value of docker itself) is a global shared set of images that > end users can select. > > In OpenStack, images are a vendor/operator concern. The 'ubuntu' or > 'fedora' image put into a cloud doesn't have consistent or predictable > content across clouds because it's literally a different image. So we > HAVE to make images for the multi-cloud story to work. > > With docker/kubernetes, the image 'ubuntu:latest' is the same > everywhere, because it comes from dockerhub, and even if people have a > local registry, it's going to pull ubuntu from dockerhub. My > understanding of best practices are that if someone wants a custom > ubuntu image in a local registry, they'd most likely publish it under a > different name - and possibly base it on the upstream 'ubuntu' image. > > Obviously people are free to violate that if they want in their private > environments, but from a nodepool persepctive it actually is reasonable > for a user to connect a nodepool to 10 different kubernetes provided by > 10 different public clouds and expect 'ubuntu:latest' to be the same bits. > > SO - all of that to say ... > > I don't think it's a *terrible* idea to have a 'have nodepool build > container images for me' feature, but I'm skeptical that it would > provide value or even be used by anyone. I do not think we should use it > in OpenStack if it existed. > > Also, as Josh suggests, if someone did want local images, having a zuul > job that built then and then published them to a registry that the > nodepool is pulling from should work fine ... and would also serve the > use-case of "I want to test my images before pushing them to nodepool" > that Yolanda has been wanting for years. > > Monty > > > I agree with Monty. We use containers on top of images built by nodepool since > a long time and I've always looked at them as artifacts produced by the ci system. > We try to treat the current nodepool like infrastructure (can I call that some kind of Iaas?) > that provides some generic image to the that should run their workload within a > container on the node supplied by nodepool. One of the major advantages is that > the container images directly fall out of the ci process of the project while changing > images that are built by nodepool is much more coupled to operations as this is clearly > something that should be at least reviewed by the operations team. Further by > treating the container images as a CI artifact the project has much more control > about the timing when to update the image and when not instead of the decoupled > regular image rebuilds currently used in nodepool. > > So with adding container support to nodepool I see this use case simplified like just ripping > out the VM layer while keeping the focus on the project supplied container images > which has been very useful to us in the past. > > It might not be really correct but I see that use case like moving from IaaS > to some kind of PaaS which simplifies working with project supplied container images. > > So I personally don't think we need this but I'm also not opposed to this if > someone has a different view and needs container image build functionality > in nodepool. 
> > Tobias > I agree with Monty and Tobias. While we could add an image building and uploading interface to the Nodepool driver API, for example to be able to use virt-customize or upload a diskfile to a libvirt host, that seems like a lot of work. Using Zuul jobs to manage image building as Josh suggests sounds like a better implementation. I'm actually looking forward to building snapshot-based images that way. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From joshua.hesketh at gmail.com Tue Jul 17 05:40:26 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Tue, 17 Jul 2018 15:40:26 +1000 Subject: Potential enhancements to logging In-Reply-To: <87fu0in7di.fsf@meyer.lemoncheese.net> References: <87fu0in7di.fsf@meyer.lemoncheese.net> Message-ID: On Tue, Jul 17, 2018 at 8:27 AM, James E. Blair wrote: > Hi, > > Some recent (and not-so-recent) issues with log hosting in OpenStack > Infra have caused a renewed interest in reworking our log storing > infrastructure. Some of the motivation and much of the initial work is > OpenStack-specific, so I have written a post to the openstack-infra list > describing that. Technically, none of what is in there affects the > wider Zuul community, but folks reading this may want to read that > message for background (or, if you follow our patterns closely, it may > affect you as well). > > That message is here: > > http://lists.openstack.org/pipermail/openstack-infra/ > 2018-July/006020.html > > Also, like that message, this one is the result of several > conversations, especially with Monty; many of these ideas are his. > > Essentially, that message describes a process for us to begin hosting > our logs in swift instead of on a file server. I'll note here that Zuul > is (and will continue to be, at least for the foreseeable future) > agnostic about where logs are hosted. Log storage is implemented as > roles in base jobs, so any site can store logs where and how they wish. > We can help each other by sharing the roles which perform these tasks. > OpenStack moving its log storage to swift doesn't mean anyone else needs > to -- storing logs on a sever (or even the executor) will still be > supported. As would storing in S3, or any other system for which > someone is motivated to write a role. > > That message also describes decommissioning our log-hosting server which > happens to provide a number of useful services today. Among the these > are dynamic generation of directory indexes, and HTMLification of text > logs. Our plan is to pre-generate that data, however, that's not ideal, > as it expands and duplicates the amount of data stored, and it also > creates more work for the executors. > > We can expand the role of the Zuul dashboard to take on some of those > duties, thereby reducing the storage requirements, and offloading the > computation to the browser of the user requesting it. In the process, > we can make a better experience for users navigating log files, all > without compromising Zuul's independence from backend storage > mechanisms. > > All of what follows should apply equally to file or swift-based storage. > > Let's start by creating a per-build page in the dashboard. Currently, > we show a list of builds, and link to the log locations for those > builds. In the future, we should link to a new page with details about > the build.
Not only so that we can show more info (which we already > collect) than can be shown on one line, but also so that we have a place > for the additional functionality which follows. > This adds weight to my argument that we should be storing results in a database as a first class citizen in zuul rather than via a reporter. This will allow us to build up a comprehensive view of both the state of zuul and the history. Perhaps I should start a spec for this? > > If we have the uploader also create and upload a file manifest (say, > zuul-manifest.json), then the build page can fetch[1] this file from the > log storage and display an index of artifacts for the build. This can > allow for access to artifacts directly within the Zuul web user > interface, without the discontinuity of hitting an artifact server for > the index. Since swift (and other object storage systems) may not be > capable of dynamically creating indexes, this allows the functionality > to work in all cases and removes the need to pre-generate index pages > when using swift. > Why fetch this from a log store rather than having it a part of the zuul_return? This way the zuul_return becomes the canonical way to let zuul know of anything. And after it is stored in a database and displayed via the dashboard, zuul contains all the information it needs to know and can link out to arbitrary storage systems from there. > > We can also extend Zuul to store additional artifact information about a > build. For example, in our docs build jobs, we override the success-url > to link to the generated doc content. If we return the build location > as additional information about the build, we can highlight that in a > prominent place on the build page. In the future, we may want to leave > the success-url alone so that users visit the build page and then follow > a link to the preview. It would be an extra click compared to what we > do today, but it may be a better experience in the long run. Either way > would still be an option, of course, according to operator preference. > I like this approach. In fact, I'd even argue to deprecate success+failure url's and hold any extra link logic in the build results. > > Another important feature we currently have in OpenStack is a piece of > middleware we call "os-loganalyze" (or OSLA) which HTMLifies text logs. > It dynamically parses log files and creates anchors for each line (for > easy hyperlink references) and also highlights and allows filtering by > severity. > > We could implement this in javascript, so that when a user browses from > the build page to a text file, the javascript component could fetch[1] > the file and perform the HTMLification as OSLA does today. If it can't > parse the file, it will just pass it through unchanged. And in the > virtual file listing on the build page, we can add a "raw" link for > direct download of the file without going through the HTMLification. > This would eliminate the need for us to pre-render content when we > upload to swift, and by implementing this in a generic manner in > javascript in Zuul, more non-OpenStack Zuul users would benefit from > this functionality. > Right, this would also allow us to embed/browse logs inside the build results page (eg as an iframe). Totally an implementation detail but this is what I did with turbo-hipster and with large files it would sometimes be quite slow (just something we'd have to be conscious on how we handle). 
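To ground the zuul_return side of this exchange: the module already exists as the way a job hands data back to Zuul, but the artifact-related keys shown below are purely illustrative, since that schema is exactly what is being discussed here. A sketch of a post-run play returning a couple of high-level pointers might look like this (the variable names are hypothetical):

```yaml
- hosts: localhost
  tasks:
    # Hand a small set of named, high-level artifact pointers back to Zuul.
    # The 'artifacts' structure is a sketch of the idea under discussion,
    # not a schema Zuul defines today.
    - name: Return build artifact locations to Zuul
      zuul_return:
        data:
          zuul:
            artifacts:
              - name: docs preview
                url: "{{ docs_preview_url }}"
              - name: source tarball
                url: "{{ tarball_url }}"
                metadata:
                  type: sdist
```

Either way the size concern is what separates the two ideas: a handful of named pointers like these are cheap to return inline with the build record, while an exhaustive per-file listing is better suited to a manifest uploaded alongside the logs.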
> > Any unknown files, or existing .html files, should just be direct links > which leave the Zuul dashboard interface (we don't want to attempt to > proxy or embed everything). > > Finally, even today Zuul creates a job-output.json, with the idea that > we would eventually use it as a substitute for job-output.txt when > reviewing logs. The build page would be a natural launching point for a > page which read this json file and created a more interactive and > detailed version of the streaming job output. > This does require the operator to be capturing that log through a base job though. I'd like to see a way where we don't require any particular log capturing. The dashboard gets us very close, and really just leaves the job-output as the only part that we may want every run. For now I'm fine with this approach though. Cheers, Josh > > In summary, by moving some of the OpenStack-specific log processing into > Zuul-dashboard javascript, we can make it more generic and widely > applicable, and at the same time provide better support for multiple log > storage systems. > > -Jim > > [1] Swift does support specifying CORS headers. Users would need to do > so to support this, and users of static file servers would need to do > something similar. > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Tue Jul 17 08:25:09 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 17 Jul 2018 08:25:09 +0000 Subject: Potential enhancements to logging In-Reply-To: References: <87fu0in7di.fsf@meyer.lemoncheese.net> Message-ID: <1531815322.0u7npik8k5.tristanC@fedora> On July 17, 2018 5:40 am, Joshua Hesketh wrote: > On Tue, Jul 17, 2018 at 8:27 AM, James E. Blair wrote: [...] >> We can also extend Zuul to store additional artifact information about a >> build. For example, in our docs build jobs, we override the success-url >> to link to the generated doc content. If we return the build location >> as additional information about the build, we can highlight that in a >> prominent place on the build page. In the future, we may want to leave >> the success-url alone so that users visit the build page and then follow >> a link to the preview. It would be an extra click compared to what we >> do today, but it may be a better experience in the long run. Either way >> would still be an option, of course, according to operator preference. >> > > I like this approach. In fact, I'd even argue to deprecate success+failure > url's and hold any extra link logic in the build results. > We could use a new build field to store the raw log_url without the success+failure modifications: https://review.openstack.org/583066 Adding a build manifest sounds very good, I currently rely on a the zuul-info/inventory.yaml file to extra builds' informations from the logs. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tdecacqu at redhat.com Tue Jul 17 08:27:33 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 17 Jul 2018 08:27:33 +0000 Subject: Zuul project update 2018-07-16 In-Reply-To: <87h8kylrta.fsf@meyer.lemoncheese.net> References: <87h8kylrta.fsf@meyer.lemoncheese.net> Message-ID: <1531816041.hjcnp29c72.tristanC@fedora> On July 16, 2018 10:49 pm, James E. Blair wrote: > Hi > > Last week Tristan fixed issues with direct enqueueing of changes with > ambiguous project names, Improved timer trigger to skip un-needed > projects, and improving Github Event filter debugging. > > Tobias fixed a potentially serious bug with contamination of > configuration. He's working on improving job freeze performance, and > parent jobs which pause and continue to run while their children run > (for resources and artifact exchange during a buildset). > > This week I plan on finishing the container spec. I've also written up > some ideas for log storage which I may do some work on. > > We're waiting on a resolution to an issue with multi-tenancy in the new > Javascript code. Once that's resolved, or we revert the new JS code, we > should be in position to make a new release. > There is now a fix here: https://review.openstack.org/579418 -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Tue Jul 17 16:02:51 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 17 Jul 2018 09:02:51 -0700 Subject: Potential enhancements to logging In-Reply-To: (Joshua Hesketh's message of "Tue, 17 Jul 2018 15:40:26 +1000") References: <87fu0in7di.fsf@meyer.lemoncheese.net> Message-ID: <87pnzlj1ec.fsf@meyer.lemoncheese.net> Joshua Hesketh writes: >> Let's start by creating a per-build page in the dashboard. Currently, >> we show a list of builds, and link to the log locations for those >> builds. In the future, we should link to a new page with details about >> the build. Not only so that we can show more info (which we already >> collect) than can be shown on one line, but also so that we have a place >> for the additional functionality which follows. > > This adds weight to my argument that we should be storing results in a > database as a first class citizen in zuul rather than via a reporter. This > will allow us to build up a comprehensive view of both the state of zuul > and the history. Perhaps I should start a spec for this? I agree. So much requires the sql reporter now I think we should start thinking of it as non-optional. The only hesitation I have at the moment is that we *may* want to consider storing the build history in Zookeeper. I have my doubts about that though. Er, to wrap this up: I agree that we should be considering requiring a build metadata store. I don't want to rush into it yet without considering our other future architecture requirements. But it's not too early to start thinking about it. >> If we have the uploader also create and upload a file manifest (say, >> zuul-manifest.json), then the build page can fetch[1] this file from the >> log storage and display an index of artifacts for the build. This can >> allow for access to artifacts directly within the Zuul web user >> interface, without the discontinuity of hitting an artifact server for >> the index. 
Since swift (and other object storage systems) may not be >> capable of dynamically creating indexes, this allows the functionality >> to work in all cases and removes the need to pre-generate index pages >> when using swift. > > Why fetch this from a log store rather than having it a part of the > zuul_return? The file manifest could be very, very large. As Clark pointed out in another thread, we may be uploading tens of thousands of files. > This way the zuul_return becomes the canonical way to let zuul know of > anything. And after it is stored in a database and displayed via the > dashboard, zuul contains all the information it needs to know and can link > out to arbitrary storage systems from there. I *do* think that we should use zuul_return to get things like the doc build preview URL (see next paragraph) -- high-level build artifacts that should be displayed prominently on the build page. Just not every file we upload. >> We can also extend Zuul to store additional artifact information about a >> build. For example, in our docs build jobs, we override the success-url >> to link to the generated doc content. If we return the build location >> as additional information about the build, we can highlight that in a >> prominent place on the build page. In the future, we may want to leave >> the success-url alone so that users visit the build page and then follow >> a link to the preview. It would be an extra click compared to what we >> do today, but it may be a better experience in the long run. Either way >> would still be an option, of course, according to operator preference. >> > > I like this approach. In fact, I'd even argue to deprecate success+failure > url's and hold any extra link logic in the build results. Yeah, it's worth considering. I probably lean in that direction as well, but I'd like to have a period where we keep both to see how it gets used. -Jim From pabelanger at redhat.com Tue Jul 17 20:50:33 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 17 Jul 2018 16:50:33 -0400 Subject: Rethinking the design of base jobs Message-ID: <20180717205033.GA10808@localhost.localdomain> Greetings, I'd like to open a discussion around the idea of changing how base jobs work; today, base jobs are stored in a trusted config project. Mostly, this is done because certain aspects of a base job need a secret, for example uploading logs to logs.example.org. We also have other tasks in a base job, let's say configure-mirrors, which pre-populates the mirror info on a node from nodepool. As a recap, the main thing to remember about a config project, or trusted job, is that we cannot leverage the speculative testing around jobs. We first need to merge a patch, then hope the job continues to work. In openstack-infra, we've created the idea of base / base-test jobs and some instructions[1] on how to interact with them. Now back to the configure-mirrors role above: I believe there isn't really a reason for the role to live in a trusted job, since nothing it does will affect zuul security (think secrets) or interact with the zuul-executor (delegating to localhost). And given that changes to the role need to do the base / base-test dance outlined above, I'd like to make the following suggestion. We keep the idea of a base job, but we split it into base (trusted) and base (untrusted). Given that today zuul cannot support 2 jobs with the same name, I think it would look more like:
foo-zuul-jobs (untrusted project)
  - name: base
    parent: base-minimal

config-project (trusted)
  - name: base-minimal
    parent: null

This means all jobs now parent to an untrusted project job, with the main benefit that speculative changes to base are more easily tested. And things that really need to be locked down (e.g. secrets) still live in the trusted base-minimal job (NOTE: open to renames). In the case of openstack-infra[2], I actually think a lot of the roles in pre.yaml can be moved into untrusted jobs, if we followed this idea. The post-run tasks almost all require a trusted context. Mind you, we don't really need to make this change in zuul, but I'm more thinking we should use it as the default recommendation for new users. I'm looking to start implementing the change in rdoproject.org but wanted to get the idea out before I proposed changes. [1] http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/jobs.yaml#n7 [2] http://git.openstack.org/cgit/openstack-infra/project-config/tree/playbooks/base/pre.yaml From corvus at inaugust.com Tue Jul 17 21:08:35 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 17 Jul 2018 14:08:35 -0700 Subject: Rethinking the design of base jobs In-Reply-To: <20180717205033.GA10808@localhost.localdomain> (Paul Belanger's message of "Tue, 17 Jul 2018 16:50:33 -0400") References: <20180717205033.GA10808@localhost.localdomain> Message-ID: <87bmb5h8oc.fsf@meyer.lemoncheese.net> Paul Belanger writes: > Greetings, > > I'd like to open a discussion around the idea of changing how base jobs work, > today base jobs are stored in a trusted config project. Mostly, this is done > because certain aspects of a base jobs need a secret, for example uploading logs > to logs.exexmple.org. We also have other tasks in a base job, lets say > configure-mirrors, which pre-populates the mirror info on a node from nodepool. > > As a recap, the main thing to remember about a config project, or trusted job, > is we cannot leverage the speculative testing around jobs. We first need to > merge a patch, then hope the job continues to work. > > In openstack-infra, we've create an idea of base / base-test jobs and some > instructions[1] on how to interact with them. > > Now back to configure-mirrors role above, I believe there isn't really a reason > for the role to live in a trusted job, nothing it does will affect zuul security > (think secrets) or interact with zuul-executor (delagate_localhost). And given > that changes to the role need to do the base / base-test dance outlined above, > I'd like to make the following suggestion. > > We keep the idea of base job, but we split it into base (trusted) and base > (untrusted). Given that today zuul cannot support 2 jobs with the same name, I > think it would look more like. > > foo-zuul-jobs (untrusted project) > - name: base > parent: base-minimal > > config-project (trusted) > - name: base-minimal > parent: null > > This means all jobs, now parent to an untrusted project job, with the main > benefit that speculative to base are more easily tested. And things that really > need to be locked down (eg secrets), still live in the trusted base-minimal job > (NOTE: open to renames). > > In the case of openstack-infra[2], I actually think a lot of the roles in > pre.yaml can be moved into untrusted jobs, if we followed this idea. The > post-run tasks, are almost all trusted context. > > Mind you, we don't really need to make this change in zuul, but I'm more > thinking we should use it as the default recommendation for new users.
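For readers trying to picture the split proposed in the quoted message, a slightly fuller sketch in Zuul's actual job syntax follows. The playbook paths and secret name are hypothetical; the point is only which pieces stay in the trusted repository:

```yaml
# config-project (trusted): only what genuinely needs secrets or other
# executor privileges, e.g. uploading logs somewhere.
- job:
    name: base-minimal
    parent: null
    post-run: playbooks/base-minimal/post.yaml
    secrets:
      - site-log-upload

# foo-zuul-jobs (untrusted): everything that can safely be tested
# speculatively, e.g. a configure-mirrors style pre playbook.
- job:
    name: base
    parent: base-minimal
    pre-run: playbooks/base/pre.yaml
```

With this layout, a change to playbooks/base/pre.yaml is exercised speculatively like any other job change, while the base / base-test dance is only needed for the much smaller trusted surface in base-minimal.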
I'm > looking to start implementing the change in rdoproject.org but wanted to get the > idea out before I proposed changes. I think that's a fine thing to experiment with. The documentation describing how to set up and use base jobs in zuul-from-scratch hasn't been written yet (see a recent mail from me to this list on the subject). I'm not certain whether we should go as far as recommending this approach until someone has tried it and can report their experiences, but we could certainly at least mention it and highlight its advantages and the pitfalls of putting too much action into the trusted context. There's an opportunity to editorialize should you wish to write that documentation. The closest we come to a recommendation would be the zuul-base-jobs repo, which does, in fact, have very little in the base job playbooks: https://git.zuul-ci.org/cgit/zuul-base-jobs/tree/playbooks/base -Jim From joshua.hesketh at gmail.com Wed Jul 18 03:26:13 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Wed, 18 Jul 2018 13:26:13 +1000 Subject: Potential enhancements to logging In-Reply-To: <87pnzlj1ec.fsf@meyer.lemoncheese.net> References: <87fu0in7di.fsf@meyer.lemoncheese.net> <87pnzlj1ec.fsf@meyer.lemoncheese.net> Message-ID: On Wed, Jul 18, 2018 at 2:02 AM, James E. Blair wrote: > Joshua Hesketh writes: > > >> Let's start by creating a per-build page in the dashboard. Currently, > >> we show a list of builds, and link to the log locations for those > >> builds. In the future, we should link to a new page with details about > >> the build. Not only so that we can show more info (which we already > >> collect) than can be shown on one line, but also so that we have a place > >> for the additional functionality which follows. > > > > This adds weight to my argument that we should be storing results in a > > database as a first class citizen in zuul rather than via a reporter. > This > > will allow us to build up a comprehensive view of both the state of zuul > > and the history. Perhaps I should start a spec for this? > > I agree. So much requires the sql reporter now I think we should start > thinking of it as non-optional. The only hesitation I have at the > moment is that we *may* want to consider storing the build history in > Zookeeper. I have my doubts about that though. > > Er, to wrap this up: I agree that we should be considering requiring a > build metadata store. I don't want to rush into it yet without > considering our other future architecture requirements. But it's not > too early to start thinking about it. > Yep, that's why I said database ;-) > > >> If we have the uploader also create and upload a file manifest (say, > >> zuul-manifest.json), then the build page can fetch[1] this file from the > >> log storage and display an index of artifacts for the build. This can > >> allow for access to artifacts directly within the Zuul web user > >> interface, without the discontinuity of hitting an artifact server for > >> the index. Since swift (and other object storage systems) may not be > >> capable of dynamically creating indexes, this allows the functionality > >> to work in all cases and removes the need to pre-generate index pages > >> when using swift. > > > > Why fetch this from a log store rather than having it a part of the > > zuul_return? > > The file manifest could be very, very large. As Clark pointed out in > another thread, we may be uploading tens of thousands of files. 
> Right, but why not just require a list of asset/artefact URL's and metadata (instead of a complete list)? For example: { 'manifest.json': { 'url': 'https://logs.example.org/build/uuid/manifest.json' }, 'console.log': { 'url': 'https://logs.example.org/build/uuid/console.log' }, 'ARA': { 'url': 'https://logs.example.org/build/uuid/ara/', 'pre-rendered': true }, 'docker-build': { 'url': 'https://hub.docker.com/_/my_project/', 'tag': '3.3', 'registry': '...', '...': {} } } Clearly that's not an actual schema we'd want to use, but I'm sure we could come up with something. Main point being we'd just require a couple of fields (name+url?) and the rest is some kind of metadata. > > > This way the zuul_return becomes the canonical way to let zuul know of > > anything. And after it is stored in a database and displayed via the > > dashboard, zuul contains all the information it needs to know and can > link > > out to arbitrary storage systems from there. > > I *do* think that we should use zuul_return to get things like the doc > build preview URL (see next paragraph) -- high-level build artifacts > that should be displayed prominently on the build page. Just not every > file we upload. > Heh, okay, we're in agreement. I'm unsure why we'd want a manifest. Or more accurately why we'd want zuul to fetch the manifest by default. > > >> We can also extend Zuul to store additional artifact information about a > >> build. For example, in our docs build jobs, we override the success-url > >> to link to the generated doc content. If we return the build location > >> as additional information about the build, we can highlight that in a > >> prominent place on the build page. In the future, we may want to leave > >> the success-url alone so that users visit the build page and then follow > >> a link to the preview. It would be an extra click compared to what we > >> do today, but it may be a better experience in the long run. Either way > >> would still be an option, of course, according to operator preference. > >> > > > > I like this approach. In fact, I'd even argue to deprecate > success+failure > > url's and hold any extra link logic in the build results. > > Yeah, it's worth considering. I probably lean in that direction as > well, but I'd like to have a period where we keep both to see how it > gets used. > Sounds reasonable to me. Cheers, Josh > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Wed Jul 18 16:16:31 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 18 Jul 2018 09:16:31 -0700 Subject: Potential enhancements to logging In-Reply-To: (Joshua Hesketh's message of "Wed, 18 Jul 2018 13:26:13 +1000") References: <87fu0in7di.fsf@meyer.lemoncheese.net> <87pnzlj1ec.fsf@meyer.lemoncheese.net> Message-ID: <877elsfrj4.fsf@meyer.lemoncheese.net> Joshua Hesketh writes: >> >> If we have the uploader also create and upload a file manifest (say, >> >> zuul-manifest.json), then the build page can fetch[1] this file from the >> >> log storage and display an index of artifacts for the build. This can >> >> allow for access to artifacts directly within the Zuul web user >> >> interface, without the discontinuity of hitting an artifact server for >> >> the index. 
Since swift (and other object storage systems) may not be >> >> capable of dynamically creating indexes, this allows the functionality >> >> to work in all cases and removes the need to pre-generate index pages >> >> when using swift. >> > >> > Why fetch this from a log store rather than having it a part of the >> > zuul_return? >> >> The file manifest could be very, very large. As Clark pointed out in >> another thread, we may be uploading tens of thousands of files. >> > > > Right, but why not just require a list of asset/artefact URL's and metadata > (instead of a complete list)? For example: > > { > 'manifest.json': { > 'url': 'https://logs.example.org/build/uuid/manifest.json' > }, > 'console.log': { > 'url': 'https://logs.example.org/build/uuid/console.log' > }, > 'ARA': { > 'url': 'https://logs.example.org/build/uuid/ara/', > 'pre-rendered': true > }, > 'docker-build': { > 'url': 'https://hub.docker.com/_/my_project/', > 'tag': '3.3', > 'registry': '...', > '...': {} > } > } > > Clearly that's not an actual schema we'd want to use, but I'm sure we could > come up with something. Main point being we'd just require a couple of > fields (name+url?) and the rest is some kind of metadata. > > >> >> > This way the zuul_return becomes the canonical way to let zuul know of >> > anything. And after it is stored in a database and displayed via the >> > dashboard, zuul contains all the information it needs to know and can >> link >> > out to arbitrary storage systems from there. >> >> I *do* think that we should use zuul_return to get things like the doc >> build preview URL (see next paragraph) -- high-level build artifacts >> that should be displayed prominently on the build page. Just not every >> file we upload. >> > > > Heh, okay, we're in agreement. I'm unsure why we'd want a manifest. Or more > accurately why we'd want zuul to fetch the manifest by default. Yeah, I think we may have been talking past each other a bit there, probably because I used the word "artifact" to mean "log file" as well as "important thing like a tarball". It does look like we agree that zuul_return is the right way to communicate about the location and nature of "important thing like a tarball". :) The manifest.json file is there to list everything that's uploaded. Especially including all of the individual log files. That's so that the proposed javascript file browser/HTMLifier can use the manifest to learn what log files exist and display a virtual index of them, and for the ones which look like text files, htmlify them. -Jim From pabelanger at redhat.com Thu Jul 19 16:31:26 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 19 Jul 2018 12:31:26 -0400 Subject: Rethinking the design of base jobs In-Reply-To: <87bmb5h8oc.fsf@meyer.lemoncheese.net> References: <20180717205033.GA10808@localhost.localdomain> <87bmb5h8oc.fsf@meyer.lemoncheese.net> Message-ID: <20180719163126.GB29608@localhost.localdomain> On Tue, Jul 17, 2018 at 02:08:35PM -0700, James E. Blair wrote: > Paul Belanger writes: > > > Greetings, > > > > I'd like to open a discussion around the idea of changing how base jobs work, > > today base jobs are stored in a trusted config project. Mostly, this is done > > because certain aspects of a base jobs need a secret, for example uploading logs > > to logs.exexmple.org. We also have other tasks in a base job, lets say > > configure-mirrors, which pre-populates the mirror info on a node from nodepool. 
> > > > As a recap, the main thing to remember about a config project, or trusted job, > > is we cannot leverage the speculative testing around jobs. We first need to > > merge a patch, then hope the job continues to work. > > > > In openstack-infra, we've create an idea of base / base-test jobs and some > > instructions[1] on how to interact with them. > > > > Now back to configure-mirrors role above, I believe there isn't really a reason > > for the role to live in a trusted job, nothing it does will affect zuul security > > (think secrets) or interact with zuul-executor (delagate_localhost). And given > > that changes to the role need to do the base / base-test dance outlined above, > > I'd like to make the following suggestion. > > > > We keep the idea of base job, but we split it into base (trusted) and base > > (untrusted). Given that today zuul cannot support 2 jobs with the same name, I > > think it would look more like. > > > > foo-zuul-jobs (untrusted project) > > - name: base > > parent: base-minimal > > > > config-project (trusted) > > - name: base-minimal > > parent: null > > > > This means all jobs, now parent to an untrusted project job, with the main > > benefit that speculative to base are more easily tested. And things that really > > need to be locked down (eg secrets), still live in the trusted base-minimal job > > (NOTE: open to renames). > > > > In the case of openstack-infra[2], I actually think a lot of the roles in > > pre.yaml can be moved into untrusted jobs, if we followed this idea. The > > post-run tasks, are almost all trusted context. > > > > Mind you, we don't really need to make this change in zuul, but I'm more > > thinking we should use it as the default recommendation for new users. I'm > > looking to start implementing the change in rdoproject.org but wanted to get the > > idea out before I proposed changes. > > I think that's a fine thing to experiment with. The documentation > describing how to set up and use base jobs in zuul-from-scratch hasn't > been written yet (see a recent mail from me to this list on the > subject). I'm not certain whether we should go as far as recommending > this approach until someone has tried it and can report their > experiences, but we could certainly at least mention it and highlight > its advantages and the pitfalls of putting too much action into the > trusted context. > > There's an opportunity to editorialize should you wish to write that > documentation. > > The closest we come to a recommendation would be the zuul-base-jobs > repo, which does, in fact, have very little in the base job playbooks: > > https://git.zuul-ci.org/cgit/zuul-base-jobs/tree/playbooks/base > As a follow up, we've pushed this change into rdoproject.org today after some testing, and so far works as expected. Right now the base (untrusted) job only runs configure-mirrors, but looking to test more things that have historically been trusted. This parents to base-minimal (trusted), which parents to null. I'll be also reaching out to openstack-infra to also see about experimenting there. I'll also start work on an documentation patch to highlight, as suggested above. - Paul From corvus at inaugust.com Mon Jul 23 15:42:59 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 23 Jul 2018 08:42:59 -0700 Subject: Zuul project update 2018-07-23 Message-ID: <87a7qi3qm4.fsf@meyer.lemoncheese.net> Hi, I've published what I hope is the final revision of the container spec. Please take a look and leave feedback and a vote. 
I spent part of last week working on a new zuul-jobs role to upload logs to swift. I will continue that this week. I'm hoping that with two viable log upload roles, we'll end up with less ambiguity around log uploading. Paul is also performing work to improve roles around log handling. Once that's done, I'll return to my work on line comments. I believe Tristan has fixed the problem with zuul-web which was blocking our release last week. However, we did notice a memory leak in OpenStack-Infra. It's possible that it's already fixed by patches that Tobias happened to already be working on, or it may not. I'd like to observe our instance for a few more days to see how it behaves. But hopefully we can release Zuul this week. We're also planning on burning in the current master version of Nodepool with the idea of making a release there before Shrews goes on vacation. Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email -Jim From corvus at inaugust.com Mon Jul 23 17:26:37 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 23 Jul 2018 10:26:37 -0700 Subject: Zuul project update 2018-07-23 In-Reply-To: <87a7qi3qm4.fsf@meyer.lemoncheese.net> (James E. Blair's message of "Mon, 23 Jul 2018 08:42:59 -0700") References: <87a7qi3qm4.fsf@meyer.lemoncheese.net> Message-ID: <87h8kp3lte.fsf@meyer.lemoncheese.net> corvus at inaugust.com (James E. Blair) writes: > I believe Tristan has fixed the problem with zuul-web which was blocking > our release last week. However, we did notice a memory leak in > OpenStack-Infra. It's possible that it's already fixed by patches that > Tobias happened to already be working on, or it may not. I'd like to > observe our instance for a few more days to see how it behaves. But > hopefully we can release Zuul this week. We should fix this issue before release as well, as it's a regression: https://storyboard.openstack.org/#!/story/2003026 -Jim From corvus at inaugust.com Mon Jul 30 17:59:56 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 30 Jul 2018 10:59:56 -0700 Subject: Zuul project update 2018-07-30 Message-ID: <87a7q8eh9f.fsf@meyer.lemoncheese.net> Hi, This past week has seen: Paul doing some work to fix and better test the ansible synchronize task. Also fixing an issue related to the child-jobs changes from last week. I made progress on both the swift log uploads and file comments efforts. I should be able to test swift uploads this week. Tobias has been working on the ability to pause a job while its children run (this will also be helpful for container work). Tristan has staged several changes implementing the containers spec. I plan on releasing Zuul 3.2.0 shortly. Please help write next week's email by updating this etherpad: https://etherpad.openstack.org/p/zuul-update-email -Jim
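One practical footnote to the swift log upload work mentioned in these updates: the dashboard-side ideas from the logging thread earlier in this archive (fetching a manifest and rendering logs in the browser) depend on the log container returning CORS headers, as the footnote in that proposal points out. Assuming the python-swiftclient CLI, a container named "logs", and a dashboard served from zuul.example.org (all illustrative), enabling that could look like:

```yaml
- hosts: localhost
  tasks:
    # Set a CORS header on the container so the dashboard's origin may
    # fetch objects (log files, a zuul-manifest.json) directly from swift.
    # Assumes the usual OS_* authentication environment variables are set.
    - name: Allow the Zuul dashboard origin to read the log container
      command: >
        swift post logs
        --header "X-Container-Meta-Access-Control-Allow-Origin: https://zuul.example.org"
```

Static file servers need an equivalent header configured in the web server for the same in-browser fetching to work.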