From mordred at inaugust.com Wed Mar 6 17:31:11 2019 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 6 Mar 2019 17:31:11 +0000 Subject: Proposal to create zuul-operator project Message-ID: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> Hey everybody, It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist. SO ... I would like to propose that we create a zuul-operator project as part of Zuul. Given our close ties with Ansible, I think we should use ansible-operator [0] to write it. Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1] Does anyone object? It not, I'll get a repo started. Monty [0] https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/user-guide.md [1] https://www.operatorhub.io/ From artem.goncharov at gmail.com Wed Mar 6 17:34:17 2019 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 6 Mar 2019 18:34:17 +0100 Subject: Proposal to create zuul-operator project In-Reply-To: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> Message-ID: I'm totally for it. Actually it was even on my to-do list for quitr some time already. Artem On Wed, 6 Mar 2019, 18:31 Monty Taylor, wrote: > Hey everybody, > > It seems that kubernetes operators are all the rage these days, and in > the last 2 weeks I've had multiple conversations that have all come back > around to someone wanting a zuul operator to exist. > > SO ... > > I would like to propose that we create a zuul-operator project as part > of Zuul. > > Given our close ties with Ansible, I think we should use > ansible-operator [0] to write it. > > Since we have zuul in production on both vanilla k8s and on OpenShift, I > think we should set up gate jobs to make sure it works on both. Once > we're happy with it - we should publish it to operatorhub. [1] > > Does anyone object? It not, I'll get a repo started. > > Monty > > > [0] > > https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/user-guide.md > [1] https://www.operatorhub.io/ > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From goern at redhat.com Wed Mar 6 17:50:30 2019 From: goern at redhat.com (Christoph Goern) Date: Wed, 6 Mar 2019 18:50:30 +0100 Subject: Proposal to create zuul-operator project In-Reply-To: References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> Message-ID: +10 for that! //G Artem Goncharov schrieb am Mi. 6. März 2019 um 18:35: > I'm totally for it. Actually it was even on my to-do list for quitr some > time already. > > Artem > > On Wed, 6 Mar 2019, 18:31 Monty Taylor, wrote: > >> Hey everybody, >> >> It seems that kubernetes operators are all the rage these days, and in >> the last 2 weeks I've had multiple conversations that have all come back >> around to someone wanting a zuul operator to exist. >> >> SO ... >> >> I would like to propose that we create a zuul-operator project as part >> of Zuul. >> >> Given our close ties with Ansible, I think we should use >> ansible-operator [0] to write it. 
>> >> Since we have zuul in production on both vanilla k8s and on OpenShift, I >> think we should set up gate jobs to make sure it works on both. Once >> we're happy with it - we should publish it to operatorhub. [1] >> >> Does anyone object? It not, I'll get a repo started. >> >> Monty >> >> >> [0] >> >> https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/user-guide.md >> [1] https://www.operatorhub.io/ >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -- Principal Software Engineer - Systems Design & Engineering Mobile: +49 171 2801345 Red Hat GmbH, http://www.de.redhat.com/, Sitz: Grasbrunn, Handelsregister: Amtsgericht München, HRB 153243, Geschäftsführer: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Wed Mar 6 18:12:04 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 6 Mar 2019 13:12:04 -0500 Subject: Proposal to create zuul-operator project In-Reply-To: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> Message-ID: <20190306181204.GA16087@localhost.localdomain> On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: > Hey everybody, > > It seems that kubernetes operators are all the rage these days, and in the > last 2 weeks I've had multiple conversations that have all come back around > to someone wanting a zuul operator to exist. > > SO ... > > I would like to propose that we create a zuul-operator project as part of > Zuul. > > Given our close ties with Ansible, I think we should use ansible-operator > [0] to write it. > > Since we have zuul in production on both vanilla k8s and on OpenShift, I > think we should set up gate jobs to make sure it works on both. Once we're > happy with it - we should publish it to operatorhub. [1] > > Does anyone object? It not, I'll get a repo started. > I think it is a fine idea, I've also seen some talk about it recently too. I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd is that in play or not? Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers by zuul project is running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things maybe get installed (if needed). I am likely showing how much I don't know about k8s / openshift here, but wanting to limit the amount of different ways to do a deployment based on underlying distro bits. > Monty > > > [0] https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/user-guide.md > [1] https://www.operatorhub.io/ > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Wed Mar 6 18:22:31 2019 From: corvus at inaugust.com (James E. 
Blair) Date: Wed, 06 Mar 2019 10:22:31 -0800 Subject: Proposal to create zuul-operator project In-Reply-To: <20190306181204.GA16087@localhost.localdomain> (Paul Belanger's message of "Wed, 6 Mar 2019 13:12:04 -0500") References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> Message-ID: <87bm2n50d4.fsf@meyer.lemoncheese.net> Paul Belanger writes: > I think it is a fine idea, I've also seen some talk about it recently > too. > > I do think we should have some consensus on the bits we want to support > for the deployment. Maybe going as far as saying it is an opinionated > deployment, for better or worse. As an example, graphite / statsd is > that in play or not? I think some things we can be opinionated on (for example, we can pick one SQL backend, because our operator will manage it). There are probably a number of ways to handle stats (manage it ourselves, or leave it to the user to be able to report to their central dashboard). We can make things easy for ourselves by ignoring that for starters. We can always add it later. > Also, could we maybe say a single distro for all container things? I > guess by default, that is what the current containers by zuul project is > running today. I don't much have the energy to get into the weeds of pip > / dpkg / rpm for how things maybe get installed (if needed). I expect our operator would just use the container images we're building. -Jim From cboylan at sapwetik.org Wed Mar 6 18:54:58 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 06 Mar 2019 13:54:58 -0500 Subject: Proposal to create zuul-operator project In-Reply-To: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> Message-ID: <064599ac-b478-4139-be5c-6d857abce4dc@www.fastmail.com> On Wed, Mar 6, 2019, at 9:31 AM, Monty Taylor wrote: > Hey everybody, > > It seems that kubernetes operators are all the rage these days, and in > the last 2 weeks I've had multiple conversations that have all come back > around to someone wanting a zuul operator to exist. > > SO ... > > I would like to propose that we create a zuul-operator project as part > of Zuul. > > Given our close ties with Ansible, I think we should use > ansible-operator [0] to write it. We should check that the bug we found managing gitea with the ansible k8s module is handled by the ansible-operator or push the appropriate parties to get the fixes released. Long story short the kubernetes python module's thread management doesn't close threads properly in the synchronous case which is how the k8s ansible module consumes it. This library is autogenerated python code and the 9.0 beta version fixes it but there isn't a real release yet [2]. On top of that the openshift client library is how ansible consumes this and this in the k8s module and it pins to 8.x [3]. Ideally the kubernetes python lib would release 9.0, then openshift can require >=9.0, then ansible talking to k8s will work reliably. > > Since we have zuul in production on both vanilla k8s and on OpenShift, I > think we should set up gate jobs to make sure it works on both. Once > we're happy with it - we should publish it to operatorhub. [1] > > Does anyone object? It not, I'll get a repo started. 
> > Monty > > > [0] > https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/user-guide.md > [1] https://www.operatorhub.io/ [2] https://pypi.org/project/kubernetes/#history [3] https://github.com/openshift/openshift-restclient-python/blob/master/requirements.txt#L3 From clint at fewbar.com Wed Mar 6 18:59:33 2019 From: clint at fewbar.com (Clint Byrum) Date: Wed, 06 Mar 2019 10:59:33 -0800 Subject: Proposal to create zuul-operator project In-Reply-To: <87bm2n50d4.fsf@meyer.lemoncheese.net> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <87bm2n50d4.fsf@meyer.lemoncheese.net> Message-ID: <155189877319.32107.14363302381980918068@clint-ThinkPad-X1-Carbon-6th> Quoting James E. Blair (2019-03-06 10:22:31) > Paul Belanger writes: > > > I think it is a fine idea, I've also seen some talk about it recently > > too. > > > > I do think we should have some consensus on the bits we want to support > > for the deployment. Maybe going as far as saying it is an opinionated > > deployment, for better or worse. As an example, graphite / statsd is > > that in play or not? > > I think some things we can be opinionated on (for example, we can pick > one SQL backend, because our operator will manage it). There are > probably a number of ways to handle stats (manage it ourselves, or leave > it to the user to be able to report to their central dashboard). We can > make things easy for ourselves by ignoring that for starters. We can > always add it later. > All of the clouds I am interested inputting Zuul on have DBaaS and I tend to prefer those for a number of reasons (global backup policies, better performance, less ops load). I'm not saying v0 has to have this, but it would be nice if the DB part had an option to consume an external one. But I'd probably use it without that, so, just a nice to have. :) > > Also, could we maybe say a single distro for all container things? I > > guess by default, that is what the current containers by zuul project is > > running today. I don't much have the energy to get into the weeds of pip > > / dpkg / rpm for how things maybe get installed (if needed). > > I expect our operator would just use the container images we're > building. > Another really hard part of this is what to include in the executor image. I currently add in kubectl for kubernetes resources, and boto3 for s3 logs. These are local choices that not everyone needs, and I imagine there are other examples of base jobs that rely on executor image contents not in the base executor image. So I think it's important to make it relatively simple to build your own images, but still use said operator. From mordred at inaugust.com Wed Mar 6 19:06:08 2019 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 6 Mar 2019 19:06:08 +0000 Subject: Proposal to create zuul-operator project In-Reply-To: <20190306181204.GA16087@localhost.localdomain> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> Message-ID: <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> On 3/6/19 6:12 PM, Paul Belanger wrote: > On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: >> Hey everybody, >> >> It seems that kubernetes operators are all the rage these days, and in the >> last 2 weeks I've had multiple conversations that have all come back around >> to someone wanting a zuul operator to exist. >> >> SO ... >> >> I would like to propose that we create a zuul-operator project as part of >> Zuul. 
>> >> Given our close ties with Ansible, I think we should use ansible-operator >> [0] to write it. >> >> Since we have zuul in production on both vanilla k8s and on OpenShift, I >> think we should set up gate jobs to make sure it works on both. Once we're >> happy with it - we should publish it to operatorhub. [1] >> >> Does anyone object? It not, I'll get a repo started. >> > I think it is a fine idea, I've also seen some talk about it recently > too. > > I do think we should have some consensus on the bits we want to support > for the deployment. Maybe going as far as saying it is an opinionated > deployment, for better or worse. As an example, graphite / statsd is > that in play or not? I think that's a great question. I do believe it should be an opinionated deployment - but there are probably places where people will want to make some choices from a specific set of choices. Like - by default I think it should spin up a percona-xtradb cluster as provided by the percona operator. BUT - I could see a setting in teh main Zuul CRD like "use-existing-db" that takes a sqlalchemy url. That way if there is someone who has an existing db infrastructure, they could take advantage of it - but the Zuul operator gets you a working system out of the box. Obviously we'll need to spin up a zookeeper. Not sure if there is an operator for that yet - we might want to make one or work with someone to make one. I could see a graphite/statsd option with the ability to point to an existing graphite. Maybe we just start with CRD field that indicates the stats_host like we have now? But - yes - opinionated, mostly batteries included - maybe some amount of optional things. > Also, could we maybe say a single distro for all container things? I > guess by default, that is what the current containers by zuul project is > running today. I don't much have the energy to get into the weeds of pip > / dpkg / rpm for how things maybe get installed (if needed). Yes. All of our container images as we are publishing them now are awesome, and are what I expect us to use. I am categorically opposed to us making and publishing different images. One of the nice things about publishing container images is that you can do them once and do them well. We're using the python base image. It happens to be based on debian-strech - but that's irrelevant. It's running python. I DO think we'll likely need to support specifying an image location in the CRD. (rook does this for ceph too) That's because I'd like for Tobias to be able to use the operator if he chooses, but he uses local builds of the images. And I think that's fine. > I am likely showing how much I don't know about k8s / openshift here, > but wanting to limit the amount of different ways to do a deployment > based on underlying distro bits. +10000000000 I believe there is no value in producing divergent images. From mordred at inaugust.com Wed Mar 6 19:18:02 2019 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 6 Mar 2019 19:18:02 +0000 Subject: Proposal to create zuul-operator project In-Reply-To: <155189877319.32107.14363302381980918068@clint-ThinkPad-X1-Carbon-6th> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <87bm2n50d4.fsf@meyer.lemoncheese.net> <155189877319.32107.14363302381980918068@clint-ThinkPad-X1-Carbon-6th> Message-ID: <3b117cc6-1c05-8e7d-4b21-0810c3191136@inaugust.com> On 3/6/19 6:59 PM, Clint Byrum wrote: > Quoting James E. 
Blair (2019-03-06 10:22:31) >> Paul Belanger writes: >> >>> I think it is a fine idea, I've also seen some talk about it recently >>> too. >>> >>> I do think we should have some consensus on the bits we want to support >>> for the deployment. Maybe going as far as saying it is an opinionated >>> deployment, for better or worse. As an example, graphite / statsd is >>> that in play or not? >> >> I think some things we can be opinionated on (for example, we can pick >> one SQL backend, because our operator will manage it). There are >> probably a number of ways to handle stats (manage it ourselves, or leave >> it to the user to be able to report to their central dashboard). We can >> make things easy for ourselves by ignoring that for starters. We can >> always add it later. >> > > All of the clouds I am interested inputting Zuul on have DBaaS and I > tend to prefer those for a number of reasons (global backup policies, > better performance, less ops load). I'm not saying v0 has to have this, > but it would be nice if the DB part had an option to consume an external > one. But I'd probably use it without that, so, just a nice to have. :) I crossed thread streams and said words about this replying to Paul but missed these... I think I agree with Jim - we should only manage one db. But I also agree with Clint - we should allow users to ask us to not manage a db and hand us a connection url. The default should be to manage a database and using an external one should be an opt-in for more advanced users. >>> Also, could we maybe say a single distro for all container things? I >>> guess by default, that is what the current containers by zuul project is >>> running today. I don't much have the energy to get into the weeds of pip >>> / dpkg / rpm for how things maybe get installed (if needed). >> >> I expect our operator would just use the container images we're >> building. >> > > Another really hard part of this is what to include in the executor > image. I currently add in kubectl for kubernetes resources, and boto3 > for s3 logs. These are local choices that not everyone needs, and I > imagine there are other examples of base jobs that rely on executor > image contents not in the base executor image. So I think it's important > to make it relatively simple to build your own images, but still use > said operator. rook has a config field in its CephCluster spec: image: ceph/ceph:v13.2.2-20181206 that let's people do that. In our case, there are potentially a set of related images - but that should still be easy enough to do - and have them default to the images we build and expect. From pabelanger at redhat.com Wed Mar 6 19:18:47 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 6 Mar 2019 14:18:47 -0500 Subject: Proposal to create zuul-operator project In-Reply-To: <155189877319.32107.14363302381980918068@clint-ThinkPad-X1-Carbon-6th> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <87bm2n50d4.fsf@meyer.lemoncheese.net> <155189877319.32107.14363302381980918068@clint-ThinkPad-X1-Carbon-6th> Message-ID: <20190306191847.GB16087@localhost.localdomain> On Wed, Mar 06, 2019 at 10:59:33AM -0800, Clint Byrum wrote: > Quoting James E. Blair (2019-03-06 10:22:31) > > Paul Belanger writes: > > > > > I think it is a fine idea, I've also seen some talk about it recently > > > too. > > > > > > I do think we should have some consensus on the bits we want to support > > > for the deployment. 
Maybe going as far as saying it is an opinionated > > > deployment, for better or worse. As an example, graphite / statsd is > > > that in play or not? > > > > I think some things we can be opinionated on (for example, we can pick > > one SQL backend, because our operator will manage it). There are > > probably a number of ways to handle stats (manage it ourselves, or leave > > it to the user to be able to report to their central dashboard). We can > > make things easy for ourselves by ignoring that for starters. We can > > always add it later. > > > > All of the clouds I am interested inputting Zuul on have DBaaS and I > tend to prefer those for a number of reasons (global backup policies, > better performance, less ops load). I'm not saying v0 has to have this, > but it would be nice if the DB part had an option to consume an external > one. But I'd probably use it without that, so, just a nice to have. :) > Yes, completly agree. I'd love to see things like db / zookeeper be able to consume external and if no able to, have option for zuul-operator to spin it up. I am sure we can do it, just wanted to have the discussion around it for all the reasons you just listed. > > > Also, could we maybe say a single distro for all container things? I > > > guess by default, that is what the current containers by zuul project is > > > running today. I don't much have the energy to get into the weeds of pip > > > / dpkg / rpm for how things maybe get installed (if needed). > > > > I expect our operator would just use the container images we're > > building. > > > > Another really hard part of this is what to include in the executor > image. I currently add in kubectl for kubernetes resources, and boto3 > for s3 logs. These are local choices that not everyone needs, and I > imagine there are other examples of base jobs that rely on executor > image contents not in the base executor image. So I think it's important > to make it relatively simple to build your own images, but still use > said operator. > ++ Also been running into this issue locally myself. > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From tdecacqu at redhat.com Thu Mar 7 02:32:39 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 07 Mar 2019 11:32:39 +0900 Subject: Proposal to create zuul-operator project In-Reply-To: <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> Message-ID: <87va0vqurc.tristanC@fedora> On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote: > On 3/6/19 6:12 PM, Paul Belanger wrote: >> On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: >>> Hey everybody, >>> >>> It seems that kubernetes operators are all the rage these days, and in the >>> last 2 weeks I've had multiple conversations that have all come back around >>> to someone wanting a zuul operator to exist. >>> Hello, Thank you for starting this thread, It seems like a great oportunity to converge the different deployment and operation tools. Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose: * log-processor-operator * elastic-search-operator * paste-operator, etherpad-operator, ... 
* gerrit-operator We also started to investigate operators for Software Factory component and perhaps we could join force? >>> SO ... >>> >>> I would like to propose that we create a zuul-operator project as part of >>> Zuul. >>> >>> Given our close ties with Ansible, I think we should use ansible-operator >>> [0] to write it. >>> >>> Since we have zuul in production on both vanilla k8s and on OpenShift, I >>> think we should set up gate jobs to make sure it works on both. Once we're >>> happy with it - we should publish it to operatorhub. [1] >>> >>> Does anyone object? It not, I'll get a repo started. >>> >> I think it is a fine idea, I've also seen some talk about it recently >> too. >> >> I do think we should have some consensus on the bits we want to support >> for the deployment. Maybe going as far as saying it is an opinionated >> deployment, for better or worse. As an example, graphite / statsd is >> that in play or not? > > I think that's a great question. > > I do believe it should be an opinionated deployment - but there are > probably places where people will want to make some choices from a > specific set of choices. Like - by default I think it should spin up a > percona-xtradb cluster as provided by the percona operator. BUT - I > could see a setting in teh main Zuul CRD like "use-existing-db" that > takes a sqlalchemy url. That way if there is someone who has an existing > db infrastructure, they could take advantage of it - but the Zuul > operator gets you a working system out of the box. > > Obviously we'll need to spin up a zookeeper. Not sure if there is an > operator for that yet - we might want to make one or work with someone > to make one. > There are actually 2 operators for ZooKeeper already listed in this page: https://github.com/operator-framework/awesome-operators > I could see a graphite/statsd option with the ability to point to an > existing graphite. Maybe we just start with CRD field that indicates the > stats_host like we have now? > > But - yes - opinionated, mostly batteries included - maybe some amount > of optional things. > Should we discuss this with a spec? For example: How are we going to configure the connections and the tenants' configuration? Software Factory batteries include a zuul-reconfigure job to manage the main.yaml file. Should the zuul-operator setup such job, or perhaps should we also need a zuul-quick-start-operator? >> Also, could we maybe say a single distro for all container things? I >> guess by default, that is what the current containers by zuul project is >> running today. I don't much have the energy to get into the weeds of pip >> / dpkg / rpm for how things maybe get installed (if needed). > > Yes. All of our container images as we are publishing them now are > awesome, and are what I expect us to use. > > I am categorically opposed to us making and publishing different images. > One of the nice things about publishing container images is that you can > do them once and do them well. We're using the python base image. It > happens to be based on debian-strech - but that's irrelevant. It's > running python. > That sounds great, but how are we going to cope with the next heartbleed or shellshock? IIUC, we would need to wait for a fix to land in the debian-stretch image, then wait for a new opendevorg/python-builder image and finally wait for a new zuul image to be published. Or what happen when the zuul image gets published with an incompatible openstacksdk requirement? 
Thus I think the base image and how the zuul bits gets bundled are relevant. For example OpenShift users may want to use BuildConfig instead so that a parent layer update automatically trigger childs' layer rebuild. Regards, -Tristan > I DO think we'll likely need to support specifying an image location in > the CRD. (rook does this for ceph too) That's because I'd like for > Tobias to be able to use the operator if he chooses, but he uses local > builds of the images. And I think that's fine. > >> I am likely showing how much I don't know about k8s / openshift here, >> but wanting to limit the amount of different ways to do a deployment >> based on underlying distro bits. > > +10000000000 > > I believe there is no value in producing divergent images. > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From fungi at yuggoth.org Thu Mar 7 13:47:10 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 7 Mar 2019 13:47:10 +0000 Subject: Proposal to create zuul-operator project In-Reply-To: <87va0vqurc.tristanC@fedora> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> Message-ID: <20190307134710.rdowyyqkbrcvf5hg@yuggoth.org> On 2019-03-07 11:32:39 +0900 (+0900), Tristan Cacqueray wrote: [...] > IIUC, we would need to wait for a fix to land in the > debian-stretch image, We could just wait for the fix to land in Debian itself (they're going to be part of the coordinated disclosure of most of the sort of vulnerabilities you're talking about so should be available at basically the same time as it's announced by researchers) and then rebuild those images ourselves if that step is taking too long... or get involved with the community who maintains them and help out? > then wait for a new opendevorg/python-builder image and finally > wait for a new zuul image to be published. Luckily OpenDev and Zuul image building is managed in the same system and maintained by mostly the same people so I wouldn't anticipate much lag there. But also, this entire stack of images is buildable by downstream consumers who want to do that themselves instead of waiting for us. > Or what happen when the zuul image gets published with an > incompatible openstacksdk requirement? [...] I don't see where the choice of underlying distribution comes into play here. We'd explicitly choose which version of OpenStackSDK to install, so if it's incompatible then that's on us, right? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pabelanger at redhat.com Thu Mar 7 14:13:47 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 7 Mar 2019 09:13:47 -0500 Subject: Proposal to create zuul-operator project In-Reply-To: <87va0vqurc.tristanC@fedora> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> Message-ID: <20190307141347.GA20461@localhost.localdomain> On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote: > > On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote: > > On 3/6/19 6:12 PM, Paul Belanger wrote: > >> On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: > >>> Hey everybody, > >>> > >>> It seems that kubernetes operators are all the rage these days, and in the > >>> last 2 weeks I've had multiple conversations that have all come back around > >>> to someone wanting a zuul operator to exist. > >>> > > Hello, > > Thank you for starting this thread, It seems like a great oportunity to > converge the different deployment and operation tools. > > Ideally, we would grow a hierarchy of operators to manage the gating > system, and then you could have an opendev-operator to compose: > * log-processor-operator > * elastic-search-operator > * paste-operator, etherpad-operator, ... > * gerrit-operator > I think these are fine to have, but I am on the fence if they are under the scope of what zuul should do. As an example, I can totally see opendev.org wanting doing this. As long as everything is all disconnected from zuul-operator, I would be okay. Personally, I don't want zuul to managed a gerrit. > We also started to investigate operators for Software Factory component > and perhaps we could join force? > > > >>> SO ... > >>> > >>> I would like to propose that we create a zuul-operator project as part of > >>> Zuul. > >>> > >>> Given our close ties with Ansible, I think we should use ansible-operator > >>> [0] to write it. > >>> > >>> Since we have zuul in production on both vanilla k8s and on OpenShift, I > >>> think we should set up gate jobs to make sure it works on both. Once we're > >>> happy with it - we should publish it to operatorhub. [1] > >>> > >>> Does anyone object? It not, I'll get a repo started. > >>> > >> I think it is a fine idea, I've also seen some talk about it recently > >> too. > >> > >> I do think we should have some consensus on the bits we want to support > >> for the deployment. Maybe going as far as saying it is an opinionated > >> deployment, for better or worse. As an example, graphite / statsd is > >> that in play or not? > > > > I think that's a great question. > > > > I do believe it should be an opinionated deployment - but there are > > probably places where people will want to make some choices from a > > specific set of choices. Like - by default I think it should spin up a > > percona-xtradb cluster as provided by the percona operator. BUT - I > > could see a setting in teh main Zuul CRD like "use-existing-db" that > > takes a sqlalchemy url. That way if there is someone who has an existing > > db infrastructure, they could take advantage of it - but the Zuul > > operator gets you a working system out of the box. > > > > Obviously we'll need to spin up a zookeeper. Not sure if there is an > > operator for that yet - we might want to make one or work with someone > > to make one. 
> > > > There are actually 2 operators for ZooKeeper already listed in this page: > https://github.com/operator-framework/awesome-operators > > > > I could see a graphite/statsd option with the ability to point to an > > existing graphite. Maybe we just start with CRD field that indicates the > > stats_host like we have now? > > > > But - yes - opinionated, mostly batteries included - maybe some amount > > of optional things. > > > > Should we discuss this with a spec? For example: > > How are we going to configure the connections and the tenants' > configuration? > > Software Factory batteries include a zuul-reconfigure job to manage the > main.yaml file. Should the zuul-operator setup such job, or perhaps > should we also need a zuul-quick-start-operator? > > > >> Also, could we maybe say a single distro for all container things? I > >> guess by default, that is what the current containers by zuul project is > >> running today. I don't much have the energy to get into the weeds of pip > >> / dpkg / rpm for how things maybe get installed (if needed). > > > > Yes. All of our container images as we are publishing them now are > > awesome, and are what I expect us to use. > > > > I am categorically opposed to us making and publishing different images. > > One of the nice things about publishing container images is that you can > > do them once and do them well. We're using the python base image. It > > happens to be based on debian-strech - but that's irrelevant. It's > > running python. > > > > That sounds great, but how are we going to cope with the next heartbleed > or shellshock? IIUC, we would need to wait for a fix to land in the > debian-stretch image, then wait for a new opendevorg/python-builder > image and finally wait for a new zuul image to be published. > > Or what happen when the zuul image gets published with an incompatible > openstacksdk requirement? > > Thus I think the base image and how the zuul bits gets bundled are > relevant. For example OpenShift users may want to use BuildConfig > instead so that a parent layer update automatically trigger childs' > layer rebuild. > Yah, this is what I was hoping we'd discuss more when we dove into the weeds. Can a BuildConfig work with k8s? Or does some other k8s items work with openshift? If not, I think it is fine if we only suppoert one and not try to do everything. Which one we support, I don't really know, hopefully others can say which one is better :) We can also support both, but I feel that is a lot of work for minimal gain. > Regards, > -Tristan > > > > I DO think we'll likely need to support specifying an image location in > > the CRD. (rook does this for ceph too) That's because I'd like for > > Tobias to be able to use the operator if he chooses, but he uses local > > builds of the images. And I think that's fine. > > > >> I am likely showing how much I don't know about k8s / openshift here, > >> but wanting to limit the amount of different ways to do a deployment > >> based on underlying distro bits. > > > > +10000000000 > > > > I believe there is no value in producing divergent images. 
> > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From mordred at inaugust.com Thu Mar 7 15:19:07 2019 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 7 Mar 2019 15:19:07 +0000 Subject: Proposal to create zuul-operator project In-Reply-To: <20190307141347.GA20461@localhost.localdomain> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> <20190307141347.GA20461@localhost.localdomain> Message-ID: On 3/7/19 2:13 PM, Paul Belanger wrote: > On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote: >> >> On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote: >>> On 3/6/19 6:12 PM, Paul Belanger wrote: >>>> On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: >>>>> Hey everybody, >>>>> >>>>> It seems that kubernetes operators are all the rage these days, and in the >>>>> last 2 weeks I've had multiple conversations that have all come back around >>>>> to someone wanting a zuul operator to exist. >>>>> >> >> Hello, >> >> Thank you for starting this thread, It seems like a great oportunity to >> converge the different deployment and operation tools. >> >> Ideally, we would grow a hierarchy of operators to manage the gating >> system, and then you could have an opendev-operator to compose: >> * log-processor-operator >> * elastic-search-operator >> * paste-operator, etherpad-operator, ... >> * gerrit-operator >> > I think these are fine to have, but I am on the fence if they are under > the scope of what zuul should do. As an example, I can totally see > opendev.org wanting doing this. As long as everything is all > disconnected from zuul-operator, I would be okay. Personally, I don't > want zuul to managed a gerrit. I agree. This is definitely an interesting thought - although I'm not sure we're all the way there from the opendev POV yet. As I learn more about operators I could certainly see an opportunity to write one for gitea once the indexing issue is fixed and we can go back to k8s for it. That said - I think opendev operators are out of scope for the zuul team - so we may want to chat about that in a slightly different context. (If the zuul operator goes well and we find it pleasing I could see that motivating doing more of them) >> We also started to investigate operators for Software Factory component >> and perhaps we could join force? We should absolutely always join forces. :) >> >>>>> SO ... >>>>> >>>>> I would like to propose that we create a zuul-operator project as part of >>>>> Zuul. >>>>> >>>>> Given our close ties with Ansible, I think we should use ansible-operator >>>>> [0] to write it. >>>>> >>>>> Since we have zuul in production on both vanilla k8s and on OpenShift, I >>>>> think we should set up gate jobs to make sure it works on both. Once we're >>>>> happy with it - we should publish it to operatorhub. [1] >>>>> >>>>> Does anyone object? It not, I'll get a repo started. >>>>> >>>> I think it is a fine idea, I've also seen some talk about it recently >>>> too. >>>> >>>> I do think we should have some consensus on the bits we want to support >>>> for the deployment. Maybe going as far as saying it is an opinionated >>>> deployment, for better or worse. As an example, graphite / statsd is >>>> that in play or not? >>> >>> I think that's a great question. 
>>> >>> I do believe it should be an opinionated deployment - but there are >>> probably places where people will want to make some choices from a >>> specific set of choices. Like - by default I think it should spin up a >>> percona-xtradb cluster as provided by the percona operator. BUT - I >>> could see a setting in teh main Zuul CRD like "use-existing-db" that >>> takes a sqlalchemy url. That way if there is someone who has an existing >>> db infrastructure, they could take advantage of it - but the Zuul >>> operator gets you a working system out of the box. >>> >>> Obviously we'll need to spin up a zookeeper. Not sure if there is an >>> operator for that yet - we might want to make one or work with someone >>> to make one. >>> >> >> There are actually 2 operators for ZooKeeper already listed in this page: >> https://github.com/operator-framework/awesome-operators WOOT! >> >>> I could see a graphite/statsd option with the ability to point to an >>> existing graphite. Maybe we just start with CRD field that indicates the >>> stats_host like we have now? >>> >>> But - yes - opinionated, mostly batteries included - maybe some amount >>> of optional things. >>> >> >> Should we discuss this with a spec? For example: Yes - probably so - because there are lots of design points (as you start to raise here). I'll start one today and we can iterate on it together. >> How are we going to configure the connections and the tenants' >> configuration? It's a great question. My first hunch is that both the connections config and the tenant config should just be parameters to the Zuul CRD, and then the operator could take that data, transform it into the right file formats and restart zuul or run zuul-reconfigure as needed. It's also feasible we could do it as referenced ConfigMap objects. It's also conceivable that maybe we just want connections to be parameters to the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just have the operator concatenate them appropriately. That last option starts to worry me though - starting to feel a bit like the puppet modules that wanted to re-implement the config language of each thing (like apache) in the Puppet DSL - and how it turns out we wind up preferring the ansible "just jinja template the file itself" approach more. But I honestly don't know which we'll like more. I think the EASIEST thing for a simple first start would be having each config just be parameters to the Zuul CRD - but we could probably play with the other things and see what we like. Or maybe there is a different idea. >> Software Factory batteries include a zuul-reconfigure job to manage the >> main.yaml file. Should the zuul-operator setup such job, or perhaps >> should we also need a zuul-quick-start-operator? I don't think the zuul operator should set up any jobs, because zuul jobs come from git repos - and the zuul operator would have to learn about all of the different types of git backends for it to be able to push commits into one - and the operator would have to have git account access. That seems like a feature that a software factory operator (or an opendev operator) would be in a much better position to provide - since it's reasonable for those operators to know more things about the surrounding git infrastructure and have accounts for it. A zuul-reconfigure job might be a nice addition to zuul-jobs though.
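To make that "parameters on the Zuul CRD" hunch a bit more concrete, here's a super rough sketch of what such a manifest might look like - the apiVersion, kind and every field name below are invented purely for illustration, none of this exists yet:

  apiVersion: zuul-ci.org/v1alpha1
  kind: Zuul
  metadata:
    name: zuul
  spec:
    # Optional overrides; the default would be the images we publish.
    images:
      zuul: docker.io/zuul/zuul:latest
    # Omit this to have the operator manage a database itself,
    # or set it to point at existing infrastructure.
    externalDatabase: mysql+pymysql://zuul:secret@db.example.com/zuul
    statsdHost: graphite.example.com
    connections:
      gerrit:
        driver: gerrit
        server: review.example.com
        user: zuul
    tenants:
      - tenant:
          name: example
          source:
            gerrit:
              config-projects:
                - example/zuul-config
              untrusted-projects:
                - example/demo

The operator would render zuul.conf and the tenant config from that and restart or run zuul-reconfigure as needed.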
And maybe there would need to be a zuul-operator-reconfigure if applying main.yaml changes involves pushing a new Zuul CRD manifest via kubectl to get the operator to update the file and run reconfigure. (take this main.yaml file, slurp it into the manifest, run kubectl) >> >>>> Also, could we maybe say a single distro for all container things? I >>>> guess by default, that is what the current containers by zuul project is >>>> running today. I don't much have the energy to get into the weeds of pip >>>> / dpkg / rpm for how things maybe get installed (if needed). >>> >>> Yes. All of our container images as we are publishing them now are >>> awesome, and are what I expect us to use. >>> >>> I am categorically opposed to us making and publishing different images. >>> One of the nice things about publishing container images is that you can >>> do them once and do them well. We're using the python base image. It >>> happens to be based on debian-strech - but that's irrelevant. It's >>> running python. >>> >> >> That sounds great, but how are we going to cope with the next heartbleed >> or shellshock? IIUC, we would need to wait for a fix to land in the >> debian-stretch image, then wait for a new opendevorg/python-builder >> image and finally wait for a new zuul image to be published. Jeremy also responded to this - so this is basically me saying "I agree with fungi" in more words. We have to do that no matter what the base image is. But we don't actually have to do that. The zuul Dockerfile already does the apt-get update / apt-get install. If needed we could toss in a dist-upgrade if the fix has hit the repo but not the base image so that we don't have to wait for the stack. Supporting multiple in-image distros doesn't improve that. In fact, it increases the work for upstream because now we have to do that exact process you described, but we have to do it for each distro we're building images with. >> Or what happen when the zuul image gets published with an incompatible >> openstacksdk requirement? >> >> Thus I think the base image and how the zuul bits gets bundled are >> relevant. For example OpenShift users may want to use BuildConfig >> instead so that a parent layer update automatically trigger childs' >> layer rebuild. Sure - but the zuul images are built with standard Dockerfiles - so there should be nothing stopping people from rebuilding them locally however they want. > Yah, this is what I was hoping we'd discuss more when we dove into the > weeds. Can a BuildConfig work with k8s? Or does some other k8s items > work with openshift? If not, I think it is fine if we only suppoert one > and not try to do everything. Which one we support, I don't really know, > hopefully others can say which one is better :) We can also support > both, but I feel that is a lot of work for minimal gain. I think it's a strange idea, when you have a Zuul, to want to use a different build system to build zuul images, since you would not be able to CI those images as effectively as if you use the zuul image building jobs. The only good reason for local rebuilds is if you're carrying local changes. If you are carrying local changes, one would imagine you would want to be able to submit code reviews to a git system to manage those changes, and you would want those code reviews to get gated, and you would want to publish the images that result in the gate passing. If you have all of that set up in zuul to test and publish images from your local changes, it stands to reason you'd want zuul to also just do an image rebuild on your local branch in the scenarios you mention, and using a BuildConfig would not really help you do that at all.
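To sketch what I mean (the job names here are made up for illustration - they're not real zuul-jobs jobs), a fork carrying local patches could add something like this to its own config:

  - project:
      check:
        jobs:
          - zuul-build-image        # build from the proposed change, don't publish
      gate:
        jobs:
          - zuul-build-image
      promote:
        jobs:
          - zuul-upload-image       # publish only what passed the gate

That way every image you run has been through the same review and gating as the change that produced it, which is exactly the feedback loop a BuildConfig sitting outside of Zuul doesn't give you.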
If you have all of that set up in zuul to test and publish images from your local changes, it standa to reason you'd want zuul to also just do an image rebuild on your local branch in the scenarios you metion, and using a BuildConfig would not really help you do that at all. **BUT** if that's what you want to do, I mean, nothing should be stopping you. It certainly doesn't mean we need to support multi-distro inside the image content we publish as an upstream. I'm pretty sure an OpenShift BuildConfig can build an image based on python:slim as well as anything else. >> Regards, >> -Tristan >> >> >>> I DO think we'll likely need to support specifying an image location in >>> the CRD. (rook does this for ceph too) That's because I'd like for >>> Tobias to be able to use the operator if he chooses, but he uses local >>> builds of the images. And I think that's fine. >>> >>>> I am likely showing how much I don't know about k8s / openshift here, >>>> but wanting to limit the amount of different ways to do a deployment >>>> based on underlying distro bits. >>> >>> +10000000000 >>> >>> I believe there is no value in producing divergent images. >>> >> > > > >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > From corvus at inaugust.com Thu Mar 7 15:22:32 2019 From: corvus at inaugust.com (James E. Blair) Date: Thu, 07 Mar 2019 07:22:32 -0800 Subject: Proposal to create zuul-operator project In-Reply-To: <20190307141347.GA20461@localhost.localdomain> (Paul Belanger's message of "Thu, 7 Mar 2019 09:13:47 -0500") References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> <20190307141347.GA20461@localhost.localdomain> Message-ID: <87d0n2zp3b.fsf@meyer.lemoncheese.net> Paul Belanger writes: > On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote: >> >> Ideally, we would grow a hierarchy of operators to manage the gating >> system, and then you could have an opendev-operator to compose: >> * log-processor-operator >> * elastic-search-operator >> * paste-operator, etherpad-operator, ... >> * gerrit-operator >> > I think these are fine to have, but I am on the fence if they are under > the scope of what zuul should do. As an example, I can totally see > opendev.org wanting doing this. As long as everything is all > disconnected from zuul-operator, I would be okay. Personally, I don't > want zuul to managed a gerrit. I'm not on the fence about that. :) If we take Tristan's comment as describing a Zuul operator's place in potentially one of many larger systems, that's great. It's good to think about how what we create might be used. But here on this mailing list, this team of folks is focused on the Zuul project, which is the components here: https://git.zuul-ci.org/cgit So it's fine to suggest that perhaps this Zuul operator might be part of a larger OpenDev operator project, but that decision would be made by the OpenDev team. If folks are interested in contributing to such a project, it would be worth bringing it up on the openstack-infra mailing list. 
>> We also started to investigate operators for Software Factory component >> and perhaps we could join force? Please do. :) -Jim From pabelanger at redhat.com Thu Mar 7 15:30:53 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 7 Mar 2019 10:30:53 -0500 Subject: Proposal to create zuul-operator project In-Reply-To: <87d0n2zp3b.fsf@meyer.lemoncheese.net> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> <20190307141347.GA20461@localhost.localdomain> <87d0n2zp3b.fsf@meyer.lemoncheese.net> Message-ID: <20190307153053.GA27706@localhost.localdomain> On Thu, Mar 07, 2019 at 07:22:32AM -0800, James E. Blair wrote: > Paul Belanger writes: > > > On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote: > >> > >> Ideally, we would grow a hierarchy of operators to manage the gating > >> system, and then you could have an opendev-operator to compose: > >> * log-processor-operator > >> * elastic-search-operator > >> * paste-operator, etherpad-operator, ... > >> * gerrit-operator > >> > > I think these are fine to have, but I am on the fence if they are under > > the scope of what zuul should do. As an example, I can totally see > > opendev.org wanting doing this. As long as everything is all > > disconnected from zuul-operator, I would be okay. Personally, I don't > > want zuul to managed a gerrit. > > I'm not on the fence about that. :) > > If we take Tristan's comment as describing a Zuul operator's place in > potentially one of many larger systems, that's great. It's good to > think about how what we create might be used. > > But here on this mailing list, this team of folks is focused on the Zuul > project, which is the components here: https://git.zuul-ci.org/cgit > > So it's fine to suggest that perhaps this Zuul operator might be part of > a larger OpenDev operator project, but that decision would be made by > the OpenDev team. > > If folks are interested in contributing to such a project, it would be > worth bringing it up on the openstack-infra mailing list. > Agree, you said it much better that I originally did. :) > >> We also started to investigate operators for Software Factory component > >> and perhaps we could join force? > > Please do. :) > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Thu Mar 7 21:48:17 2019 From: corvus at inaugust.com (James E. Blair) Date: Thu, 07 Mar 2019 13:48:17 -0800 Subject: Migrating git repos to OpenDev Message-ID: <87sgvys6e6.fsf@meyer.lemoncheese.net> Hi, As discussed in November[1], the OpenStack project infrastructure is being rebranded as "OpenDev" to better support a wider community of projects. We are nearly ready to perform the part of this transition with the largest impact: moving the authoritative git repositories for existing projects. In this email, I'd like to introduce the new hosting system we are preparing, discuss the transition, and invite projects to work with us on the logistics of the change. Gerrit ====== Gerrit is the core of our system and it will remain so in OpenDev. As part of this move, we will rename the gerrit server from review.openstack.org to review.opendev.org. 
As part of the transition, we will automatically merge appropriate changes to all branches of all repositories updating .gitreview and Zuul configuration files. Any further changes (README files, etc.) we expect to be made by individual project contributors. Repository Browsing =================== Currently our canonical *public* repository system is the cgit server at https://git.openstack.org/ (and git.airshipit.org, git.starlingx.io, and git.zuul-ci.org). This is a load balanced cluster of several servers which is designed to handle all the public git repository traffic, as it scales much better than Gerrit (and has a more friendly domain name). >From a technical standpoint, it's excellent, but its usability could be improved. Therefore, as part of this transition, we will replace the cgit servers with a new system based on Gitea. Gitea is a complete development collaboration system, but it's very flexible and will allow us to disable components which we aren't using. We will operate it in a read-only configuration where it will act as the public mirror for Gerrit. The advantages it has over the current system are: * Shorter domain name in project URLs: https://git.openstack.org/openstack/nova vs https://opendev.org/openstack/nova * Clone and browsing URLs are the same (with cgit, the browsing URL has an extra path component) * More visually pleasing code browsing * Integrated code searching * Ability to highlight multiple lines in links When we perform the transition we will install redirects from git.openstack.org (and the other git sites) to opendev.org, and will maintain those redirects for the foreseeable future. We will construct them so that even existing deep links to individual files in individual commits to cgit will redirect to the correct location on opendev.org. This system is up and running now with a live mirror of data from Gerrit, and you can start testing it out today at https://opendev.org/ Please let us know if you encounter any problems. If you would like to read more about the design of this system and the transition, see the infra-spec[2]. GitHub ====== Currently all OpenStack projects are replicated to GitHub. We do not plan on changing that during the transition, however, any projects outside of the openstack*/ namespaces will not automatically be replicated to GitHub, and we do not plan on adding that in the future. We do, however, support projects using Zuul to run post-merge jobs to push updates to GitHub or any other third-party mirrors with their own credentials. We will be happy to work with anyone interested in that to help set up jobs to do so. We are adopting this approach so that individual projects can have more control over how they are represented in social media, and to give us more flexibility in supporting our own organizational namespaces on OpenDev without assuming they map directly to GitHub. Eventually we plan on moving the OpenStack project to that system as well and retiring direct replication from Gerrit to GitHub completely. But we will defer that work until after this transition. Logistics ========= We can prepare much of the system in advance (as we have for the hosting system on opendev.org), but the actual transition and renaming of the Gerrit server will need to happen at once during an outage window. We need to schedule that outage and begin preparing for it. 
Since all of the project git URLs are going to change (to replace git.openstack.org with opendev.org and review.openstack.org with review.opendev.org), we can additionally take the opportunity to reorganize projects into different organizations. For example, during the transition we will rename Zuul, and it's associated projects, from the "openstack-infra" org to "zuul". So their new names will be "zuul/zuul", "zuul/nodepool", etc. This is an excellent time for the rest of the OpenStack Foundation pilot projects to do the same. If the OpenStack project desires this, it would also be a good time to move unofficial projects out of the openstack/ namespace. Therefore, we need your help: Action Items ============ We need each of the following projects: * OpenStack * Airship * StarlingX * Zuul To nominate a single point of contact to work with us on the transition. It would be helpful for that person to attend the next (and possibly next several) openstack infra team meetings in IRC [3]. We will work with those people on scheduling the transition, as well as finalizing the list of projects which should be renamed as part of the transition. If you manage an unofficial project and would like to take the opportunity to move or rename your project, please add it to this ethercalc[4]. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-November/136403.html [2] http://specs.openstack.org/openstack-infra/infra-specs/specs/opendev-gerrit.html [3] http://eavesdrop.openstack.org/#Project_Infrastructure_Team_Meeting [4] https://ethercalc.openstack.org/opendev-transition From tdecacqu at redhat.com Fri Mar 8 03:34:48 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 08 Mar 2019 12:34:48 +0900 Subject: Proposal to create zuul-operator project In-Reply-To: <87d0n2zp3b.fsf@meyer.lemoncheese.net> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> <20190307141347.GA20461@localhost.localdomain> <87d0n2zp3b.fsf@meyer.lemoncheese.net> Message-ID: <87o96mqbs7.tristanC@fedora> On Thu, Mar 07, 2019 at 07:22 James E. Blair wrote: > Paul Belanger writes: > >> On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote: >>> >>> Ideally, we would grow a hierarchy of operators to manage the gating >>> system, and then you could have an opendev-operator to compose: >>> * log-processor-operator >>> * elastic-search-operator >>> * paste-operator, etherpad-operator, ... >>> * gerrit-operator >>> >> I think these are fine to have, but I am on the fence if they are under >> the scope of what zuul should do. As an example, I can totally see >> opendev.org wanting doing this. As long as everything is all >> disconnected from zuul-operator, I would be okay. Personally, I don't >> want zuul to managed a gerrit. > > I'm not on the fence about that. :) > > If we take Tristan's comment as describing a Zuul operator's place in > potentially one of many larger systems, that's great. It's good to > think about how what we create might be used. > > But here on this mailing list, this team of folks is focused on the Zuul > project, which is the components here: https://git.zuul-ci.org/cgit > > So it's fine to suggest that perhaps this Zuul operator might be part of > a larger OpenDev operator project, but that decision would be made by > the OpenDev team. > Yes, I meant an opendev-operator composing a gerrit-operator and zuul-operator, not the other way around. 
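For what it's worth, a sketch of what that composition could look like - every group, kind, and field name below is hypothetical, the point is only the layering of a higher-level operator on top of a zuul-operator:

```
# Hypothetical umbrella resource reconciled by an opendev- or
# software-factory-operator; none of these CRDs exist today. The
# higher-level operator would create a child Zuul resource (and a Gerrit
# one, etc.) and let each service's own operator reconcile it.
apiVersion: forge.example.org/v1alpha1
kind: DevelopmentForge
metadata:
  name: example
spec:
  gerrit: {}        # handled by a gerrit-operator
  zuul:             # handled by the zuul-operator proposed in this thread
    executors: 2
    mergers: 1
```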
> If folks are interested in contributing to such a project, it would be > worth bringing it up on the openstack-infra mailing list. > >>> We also started to investigate operators for Software Factory component >>> and perhaps we could join force? > > Please do. :) > If we are going to rewrite configuration management in the form of kubernetes operators, then perhaps we could accommodate the difference between Software Factory and opendev and share the implementation. Well I jumped ahead and this is probably too early to discuss an opendev-operator (and not the right mailing list), but my suggestion is that a zuul-operator should provide an abstraction that can be used efficiently by a higher level operator. Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tdecacqu at redhat.com Fri Mar 8 06:03:00 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 08 Mar 2019 15:03:00 +0900 Subject: Proposal to create zuul-operator project In-Reply-To: <20190307134710.rdowyyqkbrcvf5hg@yuggoth.org> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> <20190307134710.rdowyyqkbrcvf5hg@yuggoth.org> Message-ID: <87k1h9rjhn.tristanC@fedora> On Thu, Mar 07, 2019 at 13:47 Jeremy Stanley wrote: > On 2019-03-07 11:32:39 +0900 (+0900), Tristan Cacqueray wrote: > [...] >> IIUC, we would need to wait for a fix to land in the >> debian-stretch image, > > We could just wait for the fix to land in Debian itself (they're > going to be part of the coordinated disclosure of most of the sort > of vulnerabilities you're talking about so should be available at > basically the same time as it's announced by researchers) and then > rebuild those images ourselves if that step is taking too long... or > get involved with the community who maintains them and help out? > >> then wait for a new opendevorg/python-builder image and finally >> wait for a new zuul image to be published. > > Luckily OpenDev and Zuul image building is managed in the same > system and maintained by mostly the same people so I wouldn't > anticipate much lag there. But also, this entire stack of images is > buildable by downstream consumers who want to do that themselves > instead of waiting for us. > IIUC, the Zuul image is updated when a change is merged in the Zuul repository, so it seems like we are missing a periodic rebuild of the Zuul image, especially for the tagged releases. I don't know how the opendevorg/python-builder gets updated. My question was more about what are the concrete actions we need to perform to update one of the layers managed by opendev. Because from an end-user perspective, what used to be a host update and service restart is now replaced by pulling a brand new debian+opendev+zuul image. Monty mentioned in the other thread that rebuilding the Zuul image should update the debian packages too. >> Or what happen when the zuul image gets published with an >> incompatible openstacksdk requirement? > [...] > > I don't see where the choice of underlying distribution comes into > play here. We'd explicitly choose which version of OpenStackSDK to > install, so if it's incompatible then that's on us, right? > That's correct, the underlying distribution won't prevent pip from pulling unwanted bits.
But this is a new issue/behavior with these layered builds because you can't prevent the Zuul layer from pulling new bits when all you want to do is update the base layer (at least with the current Dockerfile and requirements.txt file from the Zuul repository). These concerns are already adressed by Monty suggestion tu support local builds of the image. I just meant to discuss a couple of scenarios and how its going to affect user of the Zuul image provided by opendev. Regards, -Tristan > -- > Jeremy Stanley > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tdecacqu at redhat.com Fri Mar 8 06:36:50 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 08 Mar 2019 15:36:50 +0900 Subject: Proposal to create zuul-operator project In-Reply-To: References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> <20190307141347.GA20461@localhost.localdomain> Message-ID: <87imwtrhx9.tristanC@fedora> On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote: > On 3/7/19 2:13 PM, Paul Belanger wrote: >> On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote: >>> >>> On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote: >>>> On 3/6/19 6:12 PM, Paul Belanger wrote: >>>>> On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: >>>>>> Hey everybody, >>>>>> >>>>>> It seems that kubernetes operators are all the rage these days, and in the >>>>>> last 2 weeks I've had multiple conversations that have all come back around >>>>>> to someone wanting a zuul operator to exist. >>>>>> >>> >>> Hello, >>> >>> Thank you for starting this thread, It seems like a great oportunity to >>> converge the different deployment and operation tools. >>> >>> Ideally, we would grow a hierarchy of operators to manage the gating >>> system, and then you could have an opendev-operator to compose: >>> * log-processor-operator >>> * elastic-search-operator >>> * paste-operator, etherpad-operator, ... >>> * gerrit-operator >>> >> I think these are fine to have, but I am on the fence if they are under >> the scope of what zuul should do. As an example, I can totally see >> opendev.org wanting doing this. As long as everything is all >> disconnected from zuul-operator, I would be okay. Personally, I don't >> want zuul to managed a gerrit. > > I agree. > > This is definitely an interesting thought - although I'm not sure we're > all the way there from the opendev POV yet. As I learn more about > operators I could certainly see an opportunity to write one for gitea > once the indexing issue is fixed and we can go back to k8s for it. That > said - I think opendev operators are out of scope for the zuul team - so > we may want to chat about that in a slightly different context. (If the > zuul operator goes well and we find it pleasing I could see that > motivating doing more of them) > >>> We also started to investigate operators for Software Factory component >>> and perhaps we could join force? > > We should absolutely always join forces. 
:) > Back in the early days of Software Factory we tries to integrate the openstack-infra manifest, but it was such a tedious task to make them work on CentOS outside of the openstack.org domain that we ended up writing our own manifests. Putting the other opendev components aside (e.g. logservices, code hosting, ...), I think this is a great opportunity to converge the existing Zuul deployment system and come up with a common set of ops tasks for example to backup and restart the services. Looking at the awesome-operators page, there are 2 ZooKeeper operators and 5 for MySQL... It would be really nice if we are able to settle on a single zuul-operator implementation that satisfies everyone needs. >>> >>>>>> SO ... >>>>>> >>>>>> I would like to propose that we create a zuul-operator project as part of >>>>>> Zuul. >>>>>> >>>>>> Given our close ties with Ansible, I think we should use ansible-operator >>>>>> [0] to write it. >>>>>> >>>>>> Since we have zuul in production on both vanilla k8s and on OpenShift, I >>>>>> think we should set up gate jobs to make sure it works on both. Once we're >>>>>> happy with it - we should publish it to operatorhub. [1] >>>>>> >>>>>> Does anyone object? It not, I'll get a repo started. >>>>>> >>>>> I think it is a fine idea, I've also seen some talk about it recently >>>>> too. >>>>> >>>>> I do think we should have some consensus on the bits we want to support >>>>> for the deployment. Maybe going as far as saying it is an opinionated >>>>> deployment, for better or worse. As an example, graphite / statsd is >>>>> that in play or not? >>>> >>>> I think that's a great question. >>>> >>>> I do believe it should be an opinionated deployment - but there are >>>> probably places where people will want to make some choices from a >>>> specific set of choices. Like - by default I think it should spin up a >>>> percona-xtradb cluster as provided by the percona operator. BUT - I >>>> could see a setting in teh main Zuul CRD like "use-existing-db" that >>>> takes a sqlalchemy url. That way if there is someone who has an existing >>>> db infrastructure, they could take advantage of it - but the Zuul >>>> operator gets you a working system out of the box. >>>> >>>> Obviously we'll need to spin up a zookeeper. Not sure if there is an >>>> operator for that yet - we might want to make one or work with someone >>>> to make one. >>>> >>> >>> There are actually 2 operators for ZooKeeper already listed in this page: >>> https://github.com/operator-framework/awesome-operators > > WOOT! > >>> >>>> I could see a graphite/statsd option with the ability to point to an >>>> existing graphite. Maybe we just start with CRD field that indicates the >>>> stats_host like we have now? >>>> >>>> But - yes - opinionated, mostly batteries included - maybe some amount >>>> of optional things. >>>> >>> >>> Should we discuss this with a spec? For example: > > Yes - probably so - because there's lots of design points (as you start > to raise here) I'll start one today and we can iterator on it together. > >>> How are we going to configure the connections and the tenants' >>> configuration? > > It's a great question. My first hunch is that both the connections > config and the tenant config should just be parameters to the Zuul CRD, > and then the operator could take that data, transform it into the right > file formats and restart zuul or run zuul-reconfigure as needed. > > It's also feasible we could do it as referenced ConfigMap objects. 
It's > also conceivable that maybe we just want connections to be parameters to > the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just > have the operator concatenate them appropriately. That last option > starts to worry me though - starting to feel a bit liek the puppet > modules that wanted to re-implement the config language of each thing > (like apache) in the Puppet DSL - and how it turns out we wind up > preferring the ansible "just jinja template the file itself" approach more. > > But I honestly don't know which we'll liek more. I think the EASIEST > thing for a simple first start would be having each content just be > parameters to the Zuul CRD - but we could probably play with the other > things and see what we like. Or maybe there is a different idea. > Everything in the Zuul CRD sounds like the most straightforward. Moreover, the reconcile logic can take care of the differents action depending on what changed. The CRD I started to work on looked like this: ``` apiVersion: v1 kind: ZuulCluster metadata: name: test-cluster spec: merger: instances: 1 executor: instances: 2 zookeeperHosts: - hostname:port connections: - name: git.zuul-ci.org driver: git baseurl: https://git.zuul-ci.org - name: sqlreporter driver: sql dburi: mysql+pymysql://... - name: gerrit driver: gerrit username: ... tenants: - name: local source: gerrit: config-projects: - config git.zuul-ci.org: untrusted-projects: - zuul-jobs ``` Then I was planning on adding a similar NodepoolCluster and perhaps a ZuulApplication to glue both cluster and manage zookeeper, rdbms and statsd if missing. >>> Software Factory batteries include a zuul-reconfigure job to manage the >>> main.yaml file. Should the zuul-operator setup such job, or perhaps >>> should we also need a zuul-quick-start-operator? > I don't think the zuul operator should set up any jobs, because zuul > jobs come from git repos - and the zuul operator would have to learn > about all of the different types of git backends for it to be able to > push commits into one - and the operator would have to have git account > access. > > That seems like a feature that a software factory operator (or an > opendev operator) would be in a much better position to provide - since > it's reasonable for those operators to know more things about the > surrounding git infrastructure and have accounts for it. > > A zuul-reconfigure job might be a nice addition to zuul-jobs though. And > maybe there would need to be a zuul-operator-reconfigure if applying > main.yaml changes involves pushing a new Zuul CRD manifest via kubectl > to get the operator to update the file and run reconfigure. (take this > main.yaml file, slurp it into the manifest, run kubectl) > That seems out of scope indeed, though later down the road, it seems like a rather common pattern to have the tenant configuration in a git repository that would reconfigure zuul when a change is merged. >>> >>>>> Also, could we maybe say a single distro for all container things? I >>>>> guess by default, that is what the current containers by zuul project is >>>>> running today. I don't much have the energy to get into the weeds of pip >>>>> / dpkg / rpm for how things maybe get installed (if needed). >>>> >>>> Yes. All of our container images as we are publishing them now are >>>> awesome, and are what I expect us to use. >>>> >>>> I am categorically opposed to us making and publishing different images. 
>>>> One of the nice things about publishing container images is that you can >>>> do them once and do them well. We're using the python base image. It >>>> happens to be based on debian-strech - but that's irrelevant. It's >>>> running python. >>>> >>> >>> That sounds great, but how are we going to cope with the next heartbleed >>> or shellshock? IIUC, we would need to wait for a fix to land in the >>> debian-stretch image, then wait for a new opendevorg/python-builder >>> image and finally wait for a new zuul image to be published. > > Jeremy also responded to this - so this is basically me saying "I agree > with fungi" in more words. > > We have to do that no matter what the base image is. But we don't > actually have to do that. The zuul Dockerfile already does the apt-get > update / apt-get install. If needed we could toss in a dist-upgrade if > the fix has hit the repo but not the base image so that we don't have to > wait for the stack. > > Supporting multiple in-image distros doesn't improve that. In fact, it > increases the work for upstream because now we have to do that exact > process you described, but we have to do it for each distro we're > building images with. > Agreed, there is no point in supporting multiple distros image. >>> Or what happen when the zuul image gets published with an incompatible >>> openstacksdk requirement? >>> >>> Thus I think the base image and how the zuul bits gets bundled are >>> relevant. For example OpenShift users may want to use BuildConfig >>> instead so that a parent layer update automatically trigger childs' >>> layer rebuild. > > Sure - but the zuul images are built with standard Dockerfiles - so > there should be nothing stopping people from rebuilding them locally > however they want. > >> Yah, this is what I was hoping we'd discuss more when we dove into the >> weeds. Can a BuildConfig work with k8s? Or does some other k8s items >> work with openshift? If not, I think it is fine if we only suppoert one >> and not try to do everything. Which one we support, I don't really know, >> hopefully others can say which one is better :) We can also support >> both, but I feel that is a lot of work for minimal gain. > I think it's a strange idea, when you have a Zuul, to want to use a > different build system to build zuul images, since you would not be able > to CI those images as effectively as if you use the zuul image building > jobs. The only good reason for local rebuilds is if you're carrying > local changes. Another good reason to manage the image is for compliance and reproducability. Using another base layer may be easier to audit and update, and users may also want to build their image with curated registries for things like pip or npm to prevent supply chain attack. > If you are carrying local changes, one would imagine you > would want to be able to submit code reviews to a git system to manage > those changes, and you would want those code reviews to get gated, and > you would want to publish the images that result in the gate passing. If > you have all of that set up in zuul to test and publish images from your > local changes, it standa to reason you'd want zuul to also just do an > image rebuild on your local branch in the scenarios you metion, and > using a BuildConfig would not really help you do that at all. **BUT** if > that's what you want to do, I mean, nothing should be stopping you. > My main concern is about the base layer update. BuildConfig can take care of rebuilding your final image when needed. 
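For readers who have not used OpenShift builds, the pattern being referred to is roughly the following - a minimal, untested sketch with illustrative names, assuming the base image is imported as an ImageStream in the cluster:

```
# Sketch: rebuild a locally-built zuul image whenever the tracked base
# image changes, e.g. after a security fix lands in the python image.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: zuul
spec:
  source:
    git:
      uri: https://git.zuul-ci.org/zuul   # or a local fork carrying patches
  strategy:
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.7-slim
  output:
    to:
      kind: ImageStreamTag
      name: zuul:latest
  triggers:
    - type: ImageChange
      imageChange: {}
```

When the tracked python ImageStreamTag is updated, the ImageChange trigger starts a new build without any action on the Zuul side.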
You may use Zuul to promote the resulting image before deploying it, but the point is, base layer updates may happen out of your control, e.g. after a bug is discovered in openssl for example. > It certainly doesn't mean we need to support multi-distro inside the > image content we publish as an upstream. I'm pretty sure an OpenShift > BuildConfig can build an image based on python:slim as well as anything > else. > I didn't mean to derailed the thread to this image building topic which is not actually relevant to the zuul-operator. Sorry about that :) I mean, as long as the zuul-operator can use an optional custom image. Regads, -Tristan >>>> I DO think we'll likely need to support specifying an image location in >>>> the CRD. (rook does this for ceph too) That's because I'd like for >>>> Tobias to be able to use the operator if he chooses, but he uses local >>>> builds of the images. And I think that's fine. >>>> >>>>> I am likely showing how much I don't know about k8s / openshift here, >>>>> but wanting to limit the amount of different ways to do a deployment >>>>> based on underlying distro bits. >>>> >>>> +10000000000 >>>> >>>> I believe there is no value in producing divergent images. >>>> >>> >> >> >> >>> _______________________________________________ >>> Zuul-discuss mailing list >>> Zuul-discuss at lists.zuul-ci.org >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From pabelanger at redhat.com Fri Mar 8 13:54:41 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Fri, 8 Mar 2019 08:54:41 -0500 Subject: Proposal to create zuul-operator project In-Reply-To: <87imwtrhx9.tristanC@fedora> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> <20190307141347.GA20461@localhost.localdomain> <87imwtrhx9.tristanC@fedora> Message-ID: <20190308135441.GB5240@localhost.localdomain> On Fri, Mar 08, 2019 at 03:36:50PM +0900, Tristan Cacqueray wrote: > > On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote: > > On 3/7/19 2:13 PM, Paul Belanger wrote: > >> On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote: > >>> > >>> On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote: > >>>> On 3/6/19 6:12 PM, Paul Belanger wrote: > >>>>> On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: > >>>>>> Hey everybody, > >>>>>> > >>>>>> It seems that kubernetes operators are all the rage these days, and in the > >>>>>> last 2 weeks I've had multiple conversations that have all come back around > >>>>>> to someone wanting a zuul operator to exist. > >>>>>> > >>> > >>> Hello, > >>> > >>> Thank you for starting this thread, It seems like a great oportunity to > >>> converge the different deployment and operation tools. 
> >>> > >>> Ideally, we would grow a hierarchy of operators to manage the gating > >>> system, and then you could have an opendev-operator to compose: > >>> * log-processor-operator > >>> * elastic-search-operator > >>> * paste-operator, etherpad-operator, ... > >>> * gerrit-operator > >>> > >> I think these are fine to have, but I am on the fence if they are under > >> the scope of what zuul should do. As an example, I can totally see > >> opendev.org wanting doing this. As long as everything is all > >> disconnected from zuul-operator, I would be okay. Personally, I don't > >> want zuul to managed a gerrit. > > > > I agree. > > > > This is definitely an interesting thought - although I'm not sure we're > > all the way there from the opendev POV yet. As I learn more about > > operators I could certainly see an opportunity to write one for gitea > > once the indexing issue is fixed and we can go back to k8s for it. That > > said - I think opendev operators are out of scope for the zuul team - so > > we may want to chat about that in a slightly different context. (If the > > zuul operator goes well and we find it pleasing I could see that > > motivating doing more of them) > > > >>> We also started to investigate operators for Software Factory component > >>> and perhaps we could join force? > > > > We should absolutely always join forces. :) > > > > Back in the early days of Software Factory we tries to integrate the > openstack-infra manifest, but it was such a tedious task to make them > work on CentOS outside of the openstack.org domain that we ended up > writing our own manifests. > > Putting the other opendev components aside (e.g. logservices, code > hosting, ...), I think this is a great opportunity to converge the > existing Zuul deployment system and come up with a common set of ops > tasks for example to backup and restart the services. > My first (personal) reaction, is that is too much scope for a zuul-operator, as a zuul operator I don't really want to sign up for managing a gerrit / log service / code hosting / etc. This is what opendev.org is, and I think it is a great thing to provide, but I believe is should live outside of zuul-operator by default. If there is some other opendev-operator that lives a top of zuul-operator, I think I am okay with this. The main reason I say this, is look at the history of puppet-openstackci. In my eyes, it was almost what you mentioned here. A shared layer to manage an openstack-infra like environment. Things like centos support could have been added into puppet-openstackci, in fact i think there are patches on gerrit for this, but it was very difficult to get them merged for reasons. We are always going to have different deployment opinions, which is fine to have, I'd just not like to have a single operator, with tons of feature flags, to toggle what people want. I think this is what other threads are suggestion, have opendev-operator (logstash, code review. etc) depend on zuul-operator (nodepool / zuul / zookeeper), but wanted to get it down again. > Looking at the awesome-operators page, there are 2 ZooKeeper operators > and 5 for MySQL... It would be really nice if we are able to settle > on a single zuul-operator implementation that satisfies everyone needs. > > > >>> > >>>>>> SO ... > >>>>>> > >>>>>> I would like to propose that we create a zuul-operator project as part of > >>>>>> Zuul. > >>>>>> > >>>>>> Given our close ties with Ansible, I think we should use ansible-operator > >>>>>> [0] to write it. 
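For context on what the ansible-operator approach means concretely: such an operator is mostly a watches.yaml mapping each CRD to an ordinary Ansible role or playbook that performs the reconciliation. A minimal sketch, with an illustrative API group, kind and path:

```
# Sketch of an ansible-operator watches.yaml; group, version, kind and
# role path are placeholders for whatever the zuul-operator settles on.
- version: v1alpha1
  group: zuul-ci.org
  kind: ZuulCluster
  role: /opt/ansible/roles/zuulcluster
```

The role would then do the familiar work - template zuul.conf and main.yaml from the CR spec, and restart or reconfigure the services - which is the design discussion that follows in this thread.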
> >>>>>> > >>>>>> Since we have zuul in production on both vanilla k8s and on OpenShift, I > >>>>>> think we should set up gate jobs to make sure it works on both. Once we're > >>>>>> happy with it - we should publish it to operatorhub. [1] > >>>>>> > >>>>>> Does anyone object? It not, I'll get a repo started. > >>>>>> > >>>>> I think it is a fine idea, I've also seen some talk about it recently > >>>>> too. > >>>>> > >>>>> I do think we should have some consensus on the bits we want to support > >>>>> for the deployment. Maybe going as far as saying it is an opinionated > >>>>> deployment, for better or worse. As an example, graphite / statsd is > >>>>> that in play or not? > >>>> > >>>> I think that's a great question. > >>>> > >>>> I do believe it should be an opinionated deployment - but there are > >>>> probably places where people will want to make some choices from a > >>>> specific set of choices. Like - by default I think it should spin up a > >>>> percona-xtradb cluster as provided by the percona operator. BUT - I > >>>> could see a setting in teh main Zuul CRD like "use-existing-db" that > >>>> takes a sqlalchemy url. That way if there is someone who has an existing > >>>> db infrastructure, they could take advantage of it - but the Zuul > >>>> operator gets you a working system out of the box. > >>>> > >>>> Obviously we'll need to spin up a zookeeper. Not sure if there is an > >>>> operator for that yet - we might want to make one or work with someone > >>>> to make one. > >>>> > >>> > >>> There are actually 2 operators for ZooKeeper already listed in this page: > >>> https://github.com/operator-framework/awesome-operators > > > > WOOT! > > > >>> > >>>> I could see a graphite/statsd option with the ability to point to an > >>>> existing graphite. Maybe we just start with CRD field that indicates the > >>>> stats_host like we have now? > >>>> > >>>> But - yes - opinionated, mostly batteries included - maybe some amount > >>>> of optional things. > >>>> > >>> > >>> Should we discuss this with a spec? For example: > > > > Yes - probably so - because there's lots of design points (as you start > > to raise here) I'll start one today and we can iterator on it together. > > > >>> How are we going to configure the connections and the tenants' > >>> configuration? > > > > It's a great question. My first hunch is that both the connections > > config and the tenant config should just be parameters to the Zuul CRD, > > and then the operator could take that data, transform it into the right > > file formats and restart zuul or run zuul-reconfigure as needed. > > > > It's also feasible we could do it as referenced ConfigMap objects. It's > > also conceivable that maybe we just want connections to be parameters to > > the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just > > have the operator concatenate them appropriately. That last option > > starts to worry me though - starting to feel a bit liek the puppet > > modules that wanted to re-implement the config language of each thing > > (like apache) in the Puppet DSL - and how it turns out we wind up > > preferring the ansible "just jinja template the file itself" approach more. > > > > But I honestly don't know which we'll liek more. I think the EASIEST > > thing for a simple first start would be having each content just be > > parameters to the Zuul CRD - but we could probably play with the other > > things and see what we like. Or maybe there is a different idea. 
> > > > Everything in the Zuul CRD sounds like the most straightforward. > Moreover, the reconcile logic can take care of the differents action > depending on what changed. > > The CRD I started to work on looked like this: > ``` > apiVersion: v1 > kind: ZuulCluster > metadata: > name: test-cluster > spec: > merger: > instances: 1 > executor: > instances: 2 > zookeeperHosts: > - hostname:port > connections: > - name: git.zuul-ci.org > driver: git > baseurl: https://git.zuul-ci.org > - name: sqlreporter > driver: sql > dburi: mysql+pymysql://... > - name: gerrit > driver: gerrit > username: ... > tenants: > - name: local > source: > gerrit: > config-projects: > - config > git.zuul-ci.org: > untrusted-projects: > - zuul-jobs > ``` > Then I was planning on adding a similar NodepoolCluster and perhaps a > ZuulApplication to glue both cluster and manage zookeeper, rdbms and > statsd if missing. > > >>> Software Factory batteries include a zuul-reconfigure job to manage the > >>> main.yaml file. Should the zuul-operator setup such job, or perhaps > >>> should we also need a zuul-quick-start-operator? > > I don't think the zuul operator should set up any jobs, because zuul > > jobs come from git repos - and the zuul operator would have to learn > > about all of the different types of git backends for it to be able to > > push commits into one - and the operator would have to have git account > > access. > > > > That seems like a feature that a software factory operator (or an > > opendev operator) would be in a much better position to provide - since > > it's reasonable for those operators to know more things about the > > surrounding git infrastructure and have accounts for it. > > > > A zuul-reconfigure job might be a nice addition to zuul-jobs though. And > > maybe there would need to be a zuul-operator-reconfigure if applying > > main.yaml changes involves pushing a new Zuul CRD manifest via kubectl > > to get the operator to update the file and run reconfigure. (take this > > main.yaml file, slurp it into the manifest, run kubectl) > > > > That seems out of scope indeed, though later down the road, it seems > like a rather common pattern to have the tenant configuration in a git > repository that would reconfigure zuul when a change is merged. > > >>> > >>>>> Also, could we maybe say a single distro for all container things? I > >>>>> guess by default, that is what the current containers by zuul project is > >>>>> running today. I don't much have the energy to get into the weeds of pip > >>>>> / dpkg / rpm for how things maybe get installed (if needed). > >>>> > >>>> Yes. All of our container images as we are publishing them now are > >>>> awesome, and are what I expect us to use. > >>>> > >>>> I am categorically opposed to us making and publishing different images. > >>>> One of the nice things about publishing container images is that you can > >>>> do them once and do them well. We're using the python base image. It > >>>> happens to be based on debian-strech - but that's irrelevant. It's > >>>> running python. > >>>> > >>> > >>> That sounds great, but how are we going to cope with the next heartbleed > >>> or shellshock? IIUC, we would need to wait for a fix to land in the > >>> debian-stretch image, then wait for a new opendevorg/python-builder > >>> image and finally wait for a new zuul image to be published. > > > > Jeremy also responded to this - so this is basically me saying "I agree > > with fungi" in more words. > > > > We have to do that no matter what the base image is. 
But we don't > > actually have to do that. The zuul Dockerfile already does the apt-get > > update / apt-get install. If needed we could toss in a dist-upgrade if > > the fix has hit the repo but not the base image so that we don't have to > > wait for the stack. > > > > Supporting multiple in-image distros doesn't improve that. In fact, it > > increases the work for upstream because now we have to do that exact > > process you described, but we have to do it for each distro we're > > building images with. > > > > Agreed, there is no point in supporting multiple distros image. > > >>> Or what happen when the zuul image gets published with an incompatible > >>> openstacksdk requirement? > >>> > >>> Thus I think the base image and how the zuul bits gets bundled are > >>> relevant. For example OpenShift users may want to use BuildConfig > >>> instead so that a parent layer update automatically trigger childs' > >>> layer rebuild. > > > > Sure - but the zuul images are built with standard Dockerfiles - so > > there should be nothing stopping people from rebuilding them locally > > however they want. > > > >> Yah, this is what I was hoping we'd discuss more when we dove into the > >> weeds. Can a BuildConfig work with k8s? Or does some other k8s items > >> work with openshift? If not, I think it is fine if we only suppoert one > >> and not try to do everything. Which one we support, I don't really know, > >> hopefully others can say which one is better :) We can also support > >> both, but I feel that is a lot of work for minimal gain. > > I think it's a strange idea, when you have a Zuul, to want to use a > > different build system to build zuul images, since you would not be able > > to CI those images as effectively as if you use the zuul image building > > jobs. The only good reason for local rebuilds is if you're carrying > > local changes. > > Another good reason to manage the image is for compliance and > reproducability. Using another base layer may be easier to audit and > update, and users may also want to build their image with curated > registries for things like pip or npm to prevent supply chain attack. > > > > If you are carrying local changes, one would imagine you > > would want to be able to submit code reviews to a git system to manage > > those changes, and you would want those code reviews to get gated, and > > you would want to publish the images that result in the gate passing. If > > you have all of that set up in zuul to test and publish images from your > > local changes, it standa to reason you'd want zuul to also just do an > > image rebuild on your local branch in the scenarios you metion, and > > using a BuildConfig would not really help you do that at all. **BUT** if > > that's what you want to do, I mean, nothing should be stopping you. > > > > My main concern is about the base layer update. BuildConfig can take > care of rebuilding your final image when needed. You may use Zuul to > promote the resulting image before deploying it, but the point is, base > layer updates may happen out of your control, e.g. after a bug is > discovered in openssl for example. > > > > It certainly doesn't mean we need to support multi-distro inside the > > image content we publish as an upstream. I'm pretty sure an OpenShift > > BuildConfig can build an image based on python:slim as well as anything > > else. > > > > I didn't mean to derailed the thread to this image building topic which > is not actually relevant to the zuul-operator. 
Sorry about that :) > I mean, as long as the zuul-operator can use an optional custom image. > Yesterday I had a chance to boot the zuul/zuul-merger image in openshift, and not surprising that seems to work a little different then say a local docker. Openshift wasn't happy that the user inside the container was running as root, seems it really wants a non-root user inside. So, I do think it we might want to make all our images work as well as possible in each thing (docker, podman, k8s, openshift, etc) but honestly, I'm not really sure what the work looks like. In this case, I think it is just a matter of creating a zuul user inside the container and starting zuul / nodepool with it. > Regads, > -Tristan > > >>>> I DO think we'll likely need to support specifying an image location in > >>>> the CRD. (rook does this for ceph too) That's because I'd like for > >>>> Tobias to be able to use the operator if he chooses, but he uses local > >>>> builds of the images. And I think that's fine. > >>>> > >>>>> I am likely showing how much I don't know about k8s / openshift here, > >>>>> but wanting to limit the amount of different ways to do a deployment > >>>>> based on underlying distro bits. > >>>> > >>>> +10000000000 > >>>> > >>>> I believe there is no value in producing divergent images. > >>>> > >>> > >> > >> > >> > >>> _______________________________________________ > >>> Zuul-discuss mailing list > >>> Zuul-discuss at lists.zuul-ci.org > >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > >> > >> > >> _______________________________________________ > >> Zuul-discuss mailing list > >> Zuul-discuss at lists.zuul-ci.org > >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > >> > > > > _______________________________________________ > > Zuul-discuss mailing list > > Zuul-discuss at lists.zuul-ci.org > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From pabelanger at redhat.com Fri Mar 8 16:35:58 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Fri, 8 Mar 2019 11:35:58 -0500 Subject: Proposal to create zuul-operator project In-Reply-To: <20190308135441.GB5240@localhost.localdomain> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> <20190307141347.GA20461@localhost.localdomain> <87imwtrhx9.tristanC@fedora> <20190308135441.GB5240@localhost.localdomain> Message-ID: <20190308163558.GA11319@localhost.localdomain> On Fri, Mar 08, 2019 at 08:54:41AM -0500, Paul Belanger wrote: > On Fri, Mar 08, 2019 at 03:36:50PM +0900, Tristan Cacqueray wrote: > > > > On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote: > > > On 3/7/19 2:13 PM, Paul Belanger wrote: > > >> On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote: > > >>> > > >>> On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote: > > >>>> On 3/6/19 6:12 PM, Paul Belanger wrote: > > >>>>> On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: > > >>>>>> Hey everybody, > > >>>>>> > > >>>>>> It seems that kubernetes operators are all the rage these days, and in the > > >>>>>> last 2 weeks I've had multiple conversations that have all come back around > > >>>>>> to someone wanting a zuul operator to exist. 
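On the non-root observation above: the usual pattern is for the image to create an unprivileged user and for the pod spec to request it explicitly - a sketch, with arbitrary placeholder UID/GID values:

```
# Pod-level securityContext sketch for the zuul containers. OpenShift's
# restricted SCC injects an arbitrary non-root UID anyway, so the image
# must not assume root or a fixed user; plain Kubernetes can ask for the
# same behaviour explicitly like this.
securityContext:
  runAsNonRoot: true
  runAsUser: 10001
  fsGroup: 10001
```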
> > >>>>>> > > >>> > > >>> Hello, > > >>> > > >>> Thank you for starting this thread, It seems like a great oportunity to > > >>> converge the different deployment and operation tools. > > >>> > > >>> Ideally, we would grow a hierarchy of operators to manage the gating > > >>> system, and then you could have an opendev-operator to compose: > > >>> * log-processor-operator > > >>> * elastic-search-operator > > >>> * paste-operator, etherpad-operator, ... > > >>> * gerrit-operator > > >>> > > >> I think these are fine to have, but I am on the fence if they are under > > >> the scope of what zuul should do. As an example, I can totally see > > >> opendev.org wanting doing this. As long as everything is all > > >> disconnected from zuul-operator, I would be okay. Personally, I don't > > >> want zuul to managed a gerrit. > > > > > > I agree. > > > > > > This is definitely an interesting thought - although I'm not sure we're > > > all the way there from the opendev POV yet. As I learn more about > > > operators I could certainly see an opportunity to write one for gitea > > > once the indexing issue is fixed and we can go back to k8s for it. That > > > said - I think opendev operators are out of scope for the zuul team - so > > > we may want to chat about that in a slightly different context. (If the > > > zuul operator goes well and we find it pleasing I could see that > > > motivating doing more of them) > > > > > >>> We also started to investigate operators for Software Factory component > > >>> and perhaps we could join force? > > > > > > We should absolutely always join forces. :) > > > > > > > Back in the early days of Software Factory we tries to integrate the > > openstack-infra manifest, but it was such a tedious task to make them > > work on CentOS outside of the openstack.org domain that we ended up > > writing our own manifests. > > > > Putting the other opendev components aside (e.g. logservices, code > > hosting, ...), I think this is a great opportunity to converge the > > existing Zuul deployment system and come up with a common set of ops > > tasks for example to backup and restart the services. > > Now that I've had enough coffee and reread your original reply. I though you were suggesting logservices, code hosting alongside (you side aside) :) So my reply basically rehashed my original opinion for no real reason. Sorry for the noisy here. > My first (personal) reaction, is that is too much scope for a > zuul-operator, as a zuul operator I don't really want to sign up for > managing a gerrit / log service / code hosting / etc. This is what > opendev.org is, and I think it is a great thing to provide, but I > believe is should live outside of zuul-operator by default. If there is > some other opendev-operator that lives a top of zuul-operator, I think I > am okay with this. > > The main reason I say this, is look at the history of > puppet-openstackci. In my eyes, it was almost what you mentioned here. A > shared layer to manage an openstack-infra like environment. > > Things like centos support could have been added into puppet-openstackci, > in fact i think there are patches on gerrit for this, but it was very > difficult to get them merged for reasons. We are always going to have > different deployment opinions, which is fine to have, I'd just not like > to have a single operator, with tons of feature flags, to toggle what > people want. > > I think this is what other threads are suggestion, have opendev-operator > (logstash, code review. 
etc) depend on zuul-operator (nodepool / zuul / > zookeeper), but wanted to get it down again. > > > Looking at the awesome-operators page, there are 2 ZooKeeper operators > > and 5 for MySQL... It would be really nice if we are able to settle > > on a single zuul-operator implementation that satisfies everyone needs. > > > > > > >>> > > >>>>>> SO ... > > >>>>>> > > >>>>>> I would like to propose that we create a zuul-operator project as part of > > >>>>>> Zuul. > > >>>>>> > > >>>>>> Given our close ties with Ansible, I think we should use ansible-operator > > >>>>>> [0] to write it. > > >>>>>> > > >>>>>> Since we have zuul in production on both vanilla k8s and on OpenShift, I > > >>>>>> think we should set up gate jobs to make sure it works on both. Once we're > > >>>>>> happy with it - we should publish it to operatorhub. [1] > > >>>>>> > > >>>>>> Does anyone object? It not, I'll get a repo started. > > >>>>>> > > >>>>> I think it is a fine idea, I've also seen some talk about it recently > > >>>>> too. > > >>>>> > > >>>>> I do think we should have some consensus on the bits we want to support > > >>>>> for the deployment. Maybe going as far as saying it is an opinionated > > >>>>> deployment, for better or worse. As an example, graphite / statsd is > > >>>>> that in play or not? > > >>>> > > >>>> I think that's a great question. > > >>>> > > >>>> I do believe it should be an opinionated deployment - but there are > > >>>> probably places where people will want to make some choices from a > > >>>> specific set of choices. Like - by default I think it should spin up a > > >>>> percona-xtradb cluster as provided by the percona operator. BUT - I > > >>>> could see a setting in teh main Zuul CRD like "use-existing-db" that > > >>>> takes a sqlalchemy url. That way if there is someone who has an existing > > >>>> db infrastructure, they could take advantage of it - but the Zuul > > >>>> operator gets you a working system out of the box. > > >>>> > > >>>> Obviously we'll need to spin up a zookeeper. Not sure if there is an > > >>>> operator for that yet - we might want to make one or work with someone > > >>>> to make one. > > >>>> > > >>> > > >>> There are actually 2 operators for ZooKeeper already listed in this page: > > >>> https://github.com/operator-framework/awesome-operators > > > > > > WOOT! > > > > > >>> > > >>>> I could see a graphite/statsd option with the ability to point to an > > >>>> existing graphite. Maybe we just start with CRD field that indicates the > > >>>> stats_host like we have now? > > >>>> > > >>>> But - yes - opinionated, mostly batteries included - maybe some amount > > >>>> of optional things. > > >>>> > > >>> > > >>> Should we discuss this with a spec? For example: > > > > > > Yes - probably so - because there's lots of design points (as you start > > > to raise here) I'll start one today and we can iterator on it together. > > > > > >>> How are we going to configure the connections and the tenants' > > >>> configuration? > > > > > > It's a great question. My first hunch is that both the connections > > > config and the tenant config should just be parameters to the Zuul CRD, > > > and then the operator could take that data, transform it into the right > > > file formats and restart zuul or run zuul-reconfigure as needed. > > > > > > It's also feasible we could do it as referenced ConfigMap objects. 
It's > > > also conceivable that maybe we just want connections to be parameters to > > > the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just > > > have the operator concatenate them appropriately. That last option > > > starts to worry me though - starting to feel a bit liek the puppet > > > modules that wanted to re-implement the config language of each thing > > > (like apache) in the Puppet DSL - and how it turns out we wind up > > > preferring the ansible "just jinja template the file itself" approach more. > > > > > > But I honestly don't know which we'll liek more. I think the EASIEST > > > thing for a simple first start would be having each content just be > > > parameters to the Zuul CRD - but we could probably play with the other > > > things and see what we like. Or maybe there is a different idea. > > > > > > > Everything in the Zuul CRD sounds like the most straightforward. > > Moreover, the reconcile logic can take care of the differents action > > depending on what changed. > > > > The CRD I started to work on looked like this: > > ``` > > apiVersion: v1 > > kind: ZuulCluster > > metadata: > > name: test-cluster > > spec: > > merger: > > instances: 1 > > executor: > > instances: 2 > > zookeeperHosts: > > - hostname:port > > connections: > > - name: git.zuul-ci.org > > driver: git > > baseurl: https://git.zuul-ci.org > > - name: sqlreporter > > driver: sql > > dburi: mysql+pymysql://... > > - name: gerrit > > driver: gerrit > > username: ... > > tenants: > > - name: local > > source: > > gerrit: > > config-projects: > > - config > > git.zuul-ci.org: > > untrusted-projects: > > - zuul-jobs > > ``` > > Then I was planning on adding a similar NodepoolCluster and perhaps a > > ZuulApplication to glue both cluster and manage zookeeper, rdbms and > > statsd if missing. > > > > >>> Software Factory batteries include a zuul-reconfigure job to manage the > > >>> main.yaml file. Should the zuul-operator setup such job, or perhaps > > >>> should we also need a zuul-quick-start-operator? > > > I don't think the zuul operator should set up any jobs, because zuul > > > jobs come from git repos - and the zuul operator would have to learn > > > about all of the different types of git backends for it to be able to > > > push commits into one - and the operator would have to have git account > > > access. > > > > > > That seems like a feature that a software factory operator (or an > > > opendev operator) would be in a much better position to provide - since > > > it's reasonable for those operators to know more things about the > > > surrounding git infrastructure and have accounts for it. > > > > > > A zuul-reconfigure job might be a nice addition to zuul-jobs though. And > > > maybe there would need to be a zuul-operator-reconfigure if applying > > > main.yaml changes involves pushing a new Zuul CRD manifest via kubectl > > > to get the operator to update the file and run reconfigure. (take this > > > main.yaml file, slurp it into the manifest, run kubectl) > > > > > > > That seems out of scope indeed, though later down the road, it seems > > like a rather common pattern to have the tenant configuration in a git > > repository that would reconfigure zuul when a change is merged. > > > > >>> > > >>>>> Also, could we maybe say a single distro for all container things? I > > >>>>> guess by default, that is what the current containers by zuul project is > > >>>>> running today. 
I don't much have the energy to get into the weeds of pip > > >>>>> / dpkg / rpm for how things maybe get installed (if needed). > > >>>> > > >>>> Yes. All of our container images as we are publishing them now are > > >>>> awesome, and are what I expect us to use. > > >>>> > > >>>> I am categorically opposed to us making and publishing different images. > > >>>> One of the nice things about publishing container images is that you can > > >>>> do them once and do them well. We're using the python base image. It > > >>>> happens to be based on debian-strech - but that's irrelevant. It's > > >>>> running python. > > >>>> > > >>> > > >>> That sounds great, but how are we going to cope with the next heartbleed > > >>> or shellshock? IIUC, we would need to wait for a fix to land in the > > >>> debian-stretch image, then wait for a new opendevorg/python-builder > > >>> image and finally wait for a new zuul image to be published. > > > > > > Jeremy also responded to this - so this is basically me saying "I agree > > > with fungi" in more words. > > > > > > We have to do that no matter what the base image is. But we don't > > > actually have to do that. The zuul Dockerfile already does the apt-get > > > update / apt-get install. If needed we could toss in a dist-upgrade if > > > the fix has hit the repo but not the base image so that we don't have to > > > wait for the stack. > > > > > > Supporting multiple in-image distros doesn't improve that. In fact, it > > > increases the work for upstream because now we have to do that exact > > > process you described, but we have to do it for each distro we're > > > building images with. > > > > > > > Agreed, there is no point in supporting multiple distros image. > > > > >>> Or what happen when the zuul image gets published with an incompatible > > >>> openstacksdk requirement? > > >>> > > >>> Thus I think the base image and how the zuul bits gets bundled are > > >>> relevant. For example OpenShift users may want to use BuildConfig > > >>> instead so that a parent layer update automatically trigger childs' > > >>> layer rebuild. > > > > > > Sure - but the zuul images are built with standard Dockerfiles - so > > > there should be nothing stopping people from rebuilding them locally > > > however they want. > > > > > >> Yah, this is what I was hoping we'd discuss more when we dove into the > > >> weeds. Can a BuildConfig work with k8s? Or does some other k8s items > > >> work with openshift? If not, I think it is fine if we only suppoert one > > >> and not try to do everything. Which one we support, I don't really know, > > >> hopefully others can say which one is better :) We can also support > > >> both, but I feel that is a lot of work for minimal gain. > > > I think it's a strange idea, when you have a Zuul, to want to use a > > > different build system to build zuul images, since you would not be able > > > to CI those images as effectively as if you use the zuul image building > > > jobs. The only good reason for local rebuilds is if you're carrying > > > local changes. > > > > Another good reason to manage the image is for compliance and > > reproducability. Using another base layer may be easier to audit and > > update, and users may also want to build their image with curated > > registries for things like pip or npm to prevent supply chain attack. 
> > > > > > > If you are carrying local changes, one would imagine you > > > would want to be able to submit code reviews to a git system to manage > > > those changes, and you would want those code reviews to get gated, and > > > you would want to publish the images that result in the gate passing. If > > > you have all of that set up in zuul to test and publish images from your > > > local changes, it standa to reason you'd want zuul to also just do an > > > image rebuild on your local branch in the scenarios you metion, and > > > using a BuildConfig would not really help you do that at all. **BUT** if > > > that's what you want to do, I mean, nothing should be stopping you. > > > > > > > My main concern is about the base layer update. BuildConfig can take > > care of rebuilding your final image when needed. You may use Zuul to > > promote the resulting image before deploying it, but the point is, base > > layer updates may happen out of your control, e.g. after a bug is > > discovered in openssl for example. > > > > > > > It certainly doesn't mean we need to support multi-distro inside the > > > image content we publish as an upstream. I'm pretty sure an OpenShift > > > BuildConfig can build an image based on python:slim as well as anything > > > else. > > > > > > > I didn't mean to derailed the thread to this image building topic which > > is not actually relevant to the zuul-operator. Sorry about that :) > > I mean, as long as the zuul-operator can use an optional custom image. > > > Yesterday I had a chance to boot the zuul/zuul-merger image in > openshift, and not surprising that seems to work a little different then > say a local docker. Openshift wasn't happy that the user inside the > container was running as root, seems it really wants a non-root user > inside. So, I do think it we might want to make all our images work as > well as possible in each thing (docker, podman, k8s, openshift, etc) but > honestly, I'm not really sure what the work looks like. In this case, I > think it is just a matter of creating a zuul user inside the container > and starting zuul / nodepool with it. > > > Regads, > > -Tristan > > > > >>>> I DO think we'll likely need to support specifying an image location in > > >>>> the CRD. (rook does this for ceph too) That's because I'd like for > > >>>> Tobias to be able to use the operator if he chooses, but he uses local > > >>>> builds of the images. And I think that's fine. > > >>>> > > >>>>> I am likely showing how much I don't know about k8s / openshift here, > > >>>>> but wanting to limit the amount of different ways to do a deployment > > >>>>> based on underlying distro bits. > > >>>> > > >>>> +10000000000 > > >>>> > > >>>> I believe there is no value in producing divergent images. 
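The image-location escape hatch mentioned just above could be as small as an optional field on the custom resource, defaulting to the upstream images when omitted - sketch only, with field names invented to match the ZuulCluster draft earlier in the thread:

```
# Sketch: optional image overrides for deployers who gate and publish
# their own local builds; when absent, the operator would fall back to
# the upstream zuul images.
spec:
  images:
    zuul: registry.example.com/zuul:local-build
    nodepool: registry.example.com/nodepool:local-build
  executor:
    instances: 2
```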
> > >>>> > > >>> > > >> > > >> > > >> > > >>> _______________________________________________ > > >>> Zuul-discuss mailing list > > >>> Zuul-discuss at lists.zuul-ci.org > > >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > >> > > >> > > >> _______________________________________________ > > >> Zuul-discuss mailing list > > >> Zuul-discuss at lists.zuul-ci.org > > >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > >> > > > > > > _______________________________________________ > > > Zuul-discuss mailing list > > > Zuul-discuss at lists.zuul-ci.org > > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > > _______________________________________________ > > Zuul-discuss mailing list > > Zuul-discuss at lists.zuul-ci.org > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From mordred at inaugust.com Fri Mar 8 18:23:08 2019 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 8 Mar 2019 18:23:08 +0000 Subject: Proposal to create zuul-operator project In-Reply-To: <87imwtrhx9.tristanC@fedora> References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> <20190307141347.GA20461@localhost.localdomain> <87imwtrhx9.tristanC@fedora> Message-ID: On 3/8/19 6:36 AM, Tristan Cacqueray wrote: > > On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote: >> On 3/7/19 2:13 PM, Paul Belanger wrote: >>> On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote: >>>> >>>> On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote: >>>>> On 3/6/19 6:12 PM, Paul Belanger wrote: >>>>>> On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: >>>>>>> Hey everybody, >>>>>>> >>>>>>> It seems that kubernetes operators are all the rage these days, and in the >>>>>>> last 2 weeks I've had multiple conversations that have all come back around >>>>>>> to someone wanting a zuul operator to exist. >>>>>>> >>>> >>>> Hello, >>>> >>>> Thank you for starting this thread, It seems like a great oportunity to >>>> converge the different deployment and operation tools. >>>> >>>> Ideally, we would grow a hierarchy of operators to manage the gating >>>> system, and then you could have an opendev-operator to compose: >>>> * log-processor-operator >>>> * elastic-search-operator >>>> * paste-operator, etherpad-operator, ... >>>> * gerrit-operator >>>> >>> I think these are fine to have, but I am on the fence if they are under >>> the scope of what zuul should do. As an example, I can totally see >>> opendev.org wanting doing this. As long as everything is all >>> disconnected from zuul-operator, I would be okay. Personally, I don't >>> want zuul to managed a gerrit. >> >> I agree. >> >> This is definitely an interesting thought - although I'm not sure we're >> all the way there from the opendev POV yet. As I learn more about >> operators I could certainly see an opportunity to write one for gitea >> once the indexing issue is fixed and we can go back to k8s for it. That >> said - I think opendev operators are out of scope for the zuul team - so >> we may want to chat about that in a slightly different context. 
(If the >> zuul operator goes well and we find it pleasing I could see that >> motivating doing more of them) >> >>>> We also started to investigate operators for Software Factory component >>>> and perhaps we could join force? >> >> We should absolutely always join forces. :) >> > > Back in the early days of Software Factory we tries to integrate the > openstack-infra manifest, but it was such a tedious task to make them > work on CentOS outside of the openstack.org domain that we ended up > writing our own manifests. > > Putting the other opendev components aside (e.g. logservices, code > hosting, ...), I think this is a great opportunity to converge the > existing Zuul deployment system and come up with a common set of ops > tasks for example to backup and restart the services. > > Looking at the awesome-operators page, there are 2 ZooKeeper operators > and 5 for MySQL... It would be really nice if we are able to settle > on a single zuul-operator implementation that satisfies everyone needs. Yes. I think the world would be a sad place if there were 2 zuul operators. I definitely also want to list it at operatorhub: https://www.operatorhub.io because apparently the Operator Lifecycle Manager will know how to pull operators from operatorhub based on dependency metadata declarations. Incidentally, this: https://www.operatorhub.io/operator/percona-xtradb-cluster-operator-community.v0.3.0-rc1 is there for database. None of the awesome-operators zookeeper ones are on operatorhub - so we may need to dig in to that one a bit more. > >>>> >>>>>>> SO ... >>>>>>> >>>>>>> I would like to propose that we create a zuul-operator project as part of >>>>>>> Zuul. >>>>>>> >>>>>>> Given our close ties with Ansible, I think we should use ansible-operator >>>>>>> [0] to write it. >>>>>>> >>>>>>> Since we have zuul in production on both vanilla k8s and on OpenShift, I >>>>>>> think we should set up gate jobs to make sure it works on both. Once we're >>>>>>> happy with it - we should publish it to operatorhub. [1] >>>>>>> >>>>>>> Does anyone object? It not, I'll get a repo started. >>>>>>> >>>>>> I think it is a fine idea, I've also seen some talk about it recently >>>>>> too. >>>>>> >>>>>> I do think we should have some consensus on the bits we want to support >>>>>> for the deployment. Maybe going as far as saying it is an opinionated >>>>>> deployment, for better or worse. As an example, graphite / statsd is >>>>>> that in play or not? >>>>> >>>>> I think that's a great question. >>>>> >>>>> I do believe it should be an opinionated deployment - but there are >>>>> probably places where people will want to make some choices from a >>>>> specific set of choices. Like - by default I think it should spin up a >>>>> percona-xtradb cluster as provided by the percona operator. BUT - I >>>>> could see a setting in teh main Zuul CRD like "use-existing-db" that >>>>> takes a sqlalchemy url. That way if there is someone who has an existing >>>>> db infrastructure, they could take advantage of it - but the Zuul >>>>> operator gets you a working system out of the box. >>>>> >>>>> Obviously we'll need to spin up a zookeeper. Not sure if there is an >>>>> operator for that yet - we might want to make one or work with someone >>>>> to make one. >>>>> >>>> >>>> There are actually 2 operators for ZooKeeper already listed in this page: >>>> https://github.com/operator-framework/awesome-operators >> >> WOOT! >> >>>> >>>>> I could see a graphite/statsd option with the ability to point to an >>>>> existing graphite. 
Maybe we just start with CRD field that indicates the >>>>> stats_host like we have now? >>>>> >>>>> But - yes - opinionated, mostly batteries included - maybe some amount >>>>> of optional things. >>>>> >>>> >>>> Should we discuss this with a spec? For example: >> >> Yes - probably so - because there's lots of design points (as you start >> to raise here) I'll start one today and we can iterator on it together. >> >>>> How are we going to configure the connections and the tenants' >>>> configuration? >> >> It's a great question. My first hunch is that both the connections >> config and the tenant config should just be parameters to the Zuul CRD, >> and then the operator could take that data, transform it into the right >> file formats and restart zuul or run zuul-reconfigure as needed. >> >> It's also feasible we could do it as referenced ConfigMap objects. It's >> also conceivable that maybe we just want connections to be parameters to >> the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just >> have the operator concatenate them appropriately. That last option >> starts to worry me though - starting to feel a bit liek the puppet >> modules that wanted to re-implement the config language of each thing >> (like apache) in the Puppet DSL - and how it turns out we wind up >> preferring the ansible "just jinja template the file itself" approach more. >> >> But I honestly don't know which we'll liek more. I think the EASIEST >> thing for a simple first start would be having each content just be >> parameters to the Zuul CRD - but we could probably play with the other >> things and see what we like. Or maybe there is a different idea. >> > > Everything in the Zuul CRD sounds like the most straightforward. > Moreover, the reconcile logic can take care of the differents action > depending on what changed. > > The CRD I started to work on looked like this: > ``` > apiVersion: v1 > kind: ZuulCluster > metadata: > name: test-cluster > spec: > merger: > instances: 1 > executor: > instances: 2 > zookeeperHosts: > - hostname:port > connections: > - name: git.zuul-ci.org > driver: git > baseurl: https://git.zuul-ci.org > - name: sqlreporter > driver: sql > dburi: mysql+pymysql://... > - name: gerrit > driver: gerrit > username: ... > tenants: > - name: local > source: > gerrit: > config-projects: > - config > git.zuul-ci.org: > untrusted-projects: > - zuul-jobs > ``` > Then I was planning on adding a similar NodepoolCluster and perhaps a > ZuulApplication to glue both cluster and manage zookeeper, rdbms and > statsd if missing. Yes - I think that's likely a great approach to start with. (although I think we can just call ZuulApplication "Zuul" - but I like typing shorter things. Although I could also see just having a Zuul and a Nodepool and having a db field that could optionally be overridden but otherwise would cause a percona operator to get created and a sqlreporter entry added to the connections list. Similarly the Zuul operator will need to depend on a Zookeeper resource provided by an operator - but allow it to be overridden. Having that there though - I don't think the user of the Zuul operator should have to put in an entry to the Zuul CR telling it to put the zookeeper info into the config file- that should be zuul operator details. *waves hands* Basically - when there's an opportunity for the operator to take care of connecting some info here we should take it. 
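As a rough sketch of that idea (the API group, field names and defaults below are invented for illustration; nothing here is a settled interface):

```
# Hypothetical high-level Zuul resource where the database and ZooKeeper are
# optional overrides. If those fields are omitted, the operator would create
# the backing services (e.g. via a Percona or ZooKeeper operator) and wire
# the resulting connection details into the Zuul configuration itself.
apiVersion: zuul-ci.org/v1alpha1               # assumed API group/version
kind: Zuul
metadata:
  name: example
spec:
  images:
    zuul: docker.io/zuul/zuul:latest           # optional override, e.g. for local builds
  scheduler:
    instances: 1
  merger:
    instances: 1
  executor:
    instances: 2
  # dburi: mysql+pymysql://user:pass@db/zuul   # set only to reuse an existing database
  # zookeeperHosts: [zk1:2181]                 # set only to reuse an existing ZooKeeper
  connections: []
  tenants: []
```

The point of the commented-out fields is that leaving them out lets the operator create and connect the backing services itself, while setting them reuses existing infrastructure.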
The nodepool case is a bit more interesting to think about - because a Zuul requires a Nodepool - but the Nodepool config is a bit more complex and maybe would start to feel very heavyweight if we just put it all into the CRD? i'm sure we'll learn a lot as we poke ... >>>> Software Factory batteries include a zuul-reconfigure job to manage the >>>> main.yaml file. Should the zuul-operator setup such job, or perhaps >>>> should we also need a zuul-quick-start-operator? >> I don't think the zuul operator should set up any jobs, because zuul >> jobs come from git repos - and the zuul operator would have to learn >> about all of the different types of git backends for it to be able to >> push commits into one - and the operator would have to have git account >> access. >> >> That seems like a feature that a software factory operator (or an >> opendev operator) would be in a much better position to provide - since >> it's reasonable for those operators to know more things about the >> surrounding git infrastructure and have accounts for it. >> >> A zuul-reconfigure job might be a nice addition to zuul-jobs though. And >> maybe there would need to be a zuul-operator-reconfigure if applying >> main.yaml changes involves pushing a new Zuul CRD manifest via kubectl >> to get the operator to update the file and run reconfigure. (take this >> main.yaml file, slurp it into the manifest, run kubectl) >> > > That seems out of scope indeed, though later down the road, it seems > like a rather common pattern to have the tenant configuration in a git > repository that would reconfigure zuul when a change is merged. Agree >>>> >>>>>> Also, could we maybe say a single distro for all container things? I >>>>>> guess by default, that is what the current containers by zuul project is >>>>>> running today. I don't much have the energy to get into the weeds of pip >>>>>> / dpkg / rpm for how things maybe get installed (if needed). >>>>> >>>>> Yes. All of our container images as we are publishing them now are >>>>> awesome, and are what I expect us to use. >>>>> >>>>> I am categorically opposed to us making and publishing different images. >>>>> One of the nice things about publishing container images is that you can >>>>> do them once and do them well. We're using the python base image. It >>>>> happens to be based on debian-strech - but that's irrelevant. It's >>>>> running python. >>>>> >>>> >>>> That sounds great, but how are we going to cope with the next heartbleed >>>> or shellshock? IIUC, we would need to wait for a fix to land in the >>>> debian-stretch image, then wait for a new opendevorg/python-builder >>>> image and finally wait for a new zuul image to be published. >> >> Jeremy also responded to this - so this is basically me saying "I agree >> with fungi" in more words. >> >> We have to do that no matter what the base image is. But we don't >> actually have to do that. The zuul Dockerfile already does the apt-get >> update / apt-get install. If needed we could toss in a dist-upgrade if >> the fix has hit the repo but not the base image so that we don't have to >> wait for the stack. >> >> Supporting multiple in-image distros doesn't improve that. In fact, it >> increases the work for upstream because now we have to do that exact >> process you described, but we have to do it for each distro we're >> building images with. >> > > Agreed, there is no point in supporting multiple distros image. > >>>> Or what happen when the zuul image gets published with an incompatible >>>> openstacksdk requirement? 
>>>> >>>> Thus I think the base image and how the zuul bits gets bundled are >>>> relevant. For example OpenShift users may want to use BuildConfig >>>> instead so that a parent layer update automatically trigger childs' >>>> layer rebuild. >> >> Sure - but the zuul images are built with standard Dockerfiles - so >> there should be nothing stopping people from rebuilding them locally >> however they want. >> >>> Yah, this is what I was hoping we'd discuss more when we dove into the >>> weeds. Can a BuildConfig work with k8s? Or does some other k8s items >>> work with openshift? If not, I think it is fine if we only suppoert one >>> and not try to do everything. Which one we support, I don't really know, >>> hopefully others can say which one is better :) We can also support >>> both, but I feel that is a lot of work for minimal gain. >> I think it's a strange idea, when you have a Zuul, to want to use a >> different build system to build zuul images, since you would not be able >> to CI those images as effectively as if you use the zuul image building >> jobs. The only good reason for local rebuilds is if you're carrying >> local changes. > > Another good reason to manage the image is for compliance and > reproducability. Using another base layer may be easier to audit and > update, and users may also want to build their image with curated > registries for things like pip or npm to prevent supply chain attack. > > >> If you are carrying local changes, one would imagine you >> would want to be able to submit code reviews to a git system to manage >> those changes, and you would want those code reviews to get gated, and >> you would want to publish the images that result in the gate passing. If >> you have all of that set up in zuul to test and publish images from your >> local changes, it standa to reason you'd want zuul to also just do an >> image rebuild on your local branch in the scenarios you metion, and >> using a BuildConfig would not really help you do that at all. **BUT** if >> that's what you want to do, I mean, nothing should be stopping you. >> > > My main concern is about the base layer update. BuildConfig can take > care of rebuilding your final image when needed. You may use Zuul to > promote the resulting image before deploying it, but the point is, base > layer updates may happen out of your control, e.g. after a bug is > discovered in openssl for example. Sure - I just think that it's a better route to have something trigger Zuul to do a rebuild and potential promotion when a base layer changes. (A simple trigger should be sufficient) BuildConfig is a whole other system with a whole other worldview and I would be more worried that Zuul and BuildConfig would fight each other. (and I don't think BuildConfig adds anything, other than potentially having a trigger defined watching for changes to base images) > >> It certainly doesn't mean we need to support multi-distro inside the >> image content we publish as an upstream. I'm pretty sure an OpenShift >> BuildConfig can build an image based on python:slim as well as anything >> else. >> > > I didn't mean to derailed the thread to this image building topic which > is not actually relevant to the zuul-operator. Sorry about that :) > I mean, as long as the zuul-operator can use an optional custom image. ++ - I think it's essential that it can use an optional custom image. > Regads, > -Tristan > >>>>> I DO think we'll likely need to support specifying an image location in >>>>> the CRD. 
(rook does this for ceph too) That's because I'd like for >>>>> Tobias to be able to use the operator if he chooses, but he uses local >>>>> builds of the images. And I think that's fine. >>>>> >>>>>> I am likely showing how much I don't know about k8s / openshift here, >>>>>> but wanting to limit the amount of different ways to do a deployment >>>>>> based on underlying distro bits. >>>>> >>>>> +10000000000 >>>>> >>>>> I believe there is no value in producing divergent images. >>>>> >>>> >>> >>> >>> >>>> _______________________________________________ >>>> Zuul-discuss mailing list >>>> Zuul-discuss at lists.zuul-ci.org >>>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >>> >>> >>> _______________________________________________ >>> Zuul-discuss mailing list >>> Zuul-discuss at lists.zuul-ci.org >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >>> >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From tdecacqu at redhat.com Sun Mar 10 06:09:21 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Sun, 10 Mar 2019 15:09:21 +0900 Subject: Proposal to create zuul-operator project In-Reply-To: References: <99b5190c-f58d-68e1-b751-45b00dfbd4ff@inaugust.com> <20190306181204.GA16087@localhost.localdomain> <43c867d9-290a-f628-0b61-ef5d966dcac7@inaugust.com> <87va0vqurc.tristanC@fedora> <20190307141347.GA20461@localhost.localdomain> <87imwtrhx9.tristanC@fedora> Message-ID: <87d0mzqmzy.tristanC@fedora> On Fri, Mar 08, 2019 at 18:23 Monty Taylor wrote: > On 3/8/19 6:36 AM, Tristan Cacqueray wrote: >> >> On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote: >>> On 3/7/19 2:13 PM, Paul Belanger wrote: >>>> On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote: >>>>> >>>>> On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote: >>>>>> On 3/6/19 6:12 PM, Paul Belanger wrote: >>>>>>> On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: >>>>>>>> Hey everybody, >>>>>>>> >>>>>>>> It seems that kubernetes operators are all the rage these days, and in the >>>>>>>> last 2 weeks I've had multiple conversations that have all come back around >>>>>>>> to someone wanting a zuul operator to exist. >>>>>>>> >>>>> >>>>> Hello, >>>>> >>>>> Thank you for starting this thread, It seems like a great oportunity to >>>>> converge the different deployment and operation tools. >>>>> >>>>> Ideally, we would grow a hierarchy of operators to manage the gating >>>>> system, and then you could have an opendev-operator to compose: >>>>> * log-processor-operator >>>>> * elastic-search-operator >>>>> * paste-operator, etherpad-operator, ... >>>>> * gerrit-operator >>>>> >>>> I think these are fine to have, but I am on the fence if they are under >>>> the scope of what zuul should do. As an example, I can totally see >>>> opendev.org wanting doing this. As long as everything is all >>>> disconnected from zuul-operator, I would be okay. Personally, I don't >>>> want zuul to managed a gerrit. >>> >>> I agree. >>> >>> This is definitely an interesting thought - although I'm not sure we're >>> all the way there from the opendev POV yet. As I learn more about >>> operators I could certainly see an opportunity to write one for gitea >>> once the indexing issue is fixed and we can go back to k8s for it. That >>> said - I think opendev operators are out of scope for the zuul team - so >>> we may want to chat about that in a slightly different context. 
(If the >>> zuul operator goes well and we find it pleasing I could see that >>> motivating doing more of them) >>> >>>>> We also started to investigate operators for Software Factory component >>>>> and perhaps we could join force? >>> >>> We should absolutely always join forces. :) >>> >> >> Back in the early days of Software Factory we tries to integrate the >> openstack-infra manifest, but it was such a tedious task to make them >> work on CentOS outside of the openstack.org domain that we ended up >> writing our own manifests. >> >> Putting the other opendev components aside (e.g. logservices, code >> hosting, ...), I think this is a great opportunity to converge the >> existing Zuul deployment system and come up with a common set of ops >> tasks for example to backup and restart the services. >> >> Looking at the awesome-operators page, there are 2 ZooKeeper operators >> and 5 for MySQL... It would be really nice if we are able to settle >> on a single zuul-operator implementation that satisfies everyone needs. > > Yes. I think the world would be a sad place if there were 2 zuul operators. > > I definitely also want to list it at operatorhub: > > https://www.operatorhub.io > > because apparently the Operator Lifecycle Manager will know how to pull > operators from operatorhub based on dependency metadata declarations. > > Incidentally, this: > > https://www.operatorhub.io/operator/percona-xtradb-cluster-operator-community.v0.3.0-rc1 > > is there for database. None of the awesome-operators zookeeper ones are > on operatorhub - so we may need to dig in to that one a bit more. > Not sure how operator dependencies are going to work (and be tested), but it seems like one operator can install another one at runtime. >> >>>>> >>>>>>>> SO ... >>>>>>>> >>>>>>>> I would like to propose that we create a zuul-operator project as part of >>>>>>>> Zuul. >>>>>>>> >>>>>>>> Given our close ties with Ansible, I think we should use ansible-operator >>>>>>>> [0] to write it. >>>>>>>> >>>>>>>> Since we have zuul in production on both vanilla k8s and on OpenShift, I >>>>>>>> think we should set up gate jobs to make sure it works on both. Once we're >>>>>>>> happy with it - we should publish it to operatorhub. [1] >>>>>>>> >>>>>>>> Does anyone object? It not, I'll get a repo started. >>>>>>>> >>>>>>> I think it is a fine idea, I've also seen some talk about it recently >>>>>>> too. >>>>>>> >>>>>>> I do think we should have some consensus on the bits we want to support >>>>>>> for the deployment. Maybe going as far as saying it is an opinionated >>>>>>> deployment, for better or worse. As an example, graphite / statsd is >>>>>>> that in play or not? >>>>>> >>>>>> I think that's a great question. >>>>>> >>>>>> I do believe it should be an opinionated deployment - but there are >>>>>> probably places where people will want to make some choices from a >>>>>> specific set of choices. Like - by default I think it should spin up a >>>>>> percona-xtradb cluster as provided by the percona operator. BUT - I >>>>>> could see a setting in teh main Zuul CRD like "use-existing-db" that >>>>>> takes a sqlalchemy url. That way if there is someone who has an existing >>>>>> db infrastructure, they could take advantage of it - but the Zuul >>>>>> operator gets you a working system out of the box. >>>>>> >>>>>> Obviously we'll need to spin up a zookeeper. Not sure if there is an >>>>>> operator for that yet - we might want to make one or work with someone >>>>>> to make one. 
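If the operator ends up being distributed through the Operator Lifecycle Manager mentioned elsewhere in the thread, one plausible way to express such a dependency is a `required` entry in its ClusterServiceVersion. The fragment below is purely illustrative; the CRD names, kinds and versions are assumptions about what a zuul-operator and an existing ZooKeeper operator might expose.

```
# Illustrative ClusterServiceVersion fragment: a zuul-operator declaring that
# it owns a Zuul CRD and consumes a ZooKeeper CRD provided by another
# operator. All names and versions here are assumptions for the example.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: zuul-operator.v0.0.1
spec:
  customresourcedefinitions:
    owned:
      - name: zuuls.zuul-ci.org                       # assumed CRD for the Zuul resource
        kind: Zuul
        version: v1alpha1
        displayName: Zuul
    required:
      - name: zookeeperclusters.zookeeper.pravega.io  # e.g. one of the existing ZK operators
        kind: ZookeeperCluster
        version: v1beta1
        displayName: ZooKeeper cluster
```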
>>>>>> >>>>> >>>>> There are actually 2 operators for ZooKeeper already listed in this page: >>>>> https://github.com/operator-framework/awesome-operators >>> >>> WOOT! >>> >>>>> >>>>>> I could see a graphite/statsd option with the ability to point to an >>>>>> existing graphite. Maybe we just start with CRD field that indicates the >>>>>> stats_host like we have now? >>>>>> >>>>>> But - yes - opinionated, mostly batteries included - maybe some amount >>>>>> of optional things. >>>>>> >>>>> >>>>> Should we discuss this with a spec? For example: >>> >>> Yes - probably so - because there's lots of design points (as you start >>> to raise here) I'll start one today and we can iterator on it together. >>> >>>>> How are we going to configure the connections and the tenants' >>>>> configuration? >>> >>> It's a great question. My first hunch is that both the connections >>> config and the tenant config should just be parameters to the Zuul CRD, >>> and then the operator could take that data, transform it into the right >>> file formats and restart zuul or run zuul-reconfigure as needed. >>> >>> It's also feasible we could do it as referenced ConfigMap objects. It's >>> also conceivable that maybe we just want connections to be parameters to >>> the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just >>> have the operator concatenate them appropriately. That last option >>> starts to worry me though - starting to feel a bit liek the puppet >>> modules that wanted to re-implement the config language of each thing >>> (like apache) in the Puppet DSL - and how it turns out we wind up >>> preferring the ansible "just jinja template the file itself" approach more. >>> >>> But I honestly don't know which we'll liek more. I think the EASIEST >>> thing for a simple first start would be having each content just be >>> parameters to the Zuul CRD - but we could probably play with the other >>> things and see what we like. Or maybe there is a different idea. >>> >> >> Everything in the Zuul CRD sounds like the most straightforward. >> Moreover, the reconcile logic can take care of the differents action >> depending on what changed. >> >> The CRD I started to work on looked like this: >> ``` >> apiVersion: v1 >> kind: ZuulCluster >> metadata: >> name: test-cluster >> spec: >> merger: >> instances: 1 >> executor: >> instances: 2 >> zookeeperHosts: >> - hostname:port >> connections: >> - name: git.zuul-ci.org >> driver: git >> baseurl: https://git.zuul-ci.org >> - name: sqlreporter >> driver: sql >> dburi: mysql+pymysql://... >> - name: gerrit >> driver: gerrit >> username: ... >> tenants: >> - name: local >> source: >> gerrit: >> config-projects: >> - config >> git.zuul-ci.org: >> untrusted-projects: >> - zuul-jobs >> ``` >> Then I was planning on adding a similar NodepoolCluster and perhaps a >> ZuulApplication to glue both cluster and manage zookeeper, rdbms and >> statsd if missing. > > Yes - I think that's likely a great approach to start with. (although I > think we can just call ZuulApplication "Zuul" - but I like typing > shorter things. > > Although I could also see just having a Zuul and a Nodepool and having a > db field that could optionally be overridden but otherwise would cause a > percona operator to get created and a sqlreporter entry added to the > connections list. Similarly the Zuul operator will need to depend on a > Zookeeper resource provided by an operator - but allow it to be > overridden. 
Having that there though - I don't think the user of the > Zuul operator should have to put in an entry to the Zuul CR telling it > to put the zookeeper info into the config file- that should be zuul > operator details. *waves hands* > > Basically - when there's an opportunity for the operator to take care of > connecting some info here we should take it. > Yes. So for completeness here are the 2 other CRDs (draft): ``` kind: NodepoolCluster spec: zookeeperHosts: - hostname:port launcher: instances: 2 builder: instances: 1 providers: - name: cloud driver: openstack pools: [] - name: static driver: static ... cloud-config: # clouds yaml content or secret name kube-config: # kube yaml content or secret name ``` And the user facing CRD would be: ``` kind: Zuul spec: connections: [] tenants: [] providers: [] ``` This higher level Zuul resources may takes care of the Zookeeper and DB setup if it's not provided. > The nodepool case is a bit more interesting to think about - because a > Zuul requires a Nodepool - but the Nodepool config is a bit more complex > and maybe would start to feel very heavyweight if we just put it all > into the CRD? i'm sure we'll learn a lot as we poke ... > Secrets should probably live in a Secret resource, but I'm not sure how operator can poke at the services when Secrets are udpated (e.g. if the Zuul dburi changes). Hence to keep things simple at first, it seems easier to manage every configuration bits as part of the CRDs spec. > >>>>> Software Factory batteries include a zuul-reconfigure job to manage the >>>>> main.yaml file. Should the zuul-operator setup such job, or perhaps >>>>> should we also need a zuul-quick-start-operator? >>> I don't think the zuul operator should set up any jobs, because zuul >>> jobs come from git repos - and the zuul operator would have to learn >>> about all of the different types of git backends for it to be able to >>> push commits into one - and the operator would have to have git account >>> access. >>> >>> That seems like a feature that a software factory operator (or an >>> opendev operator) would be in a much better position to provide - since >>> it's reasonable for those operators to know more things about the >>> surrounding git infrastructure and have accounts for it. >>> >>> A zuul-reconfigure job might be a nice addition to zuul-jobs though. And >>> maybe there would need to be a zuul-operator-reconfigure if applying >>> main.yaml changes involves pushing a new Zuul CRD manifest via kubectl >>> to get the operator to update the file and run reconfigure. (take this >>> main.yaml file, slurp it into the manifest, run kubectl) >>> >> >> That seems out of scope indeed, though later down the road, it seems >> like a rather common pattern to have the tenant configuration in a git >> repository that would reconfigure zuul when a change is merged. > > Agree > >>>>> >>>>>>> Also, could we maybe say a single distro for all container things? I >>>>>>> guess by default, that is what the current containers by zuul project is >>>>>>> running today. I don't much have the energy to get into the weeds of pip >>>>>>> / dpkg / rpm for how things maybe get installed (if needed). >>>>>> >>>>>> Yes. All of our container images as we are publishing them now are >>>>>> awesome, and are what I expect us to use. >>>>>> >>>>>> I am categorically opposed to us making and publishing different images. >>>>>> One of the nice things about publishing container images is that you can >>>>>> do them once and do them well. 
We're using the python base image. It >>>>>> happens to be based on debian-strech - but that's irrelevant. It's >>>>>> running python. >>>>>> >>>>> >>>>> That sounds great, but how are we going to cope with the next heartbleed >>>>> or shellshock? IIUC, we would need to wait for a fix to land in the >>>>> debian-stretch image, then wait for a new opendevorg/python-builder >>>>> image and finally wait for a new zuul image to be published. >>> >>> Jeremy also responded to this - so this is basically me saying "I agree >>> with fungi" in more words. >>> >>> We have to do that no matter what the base image is. But we don't >>> actually have to do that. The zuul Dockerfile already does the apt-get >>> update / apt-get install. If needed we could toss in a dist-upgrade if >>> the fix has hit the repo but not the base image so that we don't have to >>> wait for the stack. >>> >>> Supporting multiple in-image distros doesn't improve that. In fact, it >>> increases the work for upstream because now we have to do that exact >>> process you described, but we have to do it for each distro we're >>> building images with. >>> >> >> Agreed, there is no point in supporting multiple distros image. >> >>>>> Or what happen when the zuul image gets published with an incompatible >>>>> openstacksdk requirement? >>>>> >>>>> Thus I think the base image and how the zuul bits gets bundled are >>>>> relevant. For example OpenShift users may want to use BuildConfig >>>>> instead so that a parent layer update automatically trigger childs' >>>>> layer rebuild. >>> >>> Sure - but the zuul images are built with standard Dockerfiles - so >>> there should be nothing stopping people from rebuilding them locally >>> however they want. >>> >>>> Yah, this is what I was hoping we'd discuss more when we dove into the >>>> weeds. Can a BuildConfig work with k8s? Or does some other k8s items >>>> work with openshift? If not, I think it is fine if we only suppoert one >>>> and not try to do everything. Which one we support, I don't really know, >>>> hopefully others can say which one is better :) We can also support >>>> both, but I feel that is a lot of work for minimal gain. >>> I think it's a strange idea, when you have a Zuul, to want to use a >>> different build system to build zuul images, since you would not be able >>> to CI those images as effectively as if you use the zuul image building >>> jobs. The only good reason for local rebuilds is if you're carrying >>> local changes. >> >> Another good reason to manage the image is for compliance and >> reproducability. Using another base layer may be easier to audit and >> update, and users may also want to build their image with curated >> registries for things like pip or npm to prevent supply chain attack. >> >> >>> If you are carrying local changes, one would imagine you >>> would want to be able to submit code reviews to a git system to manage >>> those changes, and you would want those code reviews to get gated, and >>> you would want to publish the images that result in the gate passing. If >>> you have all of that set up in zuul to test and publish images from your >>> local changes, it standa to reason you'd want zuul to also just do an >>> image rebuild on your local branch in the scenarios you metion, and >>> using a BuildConfig would not really help you do that at all. **BUT** if >>> that's what you want to do, I mean, nothing should be stopping you. >>> >> >> My main concern is about the base layer update. 
BuildConfig can take >> care of rebuilding your final image when needed. You may use Zuul to >> promote the resulting image before deploying it, but the point is, base >> layer updates may happen out of your control, e.g. after a bug is >> discovered in openssl for example. > > Sure - I just think that it's a better route to have something trigger > Zuul to do a rebuild and potential promotion when a base layer changes. IIUC, BuildConfig are first class citizens in OKD, and it enables images to be managed by the cluster: https://docs.okd.io/latest/dev_guide/builds/index.html#dev-guide-what-is-a-build It seems like Zuul wouldn't be able to do that as the job would requires a docker service. > (A simple trigger should be sufficient) BuildConfig is a whole other > system with a whole other worldview and I would be more worried that > Zuul and BuildConfig would fight each other. (and I don't think > BuildConfig adds anything, other than potentially having a trigger > defined watching for changes to base images) User of BuildConfig should be able to trigger a Zuul pipeline to take care of the promotion and deployment. At least that is the workflow provided by Jenkins: https://docs.okd.io/latest/dev_guide/builds/build_strategies.html#pipeline-strategy-options Regards, -Tristan >> >>> It certainly doesn't mean we need to support multi-distro inside the >>> image content we publish as an upstream. I'm pretty sure an OpenShift >>> BuildConfig can build an image based on python:slim as well as anything >>> else. >>> >> >> I didn't mean to derailed the thread to this image building topic which >> is not actually relevant to the zuul-operator. Sorry about that :) >> I mean, as long as the zuul-operator can use an optional custom image. > > ++ - I think it's essential that it can use an optional custom image. > >> Regads, >> -Tristan >> >>>>>> I DO think we'll likely need to support specifying an image location in >>>>>> the CRD. (rook does this for ceph too) That's because I'd like for >>>>>> Tobias to be able to use the operator if he chooses, but he uses local >>>>>> builds of the images. And I think that's fine. >>>>>> >>>>>>> I am likely showing how much I don't know about k8s / openshift here, >>>>>>> but wanting to limit the amount of different ways to do a deployment >>>>>>> based on underlying distro bits. >>>>>> >>>>>> +10000000000 >>>>>> >>>>>> I believe there is no value in producing divergent images. >>>>>> >>>>> >>>> >>>> >>>> >>>>> _______________________________________________ >>>>> Zuul-discuss mailing list >>>>> Zuul-discuss at lists.zuul-ci.org >>>>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >>>> >>>> >>>> _______________________________________________ >>>> Zuul-discuss mailing list >>>> Zuul-discuss at lists.zuul-ci.org >>>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >>>> >>> >>> _______________________________________________ >>> Zuul-discuss mailing list >>> Zuul-discuss at lists.zuul-ci.org >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tomasz.szandala at gmail.com Sun Mar 10 08:16:11 2019 From: tomasz.szandala at gmail.com (=?UTF-8?Q?Tomasz_Szanda=C5=82a?=) Date: Sun, 10 Mar 2019 09:16:11 +0100 Subject: 404 - Tenant does not exist Message-ID: Greetings, I have set up zuul v3 using docker-compose, so on my host I can see following containers running: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b47be7107613 zuul/zuul-web "/usr/bin/dumb-init …" 12 hours ago Up 18 minutes 0.0.0.0:9000->9000/tcp docker_web_1 5b4fcd2a2238 zuul/zuul-executor "/usr/bin/dumb-init …" 12 hours ago Up 18 minutes docker_executor_1 4fe739ae6ace zuul/zuul-scheduler "/usr/bin/dumb-init …" 12 hours ago Up 7 minutes docker_scheduler_1 1c88df15bd55 zuul/nodepool-launcher "/usr/bin/dumb-init …" 12 hours ago Up 18 minutes 0.0.0.0:8022->8022/tcp docker_launcher_1 682c8f4ece17 zookeeper "/docker-entrypoint.…" 12 hours ago Up 18 minutes 2181/tcp, 2888/tcp, 3888/tcp docker_zk_1 3bcfaf604728 docker_logs "httpd-foreground" 12 hours ago Up 18 minutes 0.0.0.0:8000->80/tcp docker_logs_1 151eee2a9f4a mariadb "docker-entrypoint.s…" 12 hours ago Up 12 hours 3306/tcp Everything in real time works perfectly, but the problem is that I cannot see the builds history. My web service cannot get http://zuul-master.internal.com:9000/api/tenant/igor/buildsets nor any other /api call. It receives 404, tenant not found. I doesn't look like firewall issue, nor I see any errors in docker-compose logs. Any idea what did I set wrongly here? Kind regards, Tomasz -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomasz.szandala at gmail.com Sun Mar 10 10:31:36 2019 From: tomasz.szandala at gmail.com (=?UTF-8?Q?Tomasz_Szanda=C5=82a?=) Date: Sun, 10 Mar 2019 11:31:36 +0100 Subject: 404 - Tenant does not exist In-Reply-To: References: Message-ID: Cancel alarm. After another 2 hours of searching I found wrongly defined reporter in pipeline yaml... niedz., 10 mar 2019 o 09:16 Tomasz Szandała napisał(a): > Greetings, > I have set up zuul v3 using docker-compose, so on my host I can see > following containers running: > CONTAINER ID IMAGE COMMAND > CREATED STATUS PORTS NAMES > b47be7107613 zuul/zuul-web "/usr/bin/dumb-init …" 12 > hours ago Up 18 minutes 0.0.0.0:9000->9000/tcp > docker_web_1 > 5b4fcd2a2238 zuul/zuul-executor "/usr/bin/dumb-init …" 12 > hours ago Up 18 minutes > docker_executor_1 > 4fe739ae6ace zuul/zuul-scheduler "/usr/bin/dumb-init …" 12 > hours ago Up 7 minutes > docker_scheduler_1 > 1c88df15bd55 zuul/nodepool-launcher "/usr/bin/dumb-init …" 12 > hours ago Up 18 minutes 0.0.0.0:8022->8022/tcp > docker_launcher_1 > 682c8f4ece17 zookeeper "/docker-entrypoint.…" 12 > hours ago Up 18 minutes 2181/tcp, 2888/tcp, 3888/tcp > docker_zk_1 > 3bcfaf604728 docker_logs "httpd-foreground" 12 > hours ago Up 18 minutes 0.0.0.0:8000->80/tcp > docker_logs_1 > 151eee2a9f4a mariadb "docker-entrypoint.s…" 12 > hours ago Up 12 hours 3306/tcp > > > Everything in real time works perfectly, but the problem is that I cannot > see the builds history. My web service cannot get > http://zuul-master.internal.com:9000/api/tenant/igor/buildsets > nor any other /api call. > It receives 404, tenant not found. > I doesn't look like firewall issue, nor I see any errors in docker-compose > logs. > > Any idea what did I set wrongly here? > > Kind regards, > Tomasz > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From corvus at inaugust.com Mon Mar 11 20:12:45 2019 From: corvus at inaugust.com (James E. Blair) Date: Mon, 11 Mar 2019 13:12:45 -0700 Subject: Zuul project update 2019-03-11 Message-ID: <877ed5kw5e.fsf@meyer.lemoncheese.net> Hi, Since the last email update, we merged the Multi-Ansible spec: https://zuul-ci.org/docs/zuul/developer/specs/multiple-ansible-versions.html Tristan worked on adding a job filter to the trigger event, as discussed previously on this list. The webtrigger patch is now based on that. Tristan also continued working on zuul-runner and a set of patches to support nodes with python3 only. I finally finished the changes related to buildset docker registries (also mentioned previously on this list). In the end, the system looks to be easy to use, but there are a few moving parts. My next task is to document how to set such a system up. ****************************** There are a lot of outstanding changes, and I'd like to attempt to prioritize and sequence them to help us work through them together: 1) The multi-ansible series While this is newer than the others on the list, it does address significant technical debt. It would be really good to get this finished and released before Ansible 2.5 goes out of maintenance (it's the oldest supported Ansible and only receives security fixes). 2) The zuul-runner series This has been in development for a long time, and has a significant impact on the executor. It will almost certainly conflict with the multi-ansible series, so let's work on getting this merged after that. 3) The web authentication spec This has been in development for a while, and folks working on it recently signaled it's ready for review. This shouldn't conflict with anything ahead of it, so we can review it as time permits. 4) The web authentication series Once the spec lands, we can review the implementation which has been developing in parallel. Of course, there's lots of other stuff out there we can review as well, and anyone is free to review anything they want. :) However, I think if we agree to prioritize these major efforts as I suggest, we will be able to focus on them together and make the most efficient progress. Help write the next update email by updating https://etherpad.openstack.org/p/zuul-update-email Thanks, Jim From pabelanger at redhat.com Mon Mar 11 20:31:12 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 11 Mar 2019 16:31:12 -0400 Subject: Zuul project update 2019-03-11 In-Reply-To: <877ed5kw5e.fsf@meyer.lemoncheese.net> References: <877ed5kw5e.fsf@meyer.lemoncheese.net> Message-ID: <20190311203112.GA9526@localhost.localdomain> On Mon, Mar 11, 2019 at 01:12:45PM -0700, James E. Blair wrote: > Hi, > > Since the last email update, we merged the Multi-Ansible spec: > > https://zuul-ci.org/docs/zuul/developer/specs/multiple-ansible-versions.html > > Tristan worked on adding a job filter to the trigger event, as discussed > previously on this list. The webtrigger patch is now based on that. > Tristan also continued working on zuul-runner and a set of patches to > support nodes with python3 only. > > I finally finished the changes related to buildset docker registries > (also mentioned previously on this list). In the end, the system looks > to be easy to use, but there are a few moving parts. My next task is to > document how to set such a system up. 
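(For context on item 1 below: the user-visible surface of the multi-Ansible work is expected to be small. Per the merged spec it is roughly a per-job knob, something like the following sketch; treat the attribute name as indicative rather than final.)

```
# Sketch of the user-visible part of the multi-Ansible work, as described in
# the merged spec: a job may pin the Ansible release used to run its
# playbooks, with a tenant-wide default applying otherwise.
- job:
    name: legacy-deployment-job
    ansible-version: "2.6"                     # keep this job on an older Ansible
```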
> > ****************************** > > There are a lot of outstanding changes, and I'd like to attempt to > prioritize and sequence them to help us work through them together: > > 1) The multi-ansible series > > While this is newer than the others on the list, it does address > significant technical debt. It would be really good to get this > finished and released before Ansible 2.5 goes out of maintenance > (it's the oldest supported Ansible and only receives security fixes). > > 2) The zuul-runner series > > This has been in development for a long time, and has a significant > impact on the executor. It will almost certainly conflict with the > multi-ansible series, so let's work on getting this merged after > that. > > 3) The web authentication spec > > This has been in development for a while, and folks working on it > recently signaled it's ready for review. This shouldn't conflict > with anything ahead of it, so we can review it as time permits. > > 4) The web authentication series > > Once the spec lands, we can review the implementation which has been > developing in parallel. > > Of course, there's lots of other stuff out there we can review as well, > and anyone is free to review anything they want. :) However, I think > if we agree to prioritize these major efforts as I suggest, we will be > able to focus on them together and make the most efficient progress. > What are your thoughts on the scaling out of scheduler spec? Is that something we are realistically looking to work on in the next 6 months? I am assuming, that is something we'll next tackle after multi-ansible is finished. - Paul > Help write the next update email by updating > > https://etherpad.openstack.org/p/zuul-update-email > > Thanks, > > Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From Tobias.Henkel at bmw.de Mon Mar 11 20:39:00 2019 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Mon, 11 Mar 2019 20:39:00 +0000 Subject: Zuul project update 2019-03-11 In-Reply-To: <20190311203112.GA9526@localhost.localdomain> References: <877ed5kw5e.fsf@meyer.lemoncheese.net> <20190311203112.GA9526@localhost.localdomain> Message-ID: <24FD2ADD-532B-44EC-89DF-A6E91711CB0D@bmw.de> On 11.03.19, 21:31, "Paul Belanger" wrote: On Mon, Mar 11, 2019 at 01:12:45PM -0700, James E. Blair wrote: (...) > > Of course, there's lots of other stuff out there we can review as well, > and anyone is free to review anything they want. :) However, I think > if we agree to prioritize these major efforts as I suggest, we will be > able to focus on them together and make the most efficient progress. > What are your thoughts on the scaling out of scheduler spec? Is that something we are realistically looking to work on in the next 6 months? I am assuming, that is something we'll next tackle after multi-ansible is finished. - Paul Scale out scheduler has at least for me a high priority as this will solve scalability and operations problems for us. The next task in that regard is defining work packages and an upgrade path in the spec. I hope I find time to continue on that soon. - Tobias From corvus at inaugust.com Mon Mar 11 20:47:26 2019 From: corvus at inaugust.com (James E. Blair) Date: Mon, 11 Mar 2019 13:47:26 -0700 Subject: Should we drop our custom Ansible plugins? 
Message-ID: <877ed5jfz5.fsf@meyer.lemoncheese.net> Hi, When we created Zuul v3, we faced a fairly unique situation -- we needed to run untrusted Ansible playbooks without compromising the host running Ansible (i.e., the Zuul executor). To address this, we created a set of Ansible plugins which wrap the upstream Ansible plugins and block actions which we consider dangerous. Broadly speaking, there are two classes of actions we might want to block: 1) Actions which may expose private information stored on the executor (including information used by Zuul itself or other jobs), or otherwise compromise the security of the executor itself (so that future jobs can not be trusted). 2) Actions which may allow a user to take advantage of privileges afforded to the executor host itself. For example, if the executor is behind a network firewall and therefore can access unprotected network services. We added bubblewrap as a second layer of defense against the first class. It has been very effective -- our custom plugins have had several vulnerabilities, all of which were ultimately unexploitable due to the use of bubblewrap. Bubblewrap doesn't address the second class. Here's more background on the bubblewrap addition: https://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3-executor-security.html Maintaining the Ansible plugins is a lot of work, and it's very easy to get wrong. For each new Ansible version, we need to look for any changes to module parameters and make corresponding updates. All of our security issues so far have been due to things we missed here. Meanwhile, these custom plugins increase the area where playbooks which may otherwise work can not be used with Zuul (for example, because they use a lookup plugin which writes to the executor). This makes Zuul more difficult to use. Therefore, I propose that we drop support for the custom plugins and rely entirely on bubblewrap to protect us from the first class of issues. It will mean less work for us, make Zuul more useful, and, at least with regard to the first class of problems, I don't think Zuul will be less secure. That brings me to the second class. I have a feeling (which is unsubstantiated), that there are probably enough holes in our plugins which make the second class of actions possible too. For example, we allow use of the URI module, but that means that any playbook can hit any HTTP URL accessible by the executor. While it may be difficult for a user to make a connection to a mysql server, it's not outside the realm of possibility. Dropping support for our custom plugins would mean that Zuul has no defense against this class of action. We would need to document that and ensure that users know that if their executors run untrusted code that they should have adequate application or network level protections in place. In short: don't put an executor inside the squishy center of your security perimeter. Before we do this, we need to make sure that Zuul itself doesn't impose any constraints that contradict this. The executor only uses Gearman to communicate with the rest of Zuul, and it can adequately be secured. Before we add any features which require the executor to communicate via ZooKeeper, we should make sure that we support zookeeper auth. What do folks think? Is this a reasonable change to make, or should we continue supporting the custom plugins? 
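To make the second class concrete, the sort of task in question looks like the sketch below; the URL is invented, and the point is that bubblewrap does not constrain where the task can connect.

```
# Illustration of the "second class" of action: a task executed on the
# executor can reach whatever network endpoints the executor host can reach,
# and bubblewrap does not restrict outbound connections. The URL is invented.
- hosts: localhost
  tasks:
    - name: Probe a service only reachable from the executor's network
      uri:
        url: http://internal-service.example.com:8080/status
        method: GET
```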
-Jim From pabelanger at redhat.com Mon Mar 11 22:08:46 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 11 Mar 2019 18:08:46 -0400 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <877ed5jfz5.fsf@meyer.lemoncheese.net> References: <877ed5jfz5.fsf@meyer.lemoncheese.net> Message-ID: <20190311220846.GA22574@localhost.localdomain> On Mon, Mar 11, 2019 at 01:47:26PM -0700, James E. Blair wrote: > Hi, > > When we created Zuul v3, we faced a fairly unique situation -- we needed > to run untrusted Ansible playbooks without compromising the host running > Ansible (i.e., the Zuul executor). > > To address this, we created a set of Ansible plugins which wrap the > upstream Ansible plugins and block actions which we consider dangerous. > > Broadly speaking, there are two classes of actions we might want to > block: > > 1) Actions which may expose private information stored on the executor > (including information used by Zuul itself or other jobs), or otherwise > compromise the security of the executor itself (so that future jobs can > not be trusted). > > 2) Actions which may allow a user to take advantage of privileges > afforded to the executor host itself. For example, if the executor is > behind a network firewall and therefore can access unprotected network > services. > > We added bubblewrap as a second layer of defense against the first > class. It has been very effective -- our custom plugins have had > several vulnerabilities, all of which were ultimately unexploitable due > to the use of bubblewrap. Bubblewrap doesn't address the second class. > > Here's more background on the bubblewrap addition: > > https://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3-executor-security.html > > Maintaining the Ansible plugins is a lot of work, and it's very easy to > get wrong. For each new Ansible version, we need to look for any > changes to module parameters and make corresponding updates. All of our > security issues so far have been due to things we missed here. > > Meanwhile, these custom plugins increase the area where playbooks which > may otherwise work can not be used with Zuul (for example, because they > use a lookup plugin which writes to the executor). This makes Zuul more > difficult to use. > > Therefore, I propose that we drop support for the custom plugins and > rely entirely on bubblewrap to protect us from the first class of > issues. It will mean less work for us, make Zuul more useful, and, at > least with regard to the first class of problems, I don't think Zuul > will be less secure. > > That brings me to the second class. I have a feeling (which is > unsubstantiated), that there are probably enough holes in our plugins > which make the second class of actions possible too. For example, we > allow use of the URI module, but that means that any playbook can hit > any HTTP URL accessible by the executor. While it may be difficult for > a user to make a connection to a mysql server, it's not outside the > realm of possibility. > > Dropping support for our custom plugins would mean that Zuul has no > defense against this class of action. We would need to document that > and ensure that users know that if their executors run untrusted code > that they should have adequate application or network level protections > in place. In short: don't put an executor inside the squishy center of > your security perimeter. > One topic I keep hearing from time to time, is moving where the zuul-executor runs ansible-playbook from. 
Most commonly, I've heard people asking for that to be run inside a pod (off the static zuul-executor host). I understand in our case it is a little more difficult then that, with how we deal with git repo state. Could you speak to if that would provide a better / safer / other place for ansible-playbook to run? > Before we do this, we need to make sure that Zuul itself doesn't impose > any constraints that contradict this. The executor only uses Gearman to > communicate with the rest of Zuul, and it can adequately be secured. > Before we add any features which require the executor to communicate via > ZooKeeper, we should make sure that we support zookeeper auth. > > What do folks think? Is this a reasonable change to make, or should we > continue supporting the custom plugins? > I do like the additional layer of protection we added to ansible, but not really surprised we are wanting to remove it. As we get better at doing things with zuul, we've needed to relax the restrictions on some modules to use (eg: add_host / known_hosts recently). What would it look like, if a zuul operator wanted to manage this additional security themselves? Or is that something we'd even consider. For example, in zuul 3.3.1[1] we released a point release with additional filtering for add_host and SSH, we'd now be saying this was actually okay, but still could be an attack vector right? Are connection plugins restrictions still in place or will they also be removed? [1] https://zuul-ci.org/docs/zuul/releasenotes.html#relnotes-3-3-1-security-issues > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Mon Mar 11 23:05:04 2019 From: corvus at inaugust.com (James E. Blair) Date: Mon, 11 Mar 2019 16:05:04 -0700 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <20190311220846.GA22574@localhost.localdomain> (Paul Belanger's message of "Mon, 11 Mar 2019 18:08:46 -0400") References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> Message-ID: <877ed5hv1b.fsf@meyer.lemoncheese.net> Paul Belanger writes: > On Mon, Mar 11, 2019 at 01:47:26PM -0700, James E. Blair wrote: ... >> Broadly speaking, there are two classes of actions we might want to >> block: >> >> 1) Actions which may expose private information stored on the executor >> (including information used by Zuul itself or other jobs), or otherwise >> compromise the security of the executor itself (so that future jobs can >> not be trusted). >> >> 2) Actions which may allow a user to take advantage of privileges >> afforded to the executor host itself. For example, if the executor is >> behind a network firewall and therefore can access unprotected network >> services. > One topic I keep hearing from time to time, is moving where the > zuul-executor runs ansible-playbook from. Most commonly, I've heard > people asking for that to be run inside a pod (off the static > zuul-executor host). I understand in our case it is a little more > difficult then that, with how we deal with git repo state. Could you > speak to if that would provide a better / safer / other place for > ansible-playbook to run? With respect to the first class, what you describe would not be significantly different. We already run Ansible in a container; running it in a different container would not change much. 
With respect to the second, it would mean that an operator would be able to worry less about what the Zuul executor has access to, but would need to ensure that whatever off-host non-Zuul executor doesn't have access to anything it shouldn't. Basically, it moves the problem. As long as the things that Zuul requires access to (e.g., gearman, zookeeper) can be protected, I don't think it's a significant difference from a security POV. Now is a good time to clarify that there are *significant* technical problems and potential loss of functionality with that approach, and there are no current plans to support it. > I do like the additional layer of protection we added to ansible, but > not really surprised we are wanting to remove it. As we get better at > doing things with zuul, we've needed to relax the restrictions on some > modules to use (eg: add_host / known_hosts recently). What would it look > like, if a zuul operator wanted to manage this additional security > themselves? Or is that something we'd even consider. For example, in > zuul 3.3.1[1] we released a point release with additional filtering for > add_host and SSH, we'd now be saying this was actually okay, but still > could be an attack vector right? The vulnerability there is running arbitrary code on the executor, and, if we decide to proceed with this, we would be making the decision that is no longer a vulnerability but is a feature. Given that executing arbitrary code on a machine is frequently a path privilege escalation, it might be worth calling that out explicitly. Perhaps that should be a third class of action we're currently trying to prevent. It's certainly something we paid attention to when we added bubblewrap: https://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3-executor-security.html#local-privilege-escalation-lpe > Are connection plugins restrictions still in place or will they also be > removed? I imagine we would remove them too. -Jim From tdecacqu at redhat.com Tue Mar 12 03:09:24 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 12 Mar 2019 12:09:24 +0900 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <877ed5hv1b.fsf@meyer.lemoncheese.net> References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> <877ed5hv1b.fsf@meyer.lemoncheese.net> Message-ID: <87lg1k3i1n.tristanC@fedora> On Mon, Mar 11, 2019 at 16:05 James E. Blair wrote: > Paul Belanger writes: > >> On Mon, Mar 11, 2019 at 01:47:26PM -0700, James E. Blair wrote: > ... > >>> Broadly speaking, there are two classes of actions we might want to >>> block: >>> >>> 1) Actions which may expose private information stored on the executor >>> (including information used by Zuul itself or other jobs), or otherwise >>> compromise the security of the executor itself (so that future jobs can >>> not be trusted). >>> >>> 2) Actions which may allow a user to take advantage of privileges >>> afforded to the executor host itself. For example, if the executor is >>> behind a network firewall and therefore can access unprotected network >>> services. > >> One topic I keep hearing from time to time, is moving where the >> zuul-executor runs ansible-playbook from. Most commonly, I've heard >> people asking for that to be run inside a pod (off the static >> zuul-executor host). I understand in our case it is a little more >> difficult then that, with how we deal with git repo state. Could you >> speak to if that would provide a better / safer / other place for >> ansible-playbook to run? 
> > With respect to the first class, what you describe would not be > significantly different. We already run Ansible in a container; running > it in a different container would not change much. > Though Zuul executor job containers are missing cgroup, network namespace and seccomp filter. And we assume the executor process can create containers, which may not be possible in some restricted environment. I wish we would consider using external pods to run jobs, this would removes the executor privilege requirement and simplifies resources scaling. > With respect to the second, it would mean that an operator would be able > to worry less about what the Zuul executor has access to, but would need > to ensure that whatever off-host non-Zuul executor doesn't have access > to anything it shouldn't. Basically, it moves the problem. As long as > the things that Zuul requires access to (e.g., gearman, zookeeper) can > be protected, I don't think it's a significant difference from a > security POV. > Note that ZooKeeper access can not be protected without a fix for https://issues.apache.org/jira/browse/ZOOKEEPER-2937 (at least without network isolation between the executor service and the job tasks). > Now is a good time to clarify that there are *significant* technical > problems and potential loss of functionality with that approach, and > there are no current plans to support it. > >> I do like the additional layer of protection we added to ansible, but >> not really surprised we are wanting to remove it. As we get better at >> doing things with zuul, we've needed to relax the restrictions on some >> modules to use (eg: add_host / known_hosts recently). What would it look >> like, if a zuul operator wanted to manage this additional security >> themselves? Or is that something we'd even consider. For example, in >> zuul 3.3.1[1] we released a point release with additional filtering for >> add_host and SSH, we'd now be saying this was actually okay, but still >> could be an attack vector right? > > The vulnerability there is running arbitrary code on the executor, and, > if we decide to proceed with this, we would be making the decision that > is no longer a vulnerability but is a feature. > That sounds like rather risky, especially since executor are handling job's secrets. How are we going to prevent one play to not taint the one that are protected? Regards, -Tristan > Given that executing arbitrary code on a machine is frequently a path > privilege escalation, it might be worth calling that out explicitly. > Perhaps that should be a third class of action we're currently trying to > prevent. > > It's certainly something we paid attention to when we added bubblewrap: > > https://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3-executor-security.html#local-privilege-escalation-lpe > >> Are connection plugins restrictions still in place or will they also be >> removed? > > I imagine we would remove them too. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From joshua.hesketh at gmail.com Tue Mar 12 03:53:04 2019 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Tue, 12 Mar 2019 14:53:04 +1100 Subject: Zuul project update 2019-03-11 In-Reply-To: <877ed5kw5e.fsf@meyer.lemoncheese.net> References: <877ed5kw5e.fsf@meyer.lemoncheese.net> Message-ID: On Tue, Mar 12, 2019 at 7:13 AM James E. Blair wrote: > Hi, > > Since the last email update, we merged the Multi-Ansible spec: > > > https://zuul-ci.org/docs/zuul/developer/specs/multiple-ansible-versions.html > > Tristan worked on adding a job filter to the trigger event, as discussed > previously on this list. The webtrigger patch is now based on that. > Tristan also continued working on zuul-runner and a set of patches to > support nodes with python3 only. > > I finally finished the changes related to buildset docker registries > (also mentioned previously on this list). In the end, the system looks > to be easy to use, but there are a few moving parts. My next task is to > document how to set such a system up. > > ****************************** > > There are a lot of outstanding changes, and I'd like to attempt to > prioritize and sequence them to help us work through them together: > > 1) The multi-ansible series > > While this is newer than the others on the list, it does address > significant technical debt. It would be really good to get this > finished and released before Ansible 2.5 goes out of maintenance > (it's the oldest supported Ansible and only receives security fixes). > > 2) The zuul-runner series > > This has been in development for a long time, and has a significant > impact on the executor. It will almost certainly conflict with the > multi-ansible series, so let's work on getting this merged after > that. > The main impact that this series has on the executor is a refactoring of the server into a common library. Other than moving parts of the code into a new file, the code itself is largely untouched (this is deliberate to help with reviewing). Each change to the executor server without this merged means that we have to redo the refactoring or reapply patches to the new place. This can be a significant overhead. Since this refactoring is ready (specifically the split into a common lib, rather than the whole runner itself), I would like to request that we review and merge it before working on the multi-ansible series. If we find significant problems during review with the refactoring approach then I am okay with postponing it for the more important work. However, at the moment I would like to avoid creating unnecessary maintenance work since this should be ready now in my opinion. I am specifically referring to the following patches: 607077, 607078, 607079. Thanks, Josh > > 3) The web authentication spec > > This has been in development for a while, and folks working on it > recently signaled it's ready for review. This shouldn't conflict > with anything ahead of it, so we can review it as time permits. > > 4) The web authentication series > > Once the spec lands, we can review the implementation which has been > developing in parallel. > > Of course, there's lots of other stuff out there we can review as well, > and anyone is free to review anything they want. :) However, I think > if we agree to prioritize these major efforts as I suggest, we will be > able to focus on them together and make the most efficient progress. 
> > Help write the next update email by updating > > https://etherpad.openstack.org/p/zuul-update-email > > Thanks, > > Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Mar 12 15:57:14 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 12 Mar 2019 08:57:14 -0700 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <87lg1k3i1n.tristanC@fedora> (Tristan Cacqueray's message of "Tue, 12 Mar 2019 12:09:24 +0900") References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> <877ed5hv1b.fsf@meyer.lemoncheese.net> <87lg1k3i1n.tristanC@fedora> Message-ID: <87r2bcf5lx.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > I wish we would consider using external pods to run jobs, this would > removes the executor privilege requirement and simplifies resources > scaling. It adds the requirement of a COE, and one of Zuul's strengths is that it can operate in any environment, bare metal, virtual machine, or container. There's also the problem of the git repo cache that Paul mentioned. That is far from trivial. Regarding the container privilege requirement, the best way to remove that would be for someone to identify what the problem is on a technical level. Right now, we know that we have to set "privileged: true" in some situations. Why? What specific technical action performed by bwrap requires that? What capabilities? What seccomp rules? Basically, what are all of the changes required to get bwrap to work in openshift. If we knew that, we could look at potential changes to bwrap or future features of the kernel or openshift which may ameliorate the problem. If you're really interested in this, then please help with that. Understanding the problem is an important first step. >> With respect to the second, it would mean that an operator would be able >> to worry less about what the Zuul executor has access to, but would need >> to ensure that whatever off-host non-Zuul executor doesn't have access >> to anything it shouldn't. Basically, it moves the problem. As long as >> the things that Zuul requires access to (e.g., gearman, zookeeper) can >> be protected, I don't think it's a significant difference from a >> security POV. >> > > Note that ZooKeeper access can not be protected without a fix for > https://issues.apache.org/jira/browse/ZOOKEEPER-2937 (at least without > network isolation between the executor service and the job tasks). Indeed, we should clarify this. I don't think we can drop our plugins or proceed with a scale-out scheduler if we can't prevent anonymous zk reads. The Jira report involves a custom authentication plugin. Have you tested this with the system we'd be using? >> The vulnerability there is running arbitrary code on the executor, and, >> if we decide to proceed with this, we would be making the decision that >> is no longer a vulnerability but is a feature. >> > > That sounds like rather risky, especially since executor are handling > job's secrets. How are we going to prevent one play to not taint the one > that are protected? That's what bubblewrap does for us today. -Jim From corvus at inaugust.com Tue Mar 12 16:42:16 2019 From: corvus at inaugust.com (James E. 
Blair) Date: Tue, 12 Mar 2019 09:42:16 -0700 Subject: Zuul project update 2019-03-11 In-Reply-To: (Joshua Hesketh's message of "Tue, 12 Mar 2019 14:53:04 +1100") References: <877ed5kw5e.fsf@meyer.lemoncheese.net> Message-ID: <874l88f3iv.fsf@meyer.lemoncheese.net> Joshua Hesketh writes: > On Tue, Mar 12, 2019 at 7:13 AM James E. Blair wrote: > >> 1) The multi-ansible series >> >> While this is newer than the others on the list, it does address >> significant technical debt. It would be really good to get this >> finished and released before Ansible 2.5 goes out of maintenance >> (it's the oldest supported Ansible and only receives security fixes). >> >> 2) The zuul-runner series >> >> This has been in development for a long time, and has a significant >> impact on the executor. It will almost certainly conflict with the >> multi-ansible series, so let's work on getting this merged after >> that. > > The main impact that this series has on the executor is a refactoring of > the server into a common library. Other than moving parts of the code into > a new file, the code itself is largely untouched (this is deliberate to > help with reviewing). Each change to the executor server without this > merged means that we have to redo the refactoring or reapply patches to the > new place. This can be a significant overhead. > > Since this refactoring is ready (specifically the split into a common lib, > rather than the whole runner itself), I would like to request that we > review and merge it before working on the multi-ansible series. If we find > significant problems during review with the refactoring approach then I am > okay with postponing it for the more important work. However, at the moment > I would like to avoid creating unnecessary maintenance work since this > should be ready now in my opinion. > > I am specifically referring to the following patches: 607077, 607078, > 607079. I understand what I'm asking, and I agree it's unfortunate. The inverse is true too -- if we landed zuul-runner first, we would need to rework the multi-ansible patches. I think there are two inputs into this decision: precedence by age, and precedence by need. I agree that the zuul-runner series has been in development longer. >From that perspective, it should have precedence by age. Ansible 2.8 is beginning its freeze process. It is scheduled to release on May 2. On that date, 2.5 will be out of support, and that will be very bad for Zuul. We need to merge the multi-ansible support and make a release ASAP, to give people time to incorporate it into their deployment processes and upgrade (which may involve fixing jobs). Ideally, that time frame would have been much longer than 7 weeks. From that perspective, the multi-ansible stack satisfies the greater need. There's a bit more too it than that though. The multi-ansible stack changes how the executor runs Ansible. That is certainly going to have an impact on the zuul-runner stack. So either zuul-runner needs more work to adapt to how the executor now deals with multiple Ansible versions, or the multi-ansible stack would need more work to adapt to local execution of Ansible. Whichever goes second is going to have a delay. Given the already short timeframe before Zuul is left requiring an unsupported version of Ansible, I think it's better for everyone if the multi-ansible support lands first. 
The tradeoff here though is that we should all recognize the extra work we're asking of the authors of the zuul-runner patches, and once multi-ansible is out the door, we focus on getting those merged before any other significant changes. -Jim From clint at fewbar.com Tue Mar 12 16:57:07 2019 From: clint at fewbar.com (Clint Byrum) Date: Tue, 12 Mar 2019 09:57:07 -0700 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <87lg1k3i1n.tristanC@fedora> References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> <877ed5hv1b.fsf@meyer.lemoncheese.net> <87lg1k3i1n.tristanC@fedora> Message-ID: <155240982758.16268.10199525229602449261@clint-ThinkPad-X1-Carbon-6th> Quoting Tristan Cacqueray (2019-03-11 20:09:24) > > On Mon, Mar 11, 2019 at 16:05 James E. Blair wrote: > > Paul Belanger writes: > > > >> On Mon, Mar 11, 2019 at 01:47:26PM -0700, James E. Blair wrote: > > ... > > > >>> Broadly speaking, there are two classes of actions we might want to > >>> block: > >>> > >>> 1) Actions which may expose private information stored on the executor > >>> (including information used by Zuul itself or other jobs), or otherwise > >>> compromise the security of the executor itself (so that future jobs can > >>> not be trusted). > >>> > >>> 2) Actions which may allow a user to take advantage of privileges > >>> afforded to the executor host itself. For example, if the executor is > >>> behind a network firewall and therefore can access unprotected network > >>> services. > > > >> One topic I keep hearing from time to time, is moving where the > >> zuul-executor runs ansible-playbook from. Most commonly, I've heard > >> people asking for that to be run inside a pod (off the static > >> zuul-executor host). I understand in our case it is a little more > >> difficult then that, with how we deal with git repo state. Could you > >> speak to if that would provide a better / safer / other place for > >> ansible-playbook to run? > > > > With respect to the first class, what you describe would not be > > significantly different. We already run Ansible in a container; running > > it in a different container would not change much. > > > > Though Zuul executor job containers are missing cgroup, network namespace > and seccomp filter. And we assume the executor process can create > containers, which may not be possible in some restricted environment. > A seccomp filter would need to be created for every potential job. Any attempt to make a generic one will have limited benefit as it won't really be blocking much. What problem that ansible restrictions prevent now, would be prevented by cgroups and/or network namespaces? Also USER_NS has been in mainline kernels for *years* now. I'm comfortable with that assumption. We also require Python 3.5+, which isn't in a lot of old OS's, so I'd like to think we can continue the aggressive stance on dependencies. > I wish we would consider using external pods to run jobs, this would > removes the executor privilege requirement and simplifies resources > scaling. > Not sure anybody has suggested this isn't possible, but I think what's on the table now is removing the Ansible restrictions without doing external pod execution. Are you suggesting pod execution should be added first? Also there is no executor privilege requirement if your kernel supports USER_NS. 
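For anyone who wants to check a particular host or pod, a rough probe (hypothetical, not something Zuul ships) is to simply try creating the user and mount namespaces that an unprivileged bubblewrap needs:

    import ctypes
    import os
    import sys

    # Can an unprivileged process here create user + mount namespaces?
    CLONE_NEWNS = 0x00020000
    CLONE_NEWUSER = 0x10000000

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    pid = os.fork()
    if pid == 0:
        # Probe in a child so the parent's namespaces stay untouched.
        if libc.unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0:
            print("unshare failed:", os.strerror(ctypes.get_errno()))
            os._exit(1)
        print("unprivileged user+mount namespaces are available")
        os._exit(0)
    sys.exit(os.WEXITSTATUS(os.waitpid(pid, 0)[1]))

If that fails with EPERM inside a container, it is usually the runtime's default seccomp/capability profile rather than the kernel itself.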
> > With respect to the second, it would mean that an operator would be able > > to worry less about what the Zuul executor has access to, but would need > > to ensure that whatever off-host non-Zuul executor doesn't have access > > to anything it shouldn't. Basically, it moves the problem. As long as > > the things that Zuul requires access to (e.g., gearman, zookeeper) can > > be protected, I don't think it's a significant difference from a > > security POV. > > > > Note that ZooKeeper access can not be protected without a fix for > https://issues.apache.org/jira/browse/ZOOKEEPER-2937 (at least without > network isolation between the executor service and the job tasks). > What a bummer, I didn't know about that. I believe the workaround for this is to put a TLS terminator in front of Zookeeper that enforces client cert auth. Seems like it would be worth our time to dust off that PR and get it merged and released though. > > Now is a good time to clarify that there are *significant* technical > > problems and potential loss of functionality with that approach, and > > there are no current plans to support it. > > > >> I do like the additional layer of protection we added to ansible, but > >> not really surprised we are wanting to remove it. As we get better at > >> doing things with zuul, we've needed to relax the restrictions on some > >> modules to use (eg: add_host / known_hosts recently). What would it look > >> like, if a zuul operator wanted to manage this additional security > >> themselves? Or is that something we'd even consider. For example, in > >> zuul 3.3.1[1] we released a point release with additional filtering for > >> add_host and SSH, we'd now be saying this was actually okay, but still > >> could be an attack vector right? > > > > The vulnerability there is running arbitrary code on the executor, and, > > if we decide to proceed with this, we would be making the decision that > > is no longer a vulnerability but is a feature. > > > > That sounds like rather risky, especially since executor are handling > job's secrets. How are we going to prevent one play to not taint the one > that are protected? > The current restrictions aren't sufficient anyway. You can write your own Ansible module and do whatever you want that isn't an Ansible action. IMO doing things at this granular level is a neverending time sink, and that's what Jim is suggesting we end: Trust bubblewrap, lock it down further if necessary, and look at your executors for what they are: dangerous remote code executors that need to be watched and controlled. From shrews at redhat.com Tue Mar 12 17:48:10 2019 From: shrews at redhat.com (David Shrewsbury) Date: Tue, 12 Mar 2019 13:48:10 -0400 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <87r2bcf5lx.fsf@meyer.lemoncheese.net> References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> <877ed5hv1b.fsf@meyer.lemoncheese.net> <87lg1k3i1n.tristanC@fedora> <87r2bcf5lx.fsf@meyer.lemoncheese.net> Message-ID: On Tue, Mar 12, 2019 at 11:57 AM James E. Blair wrote: [snip] >> With respect to the second, it would mean that an operator would be able > >> to worry less about what the Zuul executor has access to, but would need > >> to ensure that whatever off-host non-Zuul executor doesn't have access > >> to anything it shouldn't. Basically, it moves the problem. 
As long as > >> the things that Zuul requires access to (e.g., gearman, zookeeper) can > >> be protected, I don't think it's a significant difference from a > >> security POV. > >> > > > > Note that ZooKeeper access can not be protected without a fix for > > https://issues.apache.org/jira/browse/ZOOKEEPER-2937 (at least without > > network isolation between the executor service and the job tasks). > > Indeed, we should clarify this. I don't think we can drop our plugins > or proceed with a scale-out scheduler if we can't prevent anonymous zk > reads. The Jira report involves a custom authentication plugin. Have > you tested this with the system we'd be using? > > FYI, I did some quick local testing and I don't believe the standard digest/sasl mechanisms are affected. An unauthenticated client can connect and do CRUD ops, but cannot affect any znodes that have appropriate ACLs set on them. So as long as we set the ACLs correctly, I don't see any issue. This will mean we'll need to consider an upgrade path from non-ACL nodes to ACL nodes, but it looks like kazoo provides us a set_acls() API call we use to create a tool to help with that. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Tue Mar 12 18:09:38 2019 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Tue, 12 Mar 2019 19:09:38 +0100 Subject: Should we drop our custom Ansible plugins? In-Reply-To: References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> <877ed5hv1b.fsf@meyer.lemoncheese.net> <87lg1k3i1n.tristanC@fedora> <87r2bcf5lx.fsf@meyer.lemoncheese.net> Message-ID: Hi all, On Tue, 12 Mar 2019, 18:48 David Shrewsbury, wrote: > > > On Tue, Mar 12, 2019 at 11:57 AM James E. Blair > wrote: > > [snip] > > >> With respect to the second, it would mean that an operator would be able >> >> to worry less about what the Zuul executor has access to, but would >> need >> >> to ensure that whatever off-host non-Zuul executor doesn't have access >> >> to anything it shouldn't. Basically, it moves the problem. As long as >> >> the things that Zuul requires access to (e.g., gearman, zookeeper) can >> >> be protected, I don't think it's a significant difference from a >> >> security POV. >> >> >> > >> > Note that ZooKeeper access can not be protected without a fix for >> > https://issues.apache.org/jira/browse/ZOOKEEPER-2937 (at least without >> > network isolation between the executor service and the job tasks). >> >> Indeed, we should clarify this. I don't think we can drop our plugins >> or proceed with a scale-out scheduler if we can't prevent anonymous zk >> reads. The Jira report involves a custom authentication plugin. Have >> you tested this with the system we'd be using? >> >> > FYI, I did some quick local testing and I don't believe the standard > digest/sasl mechanisms are affected. An unauthenticated client can connect > and do CRUD ops, but cannot affect any znodes that have appropriate ACLs > set on them. So as long as we set the ACLs correctly, I don't see any > issue. This will mean we'll need to consider an upgrade path from non-ACL > nodes to ACL nodes, but it looks like kazoo provides us a set_acls() API > call we use to create a tool to help with that. > Doesn't this still leave a DDos or overfill attacks possible then? On the other hand I would really like to have possibility of keeping and using in-project custom plugins (i.e. 
a project executes some playbooks and uses an in-project plugin to report statistics to some defined place). This is something I wanted to play with. I haven't tried it yet, but this mail raised concerns in me.

Artem
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tomasz.szandala at gmail.com Tue Mar 12 22:15:17 2019
From: tomasz.szandala at gmail.com (Tomasz Szandała)
Date: Tue, 12 Mar 2019 23:15:17 +0100
Subject: Zuul generic jobs
Message-ID:

Greetings,
I have a project that requires compilation for different targets. The commands are the same, but each target has to be built separately. I have seen how OpenStack does it:
https://github.com/openstack-infra/openstack-zuul-jobs/blob/master/zuul.d/jobs.yaml
so dozens of jobs are defined separately even if they differ only by a single element.
My idea was to define a job, compilation, in which an Ansible playbook determines which target to build based on the job name:

(compilation.yaml - playbook)
- name: Set hardware and technology
  set_fact:
    hardware: "{{ zuul.job.split('.')[1] }}"
    technology: "{{ zuul.job.split('.')[2] }}"

Now in the project definition I can have something like this:
- project:
    verification:
      jobs:
        - compilation.x86_64.old:
            parent: compilation
        - compilation.i686.old:
            parent: compilation
        - compilation.i686.future:
            parent: compilation

But Zuul says that compilation.x86_64.old is not defined.
In other words: I want to define a job inside the project definition. That way, adding a new variant would mean modifying only 2 lines.
Is there any reason why this is impossible?
Kind regards,
Tomasz
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jan.kundrat at cesnet.cz Tue Mar 12 23:09:22 2019
From: jan.kundrat at cesnet.cz (Jan Kundrát)
Date: Wed, 13 Mar 2019 00:09:22 +0100
Subject: Zuul generic jobs
In-Reply-To:
References:
Message-ID:

> My idea was to define a job, compilation, in which an Ansible
> playbook determines which target to build based on the job name:

Hi,
we're doing something similar. In our use case, the difference among all the jobs is just the choice of the compiler (gcc vs. clang) and some build flags (whether to do a plain build, or -fsanitize=address for ASAN, or ASAN+UBSAN, or TSAN, etc). We are also doing this jobname -> env mapping within a per-repo shell script, but that's probably just legacy at this point.

> (compilation.yaml - playbook)
> - name: Set hardware and technology
>   set_fact:
>     hardware: "{{ zuul.job.split('.')[1] }}"
>     technology: "{{ zuul.job.split('.')[2] }}"
>
> Now in the project definition I can have something like this:
> - project:
>     verification:
>       jobs:
>         - compilation.x86_64.old:
>             parent: compilation
>         - compilation.i686.old:
>             parent: compilation
>         - compilation.i686.future:
>             parent: compilation
>
> But Zuul says that compilation.x86_64.old is not defined.

That's because this statement only assigns a job name to a particular project and a pipeline. The job must be defined somewhere -- perhaps in your foo-jobs repository, or right in your leaf repo. That "definition" can very well be a two-liner, assigning only "name" and "parent" as you're doing here. The actual job "use" within a project is a one-liner unless you need something special.

You can take a look at how we are doing that: the job definition [1], one project [2] (which generates artifacts), and a second project [3] (which consumes artifacts from the first project).
You will need Zuul from git for artifacts in case you want to give this a try. > In other words: I want to define job inside project definition. > This will mean that adding new variant I can only modify only 2 > lines. > > Is there any reason why it is impossible? Jobs can be defined within a project scope -- just define them via a top-level `jobs:` dict. However, there's a big catch: they all live in a shared namespace (per each tenant). This means that if you have a job called, say, "build-centos7-gcc", it must be defined exactly once. You cannot have project-A and project-B both defining such a job within their .zuul.yaml, unfortunately. This leads to some repetition (see the YAML references in [2], for example), but I do not think that this is a big inconvenience. Hope this helps, Jan [1] https://gerrit.cesnet.cz/plugins/gitiles/ci/zuul-jobs-cesnet/+/master/zuul.yaml#31 [2] https://gerrit.cesnet.cz/plugins/gitiles/CzechLight/dependencies/+/master/.zuul.yaml#4 [3] https://gerrit.cesnet.cz/plugins/gitiles/CzechLight/netconf-cli/+/master/.zuul.yaml From tdecacqu at redhat.com Wed Mar 13 03:05:08 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 13 Mar 2019 12:05:08 +0900 Subject: Should we drop our custom Ansible plugins? In-Reply-To: References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> <877ed5hv1b.fsf@meyer.lemoncheese.net> <87lg1k3i1n.tristanC@fedora> <87r2bcf5lx.fsf@meyer.lemoncheese.net> Message-ID: <877ed33257.tristanC@fedora> On Tue, Mar 12, 2019 at 19:09 Artem Goncharov wrote: > Hi all, > > On Tue, 12 Mar 2019, 18:48 David Shrewsbury, wrote: > >> >> >> On Tue, Mar 12, 2019 at 11:57 AM James E. Blair >> wrote: >> >> [snip] >> >> >> With respect to the second, it would mean that an operator would be able >>> >> to worry less about what the Zuul executor has access to, but would >>> need >>> >> to ensure that whatever off-host non-Zuul executor doesn't have access >>> >> to anything it shouldn't. Basically, it moves the problem. As long as >>> >> the things that Zuul requires access to (e.g., gearman, zookeeper) can >>> >> be protected, I don't think it's a significant difference from a >>> >> security POV. >>> >> >>> > >>> > Note that ZooKeeper access can not be protected without a fix for >>> > https://issues.apache.org/jira/browse/ZOOKEEPER-2937 (at least without >>> > network isolation between the executor service and the job tasks). >>> >>> Indeed, we should clarify this. I don't think we can drop our plugins >>> or proceed with a scale-out scheduler if we can't prevent anonymous zk >>> reads. The Jira report involves a custom authentication plugin. Have >>> you tested this with the system we'd be using? >>> >>> >> FYI, I did some quick local testing and I don't believe the standard >> digest/sasl mechanisms are affected. An unauthenticated client can connect >> and do CRUD ops, but cannot affect any znodes that have appropriate ACLs >> set on them. So as long as we set the ACLs correctly, I don't see any >> issue. This will mean we'll need to consider an upgrade path from non-ACL >> nodes to ACL nodes, but it looks like kazoo provides us a set_acls() API >> call we use to create a tool to help with that. >> > > Doesn't this still leave a DDos or overfill attacks possible then? > > On the other hand I would really like to have possibility of keeping and > using in-project custom plugins (i.e. project executes some playbooks and > use in-project plugin to report statistics to some defined place). 
This is > something I wanted to play with. I haven't tried it yet, but this mail > raised concerns in me. > Another concern is the /zookeeper nodes that would need extra protection. Moreover, note that ZooKeeper ACL are not like regular filesystem permission, they don't implement hierarchy. It may be possible that an anonymous user can create nodes in /nodepool/. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tdecacqu at redhat.com Wed Mar 13 03:35:46 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 13 Mar 2019 12:35:46 +0900 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <87r2bcf5lx.fsf@meyer.lemoncheese.net> References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> <877ed5hv1b.fsf@meyer.lemoncheese.net> <87lg1k3i1n.tristanC@fedora> <87r2bcf5lx.fsf@meyer.lemoncheese.net> Message-ID: <874l8730q5.tristanC@fedora> On Tue, Mar 12, 2019 at 08:57 James E. Blair wrote: > Tristan Cacqueray writes: > >> I wish we would consider using external pods to run jobs, this would >> removes the executor privilege requirement and simplifies resources >> scaling. > > It adds the requirement of a COE, and one of Zuul's strengths is that it > can operate in any environment, bare metal, virtual machine, or > container. > This could be an optional requirement implemented as an execution context driver. > There's also the problem of the git repo cache that Paul mentioned. > That is far from trivial. > > Regarding the container privilege requirement, the best way to remove > that would be for someone to identify what the problem is on a technical > level. Right now, we know that we have to set "privileged: true" in > some situations. Why? As explained in the previous thread about this (install Zuul on K8/openshift) the core issue is that recursive/nested containers are considered dangerous and container runtime like docker disable it by default. There are progress being made to at least be able to create new namespace inside a container and this is well documented in these slides: https://www.slideshare.net/AkihiroSuda/rootless-containers However, considering the bad track record of this feature, it may takes time before we can created nested namespace without explicitely using some sort of "privileged: true" or "seccomp: unconfined" options, which may be disabled in multi tenant COE like openshift. > What specific technical action performed by > bwrap requires that? What capabilities? What seccomp rules? > The first issue is clone(CLONE_NEWNS). You need the SYS_ADMIN capability. For example a regular user is not able to run "unshare --mount". Though you can create a user namespace (CLONE_NEWUSER) which grants the SYS_ADMIN capability, thus a regular user can run "unshare --user --mount". Unfortunately this has a lot of security implication. Then assuming you can create namespaces, you still need access to a proper /proc fs to complete the new mount namespace creation. Finally you have to overcome the default seccomp/apparmor/SELinux policy which usualy prevent above actions. 
More details about this are available in these issues: https://github.com/projectatomic/bubblewrap/issues/269 https://github.com/opencontainers/runc/issues/1658 And finally there seems to be a blocker for user namespace in K8/openshift: https://github.com/kubernetes/enhancements/issues/127 > Basically, what are all of the changes required to get bwrap to work in > openshift. > Basically, you need these permission today: - https://github.com/gtema/zuul-apb/blob/master/zuul_scc.yaml#L21-L23 If user namespace are available, then you could replace privileged by - enable (through seccomp) the creation of NEWNS|NEWUSER clone. - enable a proper /proc (e.g. unmasked /proc/sys /proc/kcore). - enable setuid/setgid cap for newuidmap/newgidmap needed by user ns creation. Then when K8S supports user namespace creation, you may be able to drop the setuid/setgit cap. At least, that is my understanding of the technical issues :-) Also please note that these only cover namespaces, which is a subset of what "container" usualy means, and bwrap doesn't support cgroup or network isolation. > If we knew that, we could look at potential changes to bwrap or future > features of the kernel or openshift which may ameliorate the problem. > > If you're really interested in this, then please help with that. > Understanding the problem is an important first step. > The core problem with the zuul executor process is that we are running different execution context inside a single pod, which goes against the design of COE environment: With containers you are supposed to split the application into multiple containers each doing one task as opposed to multiple different processes in the same container. This gives you the isolation. Thus I think nested/recursive containers is great for unprivileged image building, but it's not suitable for the zuul executor. [snip] >>> The vulnerability there is running arbitrary code on the executor, and, >>> if we decide to proceed with this, we would be making the decision that >>> is no longer a vulnerability but is a feature. >>> >> >> That sounds like rather risky, especially since executor are handling >> job's secrets. How are we going to prevent one play to not taint the one >> that are protected? > > That's what bubblewrap does for us today. > I meant, if we are going to enable trusted and untrusted playbook to run with the same workspace, then we may be in a position to use an external pod for the job. IIUC, the issue is that after an untrusted play we have to ensure the trusted content is not tainted, which is the main limitation with regard to implementing a k8s execution context driver. Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tdecacqu at redhat.com Wed Mar 13 03:35:48 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 13 Mar 2019 12:35:48 +0900 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <155240982758.16268.10199525229602449261@clint-ThinkPad-X1-Carbon-6th> References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> <877ed5hv1b.fsf@meyer.lemoncheese.net> <87lg1k3i1n.tristanC@fedora> <155240982758.16268.10199525229602449261@clint-ThinkPad-X1-Carbon-6th> Message-ID: <8736nr30q3.tristanC@fedora> On Tue, Mar 12, 2019 at 09:57 Clint Byrum wrote: > Quoting Tristan Cacqueray (2019-03-11 20:09:24) >> >> On Mon, Mar 11, 2019 at 16:05 James E. 
Blair wrote: >> > Paul Belanger writes: >> > >> >> On Mon, Mar 11, 2019 at 01:47:26PM -0700, James E. Blair wrote: >> > ... >> > >> >>> Broadly speaking, there are two classes of actions we might want to >> >>> block: >> >>> >> >>> 1) Actions which may expose private information stored on the executor >> >>> (including information used by Zuul itself or other jobs), or otherwise >> >>> compromise the security of the executor itself (so that future jobs can >> >>> not be trusted). >> >>> >> >>> 2) Actions which may allow a user to take advantage of privileges >> >>> afforded to the executor host itself. For example, if the executor is >> >>> behind a network firewall and therefore can access unprotected network >> >>> services. >> > >> >> One topic I keep hearing from time to time, is moving where the >> >> zuul-executor runs ansible-playbook from. Most commonly, I've heard >> >> people asking for that to be run inside a pod (off the static >> >> zuul-executor host). I understand in our case it is a little more >> >> difficult then that, with how we deal with git repo state. Could you >> >> speak to if that would provide a better / safer / other place for >> >> ansible-playbook to run? >> > >> > With respect to the first class, what you describe would not be >> > significantly different. We already run Ansible in a container; running >> > it in a different container would not change much. >> > >> >> Though Zuul executor job containers are missing cgroup, network namespace >> and seccomp filter. And we assume the executor process can create >> containers, which may not be possible in some restricted environment. >> > > A seccomp filter would need to be created for every potential job. Any > attempt to make a generic one will have limited benefit as it won't > really be blocking much. > Well the common seccomp filters usualy provides a good defensive layer that shouldn't be available to any given job (such as using SCTP protocol). Here is the one used by docker: https://docs.docker.com/v18.03/engine/security/seccomp/#significant-syscalls-blocked-by-the-default-profile > What problem that ansible restrictions prevent now, would be prevented > by cgroups and/or network namespaces? > Cgroup would let us limit the memory and cpu usage. Network namespaces would enable 2 jobs to bind the same port, and we could setup packet filter to prevent job network to access zuul control plane. > Also USER_NS has been in mainline kernels for *years* now. I'm > comfortable with that assumption. We also require Python 3.5+, which > isn't in a lot of old OS's, so I'd like to think we can continue the > aggressive stance on dependencies. > As explained in another mail, USER_NS is not enough to enable using bwrap in k8s. >> I wish we would consider using external pods to run jobs, this would >> removes the executor privilege requirement and simplifies resources >> scaling. >> > > Not sure anybody has suggested this isn't possible, but I think what's > on the table now is removing the Ansible restrictions without doing > external pod execution. Are you suggesting pod execution should be added > first? > I'm suggesting we could have a pod execution context driver that could be used in place of the bubblewrap one. > Also there is no executor privilege requirement if your kernel supports > USER_NS. > IIUC, you still need SETUID/SETGID cap to be able to complete the user namespace creation. 
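Although, if I read user_namespaces(7) correctly, once the new user namespace exists the process can install a single-entry mapping of its own uid/gid by itself; the setuid newuidmap/newgidmap helpers only become necessary to map ranges of ids. A standalone sketch (my reading of the man page, not Zuul code):

    import ctypes
    import os

    CLONE_NEWNS = 0x00020000
    CLONE_NEWUSER = 0x10000000
    uid, gid = os.getuid(), os.getgid()

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed")

    # A single-entry mapping of our own ids needs no setuid helper.
    with open("/proc/self/setgroups", "w") as f:
        f.write("deny")                 # must happen before writing gid_map
    with open("/proc/self/gid_map", "w") as f:
        f.write("%d %d 1" % (gid, gid))
    with open("/proc/self/uid_map", "w") as f:
        f.write("%d %d 1" % (uid, uid))
    print("mapped uid/gid without newuidmap/newgidmap")

So the extra caps seem to matter for the multi-uid case, but perhaps not for the single-uid sandbox bwrap sets up.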
Regards, -Tristan >> > With respect to the second, it would mean that an operator would be able >> > to worry less about what the Zuul executor has access to, but would need >> > to ensure that whatever off-host non-Zuul executor doesn't have access >> > to anything it shouldn't. Basically, it moves the problem. As long as >> > the things that Zuul requires access to (e.g., gearman, zookeeper) can >> > be protected, I don't think it's a significant difference from a >> > security POV. >> > >> >> Note that ZooKeeper access can not be protected without a fix for >> https://issues.apache.org/jira/browse/ZOOKEEPER-2937 (at least without >> network isolation between the executor service and the job tasks). >> > > What a bummer, I didn't know about that. > > I believe the workaround for this is to put a TLS terminator in front of > Zookeeper that enforces client cert auth. Seems like it would be worth > our time to dust off that PR and get it merged and released though. > >> > Now is a good time to clarify that there are *significant* technical >> > problems and potential loss of functionality with that approach, and >> > there are no current plans to support it. >> > >> >> I do like the additional layer of protection we added to ansible, but >> >> not really surprised we are wanting to remove it. As we get better at >> >> doing things with zuul, we've needed to relax the restrictions on some >> >> modules to use (eg: add_host / known_hosts recently). What would it look >> >> like, if a zuul operator wanted to manage this additional security >> >> themselves? Or is that something we'd even consider. For example, in >> >> zuul 3.3.1[1] we released a point release with additional filtering for >> >> add_host and SSH, we'd now be saying this was actually okay, but still >> >> could be an attack vector right? >> > >> > The vulnerability there is running arbitrary code on the executor, and, >> > if we decide to proceed with this, we would be making the decision that >> > is no longer a vulnerability but is a feature. >> > >> >> That sounds like rather risky, especially since executor are handling >> job's secrets. How are we going to prevent one play to not taint the one >> that are protected? >> > > The current restrictions aren't sufficient anyway. You can write your > own Ansible module and do whatever you want that isn't an Ansible > action. IMO doing things at this granular level is a neverending time > sink, and that's what Jim is suggesting we end: Trust bubblewrap, lock > it down further if necessary, and look at your executors for what they > are: dangerous remote code executors that need to be watched and > controlled. > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tdecacqu at redhat.com Wed Mar 13 05:42:05 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 13 Mar 2019 14:42:05 +0900 Subject: Zuul project update 2019-03-11 In-Reply-To: <874l88f3iv.fsf@meyer.lemoncheese.net> References: <877ed5kw5e.fsf@meyer.lemoncheese.net> <874l88f3iv.fsf@meyer.lemoncheese.net> Message-ID: <87zhpz1gb6.tristanC@fedora> On Tue, Mar 12, 2019 at 09:42 James E. Blair wrote: > Joshua Hesketh writes: > >> On Tue, Mar 12, 2019 at 7:13 AM James E. Blair wrote: >> >>> 1) The multi-ansible series >>> >>> While this is newer than the others on the list, it does address >>> significant technical debt. 
It would be really good to get this >>> finished and released before Ansible 2.5 goes out of maintenance >>> (it's the oldest supported Ansible and only receives security fixes). >>> >>> 2) The zuul-runner series >>> >>> This has been in development for a long time, and has a significant >>> impact on the executor. It will almost certainly conflict with the >>> multi-ansible series, so let's work on getting this merged after >>> that. >> >> The main impact that this series has on the executor is a refactoring of >> the server into a common library. Other than moving parts of the code into >> a new file, the code itself is largely untouched (this is deliberate to >> help with reviewing). Each change to the executor server without this >> merged means that we have to redo the refactoring or reapply patches to the >> new place. This can be a significant overhead. >> >> Since this refactoring is ready (specifically the split into a common lib, >> rather than the whole runner itself), I would like to request that we >> review and merge it before working on the multi-ansible series. If we find >> significant problems during review with the refactoring approach then I am >> okay with postponing it for the more important work. However, at the moment >> I would like to avoid creating unnecessary maintenance work since this >> should be ready now in my opinion. >> >> I am specifically referring to the following patches: 607077, 607078, >> 607079. > > I understand what I'm asking, and I agree it's unfortunate. The inverse > is true too -- if we landed zuul-runner first, we would need to rework > the multi-ansible patches. > > I think there are two inputs into this decision: precedence by age, and > precedence by need. > > I agree that the zuul-runner series has been in development longer. > From that perspective, it should have precedence by age. > > Ansible 2.8 is beginning its freeze process. It is scheduled to release > on May 2. On that date, 2.5 will be out of support, and that will be > very bad for Zuul. We need to merge the multi-ansible support and make > a release ASAP, to give people time to incorporate it into their > deployment processes and upgrade (which may involve fixing jobs). > Ideally, that time frame would have been much longer than 7 weeks. From > that perspective, the multi-ansible stack satisfies the greater need. > > There's a bit more too it than that though. The multi-ansible stack > changes how the executor runs Ansible. That is certainly going to have > an impact on the zuul-runner stack. So either zuul-runner needs more > work to adapt to how the executor now deals with multiple Ansible > versions, or the multi-ansible stack would need more work to adapt to > local execution of Ansible. > Multi-ansible doesn't have an impact on the zuul-runner stack since the executor code is not modified (at least by the review mentioned above). We'll just have to do yet another rebase by moving the code added in executor/server.py to executor/common.py. > Whichever goes second is going to have a delay. Given the already short > timeframe before Zuul is left requiring an unsupported version of > Ansible, I think it's better for everyone if the multi-ansible support > lands first. > > The tradeoff here though is that we should all recognize the extra work > we're asking of the authors of the zuul-runner patches, and once > multi-ansible is out the door, we focus on getting those merged before > any other significant changes. 
> > -Jim It's really unfortunate that some changes take so long to be reviewed as it's time consuming to context switch back to change proposed months ago. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tomasz.szandala at gmail.com Wed Mar 13 09:17:18 2019 From: tomasz.szandala at gmail.com (=?UTF-8?Q?Tomasz_Szanda=C5=82a?=) Date: Wed, 13 Mar 2019 10:17:18 +0100 Subject: Zuul generic jobs In-Reply-To: References: Message-ID: Thank You. However: based on example [2] I have: - project: verification: jobs: - meta_validation - compilation.i686.old: parent: compilation dependencies: - name: meta_validation And one job: - job: name: compilation run: playbooks/compilation.yaml parent: base Now Zuul reports to Gerrit: Zuul encountered a syntax error while parsing its configuration in the repo some_repo on branch master. The error was: Job compilation.i686.old not defined The error appears in the following project stanza: project: verification: jobs: - meta_validation - compilation.i686.old: (...) What am I doing wrong? śr., 13 mar 2019 o 00:09 Jan Kundrát napisał(a): > > My idea was to define job compilation, in which it is Ansible > > playbook that determines which target it should choose, based on > > job name: > > Hi, > we're doing something similar. In our use case, the difference among all > the jobs is just the choice of the compiler (gcc vs. clang) and some build > flags (whether to do a plain build, or -fsanitize=address for ASAN, or > ASAN+UBSAN, or TSAN, etc). We are also doing this jobname -> env mapping > within a per-repo shell script, but that's probably just legacy at this > point. > > > (compilation.yaml - playbook) > > - name: Set hardware and technology > > set_fact: > > hardware: "{{ zuul.job.split('.')[1] }}" > > technology: "{{ zuul.job.split('.')[2] }}" > > > > Now in project definition I can have something like this: > > - project: > > verification: > > jobs: > > - compilation.x86_64.old: > > parent: compilation > > - compilation.i686.old: > > parent: compilation > > - compilation.i686.future: > > parent: compilation > > > > But Zuul says, that compiation.x86_64.old is not defined. > > That's because this statement only assigns a job name to a particular > project and a pipeline. The job must be defined somewhere -- perhaps in > your foo-jobs repository, or right in your leaf repo. That "definition" > can > very well be a two liner, assigning only "name" and "parent" as you're > doing here. The actual job "use" within a project is a one-liner unless > you > need something special. > > You can take a look on how we are doing that: the job definition [1], one > project [2] (which are generates artifacts), and a second project [3] > (which consumes artifacts from the first project). > > You will need Zuul from git for artifacts in case you want to give this a > try. > > > In other words: I want to define job inside project definition. > > This will mean that adding new variant I can only modify only 2 > > lines. > > > > Is there any reason why it is impossible? > > Jobs can be defined within a project scope -- just define them via a > top-level `jobs:` dict. However, there's a big catch: they all live in a > shared namespace (per each tenant). This means that if you have a job > called, say, "build-centos7-gcc", it must be defined exactly once. 
You > cannot have project-A and project-B both defining such a job within their > .zuul.yaml, unfortunately. > > This leads to some repetition (see the YAML references in [2], for > example), but I do not think that this is a big inconvenience. > > Hope this helps, > Jan > > [1] > > https://gerrit.cesnet.cz/plugins/gitiles/ci/zuul-jobs-cesnet/+/master/zuul.yaml#31 > [2] > > https://gerrit.cesnet.cz/plugins/gitiles/CzechLight/dependencies/+/master/.zuul.yaml#4 > [3] > > https://gerrit.cesnet.cz/plugins/gitiles/CzechLight/netconf-cli/+/master/.zuul.yaml > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Wed Mar 13 13:55:59 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 13 Mar 2019 09:55:59 -0400 Subject: Zuul generic jobs In-Reply-To: References: Message-ID: <20190313135559.GA13660@localhost.localdomain> On Wed, Mar 13, 2019 at 10:17:18AM +0100, Tomasz Szandała wrote: > Thank You. > However: based on example [2] > I have: > > - project: > verification: > jobs: > - meta_validation > - compilation.i686.old: > parent: compilation > dependencies: > - name: meta_validation > > > And one job: > - job: > name: compilation > run: playbooks/compilation.yaml > parent: base > You also need to define the compilation.i686.old job: - job: name: compilation.i686.old parent: compilation Then you can assign it to your pipeline: - project: verification: jobs: - meta_validation - compilation.i686.old > > Now Zuul reports to Gerrit: > > Zuul encountered a syntax error while parsing its configuration in the repo > some_repo on branch master. The error was: > > Job compilation.i686.old not defined > > The error appears in the following project stanza: > > project: > verification: > jobs: > - meta_validation > - compilation.i686.old: (...) > > What am I doing wrong? > > śr., 13 mar 2019 o 00:09 Jan Kundrát napisał(a): > > > > My idea was to define job compilation, in which it is Ansible > > > playbook that determines which target it should choose, based on > > > job name: > > > > Hi, > > we're doing something similar. In our use case, the difference among all > > the jobs is just the choice of the compiler (gcc vs. clang) and some build > > flags (whether to do a plain build, or -fsanitize=address for ASAN, or > > ASAN+UBSAN, or TSAN, etc). We are also doing this jobname -> env mapping > > within a per-repo shell script, but that's probably just legacy at this > > point. > > > > > (compilation.yaml - playbook) > > > - name: Set hardware and technology > > > set_fact: > > > hardware: "{{ zuul.job.split('.')[1] }}" > > > technology: "{{ zuul.job.split('.')[2] }}" > > > > > > Now in project definition I can have something like this: > > > - project: > > > verification: > > > jobs: > > > - compilation.x86_64.old: > > > parent: compilation > > > - compilation.i686.old: > > > parent: compilation > > > - compilation.i686.future: > > > parent: compilation > > > > > > But Zuul says, that compiation.x86_64.old is not defined. > > > > That's because this statement only assigns a job name to a particular > > project and a pipeline. The job must be defined somewhere -- perhaps in > > your foo-jobs repository, or right in your leaf repo. That "definition" > > can > > very well be a two liner, assigning only "name" and "parent" as you're > > doing here. 
The actual job "use" within a project is a one-liner unless > > you > > need something special. > > > > You can take a look on how we are doing that: the job definition [1], one > > project [2] (which are generates artifacts), and a second project [3] > > (which consumes artifacts from the first project). > > > > You will need Zuul from git for artifacts in case you want to give this a > > try. > > > > > In other words: I want to define job inside project definition. > > > This will mean that adding new variant I can only modify only 2 > > > lines. > > > > > > Is there any reason why it is impossible? > > > > Jobs can be defined within a project scope -- just define them via a > > top-level `jobs:` dict. However, there's a big catch: they all live in a > > shared namespace (per each tenant). This means that if you have a job > > called, say, "build-centos7-gcc", it must be defined exactly once. You > > cannot have project-A and project-B both defining such a job within their > > .zuul.yaml, unfortunately. > > > > This leads to some repetition (see the YAML references in [2], for > > example), but I do not think that this is a big inconvenience. > > > > Hope this helps, > > Jan > > > > [1] > > > > https://gerrit.cesnet.cz/plugins/gitiles/ci/zuul-jobs-cesnet/+/master/zuul.yaml#31 > > [2] > > > > https://gerrit.cesnet.cz/plugins/gitiles/CzechLight/dependencies/+/master/.zuul.yaml#4 > > [3] > > > > https://gerrit.cesnet.cz/plugins/gitiles/CzechLight/netconf-cli/+/master/.zuul.yaml > > > > _______________________________________________ > > Zuul-discuss mailing list > > Zuul-discuss at lists.zuul-ci.org > > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Wed Mar 13 15:42:50 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 13 Mar 2019 08:42:50 -0700 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <874l8730q5.tristanC@fedora> (Tristan Cacqueray's message of "Wed, 13 Mar 2019 12:35:46 +0900") References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> <877ed5hv1b.fsf@meyer.lemoncheese.net> <87lg1k3i1n.tristanC@fedora> <87r2bcf5lx.fsf@meyer.lemoncheese.net> <874l8730q5.tristanC@fedora> Message-ID: <878sxidblx.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > On Tue, Mar 12, 2019 at 08:57 James E. Blair wrote: >> Tristan Cacqueray writes: >> >>> I wish we would consider using external pods to run jobs, this would >>> removes the executor privilege requirement and simplifies resources >>> scaling. >> >> It adds the requirement of a COE, and one of Zuul's strengths is that it >> can operate in any environment, bare metal, virtual machine, or >> container. >> > > This could be an optional requirement implemented as an execution > context driver. That's certainly something to consider, but if it's optional, then it doesn't provide an answer to the question I raised in this thread: Should we drop our custom Ansible plugins? So, setting aside the fact that you would like a COE driver as a new feature for other reasons, let's bring this back to topic: Do you think the extra protections of a full container system (i.e., more than bubblewrap) are necessary if we are to drop the custom Ansible plugins? >> There's also the problem of the git repo cache that Paul mentioned. >> That is far from trivial. 
>> >> Regarding the container privilege requirement, the best way to remove >> that would be for someone to identify what the problem is on a technical >> level. Right now, we know that we have to set "privileged: true" in >> some situations. Why? > > As explained in the previous thread about this (install Zuul on K8/openshift) > the core issue is that recursive/nested containers are considered > dangerous and container runtime like docker disable it by default. > > There are progress being made to at least be able to create new > namespace inside a container and this is well documented in these > slides: https://www.slideshare.net/AkihiroSuda/rootless-containers > > However, considering the bad track record of this feature, it may takes > time before we can created nested namespace without explicitely using > some sort of "privileged: true" or "seccomp: unconfined" options, > which may be disabled in multi tenant COE like openshift. > >> What specific technical action performed by >> bwrap requires that? What capabilities? What seccomp rules? >> > > The first issue is clone(CLONE_NEWNS). You need the SYS_ADMIN capability. > For example a regular user is not able to run "unshare --mount". > > Though you can create a user namespace (CLONE_NEWUSER) which grants the > SYS_ADMIN capability, thus a regular user can run "unshare --user --mount". > Unfortunately this has a lot of security implication. > > Then assuming you can create namespaces, you still need access to a proper > /proc fs to complete the new mount namespace creation. > > Finally you have to overcome the default seccomp/apparmor/SELinux policy > which usualy prevent above actions. > > More details about this are available in these issues: > https://github.com/projectatomic/bubblewrap/issues/269 > https://github.com/opencontainers/runc/issues/1658 > > And finally there seems to be a blocker for user namespace in K8/openshift: > https://github.com/kubernetes/enhancements/issues/127 > >> Basically, what are all of the changes required to get bwrap to work in >> openshift. >> > > Basically, you need these permission today: > - https://github.com/gtema/zuul-apb/blob/master/zuul_scc.yaml#L21-L23 > If user namespace are available, then you could replace privileged by > - enable (through seccomp) the creation of NEWNS|NEWUSER clone. > - enable a proper /proc (e.g. unmasked /proc/sys /proc/kcore). > - enable setuid/setgid cap for newuidmap/newgidmap needed by user ns creation. > > Then when K8S supports user namespace creation, you may be able to drop > the setuid/setgit cap. > > At least, that is my understanding of the technical issues :-) > > Also please note that these only cover namespaces, which is a subset of > what "container" usualy means, and bwrap doesn't support cgroup or > network isolation. That is all very helpful information, thanks! >> If we knew that, we could look at potential changes to bwrap or future >> features of the kernel or openshift which may ameliorate the problem. >> >> If you're really interested in this, then please help with that. >> Understanding the problem is an important first step. >> > > The core problem with the zuul executor process is that we are running > different execution context inside a single pod, which goes against the > design of COE environment: > With containers you are supposed to split the application into multiple > containers each doing one task as opposed to multiple different > processes in the same container. This gives you the isolation. 
> > Thus I think nested/recursive containers is great for unprivileged image > building, but it's not suitable for the zuul executor. I disagree with your interpretations, but let's not discuss this as a point of philosophy. I don't think that's going to bring us to a resolution. We can look at this strictly from a technical perspective: * We will continue to support non-COE Zuul deployments as first-class. * Zuul works as designed today in any COE which allows privileged containers (including dedicated OpenShift so configured). * Executor support for COE could provide some additional advantages in scalability, security, and simplified management. * Executor support for COE could allow Zuul to be used in a shared OpenShift. Those are reasons enough to consider it. But again, maybe we could do that separately instead of diverting this thread off course? > > [snip] > >>>> The vulnerability there is running arbitrary code on the executor, and, >>>> if we decide to proceed with this, we would be making the decision that >>>> is no longer a vulnerability but is a feature. >>>> >>> >>> That sounds like rather risky, especially since executor are handling >>> job's secrets. How are we going to prevent one play to not taint the one >>> that are protected? >> >> That's what bubblewrap does for us today. >> > > I meant, if we are going to enable trusted and untrusted playbook to run > with the same workspace, then we may be in a position to use an external > pod for the job. > IIUC, the issue is that after an untrusted play we have to ensure the > trusted content is not tainted, which is the main limitation with regard > to implementing a k8s execution context driver. I am not proposing that we change anything about how trusted and untrusted playbooks are kept separate. They run with the same workspace, but bubblewrap ensures that secrets are only accessible to the individual playbook they're associated with. The start of the text you quoted was in regards to dropping our custom Ansible plugins, but it seems to me that in your responses you have interpreted it in the context of executor COE support. That's why the conversation is so confused. The original question I asked is very important, and it's worth us focusing on it. It's fine that, as part of that, we ask "would executor COE support help address the issue?" the answer is that since we will not *require* it, it can not help. So let's set that aside now and explore it in a new thread. Meanwhile, let's get back to analyzing the issues involved in dropping the custom Ansible plugins. -Jim From corvus at inaugust.com Wed Mar 13 15:49:24 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 13 Mar 2019 08:49:24 -0700 Subject: Zuul generic jobs In-Reply-To: <20190313135559.GA13660@localhost.localdomain> (Paul Belanger's message of "Wed, 13 Mar 2019 09:55:59 -0400") References: <20190313135559.GA13660@localhost.localdomain> Message-ID: <874l86dbaz.fsf@meyer.lemoncheese.net> Paul Belanger writes: > You also need to define the compilation.i686.old job: > > - job: > name: compilation.i686.old > parent: compilation It's not much more verbose to be explicit: vars: architecture: i686 I'd encourage folks to avoid having jobs change behavior based on job names. A separation of concerns there makes the system more maintainable over the long run -- you can rename your jobs as needed without worrying about unexpected side effects, and you can look at your job definitions and see exactly what behaviors you are setting. 
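For example, a minimal sketch of that explicit style could look like the
following (the job names are illustrative, and the hardware/technology
variables simply reuse the facts from the original playbook):

- job:
    name: compilation
    parent: base
    run: playbooks/compilation.yaml

- job:
    name: compilation-i686-old
    parent: compilation
    vars:
      hardware: i686
      technology: old

- job:
    name: compilation-x86-64-old
    parent: compilation
    vars:
      hardware: x86_64
      technology: old

The playbook can then consume the variables directly instead of parsing
zuul.job:

- hosts: all
  tasks:
    - name: Show which target is being built
      debug:
        msg: "Building {{ hardware }} / {{ technology }}"
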
-Jim From corvus at inaugust.com Wed Mar 13 16:23:10 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 13 Mar 2019 09:23:10 -0700 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <877ed5jfz5.fsf@meyer.lemoncheese.net> (James E. Blair's message of "Mon, 11 Mar 2019 13:47:26 -0700") References: <877ed5jfz5.fsf@meyer.lemoncheese.net> Message-ID: <87imwmbv69.fsf@meyer.lemoncheese.net> I'd like to reset this thread because I think we were getting off-topic. To summarize, there has been some trepidation, some specific concerns, and some support. The support tends to come from the perspective that our masking is incomplete enough that we're probably effectively already in this state anyway. The specific concerns are: 1) We should ensure that Zookeeper is sufficiently secure that a job can't just connect to it and start manipulating the data. Tristan and Shrews are looking into that. We should follow up with a definitive answer as to whether the ZK ACLs are sufficient. 2) We would be allowing arbitrary code execution which may allow an attacker access to exploit a vulnerability (e.g., in the kernel) on the host. I've tried to digest the thread so far, but I may have missed something. Are there any other concerns about regressions which this may introduce? -Jim From brendan.jackman at bluwireless.co.uk Wed Mar 13 09:21:15 2019 From: brendan.jackman at bluwireless.co.uk (Brendan Jackman) Date: Wed, 13 Mar 2019 09:21:15 +0000 Subject: Zuul generic jobs In-Reply-To: References: Message-ID: <698797f3-2a5c-544a-1b16-004f3aa58ada@bluwireless.co.uk> I don't think you can define and refer-to jobs at the same time like that. Unless I'm mistaken you need something like: - job: name: compilation.i686.old parent: compilation # etc, other job def fields - project: verification: jobs: - compilation.i686.old On 13/03/2019 16:17, Tomasz Szandała wrote: Thank You. However: based on example [2] I have: - project: verification: jobs: - meta_validation - compilation.i686.old: parent: compilation dependencies: - name: meta_validation And one job: - job: name: compilation run: playbooks/compilation.yaml parent: base Now Zuul reports to Gerrit: Zuul encountered a syntax error while parsing its configuration in the repo some_repo on branch master. The error was: Job compilation.i686.old not defined The error appears in the following project stanza: project: verification: jobs: - meta_validation - compilation.i686.old: (...) What am I doing wrong? śr., 13 mar 2019 o 00:09 Jan Kundrát > napisał(a): > My idea was to define job compilation, in which it is Ansible > playbook that determines which target it should choose, based on > job name: Hi, we're doing something similar. In our use case, the difference among all the jobs is just the choice of the compiler (gcc vs. clang) and some build flags (whether to do a plain build, or -fsanitize=address for ASAN, or ASAN+UBSAN, or TSAN, etc). We are also doing this jobname -> env mapping within a per-repo shell script, but that's probably just legacy at this point. > (compilation.yaml - playbook) > - name: Set hardware and technology > set_fact: > hardware: "{{ zuul.job.split('.')[1] }}" > technology: "{{ zuul.job.split('.')[2] }}" > > Now in project definition I can have something like this: > - project: > verification: > jobs: > - compilation.x86_64.old: > parent: compilation > - compilation.i686.old: > parent: compilation > - compilation.i686.future: > parent: compilation > > But Zuul says, that compiation.x86_64.old is not defined. 
That's because this statement only assigns a job name to a particular project and a pipeline. The job must be defined somewhere -- perhaps in your foo-jobs repository, or right in your leaf repo. That "definition" can very well be a two liner, assigning only "name" and "parent" as you're doing here. The actual job "use" within a project is a one-liner unless you need something special. You can take a look on how we are doing that: the job definition [1], one project [2] (which are generates artifacts), and a second project [3] (which consumes artifacts from the first project). You will need Zuul from git for artifacts in case you want to give this a try. > In other words: I want to define job inside project definition. > This will mean that adding new variant I can only modify only 2 > lines. > > Is there any reason why it is impossible? Jobs can be defined within a project scope -- just define them via a top-level `jobs:` dict. However, there's a big catch: they all live in a shared namespace (per each tenant). This means that if you have a job called, say, "build-centos7-gcc", it must be defined exactly once. You cannot have project-A and project-B both defining such a job within their .zuul.yaml, unfortunately. This leads to some repetition (see the YAML references in [2], for example), but I do not think that this is a big inconvenience. Hope this helps, Jan [1] https://gerrit.cesnet.cz/plugins/gitiles/ci/zuul-jobs-cesnet/+/master/zuul.yaml#31 [2] https://gerrit.cesnet.cz/plugins/gitiles/CzechLight/dependencies/+/master/.zuul.yaml#4 [3] https://gerrit.cesnet.cz/plugins/gitiles/CzechLight/netconf-cli/+/master/.zuul.yaml _______________________________________________ Zuul-discuss mailing list Zuul-discuss at lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss _______________________________________________ Zuul-discuss mailing list Zuul-discuss at lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Wed Mar 13 16:42:46 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 13 Mar 2019 12:42:46 -0400 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <87imwmbv69.fsf@meyer.lemoncheese.net> References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <87imwmbv69.fsf@meyer.lemoncheese.net> Message-ID: <20190313164246.GE13660@localhost.localdomain> On Wed, Mar 13, 2019 at 09:23:10AM -0700, James E. Blair wrote: > I'd like to reset this thread because I think we were getting off-topic. > > To summarize, there has been some trepidation, some specific concerns, > and some support. > I'm in the cautiously support camp, and see if possible to remove. > The support tends to come from the perspective that our masking is > incomplete enough that we're probably effectively already in this state > anyway. > > The specific concerns are: > > 1) We should ensure that Zookeeper is sufficiently secure that a job > can't just connect to it and start manipulating the data. Tristan and > Shrews are looking into that. We should follow up with a definitive > answer as to whether the ZK ACLs are sufficient. > > 2) We would be allowing arbitrary code execution which may allow an > attacker access to exploit a vulnerability (e.g., in the kernel) on the > host. > I admit, this sounds scary, but if we 99.9% believe bubblewrap is going to provide the best protection, this should be fine. 
I do think our plugins provide another layer of protection, which helps limit the amount of things we can even run on the executor. So far, the majority of jobs haven't needed access to localhost that I have seen. > I've tried to digest the thread so far, but I may have missed something. > Are there any other concerns about regressions which this may introduce? > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From Tobias.Henkel at bmw.de Wed Mar 13 16:45:16 2019 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Wed, 13 Mar 2019 16:45:16 +0000 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <87imwmbv69.fsf@meyer.lemoncheese.net> References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <87imwmbv69.fsf@meyer.lemoncheese.net> Message-ID: On 13.03.19, 17:23, "James E. Blair" wrote: I'd like to reset this thread because I think we were getting off-topic. To summarize, there has been some trepidation, some specific concerns, and some support. The support tends to come from the perspective that our masking is incomplete enough that we're probably effectively already in this state anyway. The specific concerns are: 1) We should ensure that Zookeeper is sufficiently secure that a job can't just connect to it and start manipulating the data. Tristan and Shrews are looking into that. We should follow up with a definitive answer as to whether the ZK ACLs are sufficient. 2) We would be allowing arbitrary code execution which may allow an attacker access to exploit a vulnerability (e.g., in the kernel) on the host. What we need to keep in mind is that we must ensure that bwrap drops all privileges the zuul-executor may have. It probably does so but we should double check. I've tried to digest the thread so far, but I may have missed something. Are there any other concerns about regressions which this may introduce? -Jim We also have the (un)trusted mounts setting in the executor which is currently needed so ansible can access windows nodes via winrm. It's afaik not easy to secure those private keys in a similar way like ssh-agent. So those would be freely accessible by the jobs. I'm currently on the fence about this idea as this means that we're dropping one layer of security. But as already pointed out this layer has many holes in it so it might be really more work maintaining it than it's worth. - Tobias From clint at fewbar.com Wed Mar 13 16:46:18 2019 From: clint at fewbar.com (Clint Byrum) Date: Wed, 13 Mar 2019 09:46:18 -0700 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <8736nr30q3.tristanC@fedora> References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> <877ed5hv1b.fsf@meyer.lemoncheese.net> <87lg1k3i1n.tristanC@fedora> <155240982758.16268.10199525229602449261@clint-ThinkPad-X1-Carbon-6th> <8736nr30q3.tristanC@fedora> Message-ID: <155249557807.16268.14878150673994054897@clint-ThinkPad-X1-Carbon-6th> Quoting Tristan Cacqueray (2019-03-12 20:35:48) > > On Tue, Mar 12, 2019 at 09:57 Clint Byrum wrote: > > Quoting Tristan Cacqueray (2019-03-11 20:09:24) > >> > >> On Mon, Mar 11, 2019 at 16:05 James E. Blair wrote: > >> > Paul Belanger writes: > >> > > >> >> On Mon, Mar 11, 2019 at 01:47:26PM -0700, James E. Blair wrote: > >> > ... 
> >> > > >> >>> Broadly speaking, there are two classes of actions we might want to > >> >>> block: > >> >>> > >> >>> 1) Actions which may expose private information stored on the executor > >> >>> (including information used by Zuul itself or other jobs), or otherwise > >> >>> compromise the security of the executor itself (so that future jobs can > >> >>> not be trusted). > >> >>> > >> >>> 2) Actions which may allow a user to take advantage of privileges > >> >>> afforded to the executor host itself. For example, if the executor is > >> >>> behind a network firewall and therefore can access unprotected network > >> >>> services. > >> > > >> >> One topic I keep hearing from time to time, is moving where the > >> >> zuul-executor runs ansible-playbook from. Most commonly, I've heard > >> >> people asking for that to be run inside a pod (off the static > >> >> zuul-executor host). I understand in our case it is a little more > >> >> difficult then that, with how we deal with git repo state. Could you > >> >> speak to if that would provide a better / safer / other place for > >> >> ansible-playbook to run? > >> > > >> > With respect to the first class, what you describe would not be > >> > significantly different. We already run Ansible in a container; running > >> > it in a different container would not change much. > >> > > >> > >> Though Zuul executor job containers are missing cgroup, network namespace > >> and seccomp filter. And we assume the executor process can create > >> containers, which may not be possible in some restricted environment. > >> > > > > A seccomp filter would need to be created for every potential job. Any > > attempt to make a generic one will have limited benefit as it won't > > really be blocking much. > > > > Well the common seccomp filters usualy provides a good defensive > layer that shouldn't be available to any given job (such as using > SCTP protocol). Here is the one used by docker: > https://docs.docker.com/v18.03/engine/security/seccomp/#significant-syscalls-blocked-by-the-default-profile > Indeed. Sometimes it does feel like we're just re-implementing kubernetes and docker with the zuul executor and bubblewrap. > > What problem that ansible restrictions prevent now, would be prevented > > by cgroups and/or network namespaces? > > > > Cgroup would let us limit the memory and cpu usage. +1, but again, zuul-executor feels like maybe the wrong place to do it if container runtimes already have this. > Network namespaces would enable 2 jobs to bind the same port, and we > could setup packet filter to prevent job network to access zuul control plane. > Sounds complicated to get right. > > Also USER_NS has been in mainline kernels for *years* now. I'm > > comfortable with that assumption. We also require Python 3.5+, which > > isn't in a lot of old OS's, so I'd like to think we can continue the > > aggressive stance on dependencies. > > > > As explained in another mail, USER_NS is not enough to enable using > bwrap in k8s. > I missed that one. How unfortunate. :-/ > >> I wish we would consider using external pods to run jobs, this would > >> removes the executor privilege requirement and simplifies resources > >> scaling. > >> > > > > Not sure anybody has suggested this isn't possible, but I think what's > > on the table now is removing the Ansible restrictions without doing > > external pod execution. Are you suggesting pod execution should be added > > first? 
> > > > I'm suggesting we could have a pod execution context driver that could > be used in place of the bubblewrap one. > I like this train of thought. When I made the interface for bubblewrap, some may remember I added an uncontained execution context because I thought maybe people would prefer to run without bubblewrap in some contexts (it was removed because it was deemed too dangerous). But the abstraction holds, and I think this would be a worthwhile usage of that abstraction. V > > Also there is no executor privilege requirement if your kernel supports > > USER_NS. > > > > IIUC, you still need SETUID/SETGID cap to be able to complete the user > namespace creation. > ACK From clint at fewbar.com Wed Mar 13 17:23:01 2019 From: clint at fewbar.com (Clint Byrum) Date: Wed, 13 Mar 2019 10:23:01 -0700 Subject: Zuul generic jobs In-Reply-To: References: Message-ID: <155249778186.16268.4395544973193460913@clint-ThinkPad-X1-Carbon-6th> Quoting Tomasz Szandała (2019-03-12 15:15:17) > Greetings, > I have a project, that requires compilation for different targets. > Same commands, but each one has to be done separately. > I have seen how OpenStack does it: > [1]https://github.com/openstack-infra/openstack-zuul-jobs/blob/master/z > uul.d/jobs.yaml > so we define dozens of jobs separately even if they differ only by > single element. > My idea was to define job compilation, in which it is Ansible playbook > that determines which target it should choose, based on job name: > (compilation.yaml - playbook) > - name: Set hardware and technology > set_fact: > hardware: "{{ zuul.job.split('.')[1] }}" > technology: "{{ zuul.job.split('.')[2] }}" > Now in project definition I can have something like this: > - project: > verification: > jobs: > - compilation.x86_64.old: > parent: compilation > - compilation.i686.old: > parent: compilation > - compilation.i686.future: > parent: compilation Syntax and problematic implementation aside, I see your use case, and I think Zuul already does a pretty good job of making this as DRY as possible. Basically, you should have a shared jobs repo with all of these jobs and playbooks, and in your various repos, you should have some kind of common testing interface. Like, `make test` in the repo root will figure out what to do based on what is installed, or envvars. Then you just have to define each compilation target once as a child of the overall job. - job: name: compilation abstract: true pre-run: playbooks/compilation-install.yaml run: playbooks/compilation.yaml - job: name: compilation-x86-64-old parent: compilation vars: compiler: gcc-old - job: name: compilation-i686-future parent: compilation vars: compiler: gcc-future - project-template: name: compilation-all check: jobs: - compilation-x86-64-old - compilation-i686-future gate: jobs: - compilation-x86-64-old - compilation-i686-future This way in your project stanzas you just add this: - project: templates: compilation-all I think that should satisfy your desire for generic compilation jobs and not having to add a ton of boilerplate everywhere. Yes? From fungi at yuggoth.org Wed Mar 13 17:32:12 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Mar 2019 17:32:12 +0000 Subject: Zuul generic jobs In-Reply-To: <155249778186.16268.4395544973193460913@clint-ThinkPad-X1-Carbon-6th> References: <155249778186.16268.4395544973193460913@clint-ThinkPad-X1-Carbon-6th> Message-ID: <20190313173212.kibdifdxsjhqhl4f@yuggoth.org> On 2019-03-13 10:23:01 -0700 (-0700), Clint Byrum wrote: [...] 
> Syntax and problematic implementation aside, I see your use case, and I > think Zuul already does a pretty good job of making this as DRY as > possible. > > Basically, you should have a shared jobs repo with all of these jobs and > playbooks, and in your various repos, you should have some kind of > common testing interface. Like, `make test` in the repo root will figure > out what to do based on what is installed, or envvars. > > Then you just have to define each compilation target once as a child of > the overall job. [...] Another related challenge which has been raised in a past thread is a desire to perform matrix expansion for x*y*z parameters (perhaps SOvers of different crucial libraries, or cross-product of architecture and ABI, or whatever). Since neither Zuul nor YAML itself have an inherent generator capability, this leads to externally scripting generation of the job definitions and instantiations for each combination, and then updating the configuration with the result. Is this part of the concern? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From clint at fewbar.com Wed Mar 13 17:40:00 2019 From: clint at fewbar.com (Clint Byrum) Date: Wed, 13 Mar 2019 10:40:00 -0700 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <20190313164246.GE13660@localhost.localdomain> References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <87imwmbv69.fsf@meyer.lemoncheese.net> <20190313164246.GE13660@localhost.localdomain> Message-ID: <155249880026.16268.11392781828795903176@clint-ThinkPad-X1-Carbon-6th> Quoting Paul Belanger (2019-03-13 09:42:46) > On Wed, Mar 13, 2019 at 09:23:10AM -0700, James E. Blair wrote: > > I'd like to reset this thread because I think we were getting off-topic. > > > > To summarize, there has been some trepidation, some specific concerns, > > and some support. > > > I'm in the cautiously support camp, and see if possible to remove. > > > The support tends to come from the perspective that our masking is > > incomplete enough that we're probably effectively already in this state > > anyway. > > > > The specific concerns are: > > > > 1) We should ensure that Zookeeper is sufficiently secure that a job > > can't just connect to it and start manipulating the data. Tristan and > > Shrews are looking into that. We should follow up with a definitive > > answer as to whether the ZK ACLs are sufficient. > > > > 2) We would be allowing arbitrary code execution which may allow an > > attacker access to exploit a vulnerability (e.g., in the kernel) on the > > host. > > > I admit, this sounds scary, but if we 99.9% believe bubblewrap is going > to provide the best protection, this should be fine. I do think our > plugins provide another layer of protection, which helps limit the > amount of things we can even run on the executor. So far, the majority > of jobs haven't needed access to localhost that I have seen. > I've always held the position that the main benefit of the plugins was the friendly error messages. Instead of an EACCESS and whatever code you're executing's terrible error handling, you got a friendly "Hey you can't do that." But the burden of hooking in to a private API that changes constantly is large. So maybe we could scope it down a bit. Instead of focusing on blocking in the Ansible plugins, focus on enabling well behaved users with better error handling and early fails for things that we don't like. 
If we miss one, or if chasing upstream churn is too hard, we can opt not to do it for the next version of Ansible. But then to make up for the loss in code execution prevention, also take on the work of locking bubblewrap down more and/or adding other container runtimes. That should still be more straight-forward and future-proof than chasing Ansible's APIs. From tomasz.szandala at gmail.com Wed Mar 13 20:47:24 2019 From: tomasz.szandala at gmail.com (=?UTF-8?Q?Tomasz_Szanda=C5=82a?=) Date: Wed, 13 Mar 2019 21:47:24 +0100 Subject: Zuul generic jobs In-Reply-To: <20190313173212.kibdifdxsjhqhl4f@yuggoth.org> References: <155249778186.16268.4395544973193460913@clint-ThinkPad-X1-Carbon-6th> <20190313173212.kibdifdxsjhqhl4f@yuggoth.org> Message-ID: Thank You all. @Clint. I would love to do that, but I have dozens of projects to maintain. I am working to unify them, but still there will be no many jobs that could by generic for them. This idea needs to wait ~2 years :-) @ James: I tried Your solution, and if I have jobs: - compilation: vars: target: i686 - compilation: vars: target: x86_64 It looked like Zuul tool only the second one. I tried it few days ago, I might be wrong, but isn't that I would have only compilation for x86_64 in the flow? śr., 13 mar 2019 o 18:32 Jeremy Stanley napisał(a): > On 2019-03-13 10:23:01 -0700 (-0700), Clint Byrum wrote: > [...] > > Syntax and problematic implementation aside, I see your use case, and I > > think Zuul already does a pretty good job of making this as DRY as > > possible. > > > > Basically, you should have a shared jobs repo with all of these jobs and > > playbooks, and in your various repos, you should have some kind of > > common testing interface. Like, `make test` in the repo root will figure > > out what to do based on what is installed, or envvars. > > > > Then you just have to define each compilation target once as a child of > > the overall job. > [...] > > Another related challenge which has been raised in a past thread is > a desire to perform matrix expansion for x*y*z parameters (perhaps > SOvers of different crucial libraries, or cross-product of > architecture and ABI, or whatever). Since neither Zuul nor YAML > itself have an inherent generator capability, this leads to > externally scripting generation of the job definitions and > instantiations for each combination, and then updating the > configuration with the result. Is this part of the concern? 
> -- > Jeremy Stanley > -----BEGIN PGP SIGNATURE----- > > iQKTBAABCgB9FiEEl65Jb8At7J/DU7LnSPmWEUNJWCkFAlyJPpRfFIAAAAAALgAo > aXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDk3 > QUU0OTZGQzAyREVDOUZDMzUzQjJFNzQ4Rjk5NjExNDM0OTU4MjkACgkQSPmWEUNJ > WClWiA//VZtCiL1FAN/fHfvi3WGMMLYnN1sWvoAz85dOTXQQGJaMaClT0URP/e/3 > b8dJtCM5KGMActbTT9PfQP75IFEE+2fz+e+l+M3BspmUG19FVS2+wpeHLoZEw6Fd > zb36gvUMD3oY7fH+LYuPzHlQQVgKou8xu8mKJDFY+zQG14ZYhq/jcu2+8eRD+7oz > OzbcWocv7CFe5AKcPl6rYdHQzdvY+t5f1p3GBvxreT8xs1Mx12dgMtHx4nxicR3H > ONRhUZ6o3448DNPBAUfV6KPB76m3Ljh0mamtdKGApRaxTGJmknXETRvzwUns1UhF > iyi1vgmTcAID4Run66uggf3XTcRnatuMjZh5eHuQVP7rIuTq7MmJhi5g3aNL5KYI > 8ZYRT/z4UUG78OENG7MwGwXrf/HiShIPybZxfmIYLwq+w4Sb9lAX/YUuM8zQ8jXu > KDknsoULhNVaOaPx5G1zzTeOc4ayfLH7/yo8JYK9QuWJ7STWee3Lc701VeECiGDK > NvYzDVZet4S5Uf8SIuvZMbSIyFRKyF4xTWOOyZX7o9UmlfxBUZ6CCkLfEzLAmXvP > MpNX6Y20JsZbGqMfSTDRGuxQb3ed31hRH2MJjGv1wRJteMUg0YNMyVhCXNVdUNu2 > Yt0/IyAsGQiCYaar6xhDlPDlGk7k58KGWSoZsDnn/RXPzByU17E= > =PcmM > -----END PGP SIGNATURE----- > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clint at fewbar.com Thu Mar 14 02:48:09 2019 From: clint at fewbar.com (Clint Byrum) Date: Wed, 13 Mar 2019 19:48:09 -0700 Subject: Zuul generic jobs In-Reply-To: References: <155249778186.16268.4395544973193460913@clint-ThinkPad-X1-Carbon-6th> <20190313173212.kibdifdxsjhqhl4f@yuggoth.org> Message-ID: <155253168957.16268.8001556186214956575@clint-ThinkPad-X1-Carbon-6th> Quoting Tomasz Szandała (2019-03-13 13:47:24) > Thank You all. > @Clint. I would love to do that, but I have dozens of projects to > maintain. I am working to unify them, but still there will be no many > jobs that could by generic for them. > This idea needs to wait ~2 years :-) I failed to mention it, but you can also make a project statement that is a regex. So, if all those dozens of projects have some name that they share, you just need this in a trusted repo: - project: name: '^.*compiled.*$' templates: - compiled That will attach the compiled template to every project with the word 'compiled' in its name. If you can't match on that, I don't think it's such a burden to have dozens of project stanzas, or if you like, a single identical file in each project's repo that is basically the section above, but without the `name` attribute. > @ James: I tried Your solution, and if I have > jobs: > - compilation: > vars: > target: i686 > - compilation: > vars: > target: x86_64 > It looked like Zuul tool only the second one. I tried it few days ago, > I might be wrong, but isn't that I would have only compilation for > x86_64 in the flow? > Per the documentation, vars are merged when you create variants, so the later one overrides the earlier one. I really don't think you want to do what you're describing. Variants are for varying behavior on repo, or branch, not the test itself. You only run one variant per trigger buildset. What you want is a job per testing configuration, so compilation-i686 and compilation-x86-64 would be sensible names, and then use a template to pull them all in and run them all in the appropriate pipelines. The worst case will have a unique set of architectures per repo so you have to maintain a single project statement per mix. 
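As a rough sketch, that identical per-repo file (the project stanza
without the `name` attribute) can be as small as this, assuming the
'compiled' template above is defined once in a config repo:

- project:
    templates:
      - compiled
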
Most likely, you'll have a 'compiled-all' template, and a 'compiled-i686-only' or something like that where you have some that have a different mix. From tdecacqu at redhat.com Thu Mar 14 10:11:33 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 14 Mar 2019 19:11:33 +0900 Subject: Should we drop our custom Ansible plugins? In-Reply-To: <878sxidblx.fsf@meyer.lemoncheese.net> References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <20190311220846.GA22574@localhost.localdomain> <877ed5hv1b.fsf@meyer.lemoncheese.net> <87lg1k3i1n.tristanC@fedora> <87r2bcf5lx.fsf@meyer.lemoncheese.net> <874l8730q5.tristanC@fedora> <878sxidblx.fsf@meyer.lemoncheese.net> Message-ID: <87ftrp22ay.tristanC@fedora> On Wed, Mar 13, 2019 at 08:42 James E. Blair wrote: > Tristan Cacqueray writes: > >> On Tue, Mar 12, 2019 at 08:57 James E. Blair wrote: >>> Tristan Cacqueray writes: >>> >>>> I wish we would consider using external pods to run jobs, this would >>>> removes the executor privilege requirement and simplifies resources >>>> scaling. >>> >>> It adds the requirement of a COE, and one of Zuul's strengths is that it >>> can operate in any environment, bare metal, virtual machine, or >>> container. >>> >> >> This could be an optional requirement implemented as an execution >> context driver. > > That's certainly something to consider, but if it's optional, then it > doesn't provide an answer to the question I raised in this thread: > Should we drop our custom Ansible plugins? > > So, setting aside the fact that you would like a COE driver as a new > feature for other reasons, let's bring this back to topic: > > Do you think the extra protections of a full container system (i.e., > more than bubblewrap) are necessary if we are to drop the custom Ansible > plugins? > I meant if we enable untrusted job to run on localhost, then this should enable such COE driver implementation. However droping the custom Ansible plugins and authorizing arbritary code execution in the context of the executor service sounds like a dangerous feature. >>> There's also the problem of the git repo cache that Paul mentioned. >>> That is far from trivial. >>> >>> Regarding the container privilege requirement, the best way to remove >>> that would be for someone to identify what the problem is on a technical >>> level. Right now, we know that we have to set "privileged: true" in >>> some situations. Why? >> >> As explained in the previous thread about this (install Zuul on K8/openshift) >> the core issue is that recursive/nested containers are considered >> dangerous and container runtime like docker disable it by default. >> >> There are progress being made to at least be able to create new >> namespace inside a container and this is well documented in these >> slides: https://www.slideshare.net/AkihiroSuda/rootless-containers >> >> However, considering the bad track record of this feature, it may takes >> time before we can created nested namespace without explicitely using >> some sort of "privileged: true" or "seccomp: unconfined" options, >> which may be disabled in multi tenant COE like openshift. >> >>> What specific technical action performed by >>> bwrap requires that? What capabilities? What seccomp rules? >>> >> >> The first issue is clone(CLONE_NEWNS). You need the SYS_ADMIN capability. >> For example a regular user is not able to run "unshare --mount". >> >> Though you can create a user namespace (CLONE_NEWUSER) which grants the >> SYS_ADMIN capability, thus a regular user can run "unshare --user --mount". 
>> Unfortunately this has a lot of security implication. >> >> Then assuming you can create namespaces, you still need access to a proper >> /proc fs to complete the new mount namespace creation. >> >> Finally you have to overcome the default seccomp/apparmor/SELinux policy >> which usualy prevent above actions. >> >> More details about this are available in these issues: >> https://github.com/projectatomic/bubblewrap/issues/269 >> https://github.com/opencontainers/runc/issues/1658 >> >> And finally there seems to be a blocker for user namespace in K8/openshift: >> https://github.com/kubernetes/enhancements/issues/127 >> >>> Basically, what are all of the changes required to get bwrap to work in >>> openshift. >>> >> >> Basically, you need these permission today: >> - https://github.com/gtema/zuul-apb/blob/master/zuul_scc.yaml#L21-L23 >> If user namespace are available, then you could replace privileged by >> - enable (through seccomp) the creation of NEWNS|NEWUSER clone. >> - enable a proper /proc (e.g. unmasked /proc/sys /proc/kcore). >> - enable setuid/setgid cap for newuidmap/newgidmap needed by user ns creation. >> >> Then when K8S supports user namespace creation, you may be able to drop >> the setuid/setgit cap. >> >> At least, that is my understanding of the technical issues :-) >> >> Also please note that these only cover namespaces, which is a subset of >> what "container" usualy means, and bwrap doesn't support cgroup or >> network isolation. > > That is all very helpful information, thanks! > I forgot to mention that while these restrictions are enforced by openshift, they are also relevant for multi-tenant k8s. And since zuul executors (and each job they run) are effectively multi-tenant, we should ensure Zuul can work with these restrictions activated. >>> If we knew that, we could look at potential changes to bwrap or future >>> features of the kernel or openshift which may ameliorate the problem. >>> >>> If you're really interested in this, then please help with that. >>> Understanding the problem is an important first step. >>> >> >> The core problem with the zuul executor process is that we are running >> different execution context inside a single pod, which goes against the >> design of COE environment: >> With containers you are supposed to split the application into multiple >> containers each doing one task as opposed to multiple different >> processes in the same container. This gives you the isolation. >> >> Thus I think nested/recursive containers is great for unprivileged image >> building, but it's not suitable for the zuul executor. > > I disagree with your interpretations, but let's not discuss this as a > point of philosophy. I don't think that's going to bring us to a > resolution. > > We can look at this strictly from a technical perspective: > > * We will continue to support non-COE Zuul deployments as first-class. > * Zuul works as designed today in any COE which allows privileged > containers (including dedicated OpenShift so configured). > * Executor support for COE could provide some additional advantages in > scalability, security, and simplified management. > * Executor support for COE could allow Zuul to be used in a shared > OpenShift. > > Those are reasons enough to consider it. But again, maybe we could do > that separately instead of diverting this thread off course? > It's great to hear you are willing to consider such feature. Then I can propose a plan in another thread. 
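To make the permission list above a bit more concrete, in a Kubernetes
pod spec it corresponds to settings roughly like these -- a sketch only,
with the image name and field values assumed rather than taken from a
working deployment:

apiVersion: v1
kind: Pod
metadata:
  name: zuul-executor
spec:
  containers:
    - name: executor
      # the image reference here is an assumption for illustration
      image: zuul/zuul-executor
      securityContext:
        # what a dedicated cluster has to grant today
        privileged: true
        # if user namespace creation becomes usable, something narrower
        # such as the SETUID/SETGID capabilities plus a relaxed seccomp
        # profile could replace the privileged flag, e.g.:
        # privileged: false
        # capabilities:
        #   add: ["SETUID", "SETGID"]
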
>> >> [snip] >> >>>>> The vulnerability there is running arbitrary code on the executor, and, >>>>> if we decide to proceed with this, we would be making the decision that >>>>> is no longer a vulnerability but is a feature. >>>>> >>>> >>>> That sounds like rather risky, especially since executor are handling >>>> job's secrets. How are we going to prevent one play to not taint the one >>>> that are protected? >>> >>> That's what bubblewrap does for us today. >>> >> >> I meant, if we are going to enable trusted and untrusted playbook to run >> with the same workspace, then we may be in a position to use an external >> pod for the job. >> IIUC, the issue is that after an untrusted play we have to ensure the >> trusted content is not tainted, which is the main limitation with regard >> to implementing a k8s execution context driver. > > I am not proposing that we change anything about how trusted and > untrusted playbooks are kept separate. They run with the same > workspace, but bubblewrap ensures that secrets are only accessible to > the individual playbook they're associated with. > > The start of the text you quoted was in regards to dropping our custom > Ansible plugins, but it seems to me that in your responses you have > interpreted it in the context of executor COE support. That's why the > conversation is so confused. > Mea culpa I extrapolated the proposal for executor COE support. Please find my reply to the original question in the new thread. Regards, -Tristan > The original question I asked is very important, and it's worth us > focusing on it. It's fine that, as part of that, we ask "would executor > COE support help address the issue?" the answer is that since we will > not *require* it, it can not help. So let's set that aside now and > explore it in a new thread. Meanwhile, let's get back to analyzing the > issues involved in dropping the custom Ansible plugins. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tdecacqu at redhat.com Thu Mar 14 10:12:07 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 14 Mar 2019 19:12:07 +0900 Subject: Should we drop our custom Ansible plugins? In-Reply-To: References: <877ed5jfz5.fsf@meyer.lemoncheese.net> <87imwmbv69.fsf@meyer.lemoncheese.net> Message-ID: <87ef7922a0.tristanC@fedora> On Wed, Mar 13, 2019 at 16:45 Tobias.Henkel at bmw.de wrote: > On 13.03.19, 17:23, "James E. Blair" wrote: > > I'd like to reset this thread because I think we were getting off-topic. > > To summarize, there has been some trepidation, some specific concerns, > and some support. > > The support tends to come from the perspective that our masking is > incomplete enough that we're probably effectively already in this state > anyway. > > The specific concerns are: > > 1) We should ensure that Zookeeper is sufficiently secure that a job > can't just connect to it and start manipulating the data. Tristan and > Shrews are looking into that. We should follow up with a definitive > answer as to whether the ZK ACLs are sufficient. > > 2) We would be allowing arbitrary code execution which may allow an > attacker access to exploit a vulnerability (e.g., in the kernel) on the > host. 
> > What we need to keep in mind is that we must ensure that bwrap drops all > privileges the zuul-executor may have. It probably does so but we should > double check. > Bubblewrap drops capabilities and closes FD by default. > > I've tried to digest the thread so far, but I may have missed something. > Are there any other concerns about regressions which this may introduce? > > -Jim > > > We also have the (un)trusted mounts setting in the executor which is > currently needed so ansible can access windows nodes via winrm. It's > afaik not easy to secure those private keys in a similar way like > ssh-agent. So those would be freely accessible by the jobs. > The custom Ansible plugins also prevent access to the ansible fact-cache and cp directory as well as prevent workspace symlink to point at secret which may result in leak. > I'm currently on the fence about this idea as this means that we're dropping > one layer of security. But as already pointed out this layer has many holes > in it so it might be really more work maintaining it than it's worth. > > - Tobias > I'm also on the fence as remote code execution greatly extend the attack surface. Wouldn't it be possible to submit those plugins upstream so that they become an Ansible feature? IIUC the Zuul plugins enable: - command output stream - task blacklist for localhost execution - read/write path whitelist for localhost access Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From jimmy at openstack.org Mon Mar 18 18:39:37 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 18 Mar 2019 13:39:37 -0500 Subject: BoF/WG Space at Open Infrastructure Summit Denver Message-ID: <5C8FE5E9.3040201@openstack.org> Does the Zuul Community have any interest in BoF or Working Group space in Denver? For those unfamiliar, Working Groups are informal groups of people sharing a common interest for a task or code subsystem. Birds of a Feather is an effort to interest new community members to your project through a presentation and Q&A format. Each session type is 40 minutes. Cheers, Jimmy From jimmy at openstack.org Mon Mar 18 18:40:09 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 18 Mar 2019 13:40:09 -0500 Subject: Zuul Board Approval Presentation Message-ID: <5C8FE609.7040507@openstack.org> Hi All - I'm working with Zuul on their pending project confirmation on behalf of the OpenStack Foundation. As you may or may not know, Zuul will need to do a presentation in front of the OpenStack Foundation Board. I've started an etherpad [1] with a rough outline, but ultimately this should be a community driven effort. If you are able to fill in some blanks, we welcome your contribution! Cheers, Jimmy [1] https://etherpad.openstack.org/p/Zuul_2019_OSF_Project_Confirmation_Process From pabelanger at redhat.com Tue Mar 19 16:11:50 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 19 Mar 2019 12:11:50 -0400 Subject: BoF/WG Space at Open Infrastructure Summit Denver In-Reply-To: <5C8FE5E9.3040201@openstack.org> References: <5C8FE5E9.3040201@openstack.org> Message-ID: <20190319161150.GA7700@localhost.localdomain> On Mon, Mar 18, 2019 at 01:39:37PM -0500, Jimmy McArthur wrote: > Does the Zuul Community have any interest in BoF or Working Group space in > Denver? > Confirmed again on IRC[1] this morning, the zuul community is not planning any official BoF or Working Groups for Denver. 
Thanks for following up. - Paul [1] http://eavesdrop.openstack.org/irclogs/%23zuul/latest.log.html#t2019-03-19T12:52:46 > For those unfamiliar, Working Groups are informal groups of people sharing a > common interest for a task or code subsystem. Birds of a Feather is an > effort to interest new community members to your project through a > presentation and Q&A format. > > Each session type is 40 minutes. > > Cheers, > Jimmy > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From jimmy at openstack.org Tue Mar 19 16:13:41 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 19 Mar 2019 11:13:41 -0500 Subject: BoF/WG Space at Open Infrastructure Summit Denver In-Reply-To: <20190319161150.GA7700@localhost.localdomain> References: <5C8FE5E9.3040201@openstack.org> <20190319161150.GA7700@localhost.localdomain> Message-ID: <5C911535.7020600@openstack.org> Thanks for checking! > Paul Belanger > March 19, 2019 at 11:11 AM > On Mon, Mar 18, 2019 at 01:39:37PM -0500, Jimmy McArthur wrote: >> Does the Zuul Community have any interest in BoF or Working Group space in >> Denver? >> > Confirmed again on IRC[1] this morning, the zuul community is not planning > any official BoF or Working Groups for Denver. > > Thanks for following up. > - Paul > > [1] http://eavesdrop.openstack.org/irclogs/%23zuul/latest.log.html#t2019-03-19T12:52:46 > >> For those unfamiliar, Working Groups are informal groups of people sharing a >> common interest for a task or code subsystem. Birds of a Feather is an >> effort to interest new community members to your project through a >> presentation and Q&A format. >> >> Each session type is 40 minutes. >> >> Cheers, >> Jimmy >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > Jimmy McArthur > March 18, 2019 at 1:39 PM > Does the Zuul Community have any interest in BoF or Working Group > space in Denver? > > For those unfamiliar, Working Groups are informal groups of people > sharing a common interest for a task or code subsystem. Birds of a > Feather is an effort to interest new community members to your project > through a presentation and Q&A format. > > Each session type is 40 minutes. > > Cheers, > Jimmy > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Mar 19 16:28:18 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 19 Mar 2019 09:28:18 -0700 Subject: BoF/WG Space at Open Infrastructure Summit Denver In-Reply-To: <20190319161150.GA7700@localhost.localdomain> (Paul Belanger's message of "Tue, 19 Mar 2019 12:11:50 -0400") References: <5C8FE5E9.3040201@openstack.org> <20190319161150.GA7700@localhost.localdomain> Message-ID: <877ecux1zx.fsf@meyer.lemoncheese.net> Paul Belanger writes: > On Mon, Mar 18, 2019 at 01:39:37PM -0500, Jimmy McArthur wrote: >> Does the Zuul Community have any interest in BoF or Working Group space in >> Denver? 
>> > Confirmed again on IRC[1] this morning, the zuul community is not planning > any official BoF or Working Groups for Denver. > > Thanks for following up. > - Paul I think there is some confusion. The Summit and PTG are separate colocated events. We are indeed not planning on doing anything during the PTG, which is Thursday through Saturday. But if I understand correctly, the BoF and Working Group spaces are during the Summit, scheduled Monday through Wednesday. The BoF session -- where we sort of informally chat with potential new users, sounds like it might be a good thing for us to do? If I understand how it's intended to be used, a dedicated time and place during the PTG for current and potential Zuul users to talk might be useful. I'm not quite as sure whether we could make good use of a WG session -- does anyone have thoughts on that? -Jim From jaroslaw.lukow at codilime.com Tue Mar 19 17:05:10 2019 From: jaroslaw.lukow at codilime.com (=?UTF-8?B?SmFyZWsgxYF1a293?=) Date: Tue, 19 Mar 2019 18:05:10 +0100 Subject: BoF/WG Space at Open Infrastructure Summit Denver In-Reply-To: <877ecux1zx.fsf@meyer.lemoncheese.net> References: <5C8FE5E9.3040201@openstack.org> <20190319161150.GA7700@localhost.localdomain> <877ecux1zx.fsf@meyer.lemoncheese.net> Message-ID: On Tue, 19 Mar 2019 at 17:28, James E. Blair wrote: > I'm not quite as sure whether we could make good use of a WG session -- > does anyone have thoughts on that? Maybe a session about the planned scheduler scale-out/HA design? I'm not sure if it's a good fit for a WG meeting though. Regards Jarek > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -- Jarosław Łukow DevOps Engineer jaroslaw.lukow at codilime.com -- ------------------------------- The information in this email is confidential and may be legally privileged, it may contain information that is confidential in CodiLime Sp. z o. o. It is intended solely for the addressee. Any access to this email by third parties is unauthorized. If you are not the intended recipient of this message, any disclosure, copying, distribution or any action undertaken or neglected in reliance thereon is prohibited and may result in your liability for damages. From jimmy at openstack.org Tue Mar 19 17:09:39 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 19 Mar 2019 12:09:39 -0500 Subject: BoF/WG Space at Open Infrastructure Summit Denver In-Reply-To: <877ecux1zx.fsf@meyer.lemoncheese.net> References: <5C8FE5E9.3040201@openstack.org> <20190319161150.GA7700@localhost.localdomain> <877ecux1zx.fsf@meyer.lemoncheese.net> Message-ID: <5C912253.4010604@openstack.org> > James E. Blair > March 19, 2019 at 11:28 AM > > I think there is some confusion. > > The Summit and PTG are separate colocated events. > > We are indeed not planning on doing anything during the PTG, which is > Thursday through Saturday. > > But if I understand correctly, the BoF and Working Group spaces are > during the Summit, scheduled Monday through Wednesday. Yes. BoF and WG would take place during the PTG. > > The BoF session -- where we sort of informally chat with potential new > users, sounds like it might be a good thing for us to do? If I > understand how it's intended to be used, a dedicated time and place > during the PTG for current and potential Zuul users to talk might be > useful. Do you mean "during the PTG" or "during the Summit"? 
I'm guessing the latter, and yeah IMO this would be a good fit for Zuul. There is a lot of curiosity around it, so it might be a good opportunity to drum up some new contributors. > > I'm not quite as sure whether we could make good use of a WG session -- > does anyone have thoughts on that? > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > Paul Belanger > March 19, 2019 at 11:11 AM > On Mon, Mar 18, 2019 at 01:39:37PM -0500, Jimmy McArthur wrote: >> Does the Zuul Community have any interest in BoF or Working Group space in >> Denver? >> > Confirmed again on IRC[1] this morning, the zuul community is not planning > any official BoF or Working Groups for Denver. > > Thanks for following up. > - Paul > > [1] http://eavesdrop.openstack.org/irclogs/%23zuul/latest.log.html#t2019-03-19T12:52:46 > >> For those unfamiliar, Working Groups are informal groups of people sharing a >> common interest for a task or code subsystem. Birds of a Feather is an >> effort to interest new community members to your project through a >> presentation and Q&A format. >> >> Each session type is 40 minutes. >> >> Cheers, >> Jimmy >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > Jimmy McArthur > March 18, 2019 at 1:39 PM > Does the Zuul Community have any interest in BoF or Working Group > space in Denver? > > For those unfamiliar, Working Groups are informal groups of people > sharing a common interest for a task or code subsystem. Birds of a > Feather is an effort to interest new community members to your project > through a presentation and Q&A format. > > Each session type is 40 minutes. > > Cheers, > Jimmy > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Mar 19 17:10:23 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 19 Mar 2019 12:10:23 -0500 Subject: BoF/WG Space at Open Infrastructure Summit Denver In-Reply-To: References: <5C8FE5E9.3040201@openstack.org> <20190319161150.GA7700@localhost.localdomain> <877ecux1zx.fsf@meyer.lemoncheese.net> Message-ID: <5C91227F.4080800@openstack.org> > Jarek Łukow > March 19, 2019 at 12:05 PM > On Tue, 19 Mar 2019 at 17:28, James E. Blair wrote: >> I'm not quite as sure whether we could make good use of a WG session -- >> does anyone have thoughts on that? > > Maybe a session about the planned scheduler scale-out/HA design? > I'm not sure if it's a good fit for a WG meeting though. This could be a good fit. The idea with a WG session is it's a collaborative session, not just a presentation. > > Regards > Jarek > >> -Jim >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > James E. Blair > March 19, 2019 at 11:28 AM > > I think there is some confusion. > > The Summit and PTG are separate colocated events. 
> > We are indeed not planning on doing anything during the PTG, which is > Thursday through Saturday. > > But if I understand correctly, the BoF and Working Group spaces are > during the Summit, scheduled Monday through Wednesday. > > The BoF session -- where we sort of informally chat with potential new > users, sounds like it might be a good thing for us to do? If I > understand how it's intended to be used, a dedicated time and place > during the PTG for current and potential Zuul users to talk might be > useful. > > I'm not quite as sure whether we could make good use of a WG session -- > does anyone have thoughts on that? > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > Paul Belanger > March 19, 2019 at 11:11 AM > On Mon, Mar 18, 2019 at 01:39:37PM -0500, Jimmy McArthur wrote: >> Does the Zuul Community have any interest in BoF or Working Group space in >> Denver? >> > Confirmed again on IRC[1] this morning, the zuul community is not planning > any official BoF or Working Groups for Denver. > > Thanks for following up. > - Paul > > [1] http://eavesdrop.openstack.org/irclogs/%23zuul/latest.log.html#t2019-03-19T12:52:46 > >> For those unfamiliar, Working Groups are informal groups of people sharing a >> common interest for a task or code subsystem. Birds of a Feather is an >> effort to interest new community members to your project through a >> presentation and Q&A format. >> >> Each session type is 40 minutes. >> >> Cheers, >> Jimmy >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > Jimmy McArthur > March 18, 2019 at 1:39 PM > Does the Zuul Community have any interest in BoF or Working Group > space in Denver? > > For those unfamiliar, Working Groups are informal groups of people > sharing a common interest for a task or code subsystem. Birds of a > Feather is an effort to interest new community members to your project > through a presentation and Q&A format. > > Each session type is 40 minutes. > > Cheers, > Jimmy > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Mar 19 17:41:42 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 19 Mar 2019 10:41:42 -0700 Subject: BoF/WG Space at Open Infrastructure Summit Denver In-Reply-To: <5C912253.4010604@openstack.org> (Jimmy McArthur's message of "Tue, 19 Mar 2019 12:09:39 -0500") References: <5C8FE5E9.3040201@openstack.org> <20190319161150.GA7700@localhost.localdomain> <877ecux1zx.fsf@meyer.lemoncheese.net> <5C912253.4010604@openstack.org> Message-ID: <8736niwyll.fsf@meyer.lemoncheese.net> Jimmy McArthur writes: >> James E. Blair >> March 19, 2019 at 11:28 AM >> >> I think there is some confusion. >> >> The Summit and PTG are separate colocated events. >> >> We are indeed not planning on doing anything during the PTG, which is >> Thursday through Saturday. 
>> >> But if I understand correctly, the BoF and Working Group spaces are >> during the Summit, scheduled Monday through Wednesday. > Yes. BoF and WG would take place during the PTG. >> >> The BoF session -- where we sort of informally chat with potential new >> users, sounds like it might be a good thing for us to do? If I >> understand how it's intended to be used, a dedicated time and place >> during the PTG for current and potential Zuul users to talk might be >> useful. > Do you mean "during the PTG" or "during the Summit"? I'm guessing the > latter, and yeah IMO this would be a good fit for Zuul. There is a > lot of curiosity around it, so it might be a good opportunity to drum > up some new contributors. Doh! In an email where I tried very hard to be clear about PTG and Summit, I sure did mess that one up didn't I? :) Yes, I meant to say "during the Summit". -Jim From corvus at inaugust.com Tue Mar 19 17:53:08 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 19 Mar 2019 10:53:08 -0700 Subject: BoF/WG Space at Open Infrastructure Summit Denver In-Reply-To: <5C91227F.4080800@openstack.org> (Jimmy McArthur's message of "Tue, 19 Mar 2019 12:10:23 -0500") References: <5C8FE5E9.3040201@openstack.org> <20190319161150.GA7700@localhost.localdomain> <877ecux1zx.fsf@meyer.lemoncheese.net> <5C91227F.4080800@openstack.org> Message-ID: <87y35avji3.fsf@meyer.lemoncheese.net> Jimmy McArthur writes: >> Jarek Łukow >> March 19, 2019 at 12:05 PM >> On Tue, 19 Mar 2019 at 17:28, James E. Blair wrote: >>> I'm not quite as sure whether we could make good use of a WG session -- >>> does anyone have thoughts on that? >> >> Maybe a session about the planned scheduler scale-out/HA design? >> I'm not sure if it's a good fit for a WG meeting though. > This could be a good fit. The idea with a WG session is it's a > collaborative session, not just a presentation. I think we'll be missing some key people involved in that, but I think we could still have a good discussion. If we constructed it as a discussion about planned future changes to Zuul (including the scale out scheduler, but perhaps also including k8s operators, etc), we could probably exchange some good information. As long as we don't think of it as a design session where we have concrete outcomes. But we can share plans and concerns and get them recorded for future discussion. -Jim From jimmy at openstack.org Tue Mar 19 17:56:46 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 19 Mar 2019 12:56:46 -0500 Subject: BoF/WG Space at Open Infrastructure Summit Denver In-Reply-To: <87y35avji3.fsf@meyer.lemoncheese.net> References: <5C8FE5E9.3040201@openstack.org> <20190319161150.GA7700@localhost.localdomain> <877ecux1zx.fsf@meyer.lemoncheese.net> <5C91227F.4080800@openstack.org> <87y35avji3.fsf@meyer.lemoncheese.net> Message-ID: <5C912D5E.9020408@openstack.org> I could put some placeholders on the schedule and if y'all decide later you don't want them, no problem. Just let me know :) > James E. Blair > March 19, 2019 at 12:53 PM > > I think we'll be missing some key people involved in that, but I think > we could still have a good discussion. If we constructed it as a > discussion about planned future changes to Zuul (including the scale out > scheduler, but perhaps also including k8s operators, etc), we could > probably exchange some good information. As long as we don't think of > it as a design session where we have concrete outcomes. But we can > share plans and concerns and get them recorded for future discussion. 
> > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > Jimmy McArthur > March 19, 2019 at 12:10 PM > > >> Jarek Łukow >> March 19, 2019 at 12:05 PM >> On Tue, 19 Mar 2019 at 17:28, James E. Blair wrote: >>> I'm not quite as sure whether we could make good use of a WG session -- >>> does anyone have thoughts on that? >> >> Maybe a session about the planned scheduler scale-out/HA design? >> I'm not sure if it's a good fit for a WG meeting though. > This could be a good fit. The idea with a WG session is it's a > collaborative session, not just a presentation. >> Regards >> Jarek >> >>> -Jim >>> >>> _______________________________________________ >>> Zuul-discuss mailing list >>> Zuul-discuss at lists.zuul-ci.org >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> >> >> >> James E. Blair >> March 19, 2019 at 11:28 AM >> >> I think there is some confusion. >> >> The Summit and PTG are separate colocated events. >> >> We are indeed not planning on doing anything during the PTG, which is >> Thursday through Saturday. >> >> But if I understand correctly, the BoF and Working Group spaces are >> during the Summit, scheduled Monday through Wednesday. >> >> The BoF session -- where we sort of informally chat with potential new >> users, sounds like it might be a good thing for us to do? If I >> understand how it's intended to be used, a dedicated time and place >> during the PTG for current and potential Zuul users to talk might be >> useful. >> >> I'm not quite as sure whether we could make good use of a WG session -- >> does anyone have thoughts on that? >> >> -Jim >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> Paul Belanger >> March 19, 2019 at 11:11 AM >> On Mon, Mar 18, 2019 at 01:39:37PM -0500, Jimmy McArthur wrote: >>> Does the Zuul Community have any interest in BoF or Working Group space in >>> Denver? >>> >> Confirmed again on IRC[1] this morning, the zuul community is not planning >> any official BoF or Working Groups for Denver. >> >> Thanks for following up. >> - Paul >> >> [1]http://eavesdrop.openstack.org/irclogs/%23zuul/latest.log.html#t2019-03-19T12:52:46 >> >>> For those unfamiliar, Working Groups are informal groups of people sharing a >>> common interest for a task or code subsystem. Birds of a Feather is an >>> effort to interest new community members to your project through a >>> presentation and Q&A format. >>> >>> Each session type is 40 minutes. >>> >>> Cheers, >>> Jimmy >>> >>> _______________________________________________ >>> Zuul-discuss mailing list >>> Zuul-discuss at lists.zuul-ci.org >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> Jimmy McArthur >> March 18, 2019 at 1:39 PM >> Does the Zuul Community have any interest in BoF or Working Group >> space in Denver? >> >> For those unfamiliar, Working Groups are informal groups of people >> sharing a common interest for a task or code subsystem. Birds of a >> Feather is an effort to interest new community members to your >> project through a presentation and Q&A format. >> >> Each session type is 40 minutes. 
>> >> Cheers, >> Jimmy >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > Jarek Łukow > March 19, 2019 at 12:05 PM > On Tue, 19 Mar 2019 at 17:28, James E. Blair wrote: >> I'm not quite as sure whether we could make good use of a WG session -- >> does anyone have thoughts on that? > > Maybe a session about the planned scheduler scale-out/HA design? > I'm not sure if it's a good fit for a WG meeting though. > > Regards > Jarek > >> -Jim >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > > James E. Blair > March 19, 2019 at 11:28 AM > > I think there is some confusion. > > The Summit and PTG are separate colocated events. > > We are indeed not planning on doing anything during the PTG, which is > Thursday through Saturday. > > But if I understand correctly, the BoF and Working Group spaces are > during the Summit, scheduled Monday through Wednesday. > > The BoF session -- where we sort of informally chat with potential new > users, sounds like it might be a good thing for us to do? If I > understand how it's intended to be used, a dedicated time and place > during the PTG for current and potential Zuul users to talk might be > useful. > > I'm not quite as sure whether we could make good use of a WG session -- > does anyone have thoughts on that? > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > Paul Belanger > March 19, 2019 at 11:11 AM > On Mon, Mar 18, 2019 at 01:39:37PM -0500, Jimmy McArthur wrote: >> Does the Zuul Community have any interest in BoF or Working Group space in >> Denver? >> > Confirmed again on IRC[1] this morning, the zuul community is not planning > any official BoF or Working Groups for Denver. > > Thanks for following up. > - Paul > > [1] http://eavesdrop.openstack.org/irclogs/%23zuul/latest.log.html#t2019-03-19T12:52:46 > >> For those unfamiliar, Working Groups are informal groups of people sharing a >> common interest for a task or code subsystem. Birds of a Feather is an >> effort to interest new community members to your project through a >> presentation and Q&A format. >> >> Each session type is 40 minutes. >> >> Cheers, >> Jimmy >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Mar 19 20:35:27 2019 From: corvus at inaugust.com (James E. 
Blair) Date: Tue, 19 Mar 2019 13:35:27 -0700 Subject: BoF/WG Space at Open Infrastructure Summit Denver In-Reply-To: <5C912D5E.9020408@openstack.org> (Jimmy McArthur's message of "Tue, 19 Mar 2019 12:56:46 -0500") References: <5C8FE5E9.3040201@openstack.org> <20190319161150.GA7700@localhost.localdomain> <877ecux1zx.fsf@meyer.lemoncheese.net> <5C91227F.4080800@openstack.org> <87y35avji3.fsf@meyer.lemoncheese.net> <5C912D5E.9020408@openstack.org> Message-ID: <87bm26vbzk.fsf@meyer.lemoncheese.net> Jimmy McArthur writes: > I could put some placeholders on the schedule and if y'all decide > later you don't want them, no problem. Just let me know :) Thanks, it sounds like we have at least enough interest in both of those to make that worthwhile. -Jim From jimmy at openstack.org Tue Mar 19 20:56:48 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 19 Mar 2019 15:56:48 -0500 Subject: BoF/WG Space at Open Infrastructure Summit Denver In-Reply-To: <87bm26vbzk.fsf@meyer.lemoncheese.net> References: <5C8FE5E9.3040201@openstack.org> <20190319161150.GA7700@localhost.localdomain> <877ecux1zx.fsf@meyer.lemoncheese.net> <5C91227F.4080800@openstack.org> <87y35avji3.fsf@meyer.lemoncheese.net> <5C912D5E.9020408@openstack.org> <87bm26vbzk.fsf@meyer.lemoncheese.net> Message-ID: <5C915790.4010209@openstack.org> Awesome. I've got y'all on the schedule for Tuesday, 9-9:40 and 9:50-10:30. Happy to adjust the abstracts once they are nailed down. Cheers, Jimmy > James E. Blair > March 19, 2019 at 3:35 PM > > Thanks, it sounds like we have at least enough interest in both of those > to make that worthwhile. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > Jimmy McArthur > March 19, 2019 at 12:56 PM > I could put some placeholders on the schedule and if y'all decide > later you don't want them, no problem. Just let me know :) > > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > James E. Blair > March 19, 2019 at 12:53 PM > > I think we'll be missing some key people involved in that, but I think > we could still have a good discussion. If we constructed it as a > discussion about planned future changes to Zuul (including the scale out > scheduler, but perhaps also including k8s operators, etc), we could > probably exchange some good information. As long as we don't think of > it as a design session where we have concrete outcomes. But we can > share plans and concerns and get them recorded for future discussion. > > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > Jimmy McArthur > March 19, 2019 at 12:10 PM > > >> Jarek Łukow >> March 19, 2019 at 12:05 PM >> On Tue, 19 Mar 2019 at 17:28, James E. Blair wrote: >>> I'm not quite as sure whether we could make good use of a WG session -- >>> does anyone have thoughts on that? >> >> Maybe a session about the planned scheduler scale-out/HA design? >> I'm not sure if it's a good fit for a WG meeting though. > This could be a good fit. The idea with a WG session is it's a > collaborative session, not just a presentation. 
>> Regards >> Jarek >> >>> -Jim >>> >>> _______________________________________________ >>> Zuul-discuss mailing list >>> Zuul-discuss at lists.zuul-ci.org >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> >> >> >> James E. Blair >> March 19, 2019 at 11:28 AM >> >> I think there is some confusion. >> >> The Summit and PTG are separate colocated events. >> >> We are indeed not planning on doing anything during the PTG, which is >> Thursday through Saturday. >> >> But if I understand correctly, the BoF and Working Group spaces are >> during the Summit, scheduled Monday through Wednesday. >> >> The BoF session -- where we sort of informally chat with potential new >> users, sounds like it might be a good thing for us to do? If I >> understand how it's intended to be used, a dedicated time and place >> during the PTG for current and potential Zuul users to talk might be >> useful. >> >> I'm not quite as sure whether we could make good use of a WG session -- >> does anyone have thoughts on that? >> >> -Jim >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> Paul Belanger >> March 19, 2019 at 11:11 AM >> On Mon, Mar 18, 2019 at 01:39:37PM -0500, Jimmy McArthur wrote: >>> Does the Zuul Community have any interest in BoF or Working Group space in >>> Denver? >>> >> Confirmed again on IRC[1] this morning, the zuul community is not planning >> any official BoF or Working Groups for Denver. >> >> Thanks for following up. >> - Paul >> >> [1]http://eavesdrop.openstack.org/irclogs/%23zuul/latest.log.html#t2019-03-19T12:52:46 >> >>> For those unfamiliar, Working Groups are informal groups of people sharing a >>> common interest for a task or code subsystem. Birds of a Feather is an >>> effort to interest new community members to your project through a >>> presentation and Q&A format. >>> >>> Each session type is 40 minutes. >>> >>> Cheers, >>> Jimmy >>> >>> _______________________________________________ >>> Zuul-discuss mailing list >>> Zuul-discuss at lists.zuul-ci.org >>> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> Jimmy McArthur >> March 18, 2019 at 1:39 PM >> Does the Zuul Community have any interest in BoF or Working Group >> space in Denver? >> >> For those unfamiliar, Working Groups are informal groups of people >> sharing a common interest for a task or code subsystem. Birds of a >> Feather is an effort to interest new community members to your >> project through a presentation and Q&A format. >> >> Each session type is 40 minutes. >> >> Cheers, >> Jimmy >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > Jarek Łukow > March 19, 2019 at 12:05 PM > On Tue, 19 Mar 2019 at 17:28, James E. Blair wrote: >> I'm not quite as sure whether we could make good use of a WG session -- >> does anyone have thoughts on that? > > Maybe a session about the planned scheduler scale-out/HA design? 
> I'm not sure if it's a good fit for a WG meeting though. > > Regards > Jarek > >> -Jim >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Wed Mar 20 19:57:50 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 20 Mar 2019 12:57:50 -0700 Subject: Migrating git repos to OpenDev Message-ID: <87pnqlnwsh.fsf@meyer.lemoncheese.net> corvus at inaugust.com (James E. Blair) writes: > Hi, > > As discussed in November[1], the OpenStack project infrastructure is > being rebranded as "OpenDev" to better support a wider community of > projects. > > We are nearly ready to perform the part of this transition with the > largest impact: moving the authoritative git repositories for existing > projects. Working with the folks representing each of the affected OSF projects, we've scheduled the migration for Friday, April 19, 2019. This is shortly after the next OpenStack release. Please see [2] for further details about what this will involve. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-November/136403.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html -Jim From jimmy at openstack.org Wed Mar 20 21:00:31 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 20 Mar 2019 16:00:31 -0500 Subject: Zuul 3.7.0 released In-Reply-To: <87a7hpmgk9.fsf@meyer.lemoncheese.net> References: <87a7hpmgk9.fsf@meyer.lemoncheese.net> Message-ID: <5C92A9EF.1060502@openstack.org> Congrats to the whole Zuul community :) > James E. Blair > March 20, 2019 at 3:33 PM > The Zuul team is pleased to announce the release of Zuul 3.7.0. > > Download it from PyPI: https://pypi.org/project/zuul > > The release notes, included below, are also available in the > documentation with hyperlink references: > > https://zuul-ci.org/docs/zuul/releasenotes.html#relnotes-3-7-0 > > Upgrade Notes > ============= > > * The default ansible version is now 2.7. In case this breaks any jobs > this can be overridden in job.ansible-version, > tenant.default-ansible-version or scheduler.default_ansible_version > > * In order to support several ansible versions installed simultaneously > Zuul now handles them itself in virtual environments. By default Zuul > installs the needed ansible versions on startup so there is no further > user action required. However it is recommended to pre-install the > ansible environments during installation by invoking > zuul-manage-ansible. > > * The user value in the [fingergw] configuration section previously > defaulted to zuul, but now is unset by default, which will cause > fingergw not to drop privileges. It is recommended that this value be > explicitly set to an unprivileged user. > > * The hosts value in the [zookeeper] configuration section previously > defaulted to localhost:2181, but now is unset by default. This value > is required and must be explicitly set (and the documentation has > always indicated this). > > New Features > ============ > > * Jobs may now specify which ansible version is used to run them. The > ansible version to use can now be specified by job.ansible-version. > > * Zuul now supports ansible 2.6 for running jobs. > > * Zuul now supports ansible 2.7 for running jobs. > > * The ansible_python_interpreter variable is now whitelisted for the > add_host task. 
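Concretely, the per-job Ansible selection mentioned in the notes above is set on the job definition itself. A minimal sketch, with a made-up job name and playbook path (only the ansible-version attribute comes from the release notes):

  # Hypothetical job definition; only ansible-version is the new 3.7.0 attribute.
  - job:
      name: example-integration-test
      ansible-version: 2.7
      run: playbooks/integration.yaml

Deployments that prefer a single default over per-job pins can presumably rely on tenant.default-ansible-version or scheduler.default_ansible_version instead, as noted in the upgrade notes.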
> > _______________________________________________ > Zuul-announce mailing list > Zuul-announce at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-announce -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Wed Mar 20 23:11:24 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 20 Mar 2019 16:11:24 -0700 Subject: Parameterized Build In-Reply-To: <87va14pnrk.tristanC@fedora> (Tristan Cacqueray's message of "Thu, 28 Feb 2019 12:58:39 +0900") References: <877eex2ntp.tristanC@fedora> <87imygjrow.fsf@meyer.lemoncheese.net> <875zt8nr37.tristanC@fedora> <87imx7o7fc.fsf@meyer.lemoncheese.net> <87va14pnrk.tristanC@fedora> Message-ID: <87mulpkuoz.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > On Mon, Feb 25, 2019 at 14:00 James E. Blair wrote: >> Tristan Cacqueray writes: >> >>>>> In summary, could we adjust the architecture to support executing >>>>> an individual job instead of an entire job graph and could we >>>>> support a zuul-web trigger? >>>> >>>> I'm open to exploring the idea of a filtered job graph; I don't think >>>> it's necessarily in opposition to Zuul's design, though there are some >>>> tricky questions to answer in implementation. >>>> >>>> While I'm not convinced that triggers should be able to pass parameters >>>> to jobs, as input, I'm not opposed to the web trigger itself; I think >>>> that's a fine thing to add and should be relatively straightforward. >>>> >>> >>> The proposed webtrigger driver has been updated: >>> https://review.openstack.org/#/q/topic:webtrigger >>> >>> Unfortunately Jim rejected the implementation because the parameters >>> set by the trigger are not represented in git. Unless we can work >>> around that limitation, then it seems like we have to create "fake" >>> git reviews, or we could implement a git service just to hold the >>> event reference. >> >> I rejected the change with the filtered job graph more completely than >> was warranted. I do think it's something we can explore, but it is not >> straightforward. It is a change to Zuul's operational model, which, >> while not set in stone, is something that we should alter very >> carefully. >> >> It is critical that when we make changes like this that we have a >> complete understanding of the use cases for them, and now I think we do. >> Moreover, the use case you described of QE folks needing to trigger >> builds of jobs in infrequently used configurations aligns rather well >> with another use case we've seen where folks may be resource constrained >> and must make individual choices about what additional jobs to run for >> changes. An experimental pipeline is a solution to this, but only a >> partial one. A filtered job graph could provide a better experience for >> everyone. >> >> So allow me to expand on what I think the challenges are, and what we >> need to do to address them: >> >> * There is an anti-use-case: frequently people come to us in >> #openstack-infra asking to be able to recheck just a failing job so >> their change can merge faster. OpenStack has created a clean-check >> requirement for its gate pipeline explicitly to make it *hard* to >> merge a change when all the jobs do not pass reliably. Therefore, any >> filtered job graph feature must, at the very least, allow an operator >> to prevent this behavior. 
Moreover, even outside of the environment >> with clean-check, it is clearly not good practice for project gating >> to allow a *single* job to determine whether a change should merge at >> the exclusion of all others. Therefore, it should be difficult, if >> not impossible, for this feature to undermine the best practices for >> gating. >> >> * This feature should be something generally available to any trigger >> where it makes sense. Zuul is an automation system built around >> gating, so if it becomes too easy to do things via the web trigger and >> too difficult to do things via automation, then the system is >> unbalanced and we undermine the thing that we do well. So any >> solution should be designed for as many triggers as make sense. >> >> Therefore, I think if we implement this, we should do the following: >> >> * A job graph filter should be a fundamental concept that every trigger >> has the option of supporting. We should expect to support this not >> only in the web trigger, but also Gerrit and GitHub. I don't think >> it makes sense for the timer trigger, but I would expect it in almost >> any other future trigger as well (IRC, email, etc...). >> >> * They should be configurable as part of the pipeline triggers. This >> will allow operators to determine if the feature should even be >> available at all on a pipeline. When a web trigger is added to a >> pipeline, it should include an option to indicate whether filtering >> should be available. When a Gerrit trigger is added, a comment syntax >> with regular expressions with named groups can be used to provide >> input to the job graph filter. >> >> * Filtered job graphs should be non-voting. This will prevent their >> being used in check or gate pipelines, but can still be used to see >> the results of ad-hoc jobs. >> >> * Create a new reporter action. Currently we report on success, >> failure, and merge-failure. A filtered job graph, since it has no >> success or failure determination, doesn't fit neatly into any of >> those. Therefore, a new action (perhaps "neutral" or "indeterminate") >> will be triggered by the result of a filtered job graph. If a >> pipeline has no "neutral:" reporter, then it won't act on the result >> at all, meaning that again, pipeline operators have to be explicit >> about the behavior. This will prevent users from accidentally >> enabling the gate-wedging behavior of voting on filtered job graphs. >> It will, of course, enable someone to explicitly do that, but as long >> as we include sufficient warnings in the documentation about how it's >> a bad idea, I think that's okay. >> >> I think if we implement it in this way, it will address the issues that >> I have with it, and make things nicer for several use cases. >> > > The above requirements should be implemented in > https://review.openstack.org/639905 with an initial gerrit implementation. > > Unfortunately, this doesn't seems to enable custom parameter to be set, > but perhaps we can work around that by having all the relevant > parameters combinaison expressed as unique job and added to such > job filtered pipeline. I've spoken to some other folks interested in this and reviewed your earlier input and recent discussions on IRC, and I do think we should look into whether we can support dynamic job variables. The thing that clicked for me is that none of the use cases relate to gating, but could still leverage having jobs defined in Zuul. 
In particular, the use case of someone using a deployment job defined in Zuul to run on an ad-hoc non-production target. That has nothing to do with gating, and there is no way to translate that to a gating workflow. It's just an innovative use of the system, and we have almost everything needed to make it work, while the alternatives are much more complex. I do think we should support that if we can do so without compromising the main functions of Zuul. I think my primary reservation about dynamic variables is basically the same as that for job graph filters: they must not be used to undermine a gating system, and we have to be able to control their use in pipeline definitions. Perhaps we can address the issues in the same way? We could perhaps say that if a job is given dynamic variables, the buildset result is indeterminate. And allow users to specify as part of the trigger whether variable input is accepted. What do you think? -Jim From tdecacqu at redhat.com Thu Mar 21 05:03:00 2019 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 21 Mar 2019 14:03:00 +0900 Subject: Parameterized Build In-Reply-To: <87mulpkuoz.fsf@meyer.lemoncheese.net> References: <877eex2ntp.tristanC@fedora> <87imygjrow.fsf@meyer.lemoncheese.net> <875zt8nr37.tristanC@fedora> <87imx7o7fc.fsf@meyer.lemoncheese.net> <87va14pnrk.tristanC@fedora> <87mulpkuoz.fsf@meyer.lemoncheese.net> Message-ID: <87va0ckeez.tristanC@fedora> On Wed, Mar 20, 2019 at 16:11 James E. Blair wrote: > Tristan Cacqueray writes: > >> On Mon, Feb 25, 2019 at 14:00 James E. Blair wrote: >>> Tristan Cacqueray writes: >>> >>>>>> In summary, could we adjust the architecture to support executing >>>>>> an individual job instead of an entire job graph and could we >>>>>> support a zuul-web trigger? >>>>> >>>>> I'm open to exploring the idea of a filtered job graph; I don't think >>>>> it's necessarily in opposition to Zuul's design, though there are some >>>>> tricky questions to answer in implementation. >>>>> >>>>> While I'm not convinced that triggers should be able to pass parameters >>>>> to jobs, as input, I'm not opposed to the web trigger itself; I think >>>>> that's a fine thing to add and should be relatively straightforward. >>>>> >>>> >>>> The proposed webtrigger driver has been updated: >>>> https://review.openstack.org/#/q/topic:webtrigger >>>> >>>> Unfortunately Jim rejected the implementation because the parameters >>>> set by the trigger are not represented in git. Unless we can work >>>> around that limitation, then it seems like we have to create "fake" >>>> git reviews, or we could implement a git service just to hold the >>>> event reference. >>> >>> I rejected the change with the filtered job graph more completely than >>> was warranted. I do think it's something we can explore, but it is not >>> straightforward. It is a change to Zuul's operational model, which, >>> while not set in stone, is something that we should alter very >>> carefully. >>> >>> It is critical that when we make changes like this that we have a >>> complete understanding of the use cases for them, and now I think we do. >>> Moreover, the use case you described of QE folks needing to trigger >>> builds of jobs in infrequently used configurations aligns rather well >>> with another use case we've seen where folks may be resource constrained >>> and must make individual choices about what additional jobs to run for >>> changes. An experimental pipeline is a solution to this, but only a >>> partial one. 
A filtered job graph could provide a better experience for >>> everyone. >>> >>> So allow me to expand on what I think the challenges are, and what we >>> need to do to address them: >>> >>> * There is an anti-use-case: frequently people come to us in >>> #openstack-infra asking to be able to recheck just a failing job so >>> their change can merge faster. OpenStack has created a clean-check >>> requirement for its gate pipeline explicitly to make it *hard* to >>> merge a change when all the jobs do not pass reliably. Therefore, any >>> filtered job graph feature must, at the very least, allow an operator >>> to prevent this behavior. Moreover, even outside of the environment >>> with clean-check, it is clearly not good practice for project gating >>> to allow a *single* job to determine whether a change should merge at >>> the exclusion of all others. Therefore, it should be difficult, if >>> not impossible, for this feature to undermine the best practices for >>> gating. >>> >>> * This feature should be something generally available to any trigger >>> where it makes sense. Zuul is an automation system built around >>> gating, so if it becomes too easy to do things via the web trigger and >>> too difficult to do things via automation, then the system is >>> unbalanced and we undermine the thing that we do well. So any >>> solution should be designed for as many triggers as make sense. >>> >>> Therefore, I think if we implement this, we should do the following: >>> >>> * A job graph filter should be a fundamental concept that every trigger >>> has the option of supporting. We should expect to support this not >>> only in the web trigger, but also Gerrit and GitHub. I don't think >>> it makes sense for the timer trigger, but I would expect it in almost >>> any other future trigger as well (IRC, email, etc...). >>> >>> * They should be configurable as part of the pipeline triggers. This >>> will allow operators to determine if the feature should even be >>> available at all on a pipeline. When a web trigger is added to a >>> pipeline, it should include an option to indicate whether filtering >>> should be available. When a Gerrit trigger is added, a comment syntax >>> with regular expressions with named groups can be used to provide >>> input to the job graph filter. >>> >>> * Filtered job graphs should be non-voting. This will prevent their >>> being used in check or gate pipelines, but can still be used to see >>> the results of ad-hoc jobs. >>> >>> * Create a new reporter action. Currently we report on success, >>> failure, and merge-failure. A filtered job graph, since it has no >>> success or failure determination, doesn't fit neatly into any of >>> those. Therefore, a new action (perhaps "neutral" or "indeterminate") >>> will be triggered by the result of a filtered job graph. If a >>> pipeline has no "neutral:" reporter, then it won't act on the result >>> at all, meaning that again, pipeline operators have to be explicit >>> about the behavior. This will prevent users from accidentally >>> enabling the gate-wedging behavior of voting on filtered job graphs. >>> It will, of course, enable someone to explicitly do that, but as long >>> as we include sufficient warnings in the documentation about how it's >>> a bad idea, I think that's okay. >>> >>> I think if we implement it in this way, it will address the issues that >>> I have with it, and make things nicer for several use cases. 
>>> >> >> The above requirements should be implemented in >> https://review.openstack.org/639905 with an initial gerrit implementation. >> >> Unfortunately, this doesn't seems to enable custom parameter to be set, >> but perhaps we can work around that by having all the relevant >> parameters combinaison expressed as unique job and added to such >> job filtered pipeline. > > I've spoken to some other folks interested in this and reviewed your > earlier input and recent discussions on IRC, and I do think we should > look into whether we can support dynamic job variables. The thing that > clicked for me is that none of the use cases relate to gating, but could > still leverage having jobs defined in Zuul. In particular, the use case > of someone using a deployment job defined in Zuul to run on an ad-hoc > non-production target. That has nothing to do with gating, and there is > no way to translate that to a gating workflow. It's just an innovative > use of the system, and we have almost everything needed to make it work, > while the alternatives are much more complex. I do think we should > support that if we can do so without compromising the main functions of > Zuul. > > I think my primary reservation about dynamic variables is basically the > same as that for job graph filters: they must not be used to undermine a > gating system, and we have to be able to control their use in pipeline > definitions. Perhaps we can address the issues in the same way? We > could perhaps say that if a job is given dynamic variables, the buildset > result is indeterminate. And allow users to specify as part of the > trigger whether variable input is accepted. > > What do you think? > That's great news :-) The job graph filters already added a 'neutral' result so that it can't be used for gating. And this now also the case for dynmaic variables too. Being able to select a single job (job graph filter) out of a pipeline definition is implemented in the topic:webtrigger and dynamic variables is added in the topic:parameterized-build. The user interface relies on https://review.openstack.org/637670 to display the build button only for pipeline that uses a webtrigger. Thus it would be nice to merge that simple addition first. Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From corvus at inaugust.com Fri Mar 22 20:32:28 2019 From: corvus at inaugust.com (James E. Blair) Date: Fri, 22 Mar 2019 13:32:28 -0700 Subject: Speculative docker images In-Reply-To: <87imyc34vf.fsf@meyer.lemoncheese.net> (James E. Blair's message of "Fri, 25 Jan 2019 11:37:56 -0800") References: <87imyc34vf.fsf@meyer.lemoncheese.net> Message-ID: <87d0miejkz.fsf@meyer.lemoncheese.net> corvus at inaugust.com (James E. Blair) writes: > Hi, > > As you know, with respect to git repositories, Zuul frequently operates > well into the future, allowing jobs to build software with changes that > haven't landed yet under the assurance that they will. As more of us > have incorporated container image building into our processes, we find > that we want the same thing there -- that not only would our git > repositories represent the future that Zuul sees, but also our container > image registries. I've completed the work described in this email, and it is now in production use in the OpenDev project infrastructure. 
Documentation describing how to set the system up is published at: https://zuul-ci.org/docs/zuul-jobs/docker-image.html -Jim From pabelanger at redhat.com Sat Mar 23 19:26:49 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Sat, 23 Mar 2019 15:26:49 -0400 Subject: nodepoolcmd + secure.conf Message-ID: <20190323192649.GA12050@localhost.localdomain> Greetings, I wanted to discuss maybe reverting the following change[1]. On Friday I updated our local nodepool.yaml configuration to move the zookeeper-servers section into secure.conf. This was mostly to prepare for the upcoming patch adding SSL support for zookeeper. However, because 480310 removed the default value of the -s flag, I now find myself having to type the following every time I want to interact with nodepool: $ nodepool -s /etc/nodepool/secure.conf list $ nodepool -s /etc/nodepool/secure.conf dib-image-list ... I actually don't think a full revert is the best case here, as that would mean every nodepool install would need to add /etc/nodepool/secure.conf, even if empty, to have the CLI command not raise an exception for a missing file. I also don't think silently failing if /etc/nodepool/secure.conf is missing is the best way forward either, as this could mask deployment issues. Does anybody have an idea how we could fix this to be a little more user friendly? Thanks! - Paul [1] https://review.openstack.org/480310/ From shrews at redhat.com Mon Mar 25 13:56:24 2019 From: shrews at redhat.com (David Shrewsbury) Date: Mon, 25 Mar 2019 09:56:24 -0400 Subject: nodepoolcmd + secure.conf In-Reply-To: <20190323192649.GA12050@localhost.localdomain> References: <20190323192649.GA12050@localhost.localdomain> Message-ID: Hi, On Sat, Mar 23, 2019 at 3:27 PM Paul Belanger wrote: > Greetings, > > I wanted to discuss maybe reverting the following change[1]. On friday I > updated our local nodepool.yaml configuration to move zookeeper-servers > section into secure.conf. This was mostly to prepare for the upcoming > patch adding SSL support for zookeeper. However, because 480310 removed > the default value of the -s flag, I now find myself having to type the > following everytime I want to interact with nodepool: > > $ nodepool -s /etc/nodepool/secure.conf list > $ nodepool -s /etc/nodepool/secure.conf dib-image-list > ... > > I actually don't think a full revert is the best case here, as that > would mean ever nodepool install would need to add > /etc/nodepool/secure.conf, even if empty to have the CLI command not > raise an exception for missing file. > > > I also don't think silently failing, if /etc/nodepool/secure.conf is > missing is the best way forward either as this could mask deployment > issues. > > Does anybody have an idea how we could fix this to be a little more user > friendly? > I think maybe we could change it to automatically try to read /etc/nodepool/secure.conf if it exists, by default. If the default file does not exist, just do nothing. In the case that the -s option is explicitly given, we always try to read the supplied config file, and only then error if it does not exist (because we were told it *should* exist). I can't immediately see any holes in that just yet, but I'm only on my first cup of coffee. :) Shrews
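Concretely, the split being discussed would leave just the ZooKeeper connection details in the secure file, with nodepool.yaml keeping the providers, labels, and diskimages. A minimal sketch, with a placeholder hostname (the exact keys accepted in the secure file depend on the Nodepool version in use):

  # /etc/nodepool/secure.conf (illustrative contents only)
  zookeeper-servers:
    - host: zk01.example.com
      port: 2181

With that section moved out of nodepool.yaml, the CLI currently requires the explicit -s flag shown above; the idea floated in this thread is to fall back to /etc/nodepool/secure.conf automatically when it exists, and only raise an error when a path passed explicitly via -s is missing.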
From pabelanger at redhat.com Mon Mar 25 15:23:34 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 25 Mar 2019 11:23:34 -0400 Subject: nodepoolcmd + secure.conf In-Reply-To: References: <20190323192649.GA12050@localhost.localdomain> Message-ID: <20190325152334.GA7940@localhost.localdomain> On Mon, Mar 25, 2019 at 09:56:24AM -0400, David Shrewsbury wrote: > Hi, > > On Sat, Mar 23, 2019 at 3:27 PM Paul Belanger wrote: > > > Greetings, > > > > I wanted to discuss maybe reverting the following change[1]. On friday I > > updated our local nodepool.yaml configuration to move zookeeper-servers > > section into secure.conf. This was mostly to prepare for the upcoming > > patch adding SSL support for zookeeper. However, because 480310 removed > > the default value of the -s flag, I now find myself having to type the > > following everytime I want to interact with nodepool: > > > > $ nodepool -s /etc/nodepool/secure.conf list > > $ nodepool -s /etc/nodepool/secure.conf dib-image-list > > ... > > > > I actually don't think a full revert is the best case here, as that > > would mean ever nodepool install would need to add > > /etc/nodepool/secure.conf, even if empty to have the CLI command not > > raise an exception for missing file. > > > > > I also don't think silently failing, if /etc/nodepool/secure.conf is > > missing is the best way forward either as this could mask deployment > > issues. > > > > Does anybody have an idea how we could fix this to be a little more user > > friendly? > > > > I think maybe we could change it to automatically try to read > /etc/nodepool/secure.conf > if it exists, by default. If the default file does not exist, just do > nothing. In the case that > the -s option is explicitly given, we always try to read the supplied > config file, and only > then error if it does not exist (because we were told it *should* exist). > > I can't immediately see any holes in that just yet, but I'm only on my > first cup of coffee. :) > wfm, I'll propose a patch based on what you have described. Thanks, - Paul From corvus at inaugust.com Mon Mar 25 17:55:50 2019 From: corvus at inaugust.com (James E. Blair) Date: Mon, 25 Mar 2019 10:55:50 -0700 Subject: Zuul project update 2019-03-25 Message-ID: <871s2ualeh.fsf@meyer.lemoncheese.net> Hi, Since the last update, Tristan has updated the zookeeper auth patch based on recent comments, rebased the zuul-runner patch on the multi-ansible stack, and investigated how to enable the executor service to use a COE. I expect we'll have a good discussion about that once Tristan's findings are ready. Fabian worked on an Elasticsearch Zuul reporter to enable indexing of build and buildset data. It enables building analytics dashboards on top of Kibana, for instance. I reviewed and helped merge Tobias's work on multi-ansible. We upgraded OpenDev's Zuul, released 3.7.0, found some bugs, fixed them, and I plan on releasing 3.7.1 today. Next I plan on reviewing the zuul-runner series (as outlined in the previous update email). The remainder of the backlog from the last update email: 1) The zuul-runner series This has been in development for a long time, and has a significant impact on the executor. Now that the multi-ansible series has merged and it has been rebased, let's focus on this next. 2) The web authentication spec This has been in development for a while, and folks working on it recently signaled it's ready for review. This shouldn't conflict with anything ahead of it, so we can review it as time permits.
3) The web authentication series Once the spec lands, we can review the implementation which has been developing in parallel. Help write the next update email by updating https://etherpad.openstack.org/p/zuul-update-email Thanks, Jim From kennelson11 at gmail.com Fri Mar 29 00:03:15 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 28 Mar 2019 17:03:15 -0700 Subject: PTG Team Photos Message-ID: Hello! If your team is interested in having a team photo taken, here is the signup[1]! There are slots Thursday and Friday from 10:00 AM to 4:30 PM. The location is TBD but will likely be close to where registration will be. I'll send an email out the day before with a reminder of your time slot and an exact location. -Kendall (diablo_rojo) [1] https://docs.google.com/spreadsheets/d/1DgsRHVWW2YLv7ewfX0M21zWJRf4wUfPG4ff2V5XtaMg/edit?usp=sharing