Proposal to create zuul-operator project
Hey everybody,
It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
SO ...
I would like to propose that we create a zuul-operator project as part of Zuul.
Given our close ties with Ansible, I think we should use ansible-operator [0] to write it.
Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1]
Does anyone object? If not, I'll get a repo started.
Monty
[0] https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/u...
[1] https://www.operatorhub.io/
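For those not familiar with the ansible-operator approach: the operator-sdk's Ansible flavor is driven by a watches.yaml file that maps a custom resource to an Ansible role or playbook that reconciles it. A minimal sketch of what that might look like for Zuul - the group, version, kind and role path below are illustrative assumptions, not decided names:

```yaml
---
# watches.yaml - maps the custom resource to the Ansible code that reconciles it
- version: v1alpha1              # hypothetical API version
  group: operator.zuul-ci.org    # hypothetical group name
  kind: Zuul                     # hypothetical kind
  role: /opt/ansible/roles/zuul  # role that would create/update the Zuul deployment
```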
I'm totally for it. Actually it was even on my to-do list for quite some time already. Artem On Wed, 6 Mar 2019, 18:31 Monty Taylor, <mordred@inaugust.com> wrote:
Hey everybody,
It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
SO ...
I would like to propose that we create a zuul-operator project as part of Zuul.
Given our close ties with Ansible, I think we should use ansible-operator [0] to write it.
Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1]
Does anyone object? If not, I'll get a repo started.
Monty
[0]
https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/u... [1] https://www.operatorhub.io/
+10 for that! //G Artem Goncharov <artem.goncharov@gmail.com> wrote on Wed, 6 Mar 2019 at 18:35:
I'm totally for it. Actually it was even on my to-do list for quite some time already.
Artem
On Wed, 6 Mar 2019, 18:31 Monty Taylor, <mordred@inaugust.com> wrote:
Hey everybody,
It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
SO ...
I would like to propose that we create a zuul-operator project as part of Zuul.
Given our close ties with Ansible, I think we should use ansible-operator [0] to write it.
Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1]
Does anyone object? If not, I'll get a repo started.
Monty
[0]
https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/u... [1] https://www.operatorhub.io/
-- Principal Software Engineer - Systems Design & Engineering Mobile: +49 171 2801345 Red Hat GmbH, http://www.de.redhat.com/, Sitz: Grasbrunn, Handelsregister: Amtsgericht München, HRB 153243, Geschäftsführer: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander
On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote:
Hey everybody,
It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
SO ...
I would like to propose that we create a zuul-operator project as part of Zuul.
Given our close ties with Ansible, I think we should use ansible-operator [0] to write it.
Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1]
Does anyone object? If not, I'll get a repo started.
I think it is a fine idea; I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment - maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd: is that in play or not?
Also, could we maybe say a single distro for all container things? I guess by default that is what the current containers published by the zuul project are running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things maybe get installed (if needed). I am likely showing how much I don't know about k8s / openshift here, but I want to limit the number of different ways to do a deployment based on underlying distro bits.
Monty
[0] https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/u... [1] https://www.operatorhub.io/
Paul Belanger <pabelanger@redhat.com> writes:
I think it is a fine idea, I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd is that in play or not?
I think some things we can be opinionated on (for example, we can pick one SQL backend, because our operator will manage it). There are probably a number of ways to handle stats (manage it ourselves, or leave it to the user to be able to report to their central dashboard). We can make things easy for ourselves by ignoring that for starters. We can always add it later.
Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers by zuul project is running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things maybe get installed (if needed).
I expect our operator would just use the container images we're building. -Jim
Quoting James E. Blair (2019-03-06 10:22:31)
Paul Belanger <pabelanger@redhat.com> writes:
I think it is a fine idea, I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd is that in play or not?
I think some things we can be opinionated on (for example, we can pick one SQL backend, because our operator will manage it). There are probably a number of ways to handle stats (manage it ourselves, or leave it to the user to be able to report to their central dashboard). We can make things easy for ourselves by ignoring that for starters. We can always add it later.
All of the clouds I am interested in putting Zuul on have DBaaS and I tend to prefer those for a number of reasons (global backup policies, better performance, less ops load). I'm not saying v0 has to have this, but it would be nice if the DB part had an option to consume an external one. But I'd probably use it without that, so, just a nice to have. :)
Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers by zuul project is running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things maybe get installed (if needed).
I expect our operator would just use the container images we're building.
Another really hard part of this is what to include in the executor image. I currently add in kubectl for kubernetes resources, and boto3 for s3 logs. These are local choices that not everyone needs, and I imagine there are other examples of base jobs that rely on executor image contents not in the base executor image. So I think it's important to make it relatively simple to build your own images, but still use said operator.
On 3/6/19 6:59 PM, Clint Byrum wrote:
Quoting James E. Blair (2019-03-06 10:22:31)
Paul Belanger <pabelanger@redhat.com> writes:
I think it is a fine idea, I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd is that in play or not?
I think some things we can be opinionated on (for example, we can pick one SQL backend, because our operator will manage it). There are probably a number of ways to handle stats (manage it ourselves, or leave it to the user to be able to report to their central dashboard). We can make things easy for ourselves by ignoring that for starters. We can always add it later.
All of the clouds I am interested in putting Zuul on have DBaaS and I tend to prefer those for a number of reasons (global backup policies, better performance, less ops load). I'm not saying v0 has to have this, but it would be nice if the DB part had an option to consume an external one. But I'd probably use it without that, so, just a nice to have. :)
I crossed thread streams and said words about this replying to Paul but missed these... I think I agree with Jim - we should only manage one db. But I also agree with Clint - we should allow users to ask us to not manage a db and hand us a connection url. The default should be to manage a database and using an external one should be an opt-in for more advanced users.
Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers by zuul project is running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things maybe get installed (if needed).
I expect our operator would just use the container images we're building.
Another really hard part of this is what to include in the executor image. I currently add in kubectl for kubernetes resources, and boto3 for s3 logs. These are local choices that not everyone needs, and I imagine there are other examples of base jobs that rely on executor image contents not in the base executor image. So I think it's important to make it relatively simple to build your own images, but still use said operator.
rook has a config field in its CephCluster spec: image: ceph/ceph:v13.2.2-20181206 that lets people do that. In our case there is potentially a set of related images - but that should still be easy enough to do - and have them default to the images we build and expect.
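A rough sketch of how such an image-override field could look in a hypothetical Zuul CRD - the group/version and field names below are illustrative assumptions only:

```yaml
apiVersion: operator.zuul-ci.org/v1alpha1   # hypothetical group/version
kind: Zuul
metadata:
  name: zuul
spec:
  images:
    # Override any of these to use locally built images; fields left unset
    # would default to the upstream-published zuul images.
    scheduler: docker.io/zuul/zuul-scheduler:latest
    web: docker.io/zuul/zuul-web:latest
    executor: registry.example.com/local/zuul-executor:latest  # e.g. with kubectl and boto3 baked in
```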
On Wed, Mar 06, 2019 at 10:59:33AM -0800, Clint Byrum wrote:
Quoting James E. Blair (2019-03-06 10:22:31)
Paul Belanger <pabelanger@redhat.com> writes:
I think it is a fine idea, I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd is that in play or not?
I think some things we can be opinionated on (for example, we can pick one SQL backend, because our operator will manage it). There are probably a number of ways to handle stats (manage it ourselves, or leave it to the user to be able to report to their central dashboard). We can make things easy for ourselves by ignoring that for starters. We can always add it later.
All of the clouds I am interested in putting Zuul on have DBaaS and I tend to prefer those for a number of reasons (global backup policies, better performance, less ops load). I'm not saying v0 has to have this, but it would be nice if the DB part had an option to consume an external one. But I'd probably use it without that, so, just a nice to have. :)
Yes, completely agree. I'd love to see things like the db / zookeeper be able to consume external services and, if not able to, have the option for zuul-operator to spin them up. I am sure we can do it, just wanted to have the discussion around it for all the reasons you just listed.
Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers by zuul project is running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things maybe get installed (if needed).
I expect our operator would just use the container images we're building.
Another really hard part of this is what to include in the executor image. I currently add in kubectl for kubernetes resources, and boto3 for s3 logs. These are local choices that not everyone needs, and I imagine there are other examples of base jobs that rely on executor image contents not in the base executor image. So I think it's important to make it relatively simple to build your own images, but still use said operator.
++ Also been running into this issue locally myself.
On 3/6/19 6:12 PM, Paul Belanger wrote:
On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote:
Hey everybody,
It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
SO ...
I would like to propose that we create a zuul-operator project as part of Zuul.
Given our close ties with Ansible, I think we should use ansible-operator [0] to write it.
Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1]
Does anyone object? If not, I'll get a repo started.
I think it is a fine idea, I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd is that in play or not?
I think that's a great question.
I do believe it should be an opinionated deployment - but there are probably places where people will want to make some choices from a specific set of choices. Like - by default I think it should spin up a percona-xtradb cluster as provided by the percona operator. BUT - I could see a setting in the main Zuul CRD like "use-existing-db" that takes a sqlalchemy url. That way if there is someone who has an existing db infrastructure, they could take advantage of it - but the Zuul operator gets you a working system out of the box.
Obviously we'll need to spin up a zookeeper. Not sure if there is an operator for that yet - we might want to make one or work with someone to make one.
I could see a graphite/statsd option with the ability to point to an existing graphite. Maybe we just start with a CRD field that indicates the stats_host like we have now?
But - yes - opinionated, mostly batteries included - maybe some amount of optional things.
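As a sketch only - the field names below are made up for illustration, not a decided schema - the opt-in could look something like this in the Zuul CRD:

```yaml
spec:
  database:
    # Default behaviour: the operator manages its own database
    # (e.g. a percona-xtradb cluster via the percona operator).
    # Opt-in for advanced users: hand the operator an existing database instead.
    useExistingDb: mysql+pymysql://zuul:secretpassword@db.example.com/zuul
```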
Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers by zuul project is running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things maybe get installed (if needed).
Yes. All of our container images as we are publishing them now are awesome, and are what I expect us to use.
I am categorically opposed to us making and publishing different images. One of the nice things about publishing container images is that you can do them once and do them well. We're using the python base image. It happens to be based on debian-stretch - but that's irrelevant. It's running python.
I DO think we'll likely need to support specifying an image location in the CRD. (rook does this for ceph too) That's because I'd like for Tobias to be able to use the operator if he chooses, but he uses local builds of the images. And I think that's fine.
I am likely showing how much I don't know about k8s / openshift here, but wanting to limit the amount of different ways to do a deployment based on underlying distro bits.
+10000000000 I believe there is no value in producing divergent images.
On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote:
On 3/6/19 6:12 PM, Paul Belanger wrote:
On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote:
Hey everybody,
It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
Hello,
Thank you for starting this thread. It seems like a great opportunity to converge the different deployment and operation tools.
Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose:
* log-processor-operator
* elastic-search-operator
* paste-operator, etherpad-operator, ...
* gerrit-operator
We also started to investigate operators for Software Factory components and perhaps we could join forces?
SO ...
I would like to propose that we create a zuul-operator project as part of Zuul.
Given our close ties with Ansible, I think we should use ansible-operator [0] to write it.
Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1]
Does anyone object? If not, I'll get a repo started.
I think it is a fine idea, I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd is that in play or not?
I think that's a great question.
I do believe it should be an opinionated deployment - but there are probably places where people will want to make some choices from a specific set of choices. Like - by default I think it should spin up a percona-xtradb cluster as provided by the percona operator. BUT - I could see a setting in the main Zuul CRD like "use-existing-db" that takes a sqlalchemy url. That way if there is someone who has an existing db infrastructure, they could take advantage of it - but the Zuul operator gets you a working system out of the box.
Obviously we'll need to spin up a zookeeper. Not sure if there is an operator for that yet - we might want to make one or work with someone to make one.
There are actually 2 operators for ZooKeeper already listed in this page: https://github.com/operator-framework/awesome-operators
I could see a graphite/statsd option with the ability to point to an existing graphite. Maybe we just start with CRD field that indicates the stats_host like we have now?
But - yes - opinionated, mostly batteries included - maybe some amount of optional things.
Should we discuss this with a spec? For example: How are we going to configure the connections and the tenants' configuration? Software Factory batteries include a zuul-reconfigure job to manage the main.yaml file. Should the zuul-operator set up such a job, or perhaps do we also need a zuul-quick-start-operator?
Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers by zuul project is running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things maybe get installed (if needed).
Yes. All of our container images as we are publishing them now are awesome, and are what I expect us to use.
I am categorically opposed to us making and publishing different images. One of the nice things about publishing container images is that you can do them once and do them well. We're using the python base image. It happens to be based on debian-stretch - but that's irrelevant. It's running python.
That sounds great, but how are we going to cope with the next heartbleed or shellshock? IIUC, we would need to wait for a fix to land in the debian-stretch image, then wait for a new opendevorg/python-builder image and finally wait for a new zuul image to be published. Or what happens when the zuul image gets published with an incompatible openstacksdk requirement? Thus I think the base image and how the zuul bits get bundled are relevant. For example, OpenShift users may want to use BuildConfig instead so that a parent layer update automatically triggers a rebuild of the child layers. Regards, -Tristan
I DO think we'll likely need to support specifying an image location in the CRD. (rook does this for ceph too) That's because I'd like for Tobias to be able to use the operator if he chooses, but he uses local builds of the images. And I think that's fine.
I am likely showing how much I don't know about k8s / openshift here, but wanting to limit the amount of different ways to do a deployment based on underlying distro bits.
+10000000000
I believe there is no value in producing divergent images.
On 2019-03-07 11:32:39 +0900 (+0900), Tristan Cacqueray wrote: [...]
IIUC, we would need to wait for a fix to land in the debian-stretch image,
We could just wait for the fix to land in Debian itself (they're going to be part of the coordinated disclosure of most of the sort of vulnerabilities you're talking about so should be available at basically the same time as it's announced by researchers) and then rebuild those images ourselves if that step is taking too long... or get involved with the community who maintains them and help out?
then wait for a new opendevorg/python-builder image and finally wait for a new zuul image to be published.
Luckily OpenDev and Zuul image building is managed in the same system and maintained by mostly the same people so I wouldn't anticipate much lag there. But also, this entire stack of images is buildable by downstream consumers who want to do that themselves instead of waiting for us.
Or what happen when the zuul image gets published with an incompatible openstacksdk requirement? [...]
I don't see where the choice of underlying distribution comes into play here. We'd explicitly choose which version of OpenStackSDK to install, so if it's incompatible then that's on us, right? -- Jeremy Stanley
On Thu, Mar 07, 2019 at 13:47 Jeremy Stanley wrote:
On 2019-03-07 11:32:39 +0900 (+0900), Tristan Cacqueray wrote: [...]
IIUC, we would need to wait for a fix to land in the debian-stretch image,
We could just wait for the fix to land in Debian itself (they're going to be part of the coordinated disclosure of most of the sort of vulnerabilities you're talking about so should be available at basically the same time as it's announced by researchers) and then rebuild those images ourselves if that step is taking too long... or get involved with the community who maintains them and help out?
then wait for a new opendevorg/python-builder image and finally wait for a new zuul image to be published.
Luckily OpenDev and Zuul image building is managed in the same system and maintained by mostly the same people so I wouldn't anticipate much lag there. But also, this entire stack of images is buildable by downstream consumers who want to do that themselves instead of waiting for us.
IIUC, the Zuul image is updated when a change is merged in the Zuul repository, so it seems like we are missing a periodic rebuild of the Zuul image, especially for the tagged releases. I don't know how the opendevorg/python-builder image gets updated. My question was more about what concrete actions we need to perform to update one of the layers managed by opendev, because from an end-user perspective, what used to be a host update and service restart is now replaced by pulling a brand new debian+opendev+zuul image. Monty mentioned in the other thread that rebuilding the Zuul image should update the debian packages too.
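If a periodic rebuild turns out to be the answer, a sketch of what it could look like in Zuul project configuration - this assumes a periodic pipeline exists, and the job name here is purely hypothetical, standing in for whatever image build/publish jobs the zuul repo actually defines:

```yaml
- project:
    periodic:
      jobs:
        # Hypothetical job: rebuild the container images from the current tip
        # (or a release tag) and republish them, picking up base-layer updates
        # even when no Zuul change has merged.
        - zuul-rebuild-and-publish-images
```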
Or what happen when the zuul image gets published with an incompatible openstacksdk requirement? [...]
I don't see where the choice of underlying distribution comes into play here. We'd explicitly choose which version of OpenStackSDK to install, so if it's incompatible then that's on us, right?
That's correct, the underlying distribution won't prevent pip from pulling unwanted bits. But this is a new issue/behavior with these layered builds, because you can't prevent the Zuul layer from pulling new bits when all you want to do is update the base layer (at least with the current Dockerfile and requirements.txt file from the Zuul repository). These concerns are already addressed by Monty's suggestion to support local builds of the image. I just meant to discuss a couple of scenarios and how it's going to affect users of the Zuul image provided by opendev. Regards, -Tristan
On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote:
On 3/6/19 6:12 PM, Paul Belanger wrote:
On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote:
Hey everybody,
It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
Hello,
Thank you for starting this thread. It seems like a great opportunity to converge the different deployment and operation tools.
Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose: * log-processor-operator * elastic-search-operator * paste-operator, etherpad-operator, ... * gerrit-operator
I think these are fine to have, but I am on the fence about whether they are within the scope of what zuul should do. As an example, I can totally see opendev.org wanting to do this. As long as everything is all disconnected from zuul-operator, I would be okay. Personally, I don't want zuul to manage a gerrit.
We also started to investigate operators for Software Factory component and perhaps we could join force?
SO ...
I would like to propose that we create a zuul-operator project as part of Zuul.
Given our close ties with Ansible, I think we should use ansible-operator [0] to write it.
Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1]
Does anyone object? If not, I'll get a repo started.
I think it is a fine idea, I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd is that in play or not?
I think that's a great question.
I do believe it should be an opinionated deployment - but there are probably places where people will want to make some choices from a specific set of choices. Like - by default I think it should spin up a percona-xtradb cluster as provided by the percona operator. BUT - I could see a setting in the main Zuul CRD like "use-existing-db" that takes a sqlalchemy url. That way if there is someone who has an existing db infrastructure, they could take advantage of it - but the Zuul operator gets you a working system out of the box.
Obviously we'll need to spin up a zookeeper. Not sure if there is an operator for that yet - we might want to make one or work with someone to make one.
There are actually 2 operators for ZooKeeper already listed in this page: https://github.com/operator-framework/awesome-operators
I could see a graphite/statsd option with the ability to point to an existing graphite. Maybe we just start with CRD field that indicates the stats_host like we have now?
But - yes - opinionated, mostly batteries included - maybe some amount of optional things.
Should we discuss this with a spec? For example:
How are we going to configure the connections and the tenants' configuration?
Software Factory batteries include a zuul-reconfigure job to manage the main.yaml file. Should the zuul-operator setup such job, or perhaps should we also need a zuul-quick-start-operator?
Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers by zuul project is running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things maybe get installed (if needed).
Yes. All of our container images as we are publishing them now are awesome, and are what I expect us to use.
I am categorically opposed to us making and publishing different images. One of the nice things about publishing container images is that you can do them once and do them well. We're using the python base image. It happens to be based on debian-stretch - but that's irrelevant. It's running python.
That sounds great, but how are we going to cope with the next heartbleed or shellshock? IIUC, we would need to wait for a fix to land in the debian-stretch image, then wait for a new opendevorg/python-builder image and finally wait for a new zuul image to be published.
Or what happen when the zuul image gets published with an incompatible openstacksdk requirement?
Thus I think the base image and how the zuul bits gets bundled are relevant. For example OpenShift users may want to use BuildConfig instead so that a parent layer update automatically trigger childs' layer rebuild.
Yah, this is what I was hoping we'd discuss more when we dove into the weeds. Can a BuildConfig work with k8s? Or do some other k8s items work with openshift? If not, I think it is fine if we only support one and not try to do everything. Which one we support, I don't really know; hopefully others can say which one is better :) We can also support both, but I feel that is a lot of work for minimal gain.
Regards, -Tristan
I DO think we'll likely need to support specifying an image location in the CRD. (rook does this for ceph too) That's because I'd like for Tobias to be able to use the operator if he chooses, but he uses local builds of the images. And I think that's fine.
I am likely showing how much I don't know about k8s / openshift here, but wanting to limit the amount of different ways to do a deployment based on underlying distro bits.
+10000000000
I believe there is no value in producing divergent images.
On 3/7/19 2:13 PM, Paul Belanger wrote:
On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote:
On 3/6/19 6:12 PM, Paul Belanger wrote:
On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote:
Hey everybody,
It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
Hello,
Thank you for starting this thread. It seems like a great opportunity to converge the different deployment and operation tools.
Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose: * log-processor-operator * elastic-search-operator * paste-operator, etherpad-operator, ... * gerrit-operator
I think these are fine to have, but I am on the fence about whether they are within the scope of what zuul should do. As an example, I can totally see opendev.org wanting to do this. As long as everything is all disconnected from zuul-operator, I would be okay. Personally, I don't want zuul to manage a gerrit.
I agree. This is definitely an interesting thought - although I'm not sure we're all the way there from the opendev POV yet. As I learn more about operators I could certainly see an opportunity to write one for gitea once the indexing issue is fixed and we can go back to k8s for it. That said - I think opendev operators are out of scope for the zuul team - so we may want to chat about that in a slightly different context. (If the zuul operator goes well and we find it pleasing I could see that motivating doing more of them)
We also started to investigate operators for Software Factory component and perhaps we could join force?
We should absolutely always join forces. :)
SO ...
I would like to propose that we create a zuul-operator project as part of Zuul.
Given our close ties with Ansible, I think we should use ansible-operator [0] to write it.
Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1]
Does anyone object? If not, I'll get a repo started.
I think it is a fine idea, I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd is that in play or not?
I think that's a great question.
I do believe it should be an opinionated deployment - but there are probably places where people will want to make some choices from a specific set of choices. Like - by default I think it should spin up a percona-xtradb cluster as provided by the percona operator. BUT - I could see a setting in the main Zuul CRD like "use-existing-db" that takes a sqlalchemy url. That way if there is someone who has an existing db infrastructure, they could take advantage of it - but the Zuul operator gets you a working system out of the box.
Obviously we'll need to spin up a zookeeper. Not sure if there is an operator for that yet - we might want to make one or work with someone to make one.
There are actually 2 operators for ZooKeeper already listed in this page: https://github.com/operator-framework/awesome-operators
WOOT!
I could see a graphite/statsd option with the ability to point to an existing graphite. Maybe we just start with CRD field that indicates the stats_host like we have now?
But - yes - opinionated, mostly batteries included - maybe some amount of optional things.
Should we discuss this with a spec? For example:
Yes - probably so - because there are lots of design points (as you start to raise here). I'll start one today and we can iterate on it together.
How are we going to configure the connections and the tenants' configuration?
It's a great question. My first hunch is that both the connections config and the tenant config should just be parameters to the Zuul CRD, and then the operator could take that data, transform it into the right file formats and restart zuul or run zuul-reconfigure as needed.
It's also feasible we could do it as referenced ConfigMap objects. It's also conceivable that maybe we just want connections to be parameters to the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just have the operator concatenate them appropriately. That last option starts to worry me though - starting to feel a bit like the puppet modules that wanted to re-implement the config language of each thing (like apache) in the Puppet DSL - and how it turns out we wind up preferring the ansible "just jinja template the file itself" approach more.
But I honestly don't know which we'll like more. I think the EASIEST thing for a simple first start would be having each content just be parameters to the Zuul CRD - but we could probably play with the other things and see what we like. Or maybe there is a different idea.
Software Factory batteries include a zuul-reconfigure job to manage the main.yaml file. Should the zuul-operator set up such a job, or perhaps do we also need a zuul-quick-start-operator?
I don't think the zuul operator should set up any jobs, because zuul jobs come from git repos - and the zuul operator would have to learn about all of the different types of git backends for it to be able to push commits into one - and the operator would have to have git account access. That seems like a feature that a software factory operator (or an opendev operator) would be in a much better position to provide - since it's reasonable for those operators to know more things about the surrounding git infrastructure and have accounts for it.
A zuul-reconfigure job might be a nice addition to zuul-jobs though. And maybe there would need to be a zuul-operator-reconfigure if applying main.yaml changes involves pushing a new Zuul CRD manifest via kubectl to get the operator to update the file and run reconfigure. (Take this main.yaml file, slurp it into the manifest, run kubectl.)
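A minimal sketch of the referenced-ConfigMap variant mentioned above - the field and object names here are illustrative only, not a proposed schema:

```yaml
spec:
  tenantConfig:
    # The operator would render this into /etc/zuul/main.yaml and run a
    # (smart-)reconfigure whenever the referenced ConfigMap changes.
    configMapRef:
      name: zuul-tenant-config
      key: main.yaml
```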
Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers by zuul project is running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things maybe get installed (if needed).
Yes. All of our container images as we are publishing them now are awesome, and are what I expect us to use.
I am categorically opposed to us making and publishing different images. One of the nice things about publishing container images is that you can do them once and do them well. We're using the python base image. It happens to be based on debian-stretch - but that's irrelevant. It's running python.
That sounds great, but how are we going to cope with the next heartbleed or shellshock? IIUC, we would need to wait for a fix to land in the debian-stretch image, then wait for a new opendevorg/python-builder image and finally wait for a new zuul image to be published.
Jeremy also responded to this - so this is basically me saying "I agree with fungi" in more words. We have to do that no matter what the base image is. But we don't actually have to do that. The zuul Dockerfile already does the apt-get update / apt-get install. If needed we could toss in a dist-upgrade if the fix has hit the repo but not the base image so that we don't have to wait for the stack. Supporting multiple in-image distros doesn't improve that. In fact, it increases the work for upstream because now we have to do that exact process you described, but we have to do it for each distro we're building images with.
Or what happen when the zuul image gets published with an incompatible openstacksdk requirement?
Thus I think the base image and how the zuul bits gets bundled are relevant. For example OpenShift users may want to use BuildConfig instead so that a parent layer update automatically trigger childs' layer rebuild.
Sure - but the zuul images are built with standard Dockerfiles - so there should be nothing stopping people from rebuilding them locally however they want.
Yah, this is what I was hoping we'd discuss more when we dove into the weeds. Can a BuildConfig work with k8s? Or do some other k8s items work with openshift? If not, I think it is fine if we only support one and not try to do everything. Which one we support, I don't really know; hopefully others can say which one is better :) We can also support both, but I feel that is a lot of work for minimal gain.
I think it's a strange idea, when you have a Zuul, to want to use a different build system to build zuul images, since you would not be able to CI those images as effectively as if you use the zuul image building jobs. The only good reason for local rebuilds is if you're carrying local changes. If you are carrying local changes, one would imagine you would want to be able to submit code reviews to a git system to manage those changes, and you would want those code reviews to get gated, and you would want to publish the images that result in the gate passing. If you have all of that set up in zuul to test and publish images from your local changes, it stands to reason you'd want zuul to also just do an image rebuild on your local branch in the scenarios you mention, and using a BuildConfig would not really help you do that at all. **BUT** if that's what you want to do, I mean, nothing should be stopping you. It certainly doesn't mean we need to support multi-distro inside the image content we publish as an upstream. I'm pretty sure an OpenShift BuildConfig can build an image based on python:slim as well as anything else.
Regards, -Tristan
I DO think we'll likely need to support specifying an image location in the CRD. (rook does this for ceph too) That's because I'd like for Tobias to be able to use the operator if he chooses, but he uses local builds of the images. And I think that's fine.
I am likely showing how much I don't know about k8s / openshift here, but wanting to limit the amount of different ways to do a deployment based on underlying distro bits.
+10000000000
I believe there is no value in producing divergent images.
On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote:
On 3/7/19 2:13 PM, Paul Belanger wrote:
On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote:
On 3/6/19 6:12 PM, Paul Belanger wrote:
On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote:
Hey everybody,
It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
Hello,
Thank you for starting this thread. It seems like a great opportunity to converge the different deployment and operation tools.
Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose: * log-processor-operator * elastic-search-operator * paste-operator, etherpad-operator, ... * gerrit-operator
I think these are fine to have, but I am on the fence about whether they are within the scope of what zuul should do. As an example, I can totally see opendev.org wanting to do this. As long as everything is all disconnected from zuul-operator, I would be okay. Personally, I don't want zuul to manage a gerrit.
I agree.
This is definitely an interesting thought - although I'm not sure we're all the way there from the opendev POV yet. As I learn more about operators I could certainly see an opportunity to write one for gitea once the indexing issue is fixed and we can go back to k8s for it. That said - I think opendev operators are out of scope for the zuul team - so we may want to chat about that in a slightly different context. (If the zuul operator goes well and we find it pleasing I could see that motivating doing more of them)
We also started to investigate operators for Software Factory component and perhaps we could join force?
We should absolutely always join forces. :)
Back in the early days of Software Factory we tried to integrate the openstack-infra manifests, but it was such a tedious task to make them work on CentOS outside of the openstack.org domain that we ended up writing our own manifests.
Putting the other opendev components aside (e.g. log services, code hosting, ...), I think this is a great opportunity to converge the existing Zuul deployment system and come up with a common set of ops tasks, for example to back up and restart the services.
Looking at the awesome-operators page, there are 2 ZooKeeper operators and 5 for MySQL... It would be really nice if we are able to settle on a single zuul-operator implementation that satisfies everyone's needs.
SO ...
I would like to propose that we create a zuul-operator project as part of Zuul.
Given our close ties with Ansible, I think we should use ansible-operator [0] to write it.
Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1]
Does anyone object? If not, I'll get a repo started.
I think it is a fine idea, I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd is that in play or not?
I think that's a great question.
I do believe it should be an opinionated deployment - but there are probably places where people will want to make some choices from a specific set of choices. Like - by default I think it should spin up a percona-xtradb cluster as provided by the percona operator. BUT - I could see a setting in the main Zuul CRD like "use-existing-db" that takes a sqlalchemy url. That way if there is someone who has an existing db infrastructure, they could take advantage of it - but the Zuul operator gets you a working system out of the box.
Obviously we'll need to spin up a zookeeper. Not sure if there is an operator for that yet - we might want to make one or work with someone to make one.
There are actually 2 operators for ZooKeeper already listed in this page: https://github.com/operator-framework/awesome-operators
WOOT!
I could see a graphite/statsd option with the ability to point to an existing graphite. Maybe we just start with CRD field that indicates the stats_host like we have now?
But - yes - opinionated, mostly batteries included - maybe some amount of optional things.
Should we discuss this with a spec? For example:
Yes - probably so - because there are lots of design points (as you start to raise here). I'll start one today and we can iterate on it together.
How are we going to configure the connections and the tenants' configuration?
It's a great question. My first hunch is that both the connections config and the tenant config should just be parameters to the Zuul CRD, and then the operator could take that data, transform it into the right file formats and restart zuul or run zuul-reconfigure as needed.
It's also feasible we could do it as referenced ConfigMap objects. It's also conceivable that maybe we just want connections to be parameters to the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just have the operator concatenate them appropriately. That last option starts to worry me though - starting to feel a bit like the puppet modules that wanted to re-implement the config language of each thing (like apache) in the Puppet DSL - and how it turns out we wind up preferring the ansible "just jinja template the file itself" approach more.
But I honestly don't know which we'll like more. I think the EASIEST thing for a simple first start would be having each content just be parameters to the Zuul CRD - but we could probably play with the other things and see what we like. Or maybe there is a different idea.
Putting everything in the Zuul CRD sounds like the most straightforward approach. Moreover, the reconcile logic can take care of the different actions depending on what changed. The CRD I started to work on looked like this:
```
apiVersion: v1
kind: ZuulCluster
metadata:
  name: test-cluster
spec:
  merger:
    instances: 1
  executor:
    instances: 2
  zookeeperHosts:
    - hostname:port
  connections:
    - name: git.zuul-ci.org
      driver: git
      baseurl: https://git.zuul-ci.org
    - name: sqlreporter
      driver: sql
      dburi: mysql+pymysql://...
    - name: gerrit
      driver: gerrit
      username: ...
  tenants:
    - name: local
      source:
        gerrit:
          config-projects:
            - config
        git.zuul-ci.org:
          untrusted-projects:
            - zuul-jobs
```
Then I was planning on adding a similar NodepoolCluster and perhaps a ZuulApplication to glue both clusters and manage zookeeper, rdbms and statsd if missing.
Software Factory batteries include a zuul-reconfigure job to manage the main.yaml file. Should the zuul-operator setup such job, or perhaps should we also need a zuul-quick-start-operator? I don't think the zuul operator should set up any jobs, because zuul jobs come from git repos - and the zuul operator would have to learn about all of the different types of git backends for it to be able to push commits into one - and the operator would have to have git account access.
That seems like a feature that a software factory operator (or an opendev operator) would be in a much better position to provide - since it's reasonable for those operators to know more things about the surrounding git infrastructure and have accounts for it.
A zuul-reconfigure job might be a nice addition to zuul-jobs though. And maybe there would need to be a zuul-operator-reconfigure if applying main.yaml changes involves pushing a new Zuul CRD manifest via kubectl to get the operator to update the file and run reconfigure. (take this main.yaml file, slurp it into the manifest, run kubectl)
That seems out of scope indeed, though later down the road, it seems like a rather common pattern to have the tenant configuration in a git repository that would reconfigure zuul when a change is merged.
Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers by zuul project is running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things maybe get installed (if needed).
Yes. All of our container images as we are publishing them now are awesome, and are what I expect us to use.
I am categorically opposed to us making and publishing different images. One of the nice things about publishing container images is that you can do them once and do them well. We're using the python base image. It happens to be based on debian-stretch - but that's irrelevant. It's running python.
That sounds great, but how are we going to cope with the next heartbleed or shellshock? IIUC, we would need to wait for a fix to land in the debian-stretch image, then wait for a new opendevorg/python-builder image and finally wait for a new zuul image to be published.
Jeremy also responded to this - so this is basically me saying "I agree with fungi" in more words.
We have to do that no matter what the base image is. But we don't actually have to do that. The zuul Dockerfile already does the apt-get update / apt-get install. If needed we could toss in a dist-upgrade if the fix has hit the repo but not the base image so that we don't have to wait for the stack.
Supporting multiple in-image distros doesn't improve that. In fact, it increases the work for upstream because now we have to do that exact process you described, but we have to do it for each distro we're building images with.
Agreed, there is no point in supporting multiple distro images.
Or what happen when the zuul image gets published with an incompatible openstacksdk requirement?
Thus I think the base image and how the zuul bits gets bundled are relevant. For example OpenShift users may want to use BuildConfig instead so that a parent layer update automatically trigger childs' layer rebuild.
Sure - but the zuul images are built with standard Dockerfiles - so there should be nothing stopping people from rebuilding them locally however they want.
Yah, this is what I was hoping we'd discuss more when we dove into the weeds. Can a BuildConfig work with k8s? Or do some other k8s items work with openshift? If not, I think it is fine if we only support one and not try to do everything. Which one we support, I don't really know; hopefully others can say which one is better :) We can also support both, but I feel that is a lot of work for minimal gain. I think it's a strange idea, when you have a Zuul, to want to use a different build system to build zuul images, since you would not be able to CI those images as effectively as if you use the zuul image building jobs. The only good reason for local rebuilds is if you're carrying local changes.
Another good reason to manage the image is for compliance and reproducibility. Using another base layer may be easier to audit and update, and users may also want to build their image with curated registries for things like pip or npm to prevent supply chain attacks.
If you are carrying local changes, one would imagine you would want to be able to submit code reviews to a git system to manage those changes, and you would want those code reviews to get gated, and you would want to publish the images that result in the gate passing. If you have all of that set up in zuul to test and publish images from your local changes, it stands to reason you'd want zuul to also just do an image rebuild on your local branch in the scenarios you mention, and using a BuildConfig would not really help you do that at all. **BUT** if that's what you want to do, I mean, nothing should be stopping you.
My main concern is about the base layer update. BuildConfig can take care of rebuilding your final image when needed. You may use Zuul to promote the resulting image before deploying it, but the point is that base layer updates may happen out of your control, for example after a bug is discovered in openssl.
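For illustration, here is a rough sketch of the kind of BuildConfig being described - it assumes an ImageStream tracking the upstream python base image already exists, and all names are placeholders:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: zuul-local
spec:
  source:
    git:
      uri: https://opendev.org/zuul/zuul
  strategy:
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.7-slim        # placeholder ImageStream tracking the base image
  output:
    to:
      kind: ImageStreamTag
      name: zuul:latest
  triggers:
    # Rebuild the child image automatically when the base layer is updated.
    - type: ImageChange
      imageChange: {}
```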
It certainly doesn't mean we need to support multi-distro inside the image content we publish as an upstream. I'm pretty sure an OpenShift BuildConfig can build an image based on python:slim as well as anything else.
I didn't mean to derail the thread onto this image building topic, which is not actually relevant to the zuul-operator. Sorry about that :) I mean, as long as the zuul-operator can use an optional custom image. Regards, -Tristan
I DO think we'll likely need to support specifying an image location in the CRD. (rook does this for ceph too) That's because I'd like for Tobias to be able to use the operator if he chooses, but he uses local builds of the images. And I think that's fine.
I am likely showing how much I don't know about k8s / openshift here, but wanting to limit the amount of different ways to do a deployment based on underlying distro bits.
+10000000000
I believe there is no value in producing divergent images.
On Fri, Mar 08, 2019 at 03:36:50PM +0900, Tristan Cacqueray wrote:
On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote:
On 3/7/19 2:13 PM, Paul Belanger wrote:
On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote:
On 3/6/19 6:12 PM, Paul Belanger wrote:
On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: Hey everybody, It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
Hello,
Thank you for starting this thread. It seems like a great opportunity to converge the different deployment and operation tools.
Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose: * log-processor-operator * elastic-search-operator * paste-operator, etherpad-operator, ... * gerrit-operator
I think these are fine to have, but I am on the fence about whether they are within the scope of what zuul should do. As an example, I can totally see opendev.org wanting to do this. As long as everything is all disconnected from zuul-operator, I would be okay. Personally, I don't want zuul to manage a gerrit.
I agree.
This is definitely an interesting thought - although I'm not sure we're all the way there from the opendev POV yet. As I learn more about operators I could certainly see an opportunity to write one for gitea once the indexing issue is fixed and we can go back to k8s for it. That said - I think opendev operators are out of scope for the zuul team - so we may want to chat about that in a slightly different context. (If the zuul operator goes well and we find it pleasing I could see that motivating doing more of them)
We also started to investigate operators for Software Factory component and perhaps we could join force?
We should absolutely always join forces. :)
Back in the early days of Software Factory we tried to integrate the openstack-infra manifests, but it was such a tedious task to make them work on CentOS outside of the openstack.org domain that we ended up writing our own manifests.
Putting the other opendev components aside (e.g. log services, code hosting, ...), I think this is a great opportunity to converge the existing Zuul deployment systems and come up with a common set of ops tasks, for example to back up and restart the services.
My first (personal) reaction is that this is too much scope for a zuul-operator; as a zuul operator I don't really want to sign up for managing a gerrit / log service / code hosting / etc. This is what opendev.org is, and I think it is a great thing to provide, but I believe it should live outside of zuul-operator by default. If there is some other opendev-operator that lives atop zuul-operator, I think I am okay with this. The main reason I say this is: look at the history of puppet-openstackci. In my eyes, it was almost what you mentioned here. A shared layer to manage an openstack-infra like environment. Things like centos support could have been added into puppet-openstackci, in fact I think there are patches on gerrit for this, but it was very difficult to get them merged for reasons. We are always going to have different deployment opinions, which is fine to have; I'd just not like to have a single operator, with tons of feature flags, to toggle what people want. I think this is what other threads are suggesting - have opendev-operator (logstash, code review, etc.) depend on zuul-operator (nodepool / zuul / zookeeper) - but I wanted to write it down again.
Looking at the awesome-operators page, there are 2 ZooKeeper operators and 5 for MySQL... It would be really nice if we are able to settle on a single zuul-operator implementation that satisfies everyone's needs.
> SO ... > > I would like to propose that we create a zuul-operator project as part of > Zuul. > > Given our close ties with Ansible, I think we should use ansible-operator > [0] to write it. > > Since we have zuul in production on both vanilla k8s and on OpenShift, I > think we should set up gate jobs to make sure it works on both. Once we're > happy with it - we should publish it to operatorhub. [1] > > Does anyone object? It not, I'll get a repo started. > I think it is a fine idea, I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd - is that in play or not?
I think that's a great question.
I do believe it should be an opinionated deployment - but there are probably places where people will want to make some choices from a specific set of choices. Like - by default I think it should spin up a percona-xtradb cluster as provided by the percona operator. BUT - I could see a setting in the main Zuul CRD like "use-existing-db" that takes a sqlalchemy url. That way if there is someone who has an existing db infrastructure, they could take advantage of it - but the Zuul operator gets you a working system out of the box.
Obviously we'll need to spin up a zookeeper. Not sure if there is an operator for that yet - we might want to make one or work with someone to make one.
There are actually 2 operators for ZooKeeper already listed in this page: https://github.com/operator-framework/awesome-operators
WOOT!
I could see a graphite/statsd option with the ability to point to an existing graphite. Maybe we just start with a CRD field that indicates the stats_host like we have now?
But - yes - opinionated, mostly batteries included - maybe some amount of optional things.
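To make the "batteries included, but overridable" idea concrete, here is a hedged sketch of what those optional knobs might look like in the CR spec. The field names (useExistingDb, statsHost) are hypothetical placeholders, not a settled API:
```
# Hypothetical optional overrides in an otherwise opinionated Zuul CR.
apiVersion: v1
kind: ZuulCluster
metadata:
  name: test-cluster
spec:
  # If omitted, the operator would create a percona-xtradb cluster itself.
  useExistingDb: mysql+pymysql://zuul:secret@db.example.com/zuul
  # If omitted, no statsd reporting is configured.
  statsHost: graphite.example.com:8125
```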
Should we discuss this with a spec? For example:
Yes - probably so - because there's lots of design points (as you start to raise here). I'll start one today and we can iterate on it together.
How are we going to configure the connections and the tenants' configuration?
It's a great question. My first hunch is that both the connections config and the tenant config should just be parameters to the Zuul CRD, and then the operator could take that data, transform it into the right file formats and restart zuul or run zuul-reconfigure as needed.
It's also feasible we could do it as referenced ConfigMap objects. It's also conceivable that maybe we just want connections to be parameters to the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just have the operator concatenate them appropriately. That last option starts to worry me though - starting to feel a bit like the puppet modules that wanted to re-implement the config language of each thing (like apache) in the Puppet DSL - and how it turns out we wind up preferring the ansible "just jinja template the file itself" approach more.
But I honestly don't know which we'll like more. I think the EASIEST thing for a simple first start would be having each config just be parameters to the Zuul CRD - but we could probably play with the other things and see what we like. Or maybe there is a different idea.
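For comparison with the inline-parameters example further down, a referenced-ConfigMap variant might look roughly like this. This is only a sketch; the field names (connectionsFrom, tenantConfigFrom) and ConfigMap names are assumptions:
```
# Hypothetical CR pointing at existing ConfigMaps instead of embedding the config.
apiVersion: v1
kind: ZuulCluster
metadata:
  name: test-cluster
spec:
  connectionsFrom:
    configMapRef:
      name: zuul-connections     # contains the zuul.conf connection sections
  tenantConfigFrom:
    configMapRef:
      name: zuul-tenant-config   # contains main.yaml
```
The operator would then watch those ConfigMaps and reconfigure or restart zuul when they change.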
Everything in the Zuul CRD sounds like the most straightforward approach. Moreover, the reconcile logic can take care of the different actions depending on what changed.
The CRD I started to work on looked like this:
```
apiVersion: v1
kind: ZuulCluster
metadata:
  name: test-cluster
spec:
  merger:
    instances: 1
  executor:
    instances: 2
  zookeeperHosts:
    - hostname:port
  connections:
    - name: git.zuul-ci.org
      driver: git
      baseurl: https://git.zuul-ci.org
    - name: sqlreporter
      driver: sql
      dburi: mysql+pymysql://...
    - name: gerrit
      driver: gerrit
      username: ...
  tenants:
    - name: local
      source:
        gerrit:
          config-projects:
            - config
        git.zuul-ci.org:
          untrusted-projects:
            - zuul-jobs
```
Then I was planning on adding a similar NodepoolCluster and perhaps a ZuulApplication to glue both clusters and manage zookeeper, rdbms and statsd if missing.
Software Factory batteries include a zuul-reconfigure job to manage the main.yaml file. Should the zuul-operator set up such a job, or perhaps do we also need a zuul-quick-start-operator?
I don't think the zuul operator should set up any jobs, because zuul jobs come from git repos - and the zuul operator would have to learn about all of the different types of git backends for it to be able to push commits into one - and the operator would have to have git account access.
That seems like a feature that a software factory operator (or an opendev operator) would be in a much better position to provide - since it's reasonable for those operators to know more things about the surrounding git infrastructure and have accounts for it.
A zuul-reconfigure job might be a nice addition to zuul-jobs though. And maybe there would need to be a zuul-operator-reconfigure if applying main.yaml changes involves pushing a new Zuul CRD manifest via kubectl to get the operator to update the file and run reconfigure. (take this main.yaml file, slurp it into the manifest, run kubectl)
That seems out of scope indeed, though later down the road, it seems like a rather common pattern to have the tenant configuration in a git repository that would reconfigure zuul when a change is merged.
Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers from the zuul project are running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things may get installed (if needed).
Yes. All of our container images as we are publishing them now are awesome, and are what I expect us to use.
I am categorically opposed to us making and publishing different images. One of the nice things about publishing container images is that you can do them once and do them well. We're using the python base image. It happens to be based on debian-stretch - but that's irrelevant. It's running python.
That sounds great, but how are we going to cope with the next heartbleed or shellshock? IIUC, we would need to wait for a fix to land in the debian-stretch image, then wait for a new opendevorg/python-builder image and finally wait for a new zuul image to be published.
Jeremy also responded to this - so this is basically me saying "I agree with fungi" in more words.
We have to do that no matter what the base image is. But we don't actually have to do that. The zuul Dockerfile already does the apt-get update / apt-get install. If needed we could toss in a dist-upgrade if the fix has hit the repo but not the base image so that we don't have to wait for the stack.
Supporting multiple in-image distros doesn't improve that. In fact, it increases the work for upstream because now we have to do that exact process you described, but we have to do it for each distro we're building images with.
Agreed, there is no point in supporting multiple distro images.
Or what happens when the zuul image gets published with an incompatible openstacksdk requirement?
Thus I think the base image and how the zuul bits get bundled are relevant. For example, OpenShift users may want to use BuildConfig instead, so that a parent layer update automatically triggers a rebuild of the child layers.
Sure - but the zuul images are built with standard Dockerfiles - so there should be nothing stopping people from rebuilding them locally however they want.
Yah, this is what I was hoping we'd discuss more when we dove into the weeds. Can a BuildConfig work with k8s? Or do some other k8s items work with openshift? If not, I think it is fine if we only support one and not try to do everything. Which one we support, I don't really know, hopefully others can say which one is better :) We can also support both, but I feel that is a lot of work for minimal gain.
I think it's a strange idea, when you have a Zuul, to want to use a different build system to build zuul images, since you would not be able to CI those images as effectively as if you use the zuul image building jobs. The only good reason for local rebuilds is if you're carrying local changes.
Another good reason to manage the image is for compliance and reproducibility. Using another base layer may be easier to audit and update, and users may also want to build their image with curated registries for things like pip or npm to prevent supply chain attacks.
If you are carrying local changes, one would imagine you would want to be able to submit code reviews to a git system to manage those changes, and you would want those code reviews to get gated, and you would want to publish the images that result in the gate passing. If you have all of that set up in zuul to test and publish images from your local changes, it stands to reason you'd want zuul to also just do an image rebuild on your local branch in the scenarios you mention, and using a BuildConfig would not really help you do that at all. **BUT** if that's what you want to do, I mean, nothing should be stopping you.
My main concern is about the base layer update. BuildConfig can take care of rebuilding your final image when needed. You may use Zuul to promote the resulting image before deploying it, but the point is, base layer updates may happen out of your control, e.g. after a bug is discovered in openssl.
It certainly doesn't mean we need to support multi-distro inside the image content we publish as an upstream. I'm pretty sure an OpenShift BuildConfig can build an image based on python:slim as well as anything else.
I didn't mean to derail the thread into this image-building topic, which is not actually relevant to the zuul-operator. Sorry about that :) I mean, as long as the zuul-operator can use an optional custom image.
Yesterday I had a chance to boot the zuul/zuul-merger image in openshift, and not surprisingly it seems to work a little differently than, say, local docker. Openshift wasn't happy that the user inside the container was running as root; it seems it really wants a non-root user inside. So, I do think we might want to make all our images work as well as possible in each thing (docker, podman, k8s, openshift, etc) but honestly, I'm not really sure what the work looks like. In this case, I think it is just a matter of creating a zuul user inside the container and starting zuul / nodepool with it.
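As a hedged illustration of the kubernetes-side half of this, a pod spec can also declare that the container must run unprivileged once the image ships a non-root user; the uid 10001 below is an arbitrary placeholder, not something the zuul images define today:
```
# Hypothetical snippet from a zuul-merger Deployment pod template.
spec:
  securityContext:
    runAsNonRoot: true       # refuse to start if the image would run as root
    runAsUser: 10001         # placeholder uid for an assumed 'zuul' user in the image
  containers:
    - name: zuul-merger
      image: zuul/zuul-merger:latest
      command: ["zuul-merger", "-d"]
```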
Regards, -Tristan
I DO think we'll likely need to support specifying an image location in the CRD. (rook does this for ceph too) That's because I'd like for Tobias to be able to use the operator if he chooses, but he uses local builds of the images. And I think that's fine.
I am likely showing how much I don't know about k8s / openshift here, but I want to limit the number of different ways to do a deployment based on underlying distro bits.
+10000000000
I believe there is no value in producing divergent images.
_______________________________________________ Zuul-discuss mailing list Zuul-discuss@lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss
On Fri, Mar 08, 2019 at 08:54:41AM -0500, Paul Belanger wrote:
On Fri, Mar 08, 2019 at 03:36:50PM +0900, Tristan Cacqueray wrote:
On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote:
On 3/7/19 2:13 PM, Paul Belanger wrote:
On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote:
On 3/6/19 6:12 PM, Paul Belanger wrote: > On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: >> Hey everybody, >> >> It seems that kubernetes operators are all the rage these days, and in the >> last 2 weeks I've had multiple conversations that have all come back around >> to someone wanting a zuul operator to exist. >>
Hello,
Thank you for starting this thread. It seems like a great opportunity to converge the different deployment and operation tools.
Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose:
* log-processor-operator
* elastic-search-operator
* paste-operator, etherpad-operator, ...
* gerrit-operator
I think these are fine to have, but I am on the fence about whether they are within the scope of what zuul should do. As an example, I can totally see opendev.org wanting to do this. As long as everything is all disconnected from zuul-operator, I would be okay. Personally, I don't want zuul to manage a gerrit.
I agree.
This is definitely an interesting thought - although I'm not sure we're all the way there from the opendev POV yet. As I learn more about operators I could certainly see an opportunity to write one for gitea once the indexing issue is fixed and we can go back to k8s for it. That said - I think opendev operators are out of scope for the zuul team - so we may want to chat about that in a slightly different context. (If the zuul operator goes well and we find it pleasing I could see that motivating doing more of them)
We also started to investigate operators for the Software Factory components - perhaps we could join forces?
We should absolutely always join forces. :)
Back in the early days of Software Factory we tried to integrate the openstack-infra manifests, but it was such a tedious task to make them work on CentOS outside of the openstack.org domain that we ended up writing our own manifests.
Putting the other opendev components aside (e.g. log services, code hosting, ...), I think this is a great opportunity to converge the existing Zuul deployment systems and come up with a common set of ops tasks, for example to back up and restart the services.
Now that I've had enough coffee and reread your original reply: I thought you were suggesting log services, code hosting, etc. alongside (you said aside) :) So my reply basically rehashed my original opinion for no real reason.
Sorry for the noise here.
My first (personal) reaction is that this is too much scope for a zuul-operator; as a zuul operator I don't really want to sign up for managing a gerrit / log service / code hosting / etc. This is what opendev.org is, and I think it is a great thing to provide, but I believe it should live outside of zuul-operator by default. If there is some other opendev-operator that lives atop zuul-operator, I think I am okay with this.
The main reason I say this, is look at the history of puppet-openstackci. In my eyes, it was almost what you mentioned here. A shared layer to manage an openstack-infra like environment.
Things like centos support could have been added into puppet-openstackci, in fact I think there are patches on gerrit for this, but it was very difficult to get them merged for reasons. We are always going to have different deployment opinions, which is fine to have; I'd just not like to have a single operator, with tons of feature flags, to toggle what people want.
I think this is what other threads are suggesting - have opendev-operator (logstash, code review, etc.) depend on zuul-operator (nodepool / zuul / zookeeper) - but I wanted to write it down again.
Looking at the awesome-operators page, there are 2 ZooKeeper operators and 5 for MySQL... It would be really nice if we are able to settle on a single zuul-operator implementation that satisfies everyone's needs.
>> SO ... >> >> I would like to propose that we create a zuul-operator project as part of >> Zuul. >> >> Given our close ties with Ansible, I think we should use ansible-operator >> [0] to write it. >> >> Since we have zuul in production on both vanilla k8s and on OpenShift, I >> think we should set up gate jobs to make sure it works on both. Once we're >> happy with it - we should publish it to operatorhub. [1] >> >> Does anyone object? It not, I'll get a repo started. >> > I think it is a fine idea, I've also seen some talk about it recently > too. > > I do think we should have some consensus on the bits we want to support > for the deployment. Maybe going as far as saying it is an opinionated > deployment, for better or worse. As an example, graphite / statsd is > that in play or not?
I think that's a great question.
I do believe it should be an opinionated deployment - but there are probably places where people will want to make some choices from a specific set of choices. Like - by default I think it should spin up a percona-xtradb cluster as provided by the percona operator. BUT - I could see a setting in the main Zuul CRD like "use-existing-db" that takes a sqlalchemy url. That way if there is someone who has an existing db infrastructure, they could take advantage of it - but the Zuul operator gets you a working system out of the box.
Obviously we'll need to spin up a zookeeper. Not sure if there is an operator for that yet - we might want to make one or work with someone to make one.
There are actually 2 operators for ZooKeeper already listed in this page: https://github.com/operator-framework/awesome-operators
WOOT!
I could see a graphite/statsd option with the ability to point to an existing graphite. Maybe we just start with a CRD field that indicates the stats_host like we have now?
But - yes - opinionated, mostly batteries included - maybe some amount of optional things.
Should we discuss this with a spec? For example:
Yes - probably so - because there's lots of design points (as you start to raise here). I'll start one today and we can iterate on it together.
How are we going to configure the connections and the tenants' configuration?
It's a great question. My first hunch is that both the connections config and the tenant config should just be parameters to the Zuul CRD, and then the operator could take that data, transform it into the right file formats and restart zuul or run zuul-reconfigure as needed.
It's also feasible we could do it as referenced ConfigMap objects. It's also conceivable that maybe we just want connections to be parameters to the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just have the operator concatenate them appropriately. That last option starts to worry me though - starting to feel a bit like the puppet modules that wanted to re-implement the config language of each thing (like apache) in the Puppet DSL - and how it turns out we wind up preferring the ansible "just jinja template the file itself" approach more.
But I honestly don't know which we'll like more. I think the EASIEST thing for a simple first start would be having each config just be parameters to the Zuul CRD - but we could probably play with the other things and see what we like. Or maybe there is a different idea.
Everything in the Zuul CRD sounds like the most straightforward approach. Moreover, the reconcile logic can take care of the different actions depending on what changed.
The CRD I started to work on looked like this:
```
apiVersion: v1
kind: ZuulCluster
metadata:
  name: test-cluster
spec:
  merger:
    instances: 1
  executor:
    instances: 2
  zookeeperHosts:
    - hostname:port
  connections:
    - name: git.zuul-ci.org
      driver: git
      baseurl: https://git.zuul-ci.org
    - name: sqlreporter
      driver: sql
      dburi: mysql+pymysql://...
    - name: gerrit
      driver: gerrit
      username: ...
  tenants:
    - name: local
      source:
        gerrit:
          config-projects:
            - config
        git.zuul-ci.org:
          untrusted-projects:
            - zuul-jobs
```
Then I was planning on adding a similar NodepoolCluster and perhaps a ZuulApplication to glue both clusters and manage zookeeper, rdbms and statsd if missing.
Software Factory batteries include a zuul-reconfigure job to manage the main.yaml file. Should the zuul-operator set up such a job, or perhaps do we also need a zuul-quick-start-operator?
I don't think the zuul operator should set up any jobs, because zuul jobs come from git repos - and the zuul operator would have to learn about all of the different types of git backends for it to be able to push commits into one - and the operator would have to have git account access.
That seems like a feature that a software factory operator (or an opendev operator) would be in a much better position to provide - since it's reasonable for those operators to know more things about the surrounding git infrastructure and have accounts for it.
A zuul-reconfigure job might be a nice addition to zuul-jobs though. And maybe there would need to be a zuul-operator-reconfigure if applying main.yaml changes involves pushing a new Zuul CRD manifest via kubectl to get the operator to update the file and run reconfigure. (take this main.yaml file, slurp it into the manifest, run kubectl)
That seems out of scope indeed, though later down the road, it seems like a rather common pattern to have the tenant configuration in a git repository that would reconfigure zuul when a change is merged.
> Also, could we maybe say a single distro for all container things? I > guess by default, that is what the current containers by zuul project is > running today. I don't much have the energy to get into the weeds of pip > / dpkg / rpm for how things maybe get installed (if needed).
Yes. All of our container images as we are publishing them now are awesome, and are what I expect us to use.
I am categorically opposed to us making and publishing different images. One of the nice things about publishing container images is that you can do them once and do them well. We're using the python base image. It happens to be based on debian-stretch - but that's irrelevant. It's running python.
That sounds great, but how are we going to cope with the next heartbleed or shellshock? IIUC, we would need to wait for a fix to land in the debian-stretch image, then wait for a new opendevorg/python-builder image and finally wait for a new zuul image to be published.
Jeremy also responded to this - so this is basically me saying "I agree with fungi" in more words.
We have to do that no matter what the base image is. But we don't actually have to do that. The zuul Dockerfile already does the apt-get update / apt-get install. If needed we could toss in a dist-upgrade if the fix has hit the repo but not the base image so that we don't have to wait for the stack.
Supporting multiple in-image distros doesn't improve that. In fact, it increases the work for upstream because now we have to do that exact process you described, but we have to do it for each distro we're building images with.
Agreed, there is no point in supporting multiple distro images.
Or what happens when the zuul image gets published with an incompatible openstacksdk requirement?
Thus I think the base image and how the zuul bits get bundled are relevant. For example, OpenShift users may want to use BuildConfig instead, so that a parent layer update automatically triggers a rebuild of the child layers.
Sure - but the zuul images are built with standard Dockerfiles - so there should be nothing stopping people from rebuilding them locally however they want.
Yah, this is what I was hoping we'd discuss more when we dove into the weeds. Can a BuildConfig work with k8s? Or do some other k8s items work with openshift? If not, I think it is fine if we only support one and not try to do everything. Which one we support, I don't really know, hopefully others can say which one is better :) We can also support both, but I feel that is a lot of work for minimal gain.
I think it's a strange idea, when you have a Zuul, to want to use a different build system to build zuul images, since you would not be able to CI those images as effectively as if you use the zuul image building jobs. The only good reason for local rebuilds is if you're carrying local changes.
Another good reason to manage the image is for compliance and reproducibility. Using another base layer may be easier to audit and update, and users may also want to build their image with curated registries for things like pip or npm to prevent supply chain attacks.
If you are carrying local changes, one would imagine you would want to be able to submit code reviews to a git system to manage those changes, and you would want those code reviews to get gated, and you would want to publish the images that result in the gate passing. If you have all of that set up in zuul to test and publish images from your local changes, it stands to reason you'd want zuul to also just do an image rebuild on your local branch in the scenarios you mention, and using a BuildConfig would not really help you do that at all. **BUT** if that's what you want to do, I mean, nothing should be stopping you.
My main concern is about the base layer update. BuildConfig can take care of rebuilding your final image when needed. You may use Zuul to promote the resulting image before deploying it, but the point is, base layer updates may happen out of your control, e.g. after a bug is discovered in openssl.
It certainly doesn't mean we need to support multi-distro inside the image content we publish as an upstream. I'm pretty sure an OpenShift BuildConfig can build an image based on python:slim as well as anything else.
I didn't mean to derail the thread into this image-building topic, which is not actually relevant to the zuul-operator. Sorry about that :) I mean, as long as the zuul-operator can use an optional custom image.
Yesterday I had a chance to boot the zuul/zuul-merger image in openshift, and not surprisingly it seems to work a little differently than, say, local docker. Openshift wasn't happy that the user inside the container was running as root; it seems it really wants a non-root user inside. So, I do think we might want to make all our images work as well as possible in each thing (docker, podman, k8s, openshift, etc) but honestly, I'm not really sure what the work looks like. In this case, I think it is just a matter of creating a zuul user inside the container and starting zuul / nodepool with it.
Regards, -Tristan
I DO think we'll likely need to support specifying an image location in the CRD. (rook does this for ceph too) That's because I'd like for Tobias to be able to use the operator if he chooses, but he uses local builds of the images. And I think that's fine.
> I am likely showing how much I don't know about k8s / openshift here, > but wanting to limit the amount of different ways to do a deployment > based on underlying distro bits.
+10000000000
I believe there is no value in producing divergent images.
_______________________________________________ Zuul-discuss mailing list Zuul-discuss@lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss
On 3/8/19 6:36 AM, Tristan Cacqueray wrote:
On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote:
On 3/7/19 2:13 PM, Paul Belanger wrote:
On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote:
On 3/6/19 6:12 PM, Paul Belanger wrote:
On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: > Hey everybody, > > It seems that kubernetes operators are all the rage these days, and in the > last 2 weeks I've had multiple conversations that have all come back around > to someone wanting a zuul operator to exist. >
Hello,
Thank you for starting this thread. It seems like a great opportunity to converge the different deployment and operation tools.
Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose:
* log-processor-operator
* elastic-search-operator
* paste-operator, etherpad-operator, ...
* gerrit-operator
I think these are fine to have, but I am on the fence about whether they are within the scope of what zuul should do. As an example, I can totally see opendev.org wanting to do this. As long as everything is all disconnected from zuul-operator, I would be okay. Personally, I don't want zuul to manage a gerrit.
I agree.
This is definitely an interesting thought - although I'm not sure we're all the way there from the opendev POV yet. As I learn more about operators I could certainly see an opportunity to write one for gitea once the indexing issue is fixed and we can go back to k8s for it. That said - I think opendev operators are out of scope for the zuul team - so we may want to chat about that in a slightly different context. (If the zuul operator goes well and we find it pleasing I could see that motivating doing more of them)
We also started to investigate operators for the Software Factory components - perhaps we could join forces?
We should absolutely always join forces. :)
Back in the early days of Software Factory we tried to integrate the openstack-infra manifests, but it was such a tedious task to make them work on CentOS outside of the openstack.org domain that we ended up writing our own manifests.
Putting the other opendev components aside (e.g. log services, code hosting, ...), I think this is a great opportunity to converge the existing Zuul deployment systems and come up with a common set of ops tasks, for example to back up and restart the services.
Looking at the awesome-operators page, there are 2 ZooKeeper operators and 5 for MySQL... It would be really nice if we are able to settle on a single zuul-operator implementation that satisfies everyone's needs.
Yes. I think the world would be a sad place if there were 2 zuul operators.
I definitely also want to list it at operatorhub: https://www.operatorhub.io because apparently the Operator Lifecycle Manager will know how to pull operators from operatorhub based on dependency metadata declarations.
Incidentally, this: https://www.operatorhub.io/operator/percona-xtradb-cluster-operator-communit... is there for database. None of the awesome-operators zookeeper ones are on operatorhub - so we may need to dig in to that one a bit more.
> SO ... > > I would like to propose that we create a zuul-operator project as part of > Zuul. > > Given our close ties with Ansible, I think we should use ansible-operator > [0] to write it. > > Since we have zuul in production on both vanilla k8s and on OpenShift, I > think we should set up gate jobs to make sure it works on both. Once we're > happy with it - we should publish it to operatorhub. [1] > > Does anyone object? It not, I'll get a repo started. > I think it is a fine idea, I've also seen some talk about it recently too.
I do think we should have some consensus on the bits we want to support for the deployment. Maybe going as far as saying it is an opinionated deployment, for better or worse. As an example, graphite / statsd - is that in play or not?
I think that's a great question.
I do believe it should be an opinionated deployment - but there are probably places where people will want to make some choices from a specific set of choices. Like - by default I think it should spin up a percona-xtradb cluster as provided by the percona operator. BUT - I could see a setting in the main Zuul CRD like "use-existing-db" that takes a sqlalchemy url. That way if there is someone who has an existing db infrastructure, they could take advantage of it - but the Zuul operator gets you a working system out of the box.
Obviously we'll need to spin up a zookeeper. Not sure if there is an operator for that yet - we might want to make one or work with someone to make one.
There are actually 2 operators for ZooKeeper already listed in this page: https://github.com/operator-framework/awesome-operators
WOOT!
I could see a graphite/statsd option with the ability to point to an existing graphite. Maybe we just start with a CRD field that indicates the stats_host like we have now?
But - yes - opinionated, mostly batteries included - maybe some amount of optional things.
Should we discuss this with a spec? For example:
Yes - probably so - because there's lots of design points (as you start to raise here). I'll start one today and we can iterate on it together.
How are we going to configure the connections and the tenants' configuration?
It's a great question. My first hunch is that both the connections config and the tenant config should just be parameters to the Zuul CRD, and then the operator could take that data, transform it into the right file formats and restart zuul or run zuul-reconfigure as needed.
It's also feasible we could do it as referenced ConfigMap objects. It's also conceivable that maybe we just want connections to be parameters to the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just have the operator concatenate them appropriately. That last option starts to worry me though - starting to feel a bit like the puppet modules that wanted to re-implement the config language of each thing (like apache) in the Puppet DSL - and how it turns out we wind up preferring the ansible "just jinja template the file itself" approach more.
But I honestly don't know which we'll like more. I think the EASIEST thing for a simple first start would be having each config just be parameters to the Zuul CRD - but we could probably play with the other things and see what we like. Or maybe there is a different idea.
Everything in the Zuul CRD sounds like the most straightforward approach. Moreover, the reconcile logic can take care of the different actions depending on what changed.
The CRD I started to work on looked like this:
```
apiVersion: v1
kind: ZuulCluster
metadata:
  name: test-cluster
spec:
  merger:
    instances: 1
  executor:
    instances: 2
  zookeeperHosts:
    - hostname:port
  connections:
    - name: git.zuul-ci.org
      driver: git
      baseurl: https://git.zuul-ci.org
    - name: sqlreporter
      driver: sql
      dburi: mysql+pymysql://...
    - name: gerrit
      driver: gerrit
      username: ...
  tenants:
    - name: local
      source:
        gerrit:
          config-projects:
            - config
        git.zuul-ci.org:
          untrusted-projects:
            - zuul-jobs
```
Then I was planning on adding a similar NodepoolCluster and perhaps a ZuulApplication to glue both clusters and manage zookeeper, rdbms and statsd if missing.
Yes - I think that's likely a great approach to start with. (Although I think we can just call ZuulApplication "Zuul" - but I like typing shorter things.)
Although I could also see just having a Zuul and a Nodepool and having a db field that could optionally be overridden but otherwise would cause a percona operator to get created and a sqlreporter entry added to the connections list. Similarly the Zuul operator will need to depend on a Zookeeper resource provided by an operator - but allow it to be overridden. Having that there though - I don't think the user of the Zuul operator should have to put in an entry to the Zuul CR telling it to put the zookeeper info into the config file - that should be zuul operator details. *waves hands*
Basically - when there's an opportunity for the operator to take care of connecting some info here we should take it.
The nodepool case is a bit more interesting to think about - because a Zuul requires a Nodepool - but the Nodepool config is a bit more complex and maybe would start to feel very heavyweight if we just put it all into the CRD? I'm sure we'll learn a lot as we poke ...
Software Factory batteries include a zuul-reconfigure job to manage the main.yaml file. Should the zuul-operator set up such a job, or perhaps do we also need a zuul-quick-start-operator?
I don't think the zuul operator should set up any jobs, because zuul jobs come from git repos - and the zuul operator would have to learn about all of the different types of git backends for it to be able to push commits into one - and the operator would have to have git account access.
That seems like a feature that a software factory operator (or an opendev operator) would be in a much better position to provide - since it's reasonable for those operators to know more things about the surrounding git infrastructure and have accounts for it.
A zuul-reconfigure job might be a nice addition to zuul-jobs though. And maybe there would need to be a zuul-operator-reconfigure if applying main.yaml changes involves pushing a new Zuul CRD manifest via kubectl to get the operator to update the file and run reconfigure. (take this main.yaml file, slurp it into the manifest, run kubectl)
That seems out of scope indeed, though later down the road, it seems like a rather common pattern to have the tenant configuration in a git repository that would reconfigure zuul when a change is merged.
Agree
Also, could we maybe say a single distro for all container things? I guess by default, that is what the current containers from the zuul project are running today. I don't much have the energy to get into the weeds of pip / dpkg / rpm for how things may get installed (if needed).
Yes. All of our container images as we are publishing them now are awesome, and are what I expect us to use.
I am categorically opposed to us making and publishing different images. One of the nice things about publishing container images is that you can do them once and do them well. We're using the python base image. It happens to be based on debian-stretch - but that's irrelevant. It's running python.
That sounds great, but how are we going to cope with the next heartbleed or shellshock? IIUC, we would need to wait for a fix to land in the debian-stretch image, then wait for a new opendevorg/python-builder image and finally wait for a new zuul image to be published.
Jeremy also responded to this - so this is basically me saying "I agree with fungi" in more words.
We have to do that no matter what the base image is. But we don't actually have to do that. The zuul Dockerfile already does the apt-get update / apt-get install. If needed we could toss in a dist-upgrade if the fix has hit the repo but not the base image so that we don't have to wait for the stack.
Supporting multiple in-image distros doesn't improve that. In fact, it increases the work for upstream because now we have to do that exact process you described, but we have to do it for each distro we're building images with.
Agreed, there is no point in supporting multiple distro images.
Or what happens when the zuul image gets published with an incompatible openstacksdk requirement?
Thus I think the base image and how the zuul bits get bundled are relevant. For example, OpenShift users may want to use BuildConfig instead, so that a parent layer update automatically triggers a rebuild of the child layers.
Sure - but the zuul images are built with standard Dockerfiles - so there should be nothing stopping people from rebuilding them locally however they want.
Yah, this is what I was hoping we'd discuss more when we dove into the weeds. Can a BuildConfig work with k8s? Or do some other k8s items work with openshift? If not, I think it is fine if we only support one and not try to do everything. Which one we support, I don't really know, hopefully others can say which one is better :) We can also support both, but I feel that is a lot of work for minimal gain.
I think it's a strange idea, when you have a Zuul, to want to use a different build system to build zuul images, since you would not be able to CI those images as effectively as if you use the zuul image building jobs. The only good reason for local rebuilds is if you're carrying local changes.
Another good reason to manage the image is for compliance and reproducibility. Using another base layer may be easier to audit and update, and users may also want to build their image with curated registries for things like pip or npm to prevent supply chain attacks.
If you are carrying local changes, one would imagine you would want to be able to submit code reviews to a git system to manage those changes, and you would want those code reviews to get gated, and you would want to publish the images that result in the gate passing. If you have all of that set up in zuul to test and publish images from your local changes, it stands to reason you'd want zuul to also just do an image rebuild on your local branch in the scenarios you mention, and using a BuildConfig would not really help you do that at all. **BUT** if that's what you want to do, I mean, nothing should be stopping you.
My main concern is about the base layer update. BuildConfig can take care of rebuilding your final image when needed. You may use Zuul to promote the resulting image before deploying it, but the point is, base layer updates may happen out of your control, e.g. after a bug is discovered in openssl.
Sure - I just think that it's a better route to have something trigger Zuul to do a rebuild and potential promotion when a base layer changes. (A simple trigger should be sufficient) BuildConfig is a whole other system with a whole other worldview and I would be more worried that Zuul and BuildConfig would fight each other. (and I don't think BuildConfig adds anything, other than potentially having a trigger defined watching for changes to base images)
It certainly doesn't mean we need to support multi-distro inside the image content we publish as an upstream. I'm pretty sure an OpenShift BuildConfig can build an image based on python:slim as well as anything else.
I didn't mean to derail the thread into this image-building topic, which is not actually relevant to the zuul-operator. Sorry about that :) I mean, as long as the zuul-operator can use an optional custom image.
++ - I think it's essential that it can use an optional custom image.
Regards, -Tristan
I DO think we'll likely need to support specifying an image location in the CRD. (rook does this for ceph too) That's because I'd like for Tobias to be able to use the operator if he chooses, but he uses local builds of the images. And I think that's fine.
I am likely showing how much I don't know about k8s / openshift here, but I want to limit the number of different ways to do a deployment based on underlying distro bits.
+10000000000
I believe there is no value in producing divergent images.
_______________________________________________ Zuul-discuss mailing list Zuul-discuss@lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss
On Fri, Mar 08, 2019 at 18:23 Monty Taylor wrote:
On 3/8/19 6:36 AM, Tristan Cacqueray wrote:
On Thu, Mar 07, 2019 at 15:19 Monty Taylor wrote:
On 3/7/19 2:13 PM, Paul Belanger wrote:
On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
On Wed, Mar 06, 2019 at 19:06 Monty Taylor wrote:
On 3/6/19 6:12 PM, Paul Belanger wrote: > On Wed, Mar 06, 2019 at 05:31:11PM +0000, Monty Taylor wrote: >> Hey everybody, >> >> It seems that kubernetes operators are all the rage these days, and in the >> last 2 weeks I've had multiple conversations that have all come back around >> to someone wanting a zuul operator to exist. >>
Hello,
Thank you for starting this thread. It seems like a great opportunity to converge the different deployment and operation tools.
Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose:
* log-processor-operator
* elastic-search-operator
* paste-operator, etherpad-operator, ...
* gerrit-operator
I think these are fine to have, but I am on the fence about whether they are within the scope of what zuul should do. As an example, I can totally see opendev.org wanting to do this. As long as everything is all disconnected from zuul-operator, I would be okay. Personally, I don't want zuul to manage a gerrit.
I agree.
This is definitely an interesting thought - although I'm not sure we're all the way there from the opendev POV yet. As I learn more about operators I could certainly see an opportunity to write one for gitea once the indexing issue is fixed and we can go back to k8s for it. That said - I think opendev operators are out of scope for the zuul team - so we may want to chat about that in a slightly different context. (If the zuul operator goes well and we find it pleasing I could see that motivating doing more of them)
We also started to investigate operators for the Software Factory components - perhaps we could join forces?
We should absolutely always join forces. :)
Back in the early days of Software Factory we tried to integrate the openstack-infra manifests, but it was such a tedious task to make them work on CentOS outside of the openstack.org domain that we ended up writing our own manifests.
Putting the other opendev components aside (e.g. log services, code hosting, ...), I think this is a great opportunity to converge the existing Zuul deployment systems and come up with a common set of ops tasks, for example to back up and restart the services.
Looking at the awesome-operators page, there are 2 ZooKeeper operators and 5 for MySQL... It would be really nice if we are able to settle on a single zuul-operator implementation that satisfies everyone's needs.
Yes. I think the world would be a sad place if there were 2 zuul operators.
I definitely also want to list it at operatorhub:
because apparently the Operator Lifecycle Manager will know how to pull operators from operatorhub based on dependency metadata declarations.
Incidentally, this:
https://www.operatorhub.io/operator/percona-xtradb-cluster-operator-communit...
is there for database. None of the awesome-operators zookeeper ones are on operatorhub - so we may need to dig in to that one a bit more.
Not sure how operator dependencies are going to work (and be tested), but it seems like one operator can install another one at runtime.
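For what it's worth, the way OLM appears to express this is through the ClusterServiceVersion: an operator lists the CRDs it owns and the ones it requires, and OLM resolves the required ones from the catalog. The names below are purely illustrative assumptions (nobody has published a zuul CSV yet), a sketch rather than a tested manifest:
```
# Hypothetical ClusterServiceVersion excerpt for a zuul-operator, declaring
# that it requires a ZookeeperCluster CRD provided by some other operator.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: zuul-operator.v0.0.1
spec:
  customresourcedefinitions:
    owned:
      - name: zuulclusters.zuul-ci.org
        kind: ZuulCluster
        version: v1alpha1
    required:
      - name: zookeeperclusters.zookeeper.pravega.io
        kind: ZookeeperCluster
        version: v1beta1
```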
>> SO ... >> >> I would like to propose that we create a zuul-operator project as part of >> Zuul. >> >> Given our close ties with Ansible, I think we should use ansible-operator >> [0] to write it. >> >> Since we have zuul in production on both vanilla k8s and on OpenShift, I >> think we should set up gate jobs to make sure it works on both. Once we're >> happy with it - we should publish it to operatorhub. [1] >> >> Does anyone object? It not, I'll get a repo started. >> > I think it is a fine idea, I've also seen some talk about it recently > too. > > I do think we should have some consensus on the bits we want to support > for the deployment. Maybe going as far as saying it is an opinionated > deployment, for better or worse. As an example, graphite / statsd is > that in play or not?
I think that's a great question.
I do believe it should be an opinionated deployment - but there are probably places where people will want to make some choices from a specific set of choices. Like - by default I think it should spin up a percona-xtradb cluster as provided by the percona operator. BUT - I could see a setting in the main Zuul CRD like "use-existing-db" that takes a sqlalchemy url. That way if there is someone who has an existing db infrastructure, they could take advantage of it - but the Zuul operator gets you a working system out of the box.
Obviously we'll need to spin up a zookeeper. Not sure if there is an operator for that yet - we might want to make one or work with someone to make one.
There are actually 2 operators for ZooKeeper already listed in this page: https://github.com/operator-framework/awesome-operators
WOOT!
I could see a graphite/statsd option with the ability to point to an existing graphite. Maybe we just start with a CRD field that indicates the stats_host like we have now?
But - yes - opinionated, mostly batteries included - maybe some amount of optional things.
Should we discuss this with a spec? For example:
Yes - probably so - because there's lots of design points (as you start to raise here). I'll start one today and we can iterate on it together.
How are we going to configure the connections and the tenants' configuration?
It's a great question. My first hunch is that both the connections config and the tenant config should just be parameters to the Zuul CRD, and then the operator could take that data, transform it into the right file formats and restart zuul or run zuul-reconfigure as needed.
It's also feasible we could do it as referenced ConfigMap objects. It's also conceivable that maybe we just want connections to be parameters to the Zuul CRD and we want to make ZuulTenant CRD k8s objects and just have the operator concatenate them appropriately. That last option starts to worry me though - starting to feel a bit like the puppet modules that wanted to re-implement the config language of each thing (like apache) in the Puppet DSL - and how it turns out we wind up preferring the ansible "just jinja template the file itself" approach more.
But I honestly don't know which we'll like more. I think the EASIEST thing for a simple first start would be having each config just be parameters to the Zuul CRD - but we could probably play with the other things and see what we like. Or maybe there is a different idea.
Everything in the Zuul CRD sounds like the most straightforward approach. Moreover, the reconcile logic can take care of the different actions depending on what changed.
The CRD I started to work on looked like this:
```
apiVersion: v1
kind: ZuulCluster
metadata:
  name: test-cluster
spec:
  merger:
    instances: 1
  executor:
    instances: 2
  zookeeperHosts:
    - hostname:port
  connections:
    - name: git.zuul-ci.org
      driver: git
      baseurl: https://git.zuul-ci.org
    - name: sqlreporter
      driver: sql
      dburi: mysql+pymysql://...
    - name: gerrit
      driver: gerrit
      username: ...
  tenants:
    - name: local
      source:
        gerrit:
          config-projects:
            - config
        git.zuul-ci.org:
          untrusted-projects:
            - zuul-jobs
```
Then I was planning on adding a similar NodepoolCluster and perhaps a ZuulApplication to glue both clusters and manage zookeeper, rdbms and statsd if missing.
Yes - I think that's likely a great approach to start with. (Although I think we can just call ZuulApplication "Zuul" - but I like typing shorter things.)
Although I could also see just having a Zuul and a Nodepool and having a db field that could optionally be overridden but otherwise would cause a percona operator to get created and a sqlreporter entry added to the connections list. Similarly the Zuul operator will need to depend on a Zookeeper resource provided by an operator - but allow it to be overridden. Having that there though - I don't think the user of the Zuul operator should have to put in an entry to the Zuul CR telling it to put the zookeeper info into the config file- that should be zuul operator details. *waves hands*
Basically - when there's an opportunity for the operator to take care of connecting some info here we should take it.
Yes. So for completeness here are the 2 other CRDs (draft):
```
kind: NodepoolCluster
spec:
  zookeeperHosts:
    - hostname:port
  launcher:
    instances: 2
  builder:
    instances: 1
  providers:
    - name: cloud
      driver: openstack
      pools: []
    - name: static
      driver: static
      ...
  cloud-config: # clouds yaml content or secret name
  kube-config: # kube yaml content or secret name
```
And the user facing CRD would be:
```
kind: Zuul
spec:
  connections: []
  tenants: []
  providers: []
```
This higher-level Zuul resource may take care of the Zookeeper and DB setup if it's not provided.
The nodepool case is a bit more interesting to think about - because a Zuul requires a Nodepool - but the Nodepool config is a bit more complex and maybe would start to feel very heavyweight if we just put it all into the CRD? I'm sure we'll learn a lot as we poke ...
Secrets should probably live in a Secret resource, but I'm not sure how the operator can poke at the services when Secrets are updated (e.g. if the Zuul dburi changes). Hence, to keep things simple at first, it seems easier to manage every configuration bit as part of the CRD spec.
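If the Secret route were taken later, it would presumably look something like the following; the Secret object itself is standard kubernetes, but the dbSecretName field on the Zuul CR is invented here for illustration:
```
# Hypothetical alternative: keeping the dburi out of the CR spec.
apiVersion: v1
kind: Secret
metadata:
  name: zuul-db
type: Opaque
stringData:
  dburi: mysql+pymysql://zuul:secret@db.example.com/zuul
---
# The Zuul CR would then only carry a reference (field name is an assumption).
kind: ZuulCluster
spec:
  dbSecretName: zuul-db
```
The open question raised above is exactly how the operator would notice the Secret change and restart or reconfigure the affected services.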
Software Factory batteries include a zuul-reconfigure job to manage the main.yaml file. Should the zuul-operator set up such a job, or perhaps do we also need a zuul-quick-start-operator?
I don't think the zuul operator should set up any jobs, because zuul jobs come from git repos - and the zuul operator would have to learn about all of the different types of git backends for it to be able to push commits into one - and the operator would have to have git account access.
That seems like a feature that a software factory operator (or an opendev operator) would be in a much better position to provide - since it's reasonable for those operators to know more things about the surrounding git infrastructure and have accounts for it.
A zuul-reconfigure job might be a nice addition to zuul-jobs though. And maybe there would need to be a zuul-operator-reconfigure if applying main.yaml changes involves pushing a new Zuul CRD manifest via kubectl to get the operator to update the file and run reconfigure. (take this main.yaml file, slurp it into the manifest, run kubectl)
That seems out of scope indeed, though later down the road, it seems like a rather common pattern to have the tenant configuration in a git repository that would reconfigure zuul when a change is merged.
Agree
> Also, could we maybe say a single distro for all container things? I > guess by default, that is what the current containers by zuul project is > running today. I don't much have the energy to get into the weeds of pip > / dpkg / rpm for how things maybe get installed (if needed).
Yes. All of our container images as we are publishing them now are awesome, and are what I expect us to use.
I am categorically opposed to us making and publishing different images. One of the nice things about publishing container images is that you can do them once and do them well. We're using the python base image. It happens to be based on debian-stretch - but that's irrelevant. It's running python.
That sounds great, but how are we going to cope with the next heartbleed or shellshock? IIUC, we would need to wait for a fix to land in the debian-stretch image, then wait for a new opendevorg/python-builder image and finally wait for a new zuul image to be published.
Jeremy also responded to this - so this is basically me saying "I agree with fungi" in more words.
We have to do that no matter what the base image is. But we don't actually have to wait for the whole stack: the zuul Dockerfile already does the apt-get update / apt-get install. If needed we could toss in a dist-upgrade if the fix has hit the repo but not the base image, so that we don't have to wait for the stack.
Supporting multiple in-image distros doesn't improve that. In fact, it increases the work for upstream because now we have to do that exact process you described, but we have to do it for each distro we're building images with.
Agreed, there is no point in supporting multiple distro images.
Or what happens when the zuul image gets published with an incompatible openstacksdk requirement?
Thus I think the base image and how the zuul bits get bundled are relevant. For example, OpenShift users may want to use a BuildConfig instead, so that a parent layer update automatically triggers a rebuild of the child layers.
Sure - but the zuul images are built with standard Dockerfiles - so there should be nothing stopping people from rebuilding them locally however they want.
Yah, this is what I was hoping we'd discuss more when we dove into the weeds. Can a BuildConfig work with k8s? Or do some other k8s items work with openshift? If not, I think it is fine if we only support one and not try to do everything. Which one we support, I don't really know; hopefully others can say which one is better :) We can also support both, but I feel that is a lot of work for minimal gain.

I think it's a strange idea, when you have a Zuul, to want to use a different build system to build zuul images, since you would not be able to CI those images as effectively as if you use the zuul image building jobs. The only good reason for local rebuilds is if you're carrying local changes.
Another good reason to manage the image is compliance and reproducibility. Using another base layer may be easier to audit and update, and users may also want to build their images with curated registries for things like pip or npm to prevent supply chain attacks.
If you are carrying local changes, one would imagine you would want to be able to submit code reviews to a git system to manage those changes, and you would want those code reviews to get gated, and you would want to publish the images that result from the gate passing. If you have all of that set up in zuul to test and publish images from your local changes, it stands to reason you'd want zuul to also just do an image rebuild on your local branch in the scenarios you mention, and using a BuildConfig would not really help you do that at all. **BUT** if that's what you want to do, I mean, nothing should be stopping you.
My main concern is about the base layer update. BuildConfig can take care of rebuilding your final image when needed. You may use Zuul to promote the resulting image before deploying it, but the point is, base layer updates may happen out of your control, e.g. after a bug is discovered in openssl.
Sure - I just think that it's a better route to have something trigger Zuul to do a rebuild and potential promotion when a base layer changes.
IIUC, BuildConfigs are first-class citizens in OKD, and they enable images to be managed by the cluster: https://docs.okd.io/latest/dev_guide/builds/index.html#dev-guide-what-is-a-b... It seems like Zuul wouldn't be able to do that, as the job would require a docker service.
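For reference, such a BuildConfig could look roughly like this (the repository URI and image names are placeholders); the ImageChange trigger is what makes the cluster rebuild the child image when the parent layer is updated:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: zuul-executor
spec:
  source:
    git:
      uri: https://example.com/zuul/zuul        # placeholder source repository
  strategy:
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.7-slim                  # image stream tracking the base image
  output:
    to:
      kind: ImageStreamTag
      name: zuul-executor:latest
  triggers:
    - type: ImageChange                        # rebuild when the base image changes
      imageChange: {}
```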
(A simple trigger should be sufficient) BuildConfig is a whole other system with a whole other worldview and I would be more worried that Zuul and BuildConfig would fight each other. (and I don't think BuildConfig adds anything, other than potentially having a trigger defined watching for changes to base images)
Users of BuildConfig should be able to trigger a Zuul pipeline to take care of the promotion and deployment. At least that is the workflow provided by Jenkins: https://docs.okd.io/latest/dev_guide/builds/build_strategies.html#pipeline-s... Regards, -Tristan
It certainly doesn't mean we need to support multi-distro inside the image content we publish as an upstream. I'm pretty sure an OpenShift BuildConfig can build an image based on python:slim as well as anything else.
I didn't mean to derail the thread onto this image building topic, which is not actually relevant to the zuul-operator. Sorry about that :) I mean, as long as the zuul-operator can use an optional custom image.
++ - I think it's essential that it can use an optional custom image.
Regards, -Tristan
I DO think we'll likely need to support specifying an image location in the CRD. (rook does this for ceph too) That's because I'd like for Tobias to be able to use the operator if he chooses, but he uses local builds of the images. And I think that's fine.
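Something along these lines, perhaps (the field names are hypothetical; rook's CephCluster CRD exposes a similar image override, and the operator would presumably fall back to the upstream-published images when the field is omitted):

```yaml
# Sketch of a per-component image override in the Zuul CR
kind: Zuul
spec:
  images:
    scheduler: registry.example.com/zuul/zuul-scheduler:3.6.1
    executor: registry.example.com/zuul/zuul-executor:3.6.1
    merger: registry.example.com/zuul/zuul-merger:3.6.1
    web: registry.example.com/zuul/zuul-web:3.6.1
```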
> I am likely showing how much I don't know about k8s / openshift here,
> but wanting to limit the amount of different ways to do a deployment
> based on underlying distro bits.
+10000000000
I believe there is no value in producing divergent images.
_______________________________________________ Zuul-discuss mailing list Zuul-discuss@lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss
Paul Belanger <pabelanger@redhat.com> writes:
On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose:
* log-processor-operator
* elastic-search-operator
* paste-operator, etherpad-operator, ...
* gerrit-operator
I think these are fine to have, but I am on the fence about whether they are in the scope of what zuul should do. As an example, I can totally see opendev.org wanting to do this. As long as everything is all disconnected from zuul-operator, I would be okay. Personally, I don't want zuul to manage a gerrit.
I'm not on the fence about that. :)

If we take Tristan's comment as describing a Zuul operator's place in potentially one of many larger systems, that's great. It's good to think about how what we create might be used.

But here on this mailing list, this team of folks is focused on the Zuul project, which is the components here: https://git.zuul-ci.org/cgit

So it's fine to suggest that perhaps this Zuul operator might be part of a larger OpenDev operator project, but that decision would be made by the OpenDev team. If folks are interested in contributing to such a project, it would be worth bringing it up on the openstack-infra mailing list.
We also started to investigate operators for Software Factory components, and perhaps we could join forces?
Please do. :) -Jim
On Thu, Mar 07, 2019 at 07:22:32AM -0800, James E. Blair wrote:
Paul Belanger <pabelanger@redhat.com> writes:
On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose:
* log-processor-operator
* elastic-search-operator
* paste-operator, etherpad-operator, ...
* gerrit-operator
I think these are fine to have, but I am on the fence about whether they are in the scope of what zuul should do. As an example, I can totally see opendev.org wanting to do this. As long as everything is all disconnected from zuul-operator, I would be okay. Personally, I don't want zuul to manage a gerrit.
I'm not on the fence about that. :)
If we take Tristan's comment as describing a Zuul operator's place in potentially one of many larger systems, that's great. It's good to think about how what we create might be used.
But here on this mailing list, this team of folks is focused on the Zuul project, which is the components here: https://git.zuul-ci.org/cgit
So it's fine to suggest that perhaps this Zuul operator might be part of a larger OpenDev operator project, but that decision would be made by the OpenDev team.
If folks are interested in contributing to such a project, it would be worth bringing it up on the openstack-infra mailing list.
Agree, you said it much better than I originally did. :)
We also started to investigate operators for Software Factory components, and perhaps we could join forces?
Please do. :)
-Jim
_______________________________________________ Zuul-discuss mailing list Zuul-discuss@lists.zuul-ci.org http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss
On Thu, Mar 07, 2019 at 07:22 James E. Blair wrote:
Paul Belanger <pabelanger@redhat.com> writes:
On Thu, Mar 07, 2019 at 11:32:39AM +0900, Tristan Cacqueray wrote:
Ideally, we would grow a hierarchy of operators to manage the gating system, and then you could have an opendev-operator to compose:
* log-processor-operator
* elastic-search-operator
* paste-operator, etherpad-operator, ...
* gerrit-operator
I think these are fine to have, but I am on the fence about whether they are in the scope of what zuul should do. As an example, I can totally see opendev.org wanting to do this. As long as everything is all disconnected from zuul-operator, I would be okay. Personally, I don't want zuul to manage a gerrit.
I'm not on the fence about that. :)
If we take Tristan's comment as describing a Zuul operator's place in potentially one of many larger systems, that's great. It's good to think about how what we create might be used.
But here on this mailing list, this team of folks is focused on the Zuul project, which is the components here: https://git.zuul-ci.org/cgit
So it's fine to suggest that perhaps this Zuul operator might be part of a larger OpenDev operator project, but that decision would be made by the OpenDev team.
Yes, I meant an opendev-operator composing a gerrit-operator and zuul-operator, not the other way around.
If folks are interested in contributing to such a project, it would be worth bringing it up on the openstack-infra mailing list.
We also started to investigate operators for Software Factory components, and perhaps we could join forces?
Please do. :)
If we are going to rewrite configuration management in the form of kubernetes operators, then perhaps we could accommodate the differences between Software Factory and opendev and share the implementation. Well, I jumped ahead; this is probably too early to discuss an opendev-operator (and not the right mailing list), but my suggestion is that a zuul-operator should provide an abstraction that can be used efficiently by a higher-level operator. Regards, -Tristan
On Wed, Mar 6, 2019, at 9:31 AM, Monty Taylor wrote:
Hey everybody,
It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
SO ...
I would like to propose that we create a zuul-operator project as part of Zuul.
Given our close ties with Ansible, I think we should use ansible-operator [0] to write it.
We should check that the bug we found managing gitea with the ansible k8s module is handled by the ansible-operator, or push the appropriate parties to get the fixes released. Long story short, the kubernetes python module's thread management doesn't close threads properly in the synchronous case, which is how the k8s ansible module consumes it. This library is autogenerated python code and the 9.0 beta version fixes it, but there isn't a real release yet [2]. On top of that, the openshift client library is how ansible consumes this in the k8s module, and it pins kubernetes to 8.x [3]. Ideally the kubernetes python lib would release 9.0, then openshift can require >=9.0, and then ansible talking to k8s will work reliably.
Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1]
Does anyone object? If not, I'll get a repo started.
Monty
[0] https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/u... [1] https://www.operatorhub.io/
[2] https://pypi.org/project/kubernetes/#history [3] https://github.com/openshift/openshift-restclient-python/blob/master/require...
On Wed, Mar 06, 2019 at 17:31 Monty Taylor wrote:
Hey everybody,
It seems that kubernetes operators are all the rage these days, and in the last 2 weeks I've had multiple conversations that have all come back around to someone wanting a zuul operator to exist.
SO ...
I would like to propose that we create a zuul-operator project as part of Zuul.
Given our close ties with Ansible, I think we should use ansible-operator [0] to write it.
Since we have zuul in production on both vanilla k8s and on OpenShift, I think we should set up gate jobs to make sure it works on both. Once we're happy with it - we should publish it to operatorhub. [1]
Does anyone object? If not, I'll get a repo started.
Hi,

I went ahead and worked on a prototype published here: https://github.com/TristanCacqueray/zuul-operator#usage

Here are a few notes:

I ran the tests with OKD v3.11, though it should work with plain kubernetes too.

This prototype doesn't use the OLM and assumes these operators are already installed:
- zookeeper.pravega.io/v1beta1 ZookeeperCluster
- cr.client-go.k8s.io/v1 PgCluster

It doesn't use persistent volumes either; I'm still not sure how an operator is supposed to configure such volumes.

I investigated autoscaling for merger and executor based on the gearman queues. Scale-up works fine, mergers get created on demand, but scale-down is missing a graceful stop to wait for pending jobs before stopping a pod. FWIW here is the formula: https://github.com/TristanCacqueray/zuul-operator/blob/master/ansible/roles/...

Lastly, I also prototyped a Gerrit operator as well as a higher level operator named Software Factory that composes the Zuul operator like this: https://softwarefactory-project.io/cgit/software-factory/sf-operator/tree/an...

As you can see, the Zuul ssh key can be passed by secret name reference, similarly to the PgCluster usersecretname.

Regards,
-Tristan
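Roughly, the idea is to derive a replica count from the gearman queue depth. A sketch only - not the actual formula from the linked role; the one-merger-per-5-jobs ratio, the bounds, the k8s_scale usage, and ansible-operator's `meta` variable are all assumptions:

```yaml
# Sketch: scale the mergers from the gearman queue depth.
# Assumes a previous task parsed "gearadmin --status" on the scheduler and
# registered the number of queued "merger:merge" jobs in merge_jobs_queued.
- name: Compute the desired merger replica count (one merger per 5 queued jobs, bounded 1..10)
  set_fact:
    merger_replicas: "{{ [[ ((merge_jobs_queued | int) / 5) | round(0, 'ceil') | int, 1] | max, 10] | min }}"

- name: Scale the merger deployment accordingly
  k8s_scale:
    kind: Deployment
    name: zuul-merger
    namespace: "{{ meta.namespace }}"   # namespace of the CR, as exposed by ansible-operator
    replicas: "{{ merger_replicas }}"
```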
participants (9)
- Artem Goncharov
- Christoph Goern
- Clark Boylan
- Clint Byrum
- corvus@inaugust.com
- Jeremy Stanley
- Monty Taylor
- Paul Belanger
- Tristan Cacqueray