Projects config introspection
Hi all,

I'd like to start the discussion around introspecting project configs. The goal is to be able to efficiently answer the following questions:

* Which projects use a specific job,
* Which jobs use a specific nodeset,
* Which nodesets or jobs use a specific label, and
* Which jobs, labels or nodesets are not used.

Putting aside the job variants model (which isn't well exposed in the current REST APIs), the easiest solution would be to add new endpoints, such as /api/job-per-project/{job_name} or /api/label-per-project/{label_name}. But this feels sub-optimal, and we would need a new route for each type of query.

Another solution would be to add a global toDict() method for the whole Tenant configuration object, which could be exposed at /api/config (or simply dumped using a zuul-scheduler command line argument). This would enable custom searches to be done client-side.

Another solution would be to implement the query using a GraphQL custom resolver, as started in https://review.openstack.org/574625

Regards,
-Tristan
Hi,

for me, being able to serialize the Tenant (or Layout) config into a plain JSON object using the API will be OK. Our current use cases are:

1) Finding unused configuration items (jobs, secrets) and in turn also unused playbooks.
2) Graphing the job inheritance structure.

Right now we do it by reading the configuration files on our own [1], but I think it's not the best way to go forward. We need to check out the project repos manually, and if we wanted to do anything more advanced we could end up duplicating the Zuul parsing logic - unnecessary and error-prone work.

[1] https://github.com/tungsten-infra/zuul-jobs-grapher

Regards,
Jarek

On Fri, 30 Nov 2018 at 07:36, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
Hi all,
I'd like to start the discussion around introspecting projects configs. The goal is to be able to efficiently answer the following questions:
* Which projects use a specific job,
* Which jobs use a specific nodeset,
* Which nodesets or jobs use a specific label, and
* Which jobs, labels or nodesets are not used.
Putting aside the job variants model (which isn't well exposed in the current REST APIs), the easiest solution would be to add new endpoints, such as /api/job-per-project/{job_name}, or /api/label-per-project/{label_name}. But this feels sub-optimal and we need a new route for each type of query.
Another solution would be to add a global toDict() method for the whole Tenant configuration object, which can be exposed at /api/config (or simply dumped using a zuul-scheduler command line argument). This would enable custom search to be done client-side.
Another solution would be to implement the query using GraphQL custom resolver as started in https://review.openstack.org/574625
Regards, -Tristan
--
Jarosław Łukow
DevOps Engineer
jaroslaw.lukow@codilime.com
Thanks Jarek for the prompt feedback. On November 30, 2018 5:20 pm, Jarek Łukow wrote:
Hi,
for me, being able to serialize Tenant (or Layout) config into a plain json object using the API will be ok.
It seems like that is the path of least resistance to get $subject going. Though a formal JSON schema of the Zuul data model needs lots of rigor and tinkering to efficiently represent the relations between all the objects and their source contexts... To get started, here is a simple /config endpoint that may be enough for most use cases: https://review.openstack.org/621344
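As a rough sketch of the kind of client-side search such a dump would enable, the snippet below lists nodesets that no job references. It assumes a per-tenant /config route and a flat {"jobs": [...], "nodesets": [...]} layout; both are guesses at what the change above could return, not an existing Zuul API:

    # Hypothetical client-side search over a dumped tenant config.
    # The /api/tenant/{tenant}/config route and the JSON layout are
    # assumptions based on the proposed change, not an existing API.
    import requests

    ZUUL_URL = "https://zuul.example.com"  # placeholder
    TENANT = "example-tenant"               # placeholder

    config = requests.get(f"{ZUUL_URL}/api/tenant/{TENANT}/config").json()

    defined = {ns["name"] for ns in config.get("nodesets", [])}
    used = set()
    for job in config.get("jobs", []):
        nodeset = job.get("nodeset")
        if isinstance(nodeset, dict):
            used.add(nodeset.get("name"))
        elif isinstance(nodeset, str):
            used.add(nodeset)

    for name in sorted(defined - used):
        print(f"unused nodeset: {name}")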
Our current use cases are: 1) Finding unused configuration items (jobs, secrets) and in turn also unused playbooks.

Regarding job playbooks, here is a change to add them to the job object: https://review.openstack.org/621343
2) Graphing the job inheritance structure. Right now we do it by reading the configuration files on our own [1], but I think it's not the best way to go forward. We need to check out the project repos manually, and if we wanted to do anything more advanced we could end up duplicating the Zuul parsing logic - unnecessary and error-prone work.
A job inheritance graph was proposed here: https://review.openstack.org/537869 but it was rejected because it doesn't handle branches and variants...

-Tristan
Regards, Jarek

On Fri, 30 Nov 2018 at 07:36, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
Hi all,
I'd like to start the discussion around introspecting projects configs. The goal is to be able to efficiently answer the following questions:
* Which projects use a specific job,
* Which jobs use a specific nodeset,
* Which nodesets or jobs use a specific label, and
* Which jobs, labels or nodesets are not used.
Putting aside the job variants model (which isn't well exposed in the current REST APIs), the easiest solution would be to add new endpoints, such as /api/job-per-project/{job_name}, or /api/label-per-project/{label_name}. But this feels sub-optimal and we need a new route for each type of query.
Another solution would be to add a global toDict() method for the whole Tenant configuration object, which can be exposed at /api/config (or simply dumped using a zuul-scheduler command line argument). This would enable custom search to be done client-side.
Another solution would be to implement the query using GraphQL custom resolver as started in https://review.openstack.org/574625
Regards, -Tristan
Tristan Cacqueray <tdecacqu@redhat.com> writes:
Hi all,
I'd like to start the discussion around introspecting projects configs. The goal is to be able to efficiently answer the following questions:
* Which projects use a specific job,
* Which jobs use a specific nodeset,
* Which nodesets or jobs use a specific label, and
* Which jobs, labels or nodesets are not used.
Putting aside the job variants model (which isn't well exposed in the current REST APIs), the easiest solution would be to add new endpoints, such as /api/job-per-project/{job_name}, or /api/label-per-project/{label_name}. But this feels sub-optimal and we need a new route for each type of query.
On the contrary, the job variants are exposed in the current API. When you request a job, you get all the variants. I think that's about all one could ask for. :) But I do agree that it's not a concern here -- the goal here is to find out what *job* a project uses. We don't have to figure out what variants it would use. Once we identify that a job is used by a project, we can inspect the project itself to learn about what variants (project-pipeline and global) might be at play. This is bikeshedding, but I don't think we need a new rest endpoint for that; I think we can hang it off of the job endpoint (e.g., /api/job/{name}/projects or something).
Another solution would be to add a global toDict() method for the whole Tenant configuration object, which can be exposed at /api/config (or simply dumped using a zuul-scheduler command line argument). This would enable custom search to be done client-side.
I do not think we should do this. The interconnectedness of objects is complex and still changing, and I don't think we should spend time trying to create a schema. Furthermore, the result will be *huge*, and I'd rather we send fewer 20MB data structures to clients, not more. Finally, parsing and interrogating such a huge data structure on the client side will make for a lot of complex code. I'd much rather have a robust API that can answer any individual question conveniently.
Another solution would be to implement the query using GraphQL custom resolver as started in https://review.openstack.org/574625
This is also promising. In comments on that change you offered to demonstrate what that would look like applied to non-SQL data. I think it would help to do that so we can evaluate it.

One further thing however -- I'm not sure how much more complexity we should undertake in the REST API before we begin work on the Zuul v4 effort -- moving the configuration data into Zookeeper. The current method of gearman queries is not very efficient, and as we add more complexity, it's only going to get worse and make the ultimate transition more difficult. I think we can start on the effort to move things into ZK soon, and that will make building these kinds of APIs much easier.

-Jim
On December 1, 2018 5:58 pm, James E. Blair wrote:
Tristan Cacqueray <tdecacqu@redhat.com> writes:
Hi all,
I'd like to start the discussion around introspecting projects configs. The goal is to be able to efficiently answer the following questions:
* Which projects use a specific job,
* Which jobs use a specific nodeset,
* Which nodesets or jobs use a specific label, and
* Which jobs, labels or nodesets are not used.
Putting aside the job variants model (which isn't well exposed in the current REST APIs), the easiest solution would be to add new endpoints, such as /api/job-per-project/{job_name}, or /api/label-per-project/{label_name}. But this feels sub-optimal and we need a new route for each type of query.
On the contrary, the job variants are exposed in the current API. When you request a job, you get all the variants. I think that's about all one could ask for. :)
Looking at: https://zuul.openstack.org/job/devstack-tox-functional

What is the difference between the two stable/pike variants? How do you tell which parent's variant is the correct parent? Similarly, in the projects API you cannot easily differentiate a project-template's job from a local variant or an unmodified job. Shouldn't there be a way to reference a job's or project config's variants uniquely?
But I do agree that it's not a concern here -- the goal here is to find out what *job* a project uses. We don't have to figure out what variants it would use. Once we identify that a job is used by a project, we can inspect the project itself to learn about what variants (project-pipeline and global) might be at play.
Regarding labels, you might want to know if a project uses a specific variant of some popular jobs.
This is bikeshedding, but I don't think we need a new rest endpoint for that; I think we can hang it off of the job endpoint (e.g., /api/job/{name}/projects or something).
Well the routes can be attached to the existing endpoints, but it's still a very different type of query...
Another solution would be to add a global toDict() method for the whole Tenant configuration object, which can be exposed at /api/config (or simply dumped using a zuul-scheduler command line argument). This would enable custom search to be done client-side.
I do not think we should do this. The interconnectedness of objects is complex and still changing, and I don't think we should spend time trying to create a schema. Furthermore, the result will be *huge* and I'd rather we send fewer 20MB data structures to clients rather than more. Finally, parsing and interrogating such a huge data structure on the client side will make for a lot of complex code. I'd much rather have a robust API that can answer any individual question conveniently.
Note that it's not something you would query frequently, but rather a periodic dataset for cleaning or refactoring purposes (e.g. when changing a nodeset or deleting branches). I've built another utility to query every endpoint to get the job<->project<->nodeset relationships. Having a raw dump of the config would be much more convenient to use... For what it's worth, here is the utility: https://softwarefactory-project.io/r/#/c/13135/4/tools/create_zuul_status_da...
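For reference, the crawl that utility performs can be approximated against the existing per-tenant routes. The sketch below is only an illustration: the /jobs and /job/{name} paths match current zuul-web endpoints, but the field names used to pull the nodeset out of each variant are assumptions about the returned JSON:

    # Hedged approximation: crawl existing zuul-web endpoints to map each
    # job to the nodesets its variants request. Field names such as
    # "nodeset" and "name" inside the job JSON are assumptions.
    import requests

    ZUUL_URL = "https://zuul.example.com"  # placeholder
    TENANT = "example-tenant"               # placeholder
    API = f"{ZUUL_URL}/api/tenant/{TENANT}"

    job_to_nodesets = {}
    for job in requests.get(f"{API}/jobs").json():
        variants = requests.get(f"{API}/job/{job['name']}").json()
        nodesets = set()
        for variant in variants:
            nodeset = variant.get("nodeset") or {}
            if nodeset.get("name"):
                nodesets.add(nodeset["name"])
        job_to_nodesets[job["name"]] = nodesets

    for name, nodesets in sorted(job_to_nodesets.items()):
        print(name, sorted(nodesets))

Doing one HTTP request per job is exactly the kind of overhead a single raw dump would avoid.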
Another solution would be to implement the query using GraphQL custom resolver as started in https://review.openstack.org/574625
This is also promising. In comments on that change you offered to demonstrate what that would look like applied to non-SQL data. I think it would help to do that so we can evaluate it.
The scheduler model has been partially exposed over GraphQL since PS3; there is even a test to demonstrate the query language: https://review.openstack.org/#/c/574625/3/tests/unit/test_web.py Though I haven't looked at the UI part yet.
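To give a feel for the query language, here is a minimal sketch of a client posting a GraphQL query to a hypothetical /api/graphql route; both the route and the tenants -> projects -> jobs schema are assumptions loosely modeled on that change, not an existing Zuul API:

    # Hypothetical GraphQL query against an assumed /api/graphql endpoint.
    # The schema (tenants -> projects -> jobs) is illustrative only.
    import requests

    ZUUL_URL = "https://zuul.example.com"  # placeholder

    query = """
    {
      tenants {
        name
        projects {
          name
          jobs { name }
        }
      }
    }
    """

    response = requests.post(f"{ZUUL_URL}/api/graphql", json={"query": query})
    for tenant in response.json()["data"]["tenants"]:
        for project in tenant["projects"]:
            print(tenant["name"], project["name"],
                  [job["name"] for job in project["jobs"]])

The appeal is that a single resolver could answer questions like "which projects use job X" without adding a dedicated REST route for each one.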
One further thing however -- I'm not sure how much more complexity we should undertake in the REST API before we begin work on the Zuul v4 effort -- moving the configuration data into Zookeeper. The current method of gearman queries is not very efficient, and as we add more complexity, it's only going to get worse and make the ultimate transition more difficult. I think we can start on the effort to move things into ZK soon, and that will make building these kinds of APIs much easier.
Hum, I didn't realize the configuration data was being moved to Zookeeper... What is the roadmap for Zuul v4? Before growing the Zookeeper dependency, could we please consider etcd again? I recently found a year-old issue about unauthorized connections being accepted, which is not very encouraging... https://issues.apache.org/jira/browse/ZOOKEEPER-2937

Regards, -Tristan
On 02.12.18, 10:32, "Tristan Cacqueray" <tdecacqu@redhat.com> wrote:

On December 1, 2018 5:58 pm, James E. Blair wrote:
[...]
> One further thing however -- I'm not sure how much more complexity we
> should undertake in the REST API before we begin work on the Zuul v4
> effort -- moving the configuration data into Zookeeper. The current
> method of gearman queries is not very efficient, and as we add more
> complexity, it's only going to get worse and make the ultimate
> transition more difficult. I think we can start on the effort to move
> things into ZK soon, and that will make building these kinds of APIs
> much easier.
>
Hum, I didn't realize the configuration data was being moved to Zookeeper... What is the roadmap for Zuul v4? Before growing the Zookeeper dependency, could we please consider etcd again? I recently found a year-old issue about unauthorized connections being accepted, which is not very encouraging... https://issues.apache.org/jira/browse/ZOOKEEPER-2937

Zookeeper is already deeply baked into the whole system. I just looked into the differences to etcd and I think the most important difference for us is the way locks work. In Zookeeper the lock is an ephemeral node that exists as long as the session exists. This is checked with a heartbeat per session. In contrast, etcd has no session concept. Locks are modeled via leases, where each lease needs to be validated every few seconds with a heartbeat. As we make massive use of locks, we would need to handle potentially thousands of locks at the same time, which would translate into thousands of heartbeats in etcd every few seconds compared to just one with Zookeeper.

Further, with kazoo we have a zk library with many ready-to-use recipes, while with etcd I only found the lock concept in the python-etcd3 library. So we would need to create our own recipes for caching, leader election, ... So I'm not really sure it's worth switching to etcd in the state we are in now. Exchanging this would be a large effort. However, the zk stuff is pretty well abstracted in zuul and nodepool, so if we consider a switch in the future I don't think the effort will be much bigger than switching now.

Regarding Zuul v4, I think of that as covering all topics related to making the scheduler scale-out ready. That also involves replacing gearman with Zookeeper in most places. And by coincidence I've just pushed up my first thoughts about that as a spec: https://review.openstack.org/621479

-Tobias
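To make the comparison concrete, here is a minimal kazoo sketch of the Zookeeper-style lock Tobias describes: the lock node is ephemeral, so it lives and dies with the client session, and that session is kept alive by a single heartbeat. The host, path and identifier are placeholders, not Zuul's actual layout:

    # Minimal kazoo lock sketch: the lock is an ephemeral node tied to the
    # client session, which one heartbeat keeps alive for all locks at once.
    # Host, path and identifier are placeholders.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk.example.com:2181")
    zk.start()

    lock = zk.Lock("/example/locks/node-0001", identifier="executor-1")
    with lock:  # blocks until acquired; released on exit or session loss
        print("holding the lock")

    zk.stop()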
Thank you Tobias for looking into that again. Please find some comments inline below.

On December 3, 2018 6:49 am, Tobias.Henkel@bmw.de wrote:
On 02.12.18, 10:32, "Tristan Cacqueray" <tdecacqu@redhat.com> wrote:
On December 1, 2018 5:58 pm, James E. Blair wrote: [...]
> One further thing however -- I'm not sure how much more complexity we
> should undertake in the REST API before we begin work on the Zuul v4
> effort -- moving the configuration data into Zookeeper. The current
> method of gearman queries is not very efficient, and as we add more
> complexity, it's only going to get worse and make the ultimate
> transition more difficult. I think we can start on the effort to move
> things into ZK soon, and that will make building these kinds of APIs
> much easier.
>
Hum, I didn't realize the configuration data was being moved to Zookeeper... What is the roadmap for Zuul v4? Before growing the Zookeeper dependency, could we please consider etcd again? I recently found a year-old issue about unauthorized connections being accepted, which is not very encouraging... https://issues.apache.org/jira/browse/ZOOKEEPER-2937
Zookeeper is already deeply baked into the whole system. I just looked into the differences to etcd and I think the most important difference for us is the way locks work. In Zookeeper the lock is an ephemeral node that exists as long as the session exists. This is checked with a heartbeat per session. In contrast, etcd has no session concept. Locks are modeled via leases, where each lease needs to be validated every few seconds with a heartbeat. As we make massive use of locks, we would need to handle potentially thousands of locks at the same time, which would translate into thousands of heartbeats in etcd every few seconds compared to just one with Zookeeper.
It seems like you can attach many locks to a single lease. All the keys attached to a lease are deleted if the lease expires, which seems similar to Zookeeper's lock implementation.
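A minimal python-etcd3 sketch of that idea might look like the following; whether Zuul's lock semantics could really be mapped onto per-lease keys this way is an open question, and the key names and TTL are placeholders:

    # Hedged python-etcd3 sketch: several lock-like keys attached to one
    # lease, all kept alive by refreshing that single lease and all removed
    # together if it expires. Key names and TTL are illustrative only.
    import etcd3

    client = etcd3.client(host="etcd.example.com", port=2379)
    lease = client.lease(ttl=30)

    # Attach multiple "lock" keys to the same lease.
    for node in ("node-0001", "node-0002", "node-0003"):
        client.put(f"/example/locks/{node}", "executor-1", lease=lease)

    lease.refresh()   # one heartbeat covers every key attached to the lease
    lease.revoke()    # releases all of them at once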
Further, with kazoo we have a zk library with many ready-to-use recipes, while with etcd I only found the lock concept in the python-etcd3 library. So we would need to create our own recipes for caching, leader election, ... So I'm not really sure it's worth switching to etcd in the state we are in now. Exchanging this would be a large effort. However, the zk stuff is pretty well abstracted in zuul and nodepool, so if we consider a switch in the future I don't think the effort will be much bigger than switching now.
If we are going to use kazoo recipes, isn't this going to make a switch later more difficult?
Regarding Zuul v4, I think of that as covering all topics related to making the scheduler scale-out ready. That also involves replacing gearman with Zookeeper in most places. And by coincidence I've just pushed up my first thoughts about that as a spec: https://review.openstack.org/621479
That is looking great :) -Tristan
Thank you, Monty, for such extensive feedback. Please find my replies inline.

On December 10, 2018 4:54 pm, Monty Taylor wrote:
I've since learned that it's considered a Bad Idea to re-use the etcd k8s uses for apps running in k8s. That means you need to run an etcd for your app anyway, which ultimately means the k8s use of etcd is really not as meaningful/beneficial as I originally thought, reducing any perceived gain we might have from aligning with what the k8s community is doing. Given that one of the benefits of using k8s to run applications is to make installing complicated dependencies like zookeeper easier (and plenty of things out there require much more complicated dependency chains than we do) - I don't think "need to run a zookeeper but I already know how to run etcd in k8s" is a very compelling argument.
Another argument has to do with distro packaging, as the complexity of distro packaging for java applications is high and support for zk inside of the RH distros has been low. I know people were already working on getting it in to rh distros properly, but even with that, last time I asked internally at RH about use of software dependencies upstream that were not in distro and/or hard to properly package (this was related to javascript tooling and frameworks) the answer was "write the software the way it needs to be written, don't let the distro packaging situation override the correct technical decision". So I'd argue we've already gotten the feedback needed from the provider of the distro with the youngest/smallest support for zk that it's ok.
I think this is a single argument: etcd is becoming more popular with k8s (and apparently OpenStack). Using etcd for Zuul would make it simpler for operators since they are more likely to have the knowledge of running and maintaining an etcd cluster. Moreover, integrating Zookeeper in the RH distro isn't really a technical issue; the dependencies are already packaged. However, the issue is the lack of engagement upstream, which prevents proper support. [...]
The only reason I could imagine changing that in the future would be if zk upstream abandoned the project. Zookeeper itself is pretty much "done" - it's a simple system that's been around for ages and additional 'features' are mostly done via recipes on top of its fundamentals ... so even a mostly absent upstream wouldn't really be a problem as long as it remains active enough to keep up with security fixes and to keep up with Java versions. I'm pretty sure zk is used in enough things that many people have a vested interest in (like Kafka) that it's unlikely zk loses enough upstream support to be a dangerous thing to be based on.
I find this argument to be the most compelling. Zookeeper was originally developed at Yahoo, and there have been no major releases since 2008. Facebook[0] seems to be replacing it with LAD. Kafka[1] also seems to be removing the Zookeeper dependency... I disagree that Zookeeper is a simple system. It's a complex stack and it seems to be a pain point for operations. I wouldn't say that it is a dangerous thing to be based on, but it's a problematic choice to make it such a central piece of Zuul services.
It is a little concerning that the auth issue you link to is still outstanding. However, if you look at the github PR linked: https://github.com/apache/zookeeper/pull/418 it's in merge conflict. I left a ping to the author and reviewer to see if it can be revived. That said - I would hope that zk auth isn't the only piece of the puzzle preventing unauthorized access. I'd expect defense in depth here, with at least some amount of additional restriction in place, such as VPN or iptables or similar.
Even with that PR, the current system may not be enough as it doesn't support multiple auth and you only have a global username:password. For example, with the executor zoning, we may want to be able to revoke or change just one executor credential and not the whole system.
All of that is to say - I think the ship of pondering replacing zk with etcd has sailed and there's no real reason to think we might return to it. kazoo recipes help us use zk fully, and that's a good thing. :)
Perhaps it's not such an impossible feat to rewrite the zk.py module with an etcd backend. Would such a contribution be considered?

Regards, -Tristan

[0]: https://code.fb.com/data-infrastructure/location-aware-distribution-configur...
[1]: https://issues.apache.org/jira/browse/KAFKA-6598
On Tue, Dec 11, 2018 at 06:45:22AM +0000, Tristan Cacqueray wrote:
Thank you, Monty, for such extensive feedback. Please find my replies inline.
On December 10, 2018 4:54 pm, Monty Taylor wrote:
I've since learned that it's considered a Bad Idea to re-use the etcd k8s uses for apps running in k8s. That means you need to run an etcd for your app anyway, which ultimately means the k8s use of etcd is really not as meaningful/beneficial as I originally thought, reducing any perceived gain we might have from aligning with what the k8s community is doing. Given that one of the benefits of using k8s to run applications is to make installing complicated dependencies like zookeeper easier (and plenty of things out there require much more complicated dependency chains than we do) - I don't think "need to run a zookeeper but I already know how to run etcd in k8s" is a very compelling argument.
Another argument has to do with distro packaging, as the complexity of distro packaging for java applications is high and support for zk inside of the RH distros has been low. I know people were already working on getting it in to rh distros properly, but even with that, last time I asked internally at RH about use of software dependencies upstream that were not in distro and/or hard to properly package (this was related to javascript tooling and frameworks) the answer was "write the software the way it needs to be written, don't let the distro packaging situation override the correct technical decision". So I'd argue we've already gotten the feedback needed from the provider of the distro with the youngest/smallest support for zk that it's ok.
I think this is a single argument: etcd is becoming more popular with k8s (and apparently OpenStack). Using etcd for Zuul would make it simpler for operators since they are more likely to have the knowledge of running and maintaining an etcd cluster. Moreover, integrating Zookeeper in the RH distro isn't really a technical issue; the dependencies are already packaged. However, the issue is the lack of engagement upstream, which prevents proper support.
FWIW: I'd had the same concerns as you when we first decided to use Zookeeper. Originally I thought, great, another Java app to maintain. However, in the last year of running Zookeeper for zuul.o.o, there hasn't been much maintenance that needed to be done. There were a few hiccups at the start around snapshots, but so far zero issues with memory leaks or broken functionality. We even migrated from a single ZK instance to a 3-node ZK cluster, and I don't believe there were any problems.
[...]
The only reason I could imagine changing that in the future would be if zk upstream abandoned the project. Zookeeper itself is pretty much "done" - it's a simple system that's been around for ages and additional 'features' are mostly done via recipes on top of its fundamentals ... so even a mostly absent upstream wouldn't really be a problem as long as it remains active enough to keep up with security fixes and to keep up with Java versions. I'm pretty sure zk is used in enough things that many people have a vested interest in (like Kafka) that it's unlikely zk loses enough upstream support to be a dangerous thing to be based on.
I find this argument to be the most compelling. Zookeeper was originally developed at Yahoo, and there have been no major releases since 2008. Facebook[0] seems to be replacing it with LAD. Kafka[1] also seems to be removing the Zookeeper dependency...
I disagree that Zookeeper is a simple system. It's a complex stack and it seems to be a pain point for operations. I wouldn't say that it is a dangerous thing to be based on, but it's a problematic choice to make it such a central piece of Zuul services.
As mentioned above, so far this hasn't been my experience. Zookeeper just works and I haven't needed to do anything. It is possible that, as we leverage ZK more, this may change, but we've done a good job keeping users out of Zookeeper so far.
It is a little concerning that the auth issue you link to is still outstanding. However, if you look at the github PR linked: https://github.com/apache/zookeeper/pull/418 it's in merge conflict. I left a ping to the author and reviewer to see if it can be revived. That said - I would hope that zk auth isn't the only piece of the puzzle preventing unauthorized access. I'd expect defense in depth here, with at least some amount of additional restriction in place, such as VPN or iptables or similar.
Even with that PR, the current system may not be enough as it doesn't support multiple auth and you only have a global username:password. For example, with the executor zoning, we may want to be able to revoke or change just one executor credential and not the whole system.
All of that is to say - I think the ship of pondering replacing zk with etcd has sailed and there's no real reason to think we might return to it. kazoo recipes help us use zk fully, and that's a good thing. :)
Perhaps it's not such an impossible feat to rewrite the zk.py module with an etcd backend. Would such a contribution be considered?
Regards, -Tristan
[0]: https://code.fb.com/data-infrastructure/location-aware-distribution-configur...
[1]: https://issues.apache.org/jira/browse/KAFKA-6598
On Tue, Dec 11, 2018 at 1:44 AM Tristan Cacqueray <tdecacqu@redhat.com> wrote:

[snip]
All of that is to say - I think the ship of pondering replacing zk with etcd has sailed and there's no real reason to think we might return to it. kazoo recipes help us use zk fully, and that's a good thing. :)
Perhaps it's not such an impossible feat to rewrite the zk.py module with an etcd backend. Would such a contribution be considered?
I did write that module (trying to keep all zk interaction hidden behind an API of our own design) with the intent that it would make it easier for us to switch to a different backend if ZooKeeper didn't work out. However, even though the code may be easier to write, I think we have to consider all of the problem domain knowledge that we will lose, and will now have to reset with something else, if we do switch. A lot of that knowledge shows up in several other areas of code, too (how we deal with locks, lost connections, refreshing data, etc). I think that's a HUGE negative for us if we were to switch, basically setting us back months (maybe years), at least on a knowledge front.

-Dave

--
David Shrewsbury (Shrews)
On December 11, 2018 3:32 pm, David Shrewsbury wrote:
On Tue, Dec 11, 2018 at 1:44 AM Tristan Cacqueray <tdecacqu@redhat.com> wrote:
[snip]
All of that is to say - I think the ship of pondering replacing zk with etcd has sailed and there's no real reason to think we might return to it. kazoo recipes help us use zk fully, and that's a good thing. :)
Perhaps it's not such an impossible feat to rewrite the zk.py module with an etcd backend. Would such a contribution be considered?
I did write that module (trying to keep all zk interaction hidden behind an API of our own design) with the intent that it would make it easier for us to switch to a different backend if ZooKeeper didn't work out. However, even though the code may be easier to write, I think we have to consider all of the problem domain knowledge that we will lose, and will now have to reset with something else, if we do switch. A lot of that knowledge shows up in several other areas of code, too (how we deal with locks, lost connections, refreshing data, etc). I think that's a HUGE negative for us if we were to switch, basically setting us back months (maybe years), at least on a knowledge front.
It seems like the domain knowledge may not all be lost, and another implementation would likely benefit from what you learned in the other areas of the code. I would like to give etcd a shot before ZooKeeper takes over the Zuul configuration. But such an effort requires your consent.

Regards, -Tristan
On Mon, Dec 10, 2018, at 10:45 PM, Tristan Cacqueray wrote:

Snip
I find this argument to be the most compelling. Zookeeper was originally developed at Yahoo, and there have been no major releases since 2008. Facebook[0] seems to be replacing it with LAD. Kafka[1] also seems to be removing the Zookeeper dependency...
Worth noting that Kafka has rejected etcd as well and appears to be implementing things internally themselves. At least according to this bug link.
Snip
Even with that PR, the current system may not be enough as it doesn't support multiple auth and you only have a global username:password. For example, with the executor zoning, we may want to be able to revoke or change just one executor credential and not the whole system.
I don't think this is true. ZooKeeper seems to support an auth scheme using Kerberos and SASL which would allow for every client and server to have its own principal. You can also apply ACLs to paths on top of the authenticated clients. This might not be the simplest setup but should allow for you to revoke a single client's (or server's) access to the cluster.
Snip
On December 11, 2018 5:34 pm, Clark Boylan wrote:
On Mon, Dec 10, 2018, at 10:45 PM, Tristan Cacqueray wrote:
Snip
I find this argument to be the most compelling. Zookeeper was originally developed at Yahoo, and there have been no major releases since 2008. Facebook[0] seems to be replacing it with LAD. Kafka[1] also seems to be removing the Zookeeper dependency...
Worth noting that Kafka has rejected etcd as well and appears to be implementing things internally themselves. At least according to this bug link.
Yes, it seems like they are moving away from configuration management systems altogether because it's not a good fit. The bottom line is, users seem to be moving away from ZooKeeper.
Snip
Even with that PR, the current system may not be enough as it doesn't support multiple auth and you only have a global username:password. For example, with the executor zoning, we may want to be able to revoke or change just one executor credential and not the whole system.
I don't think this is true. ZooKeeper seems to support an auth scheme using Kerberos and SASL which would allow for every client and server to have its own principal. You can also apply ACLs to paths on top of the authenticated clients.
This might not be the simplest setup but should allow for you to revoke a single client's (or server's) access to the cluster.
That seems correct, though it's unclear how you would add multiple principals to the znode's ACL. Note that ZooKeeper permissions are not hierarchical; any change to the ACL needs to be applied to all the nodes.

-Tristan
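To illustrate the per-path ACL model being discussed, here is a minimal kazoo sketch using the digest scheme (SASL/Kerberos would plug into the same ACL mechanism, just with a different scheme). The paths and credentials are placeholders, and as noted above the ACL only applies to the znode it is set on, not to its children:

    # Minimal kazoo ACL sketch using the digest scheme. An ACL applies only
    # to the znode it is set on; children need their own ACLs. Paths and
    # credentials are placeholders, not Zuul's actual layout.
    from kazoo.client import KazooClient
    from kazoo.security import make_digest_acl

    zk = KazooClient(hosts="zk.example.com:2181",
                     auth_data=[("digest", "executor-zone1:secret")])
    zk.start()

    # Only the "executor-zone1" principal may touch this znode.
    acl = make_digest_acl("executor-zone1", "secret", all=True)
    zk.create("/example/executors/zone1", b"", acl=[acl], makepath=True)

    # Revoking a single client later means rewriting the ACL on every
    # affected znode, since ACLs are not inherited.
    zk.set_acls("/example/executors/zone1",
                [make_digest_acl("admin", "secret", all=True)])

    zk.stop()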
participants (8):
- Clark Boylan
- corvus@inaugust.com
- David Shrewsbury
- Jarek Łukow
- Monty Taylor
- Paul Belanger
- Tobias.Henkel@bmw.de
- Tristan Cacqueray