Hello folks,

From a Kubernetes operator point of view, the Zookeeper component sounds redundant with the already running etcd service. Moreover etcd seems more active and it shows better overall performance. So my question is why Zookeeper was picked in the first place, and would it be possible to support/migrate to etcd in the future?

Thanks in advance,
-Tristan
On 19.04.18, 05:29, "Tristan Cacqueray" <tdecacqu@redhat.com> wrote:

Hello folks, From a Kubernetes operator point of view, the Zookeeper component sounds redundant with the already running etcd service. Moreover etcd seems more active and it shows better overall performance. So my question is why Zookeeper was picked in the first place, and would it be possible to support/migrate to etcd in the future? Thanks in advance, -Tristan

Actually I asked the same question a year ago, and I think the answer was that, at least at the time this was decided, zookeeper was better with locks. But Jim probably knows that much better than me.

Regarding adding support for etcd: I think this whole synchronization and locking stuff is very complicated, and we should support only one method. So *if* we decided to support etcd, we should migrate. However, this would be a major development effort. I'd consider that effort too high for the benefit of reusing an etcd cluster in some deployments (note that this also implies added effort for all deployments already using zookeeper). So I would not consider migrating unless etcd has some clear technical benefits over zookeeper which may even enable new use cases.

Kind regards
Tobias
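For reference, here is a minimal sketch of the kind of distributed lock in question, using kazoo (the ZooKeeper client library Zuul and Nodepool use). The connection address, lock path, and identifier below are made-up examples, not anything Zuul actually does:

    from kazoo.client import KazooClient

    # Connect to a ZooKeeper ensemble (address is an example).
    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    # kazoo's Lock recipe queues waiters as ephemeral sequential znodes,
    # which is what gives fair, crash-safe locking.
    lock = zk.Lock('/example/locks/node-0001', identifier='worker-1')
    with lock:   # blocks until the lock is acquired
        pass     # ... do work while holding the lock ...

    zk.stop()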
On 2018-04-19 05:40:54 +0000 (+0000), Tobias.Henkel@bmw.de wrote: [...]
Actually I asked the same question a year ago and I think the answer was that at least at the time this was decided zookeeper was better with locks. But Jim probably knows that much better than me. [...]
A major reason was that usable versions of etcd were not actually packaged by stable linux distributions when the design choice was made. Better to design around something you can use right away rather than blocking on someone else's roadmap and release schedule. -- Jeremy Stanley
Jeremy Stanley <fungi@yuggoth.org> writes:
On 2018-04-19 05:40:54 +0000 (+0000), Tobias.Henkel@bmw.de wrote: [...]
Actually I asked the same question a year ago and I think the answer was that at least at the time this was decided zookeeper was better with locks. But Jim probably knows that much better than me. [...]
A major reason was that usable versions of etcd were not actually packaged by stable linux distributions when the design choice was made. Better to design around something you can use right away rather than blocking on someone else's roadmap and release schedule.
I have heard that perhaps etcd is now just as suitable for all of the tasks as zookeeper -- but that's from an informal conversation, not a technical evaluation.

This thread has raised several questions:

* Is etcd capable of supporting all the processes that we currently use and plan to use?
* Is it recommended for general use?
* Is it easily installable?

I also agree that because behavior and performance are (and will be even more so) closely tied to this system, we should only use one, and not try to support both through an abstraction layer.

If folks are interested in switching, I think in addition to the above questions, it would be good to investigate the scalability and performance of the two systems. Is locking performance equivalent? Is one or the other going to perform better once we store the entire configuration (and per-change deltas), and the pipeline states in it?

-Jim
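For anyone who does want to evaluate, the rough equivalent on the etcd side would look something like the sketch below, using the python-etcd3 client. This is only an illustration (the address, lock name, and TTL are arbitrary), not a benchmark or an endorsement:

    import etcd3

    # Connect to an etcd cluster (address is an example).
    client = etcd3.client(host='127.0.0.1', port=2379)

    # python-etcd3 implements its lock as a leased key under a lock prefix;
    # the ttl guards against a holder that dies without releasing.
    with client.lock('node-0001', ttl=30):  # blocks until acquired
        pass  # ... do work while holding the lock ...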
On Thu, Apr 19, 2018 at 10:33 AM, James E. Blair <corvus@inaugust.com> wrote:
This thread has raised several questions:
* Is etcd capable of supporting all the processes that we currently use and plan to use?
* Is it recommended for general use?
* Is it easily installable?
It's also worth asking ourselves if we want to do more with Zookeeper, I suppose. We're using it mostly for lock consensus right now, but both Zookeeper and etcd can do more than that. For example, the executor configuration could be stored in etcd instead of flat files, etc.

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]
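To make that idea concrete, storing the executor configuration in ZooKeeper could look roughly like this sketch. The paths, contents, and watch callback are hypothetical; nothing like this exists in Zuul today:

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    # Hypothetical: publish the executor config as a znode instead of a flat file.
    zk.create('/example/config/executor', b'job_dir: /var/lib/zuul\n',
              makepath=True)

    # Consumers could watch the node and reload when it changes.
    @zk.DataWatch('/example/config/executor')
    def on_config_change(data, stat):
        if stat is not None:
            print('config updated, version', stat.version)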
On Thu, Apr 19, 2018, at 7:33 AM, James E. Blair wrote:
Jeremy Stanley <fungi@yuggoth.org> writes:
On 2018-04-19 05:40:54 +0000 (+0000), Tobias.Henkel@bmw.de wrote: [...]
Actually I asked the same question a year ago and I think the answer was that at least at the time this was decided zookeeper was better with locks. But Jim probably knows that much better than me. [...]
A major reason was that usable versions of etcd were not actually packaged by stable linux distributions when the design choice was made. Better to design around something you can use right away rather than blocking on someone else's roadmap and release schedule.
I have heard that perhaps etcd is now just as suitable for all of the tasks as zookeeper -- but that's from an informal conversation, not a technical evaluation.
This thread has raised several questions:
* Is etcd capable of supporting all the processes that we currently use and plan to use?
* Is it recommended for general use?
* Is it easily installable?
I also agree that because behavior and performance are (and will be even more so) closely tied to this system, we should only use one, and not try to support both through an abstraction layer.
If folks are interested in switching, I think in addition to the above questions, it would be good to investigate the scalability and performance of the two systems. Is locking performance equivalent? Is one or the other going to perform better once we store the entire configuration (and per-change deltas), and the pipeline states in it?
An additional item to keep in mind is that OpenStack found interfacing with etcd3 from existing Python client libraries was not as straightforward as initially assumed. The problem is that etcd3's API is gRPC, and the Python gRPC implementation relies on proper Python threads and does not work with eventlet. As a result, OpenStack is using the JSON gRPC gateway proxy (I think), which proxies REST calls to gRPC. This may (or may not) have an effect on performance.

Zuul doesn't use eventlet but there seems to be a desire to use more asyncio which I expect will have similar issues based on [0]. There is work in progress but it is still an open issue.

All this to say: ease of development/use from python3 should likely be a criterion as well.

[0] https://github.com/grpc/grpc/issues/6046

Clark
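For the record, going through the gateway just means plain HTTP from Python, so no gRPC threads are involved. A rough sketch with requests follows; the key and value are arbitrary, and the URL prefix has moved between etcd releases (/v3alpha, /v3beta, /v3), so check the server version:

    import base64
    import requests

    # The gRPC-JSON gateway expects keys and values to be base64-encoded.
    url = 'http://127.0.0.1:2379/v3/kv/put'
    payload = {
        'key': base64.b64encode(b'/example/config/executor').decode(),
        'value': base64.b64encode(b'job_dir: /var/lib/zuul\n').decode(),
    }
    resp = requests.post(url, json=payload)
    resp.raise_for_status()
    print(resp.json())  # response header includes the cluster id and revision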
Clark Boylan <cboylan@sapwetik.org> writes:
Zuul doesn't use eventlet but there seems to be a desire to use more asyncio which I expect will have similar issues based on [0]. There is work in progress but it is still an open issue.
All this to say: ease of development/use from python3 should likely be a criterion as well.
I have no desire to use more asyncio. Most of Zuul is using threads to good effect and without confusion (most of our threads are truly independent processes which don't need to worry about interacting with others). We're using asyncio only because the only websocket implementation for python3 requires it. If there were another option, I would be happy to switch in a heartbeat. Until then, we've been careful to restrict our use of asyncio strictly to the zuul-web process, and I think we should continue that.

That said -- we know that we want zuul-web to interact with zk/etcd3, so as long as zuul-web is using asyncio, the issue you raise is one to consider.

-Jim
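For what it's worth, one way zuul-web could talk to zk without pulling the client library into asyncio is to keep the blocking calls on a thread-pool executor. A minimal sketch, where the coroutine name and path are made up:

    import asyncio
    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    async def get_status_node(path):
        # Run the blocking kazoo call in the default thread pool so the
        # asyncio event loop in zuul-web never blocks on ZooKeeper I/O.
        loop = asyncio.get_event_loop()
        data, stat = await loop.run_in_executor(None, zk.get, path)
        return data

    # e.g. inside a websocket handler:
    #     data = await get_status_node('/example/status')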
On 19 April 2018 at 04:29, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
Hello folks,
From a Kubernetes operator point of view, the Zookeeper component sounds redundant with the already running etcd service. Moreover etcd seems more active and it shows better overall performance.
I thought k8s recommends not using the etcd cluster for anything other than k8s? Or has that changed recently?

Or did you mean that anyone operating a k8s cluster would be more comfortable in using etcd already so running a second one should be just as easy?

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"
On April 19, 2018 7:54 am, Darragh Bailey wrote:
On 19 April 2018 at 04:29, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
Hello folks,
From a Kubernetes operator point of view, the Zookeeper component sounds redundant with the already running etcd service. Moreover etcd seems more active and it shows better overall performance.
I thought k8s recommends not using the etcd cluster for anything other than k8s?
Or has that changed recently?
I can't tell if it's recommended or not, but it sounds like a good idea to consolidate both services and only manage one.
Or did you mean that anyone operating a k8s cluster would be more comfortable in using etcd already so running a second one should be just as easy?
That can also be a selling point. -Tristan
On 19/04/18 12:31, Tristan Cacqueray wrote:
On April 19, 2018 7:54 am, Darragh Bailey wrote:
On 19 April 2018 at 04:29, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
Hello folks,
From a Kubernetes operator point of view, the Zookeeper component sounds redundant with the already running etcd service. Moreover etcd seems more active and it shows better overall performance.
I thought k8s recommends not using the etcd cluster for anything other than k8s?
Or has that changed recently?
I can't tell if it's recommended or not, but it sounds like a good idea to consolidate both services and only manage one.
I would not recommend that the k8s base etcd cluster gets used for anything other than k8s - in previous testing if any other system had a spike of writes, it could bring the k8s cluster to its knees.

Also, Kubernetes Secrets are stored in plain text in the etcd store, and Kubernetes does not support etcd auth, which means you cannot limit users to sub-paths of the etcd tree.
Or did you mean that anyone operating a k8s cluster would be more comfortable in using etcd already so running a second one should be just as easy?
That can also be a selling point.
-Tristan
On 19 April 2018 at 15:36, Graham Hayes <gr@ham.ie> wrote:
On 19/04/18 12:31, Tristan Cacqueray wrote:
On April 19, 2018 7:54 am, Darragh Bailey wrote:
On 19 April 2018 at 04:29, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
Hello folks,
From a Kubernetes operator point of view, the Zookeeper component sounds redundant with the already running etcd service. Moreover etcd seems more active and it shows better overall performance.
I thought k8s recommends not using the etcd cluster for anything other than k8s?
Or has that changed recently?
I can't tell if it's recommended or not, but it sounds like a good idea to consolidate both services and only manage one.
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ under "Prerequisites" mentions: "Therefore, run etcd clusters on dedicated machines or isolated environments for guaranteed resource requirements <https://github.com/coreos/etcd/blob/master/Documentation/op-guide/hardware.md#hardware-recommendations>."

Referenced at https://github.com/kubernetes/kops/issues/2216#issuecomment-335984163

It's also hinted at in https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ under "Securing etcd clusters", which says to allow only the "API server" to have access.
I would not recommend that the k8s base etcd cluster gets used for anything other than k8s - in previous testing if any other system had a spike of writes, it could bring the k8s cluster to its knees.
Also, Kubernetes Secrets are stored in plain text in the etcd store, and Kubernetes does not support etcd auth, which means you cannot limit users to sub-paths of the etcd tree.
I had wondered if this was just due to etcd originally not having any concept of RBAC, but as of 2.1 it's present, so maybe the above is the main reason the recommendation remains in place.

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"
On 04/20/2018 11:52 AM, Darragh Bailey wrote:

Wow. I missed a fun thread while being offline this week. :)
On 19 April 2018 at 15:36, Graham Hayes <gr@ham.ie> wrote:
On 19/04/18 12:31, Tristan Cacqueray wrote:
> On April 19, 2018 7:54 am, Darragh Bailey wrote:
>> On 19 April 2018 at 04:29, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
>>
>>> Hello folks,
>>>
>>> From a Kubernetes operator point of view, the Zookeeper component
>>> sounds redundant with the already running etcd service. Moreover
>>> etcd seems more active and it shows better overall performance.
One of the original reasons was that when we started the Zuul v3 effort, etcd3 had not yet been written. The word, as we understood it from the etcd folks, was that etcd at that time should not be used for fair locking. A bit later, etcd3 was released and the "doesn't support fair locking" restriction was no longer considered valid. However, at that point we were already down the ZK road.

As Jeremy mentioned, there was then the issue that there were no distro packages of etcd3, so even if the two were equivalent at that point, adopting etcd3 instead of zk was harder, so we did not really consider it.

At this point, etcd3 is in the distros, so it IS equivalent in terms of deployment complexity - but we're MUCH further down the zk path. As was indicated in other messages, I agree that if we did decide to go etcd, it would be a migration. Since we've already got production systems running on zk, that shouldn't be done lightly.

That said, as Jim mentioned, we do have additional things we'd like to do with the scheduler. It's possible that doing those with etcd instead of zk would be enough better that switching would be more advantageous than the deployer pain it would entail. (I'm not saying it is - just that we have some new use-cases for the zk store, and as part of thinking about those, it wouldn't be absurd to take the time to evaluate both the cost of adding those use-cases to zk and the cost of implementing them in etcd. It's net-new development, so there are unanswered/unexplored areas.)
>> >> Thought k8s recommend not to use the etcd cluster for anything else other >> than k8s? >> >> Or has changed recently? >> > I can't tell if it's recommended or not, but it sounds like a good idea > to consolidate both services and only manage one.
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ under "Prerequisites" mentions: "Therefore, run etcd clusters on dedicated machines or isolated environments for guaranteed resource requirements <https://github.com/coreos/etcd/blob/master/Documentation/op-guide/hardware.md#hardware-recommendations>." Referenced at https://github.com/kubernetes/kops/issues/2216#issuecomment-335984163
It's also hinted at in https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ under "Securing etcd clusters", which says to allow only the "API server" to have access.
I would not recommend that the k8s base etcd cluster gets used for anything other than k8s - in previous testing if any other system had a spike of writes, it could bring the k8s cluster to its knees.
Also, Kubernetes Secrets are stored in plain text in the etcd store, and Kubernetes does not support etcd auth, which means you cannot limit users to sub-paths of the etcd tree.
I had wondered if this was just due to etcd originally not having any concept of RBAC, but as of 2.1 it's present, so maybe the above is the main reason the recommendation remains in place.
The above info seems like really good information to capture, so that at the very least it can be part of the answer on this topic (which I'm sure will come up again).

If we start publishing helm charts / ansible playbooks for running a zuul in k8s, we should be able to re-use the zookeeper helm charts for the associated zookeeper. From that POV, I think I'd try to communicate it as just "a component of zuul" like any of the other components.

I do think it's a question that's gonna keep coming up and one we should make sure we have a good answer for. I'd sum it up by saying that when we started, etcd3 simply did not exist, the etcd folks communicated that etcd2 was inappropriate for our particular use case, and although etcd3 does exist now, the cost of switching is so far not outweighed by the benefits of doing so. I do not believe there are any philosophical issues with etcd or current reasons we'd choose zk over etcd if we were to start from scratch today - purely cost-benefit issues.

Monty
participants (9)

- Clark Boylan
- corvus@inaugust.com
- Darragh Bailey
- David Moreau Simard
- Graham Hayes
- Jeremy Stanley
- Monty Taylor
- Tobias.Henkel@bmw.de
- Tristan Cacqueray