Question regarding using etcd instead of Zookeeper

Monty Taylor mordred at inaugust.com
Sat Apr 21 11:11:04 UTC 2018


On 04/20/2018 11:52 AM, Darragh Bailey wrote:

Wow. I missed a fun thread while being offline this week. :)


> On 19 April 2018 at 15:36, Graham Hayes <gr at ham.ie> wrote:
> 
> 
> 
>     On 19/04/18 12:31, Tristan Cacqueray wrote:
>     > On April 19, 2018 7:54 am, Darragh Bailey wrote:
>     >> On 19 April 2018 at 04:29, Tristan Cacqueray <tdecacqu at redhat.com> wrote:
>     >>
>     >>> Hello folks,
>     >>>
>     >>> From a Kubernetes operator point of view, the Zookeeper component
>     >>> sounds redundant with the already running etcd service. Moreover
>     >>> etcd seems more active and it shows better overall performance.

One of the original reasons was that when we started the Zuul v3 effort, 
etcd3 had not yet been written. The word, as we understood it from the 
etcd folks, was that etcd at that time should not be used for fair-locking.
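For the archives: ZooKeeper's lock recipe is "fair" in that waiters 
enqueue as ephemeral sequential znodes and acquire the lock in creation 
order. A minimal sketch of that with the kazoo client (host and lock 
path here are placeholders, not anything Zuul actually uses):

    # kazoo's Lock recipe queues waiters as ephemeral sequential
    # znodes, so acquisition is FIFO ("fair").
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    lock = zk.Lock("/locks/example", identifier="worker-1")
    with lock:  # blocks until this client reaches the front of the queue
        pass    # critical section

    zk.stop()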

A bit later, etcd3 was released and the "doesn't support fair locking" 
restriction no longer applied. However, at that point we were already 
down the ZK road. As Jeremy mentioned, there was also the issue that 
there were no distro packages of etcd3, so even if the two were 
equivalent at that point, adopting etcd3 instead of zk was harder, and 
we did not really consider it.
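For comparison, once etcd3 and its lease-based locking existed, the 
client-side shape was similar. A rough sketch with the third-party 
python-etcd3 client (endpoint and lock name are placeholders; this 
illustrates the API, not a claim about its fairness semantics):

    # python-etcd3's lock is lease-based; the ttl bounds how long a
    # crashed holder can keep the lock.
    import etcd3

    client = etcd3.client(host="127.0.0.1", port=2379)
    with client.lock("example-lock", ttl=30):  # blocks until acquired
        pass  # critical section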

At this point, etcd3 is in the distros, so it IS equivalent in terms of 
deployment complexity - but we're MUCH further down the zk path.

As was indicated in other messages, I agree that if we did decide to go 
etcd, it would be a migration. Because we've already got production 
systems running on zk, that's not a step to take lightly. That said, as 
Jim mentioned, we do have additional things we'd like to do with the 
scheduler. It's possible that doing those with etcd instead of zk would 
be enough of an improvement that switching would be worth the deployer 
pain it would entail. (I'm not saying it is - just that we have some new 
use-cases for the zk store, and as part of thinking about those, it 
wouldn't be absurd to take the time to evaluate the cost of adding those 
use-cases to zk vs. the cost of implementing them in etcd. It's net-new 
development, so there are unanswered/unexplored areas.)

>     >>
>     >> I thought k8s recommends not using the etcd cluster for anything
>     >> other than k8s?
>     >>
>     >> Or has that changed recently?
>     >>
>     > I can't tell if it's recommended or not, but it sounds like a good idea
>     > to consolidate both services and only manage one.
> 
> 
> https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ 
> under "Prerequisites" it mentions "Therefore, run etcd clusters on 
> dedicated machines or isolated environments for guaranteed resource 
> requirements 
> <https://github.com/coreos/etcd/blob/master/Documentation/op-guide/hardware.md#hardware-recommendations>."
> Referenced at 
> https://github.com/kubernetes/kops/issues/2216#issuecomment-335984163
> 
> It's also hinted at 
> https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ 
> where, under "Securing etcd clusters", it says to allow only the "API 
> server" to have access.
> 
>     I would not recommend that the k8s base etcd cluster gets used for
>     anything other than k8s - in previous testing if any other system
>     had a spike of writes, it could bring the k8s cluster to its knees.
> 
>     Also, Kubernetes Secrets are stored in plain text in the etcd store,
>     and Kubernetes does not support etcd auth, which means you cannot
>     limit users to sub-paths of the etcd tree.
> 
> 
> 
> I had wondered if this was just due to etcd originally not having any 
> concept of RBAC, but it's present as of 2.1, so maybe the above is the 
> main reason the recommendation remains in place.

The above seems like really good information to capture so that, at the 
very least, it can be part of the answer on this topic (which I'm sure 
will come up again).
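To make the "dedicated, access-restricted etcd" guidance concrete, a 
client of a dedicated (non-k8s) etcd would typically connect over TLS 
with client certificates, with any per-prefix restriction enforced 
server-side by etcd's RBAC. A rough sketch with python-etcd3 - all 
hostnames, key names and file locations are hypothetical:

    # Connect to a *dedicated* etcd (not the one backing k8s) over TLS
    # with client-certificate auth; every value below is a placeholder.
    import etcd3

    client = etcd3.client(
        host="etcd.zuul.example.com",
        port=2379,
        ca_cert="/etc/zuul/etcd/ca.pem",
        cert_cert="/etc/zuul/etcd/client.pem",
        cert_key="/etc/zuul/etcd/client-key.pem",
    )
    # With etcd auth enabled, this client's user would be granted a role
    # limited to e.g. the /zuul/ key prefix on the server side.
    client.put("/zuul/example", b"value")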

If we start publishing helm charts / ansible playbooks for running a 
zuul in k8s, we should be able to re-use the zookeeper helm charts for 
the associated zookeeper. From that POV, I think I'd try to communicate 
it as just "a component of zuul" like any of the other components.

I do think it's a question that's gonna keep coming up and one we should 
make sure we have a good answer for. I'd sum it up by saying that when 
we started, etcd3 simply did not exist; etcd2's folks communicated that 
it was inappropriate for our particular use case; and although etcd3 
does exist now, the benefits of switching have so far not outweighed the 
cost of doing so. I do not believe there are any philosophical issues 
with etcd, or current reasons we'd choose zk over etcd if we were 
starting from scratch today - purely cost-benefit ones.

Monty



