Zookeeper ConnectionLoss/SessionExpired issues

Tristan Cacqueray tdecacqu at redhat.com
Wed Mar 7 05:38:00 UTC 2018


Thanks for the thorough review David.
Indeed kazoo Locks are backed by ephemeral nodes and they vanish when the connection is lost.
However the zk-retry patch doesn't seem to affect the current behavior;
please see my comments inline below and on the nodepool review.
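
To illustrate the first point, here is a minimal kazoo sketch (a local test
server and made-up paths, not nodepool code) showing that a Lock contender is
an ephemeral znode which only exists while the session does:

  # A kazoo Lock is a sequenced ephemeral znode under the lock path, so a
  # session expiry deletes it server-side even if the local Lock object
  # still believes it is held.
  from kazoo.client import KazooClient
  from kazoo.protocol.states import KazooState

  zk = KazooClient(hosts='127.0.0.1:2181')  # assumed local test server

  def listener(state):
      # Called from kazoo's connection thread on every state transition.
      if state == KazooState.SUSPENDED:
          print("connection lost: lock ownership is now uncertain")
      elif state == KazooState.LOST:
          print("session expired: our ephemeral lock node is gone")

  zk.add_listener(listener)
  zk.start()

  lock = zk.Lock('/test/build-lock', identifier='builder-01')  # made-up path
  if lock.acquire(timeout=30):
      # The contender znode only exists while the session does.
      print(zk.get_children('/test/build-lock'))
      lock.release()
  zk.stop()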

On March 6, 2018 6:38 pm, David Shrewsbury wrote:
> I've played with some test code around this and I feel convinced we could
> have
> issues with locks and ephemeral nodes if we are hiding ZK connection
> problems
> (particularly session timeouts, which cause us to lose both things) behind
> retries.
> 
Would you mind sharing the test code?

> Right now we can depend on those connection errors to bubble up to nodepool
> and zuul as exceptions, and we can assume our locks or ephemeral nodes are
> gone at that point and handle accordingly. If we now hide those, we need to
> begin
> using a listener [1] to know when our session has timed out and set some
> sort of
> state that we will have to check at multiple key points. FWIW, I started
> nodepool
> v3 off with such a listener and the logic became pretty difficult fairly
> quickly.
>
The proposal aims at removing those "key point" checks because they add
complexity and look prone to race conditions, for example when
the connection gets lost right after the "if self._zk.lost" check.
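
Here is a simplified sketch of that check-then-act pattern and its race
window; ZKWrapper and store_build_result are illustrative stand-ins, not the
actual nodepool code:

  # Illustrative only: ZKWrapper stands in for the connection wrapper and
  # store_build_result for a "key point" caller.
  from kazoo.client import KazooClient
  from kazoo.protocol.states import KazooState

  class ZKWrapper:
      def __init__(self, hosts='127.0.0.1:2181'):
          self.lost = False
          self.client = KazooClient(hosts=hosts)
          self.client.add_listener(self._listener)
          self.client.start()

      def _listener(self, state):
          # Runs in kazoo's connection thread, concurrently with callers.
          self.lost = (state == KazooState.LOST)

  def store_build_result(zk, path, data):
      if zk.lost:  # the "key point" check
          raise RuntimeError("session lost, assume our lock is gone")
      # Race window: the session can expire right here, after the check but
      # before the write, so the check alone cannot protect the update.
      zk.client.set(path, data)  # data must be bytes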

Looking at all the Zookeeper Lock usage, only the imageBuildLock seems to
benefit from the checks: if a builder loses the connection
during a build, it marks the image as FAILED and another builder will try
again. What happens if the other builder succeeds before the first one?
With the zk-retry patch, both builders will mark the image as READY,
and as far as I can tell this doesn't seem to be an issue.

The other locks aren't protected; for example, multiple builders may start
an image upload when the connection gets lost. The zk-retry patch doesn't
seem to change that behavior.

Zuul node locks are still an issue, e.g. builds may stay queued
when the zk-retry patch is used.
Perhaps a follow-up patch could improve that situation, for example
by re-creating the request when the connection state switches back to
CONNECTED, as sketched below.
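
Something like this kazoo listener could detect the reconnection;
pending_requests() and the worker draining the queue are hypothetical, not
existing Zuul code:

  # kazoo discourages ZooKeeper calls inside the listener itself, so the
  # listener only queues the work and a worker thread re-creates the requests.
  import queue
  from kazoo.protocol.states import KazooState

  resubmit_queue = queue.Queue()

  def make_reconnect_listener(pending_requests):
      lost = {'value': False}

      def listener(state):
          if state == KazooState.LOST:
              # Session expired: the ephemeral request znodes are gone.
              lost['value'] = True
          elif state == KazooState.CONNECTED and lost['value']:
              lost['value'] = False
              for request in pending_requests():
                  resubmit_queue.put(request)  # a worker re-creates them

      return listener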


> If you are running with this code in your environment now, I'd say you've
> probably
> just been fairly lucky to not have a ZK session timeout in just the right
> place. I found
> it fairly difficult to get to happen in my tests today, and it was never
> consistent when
> I did get it to happen (either something weird with ZK itself, or the kazoo
> library).
> 
Well, we used to get those kazoo connection stack traces almost daily,
and I couldn't find the root cause; for the record, here is the setup:

* Instance networking set up by Neutron with OVS,
* CentOS 7.4 kernels, with tcp_ecn sysctl set to 0, and
* Zookeeper configuration: http://paste.openstack.org/show/693515/

Though the bottom line is to make Zuul and Nodepool more resilient to
Zookeeper issues.

What would be the wrong place for a connection loss to happen?


> Also, I hope I'm not coming off as difficult around this. I just know how
> hard it was
> to get what we have now functioning correctly, so I'm very protective of
> it. Hopefully
> we can get this sorted out for you, and we can learn a bunch of stuff along
> the way.
> 

Not at all, Zookeeper error handling is tricky :)
Thank you for sharing your valuable feedback.

-Tristan

> [1]
> http://kazoo.readthedocs.io/en/latest/basic_usage.html#listening-for-connection-events
> 
> On Tue, Mar 6, 2018 at 10:47 AM, David Shrewsbury <shrewsbury.dave at gmail.com
>> wrote:
> 
>>
>>
>> On Tue, Mar 6, 2018 at 8:35 AM, David Shrewsbury <
>> shrewsbury.dave at gmail.com> wrote:
>>
>>>
>>>
>>> On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray <tdecacqu at redhat.com>
>>> wrote:
>>>
>>> [snip]
>>>
>>>
>>>> The current review implements the above logic: sequence creates aren't
>>>> using the bulk retry; instead it issues a regular create call, and if
>>>> there is an exception, it looks for the uuid in the objects list
>>>> before issuing another create call:
>>>> Line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py
>>>
>>>
>>> Ah, apologies for not noticing this in my quick lookover earlier. I'll
>>> take a
>>> closer look today. If that's what the code is doing, this could alleviate
>>> much
>>> of my concern.
>>>
>>>
>> Ok, I think what you're doing with sequences looks pretty good and should
>> work for
>> us. While more thoroughly looking at the review, I did see a potential
>> issue around
>> locking, particularly around holding a lock during the long-running image
>> build process.
>>
>> Now that we keep retrying, even through a total disconnect, there may be
>> places in the
>> code where we assume we still have the lock (like the image build code) if
>> we haven't
>> seen any ZK errors. Perhaps we should think about this aspect for a bit.
>>
>> -Dave
>>
>> --
>> David Shrewsbury (Shrews)
>>
> 
> 
> 
> -- 
> David Shrewsbury (Shrews)
> 