Zookeeper ConnectionLoss/SessionExpired issues

Tristan Cacqueray tdecacqu at redhat.com
Sat Mar 10 05:56:57 UTC 2018


On March 7, 2018 3:28 pm, David Shrewsbury wrote:
> On Wed, Mar 7, 2018 at 12:38 AM, Tristan Cacqueray <tdecacqu at redhat.com>
> wrote:
> 
>> Thanks for the thorough review David.
>> Indeed kazoo Locks are ephemeral and they vanish when the connection is
>> lost.
>> However the zk-retry patch doesn't seem to affect the current behavior;
>> please see my comments inline below and on the nodepool review.
> 
> 
> Actually, the behavior is very different. See the sample code and output
> below.
> 
> 
> 
>>
>>
>> On March 6, 2018 6:38 pm, David Shrewsbury wrote:
>>
>>> I've played with some test code around this and I feel convinced we could
>>> have
>>> issues with locks and ephemeral nodes if we are hiding ZK connection
>>> problems
>>> (particularly session timeouts, which cause us to lose both things) behind
>>> retries.
>>>
>> Would you mind sharing the test code?
> 
> 
> Certainly.
> 
> This is the script I used that simulates current nodepool behavior
> (zk_retry_test1.py):
> 
>    http://paste.openstack.org/show/693650/
> 
> This is the script that simulates your proposed changes (zk_retry_test2.py):
> 
>    http://paste.openstack.org/show/693651/
> 
> Running zk_retry_test1.py locally (and shutting down ZK during the sleep()
> call and leaving it shut down long enough for the session to expire)
> produces the following output (note the exception):
> 
>    http://paste.openstack.org/show/693653/
> 
> Doing the same with zk_retry_test2.py produces very different output (note
> the lack
> of the exception):
> 
>    http://paste.openstack.org/show/693654/
> 
> Your proposed changes will hide that SessionExpiredError exception. So if
> we have code depending on that exception bubbling up to know we've lost a
> lock or an ephemeral node, it will now be broken with the use of retry().
> And there is nothing provided in your review that lets us know the session
> was lost.
> 
Well yes, the goal is to ignore session or connection loss.
In particular, if the plan is to replace gearman with zookeeper, then we'd
better be more resilient to zookeeper exceptions.

So, the issue is the kazoo Lock implementation, which uses ephemeral nodes
that are automatically removed when the session is lost. I've reworked
the review to add Lock verification before critical state changes:
instead of inferring the Lock status from the connection state, there is
now a lockCheck method to ensure that the lock is still present.
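
For illustration, a minimal sketch of such a check (the lock_check name
and body below are mine, not the exact code from the review; it relies on
kazoo's Lock keeping the name of the ephemeral znode it created in
lock.node, which is an implementation detail):

    def lock_check(client, lock):
        # client is a connected kazoo.client.KazooClient and lock a
        # kazoo Lock that was previously acquired.  kazoo records the
        # ephemeral sequential znode it created in lock.node; if that
        # znode is gone, the session expired and the lock is lost.
        if lock.node is None:
            return False
        return client.exists(lock.path + "/" + lock.node) is not None

Called right before each critical state change, this detects a lost
session at the last possible moment instead of inferring it from zk.lost.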

> Looking at some of the builder and launcher code this morning with the lost
> session error
> freshly in mind, I see lots of code that doesn't even take this into
> account properly, so I think
> we may be potentially broken in many places with retries or not.  :(
> 
> 
> 
>>
>>> Right now we can depend on those connection errors to bubble up to nodepool
>>> and zuul as exceptions, and we can assume our locks or ephemeral nodes are
>>> gone at that point and handle accordingly. If we now hide those, we need
>>> to
>>> begin
>>> using a listener [1] to know when our session has timed out and set some
>>> sort of
>>> state that we will have to check at multiple key points. FWIW, I started
>>> nodepool
>>> v3 off with such a listener and the logic became pretty difficult fairly
>>> quickly.
>>>
>> The proposal aims at removing those "key points" checks because they add
>> complexity and they look prone to race conditions, for example when
>> the connection gets lost right after the "if self._zk.lost" check.
>>
> 
> Well, we HAVE to know when we lost a session. I like that your review
> removes
> the "wait for the connection to return" pieces. But we still need to know
> when the
> session is terminated. We can lose a connection, but keep our session. Or
> we can
> lose a connection and lose our session (if the connection loss was long
> enough).
> 
> 
It seems like we can get around the connection listener logic by
checking the ephemeral lock node without relying on the zk.lost property.

> 
>> Looking at all Zookeeper Lock usage, only the imageBuildLock seems to
>> benefit from the checks: if a builder lost the connection
>> during a build, it will mark the image as FAILED and another builder will
>> try again. What happens if the other builder succeeds before the first one?
>> With the zk-retry patch, both builders will mark the image as READY,
>> and as far as I can tell, this doesn't seem to be an issue.
>>
> 
> That would be an issue, actually. We'd have two builds of the same image.
> We only keep the 2 most recent builds, so we'd lose our previous image.
> If the 2 we just built are bad images, we'd have nothing to roll back to.
> 
Ok, that makes sense. However I think this behavior is still incorrect if
another builder finishes the build before the first builder. It seems like
we need to check that the build isn't already READY before setting it to
FAILED.
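
A rough sketch of that guard (the accessor names are illustrative, not
the actual nodepool API):

    def mark_build_failed(zk, image, build_number):
        # Re-read the build record first: if another builder already
        # finished this build, don't clobber READY with FAILED.
        build = zk.getBuild(image, build_number)
        if build.state == "ready":
            return
        build.state = "failed"
        zk.storeBuild(image, build, build_number)

There is still a small window between the read and the write, but it
avoids overwriting a build that already became READY.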
> 
> 
>>
>> The other Locks aren't protected; for example, multiple builders may start
>> an image upload when the connection gets lost. The zk-retry patch doesn't
>> seem to change the behavior.
>>
> 
> If nothing else, I think this discussion has revealed deficiencies in our
> code base
> around handling session losses (I cite an example below). Our locks are
> supposed
> to be protected against this but I don't think we've done this well in all
> places. Whether
> we use retries or not, I think we need to correct this.
> 
> 
>> Zuul node locks are still an issue, e.g. builds may stay queued
>> when the zk-retry patch is used.
>> Perhaps a follow-up patch could improve that situation, for example
>> by re-creating the request when the connection state switches back to
>> CONNECTED.
>>
>>
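As a rough sketch of that follow-up idea (the resubmit_requests callback
is hypothetical), a kazoo connection listener could flag the reconnection
so the main loop can re-create the lost requests:

    import threading

    from kazoo.client import KazooState

    class ReconnectWatcher:
        def __init__(self, client):
            # client is a connected kazoo.client.KazooClient
            self.reconnected = threading.Event()
            client.add_listener(self._listener)

        def _listener(self, state):
            # Runs in kazoo's connection thread: only set a flag
            # here, never call back into ZooKeeper from a listener.
            if state == KazooState.CONNECTED:
                self.reconnected.set()

        def process(self, resubmit_requests):
            # Called from the main loop: re-create the node requests
            # whose ephemeral znodes vanished with the session.
            if self.reconnected.is_set():
                self.reconnected.clear()
                resubmit_requests()
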
>>> If you are running with this code in your environment now, I'd say
>>> you've probably just been fairly lucky to not have a ZK session timeout
>>> in just the right place. I found it fairly difficult to get to happen in
>>> my tests today, and it was never consistent when I did get it to happen
>>> (either something weird with ZK itself, or the kazoo library).
>>>
>> Well we used to get those kazoo connection stack traces almost daily,
>> and I couldn't find the root cause; for the record, here is the setup:
>>
>> * Instances network setup by Neutron with ovs,
>> * CentOS 7.4 kernels, with tcp_ecn sysctl set to 0, and
>> * Zookeeper configuration: http://paste.openstack.org/show/693515/
>>
>> Though the bottom line is to make Zuul and Nodepool more resilient to
>> Zookeeper issues.
>>
>> What would be the wrong place for a connection lost to happen?
> 
> 
> Connection loss and session loss are different. A connection loss may or
> may not result in a session loss (it depends on how long we lost the
> connection).
> 
> An example of a bad place to lose a session would be during node launch.
> Right now, if NodeLauncher._launchNode() experiences a session loss, the
> SessionExpiredError exception might bubble up and we'd retry the launch.
> Now it turns out, we aren't handling that particular exception and we are
> still assuming we have the lock. This is a huge bug that I've just
> discovered thanks to this discussion: in that situation, we could have an
> unlocked Node in BUILDING state, and our cleanup thread could delete it
> before the building thread does something else with it.
> 
Similarly, I added a new protection before setting a node to READY: the
launcher checks the lock, and if it has been lost, the node is left in the
BUILDING state to ensure proper cleanup.
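
In sketch form (the names are illustrative; the lock test is the same
ephemeral-znode check as above):

    def set_node_ready(client, node, lock):
        # Re-verify the lock right before the critical state change:
        # if the session expired, its ephemeral znode is gone.
        held = (lock.node is not None and
                client.exists(lock.path + "/" + lock.node) is not None)
        if not held:
            # Lock lost: leave the node BUILDING so the cleanup
            # thread reclaims it, instead of marking an unlocked
            # node READY.
            return False
        node.state = "ready"
        client.set(node.path, node.serialize())  # illustrative store
        return True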


> So in that example, we must add an except clause to catch that particular
> exception and handle it differently. With your proposed changes, we'd need a
> way to know our session was lost, and make the same choices for handling it
> differently. Your review isn't providing that, but I _can_ see a way
> forward for the current code.
> 
I think the current code isn't ideal because it won't catch connection
issues happening after the check for zk.lost. There will always
be a window where the connection can get lost between the check and
the critical code.

Alternatively, we could implement a persistent Lock using a timestamp and
a thread to refresh it. Perhaps this implementation could be added to the
kazoo codebase directly.
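
A sketch of that idea (a hypothetical design, not an existing kazoo API):
a plain, non-ephemeral znode holds a timestamp that a background thread
keeps refreshing; a lock whose timestamp goes stale is considered released
and may be taken over.

    import json
    import threading
    import time

    from kazoo.exceptions import BadVersionError, NodeExistsError

    class LeaseLock:
        TTL = 30  # seconds without a refresh before the lock is stale

        def __init__(self, client, path, identifier):
            self.client = client  # a connected KazooClient
            self.path = path
            self.identifier = identifier
            self._stop = threading.Event()

        def _payload(self):
            return json.dumps({"holder": self.identifier,
                               "ts": time.time()}).encode()

        def acquire(self):
            try:
                self.client.create(self.path, self._payload(),
                                   makepath=True)
            except NodeExistsError:
                data, stat = self.client.get(self.path)
                if time.time() - json.loads(data)["ts"] < self.TTL:
                    return False  # held by someone and still fresh
                try:
                    # Stale lock: take it over.  The version check
                    # ensures only one contender wins the takeover.
                    self.client.set(self.path, self._payload(),
                                    version=stat.version)
                except BadVersionError:
                    return False
            threading.Thread(target=self._refresh, daemon=True).start()
            return True

        def _refresh(self):
            # Refresh well within the TTL; retry() rides out
            # connection losses, since the znode does not depend on
            # the session staying alive.
            while not self._stop.wait(self.TTL / 3):
                self.client.retry(self.client.set, self.path,
                                  self._payload())

        def release(self):
            self._stop.set()
            self.client.delete(self.path)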

> 
> 
>>> Also, I hope I'm not coming off as difficult around this. I just know how
>>> hard it was
>>> to get what we have now functioning correctly, so I'm very protective of
>>> it. Hopefully
>>> we can get this sorted out for you, and we can learn a bunch of stuff
>>> along
>>> the way.
>>>
>>>
>> Not at all, Zookeeper error handling is tricky :)
>> Thank you for sharing your valuable feedback.
> 
> 
> This has been an excellent discussion!
> 
Thank you again for discussing this with me!
-Tristan