Zookeeper ConnectionLoss/SessionExpired issues

David Shrewsbury shrewsbury.dave at gmail.com
Mon Mar 5 14:54:15 UTC 2018


Hi!  Thanks for starting this discussion.

On Sun, Mar 4, 2018 at 10:05 PM, Tristan Cacqueray <tdecacqu at redhat.com>
wrote:

> Hello folks,
>
> Here is a recap of the issues happening when the connection to
> Zookeeper is unstable:
>
> * Nodepool marks dib builds as failed,
> * ConnectionLoss/SessionExpired exceptions happen when the connection
>  drops after the "self._zk.suspended or self._zk.lost" checks,
> * Zuul loses NodeRequests, which may leave changes stuck in a queued
>  state, and
> * Zookeeper needs to be running before starting Zuul or Nodepool,
>  otherwise the daemons stay stuck in limbo.
>
> This used to happen frequently in our deployment (running in an
> OpenStack tenant with zookeeper on a dedicated instance). It seems like
> the current connection listener doesn't work properly, and the errors
> seem to happen mostly when the connection is actually used.
>
>
> There are a couple of patches[0] that leverage Kazoo's retry facilities
> to mitigate the above issues by automatically retrying operations until
> the service is available. The nodepool patch adds a uuid to Sequence
> objects, as suggested by the Zookeeper ErrorHandling documentation[1].
>
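
For anyone following along, the general shape of the kazoo retry pattern
being discussed is roughly the following. This is illustrative only: the
hosts and paths are made up and this is not the code from the reviews.

    from kazoo.client import KazooClient
    from kazoo.retry import KazooRetry

    # Illustrative only: retry individual ZooKeeper operations until the
    # connection is usable again, rather than failing as soon as a
    # ConnectionLoss/SessionExpired is raised mid-call (the window that
    # the suspended/lost checks can miss).
    retry = KazooRetry(max_tries=-1)      # -1 == keep retrying
    client = KazooClient(hosts='zk01:2181', command_retry=retry)
    client.start()

    client.retry(client.ensure_path, '/example/nodes')
    path = client.retry(client.create, '/example/nodes/node-', b'{}',
                        sequence=True)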
>
It's these sequence objects that concern me most, because we are so
heavily dependent on them in both zuul and nodepool. I looked at your
current tests, and it seems that they are testing that the sequences are
sequential after the retry. From my reading of the ZK ErrorHandling docs,
I don't think this is guaranteed. Those docs mention that it is possible
to get more than one sequence object created for a request that is
retried. In particular, there is this sentence describing what could
happen with a single "create" request:

    "now it could be that x-109 was the result of the previous create, so
the
     process actually owns both x-109 and x-111."

So my main source of concern here (which I don't think we are testing?)
is what happens when we create, say, a new Node object, but actually end
up with two new Nodes (which I think would have identical data). I don't
know how our current code path would handle such a situation. This is
the thing I think we need to figure out how to deal with, and I don't
think it is covered by the current review. And, to be honest, I'm not
quite sure how to mimic this in tests.
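
To make the failure mode concrete, here is roughly what I understand the
uuid-in-the-sequence-node idea from the ErrorHandling doc to look like.
This is a sketch with made-up paths and helper names, not the code under
review:

    import json
    import uuid

    from kazoo.exceptions import NoNodeError

    # Sketch only: embed a uuid in the data of the sequence node so that,
    # after a retried create, we can tell whether an earlier "lost"
    # attempt actually succeeded and left a duplicate behind (the
    # x-109/x-111 case from the ErrorHandling doc).
    def create_sequence_node(client, prefix, data):
        marker = uuid.uuid4().hex
        payload = dict(data, uuid=marker)
        client.retry(client.create, prefix,
                     json.dumps(payload).encode('utf8'),
                     sequence=True, makepath=True)

        # Scan siblings for our marker; more than one hit means a retried
        # create produced duplicates.
        parent = prefix.rsplit('/', 1)[0]
        mine = []
        for child in client.get_children(parent):
            try:
                raw, _ = client.get(parent + '/' + child)
            except NoNodeError:
                continue
            if json.loads(raw.decode('utf8')).get('uuid') == marker:
                mine.append(child)
        mine.sort()
        for extra in mine[1:]:        # keep the lowest sequence number
            client.delete(parent + '/' + extra)
        return parent + '/' + mine[0]

Even with something like that, the open question for me is what the
right cleanup is when the duplicate is a Node or NodeRequest that other
code may already be acting on.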



> The last remaining issue is the Zuul ephemeral NodeRequest code, which
> doesn't re-create requests if they get deleted when the connection is
> reset.
>
>
> I think those patches improve the overall service reliability and we have
> been using them successfully without issues in software-factory packages.
>
> Regards,
> -Tristan
>
> [0]: https://review.openstack.org/#/q/topic:zookeeper-retry
> [1]: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ErrorHandling
>
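
On the ephemeral NodeRequest point: the usual pattern for ephemeral
znodes is to watch for the session being lost and re-create them once a
new session is established, something like the sketch below. The names
here are hypothetical; mark_requests_for_resubmit is just a stand-in for
whatever Zuul would actually need to do.

    from kazoo.client import KazooClient, KazooState

    client = KazooClient(hosts='zk01:2181')

    def mark_requests_for_resubmit():
        # Stand-in for application logic that flags NodeRequests so the
        # main loop re-creates them once the new session is up.
        pass

    def session_listener(state):
        if state == KazooState.LOST:
            # Session expired: every ephemeral znode we created is gone.
            # This callback runs in kazoo's connection thread, so just
            # set a flag here rather than making ZooKeeper calls directly.
            mark_requests_for_resubmit()

    client.add_listener(session_listener)
    client.start()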


-- 
David Shrewsbury (Shrews)