Thanks for the thorough review, David.
Indeed, kazoo Locks are ephemeral and they vanish when the connection is lost.
However, the zk-retry patch doesn't seem to affect the current behavior;
please see my comments inline below and on the nodepool review.
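
For context, the general idea of the patch is to retry kazoo commands
through connection errors instead of letting them bubble up to the caller.
Here is a minimal sketch of that approach (not the actual patch; the hosts
value and path are placeholders):

    from kazoo.client import KazooClient
    from kazoo.retry import KazooRetry

    # Retry forever through SUSPENDED/LOST instead of raising to the caller.
    retry = KazooRetry(max_tries=-1)
    zk = KazooClient(hosts='127.0.0.1:2181',
                     connection_retry=retry,
                     command_retry=retry)
    zk.start()

    # zk.retry() re-runs the operation across connection errors, so callers
    # no longer see ConnectionLoss/SessionExpiredError exceptions.
    zk.retry(zk.exists, '/nodepool')
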
On March 6, 2018 6:38 pm, David Shrewsbury wrote:
> I've played with some test code around this and I feel convinced we could
> have issues with locks and ephemeral nodes if we are hiding ZK connection
> problems (particularly session timeouts, which cause us to lose both
> things) behind retries.

Would you mind sharing the test code?
> Right now we can depend on those connection errors to bubble up to
> nodepool and zuul as exceptions, and we can assume our locks or ephemeral
> nodes are gone at that point and handle accordingly. If we now hide those,
> we need to begin using a listener [1] to know when our session has timed
> out and set some sort of state that we will have to check at multiple key
> points. FWIW, I started nodepool v3 off with such a listener and the logic
> became pretty difficult fairly quickly.

The proposal aims at removing those "key points" checks because they add
complexity and they look prone to race conditions, for example when
the connection gets lost after the "if self._zk.lost" check.
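
For illustration, here is a minimal sketch of that listener-plus-flag
pattern with kazoo (hypothetical code, not the nodepool implementation;
the hosts value and path are placeholders):

    from kazoo.client import KazooClient, KazooState

    zk = KazooClient(hosts='127.0.0.1:2181')
    session_lost = False

    def state_listener(state):
        # Called by kazoo whenever the connection state changes.
        global session_lost
        if state == KazooState.LOST:
            # Session expired: locks and ephemeral nodes are gone.
            session_lost = True
        elif state == KazooState.CONNECTED:
            session_lost = False

    zk.add_listener(state_listener)
    zk.start()

    if not session_lost:
        # The session can still expire between the check above and the
        # call below, so the check alone doesn't make this safe.
        zk.create('/demo/ephemeral-node', b'data',
                  ephemeral=True, makepath=True)

However the check is placed, there is still a window between it and the
next ZooKeeper call where the session can expire.
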
Looking at all the Zookeeper Lock usage, only the imageBuildLock seems to
benefit from the checks: if a builder loses the connection during a build,
it will mark the image as FAILED and another builder will try again. What
happens if the other builder succeeds before the first one? With the
zk-retry patch, both builders will mark the image as READY, and as far as
I can tell, this doesn't seem to be an issue.
The other locks aren't protected, for example multiple builders may start
an image upload when the connection gets lost. The zk-retry patch doesn't
seem to change that behavior.
Zuul node locks are still an issue, e.g. builds may stay queued
whenever the zk-retry patch is used.
Perhaps a follow-up patch could improve that situation, for example
by re-creating the request when the connection state switches back to
CONNECTED, as in the sketch below.
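
Something along these lines (a rough, hypothetical sketch, not an actual
patch; the request path and the pending_requests bookkeeping are made up
for illustration):

    import threading

    from kazoo.client import KazooClient, KazooState

    zk = KazooClient(hosts='127.0.0.1:2181')
    # Hypothetical bookkeeping: request id -> serialized request data.
    pending_requests = {}

    def resubmit_requests():
        for req_id, data in pending_requests.items():
            path = '/nodepool/requests/' + req_id
            if not zk.exists(path):
                # The ephemeral request node vanished with the old session;
                # re-create it so the build doesn't stay queued forever.
                zk.create(path, data, ephemeral=True, makepath=True)

    def state_listener(state):
        if state == KazooState.CONNECTED:
            # The listener runs in kazoo's connection thread, so defer the
            # ZooKeeper calls to a separate thread.
            threading.Thread(target=resubmit_requests, daemon=True).start()

    zk.add_listener(state_listener)
    zk.start()
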
> If you are running with this code in your environment now, I'd say you've
> probably just been fairly lucky to not have a ZK session timeout in just
> the right place. I found it fairly difficult to get to happen in my tests
> today, and it was never consistent when I did get it to happen (either
> something weird with ZK itself, or the kazoo library).

Well, we used to get those kazoo connection stack traces almost daily,
and I couldn't find the root cause. For the record, here is the setup:
* instance networking set up by Neutron with OVS,
* CentOS 7.4 kernels, with the tcp_ecn sysctl set to 0, and
* Zookeeper configuration: http://paste.openstack.org/show/693515/
Though the bottom line is to make Zuul and Nodepool more resilient to
Zookeeper issues.
What would be the wrong place for a connection loss to happen?
> Also, I hope I'm not coming off as difficult around this. I just know how
> hard it was to get what we have now functioning correctly, so I'm very
> protective of it. Hopefully we can get this sorted out for you, and we can
> learn a bunch of stuff along the way.
Not at all, Zookeeper error handling is tricky :)
Thank you for sharing your valuable feedback.