Zookeeper ConnectionLoss/SessionExpired issues
Hello folks,

Here is a recap of the issues happening when the connection to Zookeeper is unstable:
* Nodepool marks dib builds as failed,
* ConnectionLoss/SessionExpired exceptions happen when the connection drops after the "self._zk.suspended or self._zk.lost" checks,
* Zuul loses NodeRequests, which may leave changes stuck in the queued state, and
* Zookeeper needs to be running before starting Zuul or Nodepool, otherwise the daemons stay stuck in limbo.

This used to happen frequently in our deployment (running in an OpenStack tenant with Zookeeper on a dedicated instance). It seems like the current connection listener doesn't work properly, and the errors seem to mostly happen when the connection is actually used.

There are a couple of patches[0] that leverage Kazoo retry facilities to mitigate the above issues by automatically retrying operations until the service is available. The nodepool patch adds a uuid to Sequence objects, as suggested by the Zookeeper ErrorHandling documentation[1].

The last remaining issue is the Zuul ephemeral NodeRequest code, which doesn't re-create requests if they get deleted when the connection is reset.

I think those patches improve the overall service reliability and we have been using them successfully without issues in software-factory packages.

Regards,
-Tristan

[0]: https://review.openstack.org/#/q/topic:zookeeper-retry
[1]: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ErrorHandling
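For readers following along, here is a minimal sketch of the kazoo retry facilities those patches lean on; it is an illustration only, and the host, paths and retry settings below are made up rather than taken from the actual reviews.

    # Minimal sketch of kazoo's retry facilities; not the code under review.
    # The host, paths and retry settings here are hypothetical examples.
    from kazoo.client import KazooClient
    from kazoo.retry import KazooRetry

    # max_tries=-1 keeps retrying the connection indefinitely instead of
    # giving up and letting ConnectionLoss/SessionExpired reach the caller.
    zk = KazooClient(hosts="zk.example.com:2181",
                     connection_retry=KazooRetry(max_tries=-1, max_delay=30))
    zk.start()

    # zk.retry() re-issues the wrapped call if the connection drops mid-flight.
    zk.retry(zk.ensure_path, "/nodepool/images")
    data, stat = zk.retry(zk.get, "/nodepool/images")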
On Mon, Mar 05, 2018 at 03:05:40AM +0000, Tristan Cacqueray wrote:
Hello folks,
Here is a recap of the issues happening when the connection to Zookeeper is unstable:
* Nodepool marks dib builds as failed,
* ConnectionLoss/SessionExpired exceptions happen when the connection drops after the "self._zk.suspended or self._zk.lost" checks,
* Zuul loses NodeRequests, which may leave changes stuck in the queued state, and
* Zookeeper needs to be running before starting Zuul or Nodepool, otherwise the daemons stay stuck in limbo.

This last one is a bug IMO, and makes deployments harder. Specifically, in the openstack-infra case, if we start zuul / nodepool before we have opened the firewall to zookeeper, kazoo.client stops trying to connect after a short amount of time. Once the ports are open / zookeeper is started, the only way forward is to manually stop / start zuul / nodepool. I'd love us to keep supporting this use case of zookeeper not being started before zuul / nodepool, so retries here make sense.
Today gearman allows us to operate this way; it makes sense for zookeeper to support it too. I'll defer to corvus / Shrews on the other specific items.
This used to happen frequently in our deployment (running in an OpenStack tenant with zookeeper on a dedicated instance). It seems like the current connection listener doesn't work properly, and the errors seem to mostly happen when the connection is actually used.
There are a couple of patches[0] that leverage Kazoo retry facilities to mitigate the above issues by automatically retrying operations until the service is available. The nodepool patch adds a uuid to Sequence objects, as suggested by the Zookeeper ErrorHandling documentation[1].
The last remaining issue is the Zuul ephemeral NodeRequest code, which doesn't re-create requests if they get deleted when the connection is reset.
I think those patches improve the overall service reliability and we have been using them successfully without issues in software-factory packages.
Regards, -Tristan
[0]: https://review.openstack.org/#/q/topic:zookeeper-retry [1]: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ErrorHandling
Hi! Thanks for starting this discussion. On Sun, Mar 4, 2018 at 10:05 PM, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
Hello folks,
Here is a recap of the issues happening when the connection to Zookeeper is unstable:
* Nodepool marks dib builds as failed,
* ConnectionLoss/SessionExpired exceptions happen when the connection drops after the "self._zk.suspended or self._zk.lost" checks,
* Zuul loses NodeRequests, which may leave changes stuck in the queued state, and
* Zookeeper needs to be running before starting Zuul or Nodepool, otherwise the daemons stay stuck in limbo.
This used to happen frequently in our deployment (running in an OpenStack tenant with zookeeper on a dedicated instance). It seems like the current connection listener doesn't work properly, and the errors seem to mostly happen when the connection is actually used.
There are a couple of patches[0] that leverage Kazoo retry facilities to mitigate the above issues by automatically retrying operations until the service is available. The nodepool patch adds a uuid to Sequence objects, as suggested by the Zookeeper ErrorHandling documentation[1].
It's these sequence objects that concern me most, because we are so heavily dependent on them in both zuul and nodepool. I looked at your current tests, and it seems that they are testing that the sequences are sequential after the retry. From my reading of the ZK ErrorHandling docs, I don't think this is guaranteed. Those docs mention that it is possible to get more than one sequence object created on a request that is retried. Particularly, around this sentence describing what could happen around a single "create" request: "now it could be that x-109 was the result of the previous create, so the process actually owns both x-109 and x-111." So my main source of concern here (that I don't think we are testing?) is what happens when we create, say, a new Node object, but actually end up with 2 new Nodes (which I think would have identical data). I don't know how our current code path would handle such a situation. This is the thing I think we need to figure out how to deal with, and I don't think it is covered by the current review. And, to be honest, I'm not quite sure how to mimic this in tests.
The last remaining issue is the Zuul ephemeral NodeRequest code, which doesn't re-create requests if they get deleted when the connection is reset.
I think those patches improve the overall service reliability and we have been using them successfully without issues in software-factory packages.
Regards, -Tristan
[0]: https://review.openstack.org/#/q/topic:zookeeper-retry [1]: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ErrorHandling
-- David Shrewsbury (Shrews)
On March 5, 2018 2:54 pm, David Shrewsbury wrote:
Hi! Thanks for starting this discussion.
On Sun, Mar 4, 2018 at 10:05 PM, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
Hello folks,
Here is a recap of the issues happening when the connection to Zookeeper is unstable:
* Nodepool marks dib builds as failed,
* ConnectionLoss/SessionExpired exceptions happen when the connection drops after the "self._zk.suspended or self._zk.lost" checks,
* Zuul loses NodeRequests, which may leave changes stuck in the queued state, and
* Zookeeper needs to be running before starting Zuul or Nodepool, otherwise the daemons stay stuck in limbo.
This used to happen frequently in our deployment (running in an OpenStack tenant with zookeeper on a dedicated instance). It seems like the current connection listener doesn't work properly, and the errors seem to mostly happen when the connection is actually used.
There are a couple of patches[0] that leverage Kazoo retry facilities to mitigate the above issues by automatically retrying operations until the service is available. The nodepool patch adds a uuid to Sequence objects, as suggested by the Zookeeper ErrorHandling documentation[1].
It's these sequence objects that concern me most, because we are so heavily dependent on them in both zuul and nodepool. I looked at your current tests, and it seems that they are testing that the sequences are sequential after the retry. From my reading of the ZK ErrorHandling docs, I don't think this is guaranteed. Those docs mention that it is possible to get more than one sequence object created on a request that is retried. Particularly, around this sentence describing what could happen around a single "create" request:
"now it could be that x-109 was the result of the previous create, so the process actually owns both x-109 and x-111."
So my main source of concern here (that I don't think we are testing?) is what happens when we create, say, a new Node object, but actually end up with 2 new Nodes (which I think would have identical data).
The current review implements the above logic: sequence creates aren't using the bulk retry; instead it issues a regular create call, and if there is an exception, it looks for the uuid in the objects list before issuing another create call: line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py
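As an illustration of that create-with-uuid logic, a simplified sketch might look like the following; it approximates the idea rather than the code in the review, and the paths and helper name are hypothetical:

    # Sketch of the create-with-uuid pattern for sequence znodes described
    # above; a simplified approximation, not the code in the review.
    import json
    import uuid

    from kazoo.client import KazooClient
    from kazoo.exceptions import ConnectionLoss, SessionExpiredError

    def create_sequence_node(zk, parent, data):
        """Create parent/<seq> exactly once, even if the create is retried."""
        marker = uuid.uuid4().hex
        encoded = json.dumps(dict(data, uuid=marker)).encode("utf8")
        while True:
            try:
                return zk.create(parent + "/", value=encoded,
                                 sequence=True, makepath=True)
            except (ConnectionLoss, SessionExpiredError):
                # The create may or may not have happened server side;
                # look for our uuid before issuing another create.
                for child in zk.retry(zk.get_children, parent):
                    path = parent + "/" + child
                    node_data, _ = zk.retry(zk.get, path)
                    if json.loads(node_data.decode("utf8")).get("uuid") == marker:
                        return path
                # Not found: the original create never landed, try again.

    zk = KazooClient(hosts="zk.example.com:2181")
    zk.start()
    print(create_sequence_node(zk, "/nodepool/example", {"state": "building"}))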
I don't know how our current code path would handle such a situation. This is the thing I think we need to figure out how to deal with, and I don't think it is covered by the current review. And, to be honest, I'm not quite sure how to mimic this in tests.
It seems like blocking the 4th socket read can simulate a connection loss after the object is actually created, and test the look-for-uuid loop. The proposed test does exercise that code path, though this may vary depending on the environment, if the create dialog fits in fewer than 4 reads. Regards, -Tristan
The last remaining issue is the Zuul ephemeral NodeRequest code, which doesn't re-create requests if they get deleted when the connection is reset.
I think those patches improve the overall service reliability and we have been using them successfully without issues in software-factory packages.
Regards, -Tristan
[0]: https://review.openstack.org/#/q/topic:zookeeper-retry [1]: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ErrorHandling
-- David Shrewsbury (Shrews)
On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray <tdecacqu@redhat.com> wrote: [snip]
The current review implements the above logic: sequence creates aren't using the bulk retry; instead it issues a regular create call, and if there is an exception, it looks for the uuid in the objects list before issuing another create call: line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py
Ah, apologies for not noticing this in my quick look over earlier. I'll take a closer look today. If that's what the code is doing, this could alleviate much of my concern. -Dave -- David Shrewsbury (Shrews)
On Tue, Mar 6, 2018 at 8:35 AM, David Shrewsbury <shrewsbury.dave@gmail.com> wrote:
On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
[snip]
The current review implements the above logic: sequence creates aren't using the bulk retry; instead it issues a regular create call, and if there is an exception, it looks for the uuid in the objects list before issuing another create call: line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py
Ah, apologies for not noticing this in my quick look over earlier. I'll take a closer look today. If that's what the code is doing, this could alleviate much of my concern.
Ok, I think what you're doing with sequences looks pretty good and should work for us. While more thoroughly looking at the review, I did see a potential issue around locking, particularly around holding a lock during the long-running image build process. Now that we keep retrying, even through a total disconnect, there may be places in the code where we assume we still have the lock (like the image build code) if we haven't seen any ZK errors. Perhaps we should think about this aspect for a bit. -Dave -- David Shrewsbury (Shrews)
I've played with some test code around this, and I feel convinced we could have issues with locks and ephemeral nodes if we are hiding ZK connection problems (particularly session timeouts, which cause us to lose both things) behind retries.

Right now we can depend on those connection errors to bubble up to nodepool and zuul as exceptions, and we can assume our locks or ephemeral nodes are gone at that point and handle accordingly. If we now hide those, we need to begin using a listener [1] to know when our session has timed out and set some sort of state that we will have to check at multiple key points. FWIW, I started nodepool v3 off with such a listener and the logic became pretty difficult fairly quickly.

If you are running with this code in your environment now, I'd say you've probably just been fairly lucky to not have a ZK session timeout in just the right place. I found it fairly difficult to get to happen in my tests today, and it was never consistent when I did get it to happen (either something weird with ZK itself, or the kazoo library).

Also, I hope I'm not coming off as difficult around this. I just know how hard it was to get what we have now functioning correctly, so I'm very protective of it. Hopefully we can get this sorted out for you, and we can learn a bunch of stuff along the way.

[1] http://kazoo.readthedocs.io/en/latest/basic_usage.html#listening-for-connect...

On Tue, Mar 6, 2018 at 10:47 AM, David Shrewsbury <shrewsbury.dave@gmail.com> wrote:
On Tue, Mar 6, 2018 at 8:35 AM, David Shrewsbury < shrewsbury.dave@gmail.com> wrote:
On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
[snip]
The current review implements the above logic: sequence creates aren't using the bulk retry; instead it issues a regular create call, and if there is an exception, it looks for the uuid in the objects list before issuing another create call: line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py
Ah, apologies for not noticing this in my quick look over earlier. I'll take a closer look today. If that's what the code is doing, this could alleviate much of my concern.
Ok, I think what you're doing with sequences looks pretty good and should work for us. While more thoroughly looking at the review, I did see a potential issue around locking, particularly around holding a lock during the long-running image build process.
Now that we keep retrying, even through a total disconnect, there may be places in the code where we assume we still have the lock (like the image build code) if we haven't seen any ZK errors. Perhaps we should think about this aspect for a bit.
-Dave
-- David Shrewsbury (Shrews)
-- David Shrewsbury (Shrews)
Thanks for the thorough review David. Indeed kazoo Locks are ephemeral and they vanish when the connection is lost. However the zk-retry patch doesn't seem to affect current behavior; please see my comments inline below and on the nodepool review. On March 6, 2018 6:38 pm, David Shrewsbury wrote:
I've played with some test code around this and I feel convinced we could have issues with locks and ephemeral nodes if we are hiding ZK connection problems (particularly session timeouts, which cause us to lose both things) behind retries.
Would you mind sharing the test code?
Right now we can depend on those connection errors to bubble up to nodepool and zuul as exceptions, and we can assume our locks or ephemeral nodes are gone at that point and handle accordingly. If we now hide those, we need to begin using a listener [1] to know when our session has timed out and set some sort of state that we will have to check at multiple key points. FWIW, I started nodepool v3 off with such a listener and the logic became pretty difficult fairly quickly.
The proposal aims at removing those "key points" checks because they add complexity and they look prone to race conditions, for example when the connection gets lost after the "if self._zk.lost" check. Looking at all Zookeeper Lock usage, only the imageBuildLock seems to benefit from the checks: if a builder loses the connection during a build, it will mark the image as FAILED and another builder will try again. What happens if the other builder succeeds before the first one? With the zk-retry patch, both builders will mark the image as READY, and as far as I can tell, this doesn't seem to be an issue. The other Locks aren't protected; for example multiple builders may start an image upload when the connection gets lost. The zk-retry patch doesn't seem to change that behavior. Zuul node locks are still an issue, e.g. builds may stay queued whenever the zk-retry patch is used. Perhaps a follow-up patch could improve that situation, for example by re-creating the request when the connection state switches back to CONNECTED.
If you are running with this code in your environment now, I'd say you've probably just been fairly lucky to not have a ZK session timeout in just the right place. I found it fairly difficult to get to happen in my tests today, and it was never consistent when I did get it to happen (either something weird with ZK itself, or the kazoo library).
Well we used to get those kazoo connection stack traces almost daily, and I couldn't find the root cause; ftr here is the setup:
* Instances network setup by Neutron with ovs,
* CentOS 7.4 kernels, with tcp_ecn sysctl set to 0, and
* Zookeeper configuration: http://paste.openstack.org/show/693515/

Though the bottom line is to make Zuul and Nodepool more resilient to Zookeeper issues. What would be the wrong place for a connection loss to happen?
Also, I hope I'm not coming off as difficult around this. I just know how hard it was to get what we have now functioning correctly, so I'm very protective of it. Hopefully we can get this sorted out for you, and we can learn a bunch of stuff along the way.
Not at all, Zookeeper error handling is tricky :) Thank you for sharing your valuable feedback. -Tristan
[1] http://kazoo.readthedocs.io/en/latest/basic_usage.html#listening-for-connect...
On Tue, Mar 6, 2018 at 10:47 AM, David Shrewsbury <shrewsbury.dave@gmail.com> wrote:
On Tue, Mar 6, 2018 at 8:35 AM, David Shrewsbury < shrewsbury.dave@gmail.com> wrote:
On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
[snip]
The current review implements the above logic: sequence creates aren't using the bulk retry; instead it issues a regular create call, and if there is an exception, it looks for the uuid in the objects list before issuing another create call: line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py
Ah, apologies for not noticing this in my quick lookover earlier. I'll take a closer look today. If that's what the code is doing, this could alleviate much of my concern.
Ok, I think what you're doing with sequences looks pretty good and should work for us. While more thoroughly looking at the review, I did see a potential issue around locking, particularly around holding a lock during the long-running image build process.
Now that we keep retrying, even through a total disconnect, there may be places in the code where we assume we still have the lock (like the image build code) if we haven't seen any ZK errors. Perhaps we should think about this aspect for a bit.
-Dave
-- David Shrewsbury (Shrews)
-- David Shrewsbury (Shrews)
On Wed, Mar 7, 2018 at 12:38 AM, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
Thanks for the thorough review David. Indeed kazoo Locks are ephemeral and they vanish when the connection is lost. However the zk-retry patch doesn't seem to affect current behavior; please see my comments inline below and on the nodepool review.
Actually, the behavior is very different. See the sample code and output below.
On March 6, 2018 6:38 pm, David Shrewsbury wrote:
I've played with some test code around this and I feel convinced we could have issues with locks and ephemeral nodes if we are hiding ZK connection problems (particularly session timeouts, which cause us to lose both things) behind retries.
Would you mind sharing the test code?
Certainly.

This is the script I used that simulates current nodepool behavior (zk_retry_test1.py): http://paste.openstack.org/show/693650/

This is the script that simulates your proposed changes (zk_retry_test2.py): http://paste.openstack.org/show/693651/

Running zk_retry_test1.py locally (and shutting down ZK during the sleep() call, leaving it shut down long enough for the session to expire) produces the following output (note the exception): http://paste.openstack.org/show/693653/

Doing the same with zk_retry_test2.py produces very different output (note the lack of the exception): http://paste.openstack.org/show/693654/

Your proposed changes will hide that SessionExpiredError exception. So if we have code depending on that exception bubbling up to know we've lost a lock or an ephemeral node, it will now be broken with the use of retry(). And there is nothing provided in your review that lets us know the session was lost.

Looking at some of the builder and launcher code this morning with the lost session error freshly in mind, I see lots of code that doesn't even take this into account properly, so I think we may be potentially broken in many places with retries or not. :(
Right now we can depend on those connection errors to bubble up to nodepool and zuul as exceptions, and we can assume our locks or ephemeral nodes are gone at that point and handle accordingly. If we now hide those, we need to begin using a listener [1] to know when our session has timed out and set some sort of state that we will have to check at multiple key points. FWIW, I started nodepool v3 off with such a listener and the logic became pretty difficult fairly quickly.

The proposal aims at removing those "key points" checks because they add complexity and they look prone to race conditions, for example when the connection gets lost after the "if self._zk.lost" check.
Well, we HAVE to know when we lost a session. I like that your review removes the "wait for the connection to return" pieces. But we still need to know when the session is terminated. We can lose a connection, but keep our session. Or we can lose a connection and lose our session (if the connection loss was long enough).
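For reference, the kind of connection listener being discussed looks roughly like this; it is a generic kazoo sketch rather than the listener nodepool started out with, and the flag name is made up:

    # Generic sketch of a kazoo connection-state listener; the flag is
    # illustrative and not taken from nodepool or zuul.
    import threading

    from kazoo.client import KazooClient, KazooState

    session_lost = threading.Event()

    def listener(state):
        if state == KazooState.LOST:
            # Session expired: ephemeral nodes and locks are gone.
            session_lost.set()
        elif state == KazooState.SUSPENDED:
            # Connection dropped but the session may still be alive; if it
            # comes back before the session timeout, nothing was lost.
            pass
        else:
            # KazooState.CONNECTED: connected (or reconnected).
            pass

    zk = KazooClient(hosts="zk.example.com:2181")
    zk.add_listener(listener)
    zk.start()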
Looking at all Zookeeper Lock usage, only the imageBuildLock seems to benefit from the checks: if a builder loses the connection during a build, it will mark the image as FAILED and another builder will try again. What happens if the other builder succeeds before the first one? With the zk-retry patch, both builders will mark the image as READY, and as far as I can tell, this doesn't seem to be an issue.
That would be an issue, actually. We'd have two builds of the same image. We only keep the 2 most recent builds, so we'd lose our previous image. If the 2 we just built are bad images, we'd have nothing to roll back to.
The other Locks aren't protected; for example multiple builders may start an image upload when the connection gets lost. The zk-retry patch doesn't seem to change the behavior.
If nothing else, I think this discussion has revealed deficiencies in our code base around handling session losses (I cite an example below). Our locks are supposed to be protected against this but I don't think we've done this well in all places. Whether we use retries or not, I think we need to correct this.
Zuul node locks are still an issue, e.g. builds may stay queued whenever the zk-retry patch is used. Perhaps a follow-up patch could improve that situation, for example by re-creating the request when the connection state switch back to CONNECTED.
If you are running with this code in your environment now, I'd say you've
probably just been fairly lucky to not have a ZK session timeout in just the right place. I found it fairly difficult to get to happen in my tests today, and it was never consistent when I did get it to happen (either something weird with ZK itself, or the kazoo library).
Well we used to get those kazoo connection stack traces almost daily, and I couldn't find the root cause, ftr here is the setup:
* Instances network setup by Neutron with ovs, * CentOS 7.4 kernels, with tcp_ecn sysctl set to 0, and * Zookeeper configuration: http://paste.openstack.org/show/693515/
Though the bottomline is to make Zuul and Nodepool more resilient to Zookeeper issues.
What would be the wrong place for a connection loss to happen?
Connection loss and session loss are different. A connection loss may or may not result in a session loss (it depends on how long we lost the connection).

An example of a bad place to lose a session would be during node launch. Right now, if NodeLauncher._launchNode() experiences a session loss, the SessionExpiredError exception might bubble up and we'd retry the launch. Now it turns out, we aren't handling that particular exception and we are still assuming we have the lock. This is a huge bug that I've just discovered thanks to this discussion, because in that situation, we could have an unlocked Node in BUILDING state, and our cleanup thread could delete that before the building thread does something else with it again.

So in that example, we must add an except clause to catch that particular exception and handle it differently. With your proposed changes, we'd need a way to know our session was lost, and make the same choices for handling it differently. Your review isn't providing that, but I _can_ see a way forward for the current code.
Also, I hope I'm not coming off as difficult around this. I just know how
hard it was to get what we have now functioning correctly, so I'm very protective of it. Hopefully we can get this sorted out for you, and we can learn a bunch of stuff along the way.
Not at all, Zookeeper error handling is tricky :) Thank you for sharing your valuable feedback.
This has been an excellent discussion!
On March 7, 2018 3:28 pm, David Shrewsbury wrote:
On Wed, Mar 7, 2018 at 12:38 AM, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
Thanks for the thorough review David. Indeed kazoo Locks are ephemeral and they vanish when the connection is lost. However the zk-retry patch doesn't seem to affect current behavior; please see my comments inline below and on the nodepool review.
Actually, the behavior is very different. See the sample code and output below.
On March 6, 2018 6:38 pm, David Shrewsbury wrote:
I've played with some test code around this and I feel convinced we could have issues with locks and ephemeral nodes if we are hiding ZK connection problems (particularly session timeouts, which cause us to lose both things) behind retries.
Would you mind sharing the test code?
Certainly.
This is the script I used that simulates current nodepool behavior (zk_retry_test1.py):
http://paste.openstack.org/show/693650/
This is the script that simulates your proposed changes (zk_retry_test2.py):
http://paste.openstack.org/show/693651/
Running zk_retry_test1.py locally (and shutting down ZK during the sleep() call and leaving it shutdown long enough for the session to expire) produces the following output (note the exception):
http://paste.openstack.org/show/693653/
Doing the same with zk_retry_test2.py produces very different output (note the lack of the exception):
http://paste.openstack.org/show/693654/
Your proposed changes will hide that SessionExpiredError exception. So if we have code depending on that exception bubbling up to know we've lost a lock or an ephemeral node, it will now be broken with the use of retry(). And there is nothing provided in your review that lets us know the session was lost.
Well yes, the goal is to ignore whenever the session or connection is lost. In particular, if the plan is to replace gearman with zookeeper, then we'd better be more resilient to zookeeper exceptions. So, the issue is the kazoo Lock implementation, which uses ephemeral objects that are automatically removed when the session is lost. I've reworked the review to add Lock verification before critical state changes: instead of inferring Lock status from the connection state, there is now a lockCheck method to ensure that the lock is still present.
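Roughly, the verification amounts to something like the sketch below; it is an approximation of that lockCheck idea rather than the code in the review, and the identifier and paths are made up:

    # Sketch of verifying a kazoo lock is still held before a critical
    # write; an approximation of the lockCheck idea, not the review itself.
    from kazoo.client import KazooClient
    from kazoo.exceptions import SessionExpiredError

    def check_lock(lock, identifier):
        """Raise if our contender no longer heads the lock queue."""
        contenders = lock.contenders()
        if not contenders or contenders[0] != identifier:
            raise SessionExpiredError("lock no longer held by %s" % identifier)

    zk = KazooClient(hosts="zk.example.com:2181")
    zk.start()

    ident = "builder-01"  # hypothetical holder identifier
    lock = zk.Lock("/nodepool/images/example/lock", ident)
    lock.acquire()
    try:
        # ... long-running work such as an image build ...
        check_lock(lock, ident)  # verify before the critical state change
        zk.retry(zk.set, "/nodepool/images/example", b'{"state": "ready"}')
    finally:
        lock.release()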
Looking at some of the builder and launcher code this morning with the lost session error freshly in mind, I see lots of code that doesn't even take this into account properly, so I think we may be potentially broken in many places with retries or not. :(
Right now we can depend on those connection errors to bubble up to nodepool and zuul as exceptions, and we can assume our locks or ephemeral nodes are gone at that point and handle accordingly. If we now hide those, we need to begin using a listener [1] to know when our session has timed out and set some sort of state that we will have to check at multiple key points. FWIW, I started nodepool v3 off with such a listener and the logic became pretty difficult fairly quickly.

The proposal aims at removing those "key points" checks because they add complexity and they look prone to race conditions, for example when the connection gets lost after the "if self._zk.lost" check.
Well, we HAVE to know when we lost a session. I like that your review removes the "wait for the connection to return" pieces. But we still need to know when the session is terminated. We can lose a connection, but keep our session. Or we can lose a connection and lose our session (if the connection loss was long enough).
It seems like we can get around the connection listener logic by checking the ephemeral lock node without relying on the zk.lost property.
Looking at all Zookeeper Lock usage, only the imageBuildLock seems to benefit from the checks: if a builder loses the connection during a build, it will mark the image as FAILED and another builder will try again. What happens if the other builder succeeds before the first one? With the zk-retry patch, both builders will mark the image as READY, and as far as I can tell, this doesn't seem to be an issue.
That would be an issue, actually. We'd have two builds of the same image. We only keep the 2 most recent builds, so we'd lose our previous image. If the 2 we just built are bad images, we'd have nothing to roll back to.
Ok, that makes sense. However I think this behavior is still incorrect if another builder finishes the build before the first builder. It seems like we need to check that the build isn't already READY before setting it to FAILED.
The other Lock aren't protected, for example multiple builder may start an image upload when the connection get lost. The zk-retry patch doesn't seem to change the behavior.
If nothing else, I think this discussion has revealed deficiencies in our code base around handling session losses (I cite an example below). Our locks are supposed to be protected against this but I don't think we've done this well in all places. Whether we use retries or not, I think we need to correct this.
Zuul node locks are still an issue, e.g. builds may stay queued whenever the zk-retry patch is used. Perhaps a follow-up patch could improve that situation, for example by re-creating the request when the connection state switch back to CONNECTED.
If you are running with this code in your environment now, I'd say you've
probably just been fairly lucky to not have a ZK session timeout in just the right place. I found it fairly difficult to get to happen in my tests today, and it was never consistent when I did get it to happen (either something weird with ZK itself, or the kazoo library).
Well we used to get those kazoo connection stack traces almost daily, and I couldn't find the root cause, ftr here is the setup:
* Instances network setup by Neutron with ovs, * CentOS 7.4 kernels, with tcp_ecn sysctl set to 0, and * Zookeeper configuration: http://paste.openstack.org/show/693515/
Though the bottomline is to make Zuul and Nodepool more resilient to Zookeeper issues.
What would be the wrong place for a connection loss to happen?
Connection loss and session loss are different. A connection loss may or may not result in a session loss (it depends on how long we lost the connection).
An example of a bad place to lose a session would be during node launch. Right now, if NodeLauncher._launchNode() experiences a session loss, the SessionExpiredError exception might bubble up and we'd retry the launch. Now it turns out, we aren't handling that particular exception and we are still assuming we have lock. This is a huge bug that I've just discovered thanks to this discussion because in that situation, we could have an unlocked Node in BUILDING state, and our cleanup thread could delete that before the building thread does something else with it again.
Similarly, I added a new protection before setting a node as READY: the launcher checks the lock, and if it got lost, the node is left as BUILDING to ensure proper cleanup.
So in that example, we must add an except clause to catch that particular exception and handle it differently. With your proposed changes, we'd need a way to know our session was lost, and make the same choices for handling it differently. Your review isn't providing that, but I _can_ see a way forward for the current code.
I think the current code isn't ideal because it won't catch connection issues happening after the check for zk.lost. There will always be a window where the connection can get lost between the check and the critical code. Alternatively, we could implement a persistent Lock using a timestamp and a thread to refresh it. Perhaps this implementation could be added to the kazoo codebase directly.
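As a rough illustration of that timestamp-and-refresh idea, a lease-style lock could look something like this; nothing below exists in kazoo or nodepool, it is only a sketch of the concept:

    # Illustrative lease-style "persistent lock": a plain (non-ephemeral)
    # znode whose data carries holder and timestamp, refreshed by a thread.
    # None of these names exist in kazoo or nodepool.
    import json
    import threading
    import time

    from kazoo.client import KazooClient
    from kazoo.exceptions import NodeExistsError

    LEASE_TTL = 60  # seconds before a non-refreshed lease is considered stale

    def take_lease(zk, path, holder):
        payload = json.dumps({"holder": holder, "ts": time.time()}).encode("utf8")
        try:
            zk.create(path, payload, makepath=True)
        except NodeExistsError:
            data, stat = zk.get(path)
            lease = json.loads(data.decode("utf8"))
            if time.time() - lease["ts"] < LEASE_TTL:
                return None  # still held (and refreshed) by someone else
            # Stale lease: take it over, using the znode version so that
            # two takers cannot both succeed (BadVersionError for the loser).
            zk.set(path, payload, version=stat.version)

        def refresh():
            while True:
                time.sleep(LEASE_TTL / 3)
                zk.retry(zk.set, path, json.dumps(
                    {"holder": holder, "ts": time.time()}).encode("utf8"))

        refresher = threading.Thread(target=refresh, daemon=True)
        refresher.start()
        return refresher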
Also, I hope I'm not coming off as difficult around this. I just know how
hard it was to get what we have now functioning correctly, so I'm very protective of it. Hopefully we can get this sorted out for you, and we can learn a bunch of stuff along the way.
Not at all, Zookeeper error handling is tricky :) Thank you for sharing your valuable feedback.
This has been an excellent discussion!
Thank you again for discussing this with me! -Tristan
On Sat, Mar 10, 2018 at 12:56 AM, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
On March 7, 2018 3:28 pm, David Shrewsbury wrote:
On Wed, Mar 7, 2018 at 12:38 AM, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
Thanks for the thorough review David.
Indeed kazoo Locks are ephemerals and they vanish when the connection is lost. However the zk-retry patch doesn't seems to affect current behaviors, please see my comments inlined bellow and on the nodepool review.
Actually, the behavior is very different. See the sample code and output below.
On March 6, 2018 6:38 pm, David Shrewsbury wrote:
I've played with some test code around this and I feel convinced we could
have issues with locks and ephemeral nodes if we are hiding ZK connection problems (particularly session timeouts, which cause us to lose both things) behind retries.
Would you mind sharing the test code?
Certainly.
This is the script I used that simulates current nodepool behavior (zk_retry_test1.py):
http://paste.openstack.org/show/693650/
This is the script that simulates your proposed changes (zk_retry_test2.py):
http://paste.openstack.org/show/693651/
Running zk_retry_test1.py locally (and shutting down ZK during the sleep() call and leaving it shutdown long enough for the session to expire) produces the following output (note the exception):
http://paste.openstack.org/show/693653/
Doing the same with zk_retry_test2.py produces very different output (note the lack of the exception):
http://paste.openstack.org/show/693654/
Your proposed changes will hide that SessionExpiredError exception. So if we have code depending on that exception bubbling up to know we've lost a lock or an ephemeral node, it will now be broken with the use of retry(). And there is nothing provided in your review that let's us know the session was lost.
Well yes, the goal is to ignore whenever the session or connection is lost. In particular, if the plan is to replace gearman by zookeeper, then we'd better be more resilient to zookeeper exceptions.
So, the issue is the kazoo Lock implementation, which uses ephemeral objects that are automatically removed when the session is lost. I've reworked the review to add Lock verification before critical state changes: Instead of inferring Lock status from the connection state there is now a lockCheck method to ensure that the lock is still present.
Looking at some of the builder and launcher code this morning with the lost
session error freshly in mind, I see lots of code that doesn't even take this into account properly, so I think we may be potentially broken in many places with retries or not. :(
Right now we can depend on those connection errors to bubble up to nodepool
and zuul as exceptions, and we can assume our locks or ephemeral nodes are gone at that point and handle accordingly. If we now hide those, we need to begin using a listener [1] to know when our session has timed out and set some sort of state that we will have to check at multiple key points. FWIW, I started nodepool v3 off with such a listener and the logic became pretty difficult fairly quickly.
The proposal aims at removing those "key points" checks because they add
complexity and they look prone to race conditions, for example when the connection gets lost after the "if self._zk.lost" check.
Well, we HAVE to know when we lost a session. I like that your review removes the "wait for the connection to return" pieces. But we still need to know when the session is terminated. We can lose a connection, but keep our session. Or we can lose a connection and lose our session (if the connection loss was long enough).
It seems like we can get around the connection listener logic by checking the ephemeral lock node without relying on the zk.lost property.
Looking at all Zookeeper Lock usage, only the imageBuildLock seems to
benefit from the checks: if a builder lost the connection during a build, it will mark the image as FAILED and another builder will try again. What happens if the other builder succeed before the first one? With the zk-retry patch, both builder will mark the image as READY, and as far as I can tell, this doesn't seems to be an issue.
That would be an issue, actually. We'd have two builds of the same image. We only keep the 2 most recent builds, so we'd lose our previous image. If the 2 we just built is a bad image, we'd have nothing to rollback to.
Ok, that makes sense. However I think this behavior is still incorrect if another builder finishes the build before the first builder. It seems like we need to check that the build isn't already READY before setting it to FAILED.
I don't get your thinking here. We should NEVER have multiple builders building the same image. That's the whole reason for the image build lock.

[snip]

This email is simply an FYI on where my thinking is on this right now. I'm still mulling over the options and waiting for feedback from others (particularly Jim). These are the things I know:

1) A lot of our code has been depending on exceptions being thrown from ZK to decide to abort certain operations. Session errors being among the expected exceptions.

2) We have places in current code that don't even consider session errors, and this needs to be corrected. Very much for sad making. :(

3) You have provided a way to both use retry() AND examine locks to determine if we have lost a particular lock. (Still need something similar for ephemeral nodes, but that's easy.)

4) When considering how to move forward with fixing 1 and 2 to support your change, where we used to be able to surround a critical section of code with try-except to handle session errors, we now have to sprinkle in checks for locks and ephemeral nodes. Which is better? My gut is telling me that try-except would be much nicer to use than liberally adding lock/ephemeral node checks after each ZK API call. But, since our current session handling is poor to non-existent in places, I haven't decided if this is true or not. Here is where I'd like to get opinions from other folks.

-Dave
-- David Shrewsbury (Shrews)
On Mon, Mar 12, 2018 at 5:37 PM, David Shrewsbury <shrewsbury.dave@gmail.com> wrote:
[snip] 4) When considering how to move forward with fixing 1 and 2 to support
your change, where we used to be able to surround a critical section of code with try-except to handle session errors, we now have to sprinkle in checks for locks and ephemeral nodes. Which is better? My gut is telling me that try-except would be much nicer to use than liberally adding lock/ephemeral node checks after each ZK API call. But, since our current session handling is poor to non-existent in places, I haven't decided if this is true or not. Here is where I'd like to get opinions from other folks.
To help me (and others) wrap my head around this, here are two solutions using both methods for the bug discussed earlier. (I'm a visual learner and needed to actually see this to form an opinion.)

This is using the current (non-retry) code: https://review.openstack.org/552538

This is based off your proposed change to use retry: https://review.openstack.org/552551

Note that in the 2nd review, we now have to scatter lock checks before any attempt to store ZooKeeper data within our critical section. Whereas in the first, we can just wrap the entire critical section with a try-except.

Based on this, I feel like using retry() is going to force us to add much more code, and it will end up being more error prone since it adds places where we must check for session loss that would affect locks and ephemeral nodes. I feel like maybe we could use retry() for the initial connection to eliminate the "zookeeper not running before nodepool" startup problem. But I'm not feeling good about using it throughout the rest of the code.

-Dave
-- David Shrewsbury (Shrews)
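Schematically, the two shapes being compared look something like the sketch below; the helper names are hypothetical, and this is neither of the two reviews, just an outline of the difference:

    # Schematic comparison of the two error-handling shapes discussed above;
    # all names are hypothetical, this is neither 552538 nor 552551.
    from kazoo.exceptions import SessionExpiredError

    def boot_server(path):
        """Stand-in for the cloud call that actually launches the server."""

    def launch_with_exceptions(zk, path):
        """Current style: let session errors abort the whole critical section."""
        try:
            boot_server(path)
            zk.set(path, b'{"state": "building"}')
            zk.set(path, b'{"state": "ready"}')
        except SessionExpiredError:
            # Session gone: our lock and ephemeral nodes are gone too, so
            # abandon the node and let the cleanup thread handle it.
            return

    def launch_with_retries(zk, path, lock_check):
        """Retry style: writes no longer raise on session loss, so the lock
        has to be re-checked before every state change in the section."""
        boot_server(path)
        lock_check()
        zk.retry(zk.set, path, b'{"state": "building"}')
        lock_check()
        zk.retry(zk.set, path, b'{"state": "ready"}')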
David Shrewsbury <shrewsbury.dave@gmail.com> writes:
This email is simply an FYI on where my thinking is on this right now. I'm still mulling over the options and waiting for feedback from others (particularly Jim).
These are the things I know:
1) A lot of our code has been depending on exceptions being thrown from ZK to decide to abort certain operations. Session errors being among the expected exceptions.
2) We have places in current code that don't even consider session errors, and this needs to be corrected. Very much for sad making. :(
3) You have provided a way to both use retry() AND examine locks to determine if we have lost a particular lock. (Still need something similar for ephemeral nodes, but that's easy.)
4) When considering how to move forward with fixing 1 and 2 to support your change, where we used to be able to surround a critical section of code with try-except to handle session errors, we now have to sprinkle in checks for locks and ephemeral nodes. Which is better? My gut is telling me that try-except would be much nicer to use than liberally adding lock/ephemeral node checks after each ZK API call. But, since our current session handling is poor to non-existent in places, I haven't decided if this is true or not. Here is where I'd like to get opinions from other folks.
My inclination is to prefer coarse error handling. In general, the approach we take in Nodepool is to assume that every call we make can throw an exception, and then some level higher up in the call stack will deal with it (abort the build, mark a node as deleted, etc). That comes from our experience working with clouds, but it can apply equally to interacting with any system. The benefit is that it usually makes code easier to follow, for the reasons David mentions. Having to remember to check something after every operation is error-prone (and those are the kinds of errors that lead us to prefer languages with exception handling).

However, I really like the fact that Tristan's patch removes some explicit session handling code. Removing the wait-for-zookeeper sections is very nice. Though maybe we could just refactor them into a common method.

One thing that we haven't discussed is the effect on program flow. In nodepool, we have a large number of threads interacting with zookeeper, and in general, it's probably okay for them to pause while retrying some event. However, in Zuul, the only thread interacting with zookeeper is the main scheduler thread. If we adopt retries in Zuul, that thread will end up getting stuck waiting for zookeeper, whereas it could be busy reporting results or canceling jobs.

I've learned a lot reading this thread and the patches. I'm happy to continue exploring ways of improving our zookeeper handling.

-Jim
Hello, Looking at our ancient logs (Nov 2017, before we started using zk-retry), we actually only had ConnectionLoss at the beginning of the run() code; here is a collection of them: http://paste.openstack.org/show/700703/ I wonder if nodepool.openstack.org is avoiding them because it is such a busy system. It would be useful if other deployments looked for ConnectionLoss and SessionExpired in their logs and shared the stacktraces. Please find more comments inlined below. On March 13, 2018 4:37 pm, James E. Blair wrote:
David Shrewsbury <shrewsbury.dave@gmail.com> writes:
This email is simply an FYI on where my thinking is on this right now. I'm still mulling over the options and waiting for feedback from others (particularly Jim).
These are the things I know:
1) A lot of our code has been depending on exceptions being thrown from ZK to decide to abort certain operations. Session errors being among the expected exceptions.
The last PS makes the checkLock method raise SessionExpired, which seems to provide the same behavior.
2) We have places in current code that don't even consider session errors, and this needs to be corrected. Very much for sad making. :(
3) You have provided a way to both use retry() AND examine locks to determine if we have lost a particular lock. (Still need something similar for ephemeral nodes, but that's easy.)
It seems more robust than relying on second-hand information from the connection listener.
4) When considering how to move forward with fixing 1 and 2 to support your change, where we used to be able to surround a critical section of code with try-except to handle session errors, we now have to sprinkle in checks for locks and ephemeral nodes. Which is better? My gut is telling me that try-except would be much nicer to use than liberally adding lock/ephemeral node checks after each ZK API call. But, since our current session handling is poor to non-existent in places, I haven't decided if this is true or not. Here is where I'd like to get opinions from other folks.
My inclination is to prefer coarse error handling. In general, the approach we take in Nodepool is to assume that every call we make can throw an exception, and then some level higher up in the call stack will deal with it (abort the build, mark a node as deleted, etc). That comes from our experience working with clouds, but it can apply equally to interacting with any system. The benefit is that it usually makes code easier to follow for the reasons David mentions. Having to remember to check something after every operation is error-prone (and those are the kinds of errors that lead us to prefer language with exception handling).
Agreed, but why isn't the builder relying on exception handling here: http://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/builder... I actually implemented the manual check because of that builder use-case, though this can be easily modified to raise exceptions instead.
However, I really like the fact that Tristan's patch removes some explicit session handling code. Removing the wait-for-zookeeper sections is very nice. Though maybe we could just refactor them into a common method.
I still don't understand the logic of checking if the connection has been lost. What happens if the connection gets lost after the wait-for-zookeeper sections?
One thing that we haven't discussed is the effect on program flow. In nodepool, we have a large number of threads interacting with zookeeper, and in general, it's probably okay for them to pause while retrying some event. However, in Zuul, the only thread interacting with zookeeper is the main scheduler thread. If we adopt retries in Zuul, that thread will end up getting stuck waiting for zookeeper, whereas it could be busy reporting results or canceling jobs.
Actually we haven't discussed the effect of multiple threads experiencing SessionExpired errors: will kazoo raise the exception for each thread, or will only one get it? Looking at the builder code linked above, it seems like the first thread may resetLostFlag().
I've learned a lot reading this thread and the patches. I'm happy to continue exploring ways of improving our zookeeper handling.
-Jim
Please have a look again at 535537 (PS5), I think it implements the best of both strategies: it removes the wait-for-zookeeper sections while still using the SessionExpired exception. -Tristan
On Tue, Mar 13, 2018 at 9:05 PM, Tristan Cacqueray <tdecacqu@redhat.com> wrote: [snip]
Please have a look again at 535537 (PS5), I think it implements the best of both strategies: it removes the wait-for-zookeeper sections while still using the SessionExpired exception.
Consolidating the lock checks into the write methods works since we only need to make sure that we still have the lock before we update the ZK database. This doesn't work for ephemeral nodes though. How would you propose we address that? -Dave -- David Shrewsbury (Shrews)
On March 14, 2018 5:10 pm, David Shrewsbury wrote:
On Tue, Mar 13, 2018 at 9:05 PM, Tristan Cacqueray <tdecacqu@redhat.com> wrote:
[snip]
Please have a look again at 535537 (PS5), I think it implements the best of both strategies: it removes the wait-for-zookeeper sections while still using the SessionExpired exception.
Consolidating the lock checks into the write methods works since we only need to make sure that we still have the lock before we update the ZK database. This doesn't work for ephemeral nodes though. How would you propose we address that?
Nodepool uses 2 ephemeral nodes:
* launcher registration, which is auto-recreated at the beginning of the run loop, and
* min-ready NodeRequests, which don't seem to be an issue if lost; the launcher will recreate them.

Zuul only uses 1 ephemeral node, for NodeRequest. It is currently not protected (with or without the zk-retry patch): if the session is lost, the NodeRequests are not re-created and queue items may get stuck in the queued state. I think we could improve this by storing the requests in a local list and having a checkRequests method to re-create them when this happens.

-Tristan
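A rough sketch of that store-and-recreate idea; the class and method names are hypothetical, and this is not Zuul's actual NodeRequest handling:

    # Sketch of remembering submitted requests and re-creating their
    # ephemeral znodes after a session loss; names are hypothetical.
    import json

    from kazoo.client import KazooClient

    class RequestTracker:
        def __init__(self, zk):
            self.zk = zk
            self.requests = {}  # path -> payload of every submitted request

        def submit(self, path, payload):
            data = json.dumps(payload).encode("utf8")
            self.zk.retry(self.zk.create, path, data,
                          ephemeral=True, makepath=True)
            self.requests[path] = data

        def check_requests(self):
            """Re-create any request whose ephemeral node vanished with the
            old session; call this when the state returns to CONNECTED."""
            for path, data in self.requests.items():
                if self.zk.retry(self.zk.exists, path) is None:
                    self.zk.retry(self.zk.create, path, data,
                                  ephemeral=True, makepath=True)

    zk = KazooClient(hosts="zk.example.com:2181")
    zk.start()
    tracker = RequestTracker(zk)
    tracker.submit("/zuul/example-requests/100-0001", {"state": "requested"})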
On 16.03.18, 02:45, "Tristan Cacqueray" <tdecacqu@redhat.com> wrote:

Nodepool uses 2 ephemeral nodes:
* launcher registration, which is auto-recreated at the beginning of the run loop, and
* min-ready NodeRequests, which don't seem to be an issue if lost; the launcher will recreate them.

Zuul only uses 1 ephemeral node, for NodeRequest. It is currently not protected (with or without the zk-retry patch): if the session is lost, the NodeRequests are not re-created and queue items may get stuck in the queued state. I think we could improve this by storing the requests in a local list and having a checkRequests method to re-create them when this happens.

-Tristan

Regarding the NodeRequest in zuul, there is a change [1] in review to react to lost node requests.

[1] https://review.openstack.org/537335

- Tobias
participants (5):
* corvus@inaugust.com
* David Shrewsbury
* Paul Belanger
* Tobias.Henkel@bmw.de
* Tristan Cacqueray