From iwienand at redhat.com Thu Mar 1 00:02:51 2018 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 1 Mar 2018 11:02:51 +1100 Subject: [nodepool] Webapp label_list In-Reply-To: References: <1519741171.44y8fc6wez.tristanC@fedora> Message-ID: <8855d552-ea31-2334-10dd-219576b65274@redhat.com> On 02/28/2018 02:11 PM, David Shrewsbury wrote: > I don't think a full removal is necessary. If the change I proposed > to add config info to launcher registration is deemed worthy, it's a > simple change to the current code to pull that new data. > Yeah, I like consistency here too. I don't see any reason why that > can't be easily fixed when we make the change mentioned above. Although I've proposed a revert in [1], that's just so that I can stack refactoring changes on top for the aforementioned consistency. [2] restores it (depending on the "keep labels in zk" change [3]). I like using a dict to return the values; even if currently it only has one element, we may find a reason to add the list of launchers that support a given label, or some other info, in the future. >> Since we're talking about it; would we be better to remove the ".json" >> suffix checks, and just look at the mime type accept headers and >> return based on that? It seems to be the more common approach. > I'm not the most strongly versed in web things, but on the surface, this > seems pretty logical to me. This is proposed in [4]. Thanks! -i [1] https://review.openstack.org/548424 [2] https://review.openstack.org/548442 [3] https://review.openstack.org/548376 [4] https://review.openstack.org/548440 From tdecacqu at redhat.com Mon Mar 5 03:05:40 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Mon, 05 Mar 2018 03:05:40 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues Message-ID: <1520216795.pc3r2jevm5.tristanC@fedora> Hello folks, Here is a recap of the issues that happen when the connection to Zookeeper is unstable: * Nodepool marks dib builds as failed, * ConnectionLoss/SessionExpired exceptions happen when the connection drops after the "self._zk.suspended or self._zk.lost" checks, * Zuul loses NodeRequests, which may leave changes stuck in the queued state, and * Zookeeper needs to be running before starting Zuul or Nodepool, otherwise the daemons stay stuck in limbo. This used to happen frequently in our deployment (running in an OpenStack tenant with zookeeper on a dedicated instance). It seems like the current connection listener doesn't work properly, and the errors seem to mostly happen when the connection is actually used. There are a couple of patches[0] to leverage Kazoo retry facilities and to mitigate the above issues by automatically retrying operations until the service is available. The nodepool patch adds a uuid to Sequence objects as suggested by the Zookeeper ErrorHandling documentation[1]. The last remaining issue is the Zuul ephemeral NodeRequest code, which doesn't re-create requests if they get deleted when the connection is reset. I think those patches improve the overall service reliability, and we have been using them successfully without issues in software-factory packages. Regards, -Tristan [0]: https://review.openstack.org/#/q/topic:zookeeper-retry [1]: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ErrorHandling -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From pabelanger at redhat.com Mon Mar 5 12:56:08 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 5 Mar 2018 07:56:08 -0500 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1520216795.pc3r2jevm5.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> Message-ID: <20180305125608.GB31103@localhost.localdomain> On Mon, Mar 05, 2018 at 03:05:40AM +0000, Tristan Cacqueray wrote: > Hello folks, > > Here is a recap of the issues happening when the connection to > Zookeeper is unstable: > > * Nodepool marks dib build as failed, > * ConnectionLoss/SessionExpired exceptions happen when the connection > drop after the "self._zk.suspended or self._zk.lost" checks, > * Zuul lost NodeRequest which may lock change in queued state, and > * Zookeeper needs to be running before starting Zuul or Nodepool, > otherwise the daemon stay stuck in limbo. This last one is a bug IMO, and makes it harder for deployments. Specifically, in openstack-infra case, if we start zuul / nodepool before we have opened the firewall to zookeeper, kazoo.client stops trying to connect after a short amount of time. Once the ports are open / zookeeper started the only way forward is to manually stop / start zuul / nodepool. I'd love us to keep supporting this use case of zookeeper not started before zuul / nodepool so reties here make sense. Today gearman allows us to operate this way, makesense for zookeeper to support it too. I'll defer to corvus / Shrews to other specific items. > > This used to happen frequently in our deployment (running in an > OpenStack tenant with zookeeper on a dedicated instance). It seems like > the current connection listener doesn't work properly and the errors > seems to mostly happen when the connection is actually used. > > > There are a couple of patches[0] to leverage Kazoo retry facilities and > to mitigate above issues by automatically retrying operations until the > service is available. The nodepool patch adds a uuid to Sequence object > as suggested by the Zookeeper ErrorHandling documentation[1]. > > The last remaining issue is the Zuul ephemeral NodeRequest code that doesn't > re-create the request if they got deleted when the connection is reset. > > > I think those patches improve the overall service reliability and we have > been using them successfully without issues in software-factory packages. > > Regards, > -Tristan > > [0]: https://review.openstack.org/#/q/topic:zookeeper-retry > [1]: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ErrorHandling > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From shrewsbury.dave at gmail.com Mon Mar 5 14:54:15 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 5 Mar 2018 09:54:15 -0500 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1520216795.pc3r2jevm5.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> Message-ID: Hi! Thanks for starting this discussion. 
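The retry facilities under discussion here are kazoo's. A minimal, hedged sketch -- with invented host names and paths, not the code from the zookeeper-retry reviews -- of configuring a client to keep retrying (which covers Paul's start-up case) and of the state listener that reports session loss:

# Hedged illustration only; not the nodepool/zuul implementation.
import logging

from kazoo.client import KazooClient, KazooState
from kazoo.retry import KazooRetry

log = logging.getLogger("zk-sketch")

# max_tries=-1 tells kazoo to retry forever, both for the initial connection
# and for reconnects, so starting the daemon before ZooKeeper is reachable
# no longer strands it.
connection_retry = KazooRetry(max_tries=-1, delay=0.5, max_delay=60)
command_retry = KazooRetry(max_tries=-1, delay=0.5, max_delay=60)

client = KazooClient(hosts="zk01:2181,zk02:2181,zk03:2181",
                     connection_retry=connection_retry,
                     command_retry=command_retry)

session_lost = False  # ephemeral nodes and locks vanish when this trips


def state_listener(state):
    # Called by kazoo from its connection thread; must not block.
    global session_lost
    if state == KazooState.LOST:
        session_lost = True
    elif state == KazooState.SUSPENDED:
        log.warning("ZooKeeper connection suspended")
    else:  # KazooState.CONNECTED
        log.info("ZooKeeper connection (re-)established")


client.add_listener(state_listener)
client.start(timeout=None)  # with no timeout this waits until ZooKeeper is up

# Individual operations can also be wrapped explicitly:
client.retry(client.ensure_path, "/nodepool/launchers")

With max_tries=-1 kazoo never gives up on the connection, while the LOST transition is the signal that every ephemeral node and lock owned by the session is gone -- the distinction the rest of this thread keeps returning to.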
On Sun, Mar 4, 2018 at 10:05 PM, Tristan Cacqueray wrote: > Hello folks, > > Here is a recap of the issues happening when the connection to > Zookeeper is unstable: > > * Nodepool marks dib build as failed, > * ConnectionLoss/SessionExpired exceptions happen when the connection > drop after the "self._zk.suspended or self._zk.lost" checks, > * Zuul lost NodeRequest which may lock change in queued state, and > * Zookeeper needs to be running before starting Zuul or Nodepool, > otherwise the daemon stay stuck in limbo. > > This used to happen frequently in our deployment (running in an > OpenStack tenant with zookeeper on a dedicated instance). It seems like > the current connection listener doesn't work properly and the errors > seems to mostly happen when the connection is actually used. > > > There are a couple of patches[0] to leverage Kazoo retry facilities and > to mitigate above issues by automatically retrying operations until the > service is available. The nodepool patch adds a uuid to Sequence object > as suggested by the Zookeeper ErrorHandling documentation[1]. > > It's these sequence objects that concerns me most because we are so heavily dependent on them in both zuul and nodepool. I looked at your current tests, and it seems that they are testing that the sequences are sequential after the retry. From my reading of the ZK ErrorHandling docs, I don't think this is guaranteed. Those docs mention that it is possible to get more than one sequence object created on a request that is retried. Particularly, around this sentence describing what could happen around a single "create" request: "now it could be that x-109 was the result of the previous create, so the process actually owns both x-109 and x-111." So my main source of concern here (that I don't think we are testing?) is what happens when we create, say, a new Node object, but actually end up with 2 new Nodes (which I think would have identical data). I don't know how our current code path would handle such a situation. This is the thing I think we need to figure out how to deal with, and I don't think is covered by the current review. And, to be honest, I'm not quite sure how to mimic this in tests. > The last remaining issue is the Zuul ephemeral NodeRequest code that > doesn't > re-create the request if they got deleted when the connection is reset. > > > I think those patches improve the overall service reliability and we have > been using them successfully without issues in software-factory packages. > > Regards, > -Tristan > > [0]: https://review.openstack.org/#/q/topic:zookeeper-retry > [1]: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ErrorHandling > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Tue Mar 6 00:59:54 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 06 Mar 2018 00:59:54 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> Message-ID: <1520296945.n7h8zkzye1.tristanC@fedora> On March 5, 2018 2:54 pm, David Shrewsbury wrote: > Hi! Thanks for starting this discussion. 
> > On Sun, Mar 4, 2018 at 10:05 PM, Tristan Cacqueray > wrote: > >> Hello folks, >> >> Here is a recap of the issues happening when the connection to >> Zookeeper is unstable: >> >> * Nodepool marks dib build as failed, >> * ConnectionLoss/SessionExpired exceptions happen when the connection >> drop after the "self._zk.suspended or self._zk.lost" checks, >> * Zuul lost NodeRequest which may lock change in queued state, and >> * Zookeeper needs to be running before starting Zuul or Nodepool, >> otherwise the daemon stay stuck in limbo. >> >> This used to happen frequently in our deployment (running in an >> OpenStack tenant with zookeeper on a dedicated instance). It seems like >> the current connection listener doesn't work properly and the errors >> seems to mostly happen when the connection is actually used. >> >> >> There are a couple of patches[0] to leverage Kazoo retry facilities and >> to mitigate above issues by automatically retrying operations until the >> service is available. The nodepool patch adds a uuid to Sequence object >> as suggested by the Zookeeper ErrorHandling documentation[1]. >> >> > It's these sequence objects that concerns me most because we are so > heavily dependent on them in both zuul and nodepool. I looked at your > current > tests, and it seems that they are testing that the sequences are sequential > after > the retry. From my reading of the ZK ErrorHandling docs, I don't think > this is > guaranteed. Those docs mention that it is possible to get more than one > sequence object created on a request that is retried. Particularly, around > this > sentence describing what could happen around a single "create" request: > > "now it could be that x-109 was the result of the previous create, so > the > process actually owns both x-109 and x-111." > > So my main source of concern here (that I don't think we are testing?) is > what > happens when we create, say, a new Node object, but actually end up with 2 > new Nodes (which I think would have identical data). The current review implements the above logic: sequence create aren't using the bulk retry, instead it issues a regular create call, and if there is an exception, then it looks for the uuid in the objects list before issueing another create call: Line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py > I don't know how our current code path would handle such a situation. > This is the thing I think > we need > to figure out how to deal with, and I don't think is covered by the current > review. > And, to be honest, I'm not quite sure how to mimic this in tests. > It seems like blocking the 4th socket read can simulate a connection lost after the object is actually created and test the look for uuid loop. The proposed test do excercice that code path, though this may vary dependending on the environment, when the create dialog fits in less than 4 reads. Regards, -Tristan > > >> The last remaining issue is the Zuul ephemeral NodeRequest code that >> doesn't >> re-create the request if they got deleted when the connection is reset. >> >> >> I think those patches improve the overall service reliability and we have >> been using them successfully without issues in software-factory packages. 
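To make the uuid/sequence logic just described concrete, here is a rough sketch of that approach. It is an approximation for illustration only, not the code at line 730 of review 535537; the helper name and JSON payload layout are invented, and it assumes a client configured with an unlimited command retry as sketched earlier in the thread.

# Idempotent sequence-node create: embed a uuid in the znode data so a
# create whose reply was lost can be detected before retrying.
import json
import uuid

from kazoo.exceptions import ConnectionLoss, SessionExpiredError


def create_sequence_node(client, path_prefix, data):
    """Create path_prefix<sequence> at most once across connection losses.

    path_prefix is e.g. "/nodepool/images/fedora-27/builds/build-"; the
    trailing component is completed by ZooKeeper's sequence counter.
    """
    data = dict(data)
    data["uuid"] = uuid.uuid4().hex
    payload = json.dumps(data).encode("utf8")
    parent = path_prefix.rsplit("/", 1)[0]

    while True:
        try:
            return client.create(path_prefix, payload,
                                 makepath=True, sequence=True)
        except (ConnectionLoss, SessionExpiredError):
            # The create may have succeeded server-side even though the
            # reply never arrived.  Look for our uuid among the children
            # (retrying the reads until the connection is back) before
            # issuing another create.
            for child in client.retry(client.get_children, parent):
                node_path = "%s/%s" % (parent, child)
                node_data, _stat = client.retry(client.get, node_path)
                if json.loads(node_data.decode("utf8")).get("uuid") == data["uuid"]:
                    return node_path
            # Our uuid is not there: the first create really failed, retry.

Because the uuid travels inside the znode data, a retried create can be matched against a node that was actually created the first time, which is the double-create case described on the ZooKeeper ErrorHandling page.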
>> >> Regards, >> -Tristan >> >> [0]: https://review.openstack.org/#/q/topic:zookeeper-retry >> [1]: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ErrorHandling >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> >> > > > -- > David Shrewsbury (Shrews) > -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From shrewsbury.dave at gmail.com Tue Mar 6 13:35:51 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 6 Mar 2018 08:35:51 -0500 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1520296945.n7h8zkzye1.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> Message-ID: On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray wrote: [snip] > The current review implements the above logic: sequence create aren't > using the bulk retry, instead it issues a regular create call, and if > there is an exception, then it looks for the uuid in the objects list > before issueing another create call: > Line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py Ah, apologies for not noticing this in my quick lookover earlier. I'll take a closer look today. If that's what the code is doing, this could alleviate much of my concern. -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From shrewsbury.dave at gmail.com Tue Mar 6 15:47:25 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 6 Mar 2018 10:47:25 -0500 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> Message-ID: On Tue, Mar 6, 2018 at 8:35 AM, David Shrewsbury wrote: > > > On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray > wrote: > > [snip] > > >> The current review implements the above logic: sequence create aren't >> using the bulk retry, instead it issues a regular create call, and if >> there is an exception, then it looks for the uuid in the objects list >> before issueing another create call: >> Line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py > > > Ah, apologies for not noticing this in my quick lookover earlier. I'll > take a > closer look today. If that's what the code is doing, this could alleviate > much > of my concern. > > Ok, I think what you're doing with sequences looks pretty could and should work for us. While more thoroughly looking at the review, I did see a potential issue around locking, particularly around holding a lock during the long-running image build process. Now that we keep retrying, even through a total disconnect, there may be places in the code where we assume we still have the lock (like the image build code) if we haven't seen any ZK errors. Perhaps we should think about this aspect for a bit. -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shrewsbury.dave at gmail.com Tue Mar 6 18:38:47 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 6 Mar 2018 13:38:47 -0500 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> Message-ID: I've played with some test code around this and I feel convinced we could have issues with locks and ephemeral nodes if we are hiding ZK connection problems (particularly session timeouts, which cause us to lose both things) behind retries. Right now we can depend on those connection errors to bubble up to nodepool and zuul as exceptions, and we can assume our locks or ephemeral nodes are gone at that point and handle accordingly. If we now hide those, we need to begin using a listener [1] to know when our session has timed out and set some sort of state that we will have to check at multiple key points. FWIW, I started nodepool v3 off with such a listener and the logic became pretty difficult fairly quickly. If you are running with this code in your environment now, I'd say you've probably just been fairly lucky to not have a ZK session timeout in just the right place. I found it fairly difficult to get to happen in my tests today, and it was never consistent when I did get it to happen (either something weird with ZK itself, or the kazoo library). Also, I hope I'm not coming off as difficult around this. I just know how hard it was to get what we have now functioning correctly, so I'm very protective of it. Hopefully we can get this sorted out for you, and we can learn a bunch of stuff along the way. [1] http://kazoo.readthedocs.io/en/latest/basic_usage.html#listening-for-connection-events On Tue, Mar 6, 2018 at 10:47 AM, David Shrewsbury wrote: > > > On Tue, Mar 6, 2018 at 8:35 AM, David Shrewsbury < > shrewsbury.dave at gmail.com> wrote: > >> >> >> On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray >> wrote: >> >> [snip] >> >> >>> The current review implements the above logic: sequence create aren't >>> using the bulk retry, instead it issues a regular create call, and if >>> there is an exception, then it looks for the uuid in the objects list >>> before issueing another create call: >>> Line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py >> >> >> Ah, apologies for not noticing this in my quick lookover earlier. I'll >> take a >> closer look today. If that's what the code is doing, this could alleviate >> much >> of my concern. >> >> > Ok, I think what you're doing with sequences looks pretty could and should > work for > us. While more thoroughly looking at the review, I did see a potential > issue around > locking, particularly around holding a lock during the long-running image > build process. > > Now that we keep retrying, even through a total disconnect, there may be > places in the > code where we assume we still have the lock (like the image build code) if > we haven't > seen any ZK errors. Perhaps we should think about this aspect for a bit. > > -Dave > > -- > David Shrewsbury (Shrews) > -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tdecacqu at redhat.com Wed Mar 7 05:38:00 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 07 Mar 2018 05:38:00 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> Message-ID: <1520387902.pcnp10igdm.tristanC@fedora> Thanks for the thorough review David. Indeed kazoo Locks are ephemerals and they vanish when the connection is lost. However the zk-retry patch doesn't seems to affect current behaviors, please see my comments inlined bellow and on the nodepool review. On March 6, 2018 6:38 pm, David Shrewsbury wrote: > I've played with some test code around this and I feel convinced we could > have > issues with locks and ephemeral nodes if we are hiding ZK connection > problems > (particularly session timeouts, which cause us to lose both things) behind > retries. > Would you mind sharing the test code? > Right now we can depend on those connection errors to bubble up to nodepool > and zuul as exceptions, and we can assume our locks or ephemeral nodes are > gone at that point and handle accordingly. If we now hide those, we need to > begin > using a listener [1] to know when our session has timed out and set some > sort of > state that we will have to check at multiple key points. FWIW, I started > nodepool > v3 off with such a listener and the logic became pretty difficult fairly > quickly. > The proposal aims at removing those "key points" checks because they add complexicity and they look prone to race conditions, for example when the connection get lost after the "if self._zk.lost" check. Looking at all Zookeeper Lock usage, only the imageBuildLock seems to benefit from the checks: if a builder lost the connection during a build, it will mark the image as FAILED and another builder will try again. What happens if the other builder succeed before the first one? With the zk-retry patch, both builder will mark the image as READY, and as far as I can tell, this doesn't seems to be an issue. The other Lock aren't protected, for example multiple builder may start an image upload when the connection get lost. The zk-retry patch doesn't seem to change the behavior. Zuul node locks are still an issue, e.g. builds may stay queued whenever the zk-retry patch is used. Perhaps a follow-up patch could improve that situation, for example by re-creating the request when the connection state switch back to CONNECTED. > If you are running with this code in your environment now, I'd say you've > probably > just been fairly lucky to not have a ZK session timeout in just the right > place. I found > it fairly difficult to get to happen in my tests today, and it was never > consistent when > I did get it to happen (either something weird with ZK itself, or the kazoo > library). > Well we used to get those kazoo connection stack traces almost daily, and I couldn't find the root cause, ftr here is the setup: * Instances network setup by Neutron with ovs, * CentOS 7.4 kernels, with tcp_ecn sysctl set to 0, and * Zookeeper configuration: http://paste.openstack.org/show/693515/ Though the bottomline is to make Zuul and Nodepool more resilient to Zookeeper issues. What would be the wrong place for a connection lost to happen? > Also, I hope I'm not coming off as difficult around this. I just know how > hard it was > to get what we have now functioning correctly, so I'm very protective of > it. 
Hopefully > we can get this sorted out for you, and we can learn a bunch of stuff along > the way. > Not all, Zookeeper error handling is tricky :) Thank you for sharing your valuable feedback. -Tristan > [1] > http://kazoo.readthedocs.io/en/latest/basic_usage.html#listening-for-connection-events > > On Tue, Mar 6, 2018 at 10:47 AM, David Shrewsbury > wrote: > >> >> >> On Tue, Mar 6, 2018 at 8:35 AM, David Shrewsbury < >> shrewsbury.dave at gmail.com> wrote: >> >>> >>> >>> On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray >>> wrote: >>> >>> [snip] >>> >>> >>>> The current review implements the above logic: sequence create aren't >>>> using the bulk retry, instead it issues a regular create call, and if >>>> there is an exception, then it looks for the uuid in the objects list >>>> before issueing another create call: >>>> Line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py >>> >>> >>> Ah, apologies for not noticing this in my quick lookover earlier. I'll >>> take a >>> closer look today. If that's what the code is doing, this could alleviate >>> much >>> of my concern. >>> >>> >> Ok, I think what you're doing with sequences looks pretty could and should >> work for >> us. While more thoroughly looking at the review, I did see a potential >> issue around >> locking, particularly around holding a lock during the long-running image >> build process. >> >> Now that we keep retrying, even through a total disconnect, there may be >> places in the >> code where we assume we still have the lock (like the image build code) if >> we haven't >> seen any ZK errors. Perhaps we should think about this aspect for a bit. >> >> -Dave >> >> -- >> David Shrewsbury (Shrews) >> > > > > -- > David Shrewsbury (Shrews) > -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From shrewsbury.dave at gmail.com Wed Mar 7 15:28:53 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Wed, 7 Mar 2018 10:28:53 -0500 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1520387902.pcnp10igdm.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> Message-ID: On Wed, Mar 7, 2018 at 12:38 AM, Tristan Cacqueray wrote: > Thanks for the thorough review David. > Indeed kazoo Locks are ephemerals and they vanish when the connection is > lost. > However the zk-retry patch doesn't seems to affect current behaviors, > please see my comments inlined bellow and on the nodepool review. Actually, the behavior is very different. See the sample code and output below. > > > On March 6, 2018 6:38 pm, David Shrewsbury wrote: > >> I've played with some test code around this and I feel convinced we could >> have >> issues with locks and ephemeral nodes if we are hiding ZK connection >> problems >> (particularly session timeouts, which cause us to lose both things) behind >> retries. >> >> Would you mind sharing the test code? Certainly. 
This is the script I used that simulates current nodepool behavior (zk_retry_test1.py): http://paste.openstack.org/show/693650/ This is the script that simulates your proposed changes (zk_retry_test2.py): http://paste.openstack.org/show/693651/ Running zk_retry_test1.py locally (and shutting down ZK during the sleep() call and leaving it shutdown long enough for the session to expire) produces the following output (note the exception): http://paste.openstack.org/show/693653/ Doing the same with zk_retry_test2.py produces very different output (note the lack of the exception): http://paste.openstack.org/show/693654/ Your proposed changes will hide that SessionExpiredError exception. So if we have code depending on that exception bubbling up to know we've lost a lock or an ephemeral node, it will now be broken with the use of retry(). And there is nothing provided in your review that let's us know the session was lost. Looking at some of the builder and launcher code this morning with the lost session error freshly in mind, I see lots of code that doesn't even take this into account properly, so I think we may be potentially broken in many places with retries or not. :( > > Right now we can depend on those connection errors to bubble up to nodepool >> and zuul as exceptions, and we can assume our locks or ephemeral nodes are >> gone at that point and handle accordingly. If we now hide those, we need >> to >> begin >> using a listener [1] to know when our session has timed out and set some >> sort of >> state that we will have to check at multiple key points. FWIW, I started >> nodepool >> v3 off with such a listener and the logic became pretty difficult fairly >> quickly. >> >> The proposal aims at removing those "key points" checks because they add > complexicity and they look prone to race conditions, for example when > the connection get lost after the "if self._zk.lost" check. > Well, we HAVE to know when we lost a session. I like that your review removes the "wait for the connection to return" pieces. But we still need to know when the session is terminated. We can lose a connection, but keep our session. Or we can lose a connection and lose our session (if the connection loss was long enough). > Looking at all Zookeeper Lock usage, only the imageBuildLock seems to > benefit from the checks: if a builder lost the connection > during a build, it will mark the image as FAILED and another builder will > try > again. What happens if the other builder succeed before the first one? > With the zk-retry patch, both builder will mark the image as READY, > and as far as I can tell, this doesn't seems to be an issue. > That would be an issue, actually. We'd have two builds of the same image. We only keep the 2 most recent builds, so we'd lose our previous image. If the 2 we just built is a bad image, we'd have nothing to rollback to. > > The other Lock aren't protected, for example multiple builder may start > an image upload when the connection get lost. The zk-retry patch doesn't > seem to change the behavior. > If nothing else, I think this discussion has revealed deficiencies in our code base around handling session losses (I cite an example below). Our locks are supposed to be protected against this but I don't think we've done this well in all places. Whether we use retries or not, I think we need to correct this. > Zuul node locks are still an issue, e.g. builds may stay queued > whenever the zk-retry patch is used. 
> Perhaps a follow-up patch could improve that situation, for example > by re-creating the request when the connection state switch back to > CONNECTED. > > > If you are running with this code in your environment now, I'd say you've >> probably >> just been fairly lucky to not have a ZK session timeout in just the right >> place. I found >> it fairly difficult to get to happen in my tests today, and it was never >> consistent when >> I did get it to happen (either something weird with ZK itself, or the >> kazoo >> library). >> >> Well we used to get those kazoo connection stack traces almost daily, > and I couldn't find the root cause, ftr here is the setup: > > * Instances network setup by Neutron with ovs, > * CentOS 7.4 kernels, with tcp_ecn sysctl set to 0, and > * Zookeeper configuration: http://paste.openstack.org/show/693515/ > > Though the bottomline is to make Zuul and Nodepool more resilient to > Zookeeper issues. > > What would be the wrong place for a connection lost to happen? Connection loss and session loss are different. A connection loss may or may not result in a session loss (it depends on how long we lost the connection). An example of a bad place to lose a session would be during node launch. Right now, if NodeLauncher._launchNode() experiences a session loss, the SessionExpiredError exception might bubble up and we'd retry the launch. Now it turns out, we aren't handling that particular exception and we are still assuming we have lock. This is a huge bug that I've just discovered thanks to this discussion because in that situation, we could have an unlocked Node in BUILDING state, and our cleanup thread could delete that before the building thread does something else with it again. So in that example, we must add an except clause to catch that particular exception and handle it differently. With your proposed changes, we'd need a way to know our session was lost, and make the same choices for handling it differently. You're review isn't providing that, but I _can_ see a way forward for the current code. > Also, I hope I'm not coming off as difficult around this. I just know how >> hard it was >> to get what we have now functioning correctly, so I'm very protective of >> it. Hopefully >> we can get this sorted out for you, and we can learn a bunch of stuff >> along >> the way. >> >> > Not all, Zookeeper error handling is tricky :) > Thank you for sharing your valuable feedback. This has been an excellent discussion! -------------- next part -------------- An HTML attachment was scrubbed... URL: From daragh.bailey at gmail.com Wed Mar 7 21:33:33 2018 From: daragh.bailey at gmail.com (Darragh Bailey) Date: Wed, 7 Mar 2018 21:33:33 +0000 Subject: working on zuul trigger plugin for jenkins Message-ID: Hi, Spend some time last week hacking away at a zuul trigger plugin for jenkins, really just about getting some better understanding of how this should hang together and what questions need to be asked. Lots of time spent reading details on gRPC as it was suggested at https://etherpad.openstack.org/p/zuulv3-jenkins-integration So far I've got a (very) rudimentary gRPC server that responds to a single status request with OK working as a Jenkins plugin that starts up automatically. I'll try to get it released somewhere soon so it's possible to discuss code, though if it is of interest to the OpenStack community maybe it should be hosted on review.openstack.org? 
Some general things I've noted: - authenticating requests doesn't appear to be straight forward with gRPC - unclear whether this can share the default jenkins http port even though gRPC and REST (currently using dedicated port) can technically do this - given the number of messages exchanged for a single job, should this use bidirectional streaming - started with a proto layout that is mostly unary client calls and server streaming responses - but should the trigger really be a single connection with multiple messages around the hand-off coordination? I've also had a bunch of questions to bounce off Monty around how to deal with Zuulv3 and Jenkins node coordination, the upshot of those is to leave that to nodepool and probably help out with the Jenkins nodepool plugin and make that a dependency of the zuul trigger plugin. For more detail, originally I was worrying about how to ensure zuul trigger plugin would work without using the nodepool plugin, and it would end up needing to query Jenkins as to whether it had the resources to execute the on before triggering, and this would require a deeper integration into Zuul. It was suggested that this would complicate it deeply and would be better to have nodepool manage the resources, so that zuul would only start the ansible job that would call the zuul trigger API endpoints if nodepool had sufficient resources for Jenkins to use to execute the job. It appears clear any migration path to Zuulv3 involving Jenkins would want to use Zuulv2 -> Gearman -> Jenkins -> Nodepoolv3 first, then migrate Zuulv3. So I'm looking to help on the Jenkins Nodepool plugin first, and that will probably make it much more straight forward to understand what needs to be done in the trigger plugin. A couple of weeks ago someone was in the Zuul room chatting about working on the Nodepool plugin for Jenkins, wondering if there has been any progress there, and what I can help out with in the future. -- Darragh Bailey "Nothing is foolproof to a sufficiently talented fool" -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Thu Mar 8 19:21:58 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 8 Mar 2018 13:21:58 -0600 Subject: Landing changes to zuul-web javascript Message-ID: Hey everybody! Something just occurred to me while reviewing the next set of patches from Tristan to zuul-web that I think is important to share more broadly. In OpenStack's deployment of Zuul, HTML/JS changes to zuul-web go live basically immediately (give or take 15 minutes) No services need to be restarted or anything. However, while the python code will also be _installed_ at the same time, it will not go live immediately. It's also completely possible for someone to deploy web assets to a separate web server or even an object store like swift. For that reason, I'd like to suggest we make a practice of splitting web changes into patches that add methods to the REST interface on zuul-web, and patches that add Javascript consumption. Then make sure when approving patches that the REST changes have gone live - which will let us verify the draft changes from the build-javascript-content job. We could obviously add feature-flags to the javascript so that the two patches could land together. 
However, I don't think it's time for that yet for two reasons: a) this dashboard is still pretty new and we're still figuring some things out but more importantly b) we can't verify the html/js changes live in the draft change until the REST side has gone live anyway Anyway - I'm sure there will be exceptions where it's fine - so people should use good judgment as always - but at least be aware that landing javascript changes will essentially result in a near-immediate application of them to production for OpenStack and judge accordingly. Monty From fungi at yuggoth.org Thu Mar 8 19:30:05 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Mar 2018 19:30:05 +0000 Subject: Landing changes to zuul-web javascript In-Reply-To: References: Message-ID: <20180308193005.bdpbodzj7a3lrvcw@yuggoth.org> On 2018-03-08 13:21:58 -0600 (-0600), Monty Taylor wrote: [...] > b) we can't verify the html/js changes live in the draft change until the > REST side has gone live anyway [...] Can we add a draft publication job which puts an alternate dashboard on logs.openstack.org pointing at the zuul.openstack.org API? The methods it uses are presumably read-only, so this should be safe in theory right? Then we get something akin to the storyboard-webclient draft builds (without needing to point them at a neutered -dev instance) where we can effectively test out the proposed UI changes before approving. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mordred at inaugust.com Thu Mar 8 19:48:03 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 8 Mar 2018 13:48:03 -0600 Subject: Landing changes to zuul-web javascript In-Reply-To: <20180308193005.bdpbodzj7a3lrvcw@yuggoth.org> References: <20180308193005.bdpbodzj7a3lrvcw@yuggoth.org> Message-ID: <9d19ca77-be2f-fc85-c0d9-5f7551fcbd2e@inaugust.com> On 03/08/2018 01:30 PM, Jeremy Stanley wrote: > On 2018-03-08 13:21:58 -0600 (-0600), Monty Taylor wrote: > [...] >> b) we can't verify the html/js changes live in the draft change until the >> REST side has gone live anyway > [...] > > Can we add a draft publication job which puts an alternate dashboard > on logs.openstack.org pointing at the zuul.openstack.org API? The > methods it uses are presumably read-only, so this should be safe in > theory right? Then we get something akin to the storyboard-webclient > draft builds (without needing to point them at a neutered -dev > instance) where we can effectively test out the proposed UI changes > before approving. Already done - and this is the thing of which I speak: http://logs.openstack.org/71/545671/10/check/build-javascript-content/07bf929/npm/html/status.html The "build-javascript-content" job publishes a draft copy of the nice new shiny javascript/html content - and links to it using success_url. It's set up to point to the live zuul.openstack.org API. The main issue is that it can only point to the live API - so if a change has both python and javascript elements, while we'll publish a draft copy of the web content, the server-side content will only get updated once the python lands and is rolled live - so the draft web changes won't show the new functionality. In fact, the one I linked to above is a javascript change at the end of a stack that includes python changes as well. 
If you click on "projects" - you'll note that it doesn't show anything - because we haven't landed the python Projects API endpoint yet. Monty From fungi at yuggoth.org Thu Mar 8 21:04:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Mar 2018 21:04:22 +0000 Subject: Landing changes to zuul-web javascript In-Reply-To: <9d19ca77-be2f-fc85-c0d9-5f7551fcbd2e@inaugust.com> References: <20180308193005.bdpbodzj7a3lrvcw@yuggoth.org> <9d19ca77-be2f-fc85-c0d9-5f7551fcbd2e@inaugust.com> Message-ID: <20180308210421.3kdlmd357tzjlkgg@yuggoth.org> On 2018-03-08 13:48:03 -0600 (-0600), Monty Taylor wrote: [...] > > Can we add a draft publication job which puts an alternate dashboard > > on logs.openstack.org pointing at the zuul.openstack.org API? [...] > Already done - and this is the thing of which I speak: [...] Awesome! I have nothing constructive to add aside from my wholehearted agreement on this plan. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Fri Mar 9 17:54:07 2018 From: corvus at inaugust.com (James E. Blair) Date: Fri, 09 Mar 2018 09:54:07 -0800 Subject: Landing changes to zuul-web javascript In-Reply-To: (Monty Taylor's message of "Thu, 8 Mar 2018 13:21:58 -0600") References: Message-ID: <877eqljfsw.fsf@meyer.lemoncheese.net> Monty Taylor writes: > For that reason, I'd like to suggest we make a practice of splitting > web changes into patches that add methods to the REST interface on > zuul-web, and patches that add Javascript consumption. Then make sure > when approving patches that the REST changes have gone live - which > will let us verify the draft changes from the build-javascript-content > job. This sounds good -- the only thing I'd add (or, perhaps "emphasize" would be the right word) is that changes which add REST methods should have unit tests which exercise them. That should give us the confidence to land those alone, before the JS changes land. -Jim From tdecacqu at redhat.com Sat Mar 10 05:56:57 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Sat, 10 Mar 2018 05:56:57 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> Message-ID: <1520651673.vknnu6fdkq.tristanC@fedora> On March 7, 2018 3:28 pm, David Shrewsbury wrote: > On Wed, Mar 7, 2018 at 12:38 AM, Tristan Cacqueray > wrote: > >> Thanks for the thorough review David. >> Indeed kazoo Locks are ephemerals and they vanish when the connection is >> lost. >> However the zk-retry patch doesn't seems to affect current behaviors, >> please see my comments inlined bellow and on the nodepool review. > > > Actually, the behavior is very different. See the sample code and output > below. > > > >> >> >> On March 6, 2018 6:38 pm, David Shrewsbury wrote: >> >>> I've played with some test code around this and I feel convinced we could >>> have >>> issues with locks and ephemeral nodes if we are hiding ZK connection >>> problems >>> (particularly session timeouts, which cause us to lose both things) behind >>> retries. >>> >>> Would you mind sharing the test code? > > > Certainly. 
> > This is the script I used that simulates current nodepool behavior > (zk_retry_test1.py): > > http://paste.openstack.org/show/693650/ > > This is the script that simulates your proposed changes (zk_retry_test2.py): > > http://paste.openstack.org/show/693651/ > > Running zk_retry_test1.py locally (and shutting down ZK during the sleep() > call and > leaving it shutdown long enough for the session to expire) produces the > following output > (note the exception): > > http://paste.openstack.org/show/693653/ > > Doing the same with zk_retry_test2.py produces very different output (note > the lack > of the exception): > > http://paste.openstack.org/show/693654/ > > Your proposed changes will hide that SessionExpiredError exception. So if > we have > code depending on that exception bubbling up to know we've lost a lock or an > ephemeral node, it will now be broken with the use of retry(). And there is > nothing > provided in your review that let's us know the session was lost. > Well yes, the goal is to ignore whenever the session or connection is lost. In particular, if the plan is to replace gearman by zookeeper, then we'd better be more resilient to zookeeper exceptions. So, the issue is the kazoo Lock implementation, which uses ephemeral objects that are automatically removed when the session is lost. I've reworked the review to add Lock verification before critical state changes: Instead of inferring Lock status from the connection state there is now a lockCheck method to ensure that the lock is still present. > Looking at some of the builder and launcher code this morning with the lost > session error > freshly in mind, I see lots of code that doesn't even take this into > account properly, so I think > we may be potentially broken in many places with retries or not. :( > > > >> >> Right now we can depend on those connection errors to bubble up to nodepool >>> and zuul as exceptions, and we can assume our locks or ephemeral nodes are >>> gone at that point and handle accordingly. If we now hide those, we need >>> to >>> begin >>> using a listener [1] to know when our session has timed out and set some >>> sort of >>> state that we will have to check at multiple key points. FWIW, I started >>> nodepool >>> v3 off with such a listener and the logic became pretty difficult fairly >>> quickly. >>> >>> The proposal aims at removing those "key points" checks because they add >> complexicity and they look prone to race conditions, for example when >> the connection get lost after the "if self._zk.lost" check. >> > > Well, we HAVE to know when we lost a session. I like that your review > removes > the "wait for the connection to return" pieces. But we still need to know > when the > session is terminated. We can lose a connection, but keep our session. Or > we can > lose a connection and lose our session (if the connection loss was long > enough). > > It seems like we can get around the connection listener logic by checking the ephemeral lock node without relying on the zk.lost property. > >> Looking at all Zookeeper Lock usage, only the imageBuildLock seems to >> benefit from the checks: if a builder lost the connection >> during a build, it will mark the image as FAILED and another builder will >> try >> again. What happens if the other builder succeed before the first one? >> With the zk-retry patch, both builder will mark the image as READY, >> and as far as I can tell, this doesn't seems to be an issue. >> > > That would be an issue, actually. We'd have two builds of the same image. 
> We only keep the 2 most recent builds, so we'd lose our previous image. > If the 2 we just built is a bad image, we'd have nothing to rollback to. > Ok, that makes sense. However I think this behavior is still incorrect if another builder finishes the build before the first builder. It seems like we need to check if the build isn't READY before setting it as FAILED. > > >> >> The other Lock aren't protected, for example multiple builder may start >> an image upload when the connection get lost. The zk-retry patch doesn't >> seem to change the behavior. >> > > If nothing else, I think this discussion has revealed deficiencies in our > code base > around handling session losses (I cite an example below). Our locks are > supposed > to be protected against this but I don't think we've done this well in all > places. Whether > we use retries or not, I think we need to correct this. > > >> Zuul node locks are still an issue, e.g. builds may stay queued >> whenever the zk-retry patch is used. >> Perhaps a follow-up patch could improve that situation, for example >> by re-creating the request when the connection state switch back to >> CONNECTED. >> >> >> If you are running with this code in your environment now, I'd say you've >>> probably >>> just been fairly lucky to not have a ZK session' timeout in just the right >>> place. I found >>> it fairly difficult to get to happen in my tests today, and it was never >>> consistent when >>> I did get it to happen (either something weird with ZK itself, or the >>> kazoo >>> library). >>> >>> Well we used to get those kazoo connection stack traces almost daily, >> and I couldn't find the root cause, ftr here is the setup: >> >> * Instances network setup by Neutron with ovs, >> * CentOS 7.4 kernels, with tcp_ecn sysctl set to 0, and >> * Zookeeper configuration: http://paste.openstack.org/show/693515/ >> >> Though the bottomline is to make Zuul and Nodepool more resilient to >> Zookeeper issues. >> >> What would be the wrong place for a connection lost to happen? > > > Connection loss and session loss are different. A connection loss may or > may not result in a session loss (it depends on how long we lost the > connection). > > An example of a bad place to lose a session would be during node launch. > Right now, if NodeLauncher._launchNode() experiences a session loss, the > SessionExpiredError exception might bubble up and we'd retry the launch. > Now it turns out, we aren't handling that particular exception and we are > still assuming we have lock. This is a huge bug that I've just discovered > thanks > to this discussion because in that situation, we could have an unlocked Node > in BUILDING state, and our cleanup thread could delete that before the > building > thread does something else with it again. > Similarly, I added a new protection before setting a node as READY: the launcher checks the lock, and if it got lost, the node is left as BUILDING to ensure proper cleanup. > So in that example, we must add an except clause to catch that particular > exception and handle it differently. With your proposed changes, we'd need a > way to know our session was lost, and make the same choices for handling it > differently. You're review isn't providing that, but I _can_ see a way > forward for > the current code. > I think the current code isn't ideal because it won't catch connection issues happening after the check for zk.lost. There will always be a window where the connection can get lost between the check and the critical code. 
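For reference, a minimal sketch of the kind of lock verification described here, assuming kazoo's Lock recipe. The helper name is invented and it leans on the recipe's internal path/node attributes, so treat it as an illustration rather than the review's actual lockCheck:

# Best-effort check that a kazoo Lock is still held: the lock's contender is
# an ephemeral sequence znode, so after a session expiry the znode is gone
# even though lock.is_acquired is still True locally.
from kazoo.exceptions import KazooException


def lock_still_held(client, lock):
    """Return True if the ephemeral contender znode behind ``lock`` exists.

    ``lock.path`` and ``lock.node`` are internals of kazoo.recipe.lock.Lock,
    so this is an illustration only.
    """
    if not lock.is_acquired or lock.node is None:
        return False
    try:
        return client.exists("%s/%s" % (lock.path, lock.node)) is not None
    except KazooException:
        # Can't reach ZooKeeper right now: assume the worst.
        return False


# Intended use before a critical state change, e.g. before marking a node
# READY in the launcher:
#
#     if not lock_still_held(zk_client, node_lock):
#         # The session was lost at some point; leave the node BUILDING so
#         # the cleanup thread reaps it instead of publishing it unlocked.
#         return

Such a check narrows, but cannot close, the window mentioned above.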
Alternatively, we could implement persistent Lock using a timestamp and a thread to refresh it. Perhaps this implemention could be added to kazoo codebase directly. > > >> Also, I hope I'm not coming off as difficult around this. I just know how >>> hard it was >>> to get what we have now functioning correctly, so I'm very protective of >>> it. Hopefully >>> we can get this sorted out for you, and we can learn a bunch of stuff >>> along >>> the way. >>> >>> >> Not all, Zookeeper error handling is tricky :) >> Thank you for sharing your valuable feedback. > > > This has been an excellent discussion! > Thank you again for discussing this with me! -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From shrewsbury.dave at gmail.com Mon Mar 12 21:37:29 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 12 Mar 2018 17:37:29 -0400 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1520651673.vknnu6fdkq.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> Message-ID: On Sat, Mar 10, 2018 at 12:56 AM, Tristan Cacqueray wrote: > On March 7, 2018 3:28 pm, David Shrewsbury wrote: > >> On Wed, Mar 7, 2018 at 12:38 AM, Tristan Cacqueray >> wrote: >> >> Thanks for the thorough review David. >>> Indeed kazoo Locks are ephemerals and they vanish when the connection is >>> lost. >>> However the zk-retry patch doesn't seems to affect current behaviors, >>> please see my comments inlined bellow and on the nodepool review. >>> >> >> >> Actually, the behavior is very different. See the sample code and output >> below. >> >> >> >> >>> >>> On March 6, 2018 6:38 pm, David Shrewsbury wrote: >>> >>> I've played with some test code around this and I feel convinced we could >>>> have >>>> issues with locks and ephemeral nodes if we are hiding ZK connection >>>> problems >>>> (particularly session timeouts, which cause us to lose both things) >>>> behind >>>> retries. >>>> >>>> Would you mind sharing the test code? >>>> >>> >> >> Certainly. >> >> This is the script I used that simulates current nodepool behavior >> (zk_retry_test1.py): >> >> http://paste.openstack.org/show/693650/ >> >> This is the script that simulates your proposed changes >> (zk_retry_test2.py): >> >> http://paste.openstack.org/show/693651/ >> >> Running zk_retry_test1.py locally (and shutting down ZK during the sleep() >> call and >> leaving it shutdown long enough for the session to expire) produces the >> following output >> (note the exception): >> >> http://paste.openstack.org/show/693653/ >> >> Doing the same with zk_retry_test2.py produces very different output (note >> the lack >> of the exception): >> >> http://paste.openstack.org/show/693654/ >> >> Your proposed changes will hide that SessionExpiredError exception. So if >> we have >> code depending on that exception bubbling up to know we've lost a lock or >> an >> ephemeral node, it will now be broken with the use of retry(). And there >> is >> nothing >> provided in your review that let's us know the session was lost. >> >> Well yes, the goal is to ignore whenever the session or connection is > lost. > In particular, if the plan is to replace gearman by zookeeper, then we'd > better be more resilient to zookeeper exceptions. 
> > So, the issue is the kazoo Lock implementation, which uses ephemeral > objects > that are automatically removed when the session is lost. I've reworked > the review to add Lock verification before critical state changes: > Instead of inferring Lock status from the connection state there is now a > lockCheck method to ensure that the lock is still present. > > > Looking at some of the builder and launcher code this morning with the lost >> session error >> freshly in mind, I see lots of code that doesn't even take this into >> account properly, so I think >> we may be potentially broken in many places with retries or not. :( >> >> >> >> >>> Right now we can depend on those connection errors to bubble up to >>> nodepool >>> >>>> and zuul as exceptions, and we can assume our locks or ephemeral nodes >>>> are >>>> gone at that point and handle accordingly. If we now hide those, we need >>>> to >>>> begin >>>> using a listener [1] to know when our session has timed out and set some >>>> sort of >>>> state that we will have to check at multiple key points. FWIW, I started >>>> nodepool >>>> v3 off with such a listener and the logic became pretty difficult fairly >>>> quickly. >>>> >>>> The proposal aims at removing those "key points" checks because they add >>>> >>> complexicity and they look prone to race conditions, for example when >>> the connection get lost after the "if self._zk.lost" check. >>> >>> >> Well, we HAVE to know when we lost a session. I like that your review >> removes >> the "wait for the connection to return" pieces. But we still need to know >> when the >> session is terminated. We can lose a connection, but keep our session. Or >> we can >> lose a connection and lose our session (if the connection loss was long >> enough). >> >> >> It seems like we can get around the connection listener logic by > checking the ephemeral lock node without relying on the zk.lost property. > > >> Looking at all Zookeeper Lock usage, only the imageBuildLock seems to >>> benefit from the checks: if a builder lost the connection >>> during a build, it will mark the image as FAILED and another builder will >>> try >>> again. What happens if the other builder succeed before the first one? >>> With the zk-retry patch, both builder will mark the image as READY, >>> and as far as I can tell, this doesn't seems to be an issue. >>> >>> >> That would be an issue, actually. We'd have two builds of the same image. >> We only keep the 2 most recent builds, so we'd lose our previous image. >> If the 2 we just built is a bad image, we'd have nothing to rollback to. >> >> Ok, that makes sense. However I think this behavior is still incorrect if > another builder finishes the build before the first builder. It seems like > we need to check if the build isn't READY before setting it as FAILED. I don't get your thinking here. We should NEVER have multiple builders building the same image. That's the whole reason for the image build lock. [snip] This email is simply an FYI on where my thinking is on this right now. I'm still mulling over the options and waiting for feedback from others (particularly Jim). These are the things I know: 1) A lot of our code has been depending on exceptions being thrown from ZK to decide to abort certain operations. Session errors being among the expected exceptions. 2) We have places in current code that don't even consider session errors, and this needs to be corrected. Very much for sad making. 
:( 3) You have provided a way to both use retry() AND examine locks to determine if we have lost a particular lock. (Still need something similar for ephemeral nodes, but that's easy.) 4) When considering how to move forward with fixing 1 and 2 to support your change, where we used to be able to surround a critical section of code with try-except to handle session errors, we now have to sprinkle in checks for locks and ephemeral nodes. Which is better? My gut is telling me that try-except would be much nicer to use than liberally adding lock/ephemeral node checks after each ZK API call. But, since our current session handling is poor to non-existent in places, I haven't decided if this is true or not. Here is where I'd like to get opinions from other folks. -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tobias.Henkel at bmw.de Tue Mar 13 14:15:26 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Tue, 13 Mar 2018 14:15:26 +0000 Subject: Zuul meeting time Message-ID: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> Hi zuul community, Currently the zuul meeting is at 22UTC which is currently 23:00 localtime or soon 0:00 within summer time at my location. That makes it impossible for me to participate regularly. However I’d really like to attend the zuul meeting. So I’d like to ask zuul folks about your opinion about shifting it around 20ish UTC. I wouldn’t have any strong opinion about the day. Thanks and kind regards Tobias -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Mar 13 14:43:35 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Mar 2018 07:43:35 -0700 Subject: Zuul meeting time In-Reply-To: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> (Tobias Henkel's message of "Tue, 13 Mar 2018 14:15:26 +0000") References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> Message-ID: <874llkxch4.fsf@meyer.lemoncheese.net> writes: > Hi zuul community, > > Currently the zuul meeting is at 22UTC which is currently 23:00 > localtime or soon 0:00 within summer time at my location. That makes > it impossible for me to participate regularly. However I’d really like > to attend the zuul meeting. So I’d like to ask zuul folks about your > opinion about shifting it around 20ish UTC. I wouldn’t have any strong > opinion about the day. I'd be happy with 20:00 utc. It's one of the least-worst times we've come up with in OpenStack, making it equally moderately painful for APAC and EMEA. We can keep it on Monday, or we have the option of changing to Tuesday. That would place it immediately after the OpenStack infra meeting. -Jim From fungi at yuggoth.org Tue Mar 13 14:55:31 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 13 Mar 2018 14:55:31 +0000 Subject: Zuul meeting time In-Reply-To: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> Message-ID: <20180313145529.epubrt577nvicyyc@yuggoth.org> On 2018-03-13 14:15:26 +0000 (+0000), Tobias.Henkel at bmw.de wrote: > Currently the zuul meeting is at 22UTC which is currently 23:00 > localtime or soon 0:00 within summer time at my location. That > makes it impossible for me to participate regularly. However I’d > really like to attend the zuul meeting. So I’d like to ask zuul > folks about your opinion about shifting it around 20ish UTC. I > wouldn’t have any strong opinion about the day. 
I have no objection to 20:00 UTC (no regularly-scheduled conflicts for that time on any day of the week), but it's also only 3pm or 4pm local time for me. I'm more concerned with how it might impact APAC attendees and as such am particularly interested in their input. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pabelanger at redhat.com Tue Mar 13 14:56:54 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 13 Mar 2018 10:56:54 -0400 Subject: Zuul meeting time In-Reply-To: <874llkxch4.fsf@meyer.lemoncheese.net> References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> <874llkxch4.fsf@meyer.lemoncheese.net> Message-ID: <20180313145654.GA21469@localhost.localdomain> On Tue, Mar 13, 2018 at 07:43:35AM -0700, James E. Blair wrote: > writes: > > > Hi zuul community, > > > > Currently the zuul meeting is at 22UTC which is currently 23:00 > > localtime or soon 0:00 within summer time at my location. That makes > > it impossible for me to participate regularly. However I’d really like > > to attend the zuul meeting. So I’d like to ask zuul folks about your > > opinion about shifting it around 20ish UTC. I wouldn’t have any strong > > opinion about the day. > > I'd be happy with 20:00 utc. It's one of the least-worst times we've > come up with in OpenStack, making it equally moderately painful for APAC > and EMEA. > > We can keep it on Monday, or we have the option of changing to Tuesday. > That would place it immediately after the OpenStack infra meeting. > > -Jim > +1 to 22:00UTC, I also don't mind if we move it to Tuesday. It actually helps move my meetings to a single day. Paul From corvus at inaugust.com Tue Mar 13 15:02:28 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Mar 2018 08:02:28 -0700 Subject: Zuul meeting time In-Reply-To: <20180313145654.GA21469@localhost.localdomain> (Paul Belanger's message of "Tue, 13 Mar 2018 10:56:54 -0400") References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> <874llkxch4.fsf@meyer.lemoncheese.net> <20180313145654.GA21469@localhost.localdomain> Message-ID: <87muzcvx17.fsf@meyer.lemoncheese.net> Paul Belanger writes: > On Tue, Mar 13, 2018 at 07:43:35AM -0700, James E. Blair wrote: >> writes: >> >> > Hi zuul community, >> > >> > Currently the zuul meeting is at 22UTC which is currently 23:00 >> > localtime or soon 0:00 within summer time at my location. That makes >> > it impossible for me to participate regularly. However I’d really like >> > to attend the zuul meeting. So I’d like to ask zuul folks about your >> > opinion about shifting it around 20ish UTC. I wouldn’t have any strong >> > opinion about the day. >> >> I'd be happy with 20:00 utc. It's one of the least-worst times we've >> come up with in OpenStack, making it equally moderately painful for APAC >> and EMEA. >> >> We can keep it on Monday, or we have the option of changing to Tuesday. >> That would place it immediately after the OpenStack infra meeting. >> >> -Jim >> > +1 to 22:00UTC, I also don't mind if we move it to Tuesday. It actually helps > move my meetings to a single day. 22:00 is the current time -- did you mean to indicate you wanted to keep it at that time, or that you were okay with the proposed move to 20:00? 
-Jim From shrewsbury.dave at gmail.com Tue Mar 13 15:32:11 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 13 Mar 2018 11:32:11 -0400 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> Message-ID: On Mon, Mar 12, 2018 at 5:37 PM, David Shrewsbury wrote: [snip] 4) When considering how to move forward with fixing 1 and 2 to support > your change, where we used to be able to surround a critical section of > code with try-except to handle session errors, we now have to sprinkle in > checks for locks and ephemeral nodes. Which is better? My gut is > telling me that try-except would be much nicer to use than liberally adding > lock/ephemeral node checks after each ZK API call. But, since our current > session handling is poor to non-existent in places, I haven't decided if > this is true or not. Here is where I'd like to get opinions from other > folks. > To help me (and others) wrap my head around this, here are two solutions using both methods to the bug discussed earlier. (I'm a visual learner and needed to actually see this to form an opinion). This is using the current (not-retry) code: https://review.openstack.org/552538 This is based off your proposed change to use retry: https://review.openstack.org/552551 Note that in the 2nd review, we now have to scatter lock checks before any attempts to store ZooKeeper data within our critical section. Where as in the first, we can just wrap the entire critical section with a try-except. Based on this, I feel like using retry() is going to force us to add much more code, and will end up being more error prone since it adds places where we must check for session loss that would affect locks and ephemeral nodes. I feel like maybe we could use retry() for the initial connection to eliminate the "zookeeper not running before nodepool" startup problem. But I'm not feeling good about using it throughout the rest of the code. -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Tue Mar 13 15:36:48 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 13 Mar 2018 11:36:48 -0400 Subject: Zuul meeting time In-Reply-To: <87muzcvx17.fsf@meyer.lemoncheese.net> References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> <874llkxch4.fsf@meyer.lemoncheese.net> <20180313145654.GA21469@localhost.localdomain> <87muzcvx17.fsf@meyer.lemoncheese.net> Message-ID: <20180313153648.GB21469@localhost.localdomain> On Tue, Mar 13, 2018 at 08:02:28AM -0700, James E. Blair wrote: > Paul Belanger writes: > > > On Tue, Mar 13, 2018 at 07:43:35AM -0700, James E. Blair wrote: > >> writes: > >> > >> > Hi zuul community, > >> > > >> > Currently the zuul meeting is at 22UTC which is currently 23:00 > >> > localtime or soon 0:00 within summer time at my location. That makes > >> > it impossible for me to participate regularly. However I’d really like > >> > to attend the zuul meeting. So I’d like to ask zuul folks about your > >> > opinion about shifting it around 20ish UTC. I wouldn’t have any strong > >> > opinion about the day. > >> > >> I'd be happy with 20:00 utc. It's one of the least-worst times we've > >> come up with in OpenStack, making it equally moderately painful for APAC > >> and EMEA. > >> > >> We can keep it on Monday, or we have the option of changing to Tuesday. 
> >> That would place it immediately after the OpenStack infra meeting. > >> > >> -Jim > >> > > +1 to 22:00UTC, I also don't mind if we move it to Tuesday. It actually helps > > move my meetings to a single day. > > 22:00 is the current time -- did you mean to indicate you wanted to keep > it at that time, or that you were okay with the proposed move to 20:00? > Apologies, cold medication. 20:00UTC also works for me. From corvus at inaugust.com Tue Mar 13 16:37:24 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Mar 2018 09:37:24 -0700 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: (David Shrewsbury's message of "Mon, 12 Mar 2018 17:37:29 -0400") References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> Message-ID: <87y3iwszi3.fsf@meyer.lemoncheese.net> David Shrewsbury writes: > This email is simply an FYI on where my thinking is on this right now. > I'm still mulling over the options and waiting for feedback from others > (particularly Jim). > > These are the things I know: > > 1) A lot of our code has been depending on exceptions being thrown > from ZK to decide to abort certain operations. Session errors being > among the expected exceptions. > > 2) We have places in current code that don't even consider session > errors, and this needs to be corrected. Very much for sad making. :( > > 3) You have provided a way to both use retry() AND examine locks to > determine if we have lost a particular lock. (Still need something similar > for ephemeral nodes, but that's easy.) > > 4) When considering how to move forward with fixing 1 and 2 to support > your change, where we used to be able to surround a critical section of > code with try-except to handle session errors, we now have to sprinkle in > checks for locks and ephemeral nodes. Which is better? My gut is > telling me that try-except would be much nicer to use than liberally adding > lock/ephemeral node checks after each ZK API call. But, since our current > session handling is poor to non-existent in places, I haven't decided if > this is true or not. Here is where I'd like to get opinions from other > folks. My inclination is to prefer coarse error handling. In general, the approach we take in Nodepool is to assume that every call we make can throw an exception, and then some level higher up in the call stack will deal with it (abort the build, mark a node as deleted, etc). That comes from our experience working with clouds, but it can apply equally to interacting with any system. The benefit is that it usually makes code easier to follow for the reasons David mentions. Having to remember to check something after every operation is error-prone (and those are the kinds of errors that lead us to prefer language with exception handling). However, I really like the fact that Tristan's patch removes some explicit session handling code. Removing the wait-for-zookeeper sections is very nice. Though maybe we could just refactor them into a common method. One thing that we haven't discussed is the effect on program flow. In nodepool, we have a large number of threads interacting with zookeeper, and in general, it's probably okay for them to pause while retrying some event. However, in Zuul, the only thread interacting with zookeeper is the main scheduler thread. 
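To make the two styles being weighed in this thread concrete, here is a rough sketch; the paths, data values and cleanup step are made up for illustration, and this is not the actual nodepool code:

    from kazoo.client import KazooClient
    from kazoo.exceptions import ConnectionLoss, SessionExpiredError

    zk = KazooClient(hosts='zk01.example.com:2181')
    zk.start()
    zk.ensure_path('/nodepool/example/data')
    lock = zk.Lock('/nodepool/example/lock')

    # Coarse style: wrap the whole critical section; a session error aborts
    # the operation and some higher level re-queues or cleans up the work.
    try:
        with lock:
            zk.set('/nodepool/example/data', b'building')
            zk.set('/nodepool/example/data', b'ready')
    except (ConnectionLoss, SessionExpiredError):
        pass  # e.g. mark the item failed so another worker retries it

    # Retry style: retry() hides those exceptions, so before each write the
    # code has to verify that the ephemeral lock znode is still present on
    # the server (the lock-check idea discussed earlier in this thread).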
If we adopt retries in Zuul, that thread will end up getting stuck waiting for zookeeper, whereas it could be busy reporting results or canceling jobs. I've learned a lot reading this thread and the patches. I'm happy to continue exploring ways of improving our zookeeper handling. -Jim From corvus at inaugust.com Tue Mar 13 20:33:42 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Mar 2018 13:33:42 -0700 Subject: Disabling nullwrap Message-ID: <871sgnsok9.fsf@meyer.lemoncheese.net> Hi, The recent security fixes have underscored that bubblewrap is currently key to Zuul executor security. While Zuul does have an option to run without bubblewrap (called "nullwrap"), it is not recommended, and it doesn't support secrets, which is a fairly important feature. Nullwrap was written with the idea of aiding those use-cases where it is not possible to run with bubblewrap. However, I think that if such a case still exists, we should work to address that -- nullwrap really shouldn't be the answer. I'd like to disable the ability to use nullwrap to avoid any potential confusion on the subject in the future. If anyone is currently using nullwrap, please speak up so we can start working on a solution. -Jim From clint at fewbar.com Tue Mar 13 20:50:17 2018 From: clint at fewbar.com (Clint Byrum) Date: Tue, 13 Mar 2018 13:50:17 -0700 Subject: Disabling nullwrap In-Reply-To: <871sgnsok9.fsf@meyer.lemoncheese.net> References: <871sgnsok9.fsf@meyer.lemoncheese.net> Message-ID: <1520974148-sup-7367@fewbar.com> Excerpts from corvus's message of 2018-03-13 13:33:42 -0700: > Hi, > > The recent security fixes have underscored that bubblewrap is currently > key to Zuul executor security. While Zuul does have an option to run > without bubblewrap (called "nullwrap"), it is not recommended, and it > doesn't support secrets, which is a fairly important feature. > > Nullwrap was written with the idea of aiding those use-cases where it is > not possible to run with bubblewrap. However, I think that if such a > case still exists, we should work to address that -- nullwrap really > shouldn't be the answer. > > I'd like to disable the ability to use nullwrap to avoid any potential > confusion on the subject in the future. > > If anyone is currently using nullwrap, please speak up so we can start > working on a solution. > +1 from me. I had created nullwrap originally when I had some difficulty testing with bubblewrap, but time has passed and it is now easier to get in most Linux distros. From pabelanger at redhat.com Tue Mar 13 20:55:57 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 13 Mar 2018 16:55:57 -0400 Subject: Disabling nullwrap In-Reply-To: <871sgnsok9.fsf@meyer.lemoncheese.net> References: <871sgnsok9.fsf@meyer.lemoncheese.net> Message-ID: <20180313205557.GA20606@localhost.localdomain> On Tue, Mar 13, 2018 at 01:33:42PM -0700, James E. Blair wrote: > Hi, > > The recent security fixes have underscored that bubblewrap is currently > key to Zuul executor security. While Zuul does have an option to run > without bubblewrap (called "nullwrap"), it is not recommended, and it > doesn't support secrets, which is a fairly important feature. > > Nullwrap was written with the idea of aiding those use-cases where it is > not possible to run with bubblewrap. However, I think that if such a > case still exists, we should work to address that -- nullwrap really > shouldn't be the answer. 
> > I'd like to disable the ability to use nullwrap to avoid any potential > confusion on the subject in the future. > +1 it does seem most Linux distro do support bubblewrap packages now. > If anyone is currently using nullwrap, please speak up so we can start > working on a solution. > I want to say I only heard of one place (OpenShift Online) where bubblewrap had to be disabled. I haven't confirmed that myself however. > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From tdecacqu at redhat.com Wed Mar 14 01:05:05 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 14 Mar 2018 01:05:05 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <87y3iwszi3.fsf@meyer.lemoncheese.net> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> <87y3iwszi3.fsf@meyer.lemoncheese.net> Message-ID: <1520986087.e1akjswbaa.tristanC@fedora> Hello, Looking at our ancient logs (Nov 2017 before we start using zk-retry), we actually only had ConnectionLoss at the begining of the run() code, here is a collection of them: http://paste.openstack.org/show/700703/ I wonder if nodepool.openstack.org is avoiding them because it is such a busy system. It would be useful if other deployments looked for ConnectionLoss and SessionExpired in their logs and share the stacktraces. Please find more comments inlined below. On March 13, 2018 4:37 pm, James E. Blair wrote: > David Shrewsbury writes: > >> This email is simply an FYI on where my thinking is on this right now. >> I'm still mulling over the options and waiting for feedback from others >> (particularly Jim). >> >> These are the things I know: >> >> 1) A lot of our code has been depending on exceptions being thrown >> from ZK to decide to abort certain operations. Session errors being >> among the expected exceptions. >> Last PS makes the checkLock method raise the SessionExpired which seems to provide the same behavior. >> 2) We have places in current code that don't even consider session >> errors, and this needs to be corrected. Very much for sad making. :( >> >> 3) You have provided a way to both use retry() AND examine locks to >> determine if we have lost a particular lock. (Still need something similar >> for ephemeral nodes, but that's easy.) >> It seems more robus than relying on second hand informations from the connection listener. >> 4) When considering how to move forward with fixing 1 and 2 to support >> your change, where we used to be able to surround a critical section of >> code with try-except to handle session errors, we now have to sprinkle in >> checks for locks and ephemeral nodes. Which is better? My gut is >> telling me that try-except would be much nicer to use than liberally adding >> lock/ephemeral node checks after each ZK API call. But, since our current >> session handling is poor to non-existent in places, I haven't decided if >> this is true or not. Here is where I'd like to get opinions from other >> folks. > > My inclination is to prefer coarse error handling. In general, the > approach we take in Nodepool is to assume that every call we make can > throw an exception, and then some level higher up in the call stack will > deal with it (abort the build, mark a node as deleted, etc). 
That comes > from our experience working with clouds, but it can apply equally to > interacting with any system. The benefit is that it usually makes code > easier to follow for the reasons David mentions. Having to remember to > check something after every operation is error-prone (and those are the > kinds of errors that lead us to prefer language with exception > handling). > Agreed, but why the builder isn't relying on exception handling here: http://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/builder.py#n789 I actually implemented manual check because of that builder use-case, though this can be easily modified to raise exceptions instead. > However, I really like the fact that Tristan's patch removes some > explicit session handling code. Removing the wait-for-zookeeper > sections is very nice. Though maybe we could just refactor them into a > common method. > I still don't understand the logic of checking if the connection has been lost. What happens if the connection get lost after the wait-for-zookeeper sections? > One thing that we haven't discussed is the effect on program flow. In > nodepool, we have a large number of threads interacting with zookeeper, > and in general, it's probably okay for them to pause while retrying some > event. However, in Zuul, the only thread interacting with zookeeper is > the main scheduler thread. If we adopt retries in Zuul, that thread > will end up getting stuck waiting for zookeeper, whereas it could be > busy reporting results or canceling jobs. > Actually we haven't discussed the effect of multiple thread experiencing SessionExpired errors, will kazoo raise the exception for each threads, or will only one get it? Looking at the builder code linked above, it seems like the first thread may resetLostFlag(). > I've learned a lot reading this thread and the patches. I'm happy to > continue exploring ways of improving our zookeeper handling. > > -Jim > Please have a look again at 535537 (PS5), I think it implements the best of both strategies: it removes the wait-for-zookeeper sections while still using the SessionExpired exception. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From joshua.hesketh at gmail.com Wed Mar 14 03:15:01 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Wed, 14 Mar 2018 14:15:01 +1100 Subject: Zuul meeting time In-Reply-To: <20180313153648.GB21469@localhost.localdomain> References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> <874llkxch4.fsf@meyer.lemoncheese.net> <20180313145654.GA21469@localhost.localdomain> <87muzcvx17.fsf@meyer.lemoncheese.net> <20180313153648.GB21469@localhost.localdomain> Message-ID: Hey, 20:00UTC would be 6am for eastern Australia once daylight savings ends in a few weeks. This is not ideal for me and I would selfishly prefer to leave it, but since I have been less active in general perhaps it is better to optimise for those doing significant work. That said, perhaps we could look at moving it to 21:00UTC, which would make it 7am for myself and hopefully not too late for others. Cheers, Josh On Wed, Mar 14, 2018 at 2:36 AM, Paul Belanger wrote: > On Tue, Mar 13, 2018 at 08:02:28AM -0700, James E. Blair wrote: > > Paul Belanger writes: > > > > > On Tue, Mar 13, 2018 at 07:43:35AM -0700, James E. 
Blair wrote: > > >> writes: > > >> > > >> > Hi zuul community, > > >> > > > >> > Currently the zuul meeting is at 22UTC which is currently 23:00 > > >> > localtime or soon 0:00 within summer time at my location. That makes > > >> > it impossible for me to participate regularly. However I’d really > like > > >> > to attend the zuul meeting. So I’d like to ask zuul folks about your > > >> > opinion about shifting it around 20ish UTC. I wouldn’t have any > strong > > >> > opinion about the day. > > >> > > >> I'd be happy with 20:00 utc. It's one of the least-worst times we've > > >> come up with in OpenStack, making it equally moderately painful for > APAC > > >> and EMEA. > > >> > > >> We can keep it on Monday, or we have the option of changing to > Tuesday. > > >> That would place it immediately after the OpenStack infra meeting. > > >> > > >> -Jim > > >> > > > +1 to 22:00UTC, I also don't mind if we move it to Tuesday. It > actually helps > > > move my meetings to a single day. > > > > 22:00 is the current time -- did you mean to indicate you wanted to keep > > it at that time, or that you were okay with the proposed move to 20:00? > > > Apologies, cold medication. 20:00UTC also works for me. > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Wed Mar 14 03:32:46 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 14 Mar 2018 03:32:46 +0000 Subject: Disabling nullwrap In-Reply-To: <20180313205557.GA20606@localhost.localdomain> References: <871sgnsok9.fsf@meyer.lemoncheese.net> <20180313205557.GA20606@localhost.localdomain> Message-ID: <1520997724.3tm19tpo9a.tristanC@fedora> On March 13, 2018 8:55 pm, Paul Belanger wrote: > On Tue, Mar 13, 2018 at 01:33:42PM -0700, James E. Blair wrote: >> Hi, >> >> The recent security fixes have underscored that bubblewrap is currently >> key to Zuul executor security. While Zuul does have an option to run >> without bubblewrap (called "nullwrap"), it is not recommended, and it >> doesn't support secrets, which is a fairly important feature. >> >> Nullwrap was written with the idea of aiding those use-cases where it is >> not possible to run with bubblewrap. However, I think that if such a >> case still exists, we should work to address that -- nullwrap really >> shouldn't be the answer. >> >> I'd like to disable the ability to use nullwrap to avoid any potential >> confusion on the subject in the future. >> > +1 it does seem most Linux distro do support bubblewrap packages now. > >> If anyone is currently using nullwrap, please speak up so we can start >> working on a solution. >> > I want to say I only heard of one place (OpenShift Online) where bubblewrap had > to be disabled. I haven't confirmed that myself however. > Hi, Bubblewrap currently does require a privileged container to create new namespaces. With containers we are supposed to split each context into dedicated containers as opposed to multiple differents process running in the same container. I am not sure how feasable this is with the driver interface where we need to share execution data between each play run. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From shrewsbury.dave at gmail.com Wed Mar 14 17:10:27 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Wed, 14 Mar 2018 13:10:27 -0400 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1520986087.e1akjswbaa.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> <87y3iwszi3.fsf@meyer.lemoncheese.net> <1520986087.e1akjswbaa.tristanC@fedora> Message-ID: On Tue, Mar 13, 2018 at 9:05 PM, Tristan Cacqueray wrote: [snip[ >> Please have a look again at 535537 (PS5), I think it implements the best > of both strategies: it removes the wait-for-zookeeper sections while still > using the SessionExpired exception. Consolidating the lock checks into the write methods works since we only need to make sure that we still have the lock before we update the ZK database. This doesn't work for ephemeral nodes though. How would you propose we address that? -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Fri Mar 16 01:46:04 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 16 Mar 2018 01:46:04 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> <87y3iwszi3.fsf@meyer.lemoncheese.net> <1520986087.e1akjswbaa.tristanC@fedora> Message-ID: <1521163612.mijxhfgki1.tristanC@fedora> On March 14, 2018 5:10 pm, David Shrewsbury wrote: > On Tue, Mar 13, 2018 at 9:05 PM, Tristan Cacqueray > wrote: > > [snip[ > > >>> Please have a look again at 535537 (PS5), I think it implements the best >> of both strategies: it removes the wait-for-zookeeper sections while still >> using the SessionExpired exception. > > > Consolidating the lock checks into the write methods works since we only > need to make sure that we still have the lock before we update the ZK > database. This doesn't work for ephemeral nodes though. How would you > propose we address that? > > Nodepool uses 2 ephemeral nodes: * launcher registration, which is auto-recreated at the begining of the run loop * Min-Ready NodeRequest, which doesn't seems to be an issue if lost, the launcher will recreate them. Zuul only uses 1 ephemeral node for NodeRequest. It is currently not protected (with or without the zk-retry patch): if the session is lost, then NodeRequest are not re-created and queue items may get stuck in the queued state. I think we could improve this by storing the request in a local list, and have a checkRequests method to re-create them when this happens. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From Tobias.Henkel at bmw.de Fri Mar 16 08:10:33 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Fri, 16 Mar 2018 08:10:33 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1521163612.mijxhfgki1.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> <87y3iwszi3.fsf@meyer.lemoncheese.net> <1520986087.e1akjswbaa.tristanC@fedora> <1521163612.mijxhfgki1.tristanC@fedora> Message-ID: On 16.03.18, 02:45, "Tristan Cacqueray" wrote: Nodepool uses 2 ephemeral nodes: * launcher registration, which is auto-recreated at the begining of the run loop * Min-Ready NodeRequest, which doesn't seems to be an issue if lost, the launcher will recreate them. Zuul only uses 1 ephemeral node for NodeRequest. It is currently not protected (with or without the zk-retry patch): if the session is lost, then NodeRequest are not re-created and queue items may get stuck in the queued state. I think we could improve this by storing the request in a local list, and have a checkRequests method to re-create them when this happens. -Tristan Regarding the NodeRequest in zuul there is a change [1] in review to react on lost node requests. [1] https://review.openstack.org/537335 - Tobias From tdecacqu at redhat.com Mon Mar 19 02:20:22 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Mon, 19 Mar 2018 02:20:22 +0000 Subject: Services missing tenant scope (nodepool-*, zuul-executor) Message-ID: <1521425389.tf2bg9ecc0.tristanC@fedora> Hello folks, Is there a plan to add a tenant scope to all the services? For example, we could add the tenant name to NodeRequest and a 'tenant' setting in nodepool.yaml so that a provider and its labels could be restricted to a particular tenant. Similarly for the executors, we could be able to isolate tenant job execution from each other. Has this been discussed at the PTG? Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From Tobias.Henkel at bmw.de Mon Mar 19 06:51:45 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Mon, 19 Mar 2018 06:51:45 +0000 Subject: Services missing tenant scope (nodepool-*, zuul-executor) In-Reply-To: <1521425389.tf2bg9ecc0.tristanC@fedora> References: <1521425389.tf2bg9ecc0.tristanC@fedora> Message-ID: Hi, On 19.03.18, 03:19, "Tristan Cacqueray" wrote: Hello folks, Is there a plan to add a tenant scope to all the services? For example, we could add the tenant name to NodeRequest and a 'tenant' setting in nodepool.yaml so that a provider and its labels could be restricted to a particular tenant. Similarly for the executors, we could be able to isolate tenant job execution from each other. Has this been discussed at the PTG? Regards, -Tristan Regarding tenant label restriction we had a discussion at the PTG [1]. We Discussed two options, making nodepool tenant aware and defining the label restrictions in the zuul tenant config. I'm not sure if there was a final decision (at least the protocol doesn't mention a decision). 
But defining these restrictions in zuul seem easier to implement to me and further adds the information about tenants in a single specific location so you don't have to mess with tenant config and nodepool config to allow/disallow labels for tenants. So I'm in favor of handling that use case in the tenant config. Regarding the executors there was a discussion about region-support for executors (which you might be able to exploit to isolate executor tenants from each other). Check out the topic ' Zuul executor affinity' for this. I'm interested in your use case about this as the zuul-executor already has a two-barrier (ansible restrictions and bwrap) approach for isolating jobs (and thus tenants) against each other. [1] https://etherpad.openstack.org/p/infra-rocky-ptg Kind regards Tobias From tdecacqu at redhat.com Mon Mar 19 08:03:23 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Mon, 19 Mar 2018 08:03:23 +0000 Subject: Services missing tenant scope (nodepool-*, zuul-executor) In-Reply-To: References: <1521425389.tf2bg9ecc0.tristanC@fedora> Message-ID: <1521445551.receronx60.tristanC@fedora> Hi Tobias, thanks for your feedback, please find some comments inlined bellow. On March 19, 2018 6:51 am, Tobias.Henkel at bmw.de wrote: > Hi, > > On 19.03.18, 03:19, "Tristan Cacqueray" wrote: > > Hello folks, > > Is there a plan to add a tenant scope to all the services? > > For example, we could add the tenant name to NodeRequest and a 'tenant' > setting in nodepool.yaml so that a provider and its labels could be > restricted to a particular tenant. > > Similarly for the executors, we could be able to isolate tenant job > execution from each other. > > > Has this been discussed at the PTG? > > Regards, > -Tristan > > Regarding tenant label restriction we had a discussion at the PTG [1]. We > Discussed two options, making nodepool tenant aware and defining the > label restrictions in the zuul tenant config. I'm not sure if there was a > final decision (at least the protocol doesn't mention a decision). But > defining these restrictions in zuul seem easier to implement to me and > further adds the information about tenants in a single specific location > so you don't have to mess with tenant config and nodepool config to > allow/disallow labels for tenants. So I'm in favor of handling that use > case in the tenant config. > It seems worth investigating nodepool tenant awareness for more flexibility. For example, the 'tenant' setting in nodepool.yaml could be applied at the top-level (the whole process would only respond to a tenant request), or at the provider level (e.g. a provider would only serve a tenant) or at the label level (e.g. a label would only be available to a tenant). The use-case would be to support a tenant who provides its own cloud so that its resources consuption would not impact the capacity of the other tenants. We could simulate that with label restriction, but it seems less elegant to create new labels per providers. > Regarding the executors there was a discussion about region-support for > executors (which you might be able to exploit to isolate executor tenants > from each other). Check out the topic ' Zuul executor affinity' for this. > I'm interested in your use case about this as the zuul-executor > already has a two-barrier (ansible restrictions and bwrap) approach for > isolating jobs (and thus tenants) against each other. > The region-support for executors seems to address location isolation. 
I would like to be able to isolate executor between tenant to prevent one executor break out to affect other tenants. It would let us better dispatch the load too. > [1] https://etherpad.openstack.org/p/infra-rocky-ptg > I can't tell if those are notes or is this already part of the roadmap? Thanks, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mrhillsman at gmail.com Wed Mar 21 13:39:53 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 21 Mar 2018 08:39:53 -0500 Subject: [OpenStack-Infra] Zuul: Question about Github webhook In-Reply-To: <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> References: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> Message-ID: I also have went through the "ZFS" at least once and ran into quite a few issues I think and started documenting[0] them. Bernd not sure if you have run into additional issues but feel free to add to the etherpad; it is not organized at all right now but the plan was/is to report back a list of things and patch(es). [0] https://etherpad.openstack.org/p/zuul-quickstart-and-zfs(zuul-from-scratch) On Wed, Mar 21, 2018 at 3:26 AM, wrote: > Hi Bernd, > > On 21.03.18, 08:54, "Bernd Bausch" wrote: > > Sorry for this newbie intrusion on the distribution list. I am trying > to > follow the Zuul from Scratch instructions[1] to set up Zuul at home. > The > purpose is twofold: Learn about Zuul, and check whether Zuul from > Scratch > instructions are accurate and make sense. > > Currently I am at a roadblock. > > The problem is the Github webhook, through which Github sends data to > Zuul. > It should be > http:///connection/github/payload [2], but on the Zuul > server > that > I have deployed so far, nothing listens at port 80. Thus, Github gets > a "connection refused" when trying to contact my server. I used > tcpdump to > confirm that my server does indeed receive an http POST from Github. > > Either something is wrong with my setup, which prevents Zuul from > listening > at > 80. Or the webhook isn't documented correctly. > > How can I confirm what could be the problem? > > Bernd > > [1] https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html > [2] https://docs.openstack.org/infra/zuul/admin/drivers/github.html, > https://docs.openstack.org/infra/zuul/admin/zuul-from- > scratch.html#configure > -github > > > It looks like you discovered a bug in the zuul-from-scratch docs. Zuul-web > by default > listens on port 9000. While the documented webhook config in github > assumes port 80. > So you need to either change the webhook to use 9000 or change the zuul > config to use > port 80 [1]. > > Further I pushed up a change to fix the doc [2]. > > Sidenote: we now have zuul specific mailing lists [3] so you might want to > also subscribe > to them. > > [1] https://docs.openstack.org/infra/zuul/admin/components. > html#attr-web.port > [2] https://review.openstack.org/554829 > [3] http://lists.zuul-ci.org > > Kind regards > Tobias > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mrhillsman at gmail.com Thu Mar 22 01:10:46 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 21 Mar 2018 20:10:46 -0500 Subject: [OpenStack-Infra] Zuul: Question about Github webhook In-Reply-To: <015001d3c179$29e23310$7da69930$@gmail.com> References: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> <015001d3c179$29e23310$7da69930$@gmail.com> Message-ID: You can just submit a patch afaik Bernd On Wed, Mar 21, 2018 at 8:00 PM, Bernd Bausch wrote: > Melvin, Tobias, > > Thanks for your inputs. I added my thoughts to the Etherpad and subscribed > to the zuul-ci list. > As Tobias says, port 9000 for the webhook allows Github to send its > “ping”. Thus, I got over this hurdle. > > Note that the Github driver documentation [1] also fails to mention that > the webhook requires port 9000. Here is a process question: Am I allowed to > fix simple bugs like this one or do I have to first submit them to > Launchpad? > > Bernd. > > [1] https://docs.openstack.org/infra/zuul/admin/drivers/github.html > ---- > From: Melvin Hillsman [mailto:mrhillsman at gmail.com] > Sent: Wednesday, March 21, 2018 10:40 PM > To: Tobias.Henkel at bmw.de > Cc: berndbausch at gmail.com; zuul-discuss at lists.zuul-ci.org > Subject: Re: [OpenStack-Infra] Zuul: Question about Github webhook > > I also have went through the "ZFS" at least once and ran into quite a few > issues I think and started documenting[0] them. Bernd not sure if you have > run into additional issues but feel free to add to the etherpad; it is not > organized at all right now but the plan was/is to report back a list of > things and patch(es). > > [0] https://etherpad.openstack.org/p/zuul-quickstart-and-zfs( > zuul-from-scratch) > > On Wed, Mar 21, 2018 at 3:26 AM, wrote: > Hi Bernd, > > On 21.03.18, 08:54, "Bernd Bausch" wrote: > > Sorry for this newbie intrusion on the distribution list. I am trying > to > follow the Zuul from Scratch instructions[1] to set up Zuul at home. > The > purpose is twofold: Learn about Zuul, and check whether Zuul from > Scratch > instructions are accurate and make sense. > > Currently I am at a roadblock. > > The problem is the Github webhook, through which Github sends data to > Zuul. > It should be > http:///connection/github/payload [2], but on the Zuul > server > that > I have deployed so far, nothing listens at port 80. Thus, Github gets > a "connection refused" when trying to contact my server. I used > tcpdump to > confirm that my server does indeed receive an http POST from Github. > > Either something is wrong with my setup, which prevents Zuul from > listening > at > 80. Or the webhook isn't documented correctly. > > How can I confirm what could be the problem? > > Bernd > > [1] https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html > [2] https://docs.openstack.org/infra/zuul/admin/drivers/github.html, > https://docs.openstack.org/infra/zuul/admin/zuul-from- > scratch.html#configure > -github > > It looks like you discovered a bug in the zuul-from-scratch docs. Zuul-web > by default > listens on port 9000. While the documented webhook config in github > assumes port 80. > So you need to either change the webhook to use 9000 or change the zuul > config to use > port 80 [1]. > > Further I pushed up a change to fix the doc [2]. > > Sidenote: we now have zuul specific mailing lists [3] so you might want to > also subscribe > to them. > > [1] https://docs.openstack.org/infra/zuul/admin/components. 
> html#attr-web.port > [2] https://review.openstack.org/554829 > [3] http://lists.zuul-ci.org > > Kind regards > Tobias > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Thu Mar 22 01:11:12 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 21 Mar 2018 20:11:12 -0500 Subject: [OpenStack-Infra] Zuul: Question about Github webhook In-Reply-To: References: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> <015001d3c179$29e23310$7da69930$@gmail.com> Message-ID: Thanks for adding to the etherpad btw! On Wed, Mar 21, 2018 at 8:10 PM, Melvin Hillsman wrote: > You can just submit a patch afaik Bernd > > On Wed, Mar 21, 2018 at 8:00 PM, Bernd Bausch > wrote: > >> Melvin, Tobias, >> >> Thanks for your inputs. I added my thoughts to the Etherpad and >> subscribed to the zuul-ci list. >> As Tobias says, port 9000 for the webhook allows Github to send its >> “ping”. Thus, I got over this hurdle. >> >> Note that the Github driver documentation [1] also fails to mention that >> the webhook requires port 9000. Here is a process question: Am I allowed to >> fix simple bugs like this one or do I have to first submit them to >> Launchpad? >> >> Bernd. >> >> [1] https://docs.openstack.org/infra/zuul/admin/drivers/github.html >> ---- >> From: Melvin Hillsman [mailto:mrhillsman at gmail.com] >> Sent: Wednesday, March 21, 2018 10:40 PM >> To: Tobias.Henkel at bmw.de >> Cc: berndbausch at gmail.com; zuul-discuss at lists.zuul-ci.org >> Subject: Re: [OpenStack-Infra] Zuul: Question about Github webhook >> >> I also have went through the "ZFS" at least once and ran into quite a few >> issues I think and started documenting[0] them. Bernd not sure if you have >> run into additional issues but feel free to add to the etherpad; it is not >> organized at all right now but the plan was/is to report back a list of >> things and patch(es). >> >> [0] https://etherpad.openstack.org/p/zuul-quickstart-and-zfs(zuu >> l-from-scratch) >> >> On Wed, Mar 21, 2018 at 3:26 AM, wrote: >> Hi Bernd, >> >> On 21.03.18, 08:54, "Bernd Bausch" wrote: >> >> Sorry for this newbie intrusion on the distribution list. I am trying >> to >> follow the Zuul from Scratch instructions[1] to set up Zuul at home. >> The >> purpose is twofold: Learn about Zuul, and check whether Zuul from >> Scratch >> instructions are accurate and make sense. >> >> Currently I am at a roadblock. >> >> The problem is the Github webhook, through which Github sends data to >> Zuul. >> It should be >> http:///connection/github/payload [2], but on the >> Zuul server >> that >> I have deployed so far, nothing listens at port 80. Thus, Github gets >> a "connection refused" when trying to contact my server. I used >> tcpdump to >> confirm that my server does indeed receive an http POST from Github. >> >> Either something is wrong with my setup, which prevents Zuul from >> listening >> at >> 80. Or the webhook isn't documented correctly. >> >> How can I confirm what could be the problem? 
>> >> Bernd >> >> [1] https://docs.openstack.org/infra/zuul/admin/zuul-from-scratc >> h.html >> [2] https://docs.openstack.org/infra/zuul/admin/drivers/github.html, >> https://docs.openstack.org/infra/zuul/admin/zuul-from-scratc >> h.html#configure >> -github >> >> It looks like you discovered a bug in the zuul-from-scratch docs. >> Zuul-web by default >> listens on port 9000. While the documented webhook config in github >> assumes port 80. >> So you need to either change the webhook to use 9000 or change the zuul >> config to use >> port 80 [1]. >> >> Further I pushed up a change to fix the doc [2]. >> >> Sidenote: we now have zuul specific mailing lists [3] so you might want >> to also subscribe >> to them. >> >> [1] https://docs.openstack.org/infra/zuul/admin/components.html# >> attr-web.port >> [2] https://review.openstack.org/554829 >> [3] http://lists.zuul-ci.org >> >> Kind regards >> Tobias >> >> _______________________________________________ >> OpenStack-Infra mailing list >> OpenStack-Infra at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra >> >> >> >> >> -- >> Kind regards, >> >> Melvin Hillsman >> mrhillsman at gmail.com >> mobile: (832) 264-2646 >> > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Thu Mar 22 01:00:23 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 22 Mar 2018 10:00:23 +0900 Subject: [OpenStack-Infra] Zuul: Question about Github webhook In-Reply-To: References: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> Message-ID: <015001d3c179$29e23310$7da69930$@gmail.com> Melvin, Tobias, Thanks for your inputs. I added my thoughts to the Etherpad and subscribed to the zuul-ci list. As Tobias says, port 9000 for the webhook allows Github to send its “ping”. Thus, I got over this hurdle. Note that the Github driver documentation [1] also fails to mention that the webhook requires port 9000. Here is a process question: Am I allowed to fix simple bugs like this one or do I have to first submit them to Launchpad? Bernd. [1] https://docs.openstack.org/infra/zuul/admin/drivers/github.html ---- From: Melvin Hillsman [mailto:mrhillsman at gmail.com] Sent: Wednesday, March 21, 2018 10:40 PM To: Tobias.Henkel at bmw.de Cc: berndbausch at gmail.com; zuul-discuss at lists.zuul-ci.org Subject: Re: [OpenStack-Infra] Zuul: Question about Github webhook I also have went through the "ZFS" at least once and ran into quite a few issues I think and started documenting[0] them. Bernd not sure if you have run into additional issues but feel free to add to the etherpad; it is not organized at all right now but the plan was/is to report back a list of things and patch(es). [0] https://etherpad.openstack.org/p/zuul-quickstart-and-zfs(zuul-from-scratch) On Wed, Mar 21, 2018 at 3:26 AM, wrote: Hi Bernd, On 21.03.18, 08:54, "Bernd Bausch" wrote: Sorry for this newbie intrusion on the distribution list. I am trying to follow the Zuul from Scratch instructions[1] to set up Zuul at home. The purpose is twofold: Learn about Zuul, and check whether Zuul from Scratch instructions are accurate and make sense. Currently I am at a roadblock. The problem is the Github webhook, through which Github sends data to Zuul. 
It should be http:///connection/github/payload [2], but on the Zuul server that I have deployed so far, nothing listens at port 80. Thus, Github gets a "connection refused" when trying to contact my server. I used tcpdump to confirm that my server does indeed receive an http POST from Github. Either something is wrong with my setup, which prevents Zuul from listening at 80. Or the webhook isn't documented correctly. How can I confirm what could be the problem? Bernd [1] https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html [2] https://docs.openstack.org/infra/zuul/admin/drivers/github.html, https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html#configure -github It looks like you discovered a bug in the zuul-from-scratch docs. Zuul-web by default listens on port 9000. While the documented webhook config in github assumes port 80. So you need to either change the webhook to use 9000 or change the zuul config to use port 80 [1]. Further I pushed up a change to fix the doc [2]. Sidenote: we now have zuul specific mailing lists [3] so you might want to also subscribe to them. [1] https://docs.openstack.org/infra/zuul/admin/components.html#attr-web.port [2] https://review.openstack.org/554829 [3] http://lists.zuul-ci.org Kind regards Tobias _______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5518 bytes Desc: not available URL: From fungi at yuggoth.org Thu Mar 22 18:30:21 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 22 Mar 2018 18:30:21 +0000 Subject: [OpenStack-Infra] Zuul: Question about Github webhook In-Reply-To: <015001d3c179$29e23310$7da69930$@gmail.com> References: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> <015001d3c179$29e23310$7da69930$@gmail.com> Message-ID: <20180322183020.37aj2r3idgcgsqsp@yuggoth.org> On 2018-03-22 10:00:23 +0900 (+0900), Bernd Bausch wrote: [...] > Here is a process question: Am I allowed to fix simple bugs like > this one or do I have to first submit them to Launchpad? Feel free to just push a change for review so long as you can adequately explain in the commit message why it's needed (in this case that shouldn't be terribly hard). Also, see Zuul's README.rst or the footer at the bottom of https://zuul-ci.org/ for details on how to report bugs (definitely not Launchpad). Thanks! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Wed Mar 28 00:26:54 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 27 Mar 2018 17:26:54 -0700 Subject: Docs and git repos on zuul-ci.org Message-ID: <87zi2t59kh.fsf@meyer.lemoncheese.net> Hi, We now have all of the Zuul documentation hosted on the Zuul website[1], so folks may want to start referencing those URLs, rather than docs.openstack.org. We also have the Zuul git repos mirrored under the Zuul domain at [2]. That page shows only the Zuul-related git repos. Additionally, if you're referencing the zuul-jobs or zuul-base-jobs repos, you may want to switch your connection to use git.zuul-ci.org instead. 
Note that if you do that, the project name will be "git.zuul-ci.org/zuul-jobs" -- with no "openstack-infra" prefixed to the name, as there are no organization prefixes on repositories mirrored at git.zuul-ci.org. [1] https://zuul-ci.org/docs/ [2] http://git.zuul-ci.org/ -Jim From pabelanger at redhat.com Wed Mar 28 00:45:53 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 27 Mar 2018 20:45:53 -0400 Subject: Docs and git repos on zuul-ci.org In-Reply-To: <87zi2t59kh.fsf@meyer.lemoncheese.net> References: <87zi2t59kh.fsf@meyer.lemoncheese.net> Message-ID: <20180328004553.GA23613@localhost.localdomain> On Tue, Mar 27, 2018 at 05:26:54PM -0700, James E. Blair wrote: > Hi, > > We now have all of the Zuul documentation hosted on the Zuul website[1], > so folks may want to start referencing those URLs, rather than > docs.openstack.org. > > We also have the Zuul git repos mirrored under the Zuul domain at [2]. > That page shows only the Zuul-related git repos. Additionally, if > you're referencing the zuul-jobs or zuul-base-jobs repos, you may want > to switch your connection to use git.zuul-ci.org instead. Note that if > you do that, the project name will be "git.zuul-ci.org/zuul-jobs" -- > with no "openstack-infra" prefixed to the name, as there are no > organization prefixes on repositories mirrored at git.zuul-ci.org. > > [1] https://zuul-ci.org/docs/ > [2] http://git.zuul-ci.org/ > Yay, looks great. I am guessing [2] is a typo, and we should be recommending HTTPS protocol for git repos? > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Wed Mar 28 14:10:00 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 28 Mar 2018 07:10:00 -0700 Subject: Docs and git repos on zuul-ci.org In-Reply-To: <20180328004553.GA23613@localhost.localdomain> (Paul Belanger's message of "Tue, 27 Mar 2018 20:45:53 -0400") References: <87zi2t59kh.fsf@meyer.lemoncheese.net> <20180328004553.GA23613@localhost.localdomain> Message-ID: <87sh8k47gn.fsf@meyer.lemoncheese.net> Paul Belanger writes: > On Tue, Mar 27, 2018 at 05:26:54PM -0700, James E. Blair wrote: >> Hi, >> >> We now have all of the Zuul documentation hosted on the Zuul website[1], >> so folks may want to start referencing those URLs, rather than >> docs.openstack.org. >> >> We also have the Zuul git repos mirrored under the Zuul domain at [2]. >> That page shows only the Zuul-related git repos. Additionally, if >> you're referencing the zuul-jobs or zuul-base-jobs repos, you may want >> to switch your connection to use git.zuul-ci.org instead. Note that if >> you do that, the project name will be "git.zuul-ci.org/zuul-jobs" -- >> with no "openstack-infra" prefixed to the name, as there are no >> organization prefixes on repositories mirrored at git.zuul-ci.org. >> >> [1] https://zuul-ci.org/docs/ >> [2] http://git.zuul-ci.org/ >> > Yay, looks great. I am guessing [2] is a typo, and we should be recommending > HTTPS protocol for git repos? Yep, sorry. 
Should be https://git.zuul-ci.org/ -Jim From tdecacqu at redhat.com Thu Mar 29 10:33:27 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 29 Mar 2018 10:33:27 +0000 Subject: Software Factory 3.0 released Message-ID: <1522319289.u1l2ey5pdq.tristanC@fedora> Greeting folks, We are pleased to announce the release of Software Factory version 3.0: http://www.softwarefactory-project.io/releases/3.0 It is a software development forge that integrates the last version of Zuul and Nodepool along with a few patches that couldn't make it into the upstream release: * Ansible-2.4 support: https://review.openstack.org/535839 * Dynamic tenant configuration: https://review.openstack.org/535878 * MQTT reporter: https://review.openstack.org/535543 * Job and Projects dashboard: https://review.openstack.org/550979 * Broken config protection: https://review.openstack.org/535511 * Zookeeper auto-retry: https://review.openstack.org/535537 * OpenContainer Nodepool provider: https://review.openstack.org/535556 * Kubernetes Nodepool provider: https://review.openstack.org/535557 Best regards, The Software Factory Team -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Thu Mar 29 16:44:24 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 29 Mar 2018 09:44:24 -0700 Subject: Next steps after 3.0.0 release Message-ID: <878taau907.fsf@meyer.lemoncheese.net> Hi, Now that 3.0.0 is released[1] (yay!), it's a good time to go over what we need to do next. The first order of business is to land the changes to add reno: https://review.openstack.org/536861 https://review.openstack.org/262597 You should read the docs that those changes introduce (they'll be in the developer docs for future reference), but the key bit is: When adding a noteworthy feature, fixing a noteworthy bug or introducing a behavior change that a user or operator should know about, it is a good idea to add a release note to the same patch. Every release from this point out will have release notes included in the documentation. We relaxed our review standards significantly during the feature branch phase, but we've been returning to normal for the past few months. At this point, the master branches of Nodepool and Zuul should always be kept working and ready for release. Gating usually handles the technical side of that, but don't lose track of the intent when it comes to other aspects. Please make sure that changes have documentation if they affect a user-visible aspect of the system, have a release note if the change is noteworthy, and have tests if they alter behavior. If a change is missing these things, please ask about it. These rules don't need to be too rigid, but if we flex them, we should be in the habit of discussing it and establishing that there's a good reason. We need to be careful about making changes to the zuul-jobs and zuul-base-jobs repos now. We'll write up a policy and include it in the docs for those repos, but the important thing is that we can't use new Zuul features in those repos until we've given folks a chance to upgrade (so probably some number of weeks after the feature appears in a release). And we can't remove features from Zuul until we've removed their use from zuul-jobs, and given folks who might be running a fork of zuul-jobs time to update. I expect us to release Zuul and Nodepool much more often now. 
Any time we make a significant change and establish that the software is working as intended, we should release. We've assigned some very specific meaning to version numbers recently, but I don't think that will continue. The versions we release will be what they are depending on what has landed when we release. Thanks everyone for your work on the release, and I look forward to reviewing all the patches that are waiting in the wings! -Jim [1] http://lists.zuul-ci.org/pipermail/zuul-announce/2018-March/000006.html From hashar at free.fr Fri Mar 30 08:02:53 2018 From: hashar at free.fr (Antoine Musso) Date: Fri, 30 Mar 2018 10:02:53 +0200 Subject: Software Factory 3.0 released In-Reply-To: <1522319289.u1l2ey5pdq.tristanC@fedora> References: <1522319289.u1l2ey5pdq.tristanC@fedora> Message-ID: On 29/03/2018 12:33, Tristan Cacqueray wrote: > Greeting folks, > > We are pleased to announce the release of Software Factory version 3.0: > http://www.softwarefactory-project.io/releases/3.0 > > It is a software development forge that integrates the last version of > Zuul and Nodepool along with a few patches that couldn't make it into > the upstream release: > > * Ansible-2.4 support: https://review.openstack.org/535839 > * Dynamic tenant configuration: https://review.openstack.org/535878 > * MQTT reporter: https://review.openstack.org/535543 > * Job and Projects dashboard: https://review.openstack.org/550979 > * Broken config protection: https://review.openstack.org/535511 > * Zookeeper auto-retry: https://review.openstack.org/535537 > * OpenContainer Nodepool provider: https://review.openstack.org/535556 > * Kubernetes Nodepool provider: https://review.openstack.org/535557 Hello, It seems those patches were proposed to the v3 feature branch, which eventually got mass abandoned when the v3 branch got merged into master. Is there any plan to get them reviewed/merged and added in the next Zuul/Nodepool versions? I am especially interested in the Kubernetes Nodepool provider. Wikimedia is planning to migrate its CI stack to Kubernetes and surely we will have time/budget/brains to work on enhancing and maintaining it. Though for now we are still on Zuul v2 and an obsolete Nodepool version.
cheers, -- Antoine "hashar" Musso From tdecacqu at redhat.com Fri Mar 30 09:13:03 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 30 Mar 2018 09:13:03 +0000 Subject: Software Factory 3.0 released In-Reply-To: References: <1522319289.u1l2ey5pdq.tristanC@fedora> Message-ID: <1522397895.fu80ju1srv.tristanC@fedora> On March 30, 2018 8:02 am, Antoine Musso wrote: > On 29/03/2018 12:33, Tristan Cacqueray wrote: >> Greeting folks, >> >> We are pleased to announce the release of Software Factory version 3.0: >> http://www.softwarefactory-project.io/releases/3.0 >> >> It is a software development forge that integrates the last version of >> Zuul and Nodepool along with a few patches that couldn't make it into >> the upstream release: >> >> * Ansible-2.4 support: https://review.openstack.org/535839 >> * Dynamic tenant configuration: https://review.openstack.org/535878 >> * MQTT reporter: https://review.openstack.org/535543 >> * Job and Projects dashboard: https://review.openstack.org/550979 >> * Broken config protection: https://review.openstack.org/535511 >> * Zookeeper auto-retry: https://review.openstack.org/535537 >> * OpenContainer Nodepool provider: https://review.openstack.org/535556 >> * Kubernetes Nodepool provider: https://review.openstack.org/535557 > > Hello, > > It seems those patches were proposed to the v3 feature branch which > eventually got mass abandoned on the v3 branch got merged into master. > > Is there any plan to get them reviewed/merged and added in the next > Zuul/Nodepool versions? > Hi Antoine, It's unclear what the scope of the next versions is. We integrated the above patches in Software Factory packages as they improve the user experience without changing the service interfaces. They are up for review on the master branch and we are available to update them if needed. I think the most important one is the broken config protection as it prevents a single commit from breaking every tenant. > I am specially interested in the Kubernetes Nodepool provider. Wikimedia > is planning to migrate its CI stack to Kubernetes and surely we will > have time/budget/brains to work on enhancing and maintaining it. Though > for now we are still on Zuul v2 and an obsolete Nodepool version. > The proposed Kubernetes driver relies on a containerized openssh service so that zuul-jobs can work seamlessly on OpenStack or Kubernetes resources. Note that it's not a true native Kubernetes driver; for example, it doesn't create speculative container images nor use a specific Kubernetes workflow. This driver just lets you run a job in a container instead of a fully fledged instance. Perhaps the driver should be renamed k8s-ssh if other #zuul users want to leverage Nodepool to truly test container-based projects. Anyway that deserves a thread on its own :-) Cheers, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Fri Mar 30 19:31:41 2018 From: corvus at inaugust.com (James E.
Blair) Date: Fri, 30 Mar 2018 12:31:41 -0700 Subject: Software Factory 3.0 released In-Reply-To: <1522397895.fu80ju1srv.tristanC@fedora> (Tristan Cacqueray's message of "Fri, 30 Mar 2018 09:13:03 +0000") References: <1522319289.u1l2ey5pdq.tristanC@fedora> <1522397895.fu80ju1srv.tristanC@fedora> Message-ID: <87vaddpdgi.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > On March 30, 2018 8:02 am, Antoine Musso wrote: >> On 29/03/2018 12:33, Tristan Cacqueray wrote: >>> Greeting folks, >>> >>> We are pleased to announce the release of Software Factory version 3.0: >>> http://www.softwarefactory-project.io/releases/3.0 >>> >>> It is a software development forge that integrates the last version of >>> Zuul and Nodepool along with a few patches that couldn't make it into >>> the upstream release: >>> >>> * Ansible-2.4 support: https://review.openstack.org/535839 >>> * Dynamic tenant configuration: https://review.openstack.org/535878 >>> * MQTT reporter: https://review.openstack.org/535543 >>> * Job and Projects dashboard: https://review.openstack.org/550979 >>> * Broken config protection: https://review.openstack.org/535511 >>> * Zookeeper auto-retry: https://review.openstack.org/535537 >>> * OpenContainer Nodepool provider: https://review.openstack.org/535556 >>> * Kubernetes Nodepool provider: https://review.openstack.org/535557 >> >> Hello, >> >> It seems those patches were proposed to the v3 feature branch which >> eventually got mass abandoned on the v3 branch got merged into master. >> >> Is there any plan to get them reviewed/merged and added in the next >> Zuul/Nodepool versions? >> > Hi Antoine, > > It's unclear what is the scope of the next versions. We integrated the > above patches in Software Factory packages as they improve the user > experience without changing the services interface. They are up for > review on the master branch and we are available to update them if needed. > > I think the most important one is the broken config protection as it > prevents a single commit to break every tenants. > >> I am specially interested in the Kubernetes Nodepool provider. Wikimedia >> is planning to migrate its CI stack to Kubernetes and surely we will >> have time/budget/brains to work on enhancing and maintaining it. Though >> for now we are still on Zuul v2 and an obsolete Nodepool version. >> > The proposed Kubernetes driver relies on a containerized openssh service > so that zuul-jobs can work seemlessly on OpenStack or Kubernetes > resources. Note that it's not a true native Kubernetes driver, for > example it doesn't create speculative container images nor use specific > Kubernetes workflow. This driver just let you run job in a container instead > of a fully fledge instance. > > Perhaps the driver should be renamed k8s-ssh if other #zuul users want > to leverage Nodepool to truely test container based projects. Anyway > that deserves a thread on its own :-) There's been a lot of research and discussion around this -- it's a complex topic. Now that v3 is released, I plan on writing up the notes we have and laying out a plan for us to discuss. Hopefully I'll be able to do that next week. -Jim
From shrewsbury.dave at gmail.com Mon Mar 5 14:54:15 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 5 Mar 2018 09:54:15 -0500 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1520216795.pc3r2jevm5.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> Message-ID: Hi! Thanks for starting this discussion.
On Sun, Mar 4, 2018 at 10:05 PM, Tristan Cacqueray wrote: > Hello folks, > > Here is a recap of the issues happening when the connection to > Zookeeper is unstable: > > * Nodepool marks dib build as failed, > * ConnectionLoss/SessionExpired exceptions happen when the connection > drop after the "self._zk.suspended or self._zk.lost" checks, > * Zuul lost NodeRequest which may lock change in queued state, and > * Zookeeper needs to be running before starting Zuul or Nodepool, > otherwise the daemon stay stuck in limbo. > > This used to happen frequently in our deployment (running in an > OpenStack tenant with zookeeper on a dedicated instance). It seems like > the current connection listener doesn't work properly and the errors > seems to mostly happen when the connection is actually used. > > > There are a couple of patches[0] to leverage Kazoo retry facilities and > to mitigate above issues by automatically retrying operations until the > service is available. The nodepool patch adds a uuid to Sequence object > as suggested by the Zookeeper ErrorHandling documentation[1]. > > It's these sequence objects that concerns me most because we are so heavily dependent on them in both zuul and nodepool. I looked at your current tests, and it seems that they are testing that the sequences are sequential after the retry. From my reading of the ZK ErrorHandling docs, I don't think this is guaranteed. Those docs mention that it is possible to get more than one sequence object created on a request that is retried. Particularly, around this sentence describing what could happen around a single "create" request: "now it could be that x-109 was the result of the previous create, so the process actually owns both x-109 and x-111." So my main source of concern here (that I don't think we are testing?) is what happens when we create, say, a new Node object, but actually end up with 2 new Nodes (which I think would have identical data). I don't know how our current code path would handle such a situation. This is the thing I think we need to figure out how to deal with, and I don't think is covered by the current review. And, to be honest, I'm not quite sure how to mimic this in tests. > The last remaining issue is the Zuul ephemeral NodeRequest code that > doesn't > re-create the request if they got deleted when the connection is reset. > > > I think those patches improve the overall service reliability and we have > been using them successfully without issues in software-factory packages. > > Regards, > -Tristan > > [0]: https://review.openstack.org/#/q/topic:zookeeper-retry > [1]: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ErrorHandling > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > > -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Tue Mar 6 00:59:54 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 06 Mar 2018 00:59:54 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> Message-ID: <1520296945.n7h8zkzye1.tristanC@fedora> On March 5, 2018 2:54 pm, David Shrewsbury wrote: > Hi! Thanks for starting this discussion. 
> > On Sun, Mar 4, 2018 at 10:05 PM, Tristan Cacqueray > wrote: > >> Hello folks, >> >> Here is a recap of the issues happening when the connection to >> Zookeeper is unstable: >> >> * Nodepool marks dib build as failed, >> * ConnectionLoss/SessionExpired exceptions happen when the connection >> drop after the "self._zk.suspended or self._zk.lost" checks, >> * Zuul lost NodeRequest which may lock change in queued state, and >> * Zookeeper needs to be running before starting Zuul or Nodepool, >> otherwise the daemon stay stuck in limbo. >> >> This used to happen frequently in our deployment (running in an >> OpenStack tenant with zookeeper on a dedicated instance). It seems like >> the current connection listener doesn't work properly and the errors >> seems to mostly happen when the connection is actually used. >> >> >> There are a couple of patches[0] to leverage Kazoo retry facilities and >> to mitigate above issues by automatically retrying operations until the >> service is available. The nodepool patch adds a uuid to Sequence object >> as suggested by the Zookeeper ErrorHandling documentation[1]. >> >> > It's these sequence objects that concerns me most because we are so > heavily dependent on them in both zuul and nodepool. I looked at your > current > tests, and it seems that they are testing that the sequences are sequential > after > the retry. From my reading of the ZK ErrorHandling docs, I don't think > this is > guaranteed. Those docs mention that it is possible to get more than one > sequence object created on a request that is retried. Particularly, around > this > sentence describing what could happen around a single "create" request: > > "now it could be that x-109 was the result of the previous create, so > the > process actually owns both x-109 and x-111." > > So my main source of concern here (that I don't think we are testing?) is > what > happens when we create, say, a new Node object, but actually end up with 2 > new Nodes (which I think would have identical data). The current review implements the above logic: sequence creates aren't using the bulk retry; instead it issues a regular create call, and if there is an exception, it then looks for the uuid in the objects list before issuing another create call: Line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py > I don't know how our current code path would handle such a situation. > This is the thing I think > we need > to figure out how to deal with, and I don't think is covered by the current > review. > And, to be honest, I'm not quite sure how to mimic this in tests. > It seems like blocking the 4th socket read can simulate a connection loss after the object is actually created and test the look-for-uuid loop. The proposed test does exercise that code path, though this may vary depending on the environment, when the create dialog fits in fewer than 4 reads. Regards, -Tristan > > >> The last remaining issue is the Zuul ephemeral NodeRequest code that >> doesn't >> re-create the request if they got deleted when the connection is reset. >> >> >> I think those patches improve the overall service reliability and we have >> been using them successfully without issues in software-factory packages.
>> >> Regards, >> -Tristan >> >> [0]: https://review.openstack.org/#/q/topic:zookeeper-retry >> [1]: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ErrorHandling >> >> _______________________________________________ >> Zuul-discuss mailing list >> Zuul-discuss at lists.zuul-ci.org >> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss >> >> > > > -- > David Shrewsbury (Shrews) > -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From shrewsbury.dave at gmail.com Tue Mar 6 13:35:51 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 6 Mar 2018 08:35:51 -0500 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1520296945.n7h8zkzye1.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> Message-ID: On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray wrote: [snip] > The current review implements the above logic: sequence create aren't > using the bulk retry, instead it issues a regular create call, and if > there is an exception, then it looks for the uuid in the objects list > before issueing another create call: > Line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py Ah, apologies for not noticing this in my quick lookover earlier. I'll take a closer look today. If that's what the code is doing, this could alleviate much of my concern. -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From shrewsbury.dave at gmail.com Tue Mar 6 15:47:25 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 6 Mar 2018 10:47:25 -0500 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> Message-ID: On Tue, Mar 6, 2018 at 8:35 AM, David Shrewsbury wrote: > > > On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray > wrote: > > [snip] > > >> The current review implements the above logic: sequence create aren't >> using the bulk retry, instead it issues a regular create call, and if >> there is an exception, then it looks for the uuid in the objects list >> before issueing another create call: >> Line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py > > > Ah, apologies for not noticing this in my quick lookover earlier. I'll > take a > closer look today. If that's what the code is doing, this could alleviate > much > of my concern. > > Ok, I think what you're doing with sequences looks pretty could and should work for us. While more thoroughly looking at the review, I did see a potential issue around locking, particularly around holding a lock during the long-running image build process. Now that we keep retrying, even through a total disconnect, there may be places in the code where we assume we still have the lock (like the image build code) if we haven't seen any ZK errors. Perhaps we should think about this aspect for a bit. -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shrewsbury.dave at gmail.com Tue Mar 6 18:38:47 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 6 Mar 2018 13:38:47 -0500 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> Message-ID: I've played with some test code around this and I feel convinced we could have issues with locks and ephemeral nodes if we are hiding ZK connection problems (particularly session timeouts, which cause us to lose both things) behind retries. Right now we can depend on those connection errors to bubble up to nodepool and zuul as exceptions, and we can assume our locks or ephemeral nodes are gone at that point and handle accordingly. If we now hide those, we need to begin using a listener [1] to know when our session has timed out and set some sort of state that we will have to check at multiple key points. FWIW, I started nodepool v3 off with such a listener and the logic became pretty difficult fairly quickly. If you are running with this code in your environment now, I'd say you've probably just been fairly lucky to not have a ZK session timeout in just the right place. I found it fairly difficult to get to happen in my tests today, and it was never consistent when I did get it to happen (either something weird with ZK itself, or the kazoo library). Also, I hope I'm not coming off as difficult around this. I just know how hard it was to get what we have now functioning correctly, so I'm very protective of it. Hopefully we can get this sorted out for you, and we can learn a bunch of stuff along the way. [1] http://kazoo.readthedocs.io/en/latest/basic_usage.html#listening-for-connection-events On Tue, Mar 6, 2018 at 10:47 AM, David Shrewsbury wrote: > > > On Tue, Mar 6, 2018 at 8:35 AM, David Shrewsbury < > shrewsbury.dave at gmail.com> wrote: > >> >> >> On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray >> wrote: >> >> [snip] >> >> >>> The current review implements the above logic: sequence create aren't >>> using the bulk retry, instead it issues a regular create call, and if >>> there is an exception, then it looks for the uuid in the objects list >>> before issueing another create call: >>> Line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py >> >> >> Ah, apologies for not noticing this in my quick lookover earlier. I'll >> take a >> closer look today. If that's what the code is doing, this could alleviate >> much >> of my concern. >> >> > Ok, I think what you're doing with sequences looks pretty could and should > work for > us. While more thoroughly looking at the review, I did see a potential > issue around > locking, particularly around holding a lock during the long-running image > build process. > > Now that we keep retrying, even through a total disconnect, there may be > places in the > code where we assume we still have the lock (like the image build code) if > we haven't > seen any ZK errors. Perhaps we should think about this aspect for a bit. > > -Dave > > -- > David Shrewsbury (Shrews) > -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tdecacqu at redhat.com Wed Mar 7 05:38:00 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 07 Mar 2018 05:38:00 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> Message-ID: <1520387902.pcnp10igdm.tristanC@fedora> Thanks for the thorough review David. Indeed kazoo Locks are ephemerals and they vanish when the connection is lost. However the zk-retry patch doesn't seems to affect current behaviors, please see my comments inlined bellow and on the nodepool review. On March 6, 2018 6:38 pm, David Shrewsbury wrote: > I've played with some test code around this and I feel convinced we could > have > issues with locks and ephemeral nodes if we are hiding ZK connection > problems > (particularly session timeouts, which cause us to lose both things) behind > retries. > Would you mind sharing the test code? > Right now we can depend on those connection errors to bubble up to nodepool > and zuul as exceptions, and we can assume our locks or ephemeral nodes are > gone at that point and handle accordingly. If we now hide those, we need to > begin > using a listener [1] to know when our session has timed out and set some > sort of > state that we will have to check at multiple key points. FWIW, I started > nodepool > v3 off with such a listener and the logic became pretty difficult fairly > quickly. > The proposal aims at removing those "key points" checks because they add complexicity and they look prone to race conditions, for example when the connection get lost after the "if self._zk.lost" check. Looking at all Zookeeper Lock usage, only the imageBuildLock seems to benefit from the checks: if a builder lost the connection during a build, it will mark the image as FAILED and another builder will try again. What happens if the other builder succeed before the first one? With the zk-retry patch, both builder will mark the image as READY, and as far as I can tell, this doesn't seems to be an issue. The other Lock aren't protected, for example multiple builder may start an image upload when the connection get lost. The zk-retry patch doesn't seem to change the behavior. Zuul node locks are still an issue, e.g. builds may stay queued whenever the zk-retry patch is used. Perhaps a follow-up patch could improve that situation, for example by re-creating the request when the connection state switch back to CONNECTED. > If you are running with this code in your environment now, I'd say you've > probably > just been fairly lucky to not have a ZK session timeout in just the right > place. I found > it fairly difficult to get to happen in my tests today, and it was never > consistent when > I did get it to happen (either something weird with ZK itself, or the kazoo > library). > Well we used to get those kazoo connection stack traces almost daily, and I couldn't find the root cause, ftr here is the setup: * Instances network setup by Neutron with ovs, * CentOS 7.4 kernels, with tcp_ecn sysctl set to 0, and * Zookeeper configuration: http://paste.openstack.org/show/693515/ Though the bottomline is to make Zuul and Nodepool more resilient to Zookeeper issues. What would be the wrong place for a connection lost to happen? > Also, I hope I'm not coming off as difficult around this. I just know how > hard it was > to get what we have now functioning correctly, so I'm very protective of > it. 
Hopefully > we can get this sorted out for you, and we can learn a bunch of stuff along > the way. > Not all, Zookeeper error handling is tricky :) Thank you for sharing your valuable feedback. -Tristan > [1] > http://kazoo.readthedocs.io/en/latest/basic_usage.html#listening-for-connection-events > > On Tue, Mar 6, 2018 at 10:47 AM, David Shrewsbury > wrote: > >> >> >> On Tue, Mar 6, 2018 at 8:35 AM, David Shrewsbury < >> shrewsbury.dave at gmail.com> wrote: >> >>> >>> >>> On Mon, Mar 5, 2018 at 7:59 PM, Tristan Cacqueray >>> wrote: >>> >>> [snip] >>> >>> >>>> The current review implements the above logic: sequence create aren't >>>> using the bulk retry, instead it issues a regular create call, and if >>>> there is an exception, then it looks for the uuid in the objects list >>>> before issueing another create call: >>>> Line 730 of https://review.openstack.org/#/c/535537/3/nodepool/zk.py >>> >>> >>> Ah, apologies for not noticing this in my quick lookover earlier. I'll >>> take a >>> closer look today. If that's what the code is doing, this could alleviate >>> much >>> of my concern. >>> >>> >> Ok, I think what you're doing with sequences looks pretty could and should >> work for >> us. While more thoroughly looking at the review, I did see a potential >> issue around >> locking, particularly around holding a lock during the long-running image >> build process. >> >> Now that we keep retrying, even through a total disconnect, there may be >> places in the >> code where we assume we still have the lock (like the image build code) if >> we haven't >> seen any ZK errors. Perhaps we should think about this aspect for a bit. >> >> -Dave >> >> -- >> David Shrewsbury (Shrews) >> > > > > -- > David Shrewsbury (Shrews) > -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From shrewsbury.dave at gmail.com Wed Mar 7 15:28:53 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Wed, 7 Mar 2018 10:28:53 -0500 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1520387902.pcnp10igdm.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> Message-ID: On Wed, Mar 7, 2018 at 12:38 AM, Tristan Cacqueray wrote: > Thanks for the thorough review David. > Indeed kazoo Locks are ephemerals and they vanish when the connection is > lost. > However the zk-retry patch doesn't seems to affect current behaviors, > please see my comments inlined bellow and on the nodepool review. Actually, the behavior is very different. See the sample code and output below. > > > On March 6, 2018 6:38 pm, David Shrewsbury wrote: > >> I've played with some test code around this and I feel convinced we could >> have >> issues with locks and ephemeral nodes if we are hiding ZK connection >> problems >> (particularly session timeouts, which cause us to lose both things) behind >> retries. >> >> Would you mind sharing the test code? Certainly. 
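In essence, both boil down to the little sketch below (a from-memory reconstruction, not the exact pastes): connect a client, create an ephemeral node, then sleep while ZooKeeper is stopped long enough for the session to expire; the first script then issues a plain call, the second wraps the same call in a retry.

    # Rough sketch only -- the real scripts are in the pastes linked below.
    import time
    from kazoo.client import KazooClient
    from kazoo.retry import KazooRetry

    USE_RETRY = False  # False ~ current nodepool behavior, True ~ the zk-retry proposal

    client = KazooClient(hosts='127.0.0.1:2181')
    client.start()
    client.ensure_path('/test')
    client.create('/test/ephemeral', b'data', ephemeral=True)

    # Stop ZooKeeper now and leave it down past the session timeout.
    time.sleep(60)

    if USE_RETRY:
        # KazooRetry (with its default ignore_expire=True) swallows
        # SessionExpiredError and keeps retrying until ZooKeeper is back:
        # no exception reaches the caller, but the ephemeral node (and any
        # lock built on one) is silently gone.
        result = KazooRetry(max_tries=-1)(client.exists, '/test/ephemeral')
    else:
        # The plain call surfaces ConnectionLoss/SessionExpiredError.
        result = client.exists('/test/ephemeral')
    print(result)
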
This is the script I used that simulates current nodepool behavior (zk_retry_test1.py): http://paste.openstack.org/show/693650/ This is the script that simulates your proposed changes (zk_retry_test2.py): http://paste.openstack.org/show/693651/ Running zk_retry_test1.py locally (and shutting down ZK during the sleep() call and leaving it shutdown long enough for the session to expire) produces the following output (note the exception): http://paste.openstack.org/show/693653/ Doing the same with zk_retry_test2.py produces very different output (note the lack of the exception): http://paste.openstack.org/show/693654/ Your proposed changes will hide that SessionExpiredError exception. So if we have code depending on that exception bubbling up to know we've lost a lock or an ephemeral node, it will now be broken with the use of retry(). And there is nothing provided in your review that let's us know the session was lost. Looking at some of the builder and launcher code this morning with the lost session error freshly in mind, I see lots of code that doesn't even take this into account properly, so I think we may be potentially broken in many places with retries or not. :( > > Right now we can depend on those connection errors to bubble up to nodepool >> and zuul as exceptions, and we can assume our locks or ephemeral nodes are >> gone at that point and handle accordingly. If we now hide those, we need >> to >> begin >> using a listener [1] to know when our session has timed out and set some >> sort of >> state that we will have to check at multiple key points. FWIW, I started >> nodepool >> v3 off with such a listener and the logic became pretty difficult fairly >> quickly. >> >> The proposal aims at removing those "key points" checks because they add > complexicity and they look prone to race conditions, for example when > the connection get lost after the "if self._zk.lost" check. > Well, we HAVE to know when we lost a session. I like that your review removes the "wait for the connection to return" pieces. But we still need to know when the session is terminated. We can lose a connection, but keep our session. Or we can lose a connection and lose our session (if the connection loss was long enough). > Looking at all Zookeeper Lock usage, only the imageBuildLock seems to > benefit from the checks: if a builder lost the connection > during a build, it will mark the image as FAILED and another builder will > try > again. What happens if the other builder succeed before the first one? > With the zk-retry patch, both builder will mark the image as READY, > and as far as I can tell, this doesn't seems to be an issue. > That would be an issue, actually. We'd have two builds of the same image. We only keep the 2 most recent builds, so we'd lose our previous image. If the 2 we just built is a bad image, we'd have nothing to rollback to. > > The other Lock aren't protected, for example multiple builder may start > an image upload when the connection get lost. The zk-retry patch doesn't > seem to change the behavior. > If nothing else, I think this discussion has revealed deficiencies in our code base around handling session losses (I cite an example below). Our locks are supposed to be protected against this but I don't think we've done this well in all places. Whether we use retries or not, I think we need to correct this. > Zuul node locks are still an issue, e.g. builds may stay queued > whenever the zk-retry patch is used. 
> Perhaps a follow-up patch could improve that situation, for example > by re-creating the request when the connection state switch back to > CONNECTED. > > > If you are running with this code in your environment now, I'd say you've >> probably >> just been fairly lucky to not have a ZK session timeout in just the right >> place. I found >> it fairly difficult to get to happen in my tests today, and it was never >> consistent when >> I did get it to happen (either something weird with ZK itself, or the >> kazoo >> library). >> >> Well we used to get those kazoo connection stack traces almost daily, > and I couldn't find the root cause, ftr here is the setup: > > * Instances network setup by Neutron with ovs, > * CentOS 7.4 kernels, with tcp_ecn sysctl set to 0, and > * Zookeeper configuration: http://paste.openstack.org/show/693515/ > > Though the bottomline is to make Zuul and Nodepool more resilient to > Zookeeper issues. > > What would be the wrong place for a connection lost to happen? Connection loss and session loss are different. A connection loss may or may not result in a session loss (it depends on how long we lost the connection). An example of a bad place to lose a session would be during node launch. Right now, if NodeLauncher._launchNode() experiences a session loss, the SessionExpiredError exception might bubble up and we'd retry the launch. Now it turns out, we aren't handling that particular exception and we are still assuming we have lock. This is a huge bug that I've just discovered thanks to this discussion because in that situation, we could have an unlocked Node in BUILDING state, and our cleanup thread could delete that before the building thread does something else with it again. So in that example, we must add an except clause to catch that particular exception and handle it differently. With your proposed changes, we'd need a way to know our session was lost, and make the same choices for handling it differently. You're review isn't providing that, but I _can_ see a way forward for the current code. > Also, I hope I'm not coming off as difficult around this. I just know how >> hard it was >> to get what we have now functioning correctly, so I'm very protective of >> it. Hopefully >> we can get this sorted out for you, and we can learn a bunch of stuff >> along >> the way. >> >> > Not all, Zookeeper error handling is tricky :) > Thank you for sharing your valuable feedback. This has been an excellent discussion! -------------- next part -------------- An HTML attachment was scrubbed... URL: From daragh.bailey at gmail.com Wed Mar 7 21:33:33 2018 From: daragh.bailey at gmail.com (Darragh Bailey) Date: Wed, 7 Mar 2018 21:33:33 +0000 Subject: working on zuul trigger plugin for jenkins Message-ID: Hi, Spend some time last week hacking away at a zuul trigger plugin for jenkins, really just about getting some better understanding of how this should hang together and what questions need to be asked. Lots of time spent reading details on gRPC as it was suggested at https://etherpad.openstack.org/p/zuulv3-jenkins-integration So far I've got a (very) rudimentary gRPC server that responds to a single status request with OK working as a Jenkins plugin that starts up automatically. I'll try to get it released somewhere soon so it's possible to discuss code, though if it is of interest to the OpenStack community maybe it should be hosted on review.openstack.org? 
Some general things I've noted: - authenticating requests doesn't appear to be straight forward with gRPC - unclear whether this can share the default jenkins http port even though gRPC and REST (currently using dedicated port) can technically do this - given the number of messages exchanged for a single job, should this use bidirectional streaming - started with a proto layout that is mostly unary client calls and server streaming responses - but should the trigger really be a single connection with multiple messages around the hand-off coordination? I've also had a bunch of questions to bounce off Monty around how to deal with Zuulv3 and Jenkins node coordination, the upshot of those is to leave that to nodepool and probably help out with the Jenkins nodepool plugin and make that a dependency of the zuul trigger plugin. For more detail, originally I was worrying about how to ensure zuul trigger plugin would work without using the nodepool plugin, and it would end up needing to query Jenkins as to whether it had the resources to execute the on before triggering, and this would require a deeper integration into Zuul. It was suggested that this would complicate it deeply and would be better to have nodepool manage the resources, so that zuul would only start the ansible job that would call the zuul trigger API endpoints if nodepool had sufficient resources for Jenkins to use to execute the job. It appears clear any migration path to Zuulv3 involving Jenkins would want to use Zuulv2 -> Gearman -> Jenkins -> Nodepoolv3 first, then migrate Zuulv3. So I'm looking to help on the Jenkins Nodepool plugin first, and that will probably make it much more straight forward to understand what needs to be done in the trigger plugin. A couple of weeks ago someone was in the Zuul room chatting about working on the Nodepool plugin for Jenkins, wondering if there has been any progress there, and what I can help out with in the future. -- Darragh Bailey "Nothing is foolproof to a sufficiently talented fool" -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Thu Mar 8 19:21:58 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 8 Mar 2018 13:21:58 -0600 Subject: Landing changes to zuul-web javascript Message-ID: Hey everybody! Something just occurred to me while reviewing the next set of patches from Tristan to zuul-web that I think is important to share more broadly. In OpenStack's deployment of Zuul, HTML/JS changes to zuul-web go live basically immediately (give or take 15 minutes) No services need to be restarted or anything. However, while the python code will also be _installed_ at the same time, it will not go live immediately. It's also completely possible for someone to deploy web assets to a separate web server or even an object store like swift. For that reason, I'd like to suggest we make a practice of splitting web changes into patches that add methods to the REST interface on zuul-web, and patches that add Javascript consumption. Then make sure when approving patches that the REST changes have gone live - which will let us verify the draft changes from the build-javascript-content job. We could obviously add feature-flags to the javascript so that the two patches could land together. 
However, I don't think it's time for that yet for two reasons: a) this dashboard is still pretty new and we're still figuring some things out but more importantly b) we can't verify the html/js changes live in the draft change until the REST side has gone live anyway Anyway - I'm sure there will be exceptions where it's fine - so people should use good judgment as always - but at least be aware that landing javascript changes will essentially result in a near-immediate application of them to production for OpenStack and judge accordingly. Monty From fungi at yuggoth.org Thu Mar 8 19:30:05 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Mar 2018 19:30:05 +0000 Subject: Landing changes to zuul-web javascript In-Reply-To: References: Message-ID: <20180308193005.bdpbodzj7a3lrvcw@yuggoth.org> On 2018-03-08 13:21:58 -0600 (-0600), Monty Taylor wrote: [...] > b) we can't verify the html/js changes live in the draft change until the > REST side has gone live anyway [...] Can we add a draft publication job which puts an alternate dashboard on logs.openstack.org pointing at the zuul.openstack.org API? The methods it uses are presumably read-only, so this should be safe in theory right? Then we get something akin to the storyboard-webclient draft builds (without needing to point them at a neutered -dev instance) where we can effectively test out the proposed UI changes before approving. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mordred at inaugust.com Thu Mar 8 19:48:03 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 8 Mar 2018 13:48:03 -0600 Subject: Landing changes to zuul-web javascript In-Reply-To: <20180308193005.bdpbodzj7a3lrvcw@yuggoth.org> References: <20180308193005.bdpbodzj7a3lrvcw@yuggoth.org> Message-ID: <9d19ca77-be2f-fc85-c0d9-5f7551fcbd2e@inaugust.com> On 03/08/2018 01:30 PM, Jeremy Stanley wrote: > On 2018-03-08 13:21:58 -0600 (-0600), Monty Taylor wrote: > [...] >> b) we can't verify the html/js changes live in the draft change until the >> REST side has gone live anyway > [...] > > Can we add a draft publication job which puts an alternate dashboard > on logs.openstack.org pointing at the zuul.openstack.org API? The > methods it uses are presumably read-only, so this should be safe in > theory right? Then we get something akin to the storyboard-webclient > draft builds (without needing to point them at a neutered -dev > instance) where we can effectively test out the proposed UI changes > before approving. Already done - and this is the thing of which I speak: http://logs.openstack.org/71/545671/10/check/build-javascript-content/07bf929/npm/html/status.html The "build-javascript-content" job publishes a draft copy of the nice new shiny javascript/html content - and links to it using success_url. It's set up to point to the live zuul.openstack.org API. The main issue is that it can only point to the live API - so if a change has both python and javascript elements, while we'll publish a draft copy of the web content, the server-side content will only get updated once the python lands and is rolled live - so the draft web changes won't show the new functionality. In fact, the one I linked to above is a javascript change at the end of a stack that includes python changes as well. 
If you click on "projects" - you'll note that it doesn't show anything - because we haven't landed the python Projects API endpoint yet. Monty From fungi at yuggoth.org Thu Mar 8 21:04:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Mar 2018 21:04:22 +0000 Subject: Landing changes to zuul-web javascript In-Reply-To: <9d19ca77-be2f-fc85-c0d9-5f7551fcbd2e@inaugust.com> References: <20180308193005.bdpbodzj7a3lrvcw@yuggoth.org> <9d19ca77-be2f-fc85-c0d9-5f7551fcbd2e@inaugust.com> Message-ID: <20180308210421.3kdlmd357tzjlkgg@yuggoth.org> On 2018-03-08 13:48:03 -0600 (-0600), Monty Taylor wrote: [...] > > Can we add a draft publication job which puts an alternate dashboard > > on logs.openstack.org pointing at the zuul.openstack.org API? [...] > Already done - and this is the thing of which I speak: [...] Awesome! I have nothing constructive to add aside from my wholehearted agreement on this plan. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Fri Mar 9 17:54:07 2018 From: corvus at inaugust.com (James E. Blair) Date: Fri, 09 Mar 2018 09:54:07 -0800 Subject: Landing changes to zuul-web javascript In-Reply-To: (Monty Taylor's message of "Thu, 8 Mar 2018 13:21:58 -0600") References: Message-ID: <877eqljfsw.fsf@meyer.lemoncheese.net> Monty Taylor writes: > For that reason, I'd like to suggest we make a practice of splitting > web changes into patches that add methods to the REST interface on > zuul-web, and patches that add Javascript consumption. Then make sure > when approving patches that the REST changes have gone live - which > will let us verify the draft changes from the build-javascript-content > job. This sounds good -- the only thing I'd add (or, perhaps "emphasize" would be the right word) is that changes which add REST methods should have unit tests which exercise them. That should give us the confidence to land those alone, before the JS changes land. -Jim From tdecacqu at redhat.com Sat Mar 10 05:56:57 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Sat, 10 Mar 2018 05:56:57 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> Message-ID: <1520651673.vknnu6fdkq.tristanC@fedora> On March 7, 2018 3:28 pm, David Shrewsbury wrote: > On Wed, Mar 7, 2018 at 12:38 AM, Tristan Cacqueray > wrote: > >> Thanks for the thorough review David. >> Indeed kazoo Locks are ephemerals and they vanish when the connection is >> lost. >> However the zk-retry patch doesn't seems to affect current behaviors, >> please see my comments inlined bellow and on the nodepool review. > > > Actually, the behavior is very different. See the sample code and output > below. > > > >> >> >> On March 6, 2018 6:38 pm, David Shrewsbury wrote: >> >>> I've played with some test code around this and I feel convinced we could >>> have >>> issues with locks and ephemeral nodes if we are hiding ZK connection >>> problems >>> (particularly session timeouts, which cause us to lose both things) behind >>> retries. >>> >>> Would you mind sharing the test code? > > > Certainly. 
> > This is the script I used that simulates current nodepool behavior > (zk_retry_test1.py): > > http://paste.openstack.org/show/693650/ > > This is the script that simulates your proposed changes (zk_retry_test2.py): > > http://paste.openstack.org/show/693651/ > > Running zk_retry_test1.py locally (and shutting down ZK during the sleep() > call and > leaving it shutdown long enough for the session to expire) produces the > following output > (note the exception): > > http://paste.openstack.org/show/693653/ > > Doing the same with zk_retry_test2.py produces very different output (note > the lack > of the exception): > > http://paste.openstack.org/show/693654/ > > Your proposed changes will hide that SessionExpiredError exception. So if > we have > code depending on that exception bubbling up to know we've lost a lock or an > ephemeral node, it will now be broken with the use of retry(). And there is > nothing > provided in your review that let's us know the session was lost. > Well yes, the goal is to ignore whenever the session or connection is lost. In particular, if the plan is to replace gearman by zookeeper, then we'd better be more resilient to zookeeper exceptions. So, the issue is the kazoo Lock implementation, which uses ephemeral objects that are automatically removed when the session is lost. I've reworked the review to add Lock verification before critical state changes: Instead of inferring Lock status from the connection state there is now a lockCheck method to ensure that the lock is still present. > Looking at some of the builder and launcher code this morning with the lost > session error > freshly in mind, I see lots of code that doesn't even take this into > account properly, so I think > we may be potentially broken in many places with retries or not. :( > > > >> >> Right now we can depend on those connection errors to bubble up to nodepool >>> and zuul as exceptions, and we can assume our locks or ephemeral nodes are >>> gone at that point and handle accordingly. If we now hide those, we need >>> to >>> begin >>> using a listener [1] to know when our session has timed out and set some >>> sort of >>> state that we will have to check at multiple key points. FWIW, I started >>> nodepool >>> v3 off with such a listener and the logic became pretty difficult fairly >>> quickly. >>> >>> The proposal aims at removing those "key points" checks because they add >> complexicity and they look prone to race conditions, for example when >> the connection get lost after the "if self._zk.lost" check. >> > > Well, we HAVE to know when we lost a session. I like that your review > removes > the "wait for the connection to return" pieces. But we still need to know > when the > session is terminated. We can lose a connection, but keep our session. Or > we can > lose a connection and lose our session (if the connection loss was long > enough). > > It seems like we can get around the connection listener logic by checking the ephemeral lock node without relying on the zk.lost property. > >> Looking at all Zookeeper Lock usage, only the imageBuildLock seems to >> benefit from the checks: if a builder lost the connection >> during a build, it will mark the image as FAILED and another builder will >> try >> again. What happens if the other builder succeed before the first one? >> With the zk-retry patch, both builder will mark the image as READY, >> and as far as I can tell, this doesn't seems to be an issue. >> > > That would be an issue, actually. We'd have two builds of the same image. 
> We only keep the 2 most recent builds, so we'd lose our previous image. > If the 2 we just built is a bad image, we'd have nothing to rollback to. > Ok, that makes sense. However I think this behavior is still incorrect if another builder finishes the build before the first builder. It seems like we need to check if the build isn't READY before setting it as FAILED. > > >> >> The other Lock aren't protected, for example multiple builder may start >> an image upload when the connection get lost. The zk-retry patch doesn't >> seem to change the behavior. >> > > If nothing else, I think this discussion has revealed deficiencies in our > code base > around handling session losses (I cite an example below). Our locks are > supposed > to be protected against this but I don't think we've done this well in all > places. Whether > we use retries or not, I think we need to correct this. > > >> Zuul node locks are still an issue, e.g. builds may stay queued >> whenever the zk-retry patch is used. >> Perhaps a follow-up patch could improve that situation, for example >> by re-creating the request when the connection state switch back to >> CONNECTED. >> >> >> If you are running with this code in your environment now, I'd say you've >>> probably >>> just been fairly lucky to not have a ZK session' timeout in just the right >>> place. I found >>> it fairly difficult to get to happen in my tests today, and it was never >>> consistent when >>> I did get it to happen (either something weird with ZK itself, or the >>> kazoo >>> library). >>> >>> Well we used to get those kazoo connection stack traces almost daily, >> and I couldn't find the root cause, ftr here is the setup: >> >> * Instances network setup by Neutron with ovs, >> * CentOS 7.4 kernels, with tcp_ecn sysctl set to 0, and >> * Zookeeper configuration: http://paste.openstack.org/show/693515/ >> >> Though the bottomline is to make Zuul and Nodepool more resilient to >> Zookeeper issues. >> >> What would be the wrong place for a connection lost to happen? > > > Connection loss and session loss are different. A connection loss may or > may not result in a session loss (it depends on how long we lost the > connection). > > An example of a bad place to lose a session would be during node launch. > Right now, if NodeLauncher._launchNode() experiences a session loss, the > SessionExpiredError exception might bubble up and we'd retry the launch. > Now it turns out, we aren't handling that particular exception and we are > still assuming we have lock. This is a huge bug that I've just discovered > thanks > to this discussion because in that situation, we could have an unlocked Node > in BUILDING state, and our cleanup thread could delete that before the > building > thread does something else with it again. > Similarly, I added a new protection before setting a node as READY: the launcher checks the lock, and if it got lost, the node is left as BUILDING to ensure proper cleanup. > So in that example, we must add an except clause to catch that particular > exception and handle it differently. With your proposed changes, we'd need a > way to know our session was lost, and make the same choices for handling it > differently. You're review isn't providing that, but I _can_ see a way > forward for > the current code. > I think the current code isn't ideal because it won't catch connection issues happening after the check for zk.lost. There will always be a window where the connection can get lost between the check and the critical code. 
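To make that window concrete, here is a tiny self-contained sketch against a
local ZooKeeper (the znode paths, the node data and the "launcher-1" holder id
are all made up for illustration; this is not the actual nodepool code):

    #!/usr/bin/env python
    # Rough sketch only -- illustrative paths/data, not nodepool's API.
    from kazoo.client import KazooClient
    from kazoo.exceptions import SessionExpiredError

    NODE = '/sketch/nodes/0000000042'
    LOCK = NODE + '/lock'

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()
    zk.ensure_path(NODE)
    zk.create(LOCK, b'launcher-1', ephemeral=True)  # "hold the lock"

    # Pattern used today: infer lock ownership from the connection listener
    # (this stands in for the self._zk.suspended / self._zk.lost flags).
    # The session can expire between this check and the write below, so the
    # write can still happen without the lock.
    if zk.state != 'CONNECTED':
        raise SessionExpiredError("listener says we lost ZooKeeper")
    zk.set(NODE, b'state: ready')

    # Pattern I am proposing: look at the lock znode itself right before the
    # critical write.  A lost session removes the ephemeral lock node, so the
    # stale update is refused with an exception the caller can handle.
    if zk.exists(LOCK) is None:
        raise SessionExpiredError("lock znode is gone, aborting the update")
    zk.set(NODE, b'state: ready')

    zk.stop()

The second check still leaves a small gap between exists() and set(), but it
sits right next to the write instead of at the top of the call, and a lost
session surfaces as an exception rather than a silent unlocked write.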
Alternatively, we could implement persistent Lock using a timestamp and a thread to refresh it. Perhaps this implemention could be added to kazoo codebase directly. > > >> Also, I hope I'm not coming off as difficult around this. I just know how >>> hard it was >>> to get what we have now functioning correctly, so I'm very protective of >>> it. Hopefully >>> we can get this sorted out for you, and we can learn a bunch of stuff >>> along >>> the way. >>> >>> >> Not all, Zookeeper error handling is tricky :) >> Thank you for sharing your valuable feedback. > > > This has been an excellent discussion! > Thank you again for discussing this with me! -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From shrewsbury.dave at gmail.com Mon Mar 12 21:37:29 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Mon, 12 Mar 2018 17:37:29 -0400 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1520651673.vknnu6fdkq.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> Message-ID: On Sat, Mar 10, 2018 at 12:56 AM, Tristan Cacqueray wrote: > On March 7, 2018 3:28 pm, David Shrewsbury wrote: > >> On Wed, Mar 7, 2018 at 12:38 AM, Tristan Cacqueray >> wrote: >> >> Thanks for the thorough review David. >>> Indeed kazoo Locks are ephemerals and they vanish when the connection is >>> lost. >>> However the zk-retry patch doesn't seems to affect current behaviors, >>> please see my comments inlined bellow and on the nodepool review. >>> >> >> >> Actually, the behavior is very different. See the sample code and output >> below. >> >> >> >> >>> >>> On March 6, 2018 6:38 pm, David Shrewsbury wrote: >>> >>> I've played with some test code around this and I feel convinced we could >>>> have >>>> issues with locks and ephemeral nodes if we are hiding ZK connection >>>> problems >>>> (particularly session timeouts, which cause us to lose both things) >>>> behind >>>> retries. >>>> >>>> Would you mind sharing the test code? >>>> >>> >> >> Certainly. >> >> This is the script I used that simulates current nodepool behavior >> (zk_retry_test1.py): >> >> http://paste.openstack.org/show/693650/ >> >> This is the script that simulates your proposed changes >> (zk_retry_test2.py): >> >> http://paste.openstack.org/show/693651/ >> >> Running zk_retry_test1.py locally (and shutting down ZK during the sleep() >> call and >> leaving it shutdown long enough for the session to expire) produces the >> following output >> (note the exception): >> >> http://paste.openstack.org/show/693653/ >> >> Doing the same with zk_retry_test2.py produces very different output (note >> the lack >> of the exception): >> >> http://paste.openstack.org/show/693654/ >> >> Your proposed changes will hide that SessionExpiredError exception. So if >> we have >> code depending on that exception bubbling up to know we've lost a lock or >> an >> ephemeral node, it will now be broken with the use of retry(). And there >> is >> nothing >> provided in your review that let's us know the session was lost. >> >> Well yes, the goal is to ignore whenever the session or connection is > lost. > In particular, if the plan is to replace gearman by zookeeper, then we'd > better be more resilient to zookeeper exceptions. 
> > So, the issue is the kazoo Lock implementation, which uses ephemeral > objects > that are automatically removed when the session is lost. I've reworked > the review to add Lock verification before critical state changes: > Instead of inferring Lock status from the connection state there is now a > lockCheck method to ensure that the lock is still present. > > > Looking at some of the builder and launcher code this morning with the lost >> session error >> freshly in mind, I see lots of code that doesn't even take this into >> account properly, so I think >> we may be potentially broken in many places with retries or not. :( >> >> >> >> >>> Right now we can depend on those connection errors to bubble up to >>> nodepool >>> >>>> and zuul as exceptions, and we can assume our locks or ephemeral nodes >>>> are >>>> gone at that point and handle accordingly. If we now hide those, we need >>>> to >>>> begin >>>> using a listener [1] to know when our session has timed out and set some >>>> sort of >>>> state that we will have to check at multiple key points. FWIW, I started >>>> nodepool >>>> v3 off with such a listener and the logic became pretty difficult fairly >>>> quickly. >>>> >>>> The proposal aims at removing those "key points" checks because they add >>>> >>> complexicity and they look prone to race conditions, for example when >>> the connection get lost after the "if self._zk.lost" check. >>> >>> >> Well, we HAVE to know when we lost a session. I like that your review >> removes >> the "wait for the connection to return" pieces. But we still need to know >> when the >> session is terminated. We can lose a connection, but keep our session. Or >> we can >> lose a connection and lose our session (if the connection loss was long >> enough). >> >> >> It seems like we can get around the connection listener logic by > checking the ephemeral lock node without relying on the zk.lost property. > > >> Looking at all Zookeeper Lock usage, only the imageBuildLock seems to >>> benefit from the checks: if a builder lost the connection >>> during a build, it will mark the image as FAILED and another builder will >>> try >>> again. What happens if the other builder succeed before the first one? >>> With the zk-retry patch, both builder will mark the image as READY, >>> and as far as I can tell, this doesn't seems to be an issue. >>> >>> >> That would be an issue, actually. We'd have two builds of the same image. >> We only keep the 2 most recent builds, so we'd lose our previous image. >> If the 2 we just built is a bad image, we'd have nothing to rollback to. >> >> Ok, that makes sense. However I think this behavior is still incorrect if > another builder finishes the build before the first builder. It seems like > we need to check if the build isn't READY before setting it as FAILED. I don't get your thinking here. We should NEVER have multiple builders building the same image. That's the whole reason for the image build lock. [snip] This email is simply an FYI on where my thinking is on this right now. I'm still mulling over the options and waiting for feedback from others (particularly Jim). These are the things I know: 1) A lot of our code has been depending on exceptions being thrown from ZK to decide to abort certain operations. Session errors being among the expected exceptions. 2) We have places in current code that don't even consider session errors, and this needs to be corrected. Very much for sad making. 
:( 3) You have provided a way to both use retry() AND examine locks to determine if we have lost a particular lock. (Still need something similar for ephemeral nodes, but that's easy.) 4) When considering how to move forward with fixing 1 and 2 to support your change, where we used to be able to surround a critical section of code with try-except to handle session errors, we now have to sprinkle in checks for locks and ephemeral nodes. Which is better? My gut is telling me that try-except would be much nicer to use than liberally adding lock/ephemeral node checks after each ZK API call. But, since our current session handling is poor to non-existent in places, I haven't decided if this is true or not. Here is where I'd like to get opinions from other folks. -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tobias.Henkel at bmw.de Tue Mar 13 14:15:26 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Tue, 13 Mar 2018 14:15:26 +0000 Subject: Zuul meeting time Message-ID: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> Hi zuul community, Currently the zuul meeting is at 22UTC which is currently 23:00 localtime or soon 0:00 within summer time at my location. That makes it impossible for me to participate regularly. However I’d really like to attend the zuul meeting. So I’d like to ask zuul folks about your opinion about shifting it around 20ish UTC. I wouldn’t have any strong opinion about the day. Thanks and kind regards Tobias -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Mar 13 14:43:35 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Mar 2018 07:43:35 -0700 Subject: Zuul meeting time In-Reply-To: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> (Tobias Henkel's message of "Tue, 13 Mar 2018 14:15:26 +0000") References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> Message-ID: <874llkxch4.fsf@meyer.lemoncheese.net> writes: > Hi zuul community, > > Currently the zuul meeting is at 22UTC which is currently 23:00 > localtime or soon 0:00 within summer time at my location. That makes > it impossible for me to participate regularly. However I’d really like > to attend the zuul meeting. So I’d like to ask zuul folks about your > opinion about shifting it around 20ish UTC. I wouldn’t have any strong > opinion about the day. I'd be happy with 20:00 utc. It's one of the least-worst times we've come up with in OpenStack, making it equally moderately painful for APAC and EMEA. We can keep it on Monday, or we have the option of changing to Tuesday. That would place it immediately after the OpenStack infra meeting. -Jim From fungi at yuggoth.org Tue Mar 13 14:55:31 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 13 Mar 2018 14:55:31 +0000 Subject: Zuul meeting time In-Reply-To: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> Message-ID: <20180313145529.epubrt577nvicyyc@yuggoth.org> On 2018-03-13 14:15:26 +0000 (+0000), Tobias.Henkel at bmw.de wrote: > Currently the zuul meeting is at 22UTC which is currently 23:00 > localtime or soon 0:00 within summer time at my location. That > makes it impossible for me to participate regularly. However I’d > really like to attend the zuul meeting. So I’d like to ask zuul > folks about your opinion about shifting it around 20ish UTC. I > wouldn’t have any strong opinion about the day. 
I have no objection to 20:00 UTC (no regularly-scheduled conflicts for that time on any day of the week), but it's also only 3pm or 4pm local time for me. I'm more concerned with how it might impact APAC attendees and as such am particularly interested in their input. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pabelanger at redhat.com Tue Mar 13 14:56:54 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 13 Mar 2018 10:56:54 -0400 Subject: Zuul meeting time In-Reply-To: <874llkxch4.fsf@meyer.lemoncheese.net> References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> <874llkxch4.fsf@meyer.lemoncheese.net> Message-ID: <20180313145654.GA21469@localhost.localdomain> On Tue, Mar 13, 2018 at 07:43:35AM -0700, James E. Blair wrote: > writes: > > > Hi zuul community, > > > > Currently the zuul meeting is at 22UTC which is currently 23:00 > > localtime or soon 0:00 within summer time at my location. That makes > > it impossible for me to participate regularly. However I’d really like > > to attend the zuul meeting. So I’d like to ask zuul folks about your > > opinion about shifting it around 20ish UTC. I wouldn’t have any strong > > opinion about the day. > > I'd be happy with 20:00 utc. It's one of the least-worst times we've > come up with in OpenStack, making it equally moderately painful for APAC > and EMEA. > > We can keep it on Monday, or we have the option of changing to Tuesday. > That would place it immediately after the OpenStack infra meeting. > > -Jim > +1 to 22:00UTC, I also don't mind if we move it to Tuesday. It actually helps move my meetings to a single day. Paul From corvus at inaugust.com Tue Mar 13 15:02:28 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Mar 2018 08:02:28 -0700 Subject: Zuul meeting time In-Reply-To: <20180313145654.GA21469@localhost.localdomain> (Paul Belanger's message of "Tue, 13 Mar 2018 10:56:54 -0400") References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> <874llkxch4.fsf@meyer.lemoncheese.net> <20180313145654.GA21469@localhost.localdomain> Message-ID: <87muzcvx17.fsf@meyer.lemoncheese.net> Paul Belanger writes: > On Tue, Mar 13, 2018 at 07:43:35AM -0700, James E. Blair wrote: >> writes: >> >> > Hi zuul community, >> > >> > Currently the zuul meeting is at 22UTC which is currently 23:00 >> > localtime or soon 0:00 within summer time at my location. That makes >> > it impossible for me to participate regularly. However I’d really like >> > to attend the zuul meeting. So I’d like to ask zuul folks about your >> > opinion about shifting it around 20ish UTC. I wouldn’t have any strong >> > opinion about the day. >> >> I'd be happy with 20:00 utc. It's one of the least-worst times we've >> come up with in OpenStack, making it equally moderately painful for APAC >> and EMEA. >> >> We can keep it on Monday, or we have the option of changing to Tuesday. >> That would place it immediately after the OpenStack infra meeting. >> >> -Jim >> > +1 to 22:00UTC, I also don't mind if we move it to Tuesday. It actually helps > move my meetings to a single day. 22:00 is the current time -- did you mean to indicate you wanted to keep it at that time, or that you were okay with the proposed move to 20:00? 
-Jim From shrewsbury.dave at gmail.com Tue Mar 13 15:32:11 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Tue, 13 Mar 2018 11:32:11 -0400 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> Message-ID: On Mon, Mar 12, 2018 at 5:37 PM, David Shrewsbury wrote: [snip] 4) When considering how to move forward with fixing 1 and 2 to support > your change, where we used to be able to surround a critical section of > code with try-except to handle session errors, we now have to sprinkle in > checks for locks and ephemeral nodes. Which is better? My gut is > telling me that try-except would be much nicer to use than liberally adding > lock/ephemeral node checks after each ZK API call. But, since our current > session handling is poor to non-existent in places, I haven't decided if > this is true or not. Here is where I'd like to get opinions from other > folks. > To help me (and others) wrap my head around this, here are two solutions using both methods to the bug discussed earlier. (I'm a visual learner and needed to actually see this to form an opinion). This is using the current (not-retry) code: https://review.openstack.org/552538 This is based off your proposed change to use retry: https://review.openstack.org/552551 Note that in the 2nd review, we now have to scatter lock checks before any attempts to store ZooKeeper data within our critical section. Where as in the first, we can just wrap the entire critical section with a try-except. Based on this, I feel like using retry() is going to force us to add much more code, and will end up being more error prone since it adds places where we must check for session loss that would affect locks and ephemeral nodes. I feel like maybe we could use retry() for the initial connection to eliminate the "zookeeper not running before nodepool" startup problem. But I'm not feeling good about using it throughout the rest of the code. -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Tue Mar 13 15:36:48 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 13 Mar 2018 11:36:48 -0400 Subject: Zuul meeting time In-Reply-To: <87muzcvx17.fsf@meyer.lemoncheese.net> References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> <874llkxch4.fsf@meyer.lemoncheese.net> <20180313145654.GA21469@localhost.localdomain> <87muzcvx17.fsf@meyer.lemoncheese.net> Message-ID: <20180313153648.GB21469@localhost.localdomain> On Tue, Mar 13, 2018 at 08:02:28AM -0700, James E. Blair wrote: > Paul Belanger writes: > > > On Tue, Mar 13, 2018 at 07:43:35AM -0700, James E. Blair wrote: > >> writes: > >> > >> > Hi zuul community, > >> > > >> > Currently the zuul meeting is at 22UTC which is currently 23:00 > >> > localtime or soon 0:00 within summer time at my location. That makes > >> > it impossible for me to participate regularly. However I’d really like > >> > to attend the zuul meeting. So I’d like to ask zuul folks about your > >> > opinion about shifting it around 20ish UTC. I wouldn’t have any strong > >> > opinion about the day. > >> > >> I'd be happy with 20:00 utc. It's one of the least-worst times we've > >> come up with in OpenStack, making it equally moderately painful for APAC > >> and EMEA. > >> > >> We can keep it on Monday, or we have the option of changing to Tuesday. 
> >> That would place it immediately after the OpenStack infra meeting. > >> > >> -Jim > >> > > +1 to 22:00UTC, I also don't mind if we move it to Tuesday. It actually helps > > move my meetings to a single day. > > 22:00 is the current time -- did you mean to indicate you wanted to keep > it at that time, or that you were okay with the proposed move to 20:00? > Apologies, cold medication. 20:00UTC also works for me. From corvus at inaugust.com Tue Mar 13 16:37:24 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Mar 2018 09:37:24 -0700 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: (David Shrewsbury's message of "Mon, 12 Mar 2018 17:37:29 -0400") References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> Message-ID: <87y3iwszi3.fsf@meyer.lemoncheese.net> David Shrewsbury writes: > This email is simply an FYI on where my thinking is on this right now. > I'm still mulling over the options and waiting for feedback from others > (particularly Jim). > > These are the things I know: > > 1) A lot of our code has been depending on exceptions being thrown > from ZK to decide to abort certain operations. Session errors being > among the expected exceptions. > > 2) We have places in current code that don't even consider session > errors, and this needs to be corrected. Very much for sad making. :( > > 3) You have provided a way to both use retry() AND examine locks to > determine if we have lost a particular lock. (Still need something similar > for ephemeral nodes, but that's easy.) > > 4) When considering how to move forward with fixing 1 and 2 to support > your change, where we used to be able to surround a critical section of > code with try-except to handle session errors, we now have to sprinkle in > checks for locks and ephemeral nodes. Which is better? My gut is > telling me that try-except would be much nicer to use than liberally adding > lock/ephemeral node checks after each ZK API call. But, since our current > session handling is poor to non-existent in places, I haven't decided if > this is true or not. Here is where I'd like to get opinions from other > folks. My inclination is to prefer coarse error handling. In general, the approach we take in Nodepool is to assume that every call we make can throw an exception, and then some level higher up in the call stack will deal with it (abort the build, mark a node as deleted, etc). That comes from our experience working with clouds, but it can apply equally to interacting with any system. The benefit is that it usually makes code easier to follow for the reasons David mentions. Having to remember to check something after every operation is error-prone (and those are the kinds of errors that lead us to prefer language with exception handling). However, I really like the fact that Tristan's patch removes some explicit session handling code. Removing the wait-for-zookeeper sections is very nice. Though maybe we could just refactor them into a common method. One thing that we haven't discussed is the effect on program flow. In nodepool, we have a large number of threads interacting with zookeeper, and in general, it's probably okay for them to pause while retrying some event. However, in Zuul, the only thread interacting with zookeeper is the main scheduler thread. 
If we adopt retries in Zuul, that thread will end up getting stuck waiting for zookeeper, whereas it could be busy reporting results or canceling jobs. I've learned a lot reading this thread and the patches. I'm happy to continue exploring ways of improving our zookeeper handling. -Jim From corvus at inaugust.com Tue Mar 13 20:33:42 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Mar 2018 13:33:42 -0700 Subject: Disabling nullwrap Message-ID: <871sgnsok9.fsf@meyer.lemoncheese.net> Hi, The recent security fixes have underscored that bubblewrap is currently key to Zuul executor security. While Zuul does have an option to run without bubblewrap (called "nullwrap"), it is not recommended, and it doesn't support secrets, which is a fairly important feature. Nullwrap was written with the idea of aiding those use-cases where it is not possible to run with bubblewrap. However, I think that if such a case still exists, we should work to address that -- nullwrap really shouldn't be the answer. I'd like to disable the ability to use nullwrap to avoid any potential confusion on the subject in the future. If anyone is currently using nullwrap, please speak up so we can start working on a solution. -Jim From clint at fewbar.com Tue Mar 13 20:50:17 2018 From: clint at fewbar.com (Clint Byrum) Date: Tue, 13 Mar 2018 13:50:17 -0700 Subject: Disabling nullwrap In-Reply-To: <871sgnsok9.fsf@meyer.lemoncheese.net> References: <871sgnsok9.fsf@meyer.lemoncheese.net> Message-ID: <1520974148-sup-7367@fewbar.com> Excerpts from corvus's message of 2018-03-13 13:33:42 -0700: > Hi, > > The recent security fixes have underscored that bubblewrap is currently > key to Zuul executor security. While Zuul does have an option to run > without bubblewrap (called "nullwrap"), it is not recommended, and it > doesn't support secrets, which is a fairly important feature. > > Nullwrap was written with the idea of aiding those use-cases where it is > not possible to run with bubblewrap. However, I think that if such a > case still exists, we should work to address that -- nullwrap really > shouldn't be the answer. > > I'd like to disable the ability to use nullwrap to avoid any potential > confusion on the subject in the future. > > If anyone is currently using nullwrap, please speak up so we can start > working on a solution. > +1 from me. I had created nullwrap originally when I had some difficulty testing with bubblewrap, but time has passed and it is now easier to get in most Linux distros. From pabelanger at redhat.com Tue Mar 13 20:55:57 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 13 Mar 2018 16:55:57 -0400 Subject: Disabling nullwrap In-Reply-To: <871sgnsok9.fsf@meyer.lemoncheese.net> References: <871sgnsok9.fsf@meyer.lemoncheese.net> Message-ID: <20180313205557.GA20606@localhost.localdomain> On Tue, Mar 13, 2018 at 01:33:42PM -0700, James E. Blair wrote: > Hi, > > The recent security fixes have underscored that bubblewrap is currently > key to Zuul executor security. While Zuul does have an option to run > without bubblewrap (called "nullwrap"), it is not recommended, and it > doesn't support secrets, which is a fairly important feature. > > Nullwrap was written with the idea of aiding those use-cases where it is > not possible to run with bubblewrap. However, I think that if such a > case still exists, we should work to address that -- nullwrap really > shouldn't be the answer. 
> > I'd like to disable the ability to use nullwrap to avoid any potential > confusion on the subject in the future. > +1 it does seem most Linux distro do support bubblewrap packages now. > If anyone is currently using nullwrap, please speak up so we can start > working on a solution. > I want to say I only heard of one place (OpenShift Online) where bubblewrap had to be disabled. I haven't confirmed that myself however. > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From tdecacqu at redhat.com Wed Mar 14 01:05:05 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 14 Mar 2018 01:05:05 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <87y3iwszi3.fsf@meyer.lemoncheese.net> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> <87y3iwszi3.fsf@meyer.lemoncheese.net> Message-ID: <1520986087.e1akjswbaa.tristanC@fedora> Hello, Looking at our ancient logs (Nov 2017 before we start using zk-retry), we actually only had ConnectionLoss at the begining of the run() code, here is a collection of them: http://paste.openstack.org/show/700703/ I wonder if nodepool.openstack.org is avoiding them because it is such a busy system. It would be useful if other deployments looked for ConnectionLoss and SessionExpired in their logs and share the stacktraces. Please find more comments inlined below. On March 13, 2018 4:37 pm, James E. Blair wrote: > David Shrewsbury writes: > >> This email is simply an FYI on where my thinking is on this right now. >> I'm still mulling over the options and waiting for feedback from others >> (particularly Jim). >> >> These are the things I know: >> >> 1) A lot of our code has been depending on exceptions being thrown >> from ZK to decide to abort certain operations. Session errors being >> among the expected exceptions. >> Last PS makes the checkLock method raise the SessionExpired which seems to provide the same behavior. >> 2) We have places in current code that don't even consider session >> errors, and this needs to be corrected. Very much for sad making. :( >> >> 3) You have provided a way to both use retry() AND examine locks to >> determine if we have lost a particular lock. (Still need something similar >> for ephemeral nodes, but that's easy.) >> It seems more robus than relying on second hand informations from the connection listener. >> 4) When considering how to move forward with fixing 1 and 2 to support >> your change, where we used to be able to surround a critical section of >> code with try-except to handle session errors, we now have to sprinkle in >> checks for locks and ephemeral nodes. Which is better? My gut is >> telling me that try-except would be much nicer to use than liberally adding >> lock/ephemeral node checks after each ZK API call. But, since our current >> session handling is poor to non-existent in places, I haven't decided if >> this is true or not. Here is where I'd like to get opinions from other >> folks. > > My inclination is to prefer coarse error handling. In general, the > approach we take in Nodepool is to assume that every call we make can > throw an exception, and then some level higher up in the call stack will > deal with it (abort the build, mark a node as deleted, etc). 
That comes > from our experience working with clouds, but it can apply equally to > interacting with any system. The benefit is that it usually makes code > easier to follow for the reasons David mentions. Having to remember to > check something after every operation is error-prone (and those are the > kinds of errors that lead us to prefer language with exception > handling). > Agreed, but why the builder isn't relying on exception handling here: http://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/builder.py#n789 I actually implemented manual check because of that builder use-case, though this can be easily modified to raise exceptions instead. > However, I really like the fact that Tristan's patch removes some > explicit session handling code. Removing the wait-for-zookeeper > sections is very nice. Though maybe we could just refactor them into a > common method. > I still don't understand the logic of checking if the connection has been lost. What happens if the connection get lost after the wait-for-zookeeper sections? > One thing that we haven't discussed is the effect on program flow. In > nodepool, we have a large number of threads interacting with zookeeper, > and in general, it's probably okay for them to pause while retrying some > event. However, in Zuul, the only thread interacting with zookeeper is > the main scheduler thread. If we adopt retries in Zuul, that thread > will end up getting stuck waiting for zookeeper, whereas it could be > busy reporting results or canceling jobs. > Actually we haven't discussed the effect of multiple thread experiencing SessionExpired errors, will kazoo raise the exception for each threads, or will only one get it? Looking at the builder code linked above, it seems like the first thread may resetLostFlag(). > I've learned a lot reading this thread and the patches. I'm happy to > continue exploring ways of improving our zookeeper handling. > > -Jim > Please have a look again at 535537 (PS5), I think it implements the best of both strategies: it removes the wait-for-zookeeper sections while still using the SessionExpired exception. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From joshua.hesketh at gmail.com Wed Mar 14 03:15:01 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Wed, 14 Mar 2018 14:15:01 +1100 Subject: Zuul meeting time In-Reply-To: <20180313153648.GB21469@localhost.localdomain> References: <4F09B441-7111-4052-905A-4CB180DD167A@bmw.de> <874llkxch4.fsf@meyer.lemoncheese.net> <20180313145654.GA21469@localhost.localdomain> <87muzcvx17.fsf@meyer.lemoncheese.net> <20180313153648.GB21469@localhost.localdomain> Message-ID: Hey, 20:00UTC would be 6am for eastern Australia once daylight savings ends in a few weeks. This is not ideal for me and I would selfishly prefer to leave it, but since I have been less active in general perhaps it is better to optimise for those doing significant work. That said, perhaps we could look at moving it to 21:00UTC, which would make it 7am for myself and hopefully not too late for others. Cheers, Josh On Wed, Mar 14, 2018 at 2:36 AM, Paul Belanger wrote: > On Tue, Mar 13, 2018 at 08:02:28AM -0700, James E. Blair wrote: > > Paul Belanger writes: > > > > > On Tue, Mar 13, 2018 at 07:43:35AM -0700, James E. 
Blair wrote: > > >> writes: > > >> > > >> > Hi zuul community, > > >> > > > >> > Currently the zuul meeting is at 22UTC which is currently 23:00 > > >> > localtime or soon 0:00 within summer time at my location. That makes > > >> > it impossible for me to participate regularly. However I’d really > like > > >> > to attend the zuul meeting. So I’d like to ask zuul folks about your > > >> > opinion about shifting it around 20ish UTC. I wouldn’t have any > strong > > >> > opinion about the day. > > >> > > >> I'd be happy with 20:00 utc. It's one of the least-worst times we've > > >> come up with in OpenStack, making it equally moderately painful for > APAC > > >> and EMEA. > > >> > > >> We can keep it on Monday, or we have the option of changing to > Tuesday. > > >> That would place it immediately after the OpenStack infra meeting. > > >> > > >> -Jim > > >> > > > +1 to 22:00UTC, I also don't mind if we move it to Tuesday. It > actually helps > > > move my meetings to a single day. > > > > 22:00 is the current time -- did you mean to indicate you wanted to keep > > it at that time, or that you were okay with the proposed move to 20:00? > > > Apologies, cold medication. 20:00UTC also works for me. > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Wed Mar 14 03:32:46 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 14 Mar 2018 03:32:46 +0000 Subject: Disabling nullwrap In-Reply-To: <20180313205557.GA20606@localhost.localdomain> References: <871sgnsok9.fsf@meyer.lemoncheese.net> <20180313205557.GA20606@localhost.localdomain> Message-ID: <1520997724.3tm19tpo9a.tristanC@fedora> On March 13, 2018 8:55 pm, Paul Belanger wrote: > On Tue, Mar 13, 2018 at 01:33:42PM -0700, James E. Blair wrote: >> Hi, >> >> The recent security fixes have underscored that bubblewrap is currently >> key to Zuul executor security. While Zuul does have an option to run >> without bubblewrap (called "nullwrap"), it is not recommended, and it >> doesn't support secrets, which is a fairly important feature. >> >> Nullwrap was written with the idea of aiding those use-cases where it is >> not possible to run with bubblewrap. However, I think that if such a >> case still exists, we should work to address that -- nullwrap really >> shouldn't be the answer. >> >> I'd like to disable the ability to use nullwrap to avoid any potential >> confusion on the subject in the future. >> > +1 it does seem most Linux distro do support bubblewrap packages now. > >> If anyone is currently using nullwrap, please speak up so we can start >> working on a solution. >> > I want to say I only heard of one place (OpenShift Online) where bubblewrap had > to be disabled. I haven't confirmed that myself however. > Hi, Bubblewrap currently does require a privileged container to create new namespaces. With containers we are supposed to split each context into dedicated containers as opposed to multiple differents process running in the same container. I am not sure how feasable this is with the driver interface where we need to share execution data between each play run. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From shrewsbury.dave at gmail.com Wed Mar 14 17:10:27 2018 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Wed, 14 Mar 2018 13:10:27 -0400 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1520986087.e1akjswbaa.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> <87y3iwszi3.fsf@meyer.lemoncheese.net> <1520986087.e1akjswbaa.tristanC@fedora> Message-ID: On Tue, Mar 13, 2018 at 9:05 PM, Tristan Cacqueray wrote: [snip[ >> Please have a look again at 535537 (PS5), I think it implements the best > of both strategies: it removes the wait-for-zookeeper sections while still > using the SessionExpired exception. Consolidating the lock checks into the write methods works since we only need to make sure that we still have the lock before we update the ZK database. This doesn't work for ephemeral nodes though. How would you propose we address that? -Dave -- David Shrewsbury (Shrews) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Fri Mar 16 01:46:04 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 16 Mar 2018 01:46:04 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> <87y3iwszi3.fsf@meyer.lemoncheese.net> <1520986087.e1akjswbaa.tristanC@fedora> Message-ID: <1521163612.mijxhfgki1.tristanC@fedora> On March 14, 2018 5:10 pm, David Shrewsbury wrote: > On Tue, Mar 13, 2018 at 9:05 PM, Tristan Cacqueray > wrote: > > [snip[ > > >>> Please have a look again at 535537 (PS5), I think it implements the best >> of both strategies: it removes the wait-for-zookeeper sections while still >> using the SessionExpired exception. > > > Consolidating the lock checks into the write methods works since we only > need to make sure that we still have the lock before we update the ZK > database. This doesn't work for ephemeral nodes though. How would you > propose we address that? > > Nodepool uses 2 ephemeral nodes: * launcher registration, which is auto-recreated at the begining of the run loop * Min-Ready NodeRequest, which doesn't seems to be an issue if lost, the launcher will recreate them. Zuul only uses 1 ephemeral node for NodeRequest. It is currently not protected (with or without the zk-retry patch): if the session is lost, then NodeRequest are not re-created and queue items may get stuck in the queued state. I think we could improve this by storing the request in a local list, and have a checkRequests method to re-create them when this happens. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From Tobias.Henkel at bmw.de Fri Mar 16 08:10:33 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Fri, 16 Mar 2018 08:10:33 +0000 Subject: Zookeeper ConnectionLoss/SessionExpired issues In-Reply-To: <1521163612.mijxhfgki1.tristanC@fedora> References: <1520216795.pc3r2jevm5.tristanC@fedora> <1520296945.n7h8zkzye1.tristanC@fedora> <1520387902.pcnp10igdm.tristanC@fedora> <1520651673.vknnu6fdkq.tristanC@fedora> <87y3iwszi3.fsf@meyer.lemoncheese.net> <1520986087.e1akjswbaa.tristanC@fedora> <1521163612.mijxhfgki1.tristanC@fedora> Message-ID: On 16.03.18, 02:45, "Tristan Cacqueray" wrote: Nodepool uses 2 ephemeral nodes: * launcher registration, which is auto-recreated at the begining of the run loop * Min-Ready NodeRequest, which doesn't seems to be an issue if lost, the launcher will recreate them. Zuul only uses 1 ephemeral node for NodeRequest. It is currently not protected (with or without the zk-retry patch): if the session is lost, then NodeRequest are not re-created and queue items may get stuck in the queued state. I think we could improve this by storing the request in a local list, and have a checkRequests method to re-create them when this happens. -Tristan Regarding the NodeRequest in zuul there is a change [1] in review to react on lost node requests. [1] https://review.openstack.org/537335 - Tobias From tdecacqu at redhat.com Mon Mar 19 02:20:22 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Mon, 19 Mar 2018 02:20:22 +0000 Subject: Services missing tenant scope (nodepool-*, zuul-executor) Message-ID: <1521425389.tf2bg9ecc0.tristanC@fedora> Hello folks, Is there a plan to add a tenant scope to all the services? For example, we could add the tenant name to NodeRequest and a 'tenant' setting in nodepool.yaml so that a provider and its labels could be restricted to a particular tenant. Similarly for the executors, we could be able to isolate tenant job execution from each other. Has this been discussed at the PTG? Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From Tobias.Henkel at bmw.de Mon Mar 19 06:51:45 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Mon, 19 Mar 2018 06:51:45 +0000 Subject: Services missing tenant scope (nodepool-*, zuul-executor) In-Reply-To: <1521425389.tf2bg9ecc0.tristanC@fedora> References: <1521425389.tf2bg9ecc0.tristanC@fedora> Message-ID: Hi, On 19.03.18, 03:19, "Tristan Cacqueray" wrote: Hello folks, Is there a plan to add a tenant scope to all the services? For example, we could add the tenant name to NodeRequest and a 'tenant' setting in nodepool.yaml so that a provider and its labels could be restricted to a particular tenant. Similarly for the executors, we could be able to isolate tenant job execution from each other. Has this been discussed at the PTG? Regards, -Tristan Regarding tenant label restriction we had a discussion at the PTG [1]. We Discussed two options, making nodepool tenant aware and defining the label restrictions in the zuul tenant config. I'm not sure if there was a final decision (at least the protocol doesn't mention a decision). 
But defining these restrictions in zuul seem easier to implement to me and further adds the information about tenants in a single specific location so you don't have to mess with tenant config and nodepool config to allow/disallow labels for tenants. So I'm in favor of handling that use case in the tenant config. Regarding the executors there was a discussion about region-support for executors (which you might be able to exploit to isolate executor tenants from each other). Check out the topic ' Zuul executor affinity' for this. I'm interested in your use case about this as the zuul-executor already has a two-barrier (ansible restrictions and bwrap) approach for isolating jobs (and thus tenants) against each other. [1] https://etherpad.openstack.org/p/infra-rocky-ptg Kind regards Tobias From tdecacqu at redhat.com Mon Mar 19 08:03:23 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Mon, 19 Mar 2018 08:03:23 +0000 Subject: Services missing tenant scope (nodepool-*, zuul-executor) In-Reply-To: References: <1521425389.tf2bg9ecc0.tristanC@fedora> Message-ID: <1521445551.receronx60.tristanC@fedora> Hi Tobias, thanks for your feedback, please find some comments inlined bellow. On March 19, 2018 6:51 am, Tobias.Henkel at bmw.de wrote: > Hi, > > On 19.03.18, 03:19, "Tristan Cacqueray" wrote: > > Hello folks, > > Is there a plan to add a tenant scope to all the services? > > For example, we could add the tenant name to NodeRequest and a 'tenant' > setting in nodepool.yaml so that a provider and its labels could be > restricted to a particular tenant. > > Similarly for the executors, we could be able to isolate tenant job > execution from each other. > > > Has this been discussed at the PTG? > > Regards, > -Tristan > > Regarding tenant label restriction we had a discussion at the PTG [1]. We > Discussed two options, making nodepool tenant aware and defining the > label restrictions in the zuul tenant config. I'm not sure if there was a > final decision (at least the protocol doesn't mention a decision). But > defining these restrictions in zuul seem easier to implement to me and > further adds the information about tenants in a single specific location > so you don't have to mess with tenant config and nodepool config to > allow/disallow labels for tenants. So I'm in favor of handling that use > case in the tenant config. > It seems worth investigating nodepool tenant awareness for more flexibility. For example, the 'tenant' setting in nodepool.yaml could be applied at the top-level (the whole process would only respond to a tenant request), or at the provider level (e.g. a provider would only serve a tenant) or at the label level (e.g. a label would only be available to a tenant). The use-case would be to support a tenant who provides its own cloud so that its resources consuption would not impact the capacity of the other tenants. We could simulate that with label restriction, but it seems less elegant to create new labels per providers. > Regarding the executors there was a discussion about region-support for > executors (which you might be able to exploit to isolate executor tenants > from each other). Check out the topic ' Zuul executor affinity' for this. > I'm interested in your use case about this as the zuul-executor > already has a two-barrier (ansible restrictions and bwrap) approach for > isolating jobs (and thus tenants) against each other. > The region-support for executors seems to address location isolation. 
I would like to be able to isolate executor between tenant to prevent one executor break out to affect other tenants. It would let us better dispatch the load too. > [1] https://etherpad.openstack.org/p/infra-rocky-ptg > I can't tell if those are notes or is this already part of the roadmap? Thanks, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mrhillsman at gmail.com Wed Mar 21 13:39:53 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 21 Mar 2018 08:39:53 -0500 Subject: [OpenStack-Infra] Zuul: Question about Github webhook In-Reply-To: <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> References: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> Message-ID: I also have went through the "ZFS" at least once and ran into quite a few issues I think and started documenting[0] them. Bernd not sure if you have run into additional issues but feel free to add to the etherpad; it is not organized at all right now but the plan was/is to report back a list of things and patch(es). [0] https://etherpad.openstack.org/p/zuul-quickstart-and-zfs(zuul-from-scratch) On Wed, Mar 21, 2018 at 3:26 AM, wrote: > Hi Bernd, > > On 21.03.18, 08:54, "Bernd Bausch" wrote: > > Sorry for this newbie intrusion on the distribution list. I am trying > to > follow the Zuul from Scratch instructions[1] to set up Zuul at home. > The > purpose is twofold: Learn about Zuul, and check whether Zuul from > Scratch > instructions are accurate and make sense. > > Currently I am at a roadblock. > > The problem is the Github webhook, through which Github sends data to > Zuul. > It should be > http:///connection/github/payload [2], but on the Zuul > server > that > I have deployed so far, nothing listens at port 80. Thus, Github gets > a "connection refused" when trying to contact my server. I used > tcpdump to > confirm that my server does indeed receive an http POST from Github. > > Either something is wrong with my setup, which prevents Zuul from > listening > at > 80. Or the webhook isn't documented correctly. > > How can I confirm what could be the problem? > > Bernd > > [1] https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html > [2] https://docs.openstack.org/infra/zuul/admin/drivers/github.html, > https://docs.openstack.org/infra/zuul/admin/zuul-from- > scratch.html#configure > -github > > > It looks like you discovered a bug in the zuul-from-scratch docs. Zuul-web > by default > listens on port 9000. While the documented webhook config in github > assumes port 80. > So you need to either change the webhook to use 9000 or change the zuul > config to use > port 80 [1]. > > Further I pushed up a change to fix the doc [2]. > > Sidenote: we now have zuul specific mailing lists [3] so you might want to > also subscribe > to them. > > [1] https://docs.openstack.org/infra/zuul/admin/components. > html#attr-web.port > [2] https://review.openstack.org/554829 > [3] http://lists.zuul-ci.org > > Kind regards > Tobias > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mrhillsman at gmail.com Thu Mar 22 01:10:46 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 21 Mar 2018 20:10:46 -0500 Subject: [OpenStack-Infra] Zuul: Question about Github webhook In-Reply-To: <015001d3c179$29e23310$7da69930$@gmail.com> References: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> <015001d3c179$29e23310$7da69930$@gmail.com> Message-ID: You can just submit a patch afaik Bernd On Wed, Mar 21, 2018 at 8:00 PM, Bernd Bausch wrote: > Melvin, Tobias, > > Thanks for your inputs. I added my thoughts to the Etherpad and subscribed > to the zuul-ci list. > As Tobias says, port 9000 for the webhook allows Github to send its > “ping”. Thus, I got over this hurdle. > > Note that the Github driver documentation [1] also fails to mention that > the webhook requires port 9000. Here is a process question: Am I allowed to > fix simple bugs like this one or do I have to first submit them to > Launchpad? > > Bernd. > > [1] https://docs.openstack.org/infra/zuul/admin/drivers/github.html > ---- > From: Melvin Hillsman [mailto:mrhillsman at gmail.com] > Sent: Wednesday, March 21, 2018 10:40 PM > To: Tobias.Henkel at bmw.de > Cc: berndbausch at gmail.com; zuul-discuss at lists.zuul-ci.org > Subject: Re: [OpenStack-Infra] Zuul: Question about Github webhook > > I also have went through the "ZFS" at least once and ran into quite a few > issues I think and started documenting[0] them. Bernd not sure if you have > run into additional issues but feel free to add to the etherpad; it is not > organized at all right now but the plan was/is to report back a list of > things and patch(es). > > [0] https://etherpad.openstack.org/p/zuul-quickstart-and-zfs( > zuul-from-scratch) > > On Wed, Mar 21, 2018 at 3:26 AM, wrote: > Hi Bernd, > > On 21.03.18, 08:54, "Bernd Bausch" wrote: > > Sorry for this newbie intrusion on the distribution list. I am trying > to > follow the Zuul from Scratch instructions[1] to set up Zuul at home. > The > purpose is twofold: Learn about Zuul, and check whether Zuul from > Scratch > instructions are accurate and make sense. > > Currently I am at a roadblock. > > The problem is the Github webhook, through which Github sends data to > Zuul. > It should be > http:///connection/github/payload [2], but on the Zuul > server > that > I have deployed so far, nothing listens at port 80. Thus, Github gets > a "connection refused" when trying to contact my server. I used > tcpdump to > confirm that my server does indeed receive an http POST from Github. > > Either something is wrong with my setup, which prevents Zuul from > listening > at > 80. Or the webhook isn't documented correctly. > > How can I confirm what could be the problem? > > Bernd > > [1] https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html > [2] https://docs.openstack.org/infra/zuul/admin/drivers/github.html, > https://docs.openstack.org/infra/zuul/admin/zuul-from- > scratch.html#configure > -github > > It looks like you discovered a bug in the zuul-from-scratch docs. Zuul-web > by default > listens on port 9000. While the documented webhook config in github > assumes port 80. > So you need to either change the webhook to use 9000 or change the zuul > config to use > port 80 [1]. > > Further I pushed up a change to fix the doc [2]. > > Sidenote: we now have zuul specific mailing lists [3] so you might want to > also subscribe > to them. > > [1] https://docs.openstack.org/infra/zuul/admin/components. 
> html#attr-web.port > [2] https://review.openstack.org/554829 > [3] http://lists.zuul-ci.org > > Kind regards > Tobias > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Thu Mar 22 01:11:12 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 21 Mar 2018 20:11:12 -0500 Subject: [OpenStack-Infra] Zuul: Question about Github webhook In-Reply-To: References: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> <015001d3c179$29e23310$7da69930$@gmail.com> Message-ID: Thanks for adding to the etherpad btw! On Wed, Mar 21, 2018 at 8:10 PM, Melvin Hillsman wrote: > You can just submit a patch afaik Bernd > > On Wed, Mar 21, 2018 at 8:00 PM, Bernd Bausch > wrote: > >> Melvin, Tobias, >> >> Thanks for your inputs. I added my thoughts to the Etherpad and >> subscribed to the zuul-ci list. >> As Tobias says, port 9000 for the webhook allows Github to send its >> “ping”. Thus, I got over this hurdle. >> >> Note that the Github driver documentation [1] also fails to mention that >> the webhook requires port 9000. Here is a process question: Am I allowed to >> fix simple bugs like this one or do I have to first submit them to >> Launchpad? >> >> Bernd. >> >> [1] https://docs.openstack.org/infra/zuul/admin/drivers/github.html >> ---- >> From: Melvin Hillsman [mailto:mrhillsman at gmail.com] >> Sent: Wednesday, March 21, 2018 10:40 PM >> To: Tobias.Henkel at bmw.de >> Cc: berndbausch at gmail.com; zuul-discuss at lists.zuul-ci.org >> Subject: Re: [OpenStack-Infra] Zuul: Question about Github webhook >> >> I also have went through the "ZFS" at least once and ran into quite a few >> issues I think and started documenting[0] them. Bernd not sure if you have >> run into additional issues but feel free to add to the etherpad; it is not >> organized at all right now but the plan was/is to report back a list of >> things and patch(es). >> >> [0] https://etherpad.openstack.org/p/zuul-quickstart-and-zfs(zuu >> l-from-scratch) >> >> On Wed, Mar 21, 2018 at 3:26 AM, wrote: >> Hi Bernd, >> >> On 21.03.18, 08:54, "Bernd Bausch" wrote: >> >> Sorry for this newbie intrusion on the distribution list. I am trying >> to >> follow the Zuul from Scratch instructions[1] to set up Zuul at home. >> The >> purpose is twofold: Learn about Zuul, and check whether Zuul from >> Scratch >> instructions are accurate and make sense. >> >> Currently I am at a roadblock. >> >> The problem is the Github webhook, through which Github sends data to >> Zuul. >> It should be >> http:///connection/github/payload [2], but on the >> Zuul server >> that >> I have deployed so far, nothing listens at port 80. Thus, Github gets >> a "connection refused" when trying to contact my server. I used >> tcpdump to >> confirm that my server does indeed receive an http POST from Github. >> >> Either something is wrong with my setup, which prevents Zuul from >> listening >> at >> 80. Or the webhook isn't documented correctly. >> >> How can I confirm what could be the problem? 
>> >> Bernd >> >> [1] https://docs.openstack.org/infra/zuul/admin/zuul-from-scratc >> h.html >> [2] https://docs.openstack.org/infra/zuul/admin/drivers/github.html, >> https://docs.openstack.org/infra/zuul/admin/zuul-from-scratc >> h.html#configure >> -github >> >> It looks like you discovered a bug in the zuul-from-scratch docs. >> Zuul-web by default >> listens on port 9000. While the documented webhook config in github >> assumes port 80. >> So you need to either change the webhook to use 9000 or change the zuul >> config to use >> port 80 [1]. >> >> Further I pushed up a change to fix the doc [2]. >> >> Sidenote: we now have zuul specific mailing lists [3] so you might want >> to also subscribe >> to them. >> >> [1] https://docs.openstack.org/infra/zuul/admin/components.html# >> attr-web.port >> [2] https://review.openstack.org/554829 >> [3] http://lists.zuul-ci.org >> >> Kind regards >> Tobias >> >> _______________________________________________ >> OpenStack-Infra mailing list >> OpenStack-Infra at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra >> >> >> >> >> -- >> Kind regards, >> >> Melvin Hillsman >> mrhillsman at gmail.com >> mobile: (832) 264-2646 >> > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Thu Mar 22 01:00:23 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 22 Mar 2018 10:00:23 +0900 Subject: [OpenStack-Infra] Zuul: Question about Github webhook In-Reply-To: References: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> Message-ID: <015001d3c179$29e23310$7da69930$@gmail.com> Melvin, Tobias, Thanks for your inputs. I added my thoughts to the Etherpad and subscribed to the zuul-ci list. As Tobias says, port 9000 for the webhook allows Github to send its “ping”. Thus, I got over this hurdle. Note that the Github driver documentation [1] also fails to mention that the webhook requires port 9000. Here is a process question: Am I allowed to fix simple bugs like this one or do I have to first submit them to Launchpad? Bernd. [1] https://docs.openstack.org/infra/zuul/admin/drivers/github.html ---- From: Melvin Hillsman [mailto:mrhillsman at gmail.com] Sent: Wednesday, March 21, 2018 10:40 PM To: Tobias.Henkel at bmw.de Cc: berndbausch at gmail.com; zuul-discuss at lists.zuul-ci.org Subject: Re: [OpenStack-Infra] Zuul: Question about Github webhook I also have went through the "ZFS" at least once and ran into quite a few issues I think and started documenting[0] them. Bernd not sure if you have run into additional issues but feel free to add to the etherpad; it is not organized at all right now but the plan was/is to report back a list of things and patch(es). [0] https://etherpad.openstack.org/p/zuul-quickstart-and-zfs(zuul-from-scratch) On Wed, Mar 21, 2018 at 3:26 AM, wrote: Hi Bernd, On 21.03.18, 08:54, "Bernd Bausch" wrote: Sorry for this newbie intrusion on the distribution list. I am trying to follow the Zuul from Scratch instructions[1] to set up Zuul at home. The purpose is twofold: Learn about Zuul, and check whether Zuul from Scratch instructions are accurate and make sense. Currently I am at a roadblock. The problem is the Github webhook, through which Github sends data to Zuul. 
It should be http:///connection/github/payload [2], but on the Zuul server that I have deployed so far, nothing listens at port 80. Thus, Github gets a "connection refused" when trying to contact my server. I used tcpdump to confirm that my server does indeed receive an http POST from Github. Either something is wrong with my setup, which prevents Zuul from listening at 80. Or the webhook isn't documented correctly. How can I confirm what could be the problem? Bernd [1] https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html [2] https://docs.openstack.org/infra/zuul/admin/drivers/github.html, https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html#configure -github It looks like you discovered a bug in the zuul-from-scratch docs. Zuul-web by default listens on port 9000. While the documented webhook config in github assumes port 80. So you need to either change the webhook to use 9000 or change the zuul config to use port 80 [1]. Further I pushed up a change to fix the doc [2]. Sidenote: we now have zuul specific mailing lists [3] so you might want to also subscribe to them. [1] https://docs.openstack.org/infra/zuul/admin/components.html#attr-web.port [2] https://review.openstack.org/554829 [3] http://lists.zuul-ci.org Kind regards Tobias _______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5518 bytes Desc: not available URL: From fungi at yuggoth.org Thu Mar 22 18:30:21 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 22 Mar 2018 18:30:21 +0000 Subject: [OpenStack-Infra] Zuul: Question about Github webhook In-Reply-To: <015001d3c179$29e23310$7da69930$@gmail.com> References: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> <015001d3c179$29e23310$7da69930$@gmail.com> Message-ID: <20180322183020.37aj2r3idgcgsqsp@yuggoth.org> On 2018-03-22 10:00:23 +0900 (+0900), Bernd Bausch wrote: [...] > Here is a process question: Am I allowed to fix simple bugs like > this one or do I have to first submit them to Launchpad? Feel free to just push a change for review so long as you can adequately explain in the commit message why it's needed (in this case that shouldn't be terribly hard). Also, see Zuul's README.rst or the footer at the bottom of https://zuul-ci.org/ for details on how to report bugs (definitely not Launchpad). Thanks! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Wed Mar 28 00:26:54 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 27 Mar 2018 17:26:54 -0700 Subject: Docs and git repos on zuul-ci.org Message-ID: <87zi2t59kh.fsf@meyer.lemoncheese.net> Hi, We now have all of the Zuul documentation hosted on the Zuul website[1], so folks may want to start referencing those URLs, rather than docs.openstack.org. We also have the Zuul git repos mirrored under the Zuul domain at [2]. That page shows only the Zuul-related git repos. Additionally, if you're referencing the zuul-jobs or zuul-base-jobs repos, you may want to switch your connection to use git.zuul-ci.org instead. 
Note that if you do that, the project name will be "git.zuul-ci.org/zuul-jobs" -- with no "openstack-infra" prefixed to the name, as there are no organization prefixes on repositories mirrored at git.zuul-ci.org. [1] https://zuul-ci.org/docs/ [2] http://git.zuul-ci.org/ -Jim From pabelanger at redhat.com Wed Mar 28 00:45:53 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 27 Mar 2018 20:45:53 -0400 Subject: Docs and git repos on zuul-ci.org In-Reply-To: <87zi2t59kh.fsf@meyer.lemoncheese.net> References: <87zi2t59kh.fsf@meyer.lemoncheese.net> Message-ID: <20180328004553.GA23613@localhost.localdomain> On Tue, Mar 27, 2018 at 05:26:54PM -0700, James E. Blair wrote: > Hi, > > We now have all of the Zuul documentation hosted on the Zuul website[1], > so folks may want to start referencing those URLs, rather than > docs.openstack.org. > > We also have the Zuul git repos mirrored under the Zuul domain at [2]. > That page shows only the Zuul-related git repos. Additionally, if > you're referencing the zuul-jobs or zuul-base-jobs repos, you may want > to switch your connection to use git.zuul-ci.org instead. Note that if > you do that, the project name will be "git.zuul-ci.org/zuul-jobs" -- > with no "openstack-infra" prefixed to the name, as there are no > organization prefixes on repositories mirrored at git.zuul-ci.org. > > [1] https://zuul-ci.org/docs/ > [2] http://git.zuul-ci.org/ > Yay, looks great. I am guessing [2] is a typo, and we should be recommending HTTPS protocol for git repos? > -Jim > > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From corvus at inaugust.com Wed Mar 28 14:10:00 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 28 Mar 2018 07:10:00 -0700 Subject: Docs and git repos on zuul-ci.org In-Reply-To: <20180328004553.GA23613@localhost.localdomain> (Paul Belanger's message of "Tue, 27 Mar 2018 20:45:53 -0400") References: <87zi2t59kh.fsf@meyer.lemoncheese.net> <20180328004553.GA23613@localhost.localdomain> Message-ID: <87sh8k47gn.fsf@meyer.lemoncheese.net> Paul Belanger writes: > On Tue, Mar 27, 2018 at 05:26:54PM -0700, James E. Blair wrote: >> Hi, >> >> We now have all of the Zuul documentation hosted on the Zuul website[1], >> so folks may want to start referencing those URLs, rather than >> docs.openstack.org. >> >> We also have the Zuul git repos mirrored under the Zuul domain at [2]. >> That page shows only the Zuul-related git repos. Additionally, if >> you're referencing the zuul-jobs or zuul-base-jobs repos, you may want >> to switch your connection to use git.zuul-ci.org instead. Note that if >> you do that, the project name will be "git.zuul-ci.org/zuul-jobs" -- >> with no "openstack-infra" prefixed to the name, as there are no >> organization prefixes on repositories mirrored at git.zuul-ci.org. >> >> [1] https://zuul-ci.org/docs/ >> [2] http://git.zuul-ci.org/ >> > Yay, looks great. I am guessing [2] is a typo, and we should be recommending > HTTPS protocol for git repos? Yep, sorry. 
Should be https://git.zuul-ci.org/ -Jim From tdecacqu at redhat.com Thu Mar 29 10:33:27 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 29 Mar 2018 10:33:27 +0000 Subject: Software Factory 3.0 released Message-ID: <1522319289.u1l2ey5pdq.tristanC@fedora> Greeting folks, We are pleased to announce the release of Software Factory version 3.0: http://www.softwarefactory-project.io/releases/3.0 It is a software development forge that integrates the last version of Zuul and Nodepool along with a few patches that couldn't make it into the upstream release: * Ansible-2.4 support: https://review.openstack.org/535839 * Dynamic tenant configuration: https://review.openstack.org/535878 * MQTT reporter: https://review.openstack.org/535543 * Job and Projects dashboard: https://review.openstack.org/550979 * Broken config protection: https://review.openstack.org/535511 * Zookeeper auto-retry: https://review.openstack.org/535537 * OpenContainer Nodepool provider: https://review.openstack.org/535556 * Kubernetes Nodepool provider: https://review.openstack.org/535557 Best regards, The Software Factory Team -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Thu Mar 29 16:44:24 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 29 Mar 2018 09:44:24 -0700 Subject: Next steps after 3.0.0 release Message-ID: <878taau907.fsf@meyer.lemoncheese.net> Hi, Now that 3.0.0 is released[1] (yay!), it's a good time to go over what we need to do next. The first order of business is to land the changes to add reno: https://review.openstack.org/536861 https://review.openstack.org/262597 You should read the docs that those changes introduce (they'll be in the developer docs for future reference), but the key bit is: When adding a noteworthy feature, fixing a noteworthy bug or introducing a behavior change that a user or operator should know about, it is a good idea to add a release note to the same patch. Every release from this point out will have release notes included in the documentation. We relaxed our review standards significantly during the feature branch phase, but we've been returning to normal for the past few months. At this point, the master branches of Nodepool and Zuul should always be kept working and ready for release. Gating usually handles the technical side of that, but don't lose track of the intent when it comes to other aspects. Please make sure that changes have documentation if they affect a user-visible aspect of the system, have a release note if the change is noteworthy, and have tests if they alter behavior. If a change is missing these things, please ask about it. These rules don't need to be too rigid, but if we flex them, we should be in the habit of discussing it and establishing that there's a good reason. We need to be careful about making changes to the zuul-jobs and zuul-base-jobs repos now. We'll write up a policy and include it in the docs for those repos, but the important thing is that we can't use new Zuul features in those repos until we've given folks a chance to upgrade (so probably some number of weeks after the feature appears in a release). And we can't remove features from Zuul until we've removed their use from zuul-jobs, and given folks who might be running a fork of zuul-jobs time to update. I expect us to release Zuul and Nodepool much more often now. 
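To make the release-note habit described above concrete, here is a minimal sketch of what a reno note looks like: a small YAML file added under releasenotes/notes/ in the same patch, normally created with "reno new <slug>". The filename and wording below are illustrative assumptions only, not taken from an actual Zuul change:

    # releasenotes/notes/example-feature-0123456789abcdef.yaml  -- hypothetical file name
    ---
    features:
      - |
        Describe the new user-visible behavior here, in language aimed at
        operators reading the release notes.
    upgrade:
      - |
        Describe any behavior change an operator should know about before
        upgrading.

Each top-level section (features, upgrade, fixes, deprecations, and so on) holds a list of notes, and reno collects them into the published release notes when a release is cut.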
Any time we make a significant change and establish that the software is working as intended, we should release. We've assigned some very specific meaning to version numbers recently, but I don't think that will continue. The versions we release will simply reflect whatever has landed when we release. Thanks, everyone, for your work on the release, and I look forward to reviewing all the patches that are waiting in the wings! -Jim [1] http://lists.zuul-ci.org/pipermail/zuul-announce/2018-March/000006.html From hashar at free.fr Fri Mar 30 08:02:53 2018 From: hashar at free.fr (Antoine Musso) Date: Fri, 30 Mar 2018 10:02:53 +0200 Subject: Software Factory 3.0 released In-Reply-To: <1522319289.u1l2ey5pdq.tristanC@fedora> References: <1522319289.u1l2ey5pdq.tristanC@fedora> Message-ID: On 29/03/2018 12:33, Tristan Cacqueray wrote: > Greeting folks, > > We are pleased to announce the release of Software Factory version 3.0: > http://www.softwarefactory-project.io/releases/3.0 > > It is a software development forge that integrates the last version of > Zuul and Nodepool along with a few patches that couldn't make it into > the upstream release: > > * Ansible-2.4 support: https://review.openstack.org/535839 > * Dynamic tenant configuration: https://review.openstack.org/535878 > * MQTT reporter: https://review.openstack.org/535543 > * Job and Projects dashboard: https://review.openstack.org/550979 > * Broken config protection: https://review.openstack.org/535511 > * Zookeeper auto-retry: https://review.openstack.org/535537 > * OpenContainer Nodepool provider: https://review.openstack.org/535556 > * Kubernetes Nodepool provider: https://review.openstack.org/535557 Hello, It seems those patches were proposed to the v3 feature branch, which eventually got mass-abandoned when the v3 branch was merged into master. Is there any plan to get them reviewed/merged and added in the next Zuul/Nodepool versions? I am especially interested in the Kubernetes Nodepool provider. Wikimedia is planning to migrate its CI stack to Kubernetes and surely we will have time/budget/brains to work on enhancing and maintaining it. Though for now we are still on Zuul v2 and an obsolete Nodepool version.
cheers, -- Antoine "hashar" Musso From tdecacqu at redhat.com Fri Mar 30 09:13:03 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 30 Mar 2018 09:13:03 +0000 Subject: Software Factory 3.0 released In-Reply-To: References: <1522319289.u1l2ey5pdq.tristanC@fedora> Message-ID: <1522397895.fu80ju1srv.tristanC@fedora> On March 30, 2018 8:02 am, Antoine Musso wrote: > On 29/03/2018 12:33, Tristan Cacqueray wrote: >> Greeting folks, >> >> We are pleased to announce the release of Software Factory version 3.0: >> http://www.softwarefactory-project.io/releases/3.0 >> >> It is a software development forge that integrates the last version of >> Zuul and Nodepool along with a few patches that couldn't make it into >> the upstream release: >> >> * Ansible-2.4 support: https://review.openstack.org/535839 >> * Dynamic tenant configuration: https://review.openstack.org/535878 >> * MQTT reporter: https://review.openstack.org/535543 >> * Job and Projects dashboard: https://review.openstack.org/550979 >> * Broken config protection: https://review.openstack.org/535511 >> * Zookeeper auto-retry: https://review.openstack.org/535537 >> * OpenContainer Nodepool provider: https://review.openstack.org/535556 >> * Kubernetes Nodepool provider: https://review.openstack.org/535557 > Hello, > It seems those patches were proposed to the v3 feature branch which > eventually got mass abandoned on the v3 branch got merged into master. > Is there any plan to get them reviewed/merged and added in the next > Zuul/Nodepool versions? > Hi Antoine, It's unclear what the scope of the next versions will be. We integrated the above patches into the Software Factory packages because they improve the user experience without changing the service interfaces. They are up for review on the master branch, and we are available to update them if needed. I think the most important one is the broken config protection, as it prevents a single commit from breaking every tenant. > I am specially interested in the Kubernetes Nodepool provider. Wikimedia > is planning to migrate its CI stack to Kubernetes and surely we will > have time/budget/brains to work on enhancing and maintaining it. Though > for now we are still on Zuul v2 and an obsolete Nodepool version. > The proposed Kubernetes driver relies on a containerized openssh service so that zuul-jobs can work seamlessly on OpenStack or Kubernetes resources. Note that it's not a truly native Kubernetes driver: for example, it doesn't create speculative container images or use any Kubernetes-specific workflow. This driver just lets you run a job in a container instead of a fully fledged instance. Perhaps the driver should be renamed k8s-ssh if other #zuul users want to leverage Nodepool to truly test container-based projects. Anyway, that deserves a thread of its own :-) Cheers, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Fri Mar 30 19:31:41 2018 From: corvus at inaugust.com (James E.
Blair) Date: Fri, 30 Mar 2018 12:31:41 -0700 Subject: Software Factory 3.0 released In-Reply-To: <1522397895.fu80ju1srv.tristanC@fedora> (Tristan Cacqueray's message of "Fri, 30 Mar 2018 09:13:03 +0000") References: <1522319289.u1l2ey5pdq.tristanC@fedora> <1522397895.fu80ju1srv.tristanC@fedora> Message-ID: <87vaddpdgi.fsf@meyer.lemoncheese.net> Tristan Cacqueray writes: > On March 30, 2018 8:02 am, Antoine Musso wrote: >> On 29/03/2018 12:33, Tristan Cacqueray wrote: >>> Greeting folks, >>> >>> We are pleased to announce the release of Software Factory version 3.0: >>> http://www.softwarefactory-project.io/releases/3.0 >>> >>> It is a software development forge that integrates the last version of >>> Zuul and Nodepool along with a few patches that couldn't make it into >>> the upstream release: >>> >>> * Ansible-2.4 support: https://review.openstack.org/535839 >>> * Dynamic tenant configuration: https://review.openstack.org/535878 >>> * MQTT reporter: https://review.openstack.org/535543 >>> * Job and Projects dashboard: https://review.openstack.org/550979 >>> * Broken config protection: https://review.openstack.org/535511 >>> * Zookeeper auto-retry: https://review.openstack.org/535537 >>> * OpenContainer Nodepool provider: https://review.openstack.org/535556 >>> * Kubernetes Nodepool provider: https://review.openstack.org/535557 >> >> Hello, >> >> It seems those patches were proposed to the v3 feature branch which >> eventually got mass abandoned on the v3 branch got merged into master. >> >> Is there any plan to get them reviewed/merged and added in the next >> Zuul/Nodepool versions? >> > Hi Antoine, > > It's unclear what is the scope of the next versions. We integrated the > above patches in Software Factory packages as they improve the user > experience without changing the services interface. They are up for > review on the master branch and we are available to update them if needed. > > I think the most important one is the broken config protection as it > prevents a single commit to break every tenants. > >> I am specially interested in the Kubernetes Nodepool provider. Wikimedia >> is planning to migrate its CI stack to Kubernetes and surely we will >> have time/budget/brains to work on enhancing and maintaining it. Though >> for now we are still on Zuul v2 and an obsolete Nodepool version. >> > The proposed Kubernetes driver relies on a containerized openssh service > so that zuul-jobs can work seemlessly on OpenStack or Kubernetes > resources. Note that it's not a true native Kubernetes driver, for > example it doesn't create speculative container images nor use specific > Kubernetes workflow. This driver just let you run job in a container instead > of a fully fledge instance. > > Perhaps the driver should be renamed k8s-ssh if other #zuul users want > to leverage Nodepool to truely test container based projects. Anyway > that deserves a thread on its own :-) There's been a lot of research and discussion around this -- it's a complex topic. Now that v3 is released, I plan on writing up the notes we have and laying out a plan for us to discuss. Hopefully I'll be able to do that next week. -Jim
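To illustrate the approach Tristan describes above -- a pod exposing a containerized OpenSSH service that the executor can reach like any other node -- here is a rough sketch of the kind of Kubernetes resource involved. This is purely illustrative: the names, labels, and image are placeholder assumptions, and the actual proposed driver creates and manages such resources itself rather than relying on a hand-written manifest:

    # Hypothetical illustration only; not taken from the proposed driver.
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-build-pod          # placeholder name
      labels:
        app: nodepool-k8s-ssh          # placeholder label
    spec:
      containers:
        - name: sshd
          # Placeholder image: anything that runs sshd and provides a build
          # user whose authorized_keys includes the Zuul executor's key.
          image: example.org/openssh-build:latest
          ports:
            - containerPort: 22

Once such a pod is reachable over SSH, the existing zuul-jobs roles can run against it unchanged, which is what lets the same jobs work on OpenStack instances and Kubernetes resources.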