Greetings,

We (the Ansible team) are running into an issue using the Zuul GitHub driver; please see the following pastebin:

http://paste.openstack.org/show/752756/

As you can see, we don't seem to be getting reliable network access to api.github.com. I am currently trying to figure out if this is a network issue with our control plane provider (vexxhost) or something on the GitHub side (e.g. a bad load balancer). The result, however, is that Zuul misses events from GitHub and doesn't properly enqueue PRs into the pipelines.

I understand there is only so much we might be able to do about a network issue, but I'd be interested to hear if there are ideas for how we could tolerate these failures better. I also wonder whether we should be reporting these issues back to the user's PR.

My main goal is to minimize the number of humans who _think_ Zuul is to blame for tests not being run on their PRs, when in fact it is something else.

- Paul
Paul Belanger <pabelanger@redhat.com> writes:
> Greetings,
> We (the Ansible team) are running into an issue using the Zuul GitHub driver; please see the following pastebin:
> http://paste.openstack.org/show/752756/
> As you can see, we don't seem to be getting reliable network access to api.github.com. I am currently trying to figure out if this is a network issue with our control plane provider (vexxhost) or something on the GitHub side (e.g. a bad load balancer). The result, however, is that Zuul misses events from GitHub and doesn't properly enqueue PRs into the pipelines.
> I understand there is only so much we might be able to do about a network issue, but I'd be interested to hear if there are ideas for how we could tolerate these failures better. I also wonder whether we should be reporting these issues back to the user's PR.
> My main goal is to minimize the number of humans who _think_ Zuul is to blame for tests not being run on their PRs, when in fact it is something else.
That is a 10 second read timeout, which means that either GH did not begin sending a response within 10 seconds after the request was sent, or that there was more than a 10 second gap between receiving packets.

Aside from continued network diagnostics, there are two things we could do in Zuul itself:

1) Increase the read timeout (if we think that, for some reason, waiting longer would be productive).

2) Retry requests.

But I'd start with some packet captures to see if there's any further information to be found there. That might suggest whether either of these options would be helpful or a hindrance (e.g., increasing a timeout for something that will never return).

-Jim
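For anyone less familiar with these timeouts, the sketch below shows the distinction using plain requests rather than Zuul's actual GitHub driver code; the endpoint and the numbers are only illustrative. requests takes a (connect, read) timeout tuple, and the read timeout is the limit the traceback above ran into.

    # Illustrative sketch with plain requests, not Zuul's code: the timeout
    # argument is a (connect timeout, read timeout) tuple.  The read timeout
    # bounds how long we wait for the response to start, or between chunks of
    # it, which is the 10 second limit seen in the pastebin traceback.
    import requests

    try:
        resp = requests.get(
            "https://api.github.com/rate_limit",  # endpoint chosen only for illustration
            timeout=(4, 10),  # 4s to establish the connection, 10s read timeout
        )
        resp.raise_for_status()
    except requests.exceptions.ConnectTimeout:
        print("could not establish a connection within 4 seconds")
    except requests.exceptions.ReadTimeout:
        print("connected, but GitHub stalled for more than 10 seconds")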
On Tue, Jun 11, 2019 at 09:37:55AM -0700, James E. Blair wrote:
> Paul Belanger <pabelanger@redhat.com> writes:
> > Greetings,
> > We (the Ansible team) are running into an issue using the Zuul GitHub driver; please see the following pastebin:
> > http://paste.openstack.org/show/752756/
> > As you can see, we don't seem to be getting reliable network access to api.github.com. I am currently trying to figure out if this is a network issue with our control plane provider (vexxhost) or something on the GitHub side (e.g. a bad load balancer). The result, however, is that Zuul misses events from GitHub and doesn't properly enqueue PRs into the pipelines.
> > I understand there is only so much we might be able to do about a network issue, but I'd be interested to hear if there are ideas for how we could tolerate these failures better. I also wonder whether we should be reporting these issues back to the user's PR.
> > My main goal is to minimize the number of humans who _think_ Zuul is to blame for tests not being run on their PRs, when in fact it is something else.
> That is a 10 second read timeout, which means that either GH did not begin sending a response within 10 seconds after the request was sent, or that there was more than a 10 second gap between receiving packets.
> Aside from continued network diagnostics, there are two things we could do in Zuul itself:
> 1) Increase the read timeout (if we think that, for some reason, waiting longer would be productive).
Based on discussion in #zuul, I've created: https://review.opendev.org/664667/
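For context, and only as a sketch rather than the code in that review: the 10 second default comes from github3.py's session, and raising it would look roughly like the snippet below. This assumes github3.py's GitHubSession keeps its timeouts in default_connect_timeout / default_read_timeout attributes, so please verify the names against the github3.py release in use.

    # Hedged sketch only; not the change in https://review.opendev.org/664667/.
    # Assumes github3.py's GitHubSession stores its timeouts in the
    # default_connect_timeout / default_read_timeout attributes (the 10s read
    # default is what the traceback hit); check your github3.py version.
    import github3

    gh = github3.login(token="<api token>")   # illustrative token auth; Zuul
                                              # itself authenticates differently
    gh.session.default_connect_timeout = 4    # keep the stock connect timeout
    gh.session.default_read_timeout = 30      # raise the read timeout from 10s
    print(gh.rate_limit())                    # later calls pick up the new values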
> 2) Retry requests.
> But I'd start with some packet captures to see if there's any further information to be found there. That might suggest whether either of these options would be helpful or a hindrance (e.g., increasing a timeout for something that will never return).
Good idea, I'll see about setting up tcpdump for a few hours.
> -Jim
On Tue, Jun 11, 2019 at 05:19:28PM -0400, Paul Belanger wrote:
> On Tue, Jun 11, 2019 at 09:37:55AM -0700, James E. Blair wrote:
> > Paul Belanger <pabelanger@redhat.com> writes:
> > > Greetings,
> > > We (the Ansible team) are running into an issue using the Zuul GitHub driver; please see the following pastebin:
> > > http://paste.openstack.org/show/752756/
> > > As you can see, we don't seem to be getting reliable network access to api.github.com. I am currently trying to figure out if this is a network issue with our control plane provider (vexxhost) or something on the GitHub side (e.g. a bad load balancer). The result, however, is that Zuul misses events from GitHub and doesn't properly enqueue PRs into the pipelines.
> > > I understand there is only so much we might be able to do about a network issue, but I'd be interested to hear if there are ideas for how we could tolerate these failures better. I also wonder whether we should be reporting these issues back to the user's PR.
> > > My main goal is to minimize the number of humans who _think_ Zuul is to blame for tests not being run on their PRs, when in fact it is something else.
> > That is a 10 second read timeout, which means that either GH did not begin sending a response within 10 seconds after the request was sent, or that there was more than a 10 second gap between receiving packets.
> > Aside from continued network diagnostics, there are two things we could do in Zuul itself:
> > 1) Increase the read timeout (if we think that, for some reason, waiting longer would be productive).
> Based on discussion in #zuul, I've created:
> https://review.opendev.org/664667/
> > 2) Retry requests.
> > But I'd start with some packet captures to see if there's any further information to be found there. That might suggest whether either of these options would be helpful or a hindrance (e.g., increasing a timeout for something that will never return).
> Good idea, I'll see about setting up tcpdump for a few hours.
As a follow-up, we applied 664667 last night without issue, and so far we are no longer seeing the exceptions from the original pastebin above. What we are now seeing is an increase in 502 Server Error exceptions raised by github3.py, which led me to create the following retry-requests patch:

https://review.opendev.org/664843/

So, going back to my original comments, it is now more clear that this is an issue with GitHub rather than a network problem.

- Paul
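In case the general shape is useful to anyone following along, retrying transient 5xx responses at the HTTP layer looks roughly like the sketch below. This is plain requests/urllib3, not the actual patch (see https://review.opendev.org/664843/ for that), and the endpoint and retry numbers are only illustrative.

    # Sketch of the general approach, not the patch above: mount an HTTPAdapter
    # carrying a urllib3 Retry policy so transient 502/503/504 responses are
    # retried with exponential backoff instead of failing the request outright.
    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    retry = Retry(
        total=3,                           # at most 3 retries per request
        backoff_factor=1,                  # exponential backoff between attempts
        status_forcelist=(502, 503, 504),  # the status codes we are now seeing
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))

    resp = session.get("https://api.github.com/rate_limit", timeout=(4, 30))
    resp.raise_for_status()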