On Tue, Jul 06, 2021 at 03:33:02PM +1000, Ian Wienand wrote:
> Hopefully some other eyes might see what I'm missing!
Indeed clarkb's did. There was perhaps a confluence of errors here, and I'm not sure what happened before the restart.

The important line in the logs was:

  Change <Branch 0x7fa023409a60 openstack/requirements refs/heads/master
  updated None..None> is already in pipeline, ignoring

Examining the periodic queue showed there were indeed already-running, but stuck, jobs there. There were at least two -- the first was for ref 0000.0000, which appears to have come in when the queues were restored after the restart. The second was a buildset for refs/heads/master triggered by the periodic timer, which was stuck queued waiting for an ARM64 node. We have had some problems with nodepool communicating with one of the ARM64 clouds during this time, but I'm not sure exactly what made the job get stuck as "queued" and never time out.

Running

  zuul dequeue --tenant openstack --pipeline periodic \
    --project openstack/requirements --ref refs/heads/master

first removed the item with the 0000.0000 ref (I'm not sure why); running it a second time removed the buildset started by the periodic timer some days ago.

So the mystery is largely solved, although I think it shows room for improvement in node allocation error/timeout handling and our re-enqueue methods.

-i
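
P.S. For anyone curious about the "is already in pipeline, ignoring" message: as I understand it, it amounts to a de-duplication check when trigger events arrive, so new periodic triggers for refs/heads/master were dropped while the stuck buildset was still live. Below is a minimal Python sketch of that idea only -- it is not Zuul's actual code, and the should_enqueue helper and dict-shaped items are made up for illustration:

  # Sketch only, not Zuul's implementation: drop a trigger event if an
  # item for the same project/ref is already live in the pipeline.
  def should_enqueue(pipeline_items, project, ref):
      for item in pipeline_items:
          if item["project"] == project and item["ref"] == ref:
              return False  # "is already in pipeline, ignoring"
      return True

  # The stuck buildset kept an item for refs/heads/master live, so later
  # periodic triggers for the same ref were ignored:
  items = [{"project": "openstack/requirements", "ref": "refs/heads/master"}]
  print(should_enqueue(items, "openstack/requirements", "refs/heads/master"))  # False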