Debugging non-starting periodic jobs; probably since 4.6.0
Hi,

I've been trying to debug why release-wheel-cache
(https://zuul.opendev.org/t/openstack/builds?job_name=release-wheel-cache)
doesn't seem to have run since 2021-06-27. I don't believe we've made any
changes to these jobs, but I think Zuul has deployed quite a bit of change.

The build-wheel-cache jobs are in openstack-zuul-jobs [1]. We then have
publish-wheel-cache jobs [2] that are children of these jobs and essentially
wrap the build jobs with "get afs token, write wheels to afs" steps. These
jobs are in the periodic pipeline of the requirements project via
project-config [3]. Finally, release-wheel-cache (also [3]) runs after all
the publish-wheel-cache-* jobs to "vos release" the various wheel volumes.

Note it seems none of the individual jobs (publish-wheel-cache-*) has run
since 2021-06-27 either, but I've just concentrated on release-wheel-cache
as it depends on all the others.

I believe the last time it ran has been captured in [4], starting at

  2021-06-27 06:00:35,268 DEBUG zuul.Scheduler: [e:963101cc7c01460abfd34664e5610e18] Forwarding trigger event <TimerTriggerEvent 0x7fe33db4fa00 timer openstack/requirements refs/heads/master>

This is the last result we see in the build log.

The next time it tries is [5], at

  2021-06-28 06:00:17,437 DEBUG zuul.Scheduler: [e:afdb7039494649c09e8bb2b64158a385] Forwarding trigger event <TimerTriggerEvent 0x7fe3424015e0 timer openstack/requirements refs/heads/master>

I think the interesting thing here is that this seems to go into a tight loop of

  2021-06-28 06:08:34,701 DEBUG zuul.Pipeline.openstack.periodic: [e: afdb7039494649c09e8bb2b64158a385] Checking for changes needed by <Branch 0x7fe343803e50 openstack/requirements refs/heads/master updated None..None>:
  2021-06-28 06:08:34,701 DEBUG zuul.Pipeline.openstack.periodic: [e: afdb7039494649c09e8bb2b64158a385] <class 'zuul.model.Branch'> does not support dependencies
  ...
  2021-07-02 14:20:30,699 DEBUG zuul.Pipeline.openstack.periodic: [e: afdb7039494649c09e8bb2b64158a385] Checking for changes needed by <Branch 0x7fe343803e50 openstack/requirements refs/heads/master updated None..None>:
  2021-07-02 14:20:30,700 DEBUG zuul.Pipeline.openstack.periodic: [e: afdb7039494649c09e8bb2b64158a385] <class 'zuul.model.Branch'> does not support dependencies

every few seconds for the next few days; I think until Zuul was restarted at
2021-07-02 14:22:33 according to [6] (the log stops at 14:20).

The latest run is then triggered at [7]

  2021-07-05 06:00:13,771 DEBUG zuul.Scheduler: [e: 5fe4dd08d1a44700bf92fd475565f687] Forwarding trigger event <TimerTriggerEvent 0x7fa02c8d62e0 timer openstack/requirements refs/heads/master>

but nothing seems to run.

Further complicating things, I know the requirements project had some config
errors from the recent security fixes until about 2021-06-28 [8].

So far I'm at a bit of a loss to correlate all this into a coherent story.
Hopefully some other eyes might see what I'm missing!

Thanks,

-i

[1] https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/j...
[2] https://opendev.org/openstack/project-config/src/branch/master/zuul.d/jobs.y...
[3] https://opendev.org/openstack/project-config/src/branch/master/zuul.d/projec...
[4] http://people.redhat.com/~iwienand/zuul-periodic-27-06-2021/963101cc7c01460a...
[5] http://people.redhat.com/~iwienand/zuul-periodic-27-06-2021/afdb7039494649c0...
[6] https://wiki.openstack.org/wiki/Infrastructure_Status
[7] http://people.redhat.com/~iwienand/zuul-periodic-27-06-2021/5fe4dd08d1a44700...
[8] https://review.opendev.org/c/openstack/requirements/+/798361
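
For anyone less familiar with how these pieces fit together, the arrangement
described above looks roughly like the sketch below. This is a simplified,
hypothetical rendering of the config in [1]/[2]/[3]: the platform-specific
job names are illustrative, and the real files carry more variants and
attributes.

  # Illustrative only: the real job names and platforms live in [1], [2] and [3]
  - job:
      name: publish-wheel-cache-ubuntu-focal      # hypothetical name
      parent: build-wheel-cache-ubuntu-focal      # wraps the build job with the AFS token/write steps

  - project:
      name: openstack/requirements
      periodic:
        jobs:
          - publish-wheel-cache-ubuntu-focal
          - publish-wheel-cache-ubuntu-focal-arm64   # hypothetical ARM64 variant
          - release-wheel-cache:
              dependencies:                          # only runs once the publish jobs finish
                - publish-wheel-cache-ubuntu-focal
                - publish-wheel-cache-ubuntu-focal-arm64

The periodic pipeline itself is driven by Zuul's timer trigger, which is why
each attempt above starts with a TimerTriggerEvent for refs/heads/master at
06:00: there is no change under test, just the branch ref being enqueued.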
On Tue, Jul 06, 2021 at 03:33:02PM +1000, Ian Wienand wrote:
> Hopefully some other eyes might see what I'm missing!
Indeed clarkb's did. There was perhaps a confluence of errors here; I'm not
sure what happened before the restart.

The important line of the logs was

  Change <Branch 0x7fa023409a60 openstack/requirements refs/heads/master updated None..None> is already in pipeline, ignoring

Examining the periodic queue showed that there were indeed already enqueued,
but stuck, jobs there. There were at least two -- the first was for ref
0000.0000, which appears to have come in when the queues were restored after
the restart. The second was indeed a buildset for refs/heads/master,
triggered by the periodic timer, that was stuck queued waiting for an ARM64
node. We have had some problems with nodepool communicating with one of the
ARM64 clouds during this time, but I'm not sure exactly what made the job get
stuck as "queued" and never time out.

Running

  zuul dequeue --tenant openstack --pipeline periodic --project openstack/requirements --ref refs/heads/master

first removed the one with the 0000.0000 ref (not sure why); when run a
second time it removed the buildset started by the periodic timer some days
ago.

So the mystery is largely solved, although I think it shows room for
improvement in node allocation error/timeout handling and in our re-enqueue
methods.

-i
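
As a footnote to the re-enqueue methods mentioned above: rather than waiting
for the next 06:00 timer firing, the buildset can be re-enqueued by hand on
the scheduler. The commands below are a sketch -- the dequeue invocation is
what was actually run (twice, once per stuck item), while the enqueue-ref
invocation is from memory, so check the client documentation for your Zuul
version.

  # Repeat until the stuck items are gone (each run removed one item here)
  zuul dequeue --tenant openstack --pipeline periodic \
      --project openstack/requirements --ref refs/heads/master

  # Hypothetical manual re-enqueue of the periodic buildset for the branch
  zuul enqueue-ref --tenant openstack --pipeline periodic \
      --project openstack/requirements --ref refs/heads/master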