Artem Goncharov <artem.goncharov@gmail.com> writes:
Hi,
A few days ago I tried updating to 4.3.0 and faced the same problems again. This time I saved a better trace (Zuul receives the webhook, but doesn't actually start anything):
2021-05-21 20:51:54,894 DEBUG zuul.Scheduler: Run handler awake
2021-05-21 20:51:54,907 ERROR zuul.Scheduler: Exception in run handler:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/zuul/scheduler.py", line 1190, in run
    self.process_global_trigger_queue()
  File "/usr/local/lib/python3.8/site-packages/zuul/scheduler.py", line 1233, in process_global_trigger_queue
    for event in self.trigger_events:
  File "/usr/local/lib/python3.8/site-packages/zuul/zk/event_queues.py", line 507, in __iter__
    event = event_class.fromDict(event_data)
  File "/usr/local/lib/python3.8/site-packages/zuul/model.py", line 3523, in fromDict
    event.updateFromDict(data)
  File "/usr/local/lib/python3.8/site-packages/zuul/driver/github/githubmodel.py", line 94, in updateFromDict
    super().updateFromDict(d)
  File "/usr/local/lib/python3.8/site-packages/zuul/model.py", line 4000, in updateFromDict
    self.connection_name = d["connection_name"]
KeyError: 'connection_name'
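For illustration, what seems to be happening (a minimal sketch with hypothetical functions, not Zuul's real code) is that an event serialized in an older format lacks a key the newer deserializer requires:

    # Minimal sketch, hypothetical functions only: an event written
    # before "connection_name" existed breaks a newer reader that
    # assumes the key is always present.
    import json

    def old_serialize(event_type):
        # Older format: no "connection_name" key.
        return json.dumps({"type": event_type})

    def new_deserialize(data):
        d = json.loads(data)
        return d["connection_name"]  # raises KeyError on old-format data

    stale = old_serialize("push")  # imagine this was left in ZK pre-upgrade
    try:
        new_deserialize(stale)
    except KeyError as e:
        print("stale event rejected:", e)  # KeyError: 'connection_name'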
After digging into the code and thinking about what could have gone wrong, I decided to try cleaning ZK completely. That helped (though sadly it means a complete reset, losing track of all built images). So I assume something went wrong in ZK during one of the migrations.
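For the record, the reset was roughly along these lines (a sketch using kazoo, the client library Zuul itself uses; the "/zuul" root path and the host list are assumptions for illustration, and this wipes ALL Zuul state in ZK):

    # Sketch of a full ZK state reset via kazoo. The "/zuul" root path
    # and the hosts are assumptions -- check your deployment's
    # zookeeper configuration first. This deletes ALL Zuul state.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk01:2181,zk02:2181,zk03:2181")
    zk.start()
    if zk.exists("/zuul"):
        zk.delete("/zuul", recursive=True)  # remove the whole subtree
    zk.stop()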
I feel that, now that ZK is taking on more and more responsibilities, some more docs on how to deal with it should be added for those out there using Zuul. Resetting the ZK state should be a no-brainer, since corrupting or losing the ZK cluster is not an impossible thing.
Is it possible that the zuul-web and zuul-scheduler versions were not in sync at one point? For most of the ZK work, we're expecting the entire cluster to be updated simultaneously (after all, we don't support HA schedulers yet). If one component wrote data in an older format, that could happen. Or of course if there was a bug that we fixed later.

I agree, in cases like that we should at least have the option to reset everything. Did you document the procedure you used?

-Jim
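P.S. For what it's worth, one way a reader could tolerate that kind of version skew (a sketch only, not what Zuul actually does today) would be to treat such fields as optional during deserialization:

    # Sketch only -- not Zuul's actual approach. An event serialized by
    # an older component simply lacks newer keys; .get() with a default
    # lets the reader skip or repair stale entries instead of crashing.
    class TriggerEvent:
        def updateFromDict(self, d):
            self.connection_name = d.get("connection_name")  # None if absent

    event = TriggerEvent()
    event.updateFromDict({})  # old-format payload without the key
    if event.connection_name is None:
        print("dropping stale event from before the upgrade")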