Brendan Jackman <brendan.jackman@bluwirelesstechnology.com> writes:

One unfortunate hack in my use case is that we currently lock and release test resources which are shared with another system. This is handled by an Ansible role that sends API calls to that other system, which means that summarily killing a running job can leave a resource locked. To get around this I've had to disable pipeline.dequeue-on-new-patchset, which is unfortunate -- it's probably causing a lot of extra resources to be consumed.
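For reference, this is the pipeline attribute in question; a minimal sketch, assuming a Gerrit-triggered independent check pipeline, with the pipeline name and trigger purely illustrative:

  - pipeline:
      name: check
      manager: independent
      # Keep builds running when a new patchset arrives, so the unlock
      # steps at the end of the job still get a chance to run.
      dequeue-on-new-patchset: false
      trigger:
        gerrit:
          - event: patchset-created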
I'd like to add a stage where playbooks can be configured that still get run even if the job is stopped. My guess is that this would also be useful to other people who are using the static Nodepool driver and would like to unconditionally clean up files left behind by jobs - does that sound correct?

On 10/12/2018 22:49, James E. Blair wrote:

It generally seems that there would be value in having an unconditional cleanup stage, anyway. We've also discussed having a cleanup *job* which runs after all other jobs are complete, though I think that adding support for pausing jobs reduced the need for that somewhat (since a job can run, pause, and continue after all other jobs regardless of whether they succeed). I don't think that directly addresses your use case, because it can still be aborted.
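To make the proposal concrete, here is a sketch of how such a stage might sit alongside the existing job playbook attributes. The cleanup-run attribute below is hypothetical at this point; pre-run, run and post-run are existing Zuul job attributes, and the job name and playbook paths are illustrative:

  - job:
      name: hw-test
      pre-run: playbooks/lock-resources.yaml
      run: playbooks/test.yaml
      post-run: playbooks/upload-logs.yaml
      # Hypothetical new stage: runs even when the build is aborted
      cleanup-run: playbooks/unlock-resources.yaml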
One downside of this approach is that Zuul can provide no guarantees that the playbook will run, only a best effort. If the Zuul executor is killed, there's nothing we can do about it.

Because of that, the most reliable way to deal with resource locks is a lock that doesn't require explicit unlocking. The way that Zuul locks nodes in Nodepool accounts for this (if Zuul is killed, the locks are automatically released). Is it possible for you to encode the resources you need as static nodes in Nodepool and add them to your nodeset requests? Then you would get the benefit of Nodepool's locking. If not, we have discussed adding support for more resource types to Nodepool; perhaps that would help.
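For concreteness, a rough sketch of what that could look like with the static driver; the provider name, label, hostname and nodeset are all illustrative:

  # nodepool.yaml (sketch): one shared hardware unit as a static node
  labels:
    - name: specialised-hw

  providers:
    - name: static-hw
      driver: static
      pools:
        - name: main
          nodes:
            - name: hw-unit-01.example.com
              labels:
                - specialised-hw
              username: zuul

  # Zuul side (sketch): a nodeset and a job requesting that label, so the
  # node -- and hence the resource -- is locked for the lifetime of the build
  - nodeset:
      name: hw-test-node
      nodes:
        - name: dut
          label: specialised-hw

  - job:
      name: hw-test
      nodeset: hw-test-node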
Brendan Jackman <brendan.jackman@bluwirelesstechnology.com> writes:

Yeah, to be honest I'd resigned myself to the fact that if Zuul components get restarted I'm going to have to manually check for unreleased locks, and I put them on a 12 hour timeout for that reason too. (Luckily there's decent visibility on the locks.)

Yeah, it may be worth investigating a smarter locking scheme. For context, the resources are pieces of specialised HW, and they are shared with a Jenkins system where they are arbitrated by the Lockable Resources Plugin. Support for sharing was added by creating a Jenkins job that, when run, locks the requested resource and then watches a given file until it is deleted, whereupon it releases the lock. So the Zuul job acquires the lock by triggering the Jenkins job, then releases the resource by deleting its own workspace (Jenkins can see the Zuul workspace over NFS). Perhaps a smarter scheme would be to have Jenkins connect to a TCP endpoint to report that the lock is acquired, then release it as soon as it can't communicate with the endpoint any more.
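To spell out the acquire side of that dance, a rough pre-run playbook sketch, under the assumption that the Jenkins lock-holder job is parameterised with the resource name and the path of the file it should watch; the Jenkins URL, job name, parameter names, credentials and marker path are all illustrative (zuul.executor.work_root is Zuul's per-build work directory variable):

  - hosts: localhost
    vars:
      # Illustrative marker file; in practice this needs to be somewhere
      # the Jenkins side can see over NFS.
      watch_file: "{{ zuul.executor.work_root }}/lock-marker"
    tasks:
      - name: Create the marker file that Jenkins will watch
        file:
          path: "{{ watch_file }}"
          state: touch

      - name: Ask Jenkins to take the lock and watch the marker file
        uri:
          url: "https://jenkins.example.com/job/lock-shared-hw/buildWithParameters?RESOURCE=hw-unit-01&WATCH_FILE={{ watch_file | urlencode }}"
          method: POST
          user: "{{ jenkins_user }}"
          password: "{{ jenkins_api_token }}"
          force_basic_auth: true
          status_code: 201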
I still think that an unconditional cleanup phase would be useful though, even if best-effort, because with dequeue-on-new-patchset enabled we are going to end up with a lot of abandoned job workspaces to garbage-collect (with the best-effort cleanup phase I would expect that to be a small enough problem that our infra will change before we need to worry about it!).
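The release side is also the garbage collection: a best-effort cleanup playbook could simply delete the NFS-visible workspace, which both lets the Jenkins watcher drop the lock and reclaims the space. A minimal sketch, with an illustrative path (zuul.build is the build UUID Zuul exposes to jobs):

  - hosts: localhost
    tasks:
      - name: Remove the workspace so the Jenkins watcher drops the lock
        file:
          path: "/srv/nfs/zuul-workspaces/{{ zuul.build }}"
          state: absent
        ignore_errors: true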
Making Zuul properly aware of the resources sounds cleaner, but it actually brings the unfortunate complication that I want to strictly minimise the regions where the resources are locked: I don't want jobs to be holding locks while they compile code or upload logs, for example.
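One way to keep the locked region tight without teaching Zuul about the resources is to structure the run playbook so the lock is only held around the test tasks; an illustrative shape, with hypothetical role names (note that an aborted build still skips the always section, which is exactly why the unconditional cleanup stage matters):

  - hosts: all
    tasks:
      - name: Compile the code (no lock held yet)
        include_role:
          name: build-firmware

      - block:
          - name: Acquire the shared hardware lock
            include_role:
              name: acquire-hw-lock

          - name: Run the tests that need the hardware
            include_role:
              name: run-hw-tests
        always:
          - name: Release the lock even if the tests fail
            include_role:
              name: release-hw-lock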
That all makes sense, thanks. We should make sure to put the caveats in the documentation so it doesn't surprise anyone.

-Jim