Adding an unconditional cleanup job phase
Hi all,

Looking, for the moment, for lower-hanging fruit in my quest to reduce test resource pressure with Zuul.

One unfortunate hack in my use-case is that we currently lock and release test resources which are shared with another system. This is handled by an ansible role that sends API calls to that other system. This means summarily killing a running job can prevent a resource from getting unlocked. To get around this I've had to disable pipeline.dequeue-on-new-patchset.

I'd like to add a stage where playbooks can be added that still get run even if the job gets stopped. My guess is that this is also useful to other people who are using the static nodepool driver and would like to unconditionally clean up files left behind by jobs - does that sound correct? It generally seems that there would be value in having an unconditional cleanup stage, anyway.

If you agree so far, can we talk semantics & implementation?

I think it would be sensible that these playbooks are _never_ killed, but that obviously poses a risk that users can get stuck with an unkillable job running. I guess a simple solution is to have a distinct timeout from the job timeout, which covers just the running of these playbooks. We recommend setting this pretty short (default 30 secs, say), and strictly minimising the logic in these playbooks.

If I'm still making sense then from what I've read so far, it seems all the complexity here can just go in AnsibleJob - the gearman communication would already support this. The client would just send the executor:stop TextJob as normal, and then the server can return its response asynchronously when the cleanup playbooks have finished?

How does that sound?

Cheers,
Brendan
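For readers unfamiliar with the setup being described, here is a minimal sketch of the lock/release pattern above, assuming a hypothetical HTTP API on the other system (the endpoint names, URL and payload are invented for illustration; the real role presumably uses its own API calls):

    # Hypothetical helpers mirroring the ansible role described above.
    import requests

    LOCK_API = "https://resource-broker.example.com/api"  # hypothetical endpoint

    def lock_resource(name, job_id):
        # Would run from a pre-run playbook.
        r = requests.post(f"{LOCK_API}/lock",
                          json={"resource": name, "owner": job_id}, timeout=30)
        r.raise_for_status()

    def release_resource(name, job_id):
        # Would run from a post-run playbook. If the executor summarily kills
        # the job, this call never happens and the resource stays locked --
        # which is the problem an unconditional cleanup phase would address.
        r = requests.post(f"{LOCK_API}/release",
                          json={"resource": name, "owner": job_id}, timeout=30)
        r.raise_for_status()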
Brendan Jackman <brendan.jackman@bluwirelesstechnology.com> writes:
One unfortunate hack in my use-case is that we currently lock and release test resources which are shared with another system. This is handled by an ansible role that sends API calls to that other system. This means summarily killing a running job can prevent a resource from getting unlocked. To get around this I've had to disable pipeline.dequeue-on-new-patchset.
That is unfortunate -- that's probably causing a lot of extra resources to be consumed.
I'd like to add a stage where playbooks can be added that still get run even if the job gets stopped. My guess is that this is also useful to other people who are using the static nodepool driver and would like to unconditionally clean up files left behind by jobs - does that sound correct? It generally seems that there would be value in having an unconditional cleanup stage, anyway.
We've also discussed having a cleanup *job* which runs after all other jobs are complete, though I think that adding support for pausing jobs reduced the need for that somewhat (since a job can run, pause, and continue after all other jobs regardless of whether they succeed). I don't think that directly addresses your use case because it can still be aborted.

One downside of this approach is that Zuul can provide no guarantees that the playbook will run, only a best effort. If the Zuul executor is killed, there's nothing we can do about it.

Because of that, the most reliable way to deal with resource locks is a lock that doesn't require explicit unlocking. The way that Zuul locks nodes in Nodepool accounts for this (if Zuul is killed, the locks are automatically released). Is it possible for you to encode the resources you need as static nodes in Nodepool and add them to your nodeset requests? Then you would get the benefit of Nodepool's locking. If not, we have discussed adding support for more resource types to Nodepool; perhaps that would help.
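To illustrate the "lock that doesn't require explicit unlocking" idea, here is a sketch using an ephemeral ZooKeeper node via kazoo, the same general mechanism Nodepool relies on. This is illustrative only, not Nodepool's actual code, and the host and paths are hypothetical:

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zookeeper.example.com:2181")  # hypothetical host
    zk.start()

    # Ephemeral nodes disappear automatically when the session that created
    # them goes away, so a crashed or killed client releases its lock
    # implicitly -- no cleanup step has to run.
    zk.create("/locks/special-hw-unit-1", b"held-by-job-1234",
              ephemeral=True, makepath=True)

    # ... run the work that needs the shared resource ...

    zk.stop()  # explicit release; if we never get here, session expiry releases it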
If you agree so far, can we talk semantics & implementation?
I think it would be sensible that these playbooks are _never_ killed, but that obviously poses a risk that users can get stuck with an unkillable job running. I guess a simple solution is to have a distinct timeout from the job timeout, which covers just the running of these playbooks. We recommend setting this pretty short (default 30 secs, say), and strictly minimising the logic in these playbooks.
If I'm still making sense then from what I've read so far, it seems all the complexity here can just go in AnsibleJob - the gearman communication would already support this. The client would just send the executor:stop TextJob as normal, and then the server can return its response asynchronously when the cleanup playbooks have finished?
How does that sound?
If we still think a cleanup phase is the best approach despite the caveats (which we should document), the implementation you describe sounds good. A busy system would probably need more than 30 seconds (I'd still probably give it a few minutes). -Jim
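For concreteness, here is a rough sketch of the executor-side flow being discussed: on stop, abort the normal playbooks but still run the cleanup playbooks under their own, shorter timeout. The class and method names are illustrative only, not Zuul's actual AnsibleJob implementation:

    import subprocess

    class JobRunner:
        def __init__(self, playbooks, cleanup_playbooks, cleanup_timeout=300):
            self.playbooks = playbooks
            self.cleanup_playbooks = cleanup_playbooks
            self.cleanup_timeout = cleanup_timeout  # distinct from the job timeout
            self.aborted = False

        def stop(self):
            # Invoked when the scheduler sends executor:stop. Abort the
            # running playbooks, then run cleanup before reporting completion.
            self.aborted = True
            self._run_cleanup()

        def _run_cleanup(self):
            for playbook in self.cleanup_playbooks:
                try:
                    subprocess.run(["ansible-playbook", playbook],
                                   timeout=self.cleanup_timeout,  # hard cap so cleanup can't hang forever
                                   check=False)
                except subprocess.TimeoutExpired:
                    # Give up rather than leave an unkillable job behind.
                    break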
On 10/12/2018 22:49, James E. Blair wrote:
Brendan Jackman <brendan.jackman@bluwirelesstechnology.com> writes:
One unfortunate hack in my use-case is that we currently lock and release test resources which are shared with another system. This is handled by an ansible role that sends API calls to that other system. This means summarily killing a running job can prevent a resource from getting unlocked. To get around this I've had to disable pipeline.dequeue-on-new-patchset.

That is unfortunate -- that's probably causing a lot of extra resources to be consumed.
I'd like to add a stage where playbooks can be added that still get run even if the job gets stopped. My guess is that this is also useful to other people who are using the static nodepool driver and would like to unconditionally clean up files left behind by jobs - does that sound correct? It generally seems that there would be value in having an unconditional cleanup stage, anyway.

We've also discussed having a cleanup *job* which runs after all other jobs are complete, though I think that adding support for pausing jobs reduced the need for that somewhat (since a job can run, pause, and continue after all other jobs regardless of whether they succeed). I don't think that directly addresses your use case because it can still be aborted.
One downside of this approach is that Zuul can provide no guarantees that the playbook will run, only a best effort. If the Zuul executor is killed, there's nothing we can do about it.

Yeah, to be honest I'd resigned myself to the fact that if Zuul components get restarted I'm gonna have to manually check for unreleased locks, and I put them on a 12 hour timeout for that reason too. (Luckily there's decent visibility on the locks.)

Because of that, the most reliable way to deal with resource locks is a lock that doesn't require explicit unlocking. The way that Zuul locks nodes in Nodepool accounts for this (if Zuul is killed, the locks are automatically released). Is it possible for you to encode the resources you need as static nodes in Nodepool and add them to your nodeset requests? Then you would get the benefit of Nodepool's locking. If not, we have discussed adding support for more resource types to Nodepool; perhaps that would help.

Yeah, it may be worth investigating a smarter locking scheme. For context, the resources are pieces of specialised HW, and they are shared with a Jenkins system where they are arbitrated by the Lockable Resources Plugin. Support was added for sharing by creating a Jenkins job that, when run, locks the requested resource then watches a given file until it is deleted, whereupon it releases the lock. So the Zuul job acquires the lock by triggering the Jenkins job, then releases the resource by deleting its own workspace (Jenkins can see the Zuul workspace over NFS). Perhaps a smarter scheme would be to have Jenkins connect to a TCP endpoint to report that the lock is acquired, then release it as soon as it can't communicate with the endpoint any more.

I still think that an unconditional cleanup phase would be useful though, even if best-effort, because with dequeue-on-new-patchset we are gonna end up with a lot of job workspaces to garbage-collect (with the best-effort cleanup phase I would expect it to be a small enough problem that our infra will change before we need to worry about it!).

Making Zuul properly aware of the resources sounds cleaner but it actually brings the unfortunate complication that I want to strictly minimise the regions where the resources are locked: I don't want jobs to be holding locks while they compile code or upload logs, for example.
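As a thought experiment, the "TCP endpoint" variation could look something like the sketch below: the lock holder keeps a connection open, and the broker releases the lock as soon as the connection drops (because the job, executor or network died). Purely illustrative; it handles one client at a time, and acquire_lock/release_lock are placeholders for whatever the broker actually does:

    import socket

    def serve_lock_broker(host="0.0.0.0", port=9999):
        srv = socket.create_server((host, port))
        while True:
            conn, addr = srv.accept()
            resource = conn.recv(1024).decode().strip()  # client names the resource it wants
            acquire_lock(resource, addr)                 # placeholder: mark the resource locked
            try:
                # Block until the peer disconnects; recv() returning b"" means
                # the connection is gone and the lock should be dropped.
                while conn.recv(1024):
                    pass
            finally:
                release_lock(resource, addr)             # placeholder: implicit unlock
                conn.close()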
If you agree so far, can we talk semantics & implementation?
I think it would be sensible that these playbooks are _never_ killed, but that obviously poses a risk that users can get stuck with an unkillable job running. I guess a simple solution is to have a distinct timeout from the job timeout, which covers just the running of these playbooks. We recommend setting this pretty short (default 30 secs, say), and strictly minimising the logic in these playbooks.
If I'm still making sense then from what I've read so far, it seems all the complexity here can just go in AnsibleJob - the gearman communication would already support this. The client would just send the executor:stop TextJob as normal, and then the server can return its response asynchronously when the cleanup playbooks have finished?
How does that sound?

If we still think a cleanup phase is the best approach despite the caveats (which we should document), the implementation you describe sounds good. A busy system would probably need more than 30 seconds (I'd still probably give it a few minutes).
-Jim
Brendan Jackman <brendan.jackman@bluwirelesstechnology.com> writes:
On 10/12/2018 22:49, James E. Blair wrote:
Brendan Jackman <brendan.jackman@bluwirelesstechnology.com> writes:
One unfortunate hack in my use-case is that we currently lock and release test resources which are shared with another system. This is handled by an ansible role that sends API calls to that other system. This means summarily killing a running job can prevent a resource from getting unlocked. To get around this I've had to disable pipeline.dequeue-on-new-patchset.

That is unfortunate -- that's probably causing a lot of extra resources to be consumed.
I'd like to add a stage where playbooks can be added that still get run even if the job gets stopped. My guess is that this is also useful to other people who are using the static nodepool driver and would like to unconditionally clean up files left behind by jobs - does that sound correct? It generally seems that there would be value in having an unconditional cleanup stage, anyway.

We've also discussed having a cleanup *job* which runs after all other jobs are complete, though I think that adding support for pausing jobs reduced the need for that somewhat (since a job can run, pause, and continue after all other jobs regardless of whether they succeed). I don't think that directly addresses your use case because it can still be aborted.
One downside of this approach is that Zuul can provide no guarantees that the playbook will run, only a best effort. If the Zuul executor is killed, there's nothing we can do about it.

Yeah, to be honest I'd resigned myself to the fact that if Zuul components get restarted I'm gonna have to manually check for unreleased locks, and I put them on a 12 hour timeout for that reason too. (Luckily there's decent visibility on the locks.)

Because of that, the most reliable way to deal with resource locks is a lock that doesn't require explicit unlocking. The way that Zuul locks nodes in Nodepool accounts for this (if Zuul is killed, the locks are automatically released). Is it possible for you to encode the resources you need as static nodes in Nodepool and add them to your nodeset requests? Then you would get the benefit of Nodepool's locking. If not, we have discussed adding support for more resource types to Nodepool; perhaps that would help.

Yeah, it may be worth investigating a smarter locking scheme. For context, the resources are pieces of specialised HW, and they are shared with a Jenkins system where they are arbitrated by the Lockable Resources Plugin. Support was added for sharing by creating a Jenkins job that, when run, locks the requested resource then watches a given file until it is deleted, whereupon it releases the lock. So the Zuul job acquires the lock by triggering the Jenkins job, then releases the resource by deleting its own workspace (Jenkins can see the Zuul workspace over NFS). Perhaps a smarter scheme would be to have Jenkins connect to a TCP endpoint to report that the lock is acquired, then release it as soon as it can't communicate with the endpoint any more.
I still think that an unconditional cleanup phase would be useful though, even if best-effort, because with dequeue-on-new-patchset we are gonna end up with a lot of job workspaces to garbage-collect (with the best-effort cleanup phase I would expect it to be a small enough problem that our infra will change before we need to worry about it!).
Making Zuul properly aware of the resources sounds cleaner but it actually brings the unfortunate complication that I want to strictly minimise the regions where the resources are locked: I don't want jobs to be holding locks while they compile code or upload logs, for example.
That all makes sense, thanks. We should make sure to put the caveats in the documentation so it doesn't surprise anyone. -Jim
participants (2)
- Brendan Jackman
- corvus@inaugust.com