Adding an unconditional cleanup job phase
James E. Blair
corvus at inaugust.com
Mon Dec 10 15:49:41 UTC 2018
Brendan Jackman <brendan.jackman at bluwirelesstechnology.com> writes:
> One unfortunate hack in my use-case is that we currently lock and
> release test resources which are shared with another system. This
> is handled by an ansible role that sends API calls to that other
> system. This means summarily killing a running job can prevent a
> resource from getting unlocked. To get around this I've had to
> disable pipeline.dequeue-on-new-patchset.
That is unfortunate -- that's probably causing a lot of extra resources
to be consumed.
> I'd like to add a stage where playbooks can be added that still
> get run even if the job gets stopped. My guess is that this is
> also useful to other people who are using the static nodepool
> driver and would like to unconditionally clean up files left
> behind by jobs - does that sound correct? It generally seems that
> there would be value in having an unconditional cleanup stage,
> anyway.
We've also discussed having a cleanup *job* which runs after all other
jobs are complete, though I think that adding support for pausing jobs
reduced the need for that somewhat (since a job can run, pause, and
then continue after all other jobs have finished, regardless of whether
they succeed). I don't think that directly addresses your use case
because it can still be aborted.
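
For illustration, pausing is triggered from the run playbook with
zuul_return; the role name below is just a placeholder for whatever
acquires your shared resource:

  # Sketch of a run playbook that acquires a shared resource and then
  # pauses; "lock-shared-resource" is a placeholder role name.
  - hosts: localhost
    tasks:
      - name: Acquire the shared resource
        include_role:
          name: lock-shared-resource

      - name: Pause this job until its child jobs have finished
        zuul_return:
          data:
            zuul:
              pause: true

  # The matching release would live in a post-run playbook, which runs
  # once the job resumes -- but, as above, an abort can still skip it.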
One downside of the cleanup-phase approach is that Zuul can provide no
guarantees that the playbook will run, only a best effort. If the Zuul
executor is killed, there's nothing we can do about it.
Because of that, the most reliable way to deal with resource locks is a
lock that doesn't require explicit unlocking. The way that Zuul locks
nodes in Nodepool accounts for this (if Zuul is killed, the locks are
automatically released). Is it possible for you to encode the resources
you need as static nodes in Nodepool and add them to your nodeset
requests? Then you would get the benefit of Nodepool's locking. If
not, we have discussed adding support for more resource types to
Nodepool; perhaps that would help.
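
For example (the host names and labels below are only illustrative),
the Nodepool and job configuration might look something like this:

  # nodepool.yaml (illustrative): register the shared system as a
  # static node so that Nodepool's locking covers it.
  labels:
    - name: shared-resource
  providers:
    - name: static-provider
      driver: static
      pools:
        - name: main
          nodes:
            - name: shared-resource.example.com
              labels:
                - shared-resource
              username: zuul
              max-parallel-jobs: 1

  # Zuul job (illustrative): request that label alongside the normal
  # test node; the resource is then locked for the duration of the
  # build and released automatically even if Zuul dies.
  - job:
      name: integration-test
      nodeset:
        nodes:
          - name: test-node
            label: ubuntu-bionic
          - name: shared
            label: shared-resource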
> If you agree so far, can we talk semantics & implementation?
>
> I think it would be sensible that these playbooks are _never_
> killed, but that obviously poses a risk that users can get stuck
> with an unkillable job running. I guess a simple solution is to
> have a distinct timeout from the job timeout, which covers just
> the running of these playbooks. We recommend setting this pretty
> short (default 30 secs, say), and strictly minimising the logic
> in these playbooks.
>
> If I'm still making sense then from what I've read so far, it
> seems all the complexity here can just go in AnsibleJob - the
> gearman communication would already support this. The client
> would just send the executor:stop TextJob as normal, and then the
> server can return its response asynchronously when the cleanup
> playbooks have finished?
>
> How does that sound?
If we still think a cleanup phase is the best approach despite the
caveats (which we should document), the implementation you describe
sounds good. A busy system would probably need more than 30 seconds,
though; I'd probably still give it a few minutes.
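
To make the proposal concrete, something like this (the attribute
names here are only placeholders, not existing job attributes):

  # Strawman syntax -- "cleanup-run" and "cleanup-timeout" are
  # placeholder names for the proposed phase and its separate timeout.
  - job:
      name: integration-test
      run: playbooks/run.yaml
      post-run: playbooks/post.yaml
      cleanup-run: playbooks/unlock-resources.yaml
      cleanup-timeout: 300  # seconds; best effort, runs even on abort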
-Jim