Semaphores and resources

Tobias.Henkel at bmw.de Tobias.Henkel at bmw.de
Mon Nov 26 15:31:13 UTC 2018


The time penalty would be the wait time for a node per iteration, so especially with the
current resource pressure in OpenStack this can take quite some time.

This is configurable per job. Keeping in mind that this will waste some resources, it
should only be used for quick and cheap jobs.

I already have a change up for review that implements the described behavior:
https://review.openstack.org/619004
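
To give an idea of the shape of the per-job setting, here is a minimal sketch; the attribute
and job names are illustrative assumptions rather than the final syntax of that change:

- job:
    name: quick-publish              # illustrative job name
    semaphore:
      name: quick-publish-lock       # semaphore defined elsewhere
      # Proposed opt-in: request nodes before locking the semaphore.
      # Fast for cheap jobs, but may hold idle nodes while waiting
      # for the lock. Attribute name assumed, pending review.
      resources-first: true

Without the opt-in the default stays the safe ordering: lock the semaphore first, then
request nodes.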

-Tobias

On 26.11.18, 15:58, "David Shrewsbury" <shrewsbury.dave at gmail.com> wrote:

I'm a bit late to the discussion, so maybe this point is moot, but are there any measurable
gains from ever doing the node allocation first? I.e., do we have numbers showing this to
actually be more efficient in the "quick job" case?  I'm mostly just wondering how an admin
would know how/when to tune for this specific case if we left "allocate then lock semaphore"
as an option.

-Dave


On Wed, Nov 14, 2018 at 6:14 AM Tristan Cacqueray <tdecacqu at redhat.com> wrote:
On November 14, 2018 9:48 am, Joshua Hesketh wrote:
> (Had originally forgot to reply-all)
>
> I'm not a user of semaphores, so perhaps my opinion is invalid; however, I'm
> not sure the use case of locking first is valid enough to justify the extra
> complexity of making it an option.
>
> The point of a semaphore is often not to waste resources, so I fail to
> understand the use case of holding nodes ahead of time. Besides, if you
> have resources to spare (or "waste"), then you could have a bigger ready
> allocation in your nodepool meaning that new jobs will pick up a node
> reasonably quickly anyway.
>
When semaphores are used by fast jobs (e.g. publication), and if nodepool
is running at capacity, it may be better to hold the node ahead of time.

Though it sounds like a special case which shouldn't be the default
behavior.
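
For instance (purely illustrative names and values, assuming a per-job
opt-in along the lines Tobias proposes):

# Serialize publication runs; a max of 1 is just an example.
- semaphore:
    name: publish-lock
    max: 1

# A short publication job: requesting its node up front keeps the
# sequence tight even when nodepool is at capacity, at the cost of
# holding one node while it waits for the lock.
- job:
    name: publish-artifacts
    semaphore:
      name: publish-lock
      resources-first: true    # attribute name assumed from the proposal

Heavier jobs sharing a semaphore would simply keep the default
lock-first ordering.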

-Tristan


> Again though, as I'm not a user of this feature I'm likely missing
> something :-).
>
> On Tue, Nov 13, 2018 at 2:37 AM <Tobias.Henkel at bmw.de> wrote:
>
>> Hi Zuulers,
>>
>>
>>
>> We just discovered an (in our case) unpleasant behaviour of zuul and
>> semaphores.
>>
>>
>>
>> Scenario:
>>
>> We have a job ‘foo’ with a semaphore of 5. Now the queue grows and we have
>> 30 items with job ‘foo’ in the queue.
>>
>> The current behaviour of zuul is that it first gets the nodes and then
>> locks the semaphore.
>>
>> This means that 5 jobs run and 25 wait for the semaphore but already got
>> and locked their nodes.
>>
>> In case the job is expensive and long-running, this can become quite a big
>> resource-waste problem if we’re in a quota-constrained situation.
>>
>>
>>
>> After some discussion on IRC this would be a proposal to solve this
>> problem:
>>
>> We could make the order of ‘request nodes’ and ‘lock semaphore’
>> configurable per job, defaulting to the safe behaviour of
>> ‘lock semaphore’ then ‘request nodes’. Jobs would still have the
>> possibility to do the fast path by requesting the nodes first.
>>
>> This makes sense if you have lightweight jobs that should run in quick
>> succession and you know that some waste of resources is ok.
>>
>>
>>
>> What do the other semaphore users out there think about this?
>>
>>
>>
>> Kind regards
>>
>> Tobias


--
David Shrewsbury (Shrews)