Hi Zuulers,

We just discovered an (in our case) unpleasant behaviour of Zuul and semaphores.

Scenario: we have a job ‘foo’ with a semaphore of 5. Now the queue grows and we have 30 items with job ‘foo’ in the queue. The current behaviour of Zuul is that it first gets the nodes and then locks the semaphore. This means that 5 jobs run and 25 wait for the semaphore but have already got and locked their nodes. If the job is expensive and long-running, this can become quite a big resource-waste problem when we're in a quota-constrained situation.

After some discussion on IRC, here is a proposal to solve this problem: we could make the order of ‘request nodes’ and ‘lock semaphore’ configurable per job, with a default to the safe behaviour of ‘lock semaphore’ then ‘request nodes’. Jobs would still have the possibility to take the fast path by requesting the nodes first. This makes sense if you have lightweight jobs that should run in a fast consecutive sequence and you know that some waste of resources is ok.
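To illustrate, the per-job knob could look something like this (the ‘resources-first’ attribute name is just a sketch, not a settled design):

  - semaphore:
      name: foo-semaphore
      max: 5

  - job:
      name: foo
      semaphore:
        name: foo-semaphore
        resources-first: true  # sketch: request nodes before locking the semaphore

With the default (‘resources-first: false’), Zuul would lock the semaphore before submitting the node request; setting it to true would keep today's fast path.

What do the other semaphore users out there think about this?

Kind regards
Tobias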
<Tobias.Henkel@bmw.de> writes:
[...]
What do the other semaphore users out there think about this?
If no one sees any potential problems with the new behavior, I think this sounds like a good change (including making it the default).

-Jim
(I had originally forgotten to reply-all.)

I'm not a user of semaphores, so perhaps my opinion is invalid, but I'm not sure the use case of locking first is valid enough to justify the extra complexity of making it an option. The point of a semaphore is often not to waste resources, so I fail to understand the use case of holding nodes ahead of time. Besides, if you have resources to spare (or "waste"), then you could have a bigger ready allocation in your nodepool, meaning that new jobs will pick up a node reasonably quickly anyway.

Again though, as I'm not a user of this feature, I'm likely missing something :-).
On November 14, 2018 9:48 am, Joshua Hesketh wrote:
[...]
The point of a semaphore is often not to waste resources, so I fail to understand the use case of holding nodes ahead of time. Besides, if you have resources to spare (or "waste"), then you could have a bigger ready allocation in your nodepool meaning that new jobs will pick up a node reasonably quickly anyway.
When semaphores are used by fast jobs (e.g. publication), and if nodepool is running at capacity, it may be better to hold the node ahead of time. Though it sounds like a special case which shouldn't be the default behavior.

-Tristan
I'm a bit late to the discussion, so maybe this point is moot, but are there any measurable gains from ever doing the node allocation first? I.e., do we have numbers showing this to actually be more efficient in the "quick job" case?

I'm mostly just wondering how an admin would know how/when to tune for this specific case if we left "allocate then lock semaphore" as an option.

-Dave
-- David Shrewsbury (Shrews)
The time penalty would be the wait time for a node per iteration. So, especially with the resource pressure in OpenStack at the moment, this can take quite some time.

This is configurable per job. So, keeping in mind that this will waste some resources, it should only be used for quick and cheap jobs.

I already have a change up for review that implements the described behavior: https://review.openstack.org/619004

-Tobias
David Shrewsbury <shrewsbury.dave@gmail.com> writes:
[...] I'm mostly just wondering how an admin would know how/when to tune for this specific case if we left "allocate then lock semaphore" as an option.
I think it would depend on the nature of the job and the load on the system, so I'm not sure we can generalize there. I will just say that in OpenStack's case, we considered (and still may, if we get around to it) implementing AFS mirror updates as a Zuul job; in that case I'd be more than happy for the system to get a head start on requesting a node (which could take hours to obtain) at the risk of waiting a few minutes for a previous job to finish.

Indeed -- consider two such jobs in our system launched at the same time during a period where it takes 2 hours to allocate a node. If we obtained the semaphore first, then those two jobs would take 4 hours to complete, but if we obtained the semaphore last, they would take 2 hours.
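Spelled out as a rough timeline (assuming the semaphore has a max of 1 and the job itself runs in only a few minutes):

  semaphore first:  A: lock -> wait 2h for node -> run -> unlock
                    B: lock -> wait 2h for node -> run        => ~4 hours total
  nodes first:      A and B wait 2h for their nodes in parallel,
                    then A runs, then B runs                  => ~2 hours total

That at least is the sort of thing a user (not necessarily admin) would want to consider.

-Jim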
On Mon, Nov 26, 2018 at 10:43 AM James E. Blair <corvus@inaugust.com> wrote:
[...]
Indeed -- consider two such jobs in our system launched at the same time during a period where it takes 2 hours to allocate a node. If we obtained the semaphore first, then those two jobs would take 4 hours to complete, but if we obtained the semaphore last, they would take 2 hours.
This example helps cement the usefulness in my head. Thanks. We should consider adding it to the documentation for the new option (or something similar).
-- David Shrewsbury (Shrews)