Node request priorities
Hi,

In OpenStack, we occasionally have periods where, for various reasons, one project, or a group of projects, uses a sufficiently large number of node resources that it significantly delays results for other projects. We happen to be in one of those periods, so I've had some time to think about ways to mitigate this.

First, let me say that one of the potential causes is long and unreliable tests. There are efforts underway to communicate that and hopefully address it. But that topic isn't really for this list. Our concern is, when a system gets into this state, can or should we do anything in Zuul to mitigate it?

I would like to avoid (for now) the use of project-based quotas. I could certainly see us adopting tenant quotas in the future, but that doesn't help us in this case where all of the contention is within a single tenant. Within a tenant, project quotas just seem to be too much work. I wouldn't even begin to know how to set them fairly. And what we really want to do is use as many node resources as are available, but use them in a fair manner.

I think we can accomplish this by augmenting our use of node request priority. Currently, we use a priority queue with very coarse granularity (high, normal, low) set by Zuul according to pipeline. I'd like to adjust the system so that each subsequent node request from a given project has a lower priority. In other words, at any given time, the first change from project A in gate has the highest priority, and the same is true for project B. The second change from A has a slightly lower priority, and so forth.

The problem with this is that the node request priorities must change over time as the contents of the pipeline queues change. Once the first change from A merges, the second change from A should now have the highest priority. That makes this incompatible with our current priority queue system.

Fortunately, Tobias has been working on adding caching to Nodepool's use of Zookeeper. If we start having node requests cached in-memory in the launchers, the in-memory records can be automatically updated by Zookeeper callbacks. This means that we can encode our additional priority information on an attribute in the Znode, updating it as necessary, and combine it with the already existing information from the priority queue to determine the order in which to fulfill requests.

To summarize, I propose the following:

* The current priority queue system for node requests will remain in place. I'll call that "precedence" to distinguish it from the new bits.

* Add a "priority" field to the node request znodes in Zookeeper. This is an integer which indicates the priority of this request relative to others at the same pipeline precedence level.

* Zuul will update the priority field after each time a pipeline queue processor runs, according to internal heuristics. I propose that initially we decrement the priority for each subsequent change from the same project, though we may also add an option to do so by project-group, as determined by membership in a shared change queue.

* We add temporary feature flags to Zuul so that we can try out these new features (including different options for how to determine the priority) before committing to future support. Whatever we decide to keep, we may also want to make optional, so these flags may eventually turn into permanent configuration flags.

* We implement a TreeCache for node requests in nodepool.
* When nodepool begins to iterate over all node requests in order to handle them, it first sorts by precedence, then priority. Items with the same precedence and priority will remain in their chronological order of submission.

I think that will help make node distribution more fair, and it builds on top of the current system without changing it, so it should be forward/backward compatible, and even optional.

-Jim
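[Editor's note: for illustration, here is a minimal Python sketch of the ordering proposed above, using placeholder classes rather than Zuul's or Nodepool's actual ZooKeeper records. In this encoding a lower numeric value is served sooner, so "decrementing the priority" of each subsequent change from a project is expressed as a larger number.]

# Minimal sketch only; names and structures are placeholders, not
# Zuul's or Nodepool's real data model.
from dataclasses import dataclass, field
from itertools import count

_submissions = count()

@dataclass
class NodeRequest:
    project: str
    precedence: int      # existing coarse pipeline level (high/normal/low)
    priority: int = 0    # proposed per-request field within a precedence level
    submitted: int = field(default_factory=lambda: next(_submissions))

def assign_priorities(requests):
    # First change from each project gets 0, the next 1, and so on; this
    # would be recomputed each time the pipeline queues change.
    counts = {}
    for req in sorted(requests, key=lambda r: r.submitted):
        req.priority = counts.get(req.project, 0)
        counts[req.project] = req.priority + 1

def handling_order(requests):
    # Sort by precedence, then priority; ties keep submission order.
    return sorted(requests, key=lambda r: (r.precedence, r.priority, r.submitted))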
On 2018-10-26 06:59:33 -0700 (-0700), James E. Blair wrote:
In OpenStack, we occasionally have periods where, for various reasons, one project, or a group of projects, uses a sufficiently large number of node resources that it significantly delays results for other projects. We happen to be in one of those periods, so I've had some time to think about ways to mitigate this. [...]
I think the proposed fair queuing idea sounds grand, and am very excited to see where this plan leads. -- Jeremy Stanley
On 2018-10-26 06:59:33 -0700 (-0700), James E. Blair wrote: [...]
I propose that initially we decrement the priority for each subsequent change from the same project, though we may also add an option to do so by project-group, as determined by membership in a shared change queue. [...]
Thinking through this, it seems like satisfying based on a strict per-project priority would be inefficient for shared change queues in dependent pipelines (consider three changes from one project followed by one from another, the changes at positions 1 and 4 get satisfied first but then 4 is stuck waiting for 2 and 3 to be satisfied and tested anyway). So perhaps we want both the by-project fair queuing algorithm and a by-group fair queuing algorithm and then allow them to be applied to queues as desired? Maybe have dependent default to by-group and independent et al default to by-project? -- Jeremy Stanley
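[Editor's note: a rough sketch of the difference Jeremy describes, using the example of three changes from project X followed by one from project Y in a shared dependent queue; the names and counting functions here are illustrative only, not Zuul code.]

changes = [("X", "X1"), ("X", "X2"), ("X", "X3"), ("Y", "Y1")]

def by_project(changes):
    counts, priorities = {}, {}
    for project, change in changes:
        priorities[change] = counts.get(project, 0)
        counts[project] = priorities[change] + 1
    return priorities

def by_group(changes):
    # Everything in the shared queue is counted together, in queue order.
    return {change: i for i, (_, change) in enumerate(changes)}

print(by_project(changes))  # {'X1': 0, 'X2': 1, 'X3': 2, 'Y1': 0}
print(by_group(changes))    # {'X1': 0, 'X2': 1, 'X3': 2, 'Y1': 3}
# Under by-project counting, Y1 competes with X1 for nodes even though it
# cannot merge until X2 and X3 have been tested; by-group counting keeps
# the shared queue's order intact.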
On 10/26/18 7:32 AM, Jeremy Stanley wrote:
On 2018-10-26 06:59:33 -0700 (-0700), James E. Blair wrote: [...]
I propose that initially we decrement the priority for each subsequent change from the same project, though we may also add an option to do so by project-group, as determined by membership in a shared change queue. [...]
Thinking through this, it seems like satisfying based on a strict per-project priority would be inefficient for shared change queues in dependent pipelines (consider three changes from one project followed by one from another, the changes at positions 1 and 4 get satisfied first but then 4 is stuck waiting for 2 and 3 to be satisfied and tested anyway). So perhaps we want both the by-project fair queuing algorithm and a by-group fair queuing algorithm and then allow them to be applied to queues as desired? Maybe have dependent default to by-group and independent et al default to by-project?
I think you're right about this -- each manager may need to implement its own handling of this. So perhaps we'll end up with a global flag to enable the behavior generally. -Jim
On Fri, Oct 26, 2018 at 9:59 AM James E. Blair <corvus@inaugust.com> wrote:
* When nodepool begins to iterate over all node requests in order to handle them, it first sorts by precedence, then priority. Items with the same precedence and priority will remain in their chronological order of submission.
Out of curiosity, what does the current node allocation algorithm look like? I ask because, to an untrained eye, it seems the current allocation does not prioritize finishing individual changes. I often see things like in this screenshot: https://i.imgur.com/gngyZS8.png

This is at a small scale (two changes), but we see that the second change has had all its job node requests fulfilled while the first one is still waiting for a node. If you take a closer look at the status page, you'll see this phenomenon occurring at a larger scale -- there are a lot of changes later in the queue/pipeline that are getting nodes allocated to their jobs while there are changes ahead in the queue that are still waiting.

Would this concept of priority/precedence help us churn through individual changes faster?

David Moreau Simard
Service Reliability Engineering | OpenShift
dmsimard = [irc, github, twitter]
On 2018-10-26 11:33:30 -0400 (-0400), David Moreau Simard wrote: [...]
I often see things like in this screenshot: https://i.imgur.com/gngyZS8.png This is at a small scale (two changes) but we see that the second change has had all its job node requests fulfilled while the first one is still waiting for a node. [...]
This can happen if a job fails in pre phase and then has to be requeued. -- Jeremy Stanley
On Fri, Oct 26, 2018 at 11:43 AM Jeremy Stanley <fungi@yuggoth.org> wrote:
On 2018-10-26 11:33:30 -0400 (-0400), David Moreau Simard wrote: [...]
I often see things like in this screenshot: https://i.imgur.com/gngyZS8.png This is at a small scale (two changes) but we see that the second change has had all its job node requests fulfilled while the first one is still waiting for a node. [...]
This can happen if a job fails in pre phase and then has to be requeued.
Oh, that could explain it. I wonder if we have a way to highlight from the status page that a job is in the process of being retried?

David Moreau Simard
Service Reliability Engineering | OpenShift
dmsimard = [irc, github, twitter]
On Fri, Oct 26, 2018, at 8:43 AM, Jeremy Stanley wrote:
On 2018-10-26 11:33:30 -0400 (-0400), David Moreau Simard wrote: [...]
I often see things like in this screenshot: https://i.imgur.com/gngyZS8.png This is at a small scale (two changes) but we see that the second change has had all its job node requests fulfilled while the first one is still waiting for a node. [...]
This can happen if a job fails in pre phase and then has to be requeued.
I think it usually happens because we tie a queued job directly to a node request against a provider. This request can fail or time out against that provider 3 times before moving to the next provider (for a total of three providers). This means that if your request is tied to a cloud that is slow today or otherwise unhappy, that job may have to wait longer than its siblings.

I expect this will happen under the proposed priority redesign, as that proposal doesn't address what happens when a request times out 3 times in a row against a single provider (which takes 15 minutes).

To fix this I think what we want is a FIFO node queue. For example, if we have 5 requests for ubuntu and 5 requests for centos, as those are booted they go into the FIFO queue and the highest-priority node request takes the first valid node(s), then the second takes the next node(s) off the queue, and so on. Now it doesn't matter if a single provider is slow; the highest-priority request gets the node first.

Clark
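[Editor's note: a loose sketch of the FIFO hand-off Clark describes, with hypothetical class, method, and attribute names; this is not how the Nodepool launcher actually works today, and multi-node requests are ignored for simplicity.]

# Booted nodes enter a per-label queue; the highest-priority outstanding
# request for that label claims the next node, regardless of which
# provider booted it.
from collections import defaultdict, deque

class NodeFifo:
    def __init__(self):
        self.ready = defaultdict(deque)   # label -> ready nodes, in boot order
        self.pending = []                 # outstanding requests

    def add_ready_node(self, label, node):
        self.ready[label].append(node)
        self._assign()

    def add_request(self, request):
        # Assumes request has .label, .priority (lower = serve sooner),
        # and a fulfill() callback.
        self.pending.append(request)
        self._assign()

    def _assign(self):
        # sorted() returns a copy, so removing fulfilled requests from
        # self.pending while iterating is safe.
        for request in sorted(self.pending, key=lambda r: r.priority):
            nodes = self.ready[request.label]
            if nodes:
                request.fulfill(nodes.popleft())
                self.pending.remove(request)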
Clark Boylan <cboylan@sapwetik.org> writes:
On Fri, Oct 26, 2018, at 8:43 AM, Jeremy Stanley wrote:
On 2018-10-26 11:33:30 -0400 (-0400), David Moreau Simard wrote: [...]
I often see things like in this screenshot: https://i.imgur.com/gngyZS8.png This is at a small scale (two changes) but we see that the second change has had all its job node requests fulfilled while the first one is still waiting for a node. [...]
This can happen if a job fails in pre phase and then has to be requeued.
I think it usually happens because we tie a queued job directly to a node request against a provider. This request can fail or time out against that provider 3 times before moving to the next provider (for a total of three providers). This means that if your request is tied to a cloud that is slow today or otherwise unhappy, that job may have to wait longer than its siblings.
I expect this will happen under the proposed priority redesign as that proposal doesn't address what happens when a request times out 3 times in a row against a single provider (which takes 15 minutes).
To fix this I think what we want is a FIFO node queue. For example, if we have 5 requests for ubuntu and 5 requests for centos, as those are booted they go into the FIFO queue and the highest-priority node request takes the first valid node(s), then the second takes the next node(s) off the queue, and so on. Now it doesn't matter if a single provider is slow; the highest-priority request gets the node first.
Yes, I think that would be a good idea, though it is a significant change to the algorithm -- and is made more complex by the need for nodes for the same request to be in the same region. -Jim
participants (5)

- Clark Boylan
- corvus@inaugust.com
- David Moreau Simard
- James E. Blair
- Jeremy Stanley