Using nodepool v3 with Jenkins
Hi,

Hoping that we can make use of nodepool v3 with Jenkins to solve some of our existing problems and, at the same time, move a bit closer to upgrading from zuul v2 -> v3. I'm going to outline our current problem, the initial thinking that resulted in some of my questions last night in IRC, and some further thinking that may provide a way forward for us. Hoping that someone can chime in and say whether this is realistic or whether we're heading down the wrong path before we get too far. Keep in mind that I'm only starting with nodepool, so I'm unsure if I even understand some of the concepts, e.g. does nodepool get a request for a particular type of node, or is it inferred from there being none of that type available in the queues in zookeeper?

Current problem:

We have some resources that are limited and are not directly accessible as workers for Jenkins, and so we need a way to manage the allocation of these resources to certain jobs. Some of these resources are provided by teams and some come from a set of resources managed by the CI team, so we probably have ~4 sets of these types of resources. At the moment this is done using some (too much) hackery with groovy, the Global Event Listener Plugin (GELP), and the Lockable Resources plugin.

The GELP allows us to hook into various points in Jenkins, such as when a job is scheduled but not yet running in Jenkins (QUEUED). Unfortunately there doesn't appear to be any way with the gearman protocol to inspect which job is going to be assigned before it is picked up, so we can only do manipulation after the job has been scheduled and before it starts running. The data required to provide the job with the details needed to use the limited resource is retrieved from vault and injected right before the job is fully launched on Jenkins. This is done by creating lockable resource entries dynamically based on the number of resources available, then catching the event when Jenkins allows the job needing the resource to start and injecting the required data into its environment.

Unfortunately this means the queuing for jobs requiring these limited resources occurs within Jenkins instead of Zuul, because the job has still been scheduled even if it is not yet running. We've run into a number of problems where workers get allocated to jobs that don't get to start, taking them away from jobs that could run and effectively lightly DoSing the system.

Initial thoughts:

So enter nodepool, and the hope that if we move the management of both these resources and the workers out of Jenkins and into something a little more flexible, it can prevent a job from being scheduled in the first place if not enough of the limited resources are available, and allow jobs that do not need them to be scheduled, because a worker isn't assigned to the job that is queued waiting. Initially I was thinking of having two resources, making the one representing the limited resource a fake resource of some kind that Jenkins doesn't connect to, but as it appears to be a requirement of the Jenkins Nodepool agent that nodes returned must be accessible to Jenkins, that doesn't seem to fly. The next idea was whether it would be possible to have a static pool and another driver that could grab a node from the static pool if a request for the limited-resource node type came in, which prompted some of my questions in IRC last night.
Further thinking done:

After spending a bit more time with the static nodepool driver, and depending on how it works (there are lots of gaps in my understanding), I'm wondering if what we need is to create a custom driver and place all of our static nodes in an internal pool inside this driver, additionally query our vault instance for a list of these limited resource groups and build some pools of those internally as well, and assign labels as part of the requirements for the jobs using these limited resources. Then, instead of immediately registering all of the nodes with zookeeper, populate zookeeper with an entry containing suitable labels depending on the node label request that comes in. If the request is for a limited resource identified by a name such as teamA-resources, the driver code can test whether there are any not in use and, if so, create a node entry in zookeeper that contains the labels "teamA-resources" and "teamA-resource-5". The second label can be used in Jenkins by our GELP code to determine which set of data to inject into the environment of the job scheduled on that node.

Obviously there is still some hackery in there, but does a custom driver make sense that combines some ideas from the static driver with those of the dynamic drivers, where instead of launching a new node it simply creates a node entry on demand using both a static node and details of what resource is required? Possibly I'm misunderstanding how nodepool works in this case; maybe it just populates a queue in zookeeper and tries to keep some entries in there, which may mean that what needs to happen is that the driver adds one entry allocating a node to each of the sets of limited resources to the zookeeper queues, with further entries added as those queues are consumed, provided there are enough resources available. Which is a bit different, but it seems like it would still limit the amount of damage where multiple jobs are launched and require more of the limited resource than is available.

Currently we can end up seeing 3 queue instances for a single limited resource queued up in Jenkins, and as we migrate another case where this will be needed we could end up seeing a queue of 9-10. It seems nodepool might limit it to only losing one per team, until we can migrate to zuulv3 and use some of the mutex capabilities to handle this a bit better.

Thoughts? Is this the best way to dig up, as not digging isn't an option here...

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"
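To make the labelling scheme a bit more concrete, here is a rough sketch in static-driver-style YAML of a node carrying both a team label and a per-instance label. Hostnames and attribute names are illustrative only and would need checking against the static driver documentation; the custom driver described above would create equivalent entries on demand rather than registering them up front.

  # Hypothetical sketch only: a static worker registered with both a generic
  # team label (what the job requests) and a per-instance label (what GELP
  # uses to pick the matching vault data).
  labels:
    - name: teamA-resources
    - name: teamA-resource-5

  providers:
    - name: limited-resources
      driver: static
      pools:
        - name: main
          nodes:
            - name: worker05.example.com        # Jenkins-reachable static machine
              labels:
                - teamA-resources               # label the job asks for
                - teamA-resource-5              # identifies which resource's creds to inject
              username: jenkins
              max-parallel-jobs: 1              # one job per resource at a time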
First, a disclaimer: I mention this mostly because of the subject line, which might cause people to jump to conclusions -- our goal with Nodepool is not to explicitly support Jenkins or any system other than Zuul at this point. We are currently keeping our options open and trying to limit explicit ties to Zuul so that, in the future, it might be generically useful. But at the moment, we don't guarantee the stability of the API, and we don't intend to merge or support features that aren't in service of Zuul.

Having said that, I think you and the rest of the team are on the same page about this, having discussed it previously. This may be a use case that applies equally well to Zuul, and if so, I think there may be options that could help in your situation and improve things for Zuul as well.

Darragh Bailey <daragh.bailey@gmail.com> writes:

...
We have some resources that are limited and are not directly accessible as workers for Jenkins, and so we need a way to manage the allocation of these resources to certain jobs. Some of these resources are provided by teams and some come from a set of resources managed by the CI team, so we probably have ~4 sets of these types of resources. At the moment this is done using some (too much) hackery with groovy, the Global Event Listener Plugin (GELP), and the Lockable Resources plugin.
...

There's one thing I don't understand here: considering that you go and get credentials to access the resources from vault, why is it that it's not permissible for Jenkins (or Zuul) to access it?

Put another way, are these nodes that you will SSH into from within the job? If not, why not?

Generally what Nodepool will do is expect a list of all of the required resources for a job, do what is needed to obtain those resources (whether that's locking a static node or creating a dynamic node), and then when they are all present, return the data to Zuul.

Ultimately, any solution will involve Nodepool having an explicit understanding of each of the resources. But first I think I need to understand why we can't just tell it this is a static node running SSH or winrm.

-Jim
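For illustration, in Zuul v3 terms that "list of required resources" is what a job's nodeset expresses. A hypothetical sketch, pairing an ordinary worker with one of the limited resources (label and job names invented):

  # Sketch only: a nodeset asking Nodepool for both an ordinary worker and a
  # limited resource, so the job does not start until both are available.
  - nodeset:
      name: teama-k8s-test
      nodes:
        - name: deployer              # machine the job actually runs on
          label: shared-worker
        - name: teama-k8s             # the limited resource, locked for the job
          label: teamA-resources

  - job:
      name: deploy-and-exercise-teama-k8s
      nodeset: teama-k8s-test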
Hi,

On Tue, 29 Jan 2019 at 23:59, James E. Blair <corvus@inaugust.com> wrote:
First, a disclaimer:
I mention this mostly because of the subject line, which might cause people to jump to conclusions -- our goal with Nodepool is not to explicitly support Jenkins or any system other than Zuul at this point. We are currently keeping our options open and trying to limit explicit ties to Zuul so that, in the future, it might be generically useful. But at the moment, we don't guarantee the stability of the API, and we don't intend to merge or support features that aren't in service of Zuul.
Understood. In case anyone else is reading, the goal here is to do a phased transition to zuul v3: by sharing the resources it becomes easier to run zuul v2 & v3 side by side until we're ready to complete the switch over.
Having said that, I think you and the rest of the team are on the same page about this, having discussed it previously. This may be a use case that applies equally well to Zuul, and if so, I think there may be options that could help in your situation and improve things for Zuul as well.
Darragh Bailey <daragh.bailey@gmail.com> writes:
...
We have some resources that are limited and are not directly accessible as workers for Jenkins, and so we need a way to manage the allocation of these resources to certain jobs. Some of these resources are provided by teams and some come from a set of resources managed by the CI team, so we probably have ~4 sets of these types of resources. At the moment this is done using some (too much) hackery with groovy, the Global Event Listener Plugin (GELP), and the Lockable Resources plugin.
...
There's one thing I don't understand here: considering that you go and get credentials to access the resources from vault, why is it that it's not permissible for Jenkins (or Zuul) to access it?
Put another way, are these nodes that you will SSH into from within the job? If not, why not?
The resources are k8s systems, and there isn't an issue with providing the credentials to access them; rather, there appears to be a limitation in the current nodepool agent plugin for Jenkins, which expects the node to be attached as an ssh agent, and I can't see these working that way. I'm hoping to avoid needing to do extensive work on the nodepool agent plugin to allow it to handle other node types, given that this is very much a stepping stone to migrating. Once on zuulv3, based on some of the feedback in the IRC channel, it seems likely that we could use the k8s driver to manage these.
Generally what Nodepool will do is expect a list of all of the required resources for a job, do what is needed to obtain those resources (whether that's locking a static node or creating a dynamic node), and then when they are all present, return the data to Zuul.
Ultimately, any solution will involve Nodepool having an explicit understanding of each of the resources. But first I think I need to understand why we can't just tell it this is a static node running SSH or winrm.
-Jim
I think this is a case of some limitations due to how Jenkins understands nodes. It expects that each node is an agent that can run some job. In this case the k8s resources are being used by a job that cannot run directly on the k8s cluster, but rather is updating it and then exercising it. It's a test system, there are a limited number available to be used and shared across multiple teams, and we need somewhere to run the code that will deploy the changes under test to it.

My hope was that we could use a pool of machines that can be provided to Jenkins as standard slaves, and when asked for one of these k8s environments take both the k8s system and one of the static machines and provide it to Jenkins, but it would have to be done in a way that Jenkins only sees the machine it can run the job on and doesn't try to launch an agent on the k8s instance.

One solution around this is to do a 1-to-1 mapping, in that we reserve some machines or VMs that are managed by nodepool under a static label and use one of them whenever one of these resources is needed.

However there is another hitch. One of the reasons we're using vault is that some teams are providing a system built and managed by themselves, and currently they can add this, to be used by their jobs only, by adding an entry under a set path in vault. We grant permissions for them to write to a specific location and then pick it up dynamically. This allows them to add a replacement system to be used on the next run, and stop the current k8s cluster from being used, without any interaction with us. This is something that we'll need to continue to do, so being able to dynamically add/remove nodes from use would be highly desirable to avoid unnecessary requests for us to make changes.

So if we have to do a custom driver to support that, it might be useful to have the system be relatively efficient and apply the same dynamic adding/removing to all of our static nodes, so that instead of needing to reserve 8+ of them for these k8s systems, which are only used some of the time, we can reuse the nodes that run the jobs by only injecting the creds needed by the job that is supposed to have access to the allocated k8s cluster. It was mentioned that such dynamic behaviour causes issues when listing nodes, but if that doesn't impact the Jenkins nodepool agent, I think we could live with it for a few months.

Some of this may be unnecessary, so trying to balance complexity to support the base requirements:

- Some teams can run jobs on a k8s cluster they provide in an exclusive manner
- These can remain self managed
- Avoid jobs needing such resources queuing in Jenkins if the cluster is not yet available
- Avoid unnecessarily blocking other jobs due to no available machines to run jobs on (even though a number are sitting idle)
- Make it easier to upgrade to zuul v3

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"
Darragh Bailey <daragh.bailey@gmail.com> writes:
The resources are k8s systems, and there isn't an issue with providing the credentials to access them; rather, there appears to be a limitation in the current nodepool agent plugin for Jenkins, which expects the node to be attached as an ssh agent, and I can't see these working that way.
I'm hoping to avoid needing to do extensive work on the nodepool agent plugin to allow it to handle other node types, given that this is very much a stepping stone to migrating. Once on zuulv3, based on some of the feedback in the IRC channel, it seems likely that we could use the k8s driver to manage these.
If the issue is that Nodepool collects host keys for static nodes... well, here is the line of code you'd have to change:

http://git.zuul-ci.org/cgit/nodepool/tree/nodepool/driver/static/provider.py...

If, however, the issue is related to providing data to Jenkins, I'm not sure changes to Nodepool will help.
Generally what Nodepool will do is expect a list of all of the required resources for a job, do what is needed to obtain those resources (whether that's locking a static node or creating a dynamic node), and then when they are all present, return the data to Zuul.
Ultimately, any solution will involve Nodepool having an explicit understanding of each of the resources. But first I think I need to understand why we can't just tell it this is a static node running SSH or winrm.
-Jim
I think this is a case of some limitations due to how Jenkins understands nodes. It expects that each node is an agent that can run some job. In this case the k8s resources are being used by a job that cannot run directly on the k8s cluster, but rather is updating it and then exercising it. It's a test system, there are a limited number available to be used and shared across multiple teams, and we need somewhere to run the code that will deploy the changes under test to it.
My hope was that we could use a pool of machines that can be provided to Jenkins as standard slaves, and when asked for one of these k8s environments take both the k8s system and one of the static machines and provide it to Jenkins, but it would have to be done in a way that Jenkins only sees the machine that it can run the job on and doesn't try to launch an agent on the k8s instance.
One solution around this is to do a 1-to-1 mapping, in that we reserve some machines or VMs that are managed by nodepool under a static label and use one of them whenever one of these resources is needed.
However there is another hitch. One of the reasons we're using vault is that some teams are providing a system built and managed by themselves, and currently they can add this, to be used by their jobs only, by adding an entry under a set path in vault. We grant permissions for them to write to a specific location and then pick it up dynamically. This allows them to add a replacement system to be used on the next run, and stop the current k8s cluster from being used, without any interaction with us. This is something that we'll need to continue to do, so being able to dynamically add/remove nodes from use would be highly desirable to avoid unnecessary requests for us to make changes.
You could probably accomplish this in a self-service fashion today in Zuul with a combination of semaphores and secrets (with no involvement from Nodepool).
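As a rough sketch of that pattern, with invented names and the encrypted value elided, the team's own repo could carry something like the following in its zuul configuration:

  # Sketch only: the team defines a secret and a semaphore in their own zuul
  # config, so access to their cluster is self-managed and serialized.
  - secret:
      name: teama-k8s-creds
      data:
        kubeconfig: !encrypted/pkcs1-oaep
          - <ciphertext elided; produced with zuul's encrypt_secret.py tool>

  - semaphore:
      name: teama-k8s            # only one job may hold the cluster at a time
      max: 1

  - job:
      name: deploy-and-exercise-teama-k8s
      semaphore: teama-k8s
      secrets:
        - name: k8s_creds        # variable name exposed to the playbook
          secret: teama-k8s-creds
      run: playbooks/deploy-test.yaml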
So if we have to do a custom driver to support that, it might be useful to have the system be relatively efficient and apply the same dynamic adding/removing to all of our static nodes, so that instead of needing to reserve 8+ of them for these k8s systems, which are only used some of the time, we can reuse the nodes that run the jobs by only injecting the creds needed by the job that is supposed to have access to the allocated k8s cluster. It was mentioned that such dynamic behaviour causes issues when listing nodes, but if that doesn't impact the Jenkins nodepool agent, I think we could live with it for a few months.
Some of this may be unnecessary, so trying to balance complexity to support the base requirements:
- Some teams can run jobs on a k8s cluster they provide in an exclusive manner
- These can remain self managed
- Avoid jobs needing such resources queuing in Jenkins if the cluster is not yet available
- Avoid unnecessarily blocking other jobs due to no available machines to run jobs on (even though a number are sitting idle)
- Make it easier to upgrade to zuul v3
Based on the above, I don't think you will need a custom Nodepool driver.

-Jim
On Wed, 30 Jan 2019 at 22:53, James E. Blair <corvus@inaugust.com> wrote:
If the issue is that Nodepool collects host keys for static nodes... well, here is the line of code you'd have to change:
http://git.zuul-ci.org/cgit/nodepool/tree/nodepool/driver/static/provider.py...
If, however, the issue is related to providing data to Jenkins, I'm not sure changes to Nodepool will help.
It is unfortunately the latter :-(
However there is another hitch. One of the reasons we're using vault is that some teams are providing a system built and managed by themselves, and currently they can add this, to be used by their jobs only, by adding an entry under a set path in vault. We grant permissions for them to write to a specific location and then pick it up dynamically. This allows them to add a replacement system to be used on the next run, and stop the current k8s cluster from being used, without any interaction with us. This is something that we'll need to continue to do, so being able to dynamically add/remove nodes from use would be highly desirable to avoid unnecessary requests for us to make changes.
You could probably accomplish this in a self-service fashion today in Zuul with a combination of semaphores and secrets (with no involvement from Nodepool).
That's very good to know, and may mean aiming for a temporary solution using the 1-to-1 mapping to get to nodepool v3, and then solving it properly when switching to zuul v3. So thanks, that's definitely something I'll share and see if the rest of the team think it's a reasonable approach.
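For reference, a sketch of what such a 1-to-1 static mapping might look like in nodepool config; hostnames, labels and attribute names are illustrative only and would need checking against the static driver documentation:

  # Sketch only: one reserved static machine per limited k8s resource,
  # exposed under a dedicated label.
  labels:
    - name: teama-k8s-deployer
    - name: teamb-k8s-deployer

  providers:
    - name: reserved-deployers
      driver: static
      pools:
        - name: main
          nodes:
            - name: deployer-teama.example.com
              labels:
                - teama-k8s-deployer    # paired with team A's cluster
              username: jenkins
              max-parallel-jobs: 1
            - name: deployer-teamb.example.com
              labels:
                - teamb-k8s-deployer    # paired with team B's cluster
              username: jenkins
              max-parallel-jobs: 1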
So if we have to do a custom driver to support that, it might be useful to have the system be relatively efficient and apply the same dynamic adding/removing to all of our static nodes, so that instead of needing to reserve 8+ of them for these k8s systems, which are only used some of the time, we can reuse the nodes that run the jobs by only injecting the creds needed by the job that is supposed to have access to the allocated k8s cluster. It was mentioned that such dynamic behaviour causes issues when listing nodes, but if that doesn't impact the Jenkins nodepool agent, I think we could live with it for a few months.
Some of this may be unnecessary, so trying to balance complexity to support the base requirements:
- Some teams can run jobs on a k8s cluster they provide in an exclusive manner
- These can remain self managed
- Avoid jobs needing such resources queuing in Jenkins if the cluster is not yet available
- Avoid unnecessarily blocking other jobs due to no available machines to run jobs on (even though a number are sitting idle)
- Make it easier to upgrade to zuul v3
Based on the above, I don't think you will need a custom Nodepool driver.
-Jim
My hope is that your suggestion of using semaphores in zuul v3 means we can live with a less efficient solution.

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"