Using nodepool v3 with Jenkins

Darragh Bailey daragh.bailey at gmail.com
Tue Jan 29 11:32:41 UTC 2019


Hi,

Hoping that we can make use of nodepool v3 with Jenkins to solve some of
our existing problems, and at the same time move a bit closer to upgrading
from zuul v2 -> v3.

Going to outline our current problem, my initial thinking (which resulted in
some of my questions in IRC last night), and some further thinking that may
provide a way forward for us. Hoping that someone can chime in and identify
whether this is realistic or if we're heading down the wrong path before
getting too far.

Keep in mind that I'm only starting with nodepool, so I'm unsure if I even
understand some of the concepts. For example, does nodepool get a request for
a particular type of node, or is the need inferred from there being none of
that type available in the queues in zookeeper?
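
My tentative reading of the nodepool code is that requests are explicit:
something (zuul or another requestor) writes a node request into zookeeper
and nodepool satisfies it, rather than nodepool inferring demand from what's
missing. A rough, untested sketch of how one might poke at those requests
with kazoo; the paths and field names here are my assumptions from skimming
the source, so take with a pinch of salt:

    import json
    from kazoo.client import KazooClient

    # Assumed layout: each pending request is a znode under /nodepool/requests
    # whose JSON payload carries the labels being asked for ("node_types").
    zk = KazooClient(hosts='zk1.example.com:2181')
    zk.start()
    for req_id in zk.get_children('/nodepool/requests'):
        data, _ = zk.get('/nodepool/requests/' + req_id)
        req = json.loads(data.decode('utf-8'))
        print(req_id, req.get('state'), req.get('node_types'))
    zk.stop()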


Current problem:

We have some resources that are limited and are not directly accessible as
workers for Jenkins, so we need a way to manage the allocation of these
resources to certain jobs. Some of these resources are provided by teams and
some come from a set managed by the CI team, so we probably have ~4 sets of
these types of resources. At the moment this is done using some (too much)
hackery with Groovy, the Global Event Listener Plugin (GELP), and the
Lockable Resources plugin.

The GELP allows us to hook into various points in Jenkins, such as when the
job is scheduled but not yet running in Jenkins (QUEUED). Unfortunately there
doesn't appear to be any way with the gearman protocol to inspect which job
is going to be assigned before it is picked up, so we can only do
manipulation after the job has been scheduled and before it starts running.

The data the job needs in order to use the limited resource is retrieved from
vault and injected right before the job gets fully launched on Jenkins. This
is done by creating lockable resource entries dynamically based on the number
of resources available, then catching the event when Jenkins allows the
waiting job to start and injecting the required data into its environment.
A rough sketch of that flow is below.
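
For illustration only, here is the shape of what that Groovy/GELP hack does,
sketched in Python rather than our actual Groovy. The pool handling is a
placeholder, the vault URL and path are made up, and hvac just stands in for
the vault lookup:

    import hvac

    # e.g. teamA has 5 limited resources, modelled like lockable-resource entries
    POOL = ['teamA-resource-%d' % i for i in range(1, 6)]
    IN_USE = set()

    def acquire_resource():
        """Grab a free entry from the pool, mimicking a lockable resource."""
        for name in POOL:
            if name not in IN_USE:
                IN_USE.add(name)
                return name
        return None  # nothing free: the job just sits queued in Jenkins

    def build_job_env(resource_name):
        """Fetch the per-resource details from vault as env vars for the job."""
        client = hvac.Client(url='https://vault.example.com')      # placeholder
        secret = client.secrets.kv.v2.read_secret_version(
            path='ci/teamA/%s' % resource_name)                    # placeholder
        return {k.upper(): str(v) for k, v in secret['data']['data'].items()}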

Unfortunately this means the queuing for the jobs requiring these limited
resources occurs within Jenkins instead of Zuul, because the job has still
been scheduled even if it is not yet running. We've run into a number of
problems where workers get allocated to jobs that never get to start, taking
them away from jobs that could run and effectively lightly DoSing the system.


Initial thoughts:

So enter nodepool, and the hope that if we move the management of both these
resources and the workers out of Jenkins and into something a little more
flexible, it can prevent a job from getting scheduled in the first place if
not enough of the limited resources are available, and allow jobs that don't
need them to be scheduled, since no worker gets assigned to the job left
queued waiting.

Initially I was thinking of having two resources, making the one representing
the limited resource a fake resource of some kind that Jenkins doesn't
connect to. But as it appears to be a requirement of the Jenkins Nodepool
agent that nodes returned must be accessible to Jenkins, that doesn't seem to
fly.

The next idea was whether it would be possible to have a static pool and
another driver that could grab a node from the static pool when a request for
the limited-resource node type came in, which prompted some of my questions
in IRC last night.


Further thinking done:

After spending a bit more time with the static nodepool driver, and depending
on how it works (there are lots of gaps in my understanding), I'm wondering
if what we need is to create a custom driver and place all of our static
nodes in an internal pool inside this driver. Additionally, query our vault
instance for the list of these limited resource groups and build internal
pools of those as well, and assign labels as part of the requirements for the
jobs using these limited resources. Then, instead of immediately registering
all of the nodes with zookeeper, populate zookeeper with an entry containing
suitable labels depending on the node label request that comes in.
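
Ignoring the real nodepool driver interfaces (which I haven't learned yet),
the decision logic I have in mind is roughly the following; all names here
are illustrative:

    class LimitedResourcePool:
        def __init__(self, static_nodes, resource_groups):
            # static_nodes: hostnames we own; resource_groups: team -> count,
            # e.g. {'teamA': 5}, built from a query against our vault instance
            self.free_nodes = list(static_nodes)
            self.free_resources = {
                '%s-resources' % team: ['%s-resource-%d' % (team, i)
                                        for i in range(1, count + 1)]
                for team, count in resource_groups.items()
            }

        def handle_request(self, requested_label):
            """Return (node, labels) to register in zookeeper, or None to defer."""
            if not self.free_nodes:
                return None              # no static worker free yet
            if requested_label in self.free_resources:
                if not self.free_resources[requested_label]:
                    return None          # limited resource exhausted: leave it queued
                resource = self.free_resources[requested_label].pop(0)
                # the second label tells our GELP code which data set to inject
                return self.free_nodes.pop(0), [requested_label, resource]
            # an ordinary request that doesn't need any limited resource
            return self.free_nodes.pop(0), [requested_label]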

If it is a request requiring a limited resource identified by a name such as
teamA-resources, the driver code can check whether any are not in use, and if
so create a node entry in zookeeper that carries the labels "teamA-resources"
and "teamA-resource-5". The second label can be used in Jenkins by our GELP
code to determine which set of data to inject into the environment of the job
scheduled on that node.
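
Continuing the sketch above, the allocation would then look something like
this (again purely illustrative):

    pool = LimitedResourcePool(['static-worker-01', 'static-worker-02'],
                               {'teamA': 5})
    node, labels = pool.handle_request('teamA-resources')
    # node   -> 'static-worker-01'
    # labels -> ['teamA-resources', 'teamA-resource-1']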


Obviously there is still some hackery in there, but does it make sense to
have a custom driver that combines some of the ideas of the static driver
with those of the dynamic drivers, where instead of launching a new node it
simply creates a node entry on demand using both a static node and details of
which resource is required?

Possibly I'm misunderstanding how nodepool works in this case; maybe it just
populates a queue in zookeeper and tries to keep some entries in there. That
may mean that what needs to happen is that the driver adds one entry
allocating a node to each of the sets of limited resources to the zookeeper
queues, with further entries added as those queues are consumed, provided
enough resources are still available. Which is a bit different, but it seems
like it would still work to limit the amount of damage when multiple jobs are
launched requiring more of the limited resource than is available. Currently
we can end up seeing 3 queued instances for a single limited resource in
Jenkins, and as we migrate another case where this will be needed we could
end up seeing a queue of 9-10. It seems nodepool might limit it to only
losing one per team, until we can migrate to zuul v3 and use some of its
mutex capabilities to handle this a bit better.
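
Roughly, the replenishment I'm imagining would be something like this (not
real nodepool driver code, just the idea):

    def replenish(ready_entries, pools):
        """Keep at most one ready node entry per limited-resource label."""
        additions = []
        for label, free in pools.items():
            already_ready = any(label in entry['labels'] for entry in ready_entries)
            if not already_ready and free:
                # advertise a single node carrying this label; the next one is
                # only added once this entry is consumed and resources remain
                additions.append({'labels': [label, free[0]]})
        return additions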


Thoughts? Is this the best way to dig up, as not digging isn't an option
here...

-- 
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"