Hi,

On Tue, 29 Jan 2019 at 23:59, James E. Blair <corvus@inaugust.com> wrote:
> First, a disclaimer:
>
> I mention this mostly because of the subject line, which might cause
> people to jump to conclusions -- our goal with Nodepool is not to
> explicitly support Jenkins or any system other than Zuul at this point.
> We are currently keeping our options open and trying to limit explicit
> ties to Zuul so that, in the future, it might be generically useful.
> But at the moment, we don't guarantee the stability of the API, and we
> don't intend to merge or support features that aren't in service of
> Zuul.

Understood, and in case anyone else is reading: the goal here is to do a phased transition to Zuul v3. By sharing the resources it becomes easier to run Zuul v2 and v3 side by side until we're ready to complete the switch over.
 
> Having said that, I think you and the rest of the team are on the same
> page about this, having discussed it previously.  This may be a use case
> that applies equally well to Zuul, and if so, I think there may be
> options that could help in your situation and improve things for Zuul as
> well.

> Darragh Bailey <daragh.bailey@gmail.com> writes:
>
> ...
>
>> We have some resources that are limited and are not directly accessible as
>> workers for Jenkins, and so need a way to manage the allocation of
>> resources to certain jobs. Some of these resources are provided by teams
>> and some are from a set of resources managed by the CI team. So we probably
>> have ~4 sets of these types of resources. At the moment this is done using
>> some (too much) hackery with groovy, the Global Event Listener Plugin
>> (GELP), and Lockable Resources plugin.
>
> ...
>
> There's one thing I don't understand here: considering that you go and
> get credentials to access the resources from vault, why is it that it's
> not permissible for Jenkins (or Zuul) to access it?
>
> Put another way, are these nodes that you will SSH into from within the
> job?  If not, why not?

The resources are k8s clusters, and there isn't an issue with providing the credentials to access them; rather, there appears to be a limitation in the current nodepool agent plugin for Jenkins: it expects each node to be attached as an SSH agent, and I can't see these clusters working that way.

I'm hoping to avoid needing to do extensive work on the nodepool agent plugin to allow it to handle other node types, given that this is very much a stepping stone to migrating. Once on Zuul v3, based on some of the feedback in the IRC channel, it seems likely that we could use the k8s driver to manage these.
 
> Generally what Nodepool will do is expect a list of all of the required
> resources for a job, do what is needed to obtain those resources
> (whether that's locking a static node or creating a dynamic node), and
> then when they are all present, return the data to Zuul.
>
> Ultimately, any solution will involve Nodepool having an explicit
> understanding of each of the resources.  But first I think I need to
> understand why we can't just tell it this is a static node running SSH
> or winrm.
>
> -Jim

I think this is a case of some limitations in how Jenkins understands nodes: it expects each node to be an agent that can run a job. In this case the k8s resources are used by a job that cannot run directly on the k8s cluster, but rather updates it and then exercises it. Each is a test system, there are a limited number available to be shared across multiple teams, and each job needs somewhere to run the code that will deploy the changes under test.

My hope was that we could maintain a pool of machines provided to Jenkins as standard slaves, and when a job asks for one of these k8s environments, allocate both the k8s system and one of the static machines together, in such a way that Jenkins only sees the machine it can run the job on and doesn't try to launch an agent on the k8s instance.


One solution around this is a 1-to-1 mapping: reserve some machines or VMs that are managed by nodepool under a static label, and hand one out alongside the k8s system whenever one of these resources is needed.
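
To make that concrete, here's a rough Python sketch of the pairing behaviour I'm imagining; all of the names here (PairedPool, Resource, etc.) are made up for illustration, and this is not the nodepool driver API, just the shape of the logic:

    import threading
    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        in_use: bool = False

    class PairedPool:
        """Hands out a (k8s cluster, static runner) pair; only the
        runner half is ever registered with Jenkins as an agent."""

        def __init__(self, clusters, runners):
            self._lock = threading.Lock()
            self._clusters = clusters
            self._runners = runners

        def acquire(self):
            with self._lock:
                cluster = next((c for c in self._clusters if not c.in_use), None)
                runner = next((r for r in self._runners if not r.in_use), None)
                if cluster is None or runner is None:
                    return None  # caller queues/retries until both are free
                cluster.in_use = runner.in_use = True
            # Only `runner` is exposed to Jenkins; the cluster's identity
            # reaches the job out of band (e.g. via injected credentials).
            return cluster, runner

        def release(self, cluster, runner):
            with self._lock:
                cluster.in_use = runner.in_use = False

The important property is that acquire() only succeeds when both halves are free, so Jenkins never sees a runner without its matching k8s system.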


However there is another hitch. One of the reasons we're using vault is that some teams provide a system built and managed by themselves, and currently they can make it available to their jobs only by adding an entry under a set path in vault. We grant them permission to write to a specific location and then pick it up dynamically. This allows them to add a replacement system to be used on the next run, and to stop the current k8s cluster from being used, without any interaction with us. This is something we'll need to continue to support, so some dynamic adding/removing of nodes would be highly desirable to avoid unnecessary requests for us to make changes.
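
For illustration, picking up the team-provided clusters dynamically could look something like the following, using the hvac client; the "ci/k8s-clusters" path and the assumption of a KV v2 secrets engine at the default mount point are mine for the sketch, not our actual layout:

    import hvac

    def discover_clusters(vault_url: str, token: str) -> dict:
        """Return {cluster_name: credentials} for every entry teams
        have written under the agreed vault path."""
        client = hvac.Client(url=vault_url, token=token)
        # Hypothetical layout: secret/ci/k8s-clusters/<cluster-name>
        listing = client.secrets.kv.v2.list_secrets(path="ci/k8s-clusters")
        clusters = {}
        for name in listing["data"]["keys"]:
            secret = client.secrets.kv.v2.read_secret_version(
                path=f"ci/k8s-clusters/{name}")
            clusters[name] = secret["data"]["data"]
        return clusters

Running this at the start of each allocation cycle would pick up replacements and removals without any change on our side.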

So if we have to do a custom driver to support that, it might be useful to make the system relatively efficient and apply the same dynamic adding/removing to all of our static nodes, so that instead of needing to reserve 8+ machines for these k8s systems, which are only used some of the time, we can reuse the nodes used to run the jobs by injecting only the creds needed for the job that is supposed to have access to the allocated k8s cluster. It was mentioned that such dynamic behaviour causes issues when listing nodes, but if that doesn't impact the Jenkins nodepool agent, I think we could live with it for a few months.
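
The "injecting only the creds needed" part could be as simple as rendering a kubeconfig for just the allocated cluster onto the runner at job start. Another hypothetical sketch; the 'server', 'ca_cert' and 'token' field names are invented, not what teams actually write to vault:

    import base64

    def build_kubeconfig(cluster: dict) -> str:
        """Render a minimal kubeconfig for the single allocated
        cluster, from the secret read out of vault."""
        ca = base64.b64encode(cluster['ca_cert'].encode()).decode()
        return f"""\
    apiVersion: v1
    kind: Config
    clusters:
    - name: allocated
      cluster:
        server: {cluster['server']}
        certificate-authority-data: {ca}
    users:
    - name: ci
      user:
        token: {cluster['token']}
    contexts:
    - name: ci
      context: {{cluster: allocated, user: ci}}
    current-context: ci
    """

Copying that file onto the runner for the duration of the job, and removing it afterwards, would keep every other cluster's creds off the shared machines.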


Some of this may be unnecessary, so I'm trying to balance complexity against supporting the base requirements.

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"