Darragh Bailey <daragh.bailey@gmail.com> writes:
The resources are k8s systems, and there isn't an issue with providing the credentials to access them. Rather, there appears to be a limitation in the current nodepool agent plugin for Jenkins: it expects each node to be attached as an SSH agent, and I can't see these resources working that way.
I'm hoping to avoid needing to do extensive work on the nodepool agent plugin to allow it to handle other node types, given that this is very much a stepping stone to migrating; once on Zuul v3, based on some of the feedback in the IRC channel, it seems likely that we could use the k8s driver to manage these.
If the issue is that Nodepool collects host keys for static nodes... well, here is the line of code you'd have to change: http://git.zuul-ci.org/cgit/nodepool/tree/nodepool/driver/static/provider.py... If, however, the issue is related to providing data to Jenkins, I'm not sure changes to Nodepool will help.
Generally what Nodepool will do is expect a list of all of the required resources for a job, do what is needed to obtain those resources (whether that's locking a static node or creating a dynamic node), and then when they are all present, return the data to Zuul.
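As a concrete illustration (the label names here are invented), a single Zuul v3 nodeset can ask for several resources at once, and the node request isn't fulfilled until every element is ready:

  # .zuul.yaml -- label names are hypothetical
  - nodeset:
      name: k8s-deploy-test
      nodes:
        - name: runner           # static machine the job actually runs on
          label: static-deploy-node
        - name: cluster          # the shared k8s system under test
          label: team-k8s-cluster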
Ultimately, any solution will involve Nodepool having an explicit understanding of each of the resources. But first I think I need to understand why we can't just tell it this is a static node running SSH or winrm.
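For reference, a static node definition is just a provider entry along these lines (hostname, key, and names are placeholders); the connection-type field is where ssh or winrm would be selected:

  # nodepool.yaml sketch -- values are placeholders
  providers:
    - name: static-vms
      driver: static
      pools:
        - name: main
          nodes:
            - name: node01.example.com
              labels:
                - static-deploy-node
              username: zuul
              connection-type: ssh   # or winrm
              host-key: "ecdsa-sha2-nistp256 AAAA..."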
-Jim
I think this is a case of limitations in how Jenkins understands nodes: it expects that each node is an agent that can run some job. In this case the k8s resources are used by a job that cannot run directly on the k8s cluster, but rather updates it and then exercises it. These are test systems; a limited number are available to be shared across multiple teams, and each job needs somewhere to run the code that will deploy the changes under test.
My hope was that we could use a pool of machines that can be provided to Jenkins as standard slaves, and when one of these k8s environments is requested, take both the k8s system and one of the static machines and hand them to Jenkins. It would have to be done in a way that Jenkins only sees the machine it can run the job on and doesn't try to launch an agent on the k8s instance.
One solution around this is a 1-to-1 mapping: we reserve some machines or VMs that are managed by Nodepool under a static label, and allocate one of them whenever one of these k8s resources is needed.
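A sketch of what I mean (hostnames and labels invented), with one static label per shared k8s system, so that claiming the node implicitly claims the matching cluster:

  # nodepool.yaml sketch -- one label per k8s environment
  labels:
    - name: k8s-env-1
    - name: k8s-env-2
  providers:
    - name: static-vms
      driver: static
      pools:
        - name: main
          nodes:
            - name: runner01.example.com   # paired with k8s cluster 1
              labels:
                - k8s-env-1
            - name: runner02.example.com   # paired with k8s cluster 2
              labels:
                - k8s-env-2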
However, there is another hitch. One of the reasons we're using Vault is that some teams provide a system built and managed by themselves, and currently they can add it for use by their jobs only by writing an entry under a set path in Vault. We grant them permission to write to a specific location and then pick it up dynamically. This lets them add a replacement system to be used on the next run, and stop the current k8s cluster from being used, without any interaction with us. This is something we'll need to continue to do, so some dynamic adding/removing of nodes would be highly desirable to avoid unnecessary change requests landing on us.
You could probably accomplish this in a self-service fashion today in Zuul with a combination of semaphores and secrets (with no involvement from Nodepool).
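Something along these lines (names invented; the secret body is the usual encrypted blob): a semaphore with max 1 serializes access to the shared cluster, the secret keeps the credentials visible only to the job that declares them, and teams update both via review in their own repo:

  # .zuul.yaml -- illustrative names only
  - semaphore:
      name: team-a-k8s
      max: 1

  - secret:
      name: team-a-k8s-creds
      data:
        kubeconfig: !encrypted/pkcs1-oaep ...

  - job:
      name: deploy-and-test-team-a
      semaphore: team-a-k8s
      secrets:
        - team-a-k8s-creds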
So if we have to do a custom driver to support that, it might be useful to make the system relatively efficient and apply the same dynamic adding/removing to all of our static nodes, so that instead of needing to reserve 8+ machines for these k8s systems (which are only used some of the time), we can reuse the nodes that run the jobs, injecting only the creds needed by the job that is supposed to have access to the allocated k8s cluster. Though it was mentioned that such dynamic behaviour causes issues when listing nodes; if that doesn't impact the Jenkins nodepool agent, I think we could live with it for a few months.
Some of this may be unnecessary, so I'm trying to balance complexity against the base requirements:
- Some teams can run jobs on a k8s cluster they provide, in an exclusive manner
- These clusters can remain self-managed
- Avoid jobs that need such resources queuing in Jenkins when the cluster is not yet available
- Avoid unnecessarily blocking other jobs because no machines are available to run them (even though a number are sitting idle)
- Make it easier to upgrade to Zuul v3
Based on the above, I don't think you will need a custom Nodepool driver.
-Jim