On Wed, 30 Jan 2019 at 22:53, James E. Blair <corvus@inaugust.com> wrote:
> If the issue is that Nodepool collects host keys for static nodes... well, here is the line of code you'd have to change:
> http://git.zuul-ci.org/cgit/nodepool/tree/nodepool/driver/static/provider.py...
> If, however, the issue is related to providing data to Jenkins, I'm not sure changes to Nodepool will help.
It is unfortunately the latter :-(
However, there is another hitch. One of the reasons we're using vault is that some teams provide a system they build and manage themselves, and currently the only way they can make it available to their jobs is by adding an entry under a set path in vault. We grant them permission to write to a specific location and then pick the entry up dynamically. This allows them to add a replacement system to be used on the next run, and to stop the current k8s cluster from being used, without any interaction with us. This is something we'll need to continue to support, so some dynamic adding/removing of nodes would be highly desirable to avoid unnecessary requests for us to make changes.
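For context, the entries the teams publish look roughly like this (the path and field names are made up, rendered as YAML purely for illustration):

  # written with e.g.: vault kv put ci/team-a/k8s @cluster.json
  endpoint: https://k8s.team-a.example.com:6443
  ca_cert: |
    -----BEGIN CERTIFICATE-----
    ...
  token: <service-account token>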
> You could probably accomplish this in a self-service fashion today in Zuul with a combination of semaphores and secrets (with no involvement from Nodepool).
That's very good to know, and may result in us aiming for a temporary solution: doing the 1-to-1 mapping to get to Nodepool v3, and then solving it properly when switching to Zuul v3. So thanks, that's definitely something I'll share and see if the rest of the team think it's a reasonable approach.
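If I've understood the suggestion, the self-service piece would look something along these lines (all names are made up, and the encrypted value would come from Zuul's encrypt_secret tool):

  - semaphore:
      name: team-a-k8s
      max: 1

  - secret:
      name: team_a_k8s
      data:
        kubeconfig: !encrypted/pkcs1-oaep |
          <output of encrypt_secret>

  - job:
      name: team-a-k8s-job
      semaphore: team-a-k8s
      secrets:
        - team_a_k8s

With max: 1 only one job can hold the cluster at a time, and the team can rotate the secret themselves to swap in a replacement cluster.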
So if we have to write a custom driver to support that, it might be useful to make the system relatively efficient and apply the same dynamic adding/removing to all of our static nodes. That way, instead of needing to reserve 8+ nodes for these k8s systems, which are only used some of the time, we can reuse the nodes that run the jobs and only inject the credentials needed for the job that is supposed to have access to the allocated k8s cluster. It was mentioned that such dynamic behaviour causes issues when listing nodes, but if that doesn't impact the Jenkins nodepool agent, I think we could live with it for a few months.
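Presumably the injection itself could be as small as a pre-playbook that writes the secret out on whichever node the job lands on (hypothetical variable names, matching the secret sketched above):

  - hosts: all
    tasks:
      - name: Ensure ~/.kube exists
        file:
          path: "{{ ansible_user_dir }}/.kube"
          state: directory
          mode: "0700"

      - name: Install the team-provided kubeconfig
        copy:
          content: "{{ team_a_k8s.kubeconfig }}"
          dest: "{{ ansible_user_dir }}/.kube/config"
          mode: "0600"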
Some of this may be unnecessary, so I'm trying to balance complexity against the base requirements:
- Some teams can run jobs on a k8s cluster they provide, in an exclusive manner
- These can remain self-managed
- Avoid jobs needing such resources queuing in Jenkins if the cluster is not yet available
- Avoid unnecessarily blocking other jobs because no machines are available to run jobs on (even though a number are sitting idle)
- Make it easier to upgrade to Zuul v3
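And for the interim 1-to-1 mapping mentioned above, I'd expect the static driver config to stay fairly small, something like (hostnames and labels invented):

  labels:
    - name: team-a-k8s

  providers:
    - name: static-nodes
      driver: static
      pools:
        - name: main
          nodes:
            - name: k8s-runner-01.example.com
              labels:
                - team-a-k8s
              username: zuul
              max-parallel-jobs: 1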
> Based on the above, I don't think you will need a custom Nodepool driver.
>
> -Jim
My hope is that your suggestion of using semaphores in Zuul v3 means we can live with a less efficient solution.

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"