On 27/11/2018 22:57, James E. Blair wrote:
> Brendan Jackman <brendan.jackman@bluwirelesstechnology.com> writes:
>> Sure, it definitely makes sense how the differences in the workflows & models led to the differences in behaviour. In fact my org currently uses Gitlab, which in its current incarnation is basically a Github clone with a basic revision-tracking feature, so I considered just writing a Gitlab driver for Zuul. Maybe that's the more pragmatic choice, but Gerrit is (IMO...) a better code review tool than Gitlab, and my test HW availability issues would be a sad reason to be prevented from using it. A way to reduce pressure on test resources seems like something that would be useful to others too, so that's why I wanted to investigate this idea despite the friction it introduces with Gerrit's natural model.

> We'd love a gitlab driver, and there are other folks who would be interested in helping if you decided to go that direction.
>> I haven't thought them through in detail, but there are 3 ways I considered this could potentially be done:
>> 1. Time-based chunking: if there are multiple proposed commits in a dependency chain and the first Gerrit change gets the required approval, Zuul waits N seconds to see if the second one gets an approval. If it doesn't, it tests and potentially merges the first commit. If it does, it resets the timer and waits to see whether the next Gerrit change gets the approval. If the timer runs out, or we reach a Gerrit change with no child, the whole chain gets tested as one Zuul change and potentially merged. This strategy feels a little wild to me to configure/explain, but it could be quite ergonomic once set up. It also feels a bit like a fundamental change to Zuul's model, just jimmied into the Gerrit driver instead of where it really belongs.

> This is similar to something else that's come up recently in discussion, which I'm calling the bisecting pipeline manager. It would be a dependent pipeline manager (so it can gate) which, rather than testing each commit, would test batches of commits (largely determined by the timing of when it runs), merging them all or, if they fail, splitting the batch in half on each failure (modulo their internal dependencies). It's not exactly what you described, but it is a similar mechanism where the goal is to use fewer test resources when gating (and therefore support longer test times). It could also be combined with the NNFI algorithm to operate in parallel as well (if, say, you wanted to support two or more batches running at the same time).
> It's still not exactly a trivial problem, but it is generally useful and isn't Gerrit-specific.
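(To make the comparison concrete, the timer behaviour I had in mind for option 1 would look roughly like the sketch below. This is pure illustration - collect_chunk and has_approval are made-up names, not real Zuul or Gerrit driver internals, and a real implementation would be event-driven rather than polling.)

import time

# Made-up sketch of the "time-based chunking" idea, not real Zuul code.
def collect_chunk(chain, has_approval, wait_seconds):
    """Walk a Gerrit dependency chain (parent first), extending the chunk
    each time the next child change gets its approval within wait_seconds.
    `chain` is a list of changes; has_approval(change) is whatever callable
    asks Gerrit whether the required vote is present."""
    chunk = [chain[0]]  # the first change in the chain is already approved
    for child in chain[1:]:
        deadline = time.time() + wait_seconds
        while time.time() < deadline:
            if has_approval(child):
                chunk.append(child)  # approval arrived in time: grow the
                break                # chunk and reset the timer for the next child
            time.sleep(1)
        else:
            break  # timer ran out: stop growing the chunk here
    return chunk   # test (and potentially merge) all of this as one unit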
That's really interesting - what you've described is certainly a more elegant and generic version of what I was imagining, and it would definitely solve my issue. Maybe that's the way to go; I'll try to dig up the discussion and read up on the idea.

(The only negative thing that comes to mind is that when you - or someone with programming habits like mine - post a patch series, the first N-1 commits are often nothing but preparatory refactoring to enable the final commit's diff to be focussed on adding the feature/fixing the bug. It would be a little annoying if the gate job found that the business end of the patch stack was broken in an unforeseen way, but then went ahead and merged all the preparatory refactoring commits anyway - maybe you need to rewrite the final commit and change the approach to all your refactoring. But that's a problem (if you see it as a problem) with the current dependent manager anyway, so obviously not a dealbreaker.)
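Just to check I've understood the batching/bisecting part, I'm picturing the failure handling along these lines (again only a sketch - gate_batch and run_jobs are invented names, and real Zuul would have to respect the changes' internal dependencies and retest on top of what actually merged):

def gate_batch(changes, run_jobs):
    """Try to merge `changes` as one batch; if the batch fails, split it in
    half and recurse, so one bad change costs roughly O(log n) extra runs
    instead of sinking the whole batch. run_jobs(changes) stands in for
    'run the gate jobs against these changes merged together' and returns
    True on success."""
    if not changes:
        return []
    if run_jobs(changes):
        return list(changes)  # whole batch passed: merge everything
    if len(changes) == 1:
        return []             # a single failing change: report it, merge nothing
    mid = len(changes) // 2
    merged = gate_batch(changes[:mid], run_jobs)
    # Strictly, the second half should be retested on top of whatever really
    # merged from the first half; glossed over here.
    merged += gate_batch(changes[mid:], run_jobs)
    return merged

If that's roughly right, then even one batch at a time would already be a big win for my HW-constrained case, before getting to the parallel/NNFI combination you mentioned.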
> But back to your situation -- I think we've been talking mostly about a gate pipeline -- what do you want to do about check (where Zuul runs jobs on patchset upload)? Do you want to avoid having a check pipeline? Have a check pipeline, but not require a result from it in order to gate, and put it at very low precedence (so that your system will get around to checking changes as capacity permits)? Or do you also want to modify check pipeline behavior so that it somehow only runs jobs on the end of a dependency chain? Or something else?
I haven't thought too much about how my proposal applies to the independent pipeline manager. In my scenario the check pipeline jobs don't put much pressure on test resources - they don't take very long and often don't require special HW (unit/module tests, compilation, formatting checks) - so they're the sort of thing I'd want to run against every commit anyway. I guess I assumed that's how everyone uses the check/gate model.

Cheers,
Brendan