On 2018-11-26 07:48:45 -0800 (-0800), James E. Blair wrote:
Jeremy Stanley <fungi@yuggoth.org> writes:
I don't see how to make such atomicity guarantees on top of backend services which themselves lack any "atomic" merge feature. What should Zuul do if it encounters an error when attempting to merge some of the required changes after testing succeeds (force in a revert of the others somehow, or leave things in an inconsistent state)? I think if we can answer that, or work with the upstream maintainers of the various backends to implement some sort of transaction which can be rolled back before it is committed, it might be safer. That almost certainly implies a performance cost for the backend itself, however, since it can't claim the necessary atomicity either without some sort of (likely intractable from their perspectives) locking solution.
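To make the failure mode concrete, here is a rough sketch of where the inconsistency creeps in when we rely on the backends' own merge calls. The backend object and its submit()/revert() methods are hypothetical stand-ins for the Gerrit/GitHub merge APIs, not real Zuul code:

# Illustrative only: "backend", submit() and revert() are hypothetical
# stand-ins for the Gerrit/GitHub merge APIs, not real Zuul code.

class BackendError(Exception):
    pass

def submit_change_set(backend, changes):
    """Ask the code-review backend to merge each change in a tested set."""
    merged = []
    for change in changes:
        try:
            backend.submit(change)    # e.g. Gerrit "submit", GitHub "merge"
            merged.append(change)
        except BackendError:
            # Testing already passed, but the set is now only half-merged.
            # The unappealing choices: ask the backend to revert what landed
            # (which can itself fail), or leave the repos inconsistent.
            for done in reversed(merged):
                backend.revert(done)
            raise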
I replied around the same time as you and touched on this somewhat. In short, having Zuul perform the merges and pushes itself (rather than making an API call to have the remote Gerrit/GitHub do it) gives us confidence that the result will be accepted. And in the unlikely case that something does go wrong, I think we should unwind back to the previous state (presumably, if we encounter an error on the second of two pushes, we can still push a revert to the first repo).
For that reason, I consider zuul-push to be the first step in implementing this.
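To sketch the "push, then unwind on failure" idea in rough terms (this is not the zuul-push design; the repo paths, remote and branch names here are invented, and it assumes the tip of each repo is a merge commit Zuul created locally):

# Rough sketch only; repo paths, remote and branch names are invented.
import subprocess

def git(repo, *args):
    subprocess.run(["git", "-C", repo] + list(args), check=True)

def push_set(repos, branch="master"):
    """Push the locally-merged result for each repo, reverting on failure."""
    pushed = []
    for repo in repos:
        try:
            git(repo, "push", "origin", f"HEAD:{branch}")
            pushed.append(repo)
        except subprocess.CalledProcessError:
            # Unwind: for each repo that already accepted its push, push a
            # revert of the merge commit to restore the previous state.
            for done in reversed(pushed):
                git(done, "revert", "--no-edit", "-m", "1", "HEAD")
                git(done, "push", "origin", f"HEAD:{branch}")
            raise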
An excellent solution. We may want to make sure Zuul is also able to push --force so that it can "roll back" incomplete sets if one push in the set fails for some reason. I have a feeling this would all be extremely failure-prone in cases where Zuul is not the only account pushing commits directly to the repositories, but it's also fair to declare that danger and let the operator shoot themselves in the foot if they disregard the warning.
--
Jeremy Stanley
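For illustration, a minimal sketch of what such a forced roll-back could look like; every name and parameter here is hypothetical. Using --force-with-lease rather than a bare --force at least refuses to overwrite commits someone else pushed in the meantime, which softens (but does not remove) the risk described above:

# Hypothetical sketch; repo, branch and SHA arguments are invented.
import subprocess

def rollback(repo, branch, known_good_sha, expected_tip_sha):
    """Force the remote branch back to a known-good commit, but only if the
    remote tip is still the commit we last pushed (--force-with-lease)."""
    subprocess.run(
        ["git", "-C", repo, "push",
         f"--force-with-lease={branch}:{expected_tip_sha}",
         "origin", f"{known_good_sha}:refs/heads/{branch}"],
        check=True,
    )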