Hi Andreea,

On Thu, 2019-05-09 at 08:14 +0000, Florescu, Andreea wrote:

> Hey everyone,
>
> During the PTG we discussed having a centralized place for the Buildkite
> pipeline. The Buildkite pipeline is the definition of the steps (or
> commands) that run as part of the CI. Since all crates will have to pass
> common tests like `cargo test`, `cargo build` and `cargo fmt`, the idea
> is to keep these steps in a centralized repository so we do not have to
> duplicate the pipeline in every repository. I created an issue for this
> purpose [1].
>
> The pipeline would live in a `buildkite-ci` repository or similar. Maybe
> rust-vmm-ci is a better option so we can have the coverage test in the
> same repository as well, but that is a different discussion. Having a
> centralized repo for all the testing also simplifies where to host the
> common integration tests that we might run to validate several crates
> together (the dummy VMM we mentioned during the PTG).

A centralized repo does not have to contain a single pipeline, as you
mention later in this email. About the naming, I would definitely prefer
something generic like "rust-vmm-ci" or "ci" or "tests".

> Each GitHub repository would then use `buildkite-ci` as a git submodule.
> To better explain this, let's take an example. Say we have
> `buildkite-ci.git`, and in it a `.buildkite` directory containing
> `pipeline.yml`, which defines steps/commands like cargo build and
> cargo test. Now, say we want to enable the Buildkite CI for the
> vm-memory repository. To do that, we would need the following:
>
> 1. Add `buildkite-ci` as a git submodule of vm-memory.
> 2. Add a new vm-memory-ci pipeline using the Buildkite web interface.
> 3. Specify which pipeline to use to run the tests. This is done via a
>    command called "pipeline upload", which takes as a parameter a path
>    relative to the root of the repository. You can read more about it
>    here [2]. The command in this case would look something like:
>    buildkite-agent pipeline upload buildkite-ci/.buildkite/pipeline.yml
>
> With this design you get two things:
>
> 1. You don't have to duplicate the pipeline across repositories.
> 2. You can still have custom integration tests for each repository.

This looks good, but I also wanted to mention that using a git submodule
is not the only way to achieve this. You could have a dedicated "ci"
directory in every crate containing the same script, which pulls the
content of the "ci" repo before calling into one pipeline or another
(that's the part specific to each crate).

The only concern I have with the git submodule is that we will have to pin
it to a specific version in each and every crate, which means that when we
add more tests or modify them in the "ci" repo, we will have to update the
submodule version in every crate. On the other hand, without the
submodule, if we introduce breaking changes in the tests (which should not
happen), we will have to update each and every crate, otherwise their CI
will be broken.

> Now, after some thought, I have started to believe that the better choice
> is to provide multiple pipelines instead of one. My proposal would be to
> have one pipeline for each platform and operating system we want to
> support. Let me tell you why:
>
> 1. Some crates (like vm-memory) will also have support for Windows, so
>    you want the integration tests to pass on Windows as well. But you
>    can't put that in the default pipeline, because some crates (like
>    kvm-ioctls) cannot support it.
> 2. Ideally we would have *most* of the crates work on arm and x86. That
>    means our pipelines would run the same tests on arm as well as x86.
>    But some tools are not available on arm, so the steps are not exactly
>    the same. Examples of things that aren't available (yet) on arm are
>    clippy and kcov.
>
> Now, when we start development we might not want to add support for both
> arm and x86 from the first PR, because it might turn out to be very
> complex and hard to review. So we can start with x86 and only use the x86
> pipeline until support for arm is added.
>
> With our current configuration of crates, platforms and operating systems
> we would end up having 3 pipelines:
>
> - x86_linux_pipeline.yml (used by kvm-ioctls, vm-memory, vmm-sys-utils)
> - x86_windows_pipeline.yml (used by vm-memory)
> - arm_linux_pipeline.yml (used by kvm-ioctls, vmm-sys-utils)
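To make this concrete, here is a rough sketch of what one of these shared
pipelines could look like. The step labels and the `platform` agent tag
are only illustrative assumptions, not something that has been agreed on:

    # buildkite-ci/.buildkite/x86_linux_pipeline.yml (illustrative sketch)
    steps:
      - label: "build"
        command: cargo build --release
        agents:
          platform: x86_64  # assumed agent tag for targeting x86 Linux machines
      - label: "test"
        command: cargo test --all
        agents:
          platform: x86_64
      - label: "style"
        command: cargo fmt --all -- --check
        agents:
          platform: x86_64
      - label: "clippy"
        command: cargo clippy --all
        agents:
          platform: x86_64

    # arm_linux_pipeline.yml would keep the build/test/style steps (with an
    # arm agent tag) but drop clippy and kcov until those are available on arm.

Since there is no `wait` step in between, Buildkite can schedule these
steps concurrently on whatever agents match the tags, which helps keep a
single build fast.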
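On the repository side, the per-crate part of step 3 could stay as small
as an upload of the shared definitions the crate needs. Whether that
command is typed in the Buildkite web interface or committed as a tiny
bootstrap file is a detail; as a file it might look like this (paths
assume the submodule is named buildkite-ci):

    # vm-memory/.buildkite/pipeline.yml (hypothetical bootstrap)
    steps:
      - label: "upload x86 Linux pipeline"
        command: buildkite-agent pipeline upload buildkite-ci/.buildkite/x86_linux_pipeline.yml
      - label: "upload Windows pipeline"
        command: buildkite-agent pipeline upload buildkite-ci/.buildkite/x86_windows_pipeline.yml

A crate that only targets x86 Linux would upload a single file, and adding
arm support later would just mean uploading arm_linux_pipeline.yml as well.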
Don't we want vm-memory also to run the arm_linux_pipeline.yml?

> What do you all think?

Sounds good, and we should make sure to parallelize those pipelines when a
crate needs to test more than one of them. Waiting for the CI on a PR is
the worst thing, let's make sure to keep it efficient :)

Thanks,
Sebastien

> Regards,
> Andreea
>
> [1] https://github.com/rust-vmm/community/issues/56
> [2] https://buildkite.com/docs/pipelines/defining-steps