Hi!

Very excited about this list and eager to begin discussions. I wanted to start with what I see as the low-hanging fruit and what I see as the goal of this effort.

I'd like to build a set of crates, preferably with a common name, that make it really easy to write VMMs in Rust. I think Rust is an ideal language for building VMMs, and if the right primitives are there, a lot of interesting VMMs will emerge.

Some ideas I have for what would make good core crates:

1) KVM bindgen bindings. Right now, there is an issue with bindgen misgenerating code when it encounters the empty trailing arrays used in KVM. Both crosvm and Firecracker get around this by modifying the headers by hand and then committing the results of bindgen. Dylan also mentioned being unhappy about bindgen performance, but that seems like an eventual optimization to me.

As a side note, I think having OS X Hypervisor.framework bindings, and bindings for whatever the new Windows thing is, would be pretty cool. Maybe there's room for a wrapper around the three, but it's not something that interests me in the near term.

2) The crosvm data_model crate. This one is super critical but easy to misunderstand, as it allows for safe access to volatile memory. Somewhat related is the mmap() bits from sys_util. Not sure how the crosvm folks feel, but I think there is some refactoring here that could be useful to build a memory crate.

3) Some traits for device model implementations. It's easy to bikeshed here, so I reckon it's best to start with a concrete device model like a UART, work through what is required for interfaces, and then iterate from there.

4) Common device models with only a single implementation (e.g., a 16550A UART). Not sure about virtio, maybe.

Curious what other people think here too. I've got a couple of weeks off coming up, so I'd love to have some projects to work on here.

Regards,

Anthony Liguori
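To make item 3 concrete before the bikeshedding starts, here is a minimal sketch of what such a device-model trait could look like. `BusDevice`, `Serial`, and the register handling below are illustrative assumptions for discussion, not code from crosvm or Firecracker:

// Hypothetical trait for devices that respond to MMIO or port I/O accesses.
pub trait BusDevice: Send {
    /// Called when the guest reads from `offset` within this device's region.
    fn read(&mut self, offset: u64, data: &mut [u8]);
    /// Called when the guest writes to `offset` within this device's region.
    fn write(&mut self, offset: u64, data: &[u8]);
}

/// A toy 16550A-style serial port handling only two registers.
pub struct Serial {
    line_status: u8,
}

impl BusDevice for Serial {
    fn read(&mut self, offset: u64, data: &mut [u8]) {
        // Offset 5 is the line status register (LSR) on a 16550A.
        if let (5, [byte]) = (offset, data) {
            *byte = self.line_status;
        }
    }

    fn write(&mut self, offset: u64, data: &[u8]) {
        // Offset 0 is the transmitter holding register (THR); echo to stdout.
        if let (0, [byte]) = (offset, data) {
            print!("{}", *byte as char);
        }
    }
}

A VMM's bus would then route KVM I/O exits to whichever registered device covers the accessed range; iterating on a real UART first should flush out what the trait actually needs (interrupt signaling, bus registration, save/restore, and so on).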
Hi,

I also started some research, so to say, about what crates we can share. I am working on a proposal and should have something ready in a few days. For the device models, I think a better approach would be to also include virtio devices, but export each of them individually using Rust features (like we are currently doing in Firecracker with the vsock device).

One question I have is where we would host the crates. An organization on GitHub with all these crates would be super cool. Something like "rust-vmm" maybe?

Andreea
Hi,

Glad to meet you all here! I'm learning the Firecracker design and code these days; it's really great work. I have some questions about the memory model and virtio subsystems.

1) Currently the memory_model subsystem is designed for specific scenarios. Is it possible to extend memory_model to support more scenarios, such as memfd-based mappings and memory subregions with different permission properties?

2) Do you have any plan to support vhost-user virtio drivers?

3) Is it possible to customize the KVM kernel module for serverless use cases?

Thanks,
Gerry
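For question 1, a minimal sketch of what a memfd-based mapping could look like, assuming the `libc` crate; error handling is reduced to asserts, and this is an illustration, not the actual Firecracker memory_model API:

use std::ptr;

/// Map `size` bytes of memfd-backed guest memory, returning the host address.
fn map_guest_memory(size: usize) -> *mut u8 {
    unsafe {
        // Anonymous, fd-backed memory (Linux 3.17+); the fd can later be
        // sealed or shared with another process (e.g. a vhost-user backend).
        let fd = libc::memfd_create(
            b"guest_mem\0".as_ptr() as *const libc::c_char,
            libc::MFD_ALLOW_SEALING,
        );
        assert!(fd >= 0);
        assert_eq!(libc::ftruncate(fd, size as libc::off_t), 0);
        let addr = libc::mmap(
            ptr::null_mut(),
            size,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_SHARED,
            fd,
            0,
        );
        assert_ne!(addr, libc::MAP_FAILED);
        addr as *mut u8
    }
}

Subregions with different permission properties could then be expressed by mmap()ing slices of the same fd with, say, PROT_READ only.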
Jiang Liu <liuj97@gmail.com> writes:
Hi, Glad to meet you all here! I'm learning the Firecracker design and code these days; it's really great work. I have some questions about the memory model and virtio subsystems. 1) Currently the memory_model subsystem is designed for specific scenarios. Is it possible to extend memory_model to support more scenarios, such as memfd-based mappings and memory subregions with different permission properties? 2) Do you have any plan to support vhost-user virtio drivers? 3) Is it possible to customize the KVM kernel module for serverless use cases? Thanks, Gerry
You should ask this at: https://lists.infradead.org/mailman/listinfo/firecracker-devel

Regards,

Anthony Liguori
"Florescu, Andreea" <fandree@amazon.com> writes:
Hi,
I also started a "research" so to say about what crates we can share. I am working on a proposal and should have something ready in a few days.
For the device models, I think a better approach would be to also include virtio devices, but export each of them individually using rust features (like we are currently doing in Firecracker with the vsock device).
One question I have is where would we host the crates? An organization on GitHub with all these crates would be super cool. Something like "rust-vmm" maybe?
I think we should get some code for the easy stuff (like KVM crate), demonstrate it working with Firecracker and Crosvm, and then we can figure this out. Probably a new GitHub org though. Regards, Anthony Liguori
On Mon, Dec 17, 2018 at 3:31 PM Liguori, Anthony <aliguori@amazon.com> wrote:
Hi!
Very excited about this list and eager to begin discussions. I wanted to start with what I see as the low-hanging fruit and what I see as the goal of this effort.
I'd like to build a set of crates, preferably with a common name, that make it really easy to write VMMs in Rust. I think Rust is an ideal language for building VMMs, and if the right primitives are there, a lot of interesting VMMs will emerge.
Some ideas I have for what would make good core crates:
1) KVM bindgen bindings. Right now, there is an issue with bindgen misgenerating code when it encounters the empty trailing arrays used in KVM. Both crosvm and Firecracker get around this by modifying the headers by hand and then committing the results of bindgen. Dylan also mentioned being unhappy about bindgen performance, but that seems like an eventual optimization to me.
My concerns are only around build time. That's certainly something that can be optimized later and isn't specific to this use of bindgen. It would be great to improve the current, manual bindgen process. It will make using new KVM features easier.
As a side note, I think having OS X Hypervisor.framework bindings, and bindings for whatever the new Windows thing is, would be pretty cool. Maybe there's room for a wrapper around the three, but it's not something that interests me in the near term.
2) The crosvm data_model crate. This one is super critical but easy to misunderstand as it allows for safe access to volatile memory. Somewhat related is the mmap() bits from sys_util. Not sure how the crosvm folks feel but I think there is some refactoring here that could be useful to build a memory crate.
sys_util can be broken into smaller, easier-to-consume parts. The data_model crate has proven very useful; there is also a similar crate in the Fuchsia tree: https://fuchsia.googlesource.com/garnet/+/master/public/rust/zerocopy/src/li... It might be worth considering too.
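The core idea behind data_model (and the zerocopy crate Dylan links) can be sketched roughly like this; `ByteValued` is an illustrative name rather than the actual crosvm trait, and the real crates add alignment checks and volatile access on top:

use std::mem::size_of;

/// Marker for plain-old-data types for which any bit pattern is a valid
/// value (no pointers, no invariants). Implementors promise this, which
/// is why the trait is unsafe to implement.
pub unsafe trait ByteValued: Copy {
    /// Copy a value out of an untrusted byte buffer. We copy rather than
    /// borrow, because the guest may mutate the memory concurrently.
    fn from_slice(data: &[u8]) -> Option<Self> {
        if data.len() < size_of::<Self>() {
            return None;
        }
        Some(unsafe { std::ptr::read_unaligned(data.as_ptr() as *const Self) })
    }
}

/// Example: a device ring entry read straight out of guest memory.
#[repr(C)]
#[derive(Clone, Copy, Debug)]
pub struct RingEntry {
    pub addr: u64,
    pub len: u32,
    pub flags: u32,
}
unsafe impl ByteValued for RingEntry {}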
On 12/18/18 2:02 PM, Dylan Reid wrote:
On Mon, Dec 17, 2018 at 3:31 PM Liguori, Anthony <aliguori@amazon.com> wrote:
1) KVM bindgen bindings. Right now, there is an issue with bindgen misgenerating code when it encounters the empty trailing arrays used in KVM. Both crosvm and Firecracker get around this by modifying the headers by hand and then committing the results of bindgen. Dylan also mentioned being unhappy about bindgen performance, but that seems like an eventual optimization to me.
My concerns are only around build time. That's certainly something that can be optimized later and isn't specific to this use of bindgen.
It would be great to improve the current, manual bindgen process. It will make using new KVM features easier.
My experience with bindgen specifically for KVM was that the result was a horrible mess, because linux/kvm.h and ioctl.h make such extensive use of #defines embedded within #defines, which bindgen translates into very poor Rust macros. (Rust macros are great, far more powerful and expressive than C #defines, but the Rust macros that bindgen generates automatically from C #defines are poor.)

I experimented with simply defining the consts[1] and structs[2] in Rust, instead of using bindgen for translation, and found that a reasonable, modern interface to KVM actually only needs a small subset of ioctl.h and linux/kvm.h, and that Rust definitions of all the consts and structs are cleaner, more maintainable, and easier to review for security than the pile of garbage generated by bindgen. Also, the KVM ioctl interface is in more of a stable maintenance phase of development, rather than rapid evolution, so it really isn't much effort to maintain the Rust consts and structs in parallel with the C header files.

In general, I think bindgen is great, but it's not a good fit for KVM specifically. There is a chance that bindgen could be modified to produce saner Rust consts from C #defines, but at the moment a human translator can do far better work than bindgen, and it's faster and more reliable to just do the hand-translation than it is to attempt to modify bindgen to do extensive compile-time optimizations on macros.

I thought about finishing up the Rust consts and structs and offering to contribute them to the KVM project. I could still do that if it would be useful to anyone. My own work has moved beyond a focus on Rust hypervisors, but I'm actively interested in seeing them move forward, so I'm happy to help where I can.

Allison

[1] https://gitlab.com/preon/libkvm/blob/master/src/linux/kvm_consts.rs (not complete, just a proof-of-concept)
[2] https://gitlab.com/preon/libkvm/blob/master/src/linux/kvm_structs.rs (not complete, just a proof-of-concept)
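As a rough illustration of the hand-translation approach (a sketch for discussion, not Allison's actual code): the ioctl numbers follow the kernel's _IO() encoding, and the two values below are stable kernel ABI.

// KVM's ioctl type byte, from linux/kvm.h (#define KVMIO 0xAE).
const KVMIO: u64 = 0xAE;

// _IO(type, nr): the direction and size fields are zero for ioctls
// that take no argument.
const fn kvm_io(nr: u64) -> u64 {
    (KVMIO << 8) | nr
}

const KVM_GET_API_VERSION: u64 = kvm_io(0x00); // 0xAE00
const KVM_CREATE_VM: u64 = kvm_io(0x01);       // 0xAE01

fn main() {
    use std::os::unix::io::AsRawFd;
    let kvm = std::fs::OpenOptions::new()
        .read(true)
        .write(true)
        .open("/dev/kvm")
        .expect("open /dev/kvm");
    let version = unsafe {
        libc::ioctl(kvm.as_raw_fd(), KVM_GET_API_VERSION as libc::c_ulong)
    };
    assert_eq!(version, 12); // the stable KVM API version has long been 12
}

The hand-written version is a couple of readable lines per ioctl, versus the macro expansion soup bindgen emits for the same definitions.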
Allison Randal <allison@lohutok.net> writes:
My experience with bindgen specifically for KVM was that the result was a horrible mess, because linux/kvm.h and ioctl.h make such extensive use of #defines embedded within #defines, which bindgen translates into very poor Rust macros. (Rust macros are great, far more powerful and expressive than C #defines, but the Rust macros that bindgen generates automatically from C #defines are poor.)
I experimented with simply defining the consts[1] and structs[2] in Rust, instead of using bindgen for translation, and found that a reasonable, modern interface to KVM actually only needs a small subset of ioctl.h and linux/kvm.h, and that Rust definitions of all the consts and structs are cleaner, more maintainable, and easier to review for security than the pile of garbage generated by bindgen. Also, the KVM ioctl interface is in more of a stable maintenance phase of development, rather than rapid evolution, so it really isn't much effort to maintain the Rust consts and structs in parallel to the C header files.
In general, I think bindgen is great, but it's not a good fit for KVM specifically. There is a chance that bindgen could be modified to produce more sane Rust consts from C #defines, but at the moment, a human translator can do far better work than bindgen, and it's faster and more reliable to just do the hand-translation than it is to attempt to modify bindgen to do extensive compile-time optimizations on macros.
I thought about finishing up the Rust consts and structs, and offering to contribute them to the KVM project. I could still do that if it would be useful to anyone. My own work has moved beyond a focus on Rust hypervisors, but I'm actively interested in seeing them move forward, so I'm happy to help where I can.
I did look at the work you've done, and it's great! I do think it's important to cover everything, though, and especially to get architectures like aarch64. I think that while the bindgen results are not beautiful, having a bindgen layer gives us something we can build nicer interfaces on top of.

Regards,

Anthony Liguori
On 12/19/18 5:19 PM, Anthony Liguori wrote:
I did look at the work you've done, and it's great! I do think it's important to cover everything, though, and especially to get architectures like aarch64.
Thanks, it really was just a proof-of-concept. Rust is much better than C at selective compilation for different architectures, so that's another area where well-written Rust would be a huge improvement over poorly-translated C-to-Rust.
I think while the bindgen results are not beautiful, having a bindgen layer gives us something we can build nicer interfaces on top of.
My primary concern was tight control over security at the lower layers, hence the emphasis on keeping that interface to KVM extremely clean and small. It's totally understandable that other Rust hypervisor implementations may have other priorities.

I'm curious whether rewriting (a subset of) the KVM kernel module itself in Rust would end up being a significant advantage for memory safety and security at an even lower layer. (Rust's ability to drop in as a compatible replacement for C components is a great advantage for gradually improving the memory safety of critical pieces of legacy C code.) I haven't decided whether to do a proof-of-concept on that yet, though, as my current focus is security vulnerabilities at the hardware level that punch through VM isolation (L1TF, etc.).

Allison
Allison Randal <allison@lohutok.net> writes:
On 12/19/18 5:19 PM, Anthony Liguori wrote:
I did look at the work you've done, and it's great! I do think it's important to cover everything, though, and especially to get architectures like aarch64.
Thanks, it really was just a proof-of-concept. Rust is much better than C at selective compilation for different architectures, so that's another area where well-written Rust would be a huge improvement over poorly-translated C-to-Rust.
I think while the bindgen results are not beautiful, having a bindgen layer gives us something we can build nicer interfaces on top of.
My primary concern was tight control over security at the lower layers, hence the emphasis on keeping that interface to KVM extremely clean and small. Totally understandable that other Rust hypervisor implementations may have other priorities.
I'm curious whether rewriting (a subset of) the KVM kernel module itself in Rust would end up being a significant advantage for memory safety and security at an even lower layer. (Rust's ability to drop in as a compatible replacement for C components is a great advantage for gradually improving the memory safety of critical pieces of legacy C code.) I haven't decided whether to do a proof-of-concept on that yet, though, as my current focus is security vulnerabilities at the hardware level that punch through VM isolation (L1TF, etc.).
Yes! But in addition to rewriting that subset, moving it to userspace gives you yet another layer of protection. The two biggest sources of CVEs in KVM have been instruction emulation and device emulation. Moving the x86_emulate code to userspace and rewriting it in Rust would eliminate one of the larger attack surfaces in KVM; likewise, moving IOAPIC and PIT emulation to userspace would help a lot there too.

On modern processors, the LAPIC is handled almost entirely in hardware, so the remaining complexity in KVM is really around EPT handling and hardware interaction. I don't think either can reasonably be moved.

Regards,

Anthony Liguori
On 20/12/18 16:05, Anthony Liguori wrote:
The two biggest sources of CVEs in KVM have been instruction emulation and device emulation. Moving the x86_emulate code to userspace and rewriting it in Rust would eliminate one of the larger attack surfaces in KVM; likewise, moving IOAPIC and PIT emulation to userspace would help a lot there too.
On modern processors, LAPIC is handled almost entirely in hardware so the remaining complexity in KVM is really around EPT handling and hardware interaction. I don't think either can reasonably be moved.
Note that userspace PIT/PIC/IOAPIC emulation is already supported by KVM (Linux 4.4 or newer, I think; QEMU will make it the default for the q35 machine type in the next release; for now you need -machine kernel_irqchip=split).

Paolo
On Thu, Dec 20, 2018, 7:34 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
On 20/12/18 16:05, Anthony Liguori wrote:
The two biggest sources of CVEs in KVM have been instruction emulation and device emulation. Moving the x86_emulate code to userspace and rewriting it in Rust would eliminate one of the larger attack surfaces in KVM; likewise, moving IOAPIC and PIT emulation to userspace would help a lot there too.
On modern processors, LAPIC is handled almost entirely in hardware so the remaining complexity in KVM is really around EPT handling and hardware interaction. I don't think either can reasonably be moved.
Note that userspace PIT/PIC/IOAPIC emulation is already supported by KVM (Linux 4.4 or newer I think; QEMU will make it the default for the q35 machine type in the next release, for now you need -machine kernel_irqchip=split).
+ Miriam, who is working on the PIT and APIC in crosvm
Paolo
Reviving this thread!

As Miriam mentioned, there's some ongoing work to support IOAPIC, PIC, and PIT emulation being performed in userspace (https://bugs.chromium.org/p/chromium/issues/detail?id=908689), which is equivalent to irqchip=split. It is really great to see this happening, but I would like to go even one step further and be able to support the equivalent of irqchip=off.

This use case means that KVM is not performing any emulation and that everything is left to the userspace process. This would allow running a legacy-free hypervisor, where IRQs would always be supported through MSI/MSI-X, hence using only the LAPIC. For this, we would need full LAPIC emulation to be designed in userspace, with no need for any IOAPIC/PIC/PIT.

The current blocker is the fact that MSI is tightly coupled with PCI, and there is no current upstream way to retrieve the MSI vectors associated with a device. But if we can find some mechanism to communicate the MSI vectors chosen by the guest kernel for a device down to the hypervisor, we could definitely get rid of the IOAPIC, hence reaching the end goal I'm talking about here. Just note that ACPI would be a good way for the guest to communicate that information to the VMM.

What do you all think about this? Is there anything I missed that makes this proposal not feasible?

Thanks,
Sebastien
On Mon, Feb 25, 2019 at 10:53 AM Boeuf, Sebastien <sebastien.boeuf@intel.com> wrote:
Reviving this thread!
As Miriam mentioned, there's some ongoing work to support IOAPIC, PIC, and PIT emulation being performed in userspace (https://bugs.chromium.org/p/chromium/issues/detail?id=908689), which is equivalent to irqchip=split. It is really great to see this happening, but I would like to go even one step further and be able to support the equivalent of irqchip=off.
Thanks for your interest! It's great to know that the work I'm doing will be more broadly useful! :-)
This use case means that KVM is not performing any emulation and that everything is left to the userspace process. This would allow running a legacy-free hypervisor, where IRQs would always be supported through MSI/MSI-X, hence using only the LAPIC. For this, we would need full LAPIC emulation to be designed in userspace, with no need for any IOAPIC/PIC/PIT.
I believe that the Google Cloud folks tried using a userspace LAPIC, but when they benchmarked it, the performance was unacceptably bad (since LAPICs are used much more frequently than IOAPICs/PICs/PITs).

I can't watch it right now to confirm, but I believe the "Performant Security Hardening of KVM" talk from KVM Forum 2016 (http://www.linux-kvm.org/page/KVM_Forum_2016) goes into some more detail on this.
Miriam
On Mon, 2019-02-25 at 15:32 -0800, Miriam Zimmerman wrote:
On Mon, Feb 25, 2019 at 10:53 AM Boeuf, Sebastien <sebastien.boeuf@intel.com> wrote:
Reviving this thread!
As Miriam mentioned, there's some ongoing work to support IOAPIC, PIC, and PIT emulation being performed in userspace (https://bugs.chromium.org/p/chromium/issues/detail?id=908689), which is equivalent to irqchip=split. It is really great to see this happening, but I would like to go even one step further and be able to support the equivalent of irqchip=off.
Thanks for your interest! It's great to know that the work I'm doing will be more broadly useful! :-)
This use case means that KVM is not performing any emulation and that everything is left to the userspace process. This would allow running a legacy-free hypervisor, where IRQs would always be supported through MSI/MSI-X, hence using only the LAPIC. For this, we would need full LAPIC emulation to be designed in userspace, with no need for any IOAPIC/PIC/PIT.
I believe that the Google Cloud folks tried using a userspace LAPIC, but when they benchmarked it, the performance was unacceptably bad (since LAPICs are used much more frequently than IOAPICs/PICs/PITs).
I can't watch it right now to confirm, but I believe the "Performant Security Hardening of KVM" talk from KVM Forum 2016 (http://www.linux-kvm.org/page/KVM_Forum_2016) goes into some more detail on this.
Yes, Steve's presentation summarizes it well.

Gerry, I think you mentioned some work you did using NetBSD and uKVM regarding LAPIC emulation being done in userspace, right? How was the performance? Do you have some pointers to the code?

Thanks,
Sebastien
* Boeuf, Sebastien (sebastien.boeuf@intel.com) wrote:
Reviving this thread!
As Miriam mentioned, there's some ongoing work to support IOAPIC, PIC, and PIT emulation being performed in userspace (https://bugs.chromium.org/p/chromium/issues/detail?id=908689), which is equivalent to irqchip=split. It is really great to see this happening, but I would like to go even one step further and be able to support the equivalent of irqchip=off.
This use case means that KVM is not performing any emulation and that everything is left to the userspace process. This would allow running a legacy-free hypervisor, where IRQs would always be supported through MSI/MSI-X, hence using only the LAPIC. For this, we would need full LAPIC emulation to be designed in userspace, with no need for any IOAPIC/PIC/PIT.
The current blocker is the fact that MSI is tightly coupled with PCI, and there is no current upstream way to retrieve the MSI vectors associated with a device. But if we can find some mechanism to communicate the MSI vectors chosen by the guest kernel for a device down to the hypervisor, we could definitely get rid of the IOAPIC, hence reaching the end goal I'm talking about here. Just note that ACPI would be a good way for the guest to communicate that information to the VMM.
Would a new ACPI mechanism really be any easier than some really basic PCI? You don't really need to provide a true PCI hierarchy or anything.

Dave
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On 26/02/19 10:09, Dr. David Alan Gilbert wrote:
But if we can find some mechanism to communicate the MSI vectors chosen by the guest kernel for a device down to the hypervisor, we could definitely get rid of the IOAPIC, hence reaching the end goal I'm talking about here. Just note that ACPI would be a good way for the guest to communicate that information to the VMM.
Would a new ACPI mechanism really be any easier than some really basic PCI? You don't really need to provide a true PCI hierarchy or anything.
Yeah, I don't see the problem with configuring MSI via the PCI configuration space. Being able to use MSI is just one more reason to drop virtio-mmio and switch to virtio-pci. (Without an IOAPIC, PCI devices would not be able to serve INTx interrupts, but that's not a problem for most modern devices.)

Paolo
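To make the configuration-space point concrete: MSI is discovered by walking the standard PCI capability list, which a VMM's PCI emulation has to model anyway. A hedged sketch over a snapshot of a device's config space (offsets are from the PCI spec; the function itself is illustrative, not from an existing VMM):

const PCI_STATUS: usize = 0x06;          // status register (low byte)
const PCI_STATUS_CAP_LIST: u8 = 0x10;    // bit 4: capability list present
const PCI_CAPABILITY_LIST: usize = 0x34; // pointer to the first capability
const PCI_CAP_ID_MSI: u8 = 0x05;         // MSI capability ID (MSI-X is 0x11)

/// Find the MSI capability in a 256-byte PCI configuration space.
fn find_msi_cap(config: &[u8; 256]) -> Option<usize> {
    if config[PCI_STATUS] & PCI_STATUS_CAP_LIST == 0 {
        return None; // device advertises no capabilities at all
    }
    let mut offset = (config[PCI_CAPABILITY_LIST] & !0x3) as usize;
    while offset != 0 && offset + 1 < 256 {
        let (cap_id, next) = (config[offset], config[offset + 1]);
        if cap_id == PCI_CAP_ID_MSI {
            // The message control, address, and data registers follow at
            // offset + 2; the guest programs its chosen vectors there, so
            // the VMM can read them back without any IOAPIC involvement.
            return Some(offset);
        }
        offset = (next & !0x3) as usize;
    }
    None
}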
On Tue, 2019-02-26 at 12:38 +0100, Paolo Bonzini wrote:
On 26/02/19 10:09, Dr. David Alan Gilbert wrote:
But if we can find some mechanism to communicate the MSI vectors chosen by the guest kernel for a device down to the hypervisor, we could definitely get rid of the IOAPIC, hence reaching the end goal I'm talking about here. Just note that ACPI would be a good way for the guest to communicate that information to the VMM.
Would a new ACPI mechanism really be any easier than some really basic PCI? You don't really need to provide a true PCI hierarchy or anything.
Yeah, I don't see the problem with configuring MSI via the PCI configuration space. Being able to use MSI is just one more reason to drop virtio-mmio and switch to virtio-pci.
Yes, you're right: if you go all the way with PCI, you solve a lot of issues. But say you don't have PCI support; then you need a way to notify your guest about things like hotplug, and you do so using a GED or GPE ACPI device. In that case, we still want to get rid of the IOAPIC dependency and make sure those ACPI devices support MSI.
(Without an IOAPIC, PCI devices would not be able to serve INTx interrupts, but that's not a problem for most modern devices.)
Yes, but that's not a problem if you consider using only modern devices.
Dylan Reid <dgreid@chromium.org> writes:
On Mon, Dec 17, 2018 at 3:31 PM Liguori, Anthony <aliguori@amazon.com> wrote:
Hi!
Very excited about this list and eager to begin discussions. I wanted to start with what I see as the low-hanging fruit and what I see as the goal of this effort.
I'd like to build a set of crates, preferably with a common name, that make it really easy to write VMMs in Rust. I think Rust is an ideal language for building VMMs, and if the right primitives are there, a lot of interesting VMMs will emerge.
Some ideas I have for what would make good core crates:
1) KVM bindgen bindings. Right now, there is an issue with bindgen misgenerating code when it encounters the empty trailing arrays used in KVM. Both crosvm and Firecracker get around this by modifying the headers by hand and then committing the results of bindgen. Dylan also mentioned being unhappy about bindgen performance, but that seems like an eventual optimization to me.
My concerns are only around build time. That's certainly something that can be optimized later and isn't specific to this use of bindgen.
It would be great to improve the current, manual bindgen process. It will make using new KVM features easier.
Now that I've thought a bit about it, I think I prefer to commit bindgen results, because then the version of the crate becomes meaningful. New KVM features are tied to specific crate versions, and you can set dependencies accordingly. The build.rs can definitely be improved, though, to make it easy to update the committed results.
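One hedged sketch of that layout (the file names are illustrative): the generated bindings are committed per architecture and selected at compile time, so a given crate version pins a known KVM API level, and aarch64 support is just another checked-in module.

// lib.rs of a hypothetical kvm-bindings crate. A helper script or
// build.rs target re-runs bindgen against newer kernel headers to
// refresh these files; crate consumers never run bindgen themselves.
#[cfg(target_arch = "x86_64")]
#[path = "bindings_x86_64.rs"]
mod bindings;

#[cfg(target_arch = "aarch64")]
#[path = "bindings_aarch64.rs"]
mod bindings;

pub use bindings::*;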
As a side note, I think having OS X Hypervisor.framework bindings, and bindings for whatever the new Windows thing is, would be pretty cool. Maybe there's room for a wrapper around the three, but it's not something that interests me in the near term.
2) The crosvm data_model crate. This one is super critical but easy to misunderstand as it allows for safe access to volatile memory. Somewhat related is the mmap() bits from sys_util. Not sure how the crosvm folks feel but I think there is some refactoring here that could be useful to build a memory crate.
sys_util can be broken in to smaller, easier to consume parts.
The data_model crate has proven very useful; there is also a similar crate in the Fuchsia tree: https://fuchsia.googlesource.com/garnet/+/master/public/rust/zerocopy/src/li... It might be worth considering too.
Oh, neat. I will take a look.

Regards,

Anthony Liguori
On 18/12/18 00:30, Liguori, Anthony wrote:
As a side note, I think having OS X Hypervisor.framework bindings and whatever the new Windows thing is would be pretty cool.
Yes, indeed. Hypervisor.framework, however, is much more complex than KVM or WHP, because you deal manually with VMCSes and have to do instruction emulation in userspace. QEMU takes a stab at it, but it's not as stable as KVM.

*However*, Google does have patches for KVM to do instruction emulation in userspace, and I'd like to apply them upstream too now that KVM has an API test framework (and thus we can know they won't bitrot). (Steve, you are in Cc because hint, hint :)). Once that is in place, I guess a minimal x86 emulator written in Rust, porting the emulator code that QEMU has for Hypervisor.framework, would be a fun GSoC project for a very good student.
2) The crosvm data_model crate. This one is super critical but easy to misunderstand as it allows for safe access to volatile memory. Somewhat related is the mmap() bits from sys_util. Not sure how the crosvm folks feel but I think there is some refactoring here that could be useful to build a memory crate.
3) Some traits for device model implementations. It's easy to bikeshed here, so I reckon it's best to start with a concrete device model like a UART, work through what is required for interfaces, and then iterate from there.
4) Common device models with only a single implementation (e.g., a 16550A UART). Not sure about virtio, maybe.
virtio would be interesting. One initial target could be a demo vhost-user client; it has to set up a memory map, parse vrings, handle endianness, etc. It would be an interesting benchmark for a DMA API.

The control plane (your item 3) is, by comparison, a bit less interesting.

Paolo
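For a sense of what "parse vrings, handle endianness" involves, a hedged sketch of decoding one descriptor from a descriptor table mapped in from guest memory; the 16-byte layout is struct vring_desc from the virtio spec, and virtio 1.0 defines the fields as little-endian regardless of guest byte order:

const VIRTQ_DESC_F_NEXT: u16 = 0x1; // buffer continues in `next`

#[derive(Debug)]
struct Desc {
    addr: u64,  // guest-physical address of the buffer
    len: u32,   // buffer length in bytes
    flags: u16, // NEXT / WRITE / INDIRECT bits
    next: u16,  // index of the chained descriptor, if F_NEXT is set
}

/// Decode descriptor `index` from a byte view of the descriptor table.
fn read_desc(table: &[u8], index: u16) -> Desc {
    let o = index as usize * 16; // each vring_desc is 16 bytes
    // Explicit little-endian decode, independent of host byte order.
    let le = |b: &[u8]| b.iter().rev().fold(0u64, |acc, &x| (acc << 8) | u64::from(x));
    Desc {
        addr: le(&table[o..o + 8]),
        len: le(&table[o + 8..o + 12]) as u32,
        flags: le(&table[o + 12..o + 14]) as u16,
        next: le(&table[o + 14..o + 16]) as u16,
    }
}

A chain is then walked by following `next` while `flags & VIRTQ_DESC_F_NEXT != 0`, with every `addr`/`len` pair bounds-checked against the memory map the vhost-user client received from its master.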
Hint received :) Getting userspace instruction emulation to work with nested has been a bit of a fight, and we've been waiting to push stuff upstream until we have it everywhere.
On Wed, Dec 19, 2018 at 2:59 PM Steve Rutherford <srutherford@google.com> wrote:
Hint received :) Getting userspace instruction emulation to work with nested has been a bit of a fight, and we've been waiting to push stuff upstream until we have it everywhere.
The crosvm team is currently investigating moving the PIT and APIC emulation to user space. If that works, instruction emulation will be next on the list, including helping to get the kernel side landed upstream.
On Wed, Dec 19, 2018 at 03:02:15PM -0800, Dylan Reid wrote:
On Wed, Dec 19, 2018 at 2:59 PM Steve Rutherford <srutherford@google.com> wrote:
Hint received :) Getting userspace instruction emulation to work with nested has been a bit of a fight, and we've been waiting to push stuff upstream until we have it everywhere.
The crosvm team is currently investigating moving the PIT and APIC emulation to user space. If that works, instruction emulation will be next on the list, including helping to get the kernel side landed upstream.
That sounds great; the less complexity and attack surface in the kernel, the better. (That also makes it easier to turn off that emulation entirely, for specialty VMs.)
On Wed, 2018-12-19 at 15:02 -0800, Dylan Reid wrote:
On Wed, Dec 19, 2018 at 2:59 PM Steve Rutherford <srutherford@google.com> wrote:
Hint received :) Getting userspace instruction emulation to work with nested has been a bit of a fight, and we've been waiting to push stuff upstream until we have it everywhere.
The crosvm team is currently investigating moving the PIT and APIC emulation to user space. If that works, instruction emulation will be next on the list, including helping to get the kernel side landed upstream.
That's nice! Quick question: why do you want to emulate the PIT? If you emulate the APIC feature X86_FEATURE_TSC_DEADLINE_TIMER, the legacy timer emulation should not be required, right? Oh, I guess that's because some other architectures might need the PIT.

Are you planning to make this modular so that we could choose to pick only the APIC emulation?
On Dec 20, 2018, at 7:17 AM, Boeuf, Sebastien <sebastien.boeuf@intel.com> wrote:
On Wed, 2018-12-19 at 15:02 -0800, Dylan Reid wrote:
On Wed, Dec 19, 2018 at 2:59 PM Steve Rutherford <srutherford@google.com> wrote:
Hint received :) Getting userspace instruction emulation to work with nested has been a bit of a fight, and we've been waiting to push stuff upstream until we have it everywhere.
The crosvm team is currently investigating moving the PIT and APIC emulation to user space. If that works, instruction emulation will be next on the list, including helping to get the kernel side landed upstream.
That's nice! Quick question: why do you want to emulate the PIT? If you emulate the APIC feature X86_FEATURE_TSC_DEADLINE_TIMER, the legacy timer emulation should not be required, right? Oh, I guess that's because some other architectures might need the PIT.
Are you planning to make this modular so that we could choose to pick only the APIC emulation?
Yeah, the APIC deadline timer could be used here, so we could remove the PIT, even the PIC and Local APIC. We have done a quick PoC using NetBSD and uKVM. I think it should work with Linux too.
On Dec 20, 2018, at 9:40 AM, Jiang Liu <liuj97@gmail.com> wrote:
Yeah, APIC deadline timer could be used here so we could remove PIT, even PIC and Local APIC. We have done a quick PoC by using NetBSD and uKVM. I think it should work with Linux too.
Sorry, “local APIC” should be “IO APIC”, we still use the in-kernel Local APIC for MSI and IPI.
On Thu, 2018-12-20 at 09:55 +0800, Jiang Liu wrote:
Yeah, APIC deadline timer could be used here so we could remove PIT, even PIC and Local APIC. We have done a quick PoC by using NetBSD and uKVM. I think it should work with Linux too.
Sorry, “local APIC” should be “IO APIC”, we still use the in-kernel Local APIC for MSI and IPI.
Yes, if you don't expect any pin-based interrupts, then removing the IOAPIC and the PIC should also work just fine, and the APIC (local APIC) itself should be enough.
Talking about Linux, if you take a look here: https://github.com/torvalds/linux/blob/master/arch/x86/kernel/apic/apic.c#L8..., you can see that having the deadline timer skips the whole calibration of the APIC timer, which prevents falling back onto the legacy timers (https://github.com/torvalds/linux/blob/master/arch/x86/kernel/apic/apic.c#L8...), which are specifically expected to be a PIT or an HPET.
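For completeness, what a guest does instead once the legacy timers are gone is roughly the following. This is a hypothetical ring-0 sketch: the register offset, mode bits and MSR number are from the Intel SDM, the vector choice is arbitrary, and fence subtleties are ignored.

use core::arch::x86_64::_rdtsc;

const APIC_BASE: usize = 0xFEE0_0000;            // default xAPIC MMIO base
const LVT_TIMER_OFFSET: usize = 0x320;           // LVT Timer register
const TIMER_MODE_TSC_DEADLINE: u32 = 0b10 << 17; // LVT timer mode bits 18:17
const TIMER_VECTOR: u32 = 0xEC;                  // arbitrary free vector
const MSR_IA32_TSC_DEADLINE: u32 = 0x6E0;

unsafe fn wrmsr(msr: u32, value: u64) {
    let lo = value as u32;
    let hi = (value >> 32) as u32;
    core::arch::asm!("wrmsr", in("ecx") msr, in("eax") lo, in("edx") hi);
}

/// Arm a one-shot timer interrupt `delta` TSC ticks from now.
pub unsafe fn arm_deadline_timer(delta: u64) {
    // Put the LVT timer into TSC-deadline mode, then write an absolute
    // TSC value to IA32_TSC_DEADLINE; the interrupt fires when TSC >= it.
    let lvt = (APIC_BASE + LVT_TIMER_OFFSET) as *mut u32;
    lvt.write_volatile(TIMER_MODE_TSC_DEADLINE | TIMER_VECTOR);
    wrmsr(MSR_IA32_TSC_DEADLINE, _rdtsc() + delta);
}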
Paolo Bonzini <pbonzini@redhat.com> writes:
On 18/12/18 00:30, Liguori, Anthony wrote:
As a side note, I think having OS X hypervisor framework bindings and whatever the new Windows thing is would be pretty cool.
Yes, indeed. Hypervisor.framework however is much more complex than KVM or WHP because you deal manually with VMCSes and have to do instruction emulation in userspace. QEMU takes a stab at it, but it's not as stable as KVM.
*However* Google does have patches for KVM to do instruction emulation in userspace, and I'd like to apply them upstream too now that KVM has an API test framework (and thus we can know they won't bitrot). (Steve, you are in Cc because hint, hint :)). Once that is in place, I guess a minimal x86 emulator written in Rust, porting the emulator code that QEMU has for Hypervisor.framework, would be a fun GSoC project for a very good student.
That does sound like an awesome project.
2) The crosvm data_model crate. This one is super critical but easy to misunderstand as it allows for safe access to volatile memory. Somewhat related is the mmap() bits from sys_util. Not sure how the crosvm folks feel but I think there is some refactoring here that could be useful to build a memory crate.
3) Some traits for device model implementations. It's easy to really bike shed here so I reckon it's best to start with a concrete device model like a UART, work through what is required for interfaces, and then iterate from there.
4) Common device models with only a single implementation (i.e. 16650A). Not sure about virtio, maybe.
virtio would be interesting. One initial target could be a demo vhost-user client, it has to set up a memory map, parse vrings, handle endianness, etc. It would be an interesting benchmark for a DMA API.
Maybe. I'll poke around.
One challenge with DMA is that you have to decide whether you are only supporting synchronous DMA or you are also trying to support asynchronous DMA.
My general preference is to avoid complex infrastructure and keep to simple traits. A DMA trait can be pretty darn simple with all of the gooey logic being up to the implementor. Not sure you would get meaningful benchmarks there.
Regards,
Anthony Liguori
The control plane (your item 3) by comparison is a bit less interesting.
Paolo
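To make the "pretty darn simple" trait above concrete, here is a minimal synchronous-only sketch. All names are illustrative; bounce buffers, IOMMU handling and the rest of the gooey parts stay with the implementor:

pub type GuestAddress = u64;

#[derive(Debug)]
pub enum DmaError { OutOfRange }

/// Synchronous-only DMA: blocking copies in and out of guest memory.
pub trait DmaMemory {
    fn read(&self, addr: GuestAddress, buf: &mut [u8]) -> Result<(), DmaError>;
    fn write(&mut self, addr: GuestAddress, buf: &[u8]) -> Result<(), DmaError>;
}

/// Toy implementor backed by a Vec, standing in for an mmap()ed region.
pub struct VecMemory(pub Vec<u8>);

impl DmaMemory for VecMemory {
    fn read(&self, addr: GuestAddress, buf: &mut [u8]) -> Result<(), DmaError> {
        let start = addr as usize;
        let end = start.checked_add(buf.len()).ok_or(DmaError::OutOfRange)?;
        let src = self.0.get(start..end).ok_or(DmaError::OutOfRange)?;
        buf.copy_from_slice(src);
        Ok(())
    }

    fn write(&mut self, addr: GuestAddress, buf: &[u8]) -> Result<(), DmaError> {
        let start = addr as usize;
        let end = start.checked_add(buf.len()).ok_or(DmaError::OutOfRange)?;
        let dst = self.0.get_mut(start..end).ok_or(DmaError::OutOfRange)?;
        dst.copy_from_slice(buf);
        Ok(())
    }
}

Supporting asynchronous DMA would change the shape of the trait entirely (completions, callbacks or futures), which is the design fork being pointed at here.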
On 19/12/18 23:23, Anthony Liguori wrote:
virtio would be interesting. One initial target could be a demo vhost-user client, it has to set up a memory map, parse vrings, handle endianness, etc. It would be an interesting benchmark for a DMA API.
Maybe. I'll poke around.
One challenge with DMA is that you have to decide whether you are only supporting synchronous DMA or you are also trying to support asynchronous DMA.
My general preference is to avoid complex infrastructure and keep to simple traits. A DMA trait can be pretty darn simple with all of the gooey logic being up to the implementor. Not sure you would get meaningful benchmarks there.
Sorry, I meant benchmark in the (non-computer-science) dictionary meaning, i.e. a basis to evaluate the API in a simplified but still realistic scenario. It would be more or less what you wrote in your original message: a few full-fledged basic crates (mostly data_model, or alternatively the Fuchsia one), a toyish implementation of standardized memory map and DMA traits, and a sample application (vhost-user-blk).
Paolo
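For anyone who hasn't read data_model: the reason it is "super critical but easy to misunderstand" is that guest memory is concurrently mutated by a running VM, so the VMM must use volatile accesses and may only read types that are valid for any byte pattern. A simplified sketch of the idea, with ByteValued standing in for crosvm's DataInit trait:

use std::marker::PhantomData;
use std::ptr;

/// Marker for plain-old-data types valid for any byte pattern.
/// (crosvm's data_model expresses this as the DataInit trait.)
pub unsafe trait ByteValued: Copy {}
unsafe impl ByteValued for u8 {}
unsafe impl ByteValued for u32 {}
unsafe impl ByteValued for u64 {}

pub struct VolatileRef<'a, T: ByteValued> {
    addr: *mut T,
    _marker: PhantomData<&'a mut T>,
}

impl<'a, T: ByteValued> VolatileRef<'a, T> {
    /// Safety: `addr` must be valid, aligned, and outlive `'a`.
    pub unsafe fn new(addr: *mut T) -> Self {
        VolatileRef { addr, _marker: PhantomData }
    }

    /// Volatile load: the compiler may not cache or elide this read,
    /// which matters because the guest can write the memory at any time.
    pub fn load(&self) -> T {
        unsafe { ptr::read_volatile(self.addr) }
    }

    pub fn store(&self, v: T) {
        unsafe { ptr::write_volatile(self.addr, v) }
    }
}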
participants (13)
- Allison Randal
- Anthony Liguori
- Boeuf, Sebastien
- Dr. David Alan Gilbert
- Dylan Reid
- Dylan Reid
- Florescu, Andreea
- Jiang Liu
- Josh Triplett
- Liguori, Anthony
- Miriam Zimmerman
- Paolo Bonzini
- Steve Rutherford