Hi,
The following is a breakdown (as best I can figure) of the work needed
to demonstrate VirtIO backends in Rust on the Xen hypervisor. It
requires work across a number of projects, most notably core Rust and VirtIO
enabling in the Xen project (building on the work EPAM has already done)
and the start of enabling the rust-vmm crates to work with Xen.
The first demo is a fairly simple toy to exercise the direct hypercall
approach for a unikernel backend. On its own it isn't super impressive
but hopefully serves as a proof of concept for the idea of having
backends running in a single exception level where latency will be
important.
The second is a much more ambitious bridge between Xen and vhost-user to
allow for re-use of the existing vhost-user backends with the bridge
acting as a proxy for what would usually be a full VMM in the type-2
hypervisor case. With that in mind the rust-vmm work is only aimed at
doing the device emulation and doesn't address the larger question of
how type-1 hypervisors can be integrated into the rust-vmm hypervisor
model.
A quick note about the estimates: they are exceedingly rough guesses
plucked out of the air, and I would be grateful for feedback from the
appropriate domain experts on whether I'm being overly optimistic or
pessimistic.
The links to the Stratos JIRA should be at least read-accessible to all,
although they contain the same information as the attached document
(albeit with nicer PNG renderings of my ASCII art ;-). There is a
Stratos sync-up call next Thursday:
https://calendar.google.com/event?action=TEMPLATE&tmeid=MWpidm5lbzM5NjlydnA…
and I'm sure there will also be discussion in the various projects
(hence the wide CC list). The Stratos calls are open to anyone who wants
to attend and we welcome feedback from all who are interested.
So on with the work breakdown:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
STRATOS PLANNING FOR 21 TO 22
Alex Bennée
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Table of Contents
─────────────────
1. Xen Rust Bindings ([STR-51])
.. 1. Upstream an "official" rust crate for Xen ([STR-52])
.. 2. Basic Hypervisor Interactions hypercalls ([STR-53])
.. 3. [#10] Access to XenStore service ([STR-54])
.. 4. VirtIO support hypercalls ([STR-55])
2. Xen Hypervisor Support for Stratos ([STR-56])
.. 1. Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
.. 2. Tweaks to tooling to launch VirtIO guests
3. rust-vmm support for Xen VirtIO ([STR-59])
.. 1. Make vm-memory Xen aware ([STR-60])
.. 2. Xen IO notification and IRQ injections ([STR-61])
4. Stratos Demos
.. 1. Rust based stubdomain monitor ([STR-62])
.. 2. Xen aware vhost-user master ([STR-63])
1 Xen Rust Bindings ([STR-51])
══════════════════════════════
There exists a [placeholder repository] with the start of a set of
x86_64 bindings for Xen and a very basic hello world unikernel
example. This forms the basis of the initial Xen Rust work and will be
available as a [xen-sys crate] via cargo.
[STR-51] <https://linaro.atlassian.net/browse/STR-51>
[placeholder repository] <https://gitlab.com/cardoe/oxerun.git>
[xen-sys crate] <https://crates.io/crates/xen-sys>
1.1 Upstream an "official" rust crate for Xen ([STR-52])
────────────────────────────────────────────────────────
To start with we will want an upstream location for future work to be
based upon. The intention is that the crate is independent of the
version of Xen it runs on (above whatever baseline version is chosen).
This will entail:
• ☐ agreeing with upstream the name/location for the source
• ☐ documenting the rules for the "stable" hypercall ABI
• ☐ establishing an internal interface to select between ioctl-mediated
and direct hypercalls
• ☐ ensuring the crate is multi-arch and has feature parity for arm64
As such we expect the implementation to be standalone, i.e. not
wrapping the existing Xen libraries for mediation. There should be a
close (1-to-1) mapping between the interfaces in the crate and the
eventual hypercall made to the hypervisor.
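As a strawman for the internal mediation interface, something along
these lines (all names hypothetical, not the final xen-sys API) would
let the same call sites build against either the ioctl path on dom0 or
a direct hypercall in a unikernel:

#+begin_src rust
// Hypothetical sketch only: names and signatures are illustrative and
// not the final xen-sys API.

#[derive(Debug)]
pub struct Error(pub i64);

/// How a hypercall actually reaches the hypervisor.
pub trait HypercallBackend {
    /// Issue hypercall `op` with the given argument block, returning
    /// the raw return value from Xen.
    fn hypercall(&self, op: u64, args: &mut [u64]) -> Result<u64, Error>;
}

/// Linux userspace: forward the call via the privcmd ioctl interface.
pub struct IoctlBackend {
    /// Open handle to /dev/xen/privcmd.
    pub fd: std::os::unix::io::RawFd,
}

/// Unikernel / single exception level: issue the hypercall directly.
pub struct DirectBackend;

impl HypercallBackend for DirectBackend {
    fn hypercall(&self, _op: u64, _args: &mut [u64]) -> Result<u64, Error> {
        // On arm64 this would place the hypercall number in x16 and
        // execute `hvc #0xEA1`; elided in this sketch.
        unimplemented!()
    }
}
#+end_src

The direct backend is also where the arm64 feature-parity work lands,
since the hypercall instruction sequence differs per architecture.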
Estimate: 4w (elapsed likely longer due to discussion)
[STR-52] <https://linaro.atlassian.net/browse/STR-52>
1.2 Basic Hypervisor Interactions hypercalls ([STR-53])
───────────────────────────────────────────────────────
These are the bare minimum hypercalls implemented as both ioctl and
direct calls. These allow for a very basic binary to:
• ☐ console_io - output IO via the Xen console
• ☐ domctl stub - basic stub for domain control (different API?)
• ☐ sysctl stub - basic stub for system control (different API?)
The idea is that this provides enough of a hypercall interface to query
the list of domains and output their status via the Xen console. There
is an open question about whether the domctl and sysctl hypercalls are
the way to go.
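To make that concrete, a rough sketch of the surface such a binary
would need (names hypothetical, and assuming domctl/sysctl turn out to
be the right vehicle):

#+begin_src rust
// Hypothetical sketch of the minimal surface needed for a "list the
// domains and print their status" binary; none of these names are final.

pub type Error = i64;

pub struct DomainInfo {
    pub domid: u16,
    pub running: bool,
    pub paused: bool,
    pub total_pages: u64,
}

pub trait XenCalls {
    /// HYPERVISOR_console_io(CONSOLEIO_write, ...)
    fn console_write(&self, msg: &str) -> Result<(), Error>;
    /// Thin wrapper over a getdomaininfolist-style sysctl (or whatever
    /// replaces it if domctl/sysctl turn out not to be the way to go).
    fn domain_info_list(&self, first: u16, max: u32) -> Result<Vec<DomainInfo>, Error>;
}
#+end_src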
Estimate: 6w
[STR-53] <https://linaro.atlassian.net/browse/STR-53>
1.3 [#10] Access to XenStore service ([STR-54])
───────────────────────────────────────────────
This is a shared configuration storage space accessed either via Unix
sockets (on dom0) or via the Xenbus. It is used to access configuration
information for the domain.
Is this needed for a backend though? Can everything just be passed
directly on the command line?
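If it does turn out to be needed, the crate-level surface could be kept
very small; a rough sketch (names and path layout purely illustrative):

#+begin_src rust
// Hypothetical sketch: a minimal XenStore client surface, regardless
// of whether the transport is the dom0 Unix socket or the Xenbus ring.
pub type Error = std::io::Error;

pub trait XenStore {
    /// Read a per-domain/per-device configuration key (path layout
    /// illustrative only).
    fn read(&self, path: &str) -> Result<String, Error>;
    /// Block until `path` changes, for configuration that arrives
    /// after the backend has started.
    fn wait_for(&mut self, path: &str) -> Result<(), Error>;
}
#+end_src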
Estimate: 4w
[STR-54] <https://linaro.atlassian.net/browse/STR-54>
1.4 VirtIO support hypercalls ([STR-55])
────────────────────────────────────────
These are the hypercalls that need to be implemented to support a
VirtIO backend. This includes the ability to map another guest's memory
into the current domain's address space, register to receive IOREQ
events when the guest knocks on the doorbell, and inject kicks into the
guest. The hypercalls we need to support would be:
• ☐ dmop - device model ops (*_ioreq_server, setirq, nr_vcpus)
• ☐ foreignmemory - map and unmap guest memory
The DMOP space is larger than what we need for an IOREQ backend, so
I've based it on just what arch/arm/dm.c exports, which is the subset
introduced for EPAM's virtio work.
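As a rough sketch of the surface an IOREQ backend would need from these
two hypercall families (all names and types hypothetical):

#+begin_src rust
// Hypothetical sketch of the dmop/foreignmemory surface an IOREQ
// backend needs; names and types are illustrative only.
pub type DomId = u16;
pub type Error = std::io::Error;

pub trait DeviceModelOps {
    /// XEN_DMOP_create_ioreq_server and friends.
    fn create_ioreq_server(&self, dom: DomId) -> Result<u16, Error>;
    fn map_io_range(&self, dom: DomId, server: u16, start: u64, end: u64) -> Result<(), Error>;
    fn set_ioreq_server_state(&self, dom: DomId, server: u16, enabled: bool) -> Result<(), Error>;
    /// Inject the "used buffer" notification back into the guest.
    fn set_irq_level(&self, dom: DomId, irq: u32, level: bool) -> Result<(), Error>;
}

pub trait ForeignMemory {
    /// Map `count` pages of `dom`'s memory starting at `gfn` into our
    /// address space; the mapping is torn down when dropped.
    fn map(&self, dom: DomId, gfn: u64, count: usize) -> Result<Mapping, Error>;
}

/// Placeholder for a mapped region (base pointer + length; Drop unmaps).
pub struct Mapping {}
#+end_src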
Estimate: 12w
[STR-55] <https://linaro.atlassian.net/browse/STR-55>
2 Xen Hypervisor Support for Stratos ([STR-56])
═══════════════════════════════════════════════
These are the tasks needed to support the various different
deployments of Stratos components on Xen.
[STR-56] <https://linaro.atlassian.net/browse/STR-56>
2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
───────────────────────────────────────────────────────────────
Currently the foreign memory mapping support only works for dom0 due
to reference counting issues. If we are to support backends running in
their own domains this will need to be fixed.
Estimate: 8w
[STR-57] <https://linaro.atlassian.net/browse/STR-57>
2.2 Tweaks to tooling to launch VirtIO guests
─────────────────────────────────────────────
There might not be too much to do here. The EPAM work already did
something similar for their PoC for virtio-block. Essentially we need
to ensure:
• ☐ DT bindings are passed to the guest for virtio-mmio device
discovery
• ☐ Our rust backend can be instantiated before the domU is launched
This currently assumes the tools and the backend are running in dom0.
Estimate: 4w
3 rust-vmm support for Xen VirtIO ([STR-59])
════════════════════════════════════════════
This encompasses the tasks required to get a vhost-user server up and
running while interfacing with the Xen hypervisor. This will require
the xen-sys crate for the actual interface to the hypervisor.
We need to work out how a Xen configuration option would be passed to
the various bits of rust-vmm when something is being built.
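One possibility, sketched below purely as a strawman, is a plain cargo
feature so the Xen binding is selected at build time:

#+begin_src rust
// Sketch only: select the hypervisor binding with a cargo feature
// (e.g. built with `--features xen`); the type names are illustrative.
#[cfg(feature = "xen")]
pub use crate::xen::GuestMemoryXen as PlatformGuestMemory;

#[cfg(not(feature = "xen"))]
pub use vm_memory::GuestMemoryMmap as PlatformGuestMemory;
#+end_src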
[STR-59] <https://linaro.atlassian.net/browse/STR-59>
3.1 Make vm-memory Xen aware ([STR-60])
───────────────────────────────────────
The vm-memory crate is the root crate for abstracting access to the
guest's memory. It currently has multiple configuration builds to
handle the differences between mmap on Windows and Unix. Although mmap
isn't directly exposed, the public interfaces support an mmap-like
interface. We would need to:
• ☐ work out how to expose foreign memory via the vm-memory mechanism
I'm not sure if this just means implementing the GuestMemory trait for
a GuestMemoryXen or if we need to present an mmap-like interface.
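In the first interpretation the backend code would stay generic over
vm-memory's GuestMemory trait and only the construction of the memory
object changes. A minimal sketch, assuming a hypothetical GuestMemoryXen
built from foreign mappings sits behind that trait:

#+begin_src rust
// Sketch only: backend code stays generic over vm-memory's GuestMemory
// trait, so a Xen-backed GuestMemoryXen (hypothetical, built from
// foreignmemory mappings) could be swapped in at construction time.
use vm_memory::{Bytes, GuestAddress, GuestMemory};

/// Read one virtq descriptor's address field out of guest memory,
/// whatever the mapping mechanism behind `M` is.
fn read_desc_addr<M: GuestMemory>(mem: &M, desc_table: u64, idx: u64) -> Result<u64, std::io::Error> {
    // Each virtq descriptor is 16 bytes; the address is the first field.
    let addr = GuestAddress(desc_table + idx * 16);
    mem.read_obj::<u64>(addr)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, format!("{}", e)))
}
#+end_src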
Estimate: 8w
[STR-60] <https://linaro.atlassian.net/browse/STR-60>
3.2 Xen IO notification and IRQ injections ([STR-61])
─────────────────────────────────────────────────────
The KVM world provides ioeventfd (notifications) and irqfd
(injection) to signal asynchronously between the guest and the
backend. As far as I can tell this is currently handled inside the
various VMMs, which assume a KVM backend.
While the vhost-user slave code doesn't see the
register_ioevent/register_irqfd events, it does deal with EventFDs
throughout the code. Perhaps the best approach here would be to create
an IOREQ crate that can create EventFD descriptors which can then be
passed to the slaves to use for notification and injection.
Otherwise there might be an argument for a new crate that can
encapsulate this behaviour for both KVM/ioeventfd and Xen/IOREQ setups?
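A rough sketch of the IOREQ crate idea (all names hypothetical): the
crate services the IOREQ ring itself and only hands plain EventFDs to
the slave:

#+begin_src rust
// Hypothetical sketch of an "ioreq" crate: it services the Xen IOREQ
// ring internally and hands out plain EventFds, so the existing
// vhost-user slave code (already written in terms of EventFDs) does
// not need to change.
use vmm_sys_util::eventfd::EventFd;

pub struct IoreqProxy {
    /// Signalled by the proxy when the guest kicks the queue.
    pub kick: EventFd,
    /// Written by the backend when buffers are used; the proxy turns
    /// this into an interrupt injection (e.g. a set_irq_level dmop).
    pub call: EventFd,
}

impl IoreqProxy {
    /// Bind a proxy to one virtio-mmio region of `domid`. A real
    /// implementation would register an IOREQ server here and spawn a
    /// thread pumping the shared ring; this sketch only creates the fds.
    pub fn new(_domid: u16, _mmio_base: u64, _irq: u32) -> std::io::Result<Self> {
        Ok(Self {
            kick: EventFd::new(0)?,
            call: EventFd::new(0)?,
        })
    }
}
#+end_src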
Estimate: 8w?
[STR-61] <https://linaro.atlassian.net/browse/STR-61>
4 Stratos Demos
═══════════════
These tasks cover the creation of demos that bring together all the
previous bits of work to demonstrate a new area of capability that has
been opened up by Stratos work.
4.1 Rust based stubdomain monitor ([STR-62])
────────────────────────────────────────────
This is a basic demo that is a proof of concept for a unikernel style
backend written in pure Rust. This work would be a useful precursor
for things such as the RTOS Dom0 on a safety island ([STR-11]) or as a
carrier for the virtio-scmi backend.
The monitor program will periodically poll the state of the other
domains and echo their status to the Xen console.
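In terms of the hypothetical XenCalls surface sketched in section 1.2,
the monitor boils down to something like:

#+begin_src rust
// Sketch of the monitor's main loop against the hypothetical XenCalls
// surface from section 1.2; none of the names are final.
use std::{thread, time::Duration};

fn monitor_loop(xen: &impl XenCalls) -> Result<(), Error> {
    loop {
        for d in xen.domain_info_list(0, 64)? {
            let state = if d.running {
                "running"
            } else if d.paused {
                "paused"
            } else {
                "blocked"
            };
            xen.console_write(&format!("dom{}: {}\n", d.domid, state))?;
        }
        // A unikernel build would block on a hypervisor timer event
        // rather than a thread sleep.
        thread::sleep(Duration::from_secs(5));
    }
}
#+end_src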
Estimate: 4w
#+name: stub-domain-example
#+begin_src ditaa :cmdline -o :file stub_domain_example.png
Dom0 | DomU | DomStub
| |
: /-------------\ :
| |cPNK | |
| | | |
| | | |
/------------------------------------\ | | GuestOS | |
|cPNK | | | | |
EL0 | Dom0 Userspace (xl tools, QEMU) | | | | | /---------------\
| | | | | | |cYEL |
\------------------------------------/ | | | | | |
+------------------------------------+ | | | | | Rust Monitor |
EL1 |cA1B Dom0 Kernel | | | | | | |
+------------------------------------+ | \-------------/ | \---------------/
-------------------------------------------------------------------------------=------------------
+-------------------------------------------------------------------------------------+
EL2 |cC02 Xen Hypervisor |
+-------------------------------------------------------------------------------------+
#+end_src
[STR-62] <https://linaro.atlassian.net/browse/STR-62>
[STR-11] <https://linaro.atlassian.net/browse/STR-11>
4.2 Xen aware vhost-user master ([STR-63])
──────────────────────────────────────────
Usually the master side of a vhost-user system is embedded directly in
the VMM itself. However in a Xen deployment there is no overarching
VMM, but rather a series of utility programs that query the hypervisor
directly. The Xen tooling is also responsible for setting up any
support processes that are responsible for emulating HW for the guest.
This task aims to bridge the gap between Xen's normal HW emulation path
(ioreq) and VirtIO's userspace device emulation (vhost-user). The
process would be started with some information on where the
virtio-mmio address space is and what the slave binary will be. It
will then (see the sketch after this list):
• map the guest into Dom0 userspace and attach to a MemFD
• register the appropriate memory regions as IOREQ regions with Xen
• create EventFD channels for the virtio kick notifications (one each
way)
• spawn the vhost-user slave process and mediate the notifications and
kicks between the slave and Xen itself
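A rough sketch of that start-up flow (every name below is hypothetical
glue; only the overall shape is the point):

#+begin_src rust
// Sketch of the proxy's start-up flow. Everything here is hypothetical
// glue standing in for pieces described in sections 1.4 and 3.2; only
// the overall shape (a vhost-user master driving an external slave
// over a Unix socket while servicing Xen IOREQs) is the point.
use vmm_sys_util::eventfd::EventFd;

struct Memfd;                                // guest memory exposed as an fd
struct Slave;                                // handle to the spawned slave + socket
struct Ioreq { kick: EventFd, call: EventFd }

fn run_proxy(domid: u16, mmio_base: u64, irq: u32, slave_bin: &str) -> std::io::Result<()> {
    // 1. Map the guest (foreignmemory) and expose it via something
    //    memfd-like so it can travel over the vhost-user SET_MEM_TABLE
    //    message (open question: direct mapping vs a window/copy).
    let guest_mem = map_guest_as_memfd(domid)?;
    // 2. Register the virtio-mmio window as an IOREQ region with Xen.
    let ioreq = register_ioreq_region(domid, mmio_base, irq)?;
    // 3. Spawn the unmodified vhost-user slave and run the usual master
    //    handshake: features, mem table, vring kick/call eventfds.
    let slave = spawn_slave(slave_bin, &guest_mem, &ioreq.kick, &ioreq.call)?;
    // 4. Mediate forever: IOREQ trap -> write the kick eventfd;
    //    call eventfd -> inject an interrupt into the guest (dmop).
    mediate(ioreq, slave)
}

// Stubs only, to keep the sketch compiling.
fn map_guest_as_memfd(_d: u16) -> std::io::Result<Memfd> { todo!() }
fn register_ioreq_region(_d: u16, _base: u64, _irq: u32) -> std::io::Result<Ioreq> { todo!() }
fn spawn_slave(_b: &str, _m: &Memfd, _k: &EventFd, _c: &EventFd) -> std::io::Result<Slave> { todo!() }
fn mediate(_i: Ioreq, _s: Slave) -> std::io::Result<()> { todo!() }
#+end_src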
#+name: xen-vhost-user-master
#+begin_src ditaa :cmdline -o :file xen_vhost_user_master.png
Dom0 DomU
|
|
|
|
|
|
+-------------------+ +-------------------+ |
| |----------->| | |
| vhost-user | vhost-user | vhost-user | : /------------------------------------\
| slave | protocol | master | | | |
| (existing) |<-----------| (rust) | | | |
+-------------------+ +-------------------+ | | |
^ ^ | ^ | | Guest Userspace |
| | | | | | |
| | | IOREQ | | | |
| | | | | | |
v v V | | \------------------------------------/
+---------------------------------------------------+ | +------------------------------------+
| ^ ^ | ioctl ^ | | | |
| | iofd/irqfd eventFD | | | | | | Guest Kernel |
| +---------------------------+ | | | | | +-------------+ |
| | | | | | | virtio-dev | |
| Host Kernel V | | | | +-------------+ |
+---------------------------------------------------+ | +------------------------------------+
| ^ | | ^
| hyper | | |
----------------------=------------- | -=--- | ----=------ | -----=- | --------=------------------
| call | Trap | | IRQ
V | V |
+-------------------------------------------------------------------------------------+
| | ^ | ^ |
| | +-------------+ | |
EL2 | Xen Hypervisor | | |
| +-------------------------------+ |
| |
+-------------------------------------------------------------------------------------+
#+end_src
[STR-63] <https://linaro.atlassian.net/browse/STR-63>
--
Alex Bennée
Dear QEMU, KVM, and rust-vmm communities,
QEMU will apply for Google Summer of Code 2022
(https://summerofcode.withgoogle.com/) and has been accepted into
Outreachy May-August 2022 (https://www.outreachy.org/). You can now
submit internship project ideas for QEMU, KVM, and rust-vmm!
If you have experience contributing to QEMU, KVM, or rust-vmm you can
be a mentor. It's a great way to give back and you get to work with
people who are just starting out in open source.
Please reply to this email by February 21st with your project ideas.
Good project ideas are suitable for remote work by a competent
programmer who is not yet familiar with the codebase. In
addition, they are:
- Well-defined - the scope is clear
- Self-contained - there are few dependencies
- Uncontroversial - they are acceptable to the community
- Incremental - they produce deliverables along the way
Feel free to post ideas even if you are unable to mentor the project.
It doesn't hurt to share the idea!
I will review project ideas and keep you up-to-date on QEMU's
acceptance into GSoC.
Internship program details:
- Paid, remote work open source internships
- GSoC projects are 175 or 350 hours, Outreachy projects are 30
hrs/week for 12 weeks
- Mentored by volunteers from QEMU, KVM, and rust-vmm
- Mentors typically spend at least 5 hours per week during the coding period
Changes since last year: GSoC now has 175 or 350 hour project sizes
instead of 12 week full-time projects, and GSoC will accept applicants
who are not students; previously it was limited to students.
For more background on QEMU internships, check out this video:
https://www.youtube.com/watch?v=xNVCX7YMUL8
Please let me know if you have any questions!
Stefan
Hello Everyone!
We are now accepting Forum [1] submissions for the 2022 Open Infrastructure
Summit Forum [2]. Please submit your ideas through the Summit CFP tool [3]
by April 8th. Don't forget to put your brainstorming etherpad up on
the Berlin Forum page [4].
This is not a classic conference track with speakers and presentations.
OpenInfra community members (participants in development teams, operators,
working groups, SIGs, and other interested individuals) discuss the topics
they want to cover and get alignment on and we welcome your participation.
The Forum is your opportunity to help shape the development of future
project releases. More information about the Forum [1].
The timeline for submissions is as follows:
March 21st | Formal topic submission tool opens: https://cfp.openstack.org.
April 8th | Deadline for proposing Forum topics. Scheduling committee
meeting to make draft agenda.
April 15th | Draft Forum schedule published. Crowd sourced session conflict
detection. Forum promotion begins.
April 22nd | Forum schedule final
June 20th | Forum begins!
If you have questions or concerns, please reach out to
speakersupport(a)openinfra.dev.
Cheers,
- Kendall Nelson (diablo_rojo)
[1] https://wiki.openstack.org/wiki/Forum
[2] https://openinfra.dev/summit
[3] https://cfp.openstack.org
[4] https://wiki.openstack.org/wiki/Forum/Berlin2022
Hello Everyone,
Don't forget to sign your team up for the next Project Teams Gathering
(PTG), which will be held virtually from Monday, April 4 to Friday, April
8, 2022!
If you haven't already done so and your team is interested in
participating, please complete BOTH the survey[1] AND reserve time in the
ethercalc[2] by end of day March 11.
Then make sure to register[3] for the PTG because that's how you'll receive
event details, passwords, and other relevant information about the PTG.
Thanks!
-Kendall (diablo_rojo)
[1] Team Survey:
https://openinfrafoundation.formstack.com/forms/april2022_vptg_survey
[2] Ethercalc Signup: https://ethercalc.openstack.org/7yxdas7suqnd
[3] PTG Registration: https://openinfra-ptg.eventbrite.com
Hello!
Last week, we announced that the next PTG will be held virtually from
Monday, April 4th to Friday, April 8th, 2022. We will have the same
schedule setup as last time, with three windows of time spread across
the day to cover all timezones and breaks in between.
To signup your team, you must complete BOTH the survey[1] AND reserve time
in the ethercalc[2] by end of day March 11th.
We ask that the PTL/SIG Chair/Team lead sign up for time to have their
discussions with 3 rules/guidelines:
1. Cross project discussions (like SIGs or support project teams) should be
scheduled towards the start of the week so that any discussions that might
shape those of other teams happen first.
2. No team should sign up for more than 4 hours per UTC day to help keep
participants actively engaged.
3. No team should sign up for more than 16 hours across all time slots to
avoid burning out our contributors and to enable participation in multiple
teams discussions.
Again, you need to fill out BOTH the ethercalc AND the survey to complete
your team's sign up. If you have any issues with signing up your team, due
to conflict or otherwise, please let me know! While we are trying to
empower you to make your own decisions as to when you meet and for how long
(after all, you know your needs and teams timezones better than we do), we
are here to help!
Once your team is signed up, please register[3]! And remind your team to
register! Registration is free, but it's important that you sign up to let
us know you'll be attending because that's how you'll receive event
details, passwords, and other relevant information about the PTG.
Continue to visit openinfra.dev/ptg for updates.
-Kendall Nelson (diablo_rojo)
[1] Team Survey:
https://openinfrafoundation.formstack.com/forms/april2022_vptg_survey
[2] Ethercalc Signup: https://ethercalc.openstack.org/7yxdas7suqnd
[3] PTG Registration: https://openinfra-ptg.eventbrite.com/
Hey folks,
I will not be able to join the meeting on Monday. Can someone else be the MC (and pls take meeting notes)?
Thanks,
Andreea
________________________________
From: Florescu, Andreea
Sent: Wednesday, March 3, 2021 3:21 PM
To: Loghin, Laura; Ochescu, Catalin; Iordache, Alexandra; Agache, Alexandru; meet(a)chime.aws; pin+6592165432(a)chime.aws; rust-vmm(a)lists.opendev.org; liuwe(a)microsoft.com; slp(a)redhat.com; samuel.e.ortiz(a)protonmail.com; alex.bennee(a)linaro.org; Cihodaru, Alexandru-Ciprian; Berciu, Liviu-Marian; Sardan, Alexandru-cezar; gsserge(a)amazon.nl
Cc: Catangiu, Adrian Costin; M. Dodson; Ceyhun Ertürk; eli(a)patch.sh; Zach Reizner; Popa, Diana-Maria; Wise, Bob; luka.perkov(a)sartura.hr; Brendan Burns; Gowdar, Meena; borna.blazevic(a)sartura.hr; Boeuf, Sebastien; Liguori, Anthony; Dima, Alin; Sandu, Andrei; liuj97(a)gmail.com; Iliescu, Radu; bogdangrigoruta(a)gmail.com; misla011(a)fiu.edu; Muminul Islam; Voicu, Luminita; Liang Yan; Meyerhans, Noah; Lucian Petrut; Eftime, Petre; Stanciu, Adrian
Subject: rust-vmm sync meeting
When: Monday, November 29, 2021 6:00 PM-7:00 PM.
Where:
Update: refreshing the series.
Meeting Agenda: https://etherpad.opendev.org/p/rust-vmm-sync-2021
==============Conference Bridge Information==============
You have been invited to an online meeting, powered by Amazon Chime.
Chime meeting ID: 6592165432
Join via Chime clients (manually): Select 'Meetings > Join a Meeting', and enter 6592165432
Join via Chime clients (auto-call): If you invite auto-call as attendee, Chime will call you when the meeting starts, select 'Answer'
Join via browser screen share: https://chime.aws/6592165432
Join via phone (US): +1-929-432-4463,,,6592165432#
Join via phone (US toll-free): +1-855-552-4463,,,6592165432#
International dial-in: https://chime.aws/dialinnumbers/
In-room video system: Ext: 62000, Meeting PIN: 6592165432#
=================================================
Hi,
As we consider the next cycle for Project Stratos I would like to make
some more progress on hypervisor agnosticism for our virtio backends.
While we have implemented a number of virtio vhost-user backends using
C, we've rapidly switched to using rust-vmm based ones for virtio-i2c,
virtio-rng and virtio-gpio. Given the interest in Rust for implementing
backends, does it make sense to do some enabling work in rust-vmm to
support Xen?
There are two chunks of work I can think of:
1. Enough of the libxl/hypervisor interface to implement an IOREQ endpoint.
This would require enough of the hypervisor interface to support the
implementation of an IOREQ server. We would also need to think about
how we would map the IOREQ view of the world onto the existing
vhost-user interface so we can re-use the current vhost-user backend
code base. The two approaches I can think of are:
a) implement a vhost-user master that speaks IOREQ to the hypervisor
and vhost-user to the vhost-user slave. In this case the bridge
would be standing in for something like QEMU.
b) implement some variants of the vhost-user slave traits that can
talk directly to the hypervisor to get/send the equivalent
kick/notify events (a rough sketch follows). I don't know if this
might be too complex, as the impedance mismatch between the two
interfaces might be too great.
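For (b), the crux is whether the slave-side code can be made generic
over where the kick/notify events come from. A minimal sketch of that
shape (hypothetical trait, not the existing vhost-user-backend API):

#+begin_src rust
// Hypothetical sketch for option (b): abstract the notification
// transport so the same backend logic runs whether events arrive via
// KVM ioeventfd/irqfd or via a Xen IOREQ server pumping eventfds.
// This is not the existing vhost-user-backend trait, just the shape
// of the abstraction.
use vmm_sys_util::eventfd::EventFd;

pub trait QueueNotifier {
    /// EventFd signalled when the guest kicks queue `idx`.
    fn kick_fd(&self, idx: u16) -> &EventFd;
    /// Tell the guest that used buffers are available on queue `idx`.
    fn signal_used(&self, idx: u16) -> std::io::Result<()>;
}
#+end_src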
Either way, this assumes most of the setup is done by the existing
toolstack, so the existing libxl tools are used to create, connect and
configure the domains before the backend is launched.
which leads to:
2. The rest of the libxl/hypervisor interface.
This would be the rest of the interface, allowing rust-vmm based tools
to be written that could create, configure and manage Xen domains in
pure Rust. My main concern about this is how rust-vmm's current model
(which is very much KVM influenced) will be able to handle the
differences for a type-1 hypervisor. Wei has pointed me to the Linux
support that was added to expose a Hyper-V control interface via the
Linux kernel. While I can see support has been merged in other
Rust-based projects, I think the rust-vmm crate is still outstanding:
https://github.com/rust-vmm/community/issues/50
and I guess this would need revisiting for Xen to see if the proposed
abstraction would scale across other hypervisors.
Finally there is the question of how/if any of this would relate to the
concept of bare-metal Rust backends. We've talked about bare-metal
backends before, but I wonder if the programming model for them is going
to be outside the scope of rust-vmm. Would the program just be hardwired
to IRQs and be presented a doorbell port to kick, or would we want at
least some of the higher level rust-vmm abstractions for dealing with
navigating the virtqueues and responding and filling in data?
Thoughts?
--
Alex Bennée