3. HVM Driver Domain with Backends

[Architecture diagram: three domains run on the Xen hypervisor (control interface, scheduler, event channels, hypercalls, device models; processor, memory, and I/O: PIT, APIC, PIC, IOAPIC, RTC). Dom0 is x86-64 PV Linux with the control panel (xm/xend) and FE drivers. The HVM driver domain (with HAP and direct I/O) runs unmodified Linux with native device drivers and the backend driver, reaching physical devices via direct I/O. The HVM guest runs an unmodified OS with FE virtual drivers on a virtual platform with device models and a guest BIOS. HVM domains enter Xen via VMExit; PV domains use the Xen API and event channels.]
4. Benefits of HVM Driver Domain
• An HVM driver domain is a kind of IDD (isolated driver domain)
− Reuses existing Linux
• Same binaries for (certified) device drivers
• Backend drivers need only minor modifications
• Provides more scalable and efficient I/O than dom0
− Uses HVM with HAP
− Uses the direct I/O feature to access physical devices
• Allows restartable I/O
− Restart the HVM driver domain if it fails
− Additional work is needed to restore the BE-FE connections
• Multiple instances of HVM driver domains
− Avoids a single point of failure
Xen Summit NA 2010
5. Data from Proof of Concept
• Added a network backend driver to HVM Linux
− Xenbus driver plus minor modifications (some apply to Xen as well)
− The frontend in PV Linux talks to the BE in HVM Linux
• Better performance than a PV backend domain (about 28%), with lower CPU utilization
− PV-FE <-> HVM BE: 7.62 Gb/s (total CPU utilization: 131%)
− PV-FE <-> PV BE: 5.95 Gb/s (total CPU utilization: 156%)
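The "about 28%" figure follows directly from the two throughput numbers above; a quick sanity check:

```python
# Throughputs reported on this slide (Gb/s)
hvm_be = 7.62  # PV-FE <-> HVM BE
pv_be = 5.95   # PV-FE <-> PV BE

improvement = (hvm_be - pv_be) / pv_be
print(f"{improvement:.1%}")  # → 28.1%
```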
• Comparing the performance of EPT page-flipping and grant-copying
− Page-flipping under EPT is much simpler than under PV
6. EPT Page-Flipping vs. Page-Copying
• Page-flipping:
− Exchange pages of data between the FE drivers and the BE driver without copying the data
• For PV, we replaced page-flipping with grant-copying
− The total cost of the mapping/unmapping/TLB-shootdown/refcount-update overheads was not cheaper than copying
• However, things are much simpler with HAP
− Update the HAP page-table entries
− Invalidate the HAP mappings (e.g. INVEPT), like a TLB shootdown
− Potentially the same is needed for VT-d/IOMMU
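To make the distinction concrete, here is a toy sketch (plain Python stand-ins, not Xen code; a "page" is a bytearray and the per-domain page lists play the role of page tables): flipping exchanges page ownership between frontend and backend, while copying duplicates the payload and leaves ownership alone.

```python
# Toy illustration of the two buffer-exchange strategies discussed above.
# Under HAP, page_flip corresponds to swapping EPT entries and then
# invalidating them (e.g. INVEPT); page_copy corresponds to grant-copying.

def page_flip(fe_pages, be_pages, idx):
    """Exchange page *mappings*: no data is moved, the domains simply
    trade ownership of the two underlying pages."""
    fe_pages[idx], be_pages[idx] = be_pages[idx], fe_pages[idx]

def page_copy(fe_pages, be_pages, idx):
    """Copy the payload from the backend page into the frontend page;
    each domain keeps its original page."""
    fe_pages[idx][:] = be_pages[idx]

fe = [bytearray(4096) for _ in range(2)]            # frontend pages (zeroed)
be = [bytearray(b"\x01" * 4096) for _ in range(2)]  # backend pages with data

page_flip(fe, be, 0)  # FE now owns the BE page itself; BE got the empty one
page_copy(fe, be, 1)  # FE page 1 holds a copy; BE keeps its page

assert bytes(fe[0]) == b"\x01" * 4096
assert bytes(be[0]) == b"\x00" * 4096  # flip really exchanged the pages
assert bytes(fe[1]) == b"\x01" * 4096
```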
7. EPT Page-Flipping vs. Page-Copying (cont.)

  Mode                      Hypercalls/sec   Cycles/hypercall   Throughput (Gb/s)   CPU utilization (%)
  HVM (BE), page-flipping   35208            52982              7.35                129
  HVM (BE), page-copying    35531            52417              7.62                131
  PV BE                     -                -                  5.95                156

• Page-copying is faster with (a bit) higher CPU utilization
8. Dom0 w/o Physical Device Drivers

[Architecture diagram: same layout as slide 3, except that dom0 no longer runs native device drivers; it runs FE drivers on a semi-physical BIOS, while the HVM driver domain (with HAP and direct I/O) holds the native device drivers and the backend driver and accesses physical devices via direct I/O.]
9. Booting w/o Drivers in Dom0
1. GRUB loads Xen, dom0, and a RAM filesystem image
− The RAM filesystem image contains the HVM driver domain
2. Xen boots and starts initialization
3. Dom0 is created and mounts the RAM filesystem
4. Dom0 creates the HVM driver domain
5. The HVM driver domain boots and runs the device drivers and backend drivers
6. The system is ready
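Step 4 would use an ordinary xm/xend domain configuration; a minimal illustrative sketch is below (the paths, domain name, and PCI BDF are placeholders, not values from the talk):

```
# Illustrative xm config for the HVM driver domain created in step 4.
kernel  = "/usr/lib/xen/boot/hvmloader"
builder = "hvm"
name    = "driver-domain"
memory  = 512
disk    = [ "file:/ramfs/driver-domain.img,hda,w" ]
pci     = [ "0000:03:00.0" ]   # NIC passed through via direct I/O (VT-d)
```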
10. Issues with Dom0 w/o Drivers
• No NIC drivers in dom0
− We can use an FE driver in dom0
− The network is not available until the HVM driver domain is ready
• We don’t want dom0 to depend on a guest…
• No storage drivers in dom0
− No persistent storage; this may even be desirable
− Workaround: move the HBA between dom0 and the HVM driver domain
• A bit complex
11. Summary
• A network backend driver in an HVM domain, with EPT and direct I/O enabled, provides better performance (about 28%) than a BE in a PV domain
• Next steps:
− More data analysis on EPT page-flipping vs. page-copying
• Understand why HAP page-flipping is not faster
• Look for optimization opportunities
− Send out patches