Traditionally, Linux has run on Xen either as a pure PV guest or as a virtualization-unaware guest in an HVM domain. Recently, under the name of "PV on HVM", a series of changes has been made to make Linux aware that it is running on Xen and to enable as many PV interfaces as possible even when running in an HVM container. After enabling the basic PV network and disk drivers, some further, more interesting optimizations were implemented: in particular, remapping legacy interrupts and MSIs onto event channels. This talk explains the idea behind the feature, why avoiding interactions with the local APIC (lapic) is a good idea, and some implementation details.
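The very first step in making Linux "aware that it is running on Xen" is detection: like other x86 hypervisors, Xen advertises itself through CPUID leaf 0x40000000, returning the signature "XenVMMXenVMM". The following is a minimal userspace C sketch of that check, an illustration of the mechanism rather than the kernel's actual code (Linux does the equivalent in xen_cpuid_base(), which additionally scans leaves at 0x100 offsets in case the base has moved):

    /* Detect Xen from inside an HVM guest via the CPUID hypervisor leaf.
     * Minimal sketch; build on x86 with: gcc -o xen-detect xen-detect.c */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    static void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                      uint32_t *c, uint32_t *d)
    {
        __asm__ volatile("cpuid"
                         : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                         : "a"(leaf));
    }

    int main(void)
    {
        uint32_t eax, ebx, ecx, edx;
        char signature[13];

        /* Hypervisors advertise themselves at CPUID leaf 0x40000000:
         * EBX/ECX/EDX hold a 12-byte vendor signature. */
        cpuid(0x40000000, &eax, &ebx, &ecx, &edx);
        memcpy(signature + 0, &ebx, 4);
        memcpy(signature + 4, &ecx, 4);
        memcpy(signature + 8, &edx, 4);
        signature[12] = '\0';

        if (strcmp(signature, "XenVMMXenVMM") != 0) {
            printf("not running on Xen (signature: \"%s\")\n", signature);
            return 1;
        }

        /* Leaf base+1 reports the Xen version as major.minor. */
        cpuid(0x40000001, &eax, &ebx, &ecx, &edx);
        printf("running on Xen %u.%u\n", eax >> 16, eax & 0xffff);
        return 0;
    }

Once the signature matches, leaf base+2 reports the MSR the guest writes to map the hypercall page, which is what the HVM hypercall support described below builds on.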
1. Linux PV on HVM
paravirtualized interfaces in HVM guests
Stefano Stabellini
2. Linux as a guest: problems
Linux PV guests have limitations:
- difficult ("different") to install
- some performance issues on 64 bit
- limited set of virtual hardware
Linux HVM guests:
- install the same way as native
- very slow
3. Linux PV on HVM: the solution
- install the same way as native
- PC-like hardware
- access to fast paravirtualized devices
- exploit nested paging
4. Linux PV on HVM: initial features
Initial version in Linux 2.6.36:
- introduce the xen platform device driver
- add support for HVM hypercalls, xenbus and grant tables
- enable blkfront, netfront and PV timers
- add support for PV suspend/resume
- the vector callback mechanism (sketched below)
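The vector callback is the piece that later makes interrupt remapping possible: instead of having event-channel notifications delivered as an emulated legacy interrupt or PCI INTx (both of which go through the emulated ioapic/lapic), the guest asks Xen to inject a plain IDT vector directly. Below is a hedged sketch of that setup, with the relevant constants from Xen's public hvm/params.h reproduced inline; hvm_op_set_param is a stand-in for the real hypercall wrapper (Linux issues HYPERVISOR_hvm_op(HVMOP_set_param, ...) through the hypercall page), and the vector number is only an example.

    /* Sketch: route Xen event-channel notifications to a direct IDT
     * vector instead of an emulated legacy interrupt. Constants copied
     * from Xen's public hvm/params.h for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    #define DOMID_SELF              0x7ff0
    #define HVM_PARAM_CALLBACK_IRQ  0
    /* Bits 63:56 of the parameter value select the delivery method:
     *   0 = GSI, 1 = PCI INTx, 2 = direct vector (bits 7:0 = vector). */
    #define HVM_CALLBACK_VECTOR(v)  ((2ULL << 56) | (uint64_t)(v))

    #define CALLBACK_VECTOR 0xf3    /* example; Linux reserves its own */

    struct xen_hvm_param {
        uint16_t domid;             /* which domain to configure */
        uint32_t index;             /* which HVM parameter to set */
        uint64_t value;
    };

    /* Stand-in for HYPERVISOR_hvm_op(HVMOP_set_param, &p), which would
     * go through the hypercall page mapped during early boot. */
    static long hvm_op_set_param(struct xen_hvm_param *p)
    {
        printf("HVMOP_set_param: index=%u value=0x%llx\n",
               p->index, (unsigned long long)p->value);
        return 0;
    }

    int main(void)
    {
        struct xen_hvm_param p = {
            .domid = DOMID_SELF,
            .index = HVM_PARAM_CALLBACK_IRQ,
            .value = HVM_CALLBACK_VECTOR(CALLBACK_VECTOR),
        };

        /* From now on Xen injects CALLBACK_VECTOR whenever an event
         * channel fires: no ioapic programming, no emulated lapic EOI. */
        return (int)hvm_op_set_param(&p);
    }

Making the callback a first-class vector rather than a GSI is what removes the emulated lapic from the hot path; it is also what allows MSIs and legacy interrupts to be taken over event channels, which the benchmarks below measure as "interrupt remapping".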
13. Benchmarks: the setup
Hardware setup:
Dell PowerEdge R710
CPU: dual Intel Xeon E5520 quad core CPUs @ 2.27GHz
RAM: 22GB
Software setup:
Xen 4.1, 64 bit
Dom0 Linux 2.6.32, 64 bit
DomU Linux 3.0 rc4, 8GB of memory, 8 vcpus
14. PCI passthrough: benchmark
PCI passthrough of an Intel Gigabit NIC
CPU usage: the lower the better:
[Bar chart: CPU usage of domU and dom0 (y-axis 0-200), with interrupt remapping vs. no interrupt remapping]
15. Kernbench
Results: percentage of native, the lower the better
[Bar chart, 90-140% of native: PV on HVM 32/64 bit, HVM 32/64 bit, PV 32/64 bit]
16. Kernbench
Results: percentage of native, the lower the better
[Bar chart, 90-140% of native: PV on HVM 32/64 bit, HVM 32/64 bit, PV 32/64 bit, KVM 64 bit]
17. PBZIP2
Results: percentage of native, the lower the better
[Bar chart, 100-160% of native: PV on HVM 64 bit, PV 64 bit, PV on HVM 32 bit, PV 32 bit]
18. PBZIP2
Results: percentage of native, the lower the better
[Bar chart, 100-160% of native: KVM 64 bit, PV on HVM 64 bit, PV 64 bit, PV on HVM 32 bit, PV 32 bit]
21. Iperf tcp
Results: Gbit/s, the higher the better
[Bar chart, 0-8 Gbit/s: PV 64 bit, PV on HVM 64 bit, PV on HVM 32 bit, PV 32 bit, HVM 64 bit, HVM 32 bit]
22. Iperf tcp
Results: Gbit/s, the higher the better
[Bar chart, 0-8 Gbit/s: PV 64 bit, PV on HVM 64 bit, KVM 64 bit, PV on HVM 32 bit, PV 32 bit, HVM 64 bit, HVM 32 bit]
23. Conclusions
PV on HVM guests are very close to PV guests in benchmarks that favor PV MMUs.
PV on HVM guests are far ahead of PV guests in benchmarks that favor nested paging.