Disk I/O Performance Comparison
Xen vs. KVM
2013/09/20
Agenda
1. About This Research
2. Benchmark Configuration
3. Test Result with HDD
4. Test Result with SSD
5. Consideration
1. About This Research
Goal
Gather enough information about the I/O performance characteristics of Xen and KVM to judge which should be adopted for a given use case.
Premise
This document is written on the premise that you already know what Xen and KVM are.
An explanation of them is skipped, mainly because of the author's laziness.
2. Benchmark Configuration
Benchmarked Virtualization Software
Two well-known open-source virtualization platforms were tested.
● KVM
● Xen 4.2.2
● XenServer 6.2
→ could not be tested because the installer did not find any disk to install to...
VM Host Server
VM host server specification:
CPU model       Core i5 2500K, 3.3GHz, 2 cores / 4 threads
CPU settings    Hyper-Threading enabled
                Turbo Core disabled
                Power saving disabled
Memory          16GB DDR3 1,333MHz, dual channel
Disk 1          80GB HDD, 3.5 inch, 7,200rpm (Hitachi HDS721680PLA380)
Disk 2          128GB SSD (CFD CSSD-S6T128MHG5Q, Toshiba HG5Q)
OS              CentOS 6.4
Filesystem      ext4
VM Guest Server
VM guest server common specification:
vCPU               2
Memory             512MB
Disk size          10GB
Thin provisioning  No
OS                 CentOS 6.4
Disk driver        para-virtualized driver
Filesystem         ext4
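For illustration, a guest matching this common spec could be created with virt-install along the following lines (a sketch only: the guest name, image path, and install source are assumptions, not taken from the author's scripts):

  # Hypothetical virt-install invocation for the common guest spec:
  # 2 vCPUs, 512MB memory, 10GB preallocated (non-thin) virtio disk, CentOS 6.4.
  virt-install \
    --name bench-guest \
    --vcpus 2 \
    --ram 512 \
    --disk path=/var/lib/libvirt/images/bench-guest.img,size=10,format=raw,sparse=false,bus=virtio \
    --os-variant rhel6 \
    --location http://vault.centos.org/6.4/os/x86_64/ \
    --graphics none \
    --extra-args 'console=ttyS0'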
Benchmark Tool
The flexible I/O benchmark tool "fio" is used.
http://freecode.com/projects/fio
● Setting item examples:
read/write, sequential/random,
direct false/true (use the file cache / bypass it)...
● Available data examples:
bandwidth, IOPS, latency distribution,
CPU load, IO %util ...
Benchmark Configurations
Tested configurations:
I/O size (byte)   Load type          Direct   Test limit
11                random read        false    1GB or 180sec
11                random write       false    1GB or 180sec
512               random read        true     1GB or 180sec
512               random write       true     1GB or 180sec
4k                random read        true     3GB or 180sec
4k                random write       true     3GB or 180sec
32k               random read        true     3GB or 180sec
32k               random write       true     3GB or 180sec
512k              random read        true     3GB or 180sec
512k              random write       true     3GB or 180sec
1m                sequential read    true     3GB or 180sec
1m                sequential write   true     3GB or 180sec
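For example, the "4k random read, direct=true" row corresponds to a fio invocation roughly like the one below (a sketch; the job name and test directory are assumptions, and the author's actual job definitions live in the scripts repository):

  # Hypothetical fio run for the 4k random-read configuration.
  # size caps the I/O volume at 3GB, runtime caps it at 180 seconds;
  # fio stops at whichever limit is reached first.
  fio --name=randread-4k --directory=/mnt/bench \
      --rw=randread --bs=4k --size=3g --runtime=180 \
      --direct=1 --ioengine=libaio

fio reports the bandwidth, IOPS, latency distribution, CPU usage, and disk %util figures used on the following slides.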
KVM disk cache
Two benchmarked disk cache configurations:
・ writethrough (default)
   The default setting of virt-manager on Ubuntu 12.10
・ none
   The default setting of virt-manager on CentOS 6.4
http://infoliser.com/a-guide-to-kvm-guest-disk-cache/
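For reference, the cache mode is a per-disk attribute of the guest definition; a sketch of the two usual ways to set it (guest name and image path are hypothetical):

  # With libvirt: edit the guest XML and set the cache attribute on the
  # <driver> element of the <disk> device, e.g.:
  #   <driver name='qemu' type='raw' cache='none'/>
  virsh edit centos-guest

  # With plain QEMU/KVM: pass the mode on the -drive option
  # (other boot options omitted):
  qemu-kvm -m 512 -smp 2 \
      -drive file=/var/lib/libvirt/images/centos-guest.img,if=virtio,cache=writethrough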
KVM disk cache
Other disk cache configurations, not benchmarked in this test:
・ writeback ・ directsync
http://infoliser.com/a-guide-to-kvm-guest-disk-cache/
Criteria
● IOPS: higher is better
● Bandwidth: higher is better (bandwidth = I/O size × IOPS;
  e.g. 10,000 IOPS at 4kbyte is roughly 40MB/s)
● Latency: lower is better; lower variance is also better
● CPU usage: lower is better
  The emulation cost is estimated as
  in KVM: host CPU − guest CPU
  in Xen: Domain0 CPU?
● I/O %util
Benchmark Scripts
KVM/Xen installation and the benchmarks are driven by scripts published here:
https://github.com/nknytk/disk-performance-xen-kvm
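To reproduce the runs, the repository can be fetched in the usual way (the entry-point script names are documented in the repository itself, so none are guessed here):

  git clone https://github.com/nknytk/disk-performance-xen-kvm.git
  cd disk-performance-xen-kvm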
3. Test Result with HDD
Tested Machines
● Host
● KVM guest1: disk cache = writethrough
● KVM guest2: disk cache = none
● Xen guest
On all machines, the disk scheduler is "cfq".
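For reference, the active scheduler can be inspected and switched through sysfs (run as root; sda is a placeholder for the benchmarked device):

  # The bracketed entry is the scheduler currently in effect.
  cat /sys/block/sda/queue/scheduler
  # Select cfq for the HDD runs; the SSD runs in section 4 use noop instead.
  echo cfq > /sys/block/sda/queue/scheduler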
Summary
● KVM with cache="writethrough" performs well only for small-size reads.
● Xen generally performs best, even better than the host.
● Xen's distinctive latency distribution seems to result from some optimization.
● KVM with cache="none" performs slightly worse than the host, but the slowdown is within 10%.
● KVM with cache="none" has a latency distribution very similar to the host's.
Detailed Results
● Only representative data is shown in this presentation.
● The full data is available in the repository below:
https://github.com/nknytk/disk-performance-xen-kvm/tree/master/result_hdd_examples
IOPS, iosize = 11byte (direct=false)
        random read IOPS   random write IOPS
xen     206                335
kvm2    139                126
kvm1    680116             372087
host    143                127

Latency Distribution, iosize = 11byte
[Chart: random read and random write latency distributions (11byte, direct=false) for host, kvm1, kvm2, xen; x-axis: latency (msec), y-axis: cumulative distribution (%).]

IOPS, iosize = 4kbyte (direct=true)
        random read IOPS   random write IOPS
xen     133                1122
kvm2    119                164
kvm1    156                97
host    126                185

Latency Distribution, iosize = 4kbyte
[Chart: random read and random write latency distributions (4kbyte, direct=true) for host, kvm1, kvm2, xen; x-axis: latency (msec), y-axis: cumulative distribution (%).]

IOPS, iosize = 512kbyte (direct=true)
        random read IOPS   random write IOPS
xen     50                 85
kvm2    61                 61
kvm1    41                 46
host    64                 60

Latency Distribution, iosize = 512kbyte
[Chart: random read and random write latency distributions (512kbyte, direct=true) for host, kvm1, kvm2, xen; x-axis: latency (msec), y-axis: cumulative distribution (%).]

IOPS, iosize = 1mbyte (direct=true)
        sequential read IOPS   sequential write IOPS
xen     69                     74
kvm2    62                     71
kvm1    57                     41
host    68                     70

Latency Distribution, iosize = 1mbyte
[Chart: sequential read and sequential write latency distributions (1mbyte, direct=true) for host, kvm1, kvm2, xen; x-axis: latency (msec), y-axis: cumulative distribution (%).]

Read Bandwidth Comparison
[Chart: read bandwidth of host, kvm1, kvm2, xen, normalized to host=1, across I/O sizes 11(direct=false), 512, 4k, 32k, 512k, 1m(sequential).]

Write Bandwidth Comparison
[Chart: write bandwidth of host, kvm1, kvm2, xen, normalized to host=1, across I/O sizes 11(direct=false), 512, 4k, 32k, 512k, 1m(sequential).]

4. Test Result with SSD
Tested Machines
● Host
● KVM guest1
  - disk cache = "writethrough"
  - another process consumes so much memory on the host that only about 200MB
    is left for the page cache (see the sketch after this list)
● KVM guest2
  - disk cache = "none"
● Xen guest
On all machines, the disk scheduler is "noop".
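The slide does not say how that memory was consumed; a minimal sketch of one way to do it, assuming roughly 15GB must be pinned on the 16GB host to leave only a couple hundred MB for the page cache:

  # Hypothetical memory hog: allocate ~15GB and hold it, squeezing the
  # host page cache down to a few hundred MB.
  python -c "b = bytearray(15 * 1024**3); import time; time.sleep(10**6)" &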
Summary
● All VMs are non-negligibly delayed for small-size I/O.
● Both KVM and Xen spend nearly 100% of a CPU core on emulation in the worst case.
● KVM with cache="writethrough" performs up to 90% worse than KVM with cache="none".
● KVM with cache="none" has a latency distribution similar to the host's.
Detailed Results
● Only representative data is shown in this presentation.
● The full data is available in the repository below:
https://github.com/nknytk/disk-performance-xen-kvm/tree/master/result_ssd_examples
IOPS, iosize = 11byte (direct=false)
        random read IOPS   random write IOPS
xen     5263               3006
kvm2    4514               3146
kvm1    231262             927
host    737753             447745

CPU Usage iosize = 11byte
[Chart: random read and random write CPU usage (100% = 1 core; 11byte, direct=false) for host, kvm1, kvm2, xen; series: test_server, host.]

IO %util iosize = 11byte
[Chart: random read and random write IO %util (11byte, direct=false) for host, kvm1, kvm2, xen; series: test_server, host.]

Latency Distribution, iosize = 11byte
[Chart: random read and random write latency distributions (11byte, direct=false) for host, kvm1, kvm2, xen; x-axis: latency (msec), y-axis: cumulative distribution (%).]

IOPS, iosize = 4kbyte (direct=true)
        random read IOPS   random write IOPS
xen     3139               11833
kvm2    3029               6246
kvm1    9894               581
host    4260               10198

CPU Usage iosize = 4kbyte
[Chart: random read and random write CPU usage (100% = 1 core; 4kbyte, direct=true) for host, kvm1, kvm2, xen; series: test_server, host.]

IO %util iosize = 4kbyte
[Chart: random read and random write IO %util (4kbyte, direct=true) for host, kvm1, kvm2, xen; series: test_server, host.]

Latency Distribution, iosize = 4kbyte
[Chart: random read and random write latency distributions (4kbyte, direct=true) for host, kvm1, kvm2, xen; x-axis: latency (msec), y-axis: cumulative distribution (%).]

IOPS, iosize = 512kbyte (direct=true)
        random read IOPS   random write IOPS
xen     345                748
kvm2    660                678
kvm1    338                454
host    748                800

CPU Usage iosize = 512kbyte
[Chart: random read and random write CPU usage (100% = 1 core; 512kbyte, direct=true) for host, kvm1, kvm2, xen; series: test_server, host.]

IO %util iosize = 512kbyte
[Chart: random read and random write IO %util (512kbyte, direct=true) for host, kvm1, kvm2, xen; series: test_server, host.]

Latency Distribution, iosize = 512kbyte
[Chart: random read and random write latency distributions (512kbyte, direct=true) for host, kvm1, kvm2, xen; x-axis: latency (msec), y-axis: cumulative distribution (%).]

IOPS, iosize = 1mbyte (direct=true)
        sequential read IOPS   sequential write IOPS
xen     315                    369
kvm2    434                    364
kvm1    254                    259
host    440                    412

CPU Usage iosize = 1mbyte
[Chart: sequential read and sequential write CPU usage (100% = 1 core; 1mbyte, direct=true) for host, kvm1, kvm2, xen; series: test_server, host.]

IO %util iosize = 1mbyte
[Chart: sequential read and sequential write IO %util (1mbyte, direct=true) for host, kvm1, kvm2, xen; series: test_server, host.]

Latency Distribution, iosize = 1mbyte
[Chart: sequential read and sequential write latency distributions (1mbyte, direct=true) for host, kvm1, kvm2, xen; x-axis: latency (msec), y-axis: cumulative distribution (%).]

Read Bandwidth Comparison
[Chart: read bandwidth of host, kvm1, kvm2, xen, normalized to host=1, across I/O sizes 11(direct=false), 512, 4k, 32k, 512k, 1m(sequential).]

Write Bandwidth Comparison
[Chart: write bandwidth of host, kvm1, kvm2, xen, normalized to host=1, across I/O sizes 11(direct=false), 512, 4k, 32k, 512k, 1m(sequential).]

5. Consideration
Use Cases
● If your VM host server's disks are slow, Xen will offer better I/O performance than KVM.
● KVM with cache="none" is relatively well suited to simulating the performance of a physical server, because its latency distribution is so similar to the host's.
● Neither Xen nor KVM is suitable for heavy random I/O load on a very fast device, e.g. an OLTP DB server with ioDrive.
Questions
● In this test, the I/O load was single-threaded.
What about parallel I/O load from many guests?
● Is Xen's data safe?
It seems to me that
more optimization
→ data stays in memory longer
→ larger data loss on a server fault