3. Disclaimer
•This presentation may contain product features that are
currently under development.
•Features are subject to change, and must not be included in contracts,
purchase orders, or sales agreements of any kind.
•Technical feasibility and market demand will affect final delivery.
•Pricing and packaging for any new technologies or features discussed or
presented have not been determined.
•In other words, VMware in no way promises to deliver on any of the
products or features shown in the following presentation.
•And just to be clear, neither does Cormac Hogan.
4. Introduction
•vSphere 5.1 builds on the storage features introduced in vSphere 5.0.
• More scalability
• Increased performance
• Increased interoperability between VMware products & features
•The purpose of this presentation is to quickly highlight the major storage
enhancements in vSphere 5.0 and what improvements have been made
to storage features in vSphere 5.1.
•We will also take a look at some of the storage features which were tech
previewed at VMworld 2012.
6. VMFS-5 Upgrade Considerations
•A live, non-disruptive upgrade mechanism is available to upgrade from
VMFS-3 to VMFS-5 (with running VMs) but you do not get the full
complement of features.
Feature           Upgraded VMFS-5                          New VMFS-5
Maximum files     30720 (inherited from VMFS-3)            130689
File Block Size   1, 2, 4 or 8MB (inherited from VMFS-3)   1MB
Sub-Blocks        64KB (inherited from VMFS-3)             8KB
ATS Complete      No (same as VMFS-3)                      Yes
•Best Practice: If you have the luxury of doing so, create a brand new
VMFS-5 datastore, and use Storage vMotion to move your VMs to it.
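If you do perform an in-place upgrade, it can be driven from the ESXi shell as well as the vSphere Client; a minimal sketch (the volume label "datastore1" is a placeholder):

    # Confirm the current VMFS version of the volume
    vmkfstools -Ph /vmfs/volumes/datastore1

    # Live, in-place upgrade of the volume to VMFS-5
    # ("datastore1" is a placeholder label; all hosts accessing the
    # volume must already support VMFS-5)
    esxcli storage vmfs upgrade -l datastore1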
7. Increasing VMFS-5 File Sharing Limits in vSphere 5.1
•In previous versions of vSphere, the maximum number of hosts which
could share a read-only file (linked clone base disk) on VMFS was 8.
•In vSphere 5.1, this has been increased to 32.
•VMFS is now as scalable as NFS for linked-clones.
8. VOMA - vSphere On-Disk Metadata Analyzer
•VOMA is a VMFS metadata consistency checker tool which will be made
available in the CLI of vSphere 5.1 ESXi systems.
•It has the ability to check various on-disk metadata structures on a given
VMFS datastore (both versions 3 & 5) and report any inconsistencies.
•VOMA is not a data recovery tool!
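A minimal usage sketch (the device identifier below is a placeholder; the datastore should be quiet, with VMs powered off or migrated first):

    # Find the device and partition backing the VMFS datastore
    esxcli storage vmfs extent list

    # Run a metadata consistency check against that partition
    # (the naa identifier is a placeholder)
    voma -m vmfs -d /vmfs/devices/disks/naa.600508b1001c578e:1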
10. VAAI Primitives
Primitive                                        vSphere 4.1   vSphere 5.0   vSphere 5.1
ATS (Atomic Test & Set)                          Yes           Yes           Yes
XCOPY (Clone)                                    Yes           Yes           Yes
Write Same (Zero)                                Yes           Yes           Yes
Full File Clone (NAS)                            No            Yes           Yes
Fast File Clone (NAS)                            No            Yes           Yes
Reserve Space (NAS)                              No            Yes           Yes
Extended Statistics (NAS)                        No            Yes           Yes
Thin Provisioning OOS (Out Of Space) Alarm/VM Stun   No        Yes           Yes
Thin Provisioning UNMAP                          No            Yes*          Yes*
* Manual reclamation only – see the UNMAP note on the next slide.
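To check which block primitives a given device supports from the host, a minimal sketch (the device identifier is a placeholder):

    # Reports ATS, Clone (XCOPY), Zero (Write Same) and Delete (UNMAP)
    # support status for the device; the naa identifier is a placeholder
    esxcli storage core device vaai status get -d naa.600508b1001c578e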
11. A note about UNMAP - Dead Space Reclamation
•Dead space is previously written blocks that are no longer used, for
instance after a Storage vMotion operation moves a VM off a volume.
•Through VAAI, the storage system can now reclaim these dead blocks.
•Although the objective is to make this procedure automated, the
mechanism is currently only supported via a manually issued vmkfstools
command in vSphere 5.0 & 5.1 (see the sketch below).
•More detail on the VAAI UNMAP primitive can be found here –
http://kb.vmware.com/kb/2007427
[Diagram: a VM is Storage vMotioned from VMFS volume A to volume B;
the VM's file data blocks left behind on volume A are released through a
manually issued vmkfstools command.]
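The manual reclaim is run from within the datastore itself; a minimal sketch (the datastore name and percentage are placeholders):

    # Change into the datastore to be reclaimed ("datastore1" is a placeholder)
    cd /vmfs/volumes/datastore1

    # Reclaim dead space: creates a temporary balloon file covering 60% of
    # the free space, issues SCSI UNMAPs for those blocks, then removes it
    vmkfstools -y 60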
12. VAAI NAS Support for vCloud Director
•vSphere 5.0 introduced the offloading of linked clones for VMware View to
native snapshots on the array via NAS VAAI primitives.
•vSphere 5.1 will allow storage array based snapshots to be used by
vCloud Director vApps, leveraging the VAAI Fast File Clone primitive.
 • vCloud Director vApps are based on linked clones.
•This will minimize CPU & memory usage on the hosts, and network
bandwidth consumption, in vCloud Director deployments using NFS.
•This will also require a special VAAI NAS plug-in from the storage
vendor (a quick check from the host is sketched below).
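Whether a vendor VAAI NAS plug-in is active for an NFS datastore can be checked from the host; a minimal sketch:

    # The Hardware Acceleration field indicates whether a vendor
    # VAAI NAS plug-in is active for each NFS mount
    esxcli storage nfs list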
14. Storage I/O Control Revisited
[Diagram: two panels, "What you see" vs. "What you want to see", each
showing an online store, Microsoft Exchange and a data mining workload
sharing a datastore.]
15. Storage I/O Control Enhancements in vSphere 5.1
•Stats Only Mode
 • SIOC is now automatically enabled in stats-only mode.
 • It doesn't enforce throttling, but gathers statistics.
 • This gives more granular performance statistics in the vSphere client.
 • Storage DRS can also use these statistics for characterizing new
   datastores added to a datastore cluster.
•Automatic Threshold Computation
 • A new automatic latency threshold detection mechanism has been
   added.
 • The default SIOC latency threshold in previous versions was 30 msecs,
   and we relied on customers selecting the appropriate threshold.
 • The latency threshold is now set automatically using device modeling
   (an I/O injector mechanism) rather than a fixed default.
16. SIOC Automatic Threshold Detection in vSphere 5.1
•Through device modeling, SIOC determines the peak throughput of the
device.
•It first measures the peak latency value, i.e. the latency observed when
the throughput is at its peak.
•The latency threshold is then set (by default) to 90% of this value.
•The admin still has the option to:
 • Change the % value.
 • Manually set the congestion threshold.
[Diagram: latency vs. load and throughput vs. load curves, marking the
latency (Lpeak) and throughput (Tpeak) values reached at peak load.]
17. Storage DRS Revisited
•Storage DRS was introduced in vSphere 5.0, and has since become
recognised as one of VMware’s more innovative features.
•Benefits of Storage DRS:
• Automatic selection of the best datastore for your initial VM
placement, avoiding hot-spots, disk space imbalances & I/O
imbalances
• Advanced balancing mechanism to avoid storage performance
bottlenecks or “out of space” problems using Storage vMotion
• Smart Placement Rules which allow the placing of VMs with a similar
task on different datastores, as well as keeping VMs together on the
same datastore when required
•Storage DRS works on VMFS-5, VMFS-3 & NFS datastores.
18. Storage DRS Enhancements in vSphere 5.1 (1 of 2)
•vCloud Director Interoperability/Support
• The major enhancement in Storage DRS in vSphere 5.1 is to have
interoperability with vCloud Director
• vCloud Director will use Storage DRS for the initial placement of
vCloud vApps during Fast Provisioning
• vCloud Director will also use Storage DRS for the on-going
management of space utilization and I/O load balancing
19. Storage DRS Enhancements in vSphere 5.1 (2 of 2)
•SDRS introduces a new datastore correlation detector.
• Datastore correlation means datastores are backed by the same disk spindles.
•If we see latency increases on different datastores when load is placed
on one datastore, we assume the datastores are correlated.
•Anti-Affinity rules (keeping VMs or VMDKs apart on different datastores)
can also use correlation to ensure the VMs/VMDKs are on different
spindles.
[Diagram: a datastore cluster whose datastores are all backed by the
same storage array.]
20. Storage vMotion 5.1 Enhancements
•In vSphere 5.1 Storage vMotion performs up to 4 parallel disk migrations
per Storage vMotion operation.
• In previous versions, Storage vMotion used to copy virtual disks serially.
• This does not impact the ability to do concurrent Storage vMotion operations
per datastore.
22. 1: Software FCoE Adapter
•vSphere 5.0 introduced a new software FCoE adapter.
•A software FCoE adapter is software code that performs some of the
FCoE processing & can be used with a number of NICs that support
partial FCoE offload.
•The software adapter needs to be activated, similar to Software iSCSI.
•In vSphere 5.1, Boot from Software FCoE enables an ESXi host to boot
from an FCoE LUN using a Network Interface Card with FCoE boot
capabilities and VMware's Software FCoE driver.
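Activation follows the same pattern as Software iSCSI; a minimal sketch (the vmnic name is a placeholder):

    # List NICs capable of partial FCoE offload
    esxcli fcoe nic list

    # Activate the software FCoE adapter on a capable NIC
    # ("vmnic4" is a placeholder)
    esxcli fcoe nic discover -n vmnic4

    # Verify the resulting FCoE adapter
    esxcli fcoe adapter list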
23. 2: Support 16Gb FC HBAs
• VMware introduced support for 16Gb FC HBAs with vSphere 5.0.
However, the 16Gb HBA had to be throttled to work at 8Gb.
• vSphere 5.1 introduces support for 16Gb FC HBAs running at 16Gb.
• There is no 16Gb end-to-end support for FC in vSphere 5.1, so to get
full bandwidth, you will need to zone to multiple 8Gb FC array ports as
shown below.
[Diagram: a host with a 16Gb FC HBA zoned through the fabric to
multiple 8Gb array ports.]
25. Advanced IO Device Management (IODM)
•New commands in vSphere 5.1 to help administrators monitor &
troubleshoot issues with I/O devices and fabrics.
•Enable diagnosis and querying of Fibre Channel, FCoE, iSCSI & SAS
Protocol Statistics.
•The commands provide layered statistic information to narrow down
issues to ESXi, HBA, Fabric and Storage Port.
• Includes framework to log frame loss and other critical events.
• Includes options to initiate an HBA reset.
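The new commands sit under the esxcli storage san namespace; a minimal sketch for Fibre Channel (the adapter name is a placeholder, and the same pattern applies to the fcoe, iscsi and sas sub-namespaces):

    # Adapter attributes and per-adapter protocol statistics
    esxcli storage san fc list
    esxcli storage san fc stats get

    # Logged events (frame loss and other critical events)
    esxcli storage san fc events get

    # Reset an HBA ("vmhba2" is a placeholder)
    esxcli storage san fc reset -A vmhba2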
26. Advanced IO Device Management (IODM)
[Screenshot: some of the detail you can get from ESXi with the new
IODM feature.]
27. SSD Monitoring
•VMware provides a default plugin for monitoring certain SSD attributes in
vSphere 5.1:
• Media Wearout Indicator
• Temperature
• Reallocated Sector Count
•Enables customers to query SMART details for SAS and SATA SSDs.
• SMART - Self Monitoring, Analysis And Reporting Technology
• A monitoring system for hard disk drives
• Works on non-SSD drives too
•VMware provides a mechanism for other SSD vendors to provide their
own plugins for monitoring additional statistics.
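Querying the SMART attributes of a device from the host is a one-liner; a minimal sketch (the device identifier is a placeholder):

    # Dumps SMART attributes (media wearout indicator, temperature,
    # reallocated sector count, ...); the identifier is a placeholder
    esxcli storage core device smart get -d naa.600508b1001c578e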
29. Space Efficient Sparse Virtual Disks (1 of 2)
•A new Space-Efficient (SE) Sparse Virtual Disk aims to address certain
limitations with virtual disks:
 1. A variable block allocation unit size.
    Currently, linked clones have a 512-byte block allocation size, which
    leads to alignment and partial-write issues. SE Sparse disks have
    variable block allocation sizes, which can be tuned to suit the
    applications running in the Guest OS and the storage array.
 2. Stale/stranded data in the Guest OS filesystem/database.
    SE Sparse disks add an automated mechanism for reclaiming this
    stranded space.
•A future release of VMware View will be required to use SE Sparse
Disks. This is the only use case defined thus far.
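For the curious, SE Sparse disks could also be created by hand on a 5.1 host for testing; the sesparse disk type flag below comes from publicly documented experiments rather than these slides, so treat it as an assumption (and unsupported outside View):

    # Hand-create a 10GB SE Sparse disk for testing (assumed "sesparse"
    # disk type; unsupported outside VMware View)
    vmkfstools -c 10g -d sesparse /vmfs/volumes/datastore1/test/test-sesparse.vmdk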
30. Space Efficient Sparse Virtual Disks (2 of 2)
[Diagram: the SE Sparse space reclaim flow.]
 1. VMware Tools initiates a wipe, scanning the Guest OS filesystem for
    unused space.
 2. The VMkernel is informed about the unused blocks via SCSI UNMAP
    through the vSCSI layer in ESXi.
 3. A shrink is initiated: the SE Sparse disk is reorganised to create
    contiguous free space at the end of the disk, and a SCSI UNMAP
    command is issued to reclaim the blocks on the array.
32. Introducing Virtual Flash
•Integrate solid state storage into the vSphere storage stack, making
flash a new tier in vSphere.
•Permit flash storage consumers to reserve, access, and use flash
storage in a flexible manner.
•A mechanism to insert 3rd party flash services into the vSphere stack.
•VM-transparent – sharing a pool of flash resources based on
reservations, shares and limits.
•VM-aware – a dedicated chunk of cache is assigned to the VM.
[Diagram: a Flash Infrastructure layer pooling the hosts' flash devices,
consumed by cache software.]
33. Caching Modes
[Diagram: three virtual machines compared – one without a local flash
cache, one using a VM-transparent flash cache, and one using a
VM-aware flash cache where the cache is presented as a block device to
the VM – all served by cache software on the Flash Infrastructure layer.]
35. Per VM Data Services on storage systems
Goals
 • Provide customers the option to use per-VM data operations on the
   storage array.
 • Build a framework to offload per-VM data operations to the storage
   array.
Challenge – a granularity mismatch between vSphere and storage systems:
 • Data management on the storage array is at LUN or Volume granularity.
 • Data management in vSphere is at the VMDK level.
36. Introducing Virtual Volumes...
•A Virtual Volume (VVOL) is a VMDK (or one of its derivatives – a clone,
snapshot or replica) stored natively inside a storage array.
•The storage array is now involved in the VM lifecycle by virtue of
managing VM storage natively.
 • Application/VM requirements can now be conveyed to the storage system.
 • Policies are set at Virtual Volume granularity.
How do vSphere hosts access these VMDK objects?
Is this model scalable?
37. Scalable Connectivity for Virtual Volumes
•A Protocol Endpoint (PE) is an IO channel from the host to the entire
storage system.
 • A PE is a SCSI LUN or an NFS mount point, but holds no data itself.
 • VMDKs are not visible on the network.
 • The VM admin configures multipathing, path policies, etc., once per PE.
[Diagram: a traditional storage system with I/Os to each LUN or Volume,
versus a VVOL-enabled storage system with I/Os to a single Protocol
Endpoint.]
•What about capacity management, access control and storage
capabilities?
38. Capacity Management for Virtual Volumes
•On a VVOL-enabled storage system, a Storage Container is a logical
entity which describes:
 • How much physical space can be allocated for VMDKs.
 • Access control.
 • A set of data services (snapshot, clone, replication, etc.) offered on
   any part of that storage space.
 • A storage container can span the entire data center.
•It is created and managed by the storage administrator, and used by
the vSphere administrator to store VMs.
[Diagram: a VVOL-enabled storage system where the storage container
manages capacity and access control on the storage system, and defines
its storage capabilities.]
40. Distributed Storage Technology is…
•Many things:
 • A new VMware-developed Storage Solution
 • A Storage Solution that is fully integrated with vSphere
 • A platform for Policy Based Storage to simplify Virtual Machine
   deployment decisions
 • A Highly Available Clustered Storage Solution
 • A Scale-Out Storage System
 • A Quality Of Service implementation (for its storage objects)
41. Distributed Storage Hardware Requirements Summary
•Hardware requirements:
 • Server on the vSphere HCL
 • 10G NIC (recommended)
 • SAS/SATA RAID Controller (with “passthru” or “HBA” mode)
 • At least one SAS/SATA SSD and at least one SAS/SATA HDD
• Not every node in a Distributed Storage cluster needs to bear storage
• The expected overhead of the Distributed Storage s/w itself is ~10%
42. Distributed Storage Design Principles
• Distributed Storage aggregates locally attached storage on each ESXi
  host in the cluster.
• The storage is a combination of SSD & spinning disks.
• Datastores consist of multiple storage components distributed across
  the ESXi hosts in the cluster.
• Storage Policy Profiles are built with certain desired capabilities
  (Availability, Reliability & Performance).
• The VMDK is then instantiated through the policy profile settings
  (based on VM requirements).
[Diagram: a virtual machine's virtual disk stored as RAIN-1 replicas
(replica-1, replica-2) on the Distributed Storage cluster datastore,
spread across the ESX hosts.]
43. Distributed Storage Datastore
[Diagram: a Distributed Storage Datastore spanning the SSDs and hard
disks of every host in the vSphere Distributed Storage Cluster, with
Replica 1 and Replica 2 of an object placed on different hosts.]
•The object is laid out across the cluster based on the storage policy of the
VM and the optimization goals.
•The replica may end up on any host and any storage.
44. Conclusion
•vSphere 5.1 has many new compelling storage features.
• VMFS Scalability and a new consistency checking tool
• VAAI Enhancements for View & vCloud Director
• vCloud Director interoperability with Storage DRS & Profile Driven
Storage
• Storage I/O Control, Storage DRS & Storage vMotion enhancements
• Additional protocol features (FC, FCoE & iSCSI)
• More visibility into low level storage behaviours with IODM & SSD
Monitoring
• A new Space-Efficient Sparse Virtual Disk with granular block
allocation size and space reclaim mechanism.
•VMware has many additional storage initiatives underway to
provide even greater integration with the underlying hardware.
45. Questions?
http://CormacHogan.com
http://blogs.vmware.com/vSphere/Storage
@VMwareStorage