What’s New in VMware®
Virtual SAN (VSAN)
Technical White Paper
v0.1c / August 2013
Table of Contents
1. Introduction
  1.1 Software-Defined Datacenter
  1.2 Software-Defined Storage
  1.3 What is VSAN?
2. Requirements
  2.1 vSphere Requirements
    2.1.1 vCenter Server
    2.1.2 vSphere
  2.2 Storage Requirements
    2.2.1 Disk Controllers
    2.2.2 Hard Disk Drives
    2.2.3 Solid-State Disks
  2.3 Network Requirements
    2.3.1 Network Interface Cards
    2.3.2 Supported Virtual Switch Types
    2.3.3 VMkernel Network
3. Install and Configure
  3.1 Creating a Virtual SAN Cluster
    3.1.1 Manually Add Disks to Disk Groups
    3.1.2 Automatic Creation of Disk Groups
    3.1.3 Virtual SAN Cluster Creation Example
  3.2 The Virtual SAN Datastore
    3.2.1 VSAN Datastore Properties
  3.3 Defining Virtual Machine Requirements
4. Architecture Details
  4.1 Distributed RAID
  4.2 Witnesses and Replicas
  4.3 The Role of Solid-State Disks
    4.3.1 Purpose of Read Cache
    4.3.2 Purpose of Write Cache
5. Storage Policy Based Management
  5.1 Virtual SAN Capabilities
    5.1.1 Number of Failures to Tolerate
    5.1.2 Number of Disk Stripes per Object
    5.1.3 Flash Read Cache Reservation
    5.1.4 Object Space Reservation
    5.1.5 Force Provisioning
  5.2 Witness Example
  5.3 VM Storage Policies
    5.3.1 Enabling VM Storage Policies
    5.3.2 Creating VM Storage Policies
    5.3.3 Assigning a VM Storage Policy During Virtual Machine Provisioning
    5.3.4 Virtual Machine Objects
    5.3.5 Matching Resource
    5.3.6 Compliance
Conclusion
Acknowledgments
About the Author
1. Introduction
1.1 Software-Defined Datacenter
VMware introduced its vision for the software-defined datacenter (SDDC) at VMworld®, its annual user conference, in 2012. The SDDC is VMware's architecture for the cloud, in which all pillars of the datacenter, including compute, storage, networking and associated services, are virtualized. This white paper looks at one aspect of the SDDC, the storage pillar, and specifically at how a new product called Virtual SAN (VSAN) fits into this vision.
1.2 Software-Defined Storage
VMware's plan for software-defined storage is to focus on a set of initiatives around local storage, shared storage and storage/data services. In essence, the goal is to make VMware vSphere® a platform for storage services.
Software-defined storage is designed to provide storage services and service-level agreement automation
through a software layer on the hosts that integrates with and abstracts the underlying hardware.
Historically, storage was configured and deployed at the beginning of a project and was not changed during its life cycle. If some aspect or feature of the LUN or volume backing a set of virtual machines had to change, in many cases the original LUN or volume was deleted and a new one with the required features was created in its place. This was a very intrusive, time-consuming operation that might have taken weeks to coordinate. With software-defined storage, virtual machine storage requirements can be instantiated dynamically. There is no need to repurpose LUNs or volumes: virtual machine workloads might change over time, and the underlying storage can be adapted to the workload at any time.
A key factor for software-defined storage is Storage Policy Based Management (SPBM). This is featured in the
vSphere 5.5 release. SPBM can be considered the next generation of VMware vSphere Storage Profile features.
vSphere Storage Profile was first introduced with vSphere 5.0. In vSphere 5.5, an enhanced feature called
VM Storage Policies is introduced.
SPBM is a critical component for VMware in implementing software-defined storage. Using SPBM and
VMware vSphere APIs, the underlying storage technology surfaces an abstracted pool of storage space with
various capabilities that is presented to vSphere administrators for virtual machine provisioning. The capabilities
can relate to performance, availability or storage services such as VMware vSphere Thin Provisioning. A vSphere
administrator can then create a VM Storage Policy using a subset of the capabilities required by the application
running in the virtual machine. At deployment time, the vSphere administrator selects the VM Storage Policy
appropriate for the needs of that virtual machine. SPBM will push the requirements down to the storage layer.
Datastores that provide the capabilities placed in the VM Storage Policy are made available for selection. This
means that the virtual machine is always instantiated on the appropriate underlying storage based on the
requirements placed in the VM Storage Policy. If the virtual machine’s workload changes over time, it is
simply a matter of applying a policy to the virtual machine with new, updated requirements that reflect the
new workload.
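The capability-matching behavior described above can be reduced to simple set logic: a datastore qualifies for a policy when its surfaced capabilities cover every requirement in the VM Storage Policy. The following is an illustrative sketch only; the function and capability names are hypothetical, not the vSphere API.

```python
# Illustrative model of SPBM datastore matching (hypothetical names, not
# the vSphere API): a datastore is made available for selection when its
# surfaced capabilities include every requirement in the policy.

def matching_datastores(policy_requirements, datastores):
    """Return the datastores whose capabilities cover the policy."""
    required = set(policy_requirements)
    return [name for name, caps in sorted(datastores.items())
            if required <= set(caps)]

datastores = {
    "vsanDatastore": {"hostFailuresToTolerate", "stripeWidth",
                      "forceProvisioning", "proportionalCapacity"},
    "localDatastore": {"thinProvisioning"},
}
policy = {"hostFailuresToTolerate", "stripeWidth"}
print(matching_datastores(policy, datastores))  # ['vsanDatastore']
```

Only the datastore that surfaces every required capability is offered during provisioning, which is the "matching resource" behavior the provisioning wizard exposes.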
This document thoroughly examines VSAN, which provides storage services by utilizing local disks on vSphere hosts.
1.3 What is VSAN?
VSAN is a new storage solution from VMware that is fully integrated with vSphere. It automatically aggregates
server disks in a cluster to create shared storage that can be rapidly provisioned from VMware vCenter during
VM creation. It is an object-based storage system and a platform for VM Storage Policies designed to simplify
virtual machine storage placement decisions for vSphere administrators. It is fully integrated with core vSphere
features such as VMware vSphere High Availability (vSphere HA), VMware vSphere Distributed Resource Scheduler™
(vSphere DRS) and VMware vSphere vMotion®. Its goal is to provide both high availability and scale-out storage
functionality. It can also be considered in the context of quality of service (QoS) because VM Storage Policies can
be created that define the level of performance and availability required on a per–virtual machine basis.
2. Requirements
The following section details the hardware and software requirements that must be met to create a VSAN cluster.
2.1 vSphere Requirements
2.1.1 vCenter Server
VSAN requires VMware vCenter Server™ version 5.5 at a minimum. Both the Microsoft Windows version of vCenter Server and the VMware vCenter Server Appliance™ can manage VSAN. VSAN is configured and monitored via the VMware vSphere Web Client, which also requires version 5.5.
2.1.2 vSphere
VSAN requires at least three vSphere hosts, each with local storage, to form a supported VSAN cluster. This minimum enables the cluster to tolerate at least one host, disk or network failure. The vSphere hosts require vSphere version 5.5 at a minimum.
2.2 Storage Requirements
2.2.1 Disk Controllers
Each vSphere host participating in the VSAN cluster requires a disk controller. This can be a SAS/SATA host bus
adapter (HBA) or a RAID controller. However, the RAID controller must function in what is commonly referred to
as pass-through mode or HBA mode. In other words, it must be able to pass up the underlying hard disk drives
(HDDs) and solid-state drives (SSDs) as individual disk drives without a layer of RAID sitting on top. This is
necessary because VSAN will manage any RAID configuration when policy attributes such as availability and
performance for virtual machines are defined. The VSAN Hardware Compatibility List (HCL) will call out the
controllers that have passed the testing phase.
Each vSphere host in the cluster that contributes its local storage to VSAN must have at least one HDD and at
least one SSD.
2.2.2 Hard Disk Drives
Each vSphere host must have at least one HDD when participating in the VSAN cluster. HDDs make up the
storage capacity of the VSAN datastore. Additional HDDs increase capacity but might also improve virtual
machine performance. This is because virtual machine storage objects might be striped across multiple spindles.
This is covered in far greater detail when VM Storage Policies are discussed later in this paper.
2.2.3 Solid-State Disks
Each vSphere host must have at least one SSD when participating in the VSAN cluster. The SSD provides both a
write buffer and a read cache. The more SSD capacity the host has, the greater the performance, because more
I/O can be cached.
NOTE: The SSDs do not contribute to the overall size of the distributed VSAN datastore.
2.3 Network Requirements
2.3.1 Network Interface Cards
Each vSphere host must have at least one network interface card (NIC), and the NIC must be 1Gb capable. As a best practice, however, VMware recommends 10Gb NICs. For redundancy, a team of NICs can be configured on a per-host basis. VMware considers NIC teaming a best practice, but it is not required to build a fully functional VSAN cluster.
2.3.2 Supported Virtual Switch Types
VSAN is supported on both the VMware vSphere Distributed Switch™ (VDS) and the vSphere Standard Switch
(VSS). No other virtual switch types are supported in the initial release.
2.3.3 VMkernel Network
On each vSphere host, a VMkernel port for VSAN communication must be created. The VMkernel port is labeled
Virtual SAN. This port is used for intercluster node communication and also for reads and writes when one of the
vSphere hosts in the cluster owns a particular virtual machine, but the actual data blocks making up the virtual
machine files are located on a different vSphere host in the cluster. In this case, I/O must traverse the network
configured between the hosts in the cluster.
3. Install and Configure
This chapter includes a detailed description of the install and configure process, along with any initial
preparation that might be required before deployment of a VSAN cluster.
3.1 Creating a Virtual SAN Cluster
The creation of a VSAN cluster requires many of the same steps a vSphere administrator might take to set up a
DRS or vSphere HA cluster. A cluster object is created in the vCenter server inventory, and one can either choose
to enable the VSAN cluster functionality and then add in the hosts to the cluster or add the hosts first and then
enable VSAN. When the VSAN functionality is enabled, a single option asks the administrator to choose a manual or automatic cluster. This gives the vSphere administrator a choice: either VSAN discovers all the local disks on the hosts and automatically adds them to the VSAN datastore, or the administrator manually selects the disks to add to the cluster.
3.1.1 Manually Add Disks to Disk Groups
If this option is selected, the VSAN cluster is still formed, but the VSAN datastore is initially 0 bytes in size. The administrator must manually create disk groups on a per-host basis, adding exactly one SSD and at least one HDD to each disk group. Because each disk group can contain only a single SSD, a host might have multiple disk groups, each containing one SSD and any number of HDDs. As disk groups are created on each host, the size of the VSAN datastore grows according to the amount of HDD capacity added.
NOTE: The SSDs function as read caches and write buffers and are not included in the capacity of the
VSAN datastore.
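The disk-group rules above can be sketched as a small model (illustrative names, capacities assumed in GB): exactly one SSD per group, at least one HDD, and only the HDDs contribute capacity.

```python
# Simplified disk-group model: a group holds exactly one SSD (cache only)
# plus at least one HDD; only HDD capacity grows the VSAN datastore.

def disk_group_capacity_gb(ssd_gb, hdd_gbs):
    """Capacity a disk group contributes to the VSAN datastore."""
    if len(hdd_gbs) < 1:
        raise ValueError("a disk group needs at least one HDD")
    # The single SSD (ssd_gb) is a read cache / write buffer; it adds
    # no capacity to the datastore.
    return sum(hdd_gbs)

# One 200GB SSD fronting three 1,000GB HDDs contributes 3,000GB.
print(disk_group_capacity_gb(200, [1000, 1000, 1000]))  # 3000
```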
3.1.2 Automatic Creation of Disk Groups
If an automatic method is chosen to create the VSAN cluster, VSAN will automatically discover local HDDs and
SSDs on each host and build disk groups on every host in the cluster. All hosts with valid storage have a disk
group containing their local HDDs and SSDs. Finally, after this is completed, the VSAN datastore is created
and its size reflects the capacity of all the HDDs across all the hosts in the cluster, except for some
metadata overhead.
Hosts that have no local storage, or no valid storage, can still access the VSAN datastore. This is a very advantageous feature of VSAN, because a VSAN cluster can now be scaled on compute requirements as well as on storage requirements.
3.1.3 Virtual SAN Cluster Creation Example
After the host, storage and networking requirements are met, the creation of the VSAN cluster can commence.
As mentioned, the configuration is very much the same as the creation of a vSphere HA or DRS cluster, and it is
user interface (UI) driven. To create a VSAN cluster, create a cluster object in the vCenter inventory, add the
vSphere hosts to the cluster object that you intend to have participate in the VSAN, and enable VSAN on the cluster.
Figure 1. Create a New VSAN Cluster
3.2 The Virtual SAN Datastore
3.2.1 VSAN Datastore Properties
The size of a VSAN datastore is governed by the number of HDDs per vSphere host and the number of vSphere hosts in the cluster, minus some metadata overhead. For example, if a host has 6 x 2TB HDDs and there are eight hosts in the cluster, the raw capacity is 6 x 2TB x 8 = 96TB.
The number of supported disks per host and the number of hosts in the cluster govern the maximum size of a
VSAN datastore.
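The sizing arithmetic above is straightforward to script; a minimal sketch, ignoring the metadata overhead the text mentions:

```python
def raw_capacity_tb(hosts, hdds_per_host, hdd_tb):
    # Raw VSAN datastore capacity before metadata overhead.
    # SSDs are excluded: they act as cache, not capacity.
    return hosts * hdds_per_host * hdd_tb

# The example above: eight hosts, each with 6 x 2TB HDDs.
print(raw_capacity_tb(hosts=8, hdds_per_host=6, hdd_tb=2))  # 96
```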
After the VSAN datastore is formed, a number of datastore capabilities are surfaced up into vCenter Server.
These include stripe width, component failures to tolerate, force provisioning and proportional capacity. The
storage layer is surfacing what it can support in terms of performance, availability and provisioning. These
capabilities will be discussed in greater detail later in this paper, but for now it’s important to know that they can
be used to create a virtual machine policy that defines the storage requirements of a virtual machine. Further,
they enable a vSphere administrator to now specify other capabilities that must be provided by the underlying
storage, including performance, availability and data services, during the virtual machine provisioning process.
All of this enables the administrator to select the correct storage for that virtual machine.
3.3 Defining Virtual Machine Requirements
On creation of the VSAN cluster, a vsanDatastore is also created. The vsanDatastore has a set of capabilities that
it surfaces up to vCenter.
When a vSphere administrator sets out to design a virtual machine, the application that runs in that virtual
machine will have a set of requirements, including storage requirements.
The vSphere administrator uses a VM Storage Policy to specify the application’s storage requirements. The
VM Storage Policy contains storage requirements in the form of a set of the storage capabilities.
The VM Storage Policy indicates which of these storage capabilities the virtual machine must have to meet the
application’s storage requirements.
In effect, then, storage provides capabilities and virtual machines consume them via requirements placed in the
VM Storage Policy.
4. Architecture Details
4.1 Distributed RAID
In traditional storage environments, redundant array of independent disks (RAID) refers to disk redundancy
inside the storage chassis to withstand the failure of one or more disk drives. A VSAN cluster uses the concept
of distributed RAID, whereby an environment can now contend with the failure of a vSphere host—or
components within that host such as a disk drive or network interface—and continue to provide complete
functionality for all virtual machines. Availability is now defined on a per–virtual machine basis through the use
of VM Storage Policies, with which vSphere administrators can define how many host, network or disk failures a
virtual machine can tolerate in a VSAN cluster. If an administrator sets a zero capability in the VM Storage Policy,
a host or disk failure can impact the availability of a virtual machine.
Using VM Storage Policies along with VSAN distributed RAID architecture—whereby copies of the virtual
machine and its contents can reside on multiple nodes in the cluster—it is not necessary to migrate data on a
failing node to other nodes in the cluster in the event of a failure.
4.2 Witnesses and Replicas
Replicas are copies of the virtual machine storage objects that are instantiated when an availability capability is
specified for the virtual machine. The availability capability dictates how many replicas are created. It enables
virtual machines to continue running with a full complement of objects when there are host, network or disk
failures in the cluster.
Witnesses are part of every storage object. They are components that do not contain data, only metadata. Their
purpose is to serve as tiebreakers when availability decisions are made in the VSAN cluster. A witness consumes
about 2MB of space for metadata on the VSAN datastore.
NOTE: For an object to be accessible in VSAN, more than 50 percent of its components must be accessible.
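The greater-than-50-percent rule can be expressed directly. The check below is an illustrative sketch that counts replicas and witnesses alike as components of the object:

```python
def object_accessible(reachable, total):
    # Strictly more than half of an object's components (replicas plus
    # witnesses) must be reachable for the object to remain accessible.
    return 2 * reachable > total

# Two replicas plus one witness: one component can be lost, not two.
print(object_accessible(2, 3))  # True
print(object_accessible(1, 3))  # False
```

Note that an exact 50/50 split fails the check, which is why witnesses exist: they break ties so that one side of a partition can claim a strict majority.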
4.3 The Role of Solid-State Disks
Solid-state disks (SSDs) have two purposes in Virtual SAN. These are to provide a read cache and a write buffer.
This dramatically improves the performance of virtual machines. In some respects, VSAN can be compared to a
number of “hybrid” storage solutions in the market, which also use a combination of SSD and HDD storage to
boost the performance of the I/O and which have the ability to scale out based on low-cost HDD storage.
4.3.1 Purpose of Read Cache
The read cache keeps a cache of commonly accessed disk blocks. This reduces the I/O read latency in the event
of a cache hit. The actual block that is read by the application running in the virtual machine might not be on the
same vSphere host on which the virtual machine is running. To handle this situation, VSAN distributes a
directory of cached blocks between vSphere hosts in the VSAN cluster. This enables a vSphere host to
determine if another host has data cached that is not in local cache. If this is the case, the vSphere host can
retrieve cached blocks from another host over the interconnect. If the block is not in cache on any VSAN host, it
is retrieved directly from the HDD.
4.3.2 Purpose of Write Cache
The write cache behaves as a nonvolatile write buffer. The fact that we can use SSD storage for writes also
reduces the latency for write operations.
Because the writes go to SSD storage, we must of course ensure that there is a copy of the data elsewhere in the
VSAN cluster. All virtual machines deployed to VSAN have an availability policy setting ensuring that at least one
additional copy of the virtual machine data is available. This includes the write cache contents.
After a write is initiated by the application running inside of the guest operating system (OS), the write is sent
in parallel to both the local write cache on the owning host and also to the write cache on the remote host(s).
The write must be committed to the SSD on both hosts before it is acknowledged.
This means that in the event of a host failure, we also have a copy of the data on another SSD in the VSAN
cluster so that no data loss will occur. The virtual machine will access the replicated copy of the data on another
VSAN host via the VSAN interconnect.
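The acknowledgment rule described above can be sketched as a toy model. The real VSAN data path is far more involved; this only illustrates that the guest sees the acknowledgment after every replica's SSD write buffer holds the block.

```python
def replicated_write(block, replica_buffers):
    # The write is sent to every replica's SSD write buffer (in parallel
    # on the real data path) and is acknowledged only once all replicas
    # have committed it.
    for buf in replica_buffers:
        buf.append(block)
    return all(block in buf for buf in replica_buffers)

local_ssd, remote_ssd = [], []
print(replicated_write("block-42", [local_ssd, remote_ssd]))  # True
```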
5. Storage Policy Based Management
In the introduction, it was mentioned that Storage Policy Based Management (SPBM) now plays a major role in
the policies and automation for the software-defined storage vision of VMware. Using VM Storage Policies,
administrators can specify a set of required storage capabilities for a virtual machine, or more specifically a set
of requirements for the application running in the virtual machine. This set of required storage capabilities is
pushed down to the storage layer, which then checks where the storage objects for that virtual machine can be
instantiated to match this set of requirements. For instance, are there enough stripe widths available in the
cluster if this virtual machine requires it? Or are there enough hosts in the cluster to provide the number of
failures to tolerate? If the VSAN datastore understands the capabilities placed in the VM Storage Policy, it is said
to be a matching resource and is highlighted as such in the provisioning wizard. Subsequently, when the virtual
machine is deployed, if the requirements in the VM Storage Policy attached to the virtual machine can be met by
the VSAN datastore, the datastore is said to be compliant from a storage perspective in its own summary
window. If the VSAN datastore is overcommitted or cannot meet the capability requirements, it might still be
shown as a matching resource in the deployment wizard, but the provisioning task might fail.
SPBM now provides an automated, policy-driven mechanism for selecting an appropriate datastore for virtual
machines, based on the required storage capabilities placed in the VM Storage Policies.
5.1 Virtual SAN Capabilities
This section examines the required storage capabilities that can be placed in a VM Storage Policy. These
capabilities, which are surfaced by the VSAN datastore when the cluster is configured successfully, highlight
the availability, performance and sizing requirements that can then be required of the storage on a per–virtual
machine basis.
If the requirements are not stated correctly, in other words, if the VM Storage Policy contains a capability that the VSAN datastore does not surface, the VSAN datastore will not appear as a matching resource during provisioning.
5.1.1 Number of Failures to Tolerate
This property requires the storage object to tolerate at least NumberOfFailuresToTolerate of concurrent host,
network or disk failures in the cluster and still ensure availability of the object.
If this property is populated, it specifies that configurations must contain at least NumberOfFailuresToTolerate + 1
replicas and might also contain an additional number of witness objects to ensure that the object’s data is available
(to maintain quorum in cases of split-brain), even in the presence of as many as NumberOfFailuresToTolerate
concurrent host failures. Therefore, to tolerate n failures, at least (n + 1) copies of the object must exist and at
least (2n + 1) hosts are required.
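These minimums follow directly from NumberOfFailuresToTolerate and can be computed as follows (a small helper; the function name is ours, not a VSAN API):

```python
def ftt_minimums(failures_to_tolerate):
    """Minimum object copies and hosts implied by
    NumberOfFailuresToTolerate = n: (n + 1) replicas, (2n + 1) hosts."""
    n = failures_to_tolerate
    return {"replicas": n + 1, "hosts": 2 * n + 1}

print(ftt_minimums(1))  # {'replicas': 2, 'hosts': 3}  (the 3-host minimum)
print(ftt_minimums(2))  # {'replicas': 3, 'hosts': 5}
```

The n = 1 case explains the three-host minimum from section 2.1.2: two replicas plus a witness spread across three hosts.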
NOTE: Any disk failure on a single host is treated as a “failure” for this metric. Therefore, the object cannot
persist if there is a disk failure on host A and another disk failure on host B when you have
NumberOfFailuresToTolerate set to 1.
5.1.2 Number of Disk Stripes per Object
This defines the number of physical disks across which each replica of a storage object is distributed. A value
higher than 1 might result in better performance if read caching is not effective, but it will also result in a greater
use of system resources.
To understand the impact of Disk Stripes, we examine it first in the context of write operations and then in the
context of read operations. Because all writes go to the SSD write buffer, the value of an increased Disk Stripes
number might not improve write performance. This is because there is no guarantee that the new stripe will use
a different SSD. The new stripe might be in the same disk group and thus use the same SSD. The only instance in
which an increased Disk Stripes number might add value is when there are many writes to destage from SSD
to disk.
From a read perspective, an increased Disk Stripes number helps when there are many cache misses. Take the example of a virtual machine issuing 2,000 read operations per second with a cache-hit rate of 90 percent: 200 read operations per second must be serviced from HDD storage. A single HDD might not be able to service those read operations, so an increase in the Disk Stripes number would help in this case.
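The cache-miss arithmetic behind that example is simple; a sketch using integer percentages to keep the numbers exact:

```python
def hdd_read_iops(read_iops, cache_hit_percent):
    # Reads per second that miss the SSD read cache and fall through
    # to HDD storage.
    return read_iops * (100 - cache_hit_percent) // 100

# The example above: 2,000 reads/s at a 90 percent cache-hit rate.
print(hdd_read_iops(2000, 90))  # 200
```

If the resulting miss rate exceeds what a single spindle can service, striping the replica across more HDDs spreads those reads.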
In general, the default Disk Stripes number of 1 should meet the requirements of most if not all virtual machine
workloads. Disk Stripes is a requirement that should change only when a small number of high-performance
virtual machines are running.
5.1.3 Flash Read Cache Reservation
This is the amount of flash capacity reserved on the SSD as read cache for the storage object. It is specified as a
percentage of the logical size of the virtual machine disk storage object. This is expressed as a percentage value
(%) with as many as four decimal places. A fine, granular unit size is required so that administrators can express
sub–1 percent units. Take the example of a 1TB disk. If we limited the read cache reservation to 1-percent
increments, this would result in cache reservations in increments of 10GB, which in most cases is far too large for
a single virtual machine.
NOTE: It is not required to set a reservation in order to get cache. The reservation should be set to 0 unless you
are trying to solve a real performance problem.
If cache space is not reserved, the VSAN scheduler manages fair cache allocation. Cache reserved for one virtual
machine is not available to other virtual machines. Unreserved cache is shared fairly between all objects.
A final point on Flash Read Cache Reservation involves read performance. Even if read performance is already
acceptable, additional cache can keep more reads from going to the HDDs, decreasing cache misses and freeing
HDD bandwidth for destaging writes. Therefore, write performance can be improved indirectly by increasing
Flash Read Cache Reservation. However, setting a reservation should be considered only if flushing data from
SSD to HDD storage creates a bottleneck.
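The granularity point from the 1TB example above can be made concrete. The helper below is a hypothetical sketch; VSAN itself simply accepts the percentage in the policy:

```python
def cache_reservation_bytes(logical_size_bytes, reservation_pct):
    """Flash read cache reserved for an object, expressed as a
    percentage (up to four decimal places) of its logical size."""
    return round(logical_size_bytes * reservation_pct / 100)

TB, GB = 10**12, 10**9

# With only whole-percent granularity, a 1TB disk could reserve cache
# in nothing smaller than 10GB steps -- far too coarse for a single VM:
print(cache_reservation_bytes(1 * TB, 1) / GB)     # -> 10.0
# Four decimal places allow sub-1 percent reservations, e.g. 0.05%:
print(cache_reservation_bytes(1 * TB, 0.05) / GB)  # -> 0.5
```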
5.1.4 Object Space Reservation
This capability defines the percentage of the logical size of the storage object that should be reserved on HDDs
during initialization. By default, provisioning on the VSAN datastore is thin. ObjectSpaceReservation specifies,
as a percentage of the virtual machine disk, the minimum amount of capacity that must be reserved on the
VSAN datastore for a thinly provisioned storage object. If ObjectSpaceReservation is set to 100 percent, all of
the storage capacity of the virtual machine is reserved up front.
NOTE: If you provision a virtual machine and select the disk format to be thick, either lazy zero or eager zero,
this setting overrides the ObjectSpaceReservation setting in the VM Storage Policy.
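The override described in the note can be sketched as follows; the disk-format names here are illustrative placeholders, not vSphere identifiers:

```python
def effective_reservation_pct(policy_pct, disk_format="thin"):
    """A thick disk format (lazy or eager zeroed) overrides the
    ObjectSpaceReservation capability in the VM Storage Policy."""
    if disk_format in ("thick-lazy-zero", "thick-eager-zero"):
        return 100  # thick formats reserve the full logical size
    return policy_pct

print(effective_reservation_pct(25))                      # thin disk: 25
print(effective_reservation_pct(25, "thick-eager-zero"))  # override: 100
```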
TECHNICAL WHITE PAPER / 11
What’s New in VMware Virtual SAN (VSAN)
5.1.5 Force Provisioning
If this option is enabled, the object is provisioned even if the capabilities specified in the VM Storage Policy
cannot be satisfied by the resources available in the cluster at the time. VSAN attempts to bring the object into
compliance if and when resources become available. However, if there is not enough space in the cluster to
satisfy the reservation requirements of at least one replica, provisioning fails even with ForceProvisioning
enabled. This option is disabled by default.
5.2 Witness Example
The following example of a witness involves the deployment of a virtual machine that has a Disk Stripes number
of 1 and a NumberOfFailuresToTolerate number of 1. In this case, two replica copies of the virtual machine are
created. Effectively, this is a RAID-1 virtual machine with two replicas. However, with two replicas there is no way
to differentiate between a network partition and a host failure. Therefore, a third entity called the “witness” is
added to the configuration. For an object on VSAN to be available, the following two conditions must be met:
•	At least one replica must be intact for data access.
•	More than 50 percent of all components must be available.
In the above example, the object is accessible only when there is access to one replica copy and a witness or
to two replica copies. That way, at most one part of the cluster can ever access an object in the event of a
network partition.
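The two availability conditions can be expressed as a simple quorum check. This is a sketch of the rule as stated above, not VSAN's internal logic:

```python
def object_available(intact_replicas, available_components, total_components):
    """An object is accessible only if at least one replica is intact
    AND more than 50 percent of all its components (replicas plus
    witnesses) are available."""
    return intact_replicas >= 1 and 2 * available_components > total_components

# Two replicas plus one witness (NumberOfFailuresToTolerate = 1):
print(object_available(1, 2, 3))  # one replica + witness reachable -> True
print(object_available(1, 1, 3))  # a lone replica after a partition -> False
```

Without the witness, each side of a partition would hold exactly 50 percent of the components, and neither side could claim the object.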
5.3 VM Storage Policies
VM Storage Policies in VSAN works in a similar fashion to vSphere Storage Profiles, introduced in vSphere 5.0,
in that you build a policy containing your virtual machine provisioning requirements. There is, however, a major
difference. With the original vSphere Storage Profiles, the capabilities in the policy were used only to select an
appropriate datastore when provisioning the virtual machine. The new VM Storage Policies not only selects the
appropriate datastore but also communicates to the underlying storage layer that a specified virtual machine
has certain availability and performance requirements. Therefore, while the VSAN datastore might be the
destination datastore for a virtual machine provisioned with a VM Storage Policy, other policy settings stipulate
the number of replicas of the virtual machine files required for availability and might also specify a stripe-width
requirement for performance.
5.3.1 Enabling VM Storage Policies
VM Storage Policies is automatically enabled when VSAN is enabled. To manually enable VM Storage Policies,
navigate to the VMware vSphere Client™ Home page and select Rules and Profiles; the VM Storage Policies
section is located there. Click VM Storage Policies and a number of icons appear, one of which enables the
VM Storage Policies functionality. This can be enabled on a per-host or a per-cluster basis.
5.3.2 Creating VM Storage Policies
After VM Storage Policies is enabled, a vSphere administrator can create individual levels using another icon in
this window. As already mentioned, a number of capabilities are surfaced by vSphere APIs related to availability
and performance. At this point the administrator must decide which of these capabilities would be required,
from a performance and availability perspective, for the applications running inside of the virtual machines.
For example, how many component failures (hosts, network, and disk drive) does the administrator require this
virtual machine to tolerate while still continuing to function? Also, is the application running in this virtual
machine considered demanding from an IOPS perspective? If so, then a reservation of Flash read cache might
be required as a capability to meet performance. Other considerations might involve deciding if the virtual
machine is to be thinly provisioned or thickly provisioned.
In this example, a VM Storage Policy with two capabilities is being created. The capabilities are for availability
and performance. NumberOfFailuresToTolerate defines the availability and Disk Stripes defines
the performance.
Figure 2. Create a New VM Storage Policy
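A policy like the one in this example can be pictured as a small set of capability requirements. The dictionary below is a descriptive sketch; the keys are not the literal SPBM capability identifiers:

```python
# Hypothetical representation of the policy created in this example.
policy = {
    "name": "gold-vm-policy",
    "rules": {
        "NumberOfFailuresToTolerate": 1,  # availability requirement
        "DiskStripes": 2,                 # performance requirement
    },
}

def replicas_required(policy):
    """Tolerating n failures implies n + 1 replica copies of the object."""
    return policy["rules"]["NumberOfFailuresToTolerate"] + 1

print(replicas_required(policy))  # -> 2
```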
NOTE: vSphere 5.5 also supports the use of tags for provisioning. Therefore, instead of using VSAN datastore
capabilities for the creation of a VM Storage Policy, tag-based policies can be created. The use of tag-based
policies is outside the scope of this white paper; further information can be found in the vSphere storage
documentation.
5.3.3 Assigning a VM Storage Policy During Virtual Machine Provisioning
The assignment of a VM Storage Policy takes place during virtual machine provisioning. When the vSphere
administrator must select a destination datastore, they select the appropriate level from the drop-down menu of
available VM Storage Policies. The datastores are then separated into compatible and incompatible datastores,
enabling the vSphere administrator to choose the appropriate and correct placement of the virtual machine.
5.3.4 Virtual Machine Objects
When it comes to the layout of objects on the VSAN datastore, a virtual machine has a number of different
storage objects. Virtual machine disk files (VMDKs), the virtual machine namespace directory (the directory
where virtual machine configuration and log files reside), a virtual machine swap file and snapshot delta files are
all unique storage objects. Each of these objects might have different VM Storage Policies. The vSphere UI also
enables administrators to interrogate the layout of a virtual machine object and to see where each component
(stripes, replicas and witnesses) of a storage object resides.
Figure 3. Storage Object Physical Mappings
5.3.5 Matching Resource
When the virtual machine Provisioning Wizard reports that a VSAN datastore is a matching resource for a
VM Storage Policy, it guarantees only that the requirements defined in the policy are understood by the
datastore. It does not guarantee that the datastore can actually meet those requirements, so the provisioning
process might still fail. For example, if VSAN capabilities are used, only VSAN datastores will match, regardless
of whether they have the resources to provision the virtual machine.
5.3.6 Compliance
The Virtual Machine Summary tab reveals the compliant state of a virtual machine. As long as VSAN meets the
capabilities defined in the VM Storage Policy, the virtual machine is said to be compliant. In the event of a
component failure in the cluster, VSAN might not be able to meet the policy requirements. In this case, the
virtual machine is said to be Not Compliant. This Not Compliant status is displayed in the virtual machine summary.
Figure 4. Virtual Machine Summary – Not Compliant
If the failure persists for more than 30 minutes, VSAN rebuilds the storage object, creating new components to
replace those impacted by the failure and to bring the virtual machine back to a compliant state. This might
involve the creation of new stripes, new replicas or new witnesses. If the failure is addressed within this
30-minute window, no new components are created.
Conclusion
VSAN is a software-based distributed storage platform that combines the compute and storage resources of
vSphere hosts. It provides enterprise-class features and performance with a much simpler management
experience for the user. It is a VMware-designed storage solution to make software-defined storage a reality for
VMware customers.
Acknowledgments
I would like to thank Christian Dickmann and Christos Karamanolis of VMware R&D, whose deep knowledge and
understanding of virtual SAN was leveraged throughout this paper. I would also like to thank Kiran Madnani of
Product Management for VMware for the guidance on accuracy of content. Finally, I would like to thank
James Senicka and Rawlinson Rivera for reviewing this paper.
About the Author
Cormac Hogan is a senior technical marketing architect within the Cloud Infrastructure Product Marketing group
at VMware. He is responsible for storage in general, with a focus on core VMware vSphere storage technologies
and virtual storage, including Virtual SAN. Cormac has written a number of storage-related white papers and
has given numerous presentations on storage best practices and new features. He has been in technical
marketing since 2011. 
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright © 2013 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed
at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be
trademarks of their respective companies. Item No: VMW-WP-WHATS-NEW-VSAN-USLET-101	 Docsource: OIC - 13VM004.01

Modern infrastructure for business data lake
 
Force Cyber Criminals to Shop Elsewhere
Force Cyber Criminals to Shop ElsewhereForce Cyber Criminals to Shop Elsewhere
Force Cyber Criminals to Shop Elsewhere
 
Pivotal : Moments in Container History
Pivotal : Moments in Container History Pivotal : Moments in Container History
Pivotal : Moments in Container History
 
Data Lake Protection - A Technical Review
Data Lake Protection - A Technical ReviewData Lake Protection - A Technical Review
Data Lake Protection - A Technical Review
 
Mobile E-commerce: Friend or Foe
Mobile E-commerce: Friend or FoeMobile E-commerce: Friend or Foe
Mobile E-commerce: Friend or Foe
 
Virtualization Myths Infographic
Virtualization Myths Infographic Virtualization Myths Infographic
Virtualization Myths Infographic
 
Intelligence-Driven GRC for Security
Intelligence-Driven GRC for SecurityIntelligence-Driven GRC for Security
Intelligence-Driven GRC for Security
 
The Trust Paradox: Access Management and Trust in an Insecure Age
The Trust Paradox: Access Management and Trust in an Insecure AgeThe Trust Paradox: Access Management and Trust in an Insecure Age
The Trust Paradox: Access Management and Trust in an Insecure Age
 
EMC Technology Day - SRM University 2015
EMC Technology Day - SRM University 2015EMC Technology Day - SRM University 2015
EMC Technology Day - SRM University 2015
 
EMC Academic Summit 2015
EMC Academic Summit 2015EMC Academic Summit 2015
EMC Academic Summit 2015
 
Data Science and Big Data Analytics Book from EMC Education Services
Data Science and Big Data Analytics Book from EMC Education ServicesData Science and Big Data Analytics Book from EMC Education Services
Data Science and Big Data Analytics Book from EMC Education Services
 
Using EMC Symmetrix Storage in VMware vSphere Environments
Using EMC Symmetrix Storage in VMware vSphere EnvironmentsUsing EMC Symmetrix Storage in VMware vSphere Environments
Using EMC Symmetrix Storage in VMware vSphere Environments
 
Using EMC VNX storage with VMware vSphereTechBook
Using EMC VNX storage with VMware vSphereTechBookUsing EMC VNX storage with VMware vSphereTechBook
Using EMC VNX storage with VMware vSphereTechBook
 

Último

A Glance At The Java Performance Toolbox
A Glance At The Java Performance ToolboxA Glance At The Java Performance Toolbox
A Glance At The Java Performance ToolboxAna-Maria Mihalceanu
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
QMMS Lesson 2 - Using MS Excel Formula.pdf
QMMS Lesson 2 - Using MS Excel Formula.pdfQMMS Lesson 2 - Using MS Excel Formula.pdf
QMMS Lesson 2 - Using MS Excel Formula.pdfROWELL MARQUINA
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfpanagenda
 
React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...Karmanjay Verma
 
Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#Karmanjay Verma
 
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...Nikki Chapple
 
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...BookNet Canada
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch TuesdayIvanti
 
Accelerating Enterprise Software Engineering with Platformless
Accelerating Enterprise Software Engineering with PlatformlessAccelerating Enterprise Software Engineering with Platformless
Accelerating Enterprise Software Engineering with PlatformlessWSO2
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxfnnc6jmgwh
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesKari Kakkonen
 
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS:  6 Ways to Automate Your Data IntegrationBridging Between CAD & GIS:  6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integrationmarketing932765
 
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...panagenda
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observabilityitnewsafrica
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...Wes McKinney
 
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesMuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesManik S Magar
 
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...Jeffrey Haguewood
 
Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Kaya Weers
 

Último (20)

A Glance At The Java Performance Toolbox
A Glance At The Java Performance ToolboxA Glance At The Java Performance Toolbox
A Glance At The Java Performance Toolbox
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
QMMS Lesson 2 - Using MS Excel Formula.pdf
QMMS Lesson 2 - Using MS Excel Formula.pdfQMMS Lesson 2 - Using MS Excel Formula.pdf
QMMS Lesson 2 - Using MS Excel Formula.pdf
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
 
How Tech Giants Cut Corners to Harvest Data for A.I.
How Tech Giants Cut Corners to Harvest Data for A.I.How Tech Giants Cut Corners to Harvest Data for A.I.
How Tech Giants Cut Corners to Harvest Data for A.I.
 
React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...
 
Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#
 
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
 
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
Accelerating Enterprise Software Engineering with Platformless
Accelerating Enterprise Software Engineering with PlatformlessAccelerating Enterprise Software Engineering with Platformless
Accelerating Enterprise Software Engineering with Platformless
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examples
 
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS:  6 Ways to Automate Your Data IntegrationBridging Between CAD & GIS:  6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
 
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
 
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesMuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
 
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
 
Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)
 

What's New in VMware Virtual SAN

  • 1. What’s New in VMware® Virtual SAN (VSAN). Technical White Paper, v0.1c / August 2013
  • 2. Table of Contents
    1. Introduction (p. 4)
    1.1 Software-Defined Datacenter (p. 4)
    1.2 Software-Defined Storage (p. 4)
    1.3 What is VSAN? (p. 5)
    2. Requirements (p. 5)
    2.1 vSphere Requirements (p. 5)
    2.1.1 vCenter Server (p. 5)
    2.1.2 vSphere (p. 5)
    2.2 Storage Requirements (p. 5)
    2.2.1 Disk Controllers (p. 5)
    2.2.2 Hard Disk Drives (p. 5)
    2.2.3 Solid-State Disks (p. 5)
    2.3 Network Requirements (p. 6)
    2.3.1 Network Interface Cards (p. 6)
    2.3.2 Supported Virtual Switch Types (p. 6)
    2.3.3 VMkernel Network (p. 6)
    3. Install and Configure (p. 6)
    3.1 Creating a Virtual SAN Cluster (p. 6)
    3.1.1 Manually Add Disks to Disk Groups (p. 6)
    3.1.2 Automatic Creation of Disk Groups (p. 6)
    3.1.3 Virtual SAN Cluster Creation Example (p. 7)
    3.2 The Virtual SAN Datastore (p. 7)
    3.2.1 VSAN Datastore Properties (p. 7)
    3.3 Defining Virtual Machine Requirements (p. 7)
    4. Architecture Details (p. 8)
    4.1 Distributed RAID (p. 8)
    4.2 Witnesses and Replicas (p. 8)
    4.3 The Role of Solid-State Disks (p. 8)
    4.3.1 Purpose of Read Cache (p. 8)
    4.3.2 Purpose of Write Cache (p. 8)
  • 3. Table of Contents (continued)
    5. Storage Policy Based Management (p. 9)
    5.1 Virtual SAN Capabilities (p. 9)
    5.1.1 Number of Failures to Tolerate (p. 9)
    5.1.2 Number of Disk Stripes per Object (p. 10)
    5.1.3 Flash Read Cache Reservation (p. 10)
    5.1.4 Object Space Reservation (p. 10)
    5.1.5 Force Provisioning (p. 11)
    5.2 Witness Example (p. 11)
    5.3 VM Storage Policies (p. 11)
    5.3.1 Enabling VM Storage Policies (p. 11)
    5.3.2 Creating VM Storage Policies (p. 11)
    5.3.3 Assigning a VM Storage Policy During Virtual Machine Provisioning (p. 12)
    5.3.4 Virtual Machine Objects (p. 12)
    5.3.5 Matching Resource (p. 13)
    5.3.6 Compliance (p. 13)
    Conclusion (p. 14)
    Acknowledgments (p. 14)
    About the Author (p. 14)
  • 4. 1. Introduction
    1.1 Software-Defined Datacenter
    VMware® introduced its vision for the software-defined datacenter (SDDC) at VMworld, its annual user conference, in 2012. The SDDC is VMware’s architecture for the cloud, in which all pillars of the datacenter (compute, storage, networking and their associated services) are virtualized. This white paper looks at the storage pillar of the SDDC and, specifically, at how a new product called Virtual SAN (VSAN) fits into that vision.
    1.2 Software-Defined Storage
    VMware’s plan for software-defined storage is to focus on a set of initiatives around local storage, shared storage and storage/data services; in essence, to make VMware vSphere® a platform for storage services. Software-defined storage provides storage services and service-level agreement automation through a software layer on the hosts that integrates with, and abstracts, the underlying hardware.
    Historically, storage was configured and deployed at the beginning of a project and not changed during its life cycle. If some aspect or feature of the LUN or volume used by virtual machines had to change, in many cases the original LUN or volume was deleted and a new volume with the required features was created: an intrusive, time-consuming operation that could take weeks to coordinate. With software-defined storage, virtual machine storage requirements can be instantiated dynamically. There is no need to repurpose LUNs or volumes; as virtual machine workloads change over time, the underlying storage can be adapted to the workload at any time.
    A key enabler of software-defined storage is Storage Policy Based Management (SPBM), featured in the vSphere 5.5 release. SPBM can be considered the next generation of the vSphere Storage Profiles feature first introduced with vSphere 5.0; in vSphere 5.5, an enhanced version called VM Storage Policies is introduced. SPBM is a critical component of VMware’s implementation of software-defined storage. Using SPBM and VMware vSphere APIs, the underlying storage technology surfaces an abstracted pool of storage space, with various capabilities, to vSphere administrators for virtual machine provisioning. The capabilities can relate to performance, availability or storage services such as VMware vSphere Thin Provisioning. A vSphere administrator creates a VM Storage Policy from the subset of capabilities required by the application running in the virtual machine, and at deployment time selects the policy appropriate to that virtual machine’s needs. SPBM pushes the requirements down to the storage layer, and datastores that provide the capabilities named in the VM Storage Policy are made available for selection. The virtual machine is therefore always instantiated on underlying storage appropriate to the requirements placed in its VM Storage Policy. If the virtual machine’s workload changes over time, it is simply a matter of applying a policy with new, updated requirements that reflect the new workload.
    The remainder of this document examines VSAN, which provides storage services by utilizing the local disks of vSphere hosts.
  • 5. 1.3 What is VSAN?
    VSAN is a new storage solution from VMware that is fully integrated with vSphere. It automatically aggregates the local server disks in a cluster to create shared storage that can be rapidly provisioned from VMware vCenter™ during virtual machine creation. It is an object-based storage system and a platform for VM Storage Policies, designed to simplify virtual machine storage placement decisions for vSphere administrators. It is fully integrated with core vSphere features such as VMware vSphere High Availability (vSphere HA), VMware vSphere Distributed Resource Scheduler™ (vSphere DRS) and VMware vSphere vMotion®. Its goal is to provide both high availability and scale-out storage functionality. It can also be considered in the context of quality of service (QoS), because VM Storage Policies can define the level of performance and availability required on a per–virtual machine basis.
    2. Requirements
    The following section details the hardware and software requirements that must be met to create a VSAN cluster.
    2.1 vSphere Requirements
    2.1.1 vCenter Server
    VSAN requires VMware vCenter Server™ version 5.5 at a minimum. Both the Microsoft Windows version of vCenter Server and the VMware vCenter Server Appliance™ can manage VSAN. VSAN is configured and monitored via the VMware vSphere Web Client, which also requires version 5.5.
    2.1.2 vSphere
    VSAN requires at least three vSphere hosts, each with local storage, to form a supported VSAN cluster. This minimum enables the cluster to tolerate at least one host, disk or network failure. Each vSphere host requires, at a minimum, vSphere version 5.5.
    2.2 Storage Requirements
    2.2.1 Disk Controllers
    Each vSphere host participating in the VSAN cluster requires a disk controller. This can be a SAS/SATA host bus adapter (HBA) or a RAID controller. However, the RAID controller must function in what is commonly referred to as pass-through or HBA mode; that is, it must be able to present the underlying hard disk drives (HDDs) and solid-state drives (SSDs) as individual disks, without a layer of RAID on top. This is necessary because VSAN manages any RAID configuration itself when availability and performance policy attributes are defined for virtual machines. The VSAN Hardware Compatibility List (HCL) calls out the controllers that have passed the testing phase. Each vSphere host that contributes local storage to VSAN must have at least one HDD and at least one SSD.
    2.2.2 Hard Disk Drives
    Each vSphere host participating in the VSAN cluster must have at least one HDD. HDDs make up the storage capacity of the VSAN datastore. Additional HDDs increase capacity and might also improve virtual machine performance, because virtual machine storage objects might be striped across multiple spindles. This is covered in far greater detail when VM Storage Policies are discussed later in this paper.
    2.2.3 Solid-State Disks
    Each vSphere host participating in the VSAN cluster must have at least one SSD. The SSD provides both a write buffer and a read cache. The more SSD capacity a host has, the greater the performance, because more I/O can be cached.
    NOTE: The SSDs do not contribute to the overall size of the distributed VSAN datastore.
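The host and storage requirements above lend themselves to a simple pre-flight check. The sketch below is purely illustrative: the host records, field names and version tuples are assumptions for the example, not a VMware API.

```python
# Minimal pre-flight check for the VSAN cluster requirements described
# above: at least three vSphere 5.5 hosts, each contributing at least
# one HDD and one SSD. The host dictionaries are illustrative only.

MIN_HOSTS = 3
MIN_VSPHERE = (5, 5)

def check_vsan_requirements(hosts):
    """Return a list of human-readable problems; empty means OK."""
    problems = []
    if len(hosts) < MIN_HOSTS:
        problems.append(f"need at least {MIN_HOSTS} hosts, found {len(hosts)}")
    for h in hosts:
        if tuple(h["version"]) < MIN_VSPHERE:
            problems.append(f"{h['name']}: vSphere {h['version']} is older than 5.5")
        if h["hdds"] < 1:
            problems.append(f"{h['name']}: at least one HDD is required")
        if h["ssds"] < 1:
            problems.append(f"{h['name']}: at least one SSD is required")
    return problems

hosts = [
    {"name": "esxi-01", "version": (5, 5), "hdds": 6, "ssds": 1},
    {"name": "esxi-02", "version": (5, 5), "hdds": 6, "ssds": 1},
    {"name": "esxi-03", "version": (5, 5), "hdds": 6, "ssds": 0},
]
print(check_vsan_requirements(hosts))  # esxi-03 is missing its SSD
```

A real deployment would pull this inventory from vCenter rather than hand-written dictionaries; the point is only that every requirement in this section is mechanically checkable before enabling VSAN.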
  • 6. 2.3 Network Requirements
    2.3.1 Network Interface Cards
    Each vSphere host must have at least one network interface card (NIC), and the NIC must be 1Gb capable. As a best practice, however, VMware recommends 10Gb NICs. For redundancy, a team of NICs can be configured on a per-host basis. VMware considers this a best practice but does not deem it necessary for building a fully functional VSAN cluster.
    2.3.2 Supported Virtual Switch Types
    VSAN is supported on both the VMware vSphere Distributed Switch™ (VDS) and the vSphere Standard Switch (VSS). No other virtual switch types are supported in the initial release.
    2.3.3 VMkernel Network
    On each vSphere host, a VMkernel port for VSAN communication must be created. The VMkernel port is labeled Virtual SAN. It is used for intercluster node communication, and also for reads and writes when one vSphere host in the cluster owns a particular virtual machine but the actual data blocks making up that virtual machine’s files are located on a different host in the cluster. In that case, I/O must traverse the network configured between the hosts in the cluster.
    3. Install and Configure
    This chapter describes the install and configure process in detail, along with any initial preparation that might be required before deploying a VSAN cluster.
    3.1 Creating a Virtual SAN Cluster
    Creating a VSAN cluster requires many of the same steps a vSphere administrator might take to set up a DRS or vSphere HA cluster. A cluster object is created in the vCenter Server inventory, and the administrator can either enable the VSAN cluster functionality first and then add hosts to the cluster, or add the hosts first and then enable VSAN. When the VSAN functionality is enabled, a single option asks the administrator to choose a manual or automatic cluster: VSAN can either discover all the local disks on the hosts and automatically add them to the VSAN datastore, or the administrator can manually select the disks to add to the cluster.
    3.1.1 Manually Add Disks to Disk Groups
    If this option is selected, the VSAN cluster is still formed, but the VSAN datastore is initially 0 bytes in size. The administrator must manually create disk groups on a per-host basis, adding at least one HDD and exactly one SSD to each disk group. Because each disk group can contain only a single SSD, there might be multiple disk groups per host, each containing one SSD and a number of HDDs. As each disk group is created, the size of the VSAN datastore grows according to the amount of HDD capacity added.
    NOTE: The SSDs function as read caches and write buffers and are not included in the capacity of the VSAN datastore.
    3.1.2 Automatic Creation of Disk Groups
    If the automatic method is chosen, VSAN discovers the local HDDs and SSDs on each host and builds disk groups on every host in the cluster. Every host with valid storage ends up with a disk group containing its local HDDs and SSDs. Once this is complete, the VSAN datastore is created, and its size reflects the capacity of all the HDDs across all the hosts in the cluster, less some metadata overhead. Hosts that have no valid storage, or no storage at all, can still access the VSAN datastore. This is a very advantageous feature of VSAN, because a VSAN cluster can be scaled on compute requirements, not just storage requirements.
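The disk-group rules above (one SSD anchoring each group, one or more HDDs behind it, SSDs excluded from capacity) can be modeled as a simple grouping pass. This is a simplified illustration of the behavior described in the text, not the actual VSAN placement algorithm; the round-robin HDD distribution in particular is an assumption made for the sketch.

```python
# Illustrative model of automatic disk-group creation: every SSD on a
# host anchors one disk group, the host's HDDs are spread across those
# groups, and only HDD capacity counts toward the datastore size.
# This mirrors the rules in the text; it is not the real VSAN logic.

def build_disk_groups(ssds, hdds):
    """ssds/hdds: lists of capacities in GB. Returns a list of groups."""
    if not ssds or not hdds:
        return []  # a host needs at least one SSD and one HDD to contribute
    groups = [{"ssd": s, "hdds": []} for s in ssds]
    for i, hdd in enumerate(hdds):
        groups[i % len(groups)]["hdds"].append(hdd)  # spread HDDs round-robin
    return groups

def datastore_capacity_gb(hosts):
    """SSDs act only as cache, so only HDDs add datastore capacity."""
    return sum(
        hdd
        for host in hosts
        for group in build_disk_groups(host["ssds"], host["hdds"])
        for hdd in group["hdds"]
    )

cluster = [
    {"ssds": [400], "hdds": [2000, 2000, 2000]},
    {"ssds": [400], "hdds": [2000, 2000, 2000]},
    {"ssds": [], "hdds": [2000]},  # no SSD: contributes no capacity
]
print(datastore_capacity_gb(cluster))  # 12000
```

Note how the third host, lacking an SSD, contributes nothing, yet per the text it could still consume the datastore: capacity and access are decoupled.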
  • 7. 3.1.3 Virtual SAN Cluster Creation Example
    After the host, storage and networking requirements are met, creation of the VSAN cluster can commence. As mentioned, the configuration is very much the same as creating a vSphere HA or DRS cluster, and it is driven through the user interface (UI). To create a VSAN cluster, create a cluster object in the vCenter inventory, add the vSphere hosts you intend to have participate in VSAN to the cluster object, and enable VSAN on the cluster.
    Figure 1. Create a New VSAN Cluster
    3.2 The Virtual SAN Datastore
    3.2.1 VSAN Datastore Properties
    The size of a VSAN datastore is governed by the number of HDDs per vSphere host and the number of vSphere hosts in the cluster, less some metadata overhead. For example, if a host has 6 x 2TB HDDs and there are eight hosts in the cluster, the raw capacity is 6 x 2TB x 8 = 96TB. The number of supported disks per host and the number of hosts in the cluster govern the maximum size of a VSAN datastore.
    After the VSAN datastore is formed, a number of datastore capabilities are surfaced up into vCenter Server: stripe width, component failures to tolerate, force provisioning and proportional capacity. The storage layer is advertising what it can support in terms of performance, availability and provisioning. These capabilities are discussed in greater detail later in this paper; for now, it is enough to know that they can be used to create a virtual machine policy that defines the storage requirements of a virtual machine. They enable a vSphere administrator to specify, during the virtual machine provisioning process, the capabilities (performance, availability and data services) that the underlying storage must provide, so that the correct storage is selected for that virtual machine.
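The sizing rule reduces to a single multiplication before metadata overhead is subtracted; the helper below simply restates the worked example from the text.

```python
def raw_capacity_tb(hdds_per_host, hdd_tb, hosts):
    """Raw VSAN datastore capacity in TB, before metadata overhead.

    Only HDDs count; SSDs serve as cache and add no capacity.
    """
    return hdds_per_host * hdd_tb * hosts

# The worked example from the text: 6 x 2TB HDDs across 8 hosts.
print(raw_capacity_tb(6, 2, 8))  # 96
```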
3.3 Defining Virtual Machine Requirements
On creation of the VSAN cluster, a vsanDatastore is also created. The vsanDatastore has a set of capabilities that it surfaces up to vCenter Server. When a vSphere administrator sets out to design a virtual machine, the application that runs in that virtual machine will have a set of requirements, including storage requirements. The vSphere administrator uses a VM Storage Policy to specify the application's storage requirements. The VM Storage Policy contains storage requirements in the form of a set of storage capabilities, and it indicates which of these capabilities the virtual machine must have to meet the application's storage requirements. In effect, storage provides capabilities, and virtual machines consume them via requirements placed in the VM Storage Policy.
4. Architecture Details
4.1 Distributed RAID
In traditional storage environments, redundant array of independent disks (RAID) refers to disk redundancy inside the storage chassis to withstand the failure of one or more disk drives. A VSAN cluster uses the concept of distributed RAID, whereby an environment can now contend with the failure of a vSphere host, or of components within that host such as a disk drive or network interface, and continue to provide complete functionality for all virtual machines. Availability is now defined on a per–virtual machine basis through the use of VM Storage Policies, with which vSphere administrators can define how many host, network or disk failures a virtual machine can tolerate in a VSAN cluster. If an administrator sets a zero capability in the VM Storage Policy, a host or disk failure can impact the availability of a virtual machine.

Using VM Storage Policies along with the VSAN distributed RAID architecture, whereby copies of the virtual machine and its contents can reside on multiple nodes in the cluster, it is not necessary to migrate data from a failing node to other nodes in the cluster in the event of a failure.

4.2 Witnesses and Replicas
Replicas are copies of the virtual machine storage objects that are instantiated when an availability capability is specified for the virtual machine. The availability capability dictates how many replicas are created. It enables virtual machines to continue running with a full complement of objects when there are host, network or disk failures in the cluster.

Witnesses are part of every storage object. They are components that contain no data, only metadata. Their purpose is to serve as tiebreakers when availability decisions are made in the VSAN cluster. A witness consumes about 2MB of space for metadata on the VSAN datastore.
NOTE: For an object to be accessible in VSAN, more than 50 percent of its components must be accessible.

4.3 The Role of Solid-State Disks
Solid-state disks (SSDs) have two purposes in Virtual SAN: to provide a read cache and a write buffer. This dramatically improves the performance of virtual machines. In some respects, VSAN can be compared to a number of "hybrid" storage solutions in the market, which also use a combination of SSD and HDD storage to boost I/O performance and which have the ability to scale out based on low-cost HDD storage.

4.3.1 Purpose of Read Cache
The read cache keeps a cache of commonly accessed disk blocks, which reduces I/O read latency in the event of a cache hit. The actual block being read by the application running in the virtual machine might not be cached on the same vSphere host on which the virtual machine is running. To handle this situation, VSAN distributes a directory of cached blocks between the vSphere hosts in the VSAN cluster. This enables a vSphere host to determine whether another host has the data cached when it is not in its local cache. If so, the vSphere host retrieves the cached blocks from the other host over the interconnect. If the block is not in cache on any VSAN host, it is retrieved directly from the HDD.

4.3.2 Purpose of Write Cache
The write cache behaves as a nonvolatile write buffer. Using SSD storage for writes also reduces the latency of write operations. Because the writes go to SSD storage, we must of course ensure that there is a copy of the data elsewhere in the VSAN cluster. All virtual machines deployed to VSAN have an availability policy setting ensuring that at least one additional copy of the virtual machine data is available. This includes the write cache contents.
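The read path described in section 4.3.1 can be sketched as a lookup cascade; the directory structure and host names below are hypothetical, shown only to illustrate the order of the lookups.

```python
# Hypothetical sketch of the VSAN read path: local SSD cache first, then
# the distributed directory of remote caches, then HDD on a full miss.

def read_block(block, local_cache, cache_directory):
    if block in local_cache:
        return "local-cache"             # cache hit on the owning host
    remote = cache_directory.get(block)  # which host, if any, caches it
    if remote is not None:
        return "remote-cache:" + remote  # fetched over the interconnect
    return "hdd"                         # cluster-wide miss: go to disk

directory = {"blk-7": "esx-02"}
assert read_block("blk-1", {"blk-1"}, directory) == "local-cache"
assert read_block("blk-7", set(), directory) == "remote-cache:esx-02"
assert read_block("blk-9", set(), directory) == "hdd"
```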
After a write is initiated by the application running inside the guest operating system (OS), the write is sent in parallel to both the local write cache on the owning host and the write cache on the remote host(s). The write must be committed to the SSD on both hosts before it is acknowledged. This means that in the event of a host failure, a copy of the data also exists on another SSD in the VSAN cluster, so no data loss occurs. The virtual machine accesses the replicated copy of the data on another VSAN host via the VSAN interconnect.

5. Storage Policy Based Management
In the introduction, it was mentioned that Storage Policy Based Management (SPBM) now plays a major role in the policies and automation for the software-defined storage vision of VMware. Using VM Storage Policies, administrators can specify a set of required storage capabilities for a virtual machine, or more specifically a set of requirements for the application running in the virtual machine. This set of required storage capabilities is pushed down to the storage layer, which then checks where the storage objects for that virtual machine can be instantiated to match the requirements. For instance, are there enough physical disks in the cluster to satisfy the stripe width this virtual machine requires? Are there enough hosts in the cluster to provide the number of failures to tolerate? If the VSAN datastore understands the capabilities placed in the VM Storage Policy, it is said to be a matching resource and is highlighted as such in the provisioning wizard. Subsequently, when the virtual machine is deployed, if the requirements in the attached VM Storage Policy can be met by the VSAN datastore, the virtual machine is said to be compliant from a storage perspective in its own summary window.
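The write-acknowledgement rule above (the guest sees a completed write only after every replica's SSD has committed it) reduces to a simple check. The data shape below is an assumption made for illustration, not a VMware structure.

```python
def write_acknowledged(ssd_commits):
    """ssd_commits maps each replica host to True once its SSD write
    buffer has committed the block; the guest is acknowledged only
    when every copy is durable, so a host failure cannot lose data."""
    return all(ssd_commits.values())

assert write_acknowledged({"esx-01": True, "esx-02": True})
assert not write_acknowledged({"esx-01": True, "esx-02": False})
```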
If the VSAN datastore is overcommitted or cannot meet the capability requirements, it might still be shown as a matching resource in the deployment wizard, but the provisioning task might fail. SPBM now provides an automated, policy-driven mechanism for selecting an appropriate datastore for virtual machines, based on the required storage capabilities placed in the VM Storage Policies.

5.1 Virtual SAN Capabilities
This section examines the required storage capabilities that can be placed in a VM Storage Policy. These capabilities, which are surfaced by the VSAN datastore when the cluster is configured successfully, highlight the availability, performance and sizing requirements that can then be demanded of the storage on a per–virtual machine basis. If the requirements are not stated correctly (in other words, if the policy contains a capability that is not surfaced by the VSAN datastore), the VSAN datastore no longer appears as a matching resource during provisioning.

5.1.1 Number of Failures to Tolerate
This property requires the storage object to tolerate at least NumberOfFailuresToTolerate concurrent host, network or disk failures in the cluster and still ensure availability of the object. If this property is populated, it specifies that configurations must contain at least NumberOfFailuresToTolerate + 1 replicas and might also contain an additional number of witness objects, to ensure that the object's data remains available (maintaining quorum in cases of split-brain) even in the presence of as many as NumberOfFailuresToTolerate concurrent host failures. Therefore, to tolerate n failures, at least (n + 1) copies of the object must exist and at least (2n + 1) hosts are required.

NOTE: Any disk failure on a single host is treated as a "failure" for this metric. Therefore, with NumberOfFailuresToTolerate set to 1, the object cannot persist through a disk failure on host A followed by another disk failure on host B.
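The NumberOfFailuresToTolerate arithmetic above can be checked with a small sketch:

```python
def ftt_requirements(n):
    """To tolerate n concurrent failures, an object needs at least
    n + 1 replicas placed across at least 2n + 1 hosts (the extra
    hosts hold witnesses so a majority survives any n failures)."""
    return {"replicas": n + 1, "min_hosts": 2 * n + 1}

assert ftt_requirements(1) == {"replicas": 2, "min_hosts": 3}
assert ftt_requirements(2) == {"replicas": 3, "min_hosts": 5}
```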
5.1.2 Number of Disk Stripes per Object
This defines the number of physical disks across which each replica of a storage object is distributed. A value higher than 1 might result in better performance if read caching is not effective, but it also results in greater use of system resources. To understand the impact of the stripe width, we examine it first in the context of write operations and then in the context of read operations.

Because all writes go to the SSD write buffer, an increased stripe width might not improve write performance: there is no guarantee that the new stripe will use a different SSD, because the new stripe might be placed in the same disk group and thus use the same SSD. The only case in which an increased stripe width might add value is when many writes must be destaged from SSD to disk.

From a read perspective, an increased stripe width helps when many cache misses are occurring. Take the example of a virtual machine issuing 2,000 read operations per second with a cache-hit rate of 90 percent: 200 read operations per second must be serviced from HDD storage. A single HDD might not be able to service those read operations, so an increase in stripe width would help in this instance. In general, the default stripe width of 1 should meet the requirements of most, if not all, virtual machine workloads. Stripe width is a requirement that should be changed only for a small number of high-performance virtual machines.

5.1.3 Flash Read Cache Reservation
This is the amount of flash capacity reserved on the SSD as read cache for the storage object. It is specified as a percentage of the logical size of the virtual machine disk storage object, expressed as a percentage value (%) with as many as four decimal places.
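The cache-miss arithmetic from section 5.1.2 can be sketched as follows. The 100-IOPS-per-HDD figure is an illustrative assumption (roughly what a 7,200-RPM disk delivers), not a VMware number.

```python
import math

def hdd_read_iops(total_read_iops, cache_hit_rate):
    """Read operations per second that miss the SSD read cache and
    must therefore be serviced from HDD."""
    return total_read_iops * (1 - cache_hit_rate)

def min_stripe_width(miss_iops, iops_per_hdd=100):
    """ASSUMPTION: ~100 read IOPS per spinning disk, for illustration."""
    return math.ceil(miss_iops / iops_per_hdd)

misses = hdd_read_iops(2000, 0.90)    # the paper's example
assert round(misses) == 200
assert min_stripe_width(misses) == 2  # one HDD alone cannot serve 200 IOPS
```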
A fine, granular unit size is required so that administrators can express sub–1 percent units. Take the example of a 1TB disk: if the read cache reservation were limited to 1-percent increments, cache could be reserved only in increments of 10GB, which in most cases is far too large for a single virtual machine.

NOTE: It is not required to set a reservation in order to get cache. The reservation should be set to 0 unless you are trying to solve a real performance problem. If cache space is not reserved, the VSAN scheduler manages fair cache allocation. Cache reserved for one virtual machine is not available to other virtual machines; unreserved cache is shared fairly between all objects.

A final point on Flash Read Cache Reservation involves read performance. Even if read performance is already acceptable, additional cache can be used to reduce cache misses and so prevent more reads from going to the HDDs. This frees HDD bandwidth for writes, so write performance can be improved indirectly by increasing the Flash Read Cache Reservation. However, setting a reservation should be considered only if destaging data from SSD to HDD storage creates a bottleneck.

5.1.4 Object Space Reservation
This capability defines the percentage of the logical size of the storage object that should be reserved on HDDs during initialization. By default, provisioning on the VSAN datastore is "thin." The ObjectSpaceReservation is the amount of space to reserve on the VSAN datastore, specified as a percentage of the virtual machine disk size; the value is the minimum amount of capacity that must be reserved. This property is used for specifying a thinly provisioned storage object. If ObjectSpaceReservation is set to 100 percent, all of the storage capacity of the virtual machine is reserved up front.
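The granularity argument above (why the reservation allows four decimal places) is easy to verify numerically; the function name is illustrative.

```python
def flash_reservation_gb(vmdk_size_gb, reservation_pct):
    """Flash Read Cache Reservation is a percentage of the logical
    VMDK size, expressed with up to four decimal places."""
    return vmdk_size_gb * reservation_pct / 100

# On a 1TB (1,000GB) disk, whole-percent steps jump in 10GB increments,
# while a four-decimal percentage allows sub-GB reservations.
assert flash_reservation_gb(1000, 1) == 10.0
assert abs(flash_reservation_gb(1000, 0.05) - 0.5) < 1e-9
```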
NOTE: If you provision a virtual machine and select the disk format to be thick, either lazy zero or eager zero, this setting overrides the ObjectSpaceReservation setting in the VM Storage Policy.
5.1.5 Force Provisioning
If this option is enabled, the object is provisioned even if the capabilities specified in the VM Storage Policy cannot be satisfied by the resources available in the cluster at the time. VSAN attempts to bring the object into compliance if and when resources become available. However, if there is not enough space in the cluster to satisfy the reservation requirements of at least one replica, the provisioning fails even if ForceProvisioning is enabled. This option is disabled by default.

5.2 Witness Example
The following witness example involves the deployment of a virtual machine with a stripe width of 1 and a NumberOfFailuresToTolerate of 1. In this case, two replica copies of the virtual machine are created; effectively, this is a RAID-1 virtual machine with two replicas. However, with two replicas there is no way to differentiate between a network partition and a host failure. Therefore, a third entity called the "witness" is added to the configuration. For an object on VSAN to be available, the following two conditions must be met:
• At least one replica must be intact for data access.
• More than 50 percent of all components must be available.
In this example, the object is accessible only when there is access to one replica copy and a witness, or to two replica copies. That way, at most one part of the cluster can ever access an object in the event of a network partition.

5.3 VM Storage Policies
VM Storage Policies in VSAN works in a similar fashion to vSphere Storage Profiles, introduced in vSphere 5.0, in that you build a policy containing your virtual machine provisioning requirements.
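The two availability conditions listed in section 5.2 can be combined into a single accessibility check; the component representation below is a hypothetical sketch, not VSAN internals.

```python
def object_accessible(components):
    """components: list of (kind, available) pairs, where kind is
    'replica' or 'witness'. An object is accessible only if at least
    one replica is intact AND more than 50 percent of all components
    are available (the quorum rule)."""
    avail = [(kind, ok) for kind, ok in components if ok]
    replica_intact = any(kind == "replica" for kind, _ in avail)
    majority = 2 * len(avail) > len(components)
    return replica_intact and majority

# RAID-1 object from the example: two replicas plus one witness.
def obj(r1, r2, w):
    return [("replica", r1), ("replica", r2), ("witness", w)]

assert object_accessible(obj(True, False, True))       # replica + witness
assert object_accessible(obj(True, True, False))       # two replicas
assert not object_accessible(obj(True, False, False))  # no majority
assert not object_accessible(obj(False, False, True))  # witness alone
```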
There is a major difference, however, in the way VM Storage Policies works in VSAN when compared to previous versions of vSphere Storage Profiles. With the original version of vSphere Storage Profiles, the capabilities in the policy were used only to select an appropriate datastore when provisioning the virtual machine. The new VM Storage Policies mechanism not only selects the appropriate datastore but also communicates to the underlying storage layer that there are certain availability and performance requirements for the virtual machine. So whereas the VSAN datastore might be the destination datastore when a virtual machine is provisioned with a VM Storage Policy, other policy settings stipulate a requirement for a number of replicas of the virtual machine files for availability and might also contain a stripe-width requirement for performance.

5.3.1 Enabling VM Storage Policies
VM Storage Policies is automatically enabled when VSAN is enabled. To enable it manually, navigate to the VMware vSphere Client™ Home view and select Rules and Profiles; the VM Storage Policies section is located there. Click VM Storage Policies, and a number of icons appear, one of which enables the VM Storage Policies functionality. This can be enabled on a per-host or a per-cluster basis.

5.3.2 Creating VM Storage Policies
After VM Storage Policies is enabled, a vSphere administrator can create individual policies using another icon in this window. As already mentioned, a number of capabilities related to availability and performance are surfaced by the vSphere APIs. At this point the administrator must decide which of these capabilities are required, from a performance and availability perspective, by the applications running inside the virtual machines. For example, how many component failures (host, network and disk drive) does the administrator require this virtual machine to tolerate while continuing to function?
Also, is the application running in this virtual machine demanding from an IOPS perspective? If so, a Flash Read Cache Reservation might be required as a capability to meet its performance requirements. Other considerations might involve deciding whether the virtual machine is to be thinly or thickly provisioned.
In this example, a VM Storage Policy with two capabilities is being created: NumberOfFailuresToTolerate defines the availability requirement, and the stripe width defines the performance requirement.

Figure 2. Create a New VM Storage Policy

NOTE: vSphere 5.5 also supports the use of tags for provisioning. Therefore, instead of using VSAN datastore capabilities for the creation of a VM Storage Policy, tag-based policies can be created. The use of tag-based policies is outside the scope of this white paper, but further information can be found in the vSphere storage documentation.

5.3.3 Assigning a VM Storage Policy During Virtual Machine Provisioning
The assignment of a VM Storage Policy takes place during virtual machine provisioning. When the vSphere administrator must select a destination datastore, they select the appropriate policy from the drop-down menu of available VM Storage Policies. The datastores are then separated into compatible and incompatible datastores, enabling the vSphere administrator to choose the appropriate and correct placement of the virtual machine.

5.3.4 Virtual Machine Objects
When it comes to the layout of objects on the VSAN datastore, a virtual machine has a number of distinct storage objects: the virtual machine disk files (VMDKs), the virtual machine namespace directory (where the virtual machine configuration and log files reside), the virtual machine swap file, and snapshot delta files. Each of these objects might have a different VM Storage Policy. The vSphere UI also enables administrators to interrogate the layout of a virtual machine object and to see where each component (stripes, replicas and witnesses) of a storage object resides.
Figure 3. Storage Object Physical Mappings

5.3.5 Matching Resource
When the virtual machine Provisioning Wizard reports that a VSAN datastore is a matching resource for a VM Storage Policy, it guarantees only that the requirements defined in the policy are understood by the datastore. It does not guarantee that the datastore can actually meet those requirements, and it is possible that the provisioning process might still fail. For example, if VSAN capabilities are used, only VSAN datastores will match, independent of whether they have the resources to provision the virtual machine.

5.3.6 Compliance
The virtual machine Summary tab reveals the compliance state of a virtual machine. As long as VSAN meets the capabilities defined in the VM Storage Policy, the virtual machine is said to be compliant. In the event of a component failure in the cluster, VSAN might not be able to meet the policy requirements, in which case the virtual machine is said to be Not Compliant. This status is displayed in the virtual machine summary.

Figure 4. Virtual Machine Summary – Not Compliant
If the failure persists for more than 30 minutes, VSAN rebuilds the storage object, adding new components to replace those impacted by the failure and bring the virtual machine back to a compliant state. This might involve the creation of new stripes, new replicas or new witnesses. If the failure is addressed within this 30-minute window, no new components are created.

Conclusion
VSAN is a software-based distributed storage platform that combines the compute and storage resources of vSphere hosts. It provides enterprise-class features and performance with a much simpler management experience for the user. It is a VMware-designed storage solution intended to make software-defined storage a reality for VMware customers.

Acknowledgments
I would like to thank Christian Dickmann and Christos Karamanolis of VMware R&D, whose deep knowledge and understanding of Virtual SAN was leveraged throughout this paper. I would also like to thank Kiran Madnani of VMware Product Management for his guidance on the accuracy of the content. Finally, I would like to thank James Senicka and Rawlinson Rivera for reviewing this paper.

About the Author
Cormac Hogan is a senior technical marketing architect within the Cloud Infrastructure Product Marketing group at VMware. He is responsible for storage in general, with a focus on core VMware vSphere storage technologies and virtual storage, including Virtual SAN. Cormac has written a number of storage-related white papers and has given numerous presentations on storage best practices and new features. He has been in technical marketing since 2011.
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright © 2013 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW-WP-WHATS-NEW-VSAN-USLET-101 Docsource: OIC - 13VM004.01