© 2014 VMware Inc. All rights reserved.
VMware Virtual SAN 5.5
Technical Deep Dive – March 2014
Alberto Farronato, VMware
Wade Holmes, VMware
March, 2014
Download this slide
http://ouo.io/A68RB
Software-Defined Storage
3
Bringing the efficient operational model of virtualization to storage
Virtual Data Services
Data Protection Mobility Performance
Policy-driven Control Plane
SAN / NAS
SAN/NAS Pool
Virtual Data Plane
x86 Servers
Hypervisor-converged
Storage pool
Object Storage Pool
Cloud Object
Storage
Virtual SAN
Virtual SAN: Radically Simple Hypervisor-Converged Storage
4
vSphere + VSAN
…
• Runs on any standard x86 server
• Policy-based management framework
• Embedded in vSphere kernel
• High performance flash architecture
• Built-in resiliency
• Deep integration with VMware stack
The Basics
Hard disks
SSD
Hard disks
SSD
Hard disks
SSD
VSAN Shared Datastore
12,000+
Virtual SAN Beta
Participants
95%
Beta customers
Recommend
VSAN
90%
Believe VSAN will
Impact Storage like
vSphere did to
Compute
Unprecedented Customer Interest And Validation
5
Why Virtual SAN?
6
• Two-click install
• Single pane of glass
• Policy-driven
• Self-tuning
• Integrated with VMware stack
Radically Simple
• Embedded in vSphere kernel
• Flash-accelerated
• Up to 2M IOPS from a 32-node cluster
• Granular and linear scaling
High Performance Lower TCO
• Server-side economics
• No large upfront investments
• Grow-as-you-go
• Easy to operate with powerful
automation
• No specialized skillset
Two Ways to Build a Virtual SAN Node
7
Completely Hardware Independent
1. Virtual SAN Ready Node
…with multiple options available at GA + 30
Preconfigured server ready to use Virtual
SAN…
2. Build Your Own
…using the Virtual SAN Compatibility Guide*
Choose individual components …
SSD or PCIe
SAS/NL-SAS/ SATA HDDs
Any Server on vSphere
Hardware Compatibility List
HBA/RAID Controller
* Note: For additional details, please refer to the Virtual SAN VMware Compatibility Guide page
* Components for Virtual SAN must be chosen from the Virtual SAN HCL; using any other components is unsupported
Broad Partner Ecosystem Support for Virtual SAN
8
Storage
Server / Systems
Solution
Data Protection
Solution
Virtual SAN Simplifies And Automates Storage Management
9
Per VM Storage Service Levels From a Single Self-tuning Datastore
Storage Policy-Based Management
Virtual SAN
Shared Datastore
vSphere + Virtual SAN
SLAs
Software Automates
Control of Service Levels
No more LUNs/Volumes!
Policies Set Based
on Application Needs
Capacity
Performance
Availability
Per VM
Storage Policies
“Virtual SAN is easy to deploy,
just a few check boxes. No
need to configure RAID.”
— Jim Streit
IT Architect, Thomson Reuters
Virtual SAN Delivers Enterprise-Grade Scale
10
2M
IOPS
3,200
VMs
4.4
Petabytes
Maximum Scalability per Virtual SAN Cluster
32
Hosts
“Virtual SAN allows us to build out
scalable heterogeneous storage
infrastructure like the Facebooks and
Googles of the world. Virtual SAN allows
us to add scale, add resources, while
being able to service high performance
workloads.”
— Dave Burns
VP of Tech Ops, Cincinnati Bell
High Performance with Elastic and Linear Scalability
11
IOPS scaling by number of hosts in the Virtual SAN cluster (IOmeter benchmark; 4, 8, 16, 24, 32 hosts):
• Mixed workload (70% read, 4KB, 80% random): 80K, 160K, 320K, 480K, 640K IOPS
• 100% read: 253K, 505K, 1M, 1.5M, 2M IOPS
VDI density by number of hosts (View Planner benchmark; 3, 5, 7, 8 hosts): VSAN vs. an all-SSD array, with per-cluster VDI VM counts ranging from 286 to 805
Up to 2M IOPS in a 32-node cluster; comparable VDI density to an all-flash array
Virtual SAN is Deeply Integrated with VMware Stack
12
Ideal for VMware Environments
vMotion
vSphere HA
DRS
Storage vMotion
vSphere
Snapshots
Linked Clones
VDP Advanced
vSphere Replication
Data Protection
VMware View
Virtual Desktop
vCenter Operations Manager
vCloud Automation Center
IaaS
Cloud Ops and Automation
Site Recovery Manager
Disaster Recovery
Site A Site B
Storage Policy-Based Management
Virtual SAN 5.5 – Pricing And Packaging
13
VSAN Editions and Bundles
Virtual SAN
• Standalone edition
• No capacity, scale or workload restriction
• Licensing: per CPU
• Price (USD): $2,495
Virtual SAN with Data Protection
• Bundle of Virtual SAN and vSphere Data Protection Adv.
• Licensing: per CPU
• Price (USD): $2,875 (promo ends Sept 15th 2014)
Virtual SAN for Desktop
• Standalone edition
• VDI only (VMware or Citrix)
• Concurrent or named users
• Licensing: per user
• Price (USD): $50
Features
• Included in all three editions: persistent data store, read/write caching, policy-based management, Virtual Distributed Switch, replication (vSphere Replication), snapshots and clones (vSphere Snapshots & Clones)
• Included only in Virtual SAN with Data Protection: backup (vSphere Data Protection Advanced)
Note: Regional pricing in standard VMware currencies applies. Please check local pricelists for more detail.
Virtual SAN – Launch Promotions
14
Bundle Promos (20% discount)
• Virtual SAN with Data Protection: Virtual SAN (1 CPU) + vSphere Data Protection Advanced (1 CPU) – promo price $2,875 / CPU – ends 9/15/2014
• VSA to VSAN upgrade: Virtual SAN (6 CPUs per bundle) – promo price $9,180 / bundle – ends 9/15/2014
Beta Promo (20% discount)
• Register and download promo: Virtual SAN (1 CPU) – promo price $1,996 / CPU – ends 6/15/2014
Terms
• Min purchase of 10 CPUs
• First purchase only
Note: Regional pricing for promotions exist in standard VMware currencies. Please check local pricelists for more detail.
Virtual SAN Reduces CAPEX and OPEX for Better TCO
15
CAPEX
• Server-side economics
• No Fibre Channel network
• Pay-as-you-grow
OPEX
• Simplified storage configuration
• No LUNs
• Managed directly through
vSphere Web Client
• Automated VM provisioning
• Simplified capacity planning
As low as $0.50/GB (2)
As low as $0.25/IOPS
5X lower OPEX (4)
Up to 50% TCO reduction
As low as $50/Desktop (1)
1. Full clones
2. Usable capacity
3. Estimated based on 2013 street pricing, Capex (includes storage hardware + Software License costs)
4. Source: Taneja Group
Flexibly Configure For Performance And Capacity
16
Performance-oriented configuration
• 2x CPU (8-core), 128GB memory
• 1x 400GB MLC SSD (~15% of usable capacity)
• 5x 1.2TB 10K SAS HDD
• IOPS(1): ~15-20K; raw capacity: 6TB; $0.32/IOPS; $2.12/GB
Intermediate configuration
• 2x CPU (8-core), 128GB memory
• 1x 400GB MLC SSD (~10% of usable capacity)
• 7x 2TB 7.2K NL-SAS HDD
• IOPS(1): ~10-15K; raw capacity: 14TB; $0.57/IOPS; $1.02/GB
Capacity-oriented configuration
• 2x CPU (8-core), 128GB memory
• 2x 400GB MLC SSD (~4% of usable capacity)
• 10x 4TB 7.2K NL-SAS HDD
• IOPS(1): ~5-10K; raw capacity: 40TB; $1.38/IOPS; $0.52/GB
1. Mixed workload: 70% read, 80% random
Cost estimates based on 2013 street pricing, CAPEX (includes storage hardware + software license costs)
• Compared to external storage at scale
• Estimated based on 2013 street pricing, Capex (includes storage hardware + Software License costs)
• Additional savings come from reduced Opex through automation
• Virtual SAN configuration: 9 VMs per core, with 40GB per VM, 2 copies for availability and 10% SSD for performance
Granular Scaling Eliminates Overprovisioning
Delivers Predictable Scaling and the Ability to Control Costs
VSAN enables predictable linear scaling; spikes in the chart correspond to scaling out due to IOPS requirements
17
Chart: $/VDI storage cost – storage cost per desktop ($40 to $240) vs. number of desktops (500 to 3,000), comparing Virtual SAN against a midrange hybrid array
Running a Google-like Datacenter
18
Modular infrastructure. Break-Replace Operations
"From a break fix perspective, I think
there's a huge difference in what
needs to be done when a piece of
hardware fails. I can have anyone
on my team go back and replace a
1U or 2U server. … essentially
modularizing my datacenter and
delivering a true Software-Defined
Storage architecture."
— Ryan Hoenle
Director of IT, DOE Fund
Hardware Requirements
19
Any Server on the VMware
Compatibility Guide
• SSD, HDD, and Storage Controllers must be listed on the VMware Compatibility Guide for VSAN
http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan
• Minimum 3 ESXi 5.5 hosts; maximum of 32 hosts per cluster (detailed later in the deck)
1Gb/10Gb NIC
SAS/SATA controllers (RAID controllers must work in “pass-through” or “RAID 0” mode)
SAS/SATA/PCIe SSD
SAS/NL-SAS/SATA HDD
At least 1 of
each
4GB to 8GB USB, SD Cards
Flash Based Devices
VMware SSD Performance Classes
– Class A: 2,500-5,000 writes per second
– Class B: 5,000-10,000 writes per second
– Class C: 10,000-20,000 writes per second
– Class D: 20,000-30,000 writes per second
– Class E: 30,000+ writes per second
Examples
– Intel DC S3700 SSD: ~36,000 writes per second -> Class E
– Toshiba SAS SSD MK2001GRZB: ~16,000 writes per second -> Class C
Workload Definition
– Queue Depth: 16 or less
– Transfer Length: 4KB
– Operations: write
– Pattern: 100% random
– Latency: less than 5 ms
Endurance
– 10 Drive Writes per Day (DWPD), and
– Random write endurance up to 3.5 PB on 8KB transfer size
per NAND module, or 2.5 PB on 4KB transfer size per
NAND module
20
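To make the class thresholds above concrete, here is a minimal Python sketch. It is purely illustrative and not a VMware tool; the function name and the assumption that each class boundary is inclusive at its lower bound are mine. The workload assumptions (4KB transfers, 100% random writes, queue depth ≤ 16, latency < 5 ms) come from this slide.

```python
# Hypothetical helper: map a drive's sustained 4KB random-write rate to the
# VMware SSD performance classes listed above.

def vsan_ssd_performance_class(writes_per_second: int) -> str:
    """Return the VSAN 5.5 performance class (A-E) for a sustained write rate."""
    if writes_per_second >= 30_000:
        return "Class E"
    if writes_per_second >= 20_000:
        return "Class D"
    if writes_per_second >= 10_000:
        return "Class C"
    if writes_per_second >= 5_000:
        return "Class B"
    if writes_per_second >= 2_500:
        return "Class A"
    return "Below Class A (not listed)"

# Examples from the slide:
print(vsan_ssd_performance_class(36_000))  # Intel DC S3700      -> Class E
print(vsan_ssd_performance_class(16_000))  # Toshiba MK2001GRZB  -> Class C
```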
Flash Capacity Sizing
 The general recommendation for sizing Virtual SAN's flash capacity is to have 10% of the anticipated
consumed storage capacity before the Number of Failures To Tolerate is considered.
 Total flash capacity percentage should be based on use case, capacity and performance requirements.
– 10% is a general recommendation; depending on the use case it may be too much or not enough (see the worked example and the sizing sketch below).
Measurement | Value
Projected VM space usage: 20GB
Projected number of VMs: 1,000
Total projected space consumption: 20GB x 1,000 = 20,000 GB = 20 TB
Target flash capacity percentage: 10%
Total flash capacity required: 20 TB x 0.10 = 2 TB
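As a worked sketch of the 10% rule and the table above (illustrative only; the function name is hypothetical and the 10% default should be adjusted per use case):

```python
# Minimal sketch of the flash sizing rule: flash capacity = anticipated consumed
# capacity x target percentage, computed before Number of Failures To Tolerate.

def required_flash_capacity_gb(vm_space_gb: float, vm_count: int,
                               flash_pct: float = 0.10) -> float:
    consumed_gb = vm_space_gb * vm_count
    return consumed_gb * flash_pct

# Values from the table: 20GB per VM, 1,000 VMs, 10% target.
print(required_flash_capacity_gb(20, 1000))  # 2000.0 GB = 2 TB
```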
Two Ways to Build a Virtual SAN Node
Radically Simple Hypervisor-Converged Storage
1. VSAN Ready Node: preconfigured server ready to use VSAN… with 10 different options from multiple 3rd-party vendors available at GA
2. Build your own: choose individual components using the VSAN Compatibility Guide*
• Any server on the vSphere Hardware Compatibility List
• Multi-level cell SSD (or better) or PCIe SSD
• SAS/NL-SAS HDD; select SATA HDDs
• 6Gb enterprise-grade HBA/RAID controller
* Note: For additional details, please refer to the Virtual SAN VMware Compatibility Guide
Virtual SAN Implementation Requirements
• Virtual SAN requires:
– Minimum of 3 hosts in a cluster configuration
– All 3 hosts MUST contribute storage
• vSphere 5.5 U1 or later
– Locally attached disks
• Magnetic disks (HDD)
• Flash-based devices (SSD)
– Network connectivity
• 1Gb Ethernet
• 10Gb Ethernet (preferred)
23
esxi-01
local storage local storage local storage
vSphere 5.5 U1 Cluster
esxi-02 esxi-03
cluster
HDDHDD HDD
Virtual SAN Scalable Architecture
24
• Scale-up and scale-out architecture – granular and linear storage, performance, and compute scaling capabilities
– Per magnetic disks – for capacity
– Per flash based device – for performance
– Per disk group – for performance and capacity
– Per node – for compute capacity
disk group disk group disk group
VSAN network VSAN networkVSAN network
vsanDatastore
HDD
disk group
HDD HDD HDD
disk group
VSAN network
HDD
scale up
scale out
Oh yeah! Scalability…..
25
vsanDatastore
4.4 Petabytes
2 Million IOPS
32 Hosts
Storage Policy-based Management
• SPBM is a storage policy framework built into vSphere that enables policy-driven virtual machine provisioning.
• Virtual SAN leverages this framework in conjunction with the VASA APIs to expose storage characteristics to vCenter:
– Storage capabilities
• The underlying storage surfaces to vCenter what it is capable of offering.
– Virtual machine storage requirements
• Requirements can only be used against available capabilities.
– VM Storage Policies
• Construct that stores virtual machine’s storage provisioning requirements based on storage capabilities.
26
Virtual SAN SPBM Object Provisioning Mechanism
– Diagram: Storage Policy Wizard, Datastore Profile, SPBM, VSAN object manager, virtual disk (VSAN object)
– VSAN objects may be (1) mirrored across hosts and (2) striped across disks/hosts to meet VM storage profile policies
Virtual SAN Disk Groups
• Virtual SAN uses the concept of disk groups to pool together flash devices and magnetic disks
as single management constructs.
• Disk groups are composed of at least 1 flash device and 1 magnetic disk.
– Flash devices are used for performance (read cache + write buffer).
– Magnetic disks are used for storage capacity.
– Disk groups cannot be created without a flash device.
28
disk group disk group disk group disk group
Each host: 5 disk groups max. Each disk group: 1 SSD + 1 to 7 HDDs
disk group
HDD HDDHDDHDDHDD
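A rough back-of-the-envelope sketch of how these scale-up limits multiply out. The 4TB drive size is an assumption taken from the capacity-oriented configuration shown earlier, not a VSAN requirement.

```python
# Illustrative only: maximum raw HDD capacity implied by the per-cluster limits.
HOSTS_PER_CLUSTER = 32        # maximum hosts per VSAN 5.5 cluster
DISK_GROUPS_PER_HOST = 5      # maximum disk groups per host
HDDS_PER_DISK_GROUP = 7       # maximum HDDs per disk group
HDD_TB = 4                    # assumed HDD size (largest in the earlier configs)

raw_tb = HOSTS_PER_CLUSTER * DISK_GROUPS_PER_HOST * HDDS_PER_DISK_GROUP * HDD_TB
print(f"{raw_tb} TB raw")     # 4480 TB, in line with the ~4.4 PB per-cluster figure
```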
Virtual SAN Datastore
• Virtual SAN is an object store solution that is presented to vSphere as a file system.
• The object store mounts the VMFS volumes from all hosts in a cluster and presents them as a
single shared datastore.
– Only members of the cluster can access the Virtual SAN datastore
– Not all hosts need to contribute storage, but it is recommended that they do.
29
disk group disk group disk group disk group
Each host: 5 disk groups max. Each disk group: 1 SSD + 1 to 7 HDDs
disk group
VSAN network VSAN network VSAN network VSAN networkVSAN network
vsanDatastore
HDD HDDHDDHDDHDD
Virtual SAN Network
• New Virtual SAN traffic VMkernel interface.
– Dedicated for Virtual SAN intra-cluster communication and data replication.
• Supports both Standard and Distributed vSwitches
– Leverage NIOC for QoS in shared scenarios
• NIC teaming – used for availability and not for bandwidth aggregation.
• Layer 2 Multicast must be enabled on physical switches.
– Much easier to manage and implement than Layer 3 Multicast
30
Management Virtual Machines vMotion Virtual SAN
Distributed Switch
20 shares 30 shares 50 shares 100 shares
uplink1 uplink2
vmk1 vmk2vmk0
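As a rough illustration of how the NIOC share values on this slide translate into bandwidth, here is a short sketch assuming a single saturated 10GbE uplink; shares are relative weights that only take effect under contention, so this shows minimum guarantees rather than caps.

```python
# Illustrative only: relative bandwidth implied by NIOC shares on a congested link.
shares = {"Management": 20, "Virtual Machines": 30, "vMotion": 50, "Virtual SAN": 100}
uplink_gbps = 10  # assumed 10GbE uplink

total = sum(shares.values())
for traffic, s in shares.items():
    print(f"{traffic}: {s / total * uplink_gbps:.1f} Gbps minimum under contention")
# Virtual SAN: 100/200 of 10 Gbps = 5.0 Gbps
```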
Virtual SAN Network
• NIC teaming and load-balancing algorithms:
– Route based on Port ID
• active / passive with explicit failover
– Route based on IP Hash
• active / active with LACP port channel
– Route based on Physical NIC load
• active / active with LACP port channel
Management Virtual Machines vMotion Virtual SAN
Distributed Switch
100 shares 150 shares 250 shares 500 shares
uplink1 uplink2
vmk1 vmk2vmk0
Multi chassis link aggregation capable switches
VMware Virtual SAN
Interoperability Technologies and Products
VMware Virtual SAN
Configuration Walkthrough
Configuring VMware Virtual SAN
• Radically Simple configuration procedure
34
1. Set up the Virtual SAN network
2. Enable Virtual SAN on the cluster
3. Select Manual or Automatic mode
4. If Manual, create disk groups
Configure Network
35
• Configure the new dedicated Virtual SAN network
– vSphere Web Client network template configuration feature.
Enable Virtual SAN
• One click away!!!
– In Automatic mode, all empty local disks are claimed by Virtual SAN for the creation of the distributed vsanDatastore.
– In Manual mode, the administrator must manually select disks to add to the distributed vsanDatastore by creating disk groups.
36
Virtual SAN Datastore
• A single Virtual SAN Datastore is created and mounted, using storage from all hosts
and disk groups in the cluster.
• Virtual SAN Datastore is automatically presented to all hosts in the cluster.
• Virtual SAN Datastore enforces thin-provisioning storage allocation by default.
37
Virtual SAN Capabilities
• Virtual SAN currently surfaces five unique storage capabilities to vCenter.
38
Number of Failures to Tolerate
• Number of failures to tolerate
– Defines the number of host, disk, or network failures a storage object can tolerate. For “n” failures tolerated, “n+1” copies of the object are created and “2n+1” hosts contributing storage are required (a small sketch follows the diagram below).
39
vsan network
vmdkvmdk witness
esxi-01 esxi-02 esxi-03 esxi-04
~50% of I/O ~50% of I/O
Virtual SAN Policy: “Number of failures to tolerate = 1”
raid-1
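A small sketch of the arithmetic stated above (illustrative helper, not a VSAN API):

```python
# n failures tolerated -> n+1 copies of the object, 2n+1 hosts contributing storage.

def ftt_requirements(failures_to_tolerate: int) -> dict:
    n = failures_to_tolerate
    return {
        "copies_of_object": n + 1,
        "min_hosts_contributing_storage": 2 * n + 1,
    }

# FTT = 1 (the diagrammed policy): 2 copies plus a witness spread across 3 hosts.
print(ftt_requirements(1))  # {'copies_of_object': 2, 'min_hosts_contributing_storage': 3}
```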
Number of Disk Stripes Per Object
• Number of disk stripes per object
– The number of HDDs across which each replica of a storage object is distributed. Higher values may
result in better performance.
40
vsan network
stripe-2b witness
esxi-01 esxi-02 esxi-03 esxi-04
stripe-1b
stripe-1a stripe-2a
raid-0raid-0
VSAN Policy: “Number of failures to tolerate = 1” + “Stripe Width =2”
raid-1
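To show how stripe width and failures-to-tolerate multiply into components, here is a short sketch matching the FTT=1 / stripe-width=2 layout in the diagram. The single witness shown is the simple case from the diagram; real component layouts can differ.

```python
# Illustrative only: data components per object = replicas x stripe width.

def data_components(failures_to_tolerate: int, stripe_width: int) -> int:
    replicas = failures_to_tolerate + 1   # RAID-1 mirrors
    return replicas * stripe_width        # RAID-0 stripes within each mirror

print(data_components(1, 2))  # 4 data components (stripe-1a/1b, stripe-2a/2b) + witness
```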
Managing Failure Scenarios
 Through policies, VMs on Virtual SAN can tolerate multiple failures
– Disk Failure – degraded event
– SSD Failure – degraded event
– Controller Failure – degraded event
– Network Failure – absent event
– Server Failure – absent event
 VMs continue to run
 Parallel rebuilds minimize performance pain
– SSD Fail – immediately
– HDD Fail – immediately
– Controller Fail – immediately
– Network Fail – 60 minutes
– Host Fail – 60 minutes
41
Virtual SAN Storage Capabilities
• Force provisioning
– If yes, the object will be provisioned even if the policy specified in the storage policy is not satisfiable with the resources currently available.
• Flash read cache reservation (%)
– Flash capacity reserved as read cache for the storage object. Specified as a percentage of logical size
of the object.
• Object space reservation (%)
– Percentage of the logical size of the storage object that will be reserved (thick provisioned) upon VM
provisioning. The rest of the storage object is thin provisioned.
42
VM Storage Policies Recommendations
• Number of Disk Stripes per object
– Should be left at 1, unless the IOPS requirements of the VM are not being met by the flash layer.
• Flash Read Cache Reservation
– Should be left at 0, unless there is a specific performance requirement to be met by a VM.
• Proportional Capacity
– Should be left at 0, unless thick provisioning of virtual machines is required.
• Force Provisioning
– Should be left disabled, unless the VM needs to be provisioned, even if not in compliance.
43
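Pulling the five storage capabilities and the recommendations above together, here is a hedged sketch of a default-leaning VM storage policy expressed as plain data. The key names are illustrative only, not the actual SPBM property names.

```python
# Illustrative policy sketch using the recommended defaults from this slide.
default_policy = {
    "numberOfFailuresToTolerate": 1,    # the FTT=1 example diagrammed earlier (2 copies + witness)
    "stripeWidth": 1,                   # raise only if the flash layer cannot meet IOPS needs
    "flashReadCacheReservationPct": 0,  # reserve only for a specific performance requirement
    "proportionalCapacityPct": 0,       # 0 = thin provisioned; >0 reserves (thick provisions) space
    "forceProvisioning": False,         # leave disabled unless out-of-compliance provisioning is needed
}

print(default_policy)
```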
Failure Handling Philosophy
 Traditional SANs
– Physical drive needs to be replaced to get back to full redundancy
– Hot-spare disks are set aside to take role of failed disks immediately
– In both cases: 1:1 replacement of disk
 Virtual SAN
– The entire cluster is a “hot-spare”; we always want to get back to full redundancy
– When a disk fails, many small components (stripes or mirrors of objects) fail
– New copies of these components can be spread around the cluster for balancing
– Replacement of the physical disk just adds back resources
Understanding Failure Events
 Degraded events trigger the immediate recovery operations.
– Triggers the immediate recovery operation of objects and components
– Not configurable
 Any of the following detected I/O errors are always deemed degraded:
– Magnetic disk failures
– Flash based devices failures
– Storage controller failures
 Any of the following detected I/O errors are always deemed absent:
– Network failures
– Network Interface Cards (NICs)
– Host failures
45
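Summarizing the degraded/absent behaviour above as plain data (illustrative only; the 60-minute delay for absent components is the rebuild timer described on the earlier failure-scenarios slide):

```python
# Failure type -> component state and rebuild trigger, per the slides above.
failure_handling = {
    "magnetic disk":      {"state": "degraded", "rebuild": "immediate"},
    "flash device (SSD)": {"state": "degraded", "rebuild": "immediate"},
    "storage controller": {"state": "degraded", "rebuild": "immediate"},
    "network":            {"state": "absent",   "rebuild": "after 60 minutes"},
    "host":               {"state": "absent",   "rebuild": "after 60 minutes"},
}

for failure, behaviour in failure_handling.items():
    print(f"{failure}: {behaviour['state']}, rebuild {behaviour['rebuild']}")
```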
Maintenance Mode – planned downtime
 3 Maintenance mode options:
 Ensure accessibility
 Full data migration
 No data migration
For more information, visit:
http://www.vmware.com/products/virtual-san
Más contenido relacionado

La actualidad más candente

09 yong.luo-ceph in-ctrip
09 yong.luo-ceph in-ctrip09 yong.luo-ceph in-ctrip
09 yong.luo-ceph in-ctripYong Luo
 
2015 deploying flash in the data center
2015 deploying flash in the data center2015 deploying flash in the data center
2015 deploying flash in the data centerHoward Marks
 
Presentation architecting a cloud infrastructure
Presentation   architecting a cloud infrastructurePresentation   architecting a cloud infrastructure
Presentation architecting a cloud infrastructurexKinAnx
 
Yair Hershko - Building Software Defined Storage Cloud Using OpenStack
Yair Hershko - Building Software Defined Storage Cloud Using OpenStackYair Hershko - Building Software Defined Storage Cloud Using OpenStack
Yair Hershko - Building Software Defined Storage Cloud Using OpenStackCloud Native Day Tel Aviv
 
eFolder Webinar — Big News: Get Ready for Next-Gen BDR
eFolder Webinar — Big News: Get Ready for Next-Gen BDReFolder Webinar — Big News: Get Ready for Next-Gen BDR
eFolder Webinar — Big News: Get Ready for Next-Gen BDReFolder
 
Software defined storage real or bs-2014
Software defined storage real or bs-2014Software defined storage real or bs-2014
Software defined storage real or bs-2014Howard Marks
 
Linux and H/W optimizations for MySQL
Linux and H/W optimizations for MySQLLinux and H/W optimizations for MySQL
Linux and H/W optimizations for MySQLYoshinori Matsunobu
 
Application acceleration from the data storage perspective
Application acceleration from the data storage perspectiveApplication acceleration from the data storage perspective
Application acceleration from the data storage perspectiveInterop
 
Benefity Oracle Cloudu (3/4): Compute
Benefity Oracle Cloudu (3/4): ComputeBenefity Oracle Cloudu (3/4): Compute
Benefity Oracle Cloudu (3/4): ComputeMarketingArrowECS_CZ
 
Vm13 vnx mixed workloads
Vm13 vnx mixed workloadsVm13 vnx mixed workloads
Vm13 vnx mixed workloadspittmantony
 
What is Trove, the Database as a Service on OpenStack?
What is Trove, the Database as a Service on OpenStack?What is Trove, the Database as a Service on OpenStack?
What is Trove, the Database as a Service on OpenStack?OpenStack_Online
 
VMWARE Professionals - Storage and Resources
VMWARE Professionals -  Storage and ResourcesVMWARE Professionals -  Storage and Resources
VMWARE Professionals - Storage and ResourcesPaulo Freitas
 
Windows Server 2012 R2 Software-Defined Storage
Windows Server 2012 R2 Software-Defined StorageWindows Server 2012 R2 Software-Defined Storage
Windows Server 2012 R2 Software-Defined StorageAidan Finn
 
Building Storage for Clouds (ONUG Spring 2015)
Building Storage for Clouds (ONUG Spring 2015)Building Storage for Clouds (ONUG Spring 2015)
Building Storage for Clouds (ONUG Spring 2015)Howard Marks
 
Varrow datacenter storage today and tomorrow
Varrow   datacenter storage today and tomorrowVarrow   datacenter storage today and tomorrow
Varrow datacenter storage today and tomorrowpittmantony
 
TSM 6.4.1 intro
TSM 6.4.1 intro TSM 6.4.1 intro
TSM 6.4.1 intro Solv AS
 
Build your own cloud server
Build your own cloud serverBuild your own cloud server
Build your own cloud serverRandall Spence
 
Reducing Database Pain & Costs with Postgres
Reducing Database Pain & Costs with PostgresReducing Database Pain & Costs with Postgres
Reducing Database Pain & Costs with PostgresEDB
 

La actualidad más candente (20)

09 yong.luo-ceph in-ctrip
09 yong.luo-ceph in-ctrip09 yong.luo-ceph in-ctrip
09 yong.luo-ceph in-ctrip
 
2015 deploying flash in the data center
2015 deploying flash in the data center2015 deploying flash in the data center
2015 deploying flash in the data center
 
Presentation architecting a cloud infrastructure
Presentation   architecting a cloud infrastructurePresentation   architecting a cloud infrastructure
Presentation architecting a cloud infrastructure
 
Yair Hershko - Building Software Defined Storage Cloud Using OpenStack
Yair Hershko - Building Software Defined Storage Cloud Using OpenStackYair Hershko - Building Software Defined Storage Cloud Using OpenStack
Yair Hershko - Building Software Defined Storage Cloud Using OpenStack
 
eFolder Webinar — Big News: Get Ready for Next-Gen BDR
eFolder Webinar — Big News: Get Ready for Next-Gen BDReFolder Webinar — Big News: Get Ready for Next-Gen BDR
eFolder Webinar — Big News: Get Ready for Next-Gen BDR
 
Software defined storage real or bs-2014
Software defined storage real or bs-2014Software defined storage real or bs-2014
Software defined storage real or bs-2014
 
Linux and H/W optimizations for MySQL
Linux and H/W optimizations for MySQLLinux and H/W optimizations for MySQL
Linux and H/W optimizations for MySQL
 
Application acceleration from the data storage perspective
Application acceleration from the data storage perspectiveApplication acceleration from the data storage perspective
Application acceleration from the data storage perspective
 
Benefity Oracle Cloudu (3/4): Compute
Benefity Oracle Cloudu (3/4): ComputeBenefity Oracle Cloudu (3/4): Compute
Benefity Oracle Cloudu (3/4): Compute
 
Vm13 vnx mixed workloads
Vm13 vnx mixed workloadsVm13 vnx mixed workloads
Vm13 vnx mixed workloads
 
What is Trove, the Database as a Service on OpenStack?
What is Trove, the Database as a Service on OpenStack?What is Trove, the Database as a Service on OpenStack?
What is Trove, the Database as a Service on OpenStack?
 
VMWARE Professionals - Storage and Resources
VMWARE Professionals -  Storage and ResourcesVMWARE Professionals -  Storage and Resources
VMWARE Professionals - Storage and Resources
 
IaaS for DBAs in Azure
IaaS for DBAs in AzureIaaS for DBAs in Azure
IaaS for DBAs in Azure
 
Storage for VDI
Storage for VDIStorage for VDI
Storage for VDI
 
Windows Server 2012 R2 Software-Defined Storage
Windows Server 2012 R2 Software-Defined StorageWindows Server 2012 R2 Software-Defined Storage
Windows Server 2012 R2 Software-Defined Storage
 
Building Storage for Clouds (ONUG Spring 2015)
Building Storage for Clouds (ONUG Spring 2015)Building Storage for Clouds (ONUG Spring 2015)
Building Storage for Clouds (ONUG Spring 2015)
 
Varrow datacenter storage today and tomorrow
Varrow   datacenter storage today and tomorrowVarrow   datacenter storage today and tomorrow
Varrow datacenter storage today and tomorrow
 
TSM 6.4.1 intro
TSM 6.4.1 intro TSM 6.4.1 intro
TSM 6.4.1 intro
 
Build your own cloud server
Build your own cloud serverBuild your own cloud server
Build your own cloud server
 
Reducing Database Pain & Costs with Postgres
Reducing Database Pain & Costs with PostgresReducing Database Pain & Costs with Postgres
Reducing Database Pain & Costs with Postgres
 

Similar a V mware virtual san 5.5 deep dive

VMware VSAN Technical Deep Dive - March 2014
VMware VSAN Technical Deep Dive - March 2014VMware VSAN Technical Deep Dive - March 2014
VMware VSAN Technical Deep Dive - March 2014David Davis
 
VMware virtual SAN 6 overview
VMware virtual SAN 6 overviewVMware virtual SAN 6 overview
VMware virtual SAN 6 overviewsolarisyougood
 
VMware Vsan vtug 2014
VMware Vsan vtug 2014VMware Vsan vtug 2014
VMware Vsan vtug 2014csharney
 
Virtual san hardware guidance & best practices
Virtual san hardware guidance & best practicesVirtual san hardware guidance & best practices
Virtual san hardware guidance & best practicessolarisyougood
 
Accelerate Your Sales with Application-Centric Storage-as-a-Service Using VMw...
Accelerate Your Sales with Application-Centric Storage-as-a-Service Using VMw...Accelerate Your Sales with Application-Centric Storage-as-a-Service Using VMw...
Accelerate Your Sales with Application-Centric Storage-as-a-Service Using VMw...VMware
 
Presentation v mware virtual san 6.0
Presentation   v mware virtual san 6.0Presentation   v mware virtual san 6.0
Presentation v mware virtual san 6.0solarisyougood
 
Server side caching Vs other alternatives
Server side caching Vs other alternativesServer side caching Vs other alternatives
Server side caching Vs other alternativesBappaditya Sinha
 
JetStor portfolio update final_2020-2021
JetStor portfolio update final_2020-2021JetStor portfolio update final_2020-2021
JetStor portfolio update final_2020-2021Gene Leyzarovich
 
VMware Virtual SAN Presentation
VMware Virtual SAN PresentationVMware Virtual SAN Presentation
VMware Virtual SAN Presentationvirtualsouthwest
 
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use CasesVMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use CasesVMworld
 
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash Technology
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash TechnologyCeph Day San Jose - Red Hat Storage Acceleration Utlizing Flash Technology
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash TechnologyCeph Community
 
Virtual san pricing and packaging deck
Virtual san pricing and packaging deckVirtual san pricing and packaging deck
Virtual san pricing and packaging decksolarisyougood
 
VMworld 2013: VMware Virtual SAN
VMworld 2013: VMware Virtual SAN VMworld 2013: VMware Virtual SAN
VMworld 2013: VMware Virtual SAN VMworld
 
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red_Hat_Storage
 
VirtualStor Extreme - Software Defined Scale-Out All Flash Storage
VirtualStor Extreme - Software Defined Scale-Out All Flash StorageVirtualStor Extreme - Software Defined Scale-Out All Flash Storage
VirtualStor Extreme - Software Defined Scale-Out All Flash StorageGIGABYTE Technology
 
VMware HCI solutions - 2020-01-16
VMware HCI solutions - 2020-01-16VMware HCI solutions - 2020-01-16
VMware HCI solutions - 2020-01-16David Pasek
 
VMware - Virtual SAN - IT Changes Everything
VMware - Virtual SAN - IT Changes EverythingVMware - Virtual SAN - IT Changes Everything
VMware - Virtual SAN - IT Changes EverythingVMUG IT
 
Why Software Defined Storage is Critical for Your IT Strategy
Why Software Defined Storage is Critical for Your IT StrategyWhy Software Defined Storage is Critical for Your IT Strategy
Why Software Defined Storage is Critical for Your IT Strategyandreas kuncoro
 
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...VMworld
 
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc...
Implementation of Dense Storage Utilizing  HDDs with SSDs and PCIe Flash  Acc...Implementation of Dense Storage Utilizing  HDDs with SSDs and PCIe Flash  Acc...
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc...Red_Hat_Storage
 

Similar a V mware virtual san 5.5 deep dive (20)

VMware VSAN Technical Deep Dive - March 2014
VMware VSAN Technical Deep Dive - March 2014VMware VSAN Technical Deep Dive - March 2014
VMware VSAN Technical Deep Dive - March 2014
 
VMware virtual SAN 6 overview
VMware virtual SAN 6 overviewVMware virtual SAN 6 overview
VMware virtual SAN 6 overview
 
VMware Vsan vtug 2014
VMware Vsan vtug 2014VMware Vsan vtug 2014
VMware Vsan vtug 2014
 
Virtual san hardware guidance & best practices
Virtual san hardware guidance & best practicesVirtual san hardware guidance & best practices
Virtual san hardware guidance & best practices
 
Accelerate Your Sales with Application-Centric Storage-as-a-Service Using VMw...
Accelerate Your Sales with Application-Centric Storage-as-a-Service Using VMw...Accelerate Your Sales with Application-Centric Storage-as-a-Service Using VMw...
Accelerate Your Sales with Application-Centric Storage-as-a-Service Using VMw...
 
Presentation v mware virtual san 6.0
Presentation   v mware virtual san 6.0Presentation   v mware virtual san 6.0
Presentation v mware virtual san 6.0
 
Server side caching Vs other alternatives
Server side caching Vs other alternativesServer side caching Vs other alternatives
Server side caching Vs other alternatives
 
JetStor portfolio update final_2020-2021
JetStor portfolio update final_2020-2021JetStor portfolio update final_2020-2021
JetStor portfolio update final_2020-2021
 
VMware Virtual SAN Presentation
VMware Virtual SAN PresentationVMware Virtual SAN Presentation
VMware Virtual SAN Presentation
 
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use CasesVMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases
 
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash Technology
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash TechnologyCeph Day San Jose - Red Hat Storage Acceleration Utlizing Flash Technology
Ceph Day San Jose - Red Hat Storage Acceleration Utlizing Flash Technology
 
Virtual san pricing and packaging deck
Virtual san pricing and packaging deckVirtual san pricing and packaging deck
Virtual san pricing and packaging deck
 
VMworld 2013: VMware Virtual SAN
VMworld 2013: VMware Virtual SAN VMworld 2013: VMware Virtual SAN
VMworld 2013: VMware Virtual SAN
 
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
 
VirtualStor Extreme - Software Defined Scale-Out All Flash Storage
VirtualStor Extreme - Software Defined Scale-Out All Flash StorageVirtualStor Extreme - Software Defined Scale-Out All Flash Storage
VirtualStor Extreme - Software Defined Scale-Out All Flash Storage
 
VMware HCI solutions - 2020-01-16
VMware HCI solutions - 2020-01-16VMware HCI solutions - 2020-01-16
VMware HCI solutions - 2020-01-16
 
VMware - Virtual SAN - IT Changes Everything
VMware - Virtual SAN - IT Changes EverythingVMware - Virtual SAN - IT Changes Everything
VMware - Virtual SAN - IT Changes Everything
 
Why Software Defined Storage is Critical for Your IT Strategy
Why Software Defined Storage is Critical for Your IT StrategyWhy Software Defined Storage is Critical for Your IT Strategy
Why Software Defined Storage is Critical for Your IT Strategy
 
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...
 
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc...
Implementation of Dense Storage Utilizing  HDDs with SSDs and PCIe Flash  Acc...Implementation of Dense Storage Utilizing  HDDs with SSDs and PCIe Flash  Acc...
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc...
 

Más de solarisyougood

Emc recoverpoint technical
Emc recoverpoint technicalEmc recoverpoint technical
Emc recoverpoint technicalsolarisyougood
 
Emc vmax3 technical deep workshop
Emc vmax3 technical deep workshopEmc vmax3 technical deep workshop
Emc vmax3 technical deep workshopsolarisyougood
 
EMC Atmos for service providers
EMC Atmos for service providersEMC Atmos for service providers
EMC Atmos for service providerssolarisyougood
 
Cisco prime network 4.1 technical overview
Cisco prime network 4.1 technical overviewCisco prime network 4.1 technical overview
Cisco prime network 4.1 technical overviewsolarisyougood
 
Designing your xen desktop 7.5 environment with training guide
Designing your xen desktop 7.5 environment with training guideDesigning your xen desktop 7.5 environment with training guide
Designing your xen desktop 7.5 environment with training guidesolarisyougood
 
Ibm aix technical deep dive workshop advanced administration and problem dete...
Ibm aix technical deep dive workshop advanced administration and problem dete...Ibm aix technical deep dive workshop advanced administration and problem dete...
Ibm aix technical deep dive workshop advanced administration and problem dete...solarisyougood
 
Ibm power ha v7 technical deep dive workshop
Ibm power ha v7 technical deep dive workshopIbm power ha v7 technical deep dive workshop
Ibm power ha v7 technical deep dive workshopsolarisyougood
 
Power8 hardware technical deep dive workshop
Power8 hardware technical deep dive workshopPower8 hardware technical deep dive workshop
Power8 hardware technical deep dive workshopsolarisyougood
 
Power systems virtualization with power kvm
Power systems virtualization with power kvmPower systems virtualization with power kvm
Power systems virtualization with power kvmsolarisyougood
 
Power vc for powervm deep dive tips & tricks
Power vc for powervm deep dive tips & tricksPower vc for powervm deep dive tips & tricks
Power vc for powervm deep dive tips & trickssolarisyougood
 
Emc data domain technical deep dive workshop
Emc data domain  technical deep dive workshopEmc data domain  technical deep dive workshop
Emc data domain technical deep dive workshopsolarisyougood
 
Ibm flash system v9000 technical deep dive workshop
Ibm flash system v9000 technical deep dive workshopIbm flash system v9000 technical deep dive workshop
Ibm flash system v9000 technical deep dive workshopsolarisyougood
 
Emc vnx2 technical deep dive workshop
Emc vnx2 technical deep dive workshopEmc vnx2 technical deep dive workshop
Emc vnx2 technical deep dive workshopsolarisyougood
 
Emc isilon technical deep dive workshop
Emc isilon technical deep dive workshopEmc isilon technical deep dive workshop
Emc isilon technical deep dive workshopsolarisyougood
 
Emc ecs 2 technical deep dive workshop
Emc ecs 2 technical deep dive workshopEmc ecs 2 technical deep dive workshop
Emc ecs 2 technical deep dive workshopsolarisyougood
 
Cisco mds 9148 s training workshop
Cisco mds 9148 s training workshopCisco mds 9148 s training workshop
Cisco mds 9148 s training workshopsolarisyougood
 
Cisco cloud computing deploying openstack
Cisco cloud computing deploying openstackCisco cloud computing deploying openstack
Cisco cloud computing deploying openstacksolarisyougood
 
Se training storage grid webscale technical overview
Se training   storage grid webscale technical overviewSe training   storage grid webscale technical overview
Se training storage grid webscale technical overviewsolarisyougood
 

Más de solarisyougood (20)

Emc vipr srm workshop
Emc vipr srm workshopEmc vipr srm workshop
Emc vipr srm workshop
 
Emc recoverpoint technical
Emc recoverpoint technicalEmc recoverpoint technical
Emc recoverpoint technical
 
Emc vmax3 technical deep workshop
Emc vmax3 technical deep workshopEmc vmax3 technical deep workshop
Emc vmax3 technical deep workshop
 
EMC Atmos for service providers
EMC Atmos for service providersEMC Atmos for service providers
EMC Atmos for service providers
 
Cisco prime network 4.1 technical overview
Cisco prime network 4.1 technical overviewCisco prime network 4.1 technical overview
Cisco prime network 4.1 technical overview
 
Designing your xen desktop 7.5 environment with training guide
Designing your xen desktop 7.5 environment with training guideDesigning your xen desktop 7.5 environment with training guide
Designing your xen desktop 7.5 environment with training guide
 
Ibm aix technical deep dive workshop advanced administration and problem dete...
Ibm aix technical deep dive workshop advanced administration and problem dete...Ibm aix technical deep dive workshop advanced administration and problem dete...
Ibm aix technical deep dive workshop advanced administration and problem dete...
 
Ibm power ha v7 technical deep dive workshop
Ibm power ha v7 technical deep dive workshopIbm power ha v7 technical deep dive workshop
Ibm power ha v7 technical deep dive workshop
 
Power8 hardware technical deep dive workshop
Power8 hardware technical deep dive workshopPower8 hardware technical deep dive workshop
Power8 hardware technical deep dive workshop
 
Power systems virtualization with power kvm
Power systems virtualization with power kvmPower systems virtualization with power kvm
Power systems virtualization with power kvm
 
Power vc for powervm deep dive tips & tricks
Power vc for powervm deep dive tips & tricksPower vc for powervm deep dive tips & tricks
Power vc for powervm deep dive tips & tricks
 
Emc data domain technical deep dive workshop
Emc data domain  technical deep dive workshopEmc data domain  technical deep dive workshop
Emc data domain technical deep dive workshop
 
Ibm flash system v9000 technical deep dive workshop
Ibm flash system v9000 technical deep dive workshopIbm flash system v9000 technical deep dive workshop
Ibm flash system v9000 technical deep dive workshop
 
Emc vnx2 technical deep dive workshop
Emc vnx2 technical deep dive workshopEmc vnx2 technical deep dive workshop
Emc vnx2 technical deep dive workshop
 
Emc isilon technical deep dive workshop
Emc isilon technical deep dive workshopEmc isilon technical deep dive workshop
Emc isilon technical deep dive workshop
 
Emc ecs 2 technical deep dive workshop
Emc ecs 2 technical deep dive workshopEmc ecs 2 technical deep dive workshop
Emc ecs 2 technical deep dive workshop
 
Emc vplex deep dive
Emc vplex deep diveEmc vplex deep dive
Emc vplex deep dive
 
Cisco mds 9148 s training workshop
Cisco mds 9148 s training workshopCisco mds 9148 s training workshop
Cisco mds 9148 s training workshop
 
Cisco cloud computing deploying openstack
Cisco cloud computing deploying openstackCisco cloud computing deploying openstack
Cisco cloud computing deploying openstack
 
Se training storage grid webscale technical overview
Se training   storage grid webscale technical overviewSe training   storage grid webscale technical overview
Se training storage grid webscale technical overview
 

Último

JET Technology Labs White Paper for Virtualized Security and Encryption Techn...
JET Technology Labs White Paper for Virtualized Security and Encryption Techn...JET Technology Labs White Paper for Virtualized Security and Encryption Techn...
JET Technology Labs White Paper for Virtualized Security and Encryption Techn...amber724300
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
Irene Moetsana-Moeng: Stakeholders in Cybersecurity: Collaborative Defence fo...
Irene Moetsana-Moeng: Stakeholders in Cybersecurity: Collaborative Defence fo...Irene Moetsana-Moeng: Stakeholders in Cybersecurity: Collaborative Defence fo...
Irene Moetsana-Moeng: Stakeholders in Cybersecurity: Collaborative Defence fo...itnewsafrica
 
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesMuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesManik S Magar
 
React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...Karmanjay Verma
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Hiroshi SHIBATA
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfpanagenda
 
Kuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorialKuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorialJoão Esperancinha
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxfnnc6jmgwh
 
Digital Tools & AI in Career Development
Digital Tools & AI in Career DevelopmentDigital Tools & AI in Career Development
Digital Tools & AI in Career DevelopmentMahmoud Rabie
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...Wes McKinney
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)Mark Simos
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesBernd Ruecker
 
All These Sophisticated Attacks, Can We Really Detect Them - PDF
All These Sophisticated Attacks, Can We Really Detect Them - PDFAll These Sophisticated Attacks, Can We Really Detect Them - PDF
All These Sophisticated Attacks, Can We Really Detect Them - PDFMichael Gough
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfIngrid Airi González
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality AssuranceInflectra
 
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sectoritnewsafrica
 

Último (20)

JET Technology Labs White Paper for Virtualized Security and Encryption Techn...
JET Technology Labs White Paper for Virtualized Security and Encryption Techn...JET Technology Labs White Paper for Virtualized Security and Encryption Techn...
JET Technology Labs White Paper for Virtualized Security and Encryption Techn...
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Irene Moetsana-Moeng: Stakeholders in Cybersecurity: Collaborative Defence fo...
Irene Moetsana-Moeng: Stakeholders in Cybersecurity: Collaborative Defence fo...Irene Moetsana-Moeng: Stakeholders in Cybersecurity: Collaborative Defence fo...
Irene Moetsana-Moeng: Stakeholders in Cybersecurity: Collaborative Defence fo...
 
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesMuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
 
React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...React JS; all concepts. Contains React Features, JSX, functional & Class comp...
React JS; all concepts. Contains React Features, JSX, functional & Class comp...
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
 
Kuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorialKuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorial
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
 
Digital Tools & AI in Career Development
Digital Tools & AI in Career DevelopmentDigital Tools & AI in Career Development
Digital Tools & AI in Career Development
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architectures
 
All These Sophisticated Attacks, Can We Really Detect Them - PDF
All These Sophisticated Attacks, Can We Really Detect Them - PDFAll These Sophisticated Attacks, Can We Really Detect Them - PDF
All These Sophisticated Attacks, Can We Really Detect Them - PDF
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdf
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
 

V mware virtual san 5.5 deep dive

  • 1. © 2014 VMware Inc. All rights reserved. VMware Virtual SAN 5.5 Technical Deep Dive – March 2014 Alberto Farronato, VMware Wade Holmes, VMware March, 2014
  • 2. © 2014 VMware Inc. All rights reserved. Download this slide http://ouo.io/A68RB
  • 3. Software-Defined Storage 3 Bringing the efficient operational model of virtualization to storage Virtual Data Services Data Protection Mobility Performance Policy-driven Control Plane SAN / NAS SAN/NAS Pool Virtual Data Plane x86 Servers Hypervisor-converged Storage pool Object Storage Pool Cloud Object Storage Virtual SAN
  • 4. Virtual SAN: Radically Simple Hypervisor-Converged Storage 4 vSphere + VSAN … • Runs on any standard x86 server • Policy-based management framework • Embedded in vSphere kernel • High performance flash architecture • Built-in resiliency • Deep integration with VMware stack The Basics Hard disks SSD Hard disks SSD Hard disks SSD VSAN Shared Datastore
  • 5. 12,000+ Virtual SAN Beta Participants 95% Beta customers Recommend VSAN 90% Believe VSAN will Impact Storage like vSphere did to Compute Unprecedented Customer Interest And Validation 5
  • 6. Why Virtual SAN? 6 • Two click Install • Single pane of glass • Policy-driven • Self-tuning • Integrated with VMware stack Radically Simple • Embedded in vSphere kernel • Flash-accelerated • Up to 2M IOPs from 32 nodes cluster • Granular and linear scaling High Performance Lower TCO • Server-side economics • No large upfront investments • Grow-as-you-go • Easy to operate with powerful automation • No specialized skillset
  • 7. Two Ways to Build a Virtual SAN Node 7 Completely Hardware Independent 1. Virtual SAN Ready Node …with multiple options available at GA + 30 Preconfigured server ready to use Virtual SAN… 2. Build Your Own …using the Virtual SAN Compatibility Guide* Choose individual components … SSD or PCIe SAS/NL-SAS/ SATA HDDs Any Server on vSphere Hardware Compatibility List HBA/RAID Controller ⃰ Note: For additional details, please refer to Virtual SAN VMware Compatibility Guide Page ⃰ Components for Virtual SAN must be chosen from Virtual SAN HCL, using any other components is unsupported
  • 8. Broad Partner Ecosystem Support for Virtual SAN 8 Storage Server / Systems Solution Data Protection Solution
  • 9. Virtual SAN Simplifies And Automates Storage Management 9 Per VM Storage Service Levels From a Single Self-tuning Datastore Storage Policy-Based Management Virtual SAN Shared Datastore vSphere + Virtual SAN SLAs Software Automates Control of Service Levels No more LUNs/Volumes! Policies Set Based on Application Needs Capacity Performance Availability Per VM Storage Policies “Virtual SAN is easy to deploy, just a few check boxes. No need to configure RAID.” — Jim Streit IT Architect, Thomson Reuters
  • 10. Virtual SAN Delivers Enterprise-Grade Scale 10 2M IOPS 3,200 VMs 4.4 Petabytes Maximum Scalability per Virtual SAN Cluster 32 Hosts “Virtual SAN’s allows us to build out scalable heterogeneous storage infrastructure like the Facebooks and Googles of the world. Virtual SAN allows us to add scale, add resources, while being able to service high performance workloads.” — Dave Burns VP of Tech Ops, Cincinnati Bell
  • 11. High Performance with Elastic and Linear Scalability 11 80K 160K 320K 480K 640K 253K 505K 1M 1.5M 2M 4 8 16 24 32 IOPS Number of Hosts In Virtual SAN Cluster Mixed 100% Read 286 473 677 767 805 3 5 7 8 Number of Hosts In Virtual SAN Cluster Number of VDI VMs VSAN All SSD Array Notes: based on IOmeter benchmark Mixed = 70% Read, 4K 80% random Notes: Based on View Planner benchmark Up to 2M IOPs in 32 Node Cluster Comparable VDI density to an All Flash Array
  • 12. Virtual SAN is Deeply Integrated with VMware Stack 12 Ideal for VMware Environments CONFIDENTIAL – NDA ONLY vMotion vSphere HA DRS Storage vMotion vSphere Snapshots Linked Clones VDP Advanced vSphere Replication Data Protection VMware View Virtual Desktop vCenter Operations Manager vCloud Automation Center IaaS Cloud Ops and Automation Site Recovery Manager Disaster Recovery Site A Site B Storage Policy-Based Management
  • 13. Virtual SAN 5.5 – Pricing And Packing 13 VSAN Editions and Bundles Virtual SAN Virtual SAN with Data Protection Virtual SAN for Desktop Overview • Standalone edition • No capacity, scale or workload restriction • Bundle of Virtual SAN and vSphere Data Protection Adv. • Standalone edition • VDI only (VMware or Citrix) • Concurrent or named users Licensing Per CPU Per CPU Per User Price (USD) $2,495 $2,875 (Promo ends Sept 15th 2014) $50 Features Persistent data store    Read / Write caching    Policy-based Management    Virtual Distributed Switch    Replication (vSphere Replication)    Snapshots and clones (vSphere Snapshots & Clones)    Backup (vSphere Data Protection Advanced)  Not for Public Disclosure NDA Material only Do not share with Public until GA Note: Regional pricing in standard VMware currencies applies. Please check local pricelists for more detail.
  • 14. Virtual SAN – Launch Promotions 14 Bundle promos (20% discount): Virtual SAN with Data Protection — Virtual SAN (1 CPU) + vSphere Data Protection Advanced (1 CPU), promo price $2,875/CPU, ends 9/15/2014; VSA to VSAN upgrade — Virtual SAN (6 CPUs per bundle), promo price $9,180/bundle, ends 9/15/2014. Beta promo (20% discount): register and download promo, Virtual SAN (1 CPU), promo price $1,996/CPU, ends 6/15/2014. Terms (as listed): minimum purchase of 10 CPUs; first purchase only. Note: Regional pricing for promotions exists in standard VMware currencies. Please check local pricelists for more detail.
  • 15. Virtual SAN Reduces CAPEX and OPEX for Better TCO 15 CAPEX • Server-side economics • No Fibre Channel network • Pay-as-you-grow OPEX • Simplified storage configuration • No LUNs • Managed directly through vSphere Web Client • Automated VM provisioning • Simplified capacity planning As low as $0.50/GB [2] • As low as $0.25/IOPS • 5x lower OPEX [4] • Up to 50% TCO reduction • As low as $50/desktop [1] Footnotes: 1. Full clones 2. Usable capacity 3. Estimated based on 2013 street pricing; CAPEX includes storage hardware + software license costs 4. Source: Taneja Group
  • 16. Flexibly Configure For Performance And Capacity 16 Example host configurations (performance-oriented to capacity-oriented), each with 2x 8-core CPUs and 128GB memory: (1) 1x 400GB MLC SSD (~15% of usable capacity) + 5x 1.2TB 10K SAS — ~20–15K IOPS, 6TB raw capacity, $0.32/IOPS, $2.12/GB; (2) 1x 400GB MLC SSD (~10% of usable capacity) + 7x 2TB 7.2K NL-SAS — ~15–10K IOPS, 14TB raw capacity, $0.57/IOPS, $1.02/GB; (3) 2x 400GB MLC SSD (~4% of usable capacity) + 10x 4TB 7.2K NL-SAS — ~10–5K IOPS, 40TB raw capacity, $1.38/IOPS, $0.52/GB. Notes: 1. Mixed workload, 70% read, 80% random. Costs estimated based on 2013 street pricing; CAPEX includes storage hardware + software license costs.
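As a rough illustration of the trade-off above, the short Python sketch below (not part of the original deck; the helper and variable names are mine) recomputes the per-host raw capacity for each example configuration and rolls it up to an 8-host cluster.

```python
# Illustrative only: reproduces the raw-capacity math behind the three example
# configurations above. Drive counts and sizes come from the slide; names are mine.

TB = 1000  # decimal GB per TB, matching the slide's raw-capacity figures

configs = [
    {"name": "Performance", "ssd_gb": 1 * 400, "hdd_count": 5,  "hdd_tb": 1.2},
    {"name": "Balanced",    "ssd_gb": 1 * 400, "hdd_count": 7,  "hdd_tb": 2.0},
    {"name": "Capacity",    "ssd_gb": 2 * 400, "hdd_count": 10, "hdd_tb": 4.0},
]

for c in configs:
    raw_tb_per_host = c["hdd_count"] * c["hdd_tb"]  # 6, 14 and 40 TB, as on the slide
    print(f'{c["name"]}: {raw_tb_per_host:.0f} TB raw/host, '
          f'{c["ssd_gb"]} GB flash/host, '
          f'{8 * raw_tb_per_host:.0f} TB raw in an 8-host cluster')
```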
  • 17. Granular Scaling Eliminates Overprovisioning 17 Delivers predictable scaling and the ability to control costs. • Compared to external storage at scale • Estimated based on 2013 street pricing; CAPEX includes storage hardware + software license costs • Additional savings come from reduced OPEX through automation • Virtual SAN configuration: 9 VMs per core, with 40GB per VM, 2 copies for availability and 10% SSD for performance. VSAN enables predictable linear scaling; spikes correspond to scaling out due to IOPS requirements. [Chart: $/VDI storage cost per desktop ($40–$240) vs. number of desktops (500–3,000), Virtual SAN vs. midrange hybrid array.]
  • 18. Running a Google-like Datacenter 18 Modular infrastructure. Break-Replace Operations "From a break-fix perspective, I think there's a huge difference in what needs to be done when a piece of hardware fails. I can have anyone on my team go back and replace a 1U or 2U server. … essentially modularizing my datacenter and delivering a true Software-Defined Storage architecture." — Ryan Hoenle Director of IT, DOE Fund
  • 19. Hardware Requirements 19 Any server on the VMware Compatibility Guide • SSDs, HDDs, and storage controllers must be listed on the VMware Compatibility Guide for VSAN http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan • Minimum 3 ESXi 5.5 hosts; maximum hosts: “I’ll tell you later……” • 1Gb/10Gb NIC • SAS/SATA controllers (RAID controllers must work in “pass-through” or “RAID0” mode) • SAS/SATA/PCIe SSD and SAS/NL-SAS/SATA HDD — at least 1 of each • 4GB to 8GB USB or SD cards (boot devices)
  • 20. Flash Based Devices VMware SSD Performance Classes – Class A: 2,500-5,000 writes per second – Class B: 5,000-10,000 writes per second – Class C: 10,000-20,000 writes per second – Class D: 20,000-30,000 writes per second – Class E: 30,000+ writes per second Examples – Intel DC S3700 SSD ~36000 writes per second -> Class E – Toshiba SAS SSD MK2001GRZB ~16000 writes per second -> Class C Workload Definition – Queue Depth: 16 or less – Transfer Length: 4KB – Operations: write – Pattern: 100% random – Latency: less than 5 ms Endurance – 10 Drive Writes per Day (DWPD), and – Random write endurance up to 3.5 PB on 8KB transfer size per NAND module, or 2.5 PB on 4KB transfer size per NAND module 20
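The class thresholds above lend themselves to a simple lookup. The following Python helper is purely illustrative (it is not a VMware tool); it maps a drive's sustained random-write IOPS, measured under the workload definition above, to its class.

```python
# Illustrative mapping of sustained 4KB random-write IOPS to the VMware SSD
# performance classes listed on the slide.

def ssd_performance_class(writes_per_second: int) -> str:
    """Return the VMware SSD class for a given sustained write IOPS figure."""
    if writes_per_second >= 30_000:
        return "Class E"
    if writes_per_second >= 20_000:
        return "Class D"
    if writes_per_second >= 10_000:
        return "Class C"
    if writes_per_second >= 5_000:
        return "Class B"
    if writes_per_second >= 2_500:
        return "Class A"
    return "Below Class A"

# Examples from the slide: Intel DC S3700 (~36,000 w/s) and Toshiba MK2001GRZB (~16,000 w/s)
print(ssd_performance_class(36_000))  # Class E
print(ssd_performance_class(16_000))  # Class C
```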
  • 21. Flash Capacity Sizing  The general recommendation for sizing Virtual SAN's flash capacity is to have 10% of the anticipated consumed storage capacity before the Number of Failures To Tolerate is considered.  Total flash capacity percentage should be based on use case, capacity and performance requirements. – 10% is a general recommendation, could be too much or it may not be enough. Measurement Requirements Values Projected VM space usage 20GB Projected number of VMs 1000 Total projected space consumption per VM 20GB x 1000 = 20,000 GB = 20 TB Target flash capacity percentage 10% Total flash capacity required 20TB x .10 = 2 TB
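The table above can be expressed as a one-line calculation. The sketch below is a minimal illustration of the 10% rule of thumb; the function name and defaults are mine, not a VMware sizing tool.

```python
# Flash capacity = target % of anticipated consumed capacity, before FTT copies.

def required_flash_gb(vm_count: int, space_per_vm_gb: float, flash_pct: float = 0.10) -> float:
    projected_consumption_gb = vm_count * space_per_vm_gb
    return projected_consumption_gb * flash_pct

# 1,000 VMs x 20 GB each = 20 TB projected consumption -> 2 TB of flash at 10%
print(required_flash_gb(1000, 20))  # 2000.0 GB (2 TB)
```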
  • 22. Two Ways to Build a Virtual SAN Node — Radically Simple Hypervisor-Converged Storage 1. VSAN Ready Node: preconfigured server ready to use VSAN, with 10 different options between multiple 3rd-party vendors available at GA. 2. Build your own, using the VSAN Compatibility Guide*: choose individual components — any server on the vSphere Hardware Compatibility List, multi-level cell SSD (or better) or PCIe SSD, 6Gb enterprise-grade HBA/RAID controller, SAS/NL-SAS HDDs or select SATA HDDs. * Note: For additional details, please refer to the Virtual SAN VMware Compatibility Guide
  • 23. Virtual SAN Implementation Requirements • Virtual SAN requires: – Minimum of 3 hosts in a cluster configuration – All 3 hosts must contribute storage • vSphere 5.5 U1 or later – Locally attached disks • Magnetic disks (HDD) • Flash-based devices (SSD) – Network connectivity • 1Gb Ethernet • 10Gb Ethernet (preferred) 23 [Diagram: 3-host vSphere 5.5 U1 cluster (esxi-01/02/03), each with local SSD + HDD storage]
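For illustration, the hedged pre-flight check below encodes the requirements listed above (3 hosts minimum, each contributing at least one SSD and one HDD, 1GbE minimum with 10GbE preferred). It is a sketch, not a VMware utility; the class and function names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class HostSpec:
    name: str
    ssd_count: int
    hdd_count: int
    nic_gbps: int  # fastest NIC available for the VSAN network

def validate_vsan_cluster(hosts: list[HostSpec]) -> list[str]:
    """Return a list of requirement violations; empty list means the minimum is met."""
    issues = []
    if len(hosts) < 3:
        issues.append("Virtual SAN requires a minimum of 3 hosts in the cluster")
    for h in hosts:
        if h.ssd_count < 1 or h.hdd_count < 1:
            issues.append(f"{h.name}: each contributing host needs at least 1 SSD and 1 HDD")
        if h.nic_gbps < 1:
            issues.append(f"{h.name}: 1GbE is the minimum; 10GbE is preferred")
    return issues

print(validate_vsan_cluster([HostSpec("esxi-01", 1, 5, 10),
                             HostSpec("esxi-02", 1, 5, 10),
                             HostSpec("esxi-03", 1, 5, 10)]))  # [] -> requirements met
```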
  • 24. Virtual SAN Scalable Architecture 24 • Scale-up and scale-out architecture – granular and linear storage, performance and compute scaling: – Per magnetic disk – for capacity – Per flash-based device – for performance – Per disk group – for performance and capacity – Per node – for compute capacity [Diagram: vsanDatastore spanning disk groups across hosts on the VSAN network; add disks or disk groups to scale up, add nodes to scale out]
  • 25. Oh yeah! Scalability….. 25 vsanDatastore 4.4 Petabytes 2 Million IOPS 32 Hosts
  • 26. Storage Policy-based Management • SPBM is a storage policy framework built into vSphere that enables policy-driven virtual machine provisioning. • Virtual SAN leverages this framework in conjunction with the VASA APIs to expose storage characteristics to vCenter: – Storage capabilities • What the underlying storage is capable of offering, surfaced up to vCenter. – Virtual machine storage requirements • Requirements can only be used against available capabilities. – VM Storage Policies • Construct that stores a virtual machine's storage provisioning requirements, expressed in terms of the available storage capabilities. 26
  • 27. Virtual SAN SPBM Object Provisioning Mechanism [Diagram: Storage Policy Wizard → SPBM → datastore profile → VSAN object manager → VSAN objects / virtual disks] VSAN objects may be (1) mirrored across hosts and (2) striped across disks/hosts to meet VM storage profile policies.
  • 28. Virtual SAN Disk Groups • Virtual SAN uses the concept of disk groups to pool flash devices and magnetic disks into single management constructs. • Disk groups are composed of at least 1 flash device and 1 magnetic disk. – Flash devices are used for performance (read cache + write buffer). – Magnetic disks are used for storage capacity. – Disk groups cannot be created without a flash device. • Each host: 5 disk groups max. Each disk group: 1 SSD + 1 to 7 HDDs. 28
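A back-of-the-envelope sketch of how these per-host limits roll up into the headline capacity figure: assuming 4TB drives (as in the capacity-oriented configuration shown earlier) and fully populated disk groups, the math below reproduces the ~4.4 PB maximum for a 32-host cluster. The function is illustrative only.

```python
# Raw datastore capacity from the per-host maximums: 5 disk groups x 7 HDDs each.

def max_raw_capacity_tb(hosts: int, disk_groups_per_host: int = 5,
                        hdds_per_disk_group: int = 7, hdd_size_tb: float = 4.0) -> float:
    return hosts * disk_groups_per_host * hdds_per_disk_group * hdd_size_tb

print(max_raw_capacity_tb(8))    # 1120 TB (~1.1 PB)
print(max_raw_capacity_tb(16))   # 2240 TB (~2.2 PB)
print(max_raw_capacity_tb(32))   # 4480 TB (~4.4 PB)
```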
  • 29. Virtual SAN Datastore • Virtual SAN is an object store solution that is presented to vSphere as a file system. • The object store mounts the VMFS volumes from all hosts in a cluster and presents them as a single shared datastore. – Only members of the cluster can access the Virtual SAN datastore. – Not all hosts need to contribute storage, but it is recommended. 29 [Diagram: vsanDatastore spanning all hosts over the VSAN network; each host: 5 disk groups max, each disk group: 1 SSD + 1 to 7 HDDs]
  • 30. Virtual SAN Network • New Virtual SAN traffic VMkernel interface. – Dedicated to Virtual SAN intra-cluster communication and data replication. • Supports both Standard and Distributed vSwitches. – Leverage NIOC for QoS in shared scenarios. • NIC teaming – used for availability, not for bandwidth aggregation. • Layer 2 multicast must be enabled on physical switches. – Much easier to manage and implement than Layer 3 multicast. 30 [Diagram: distributed switch with Management, Virtual Machines, vMotion and Virtual SAN port groups sharing two uplinks, with NIOC shares of 20/30/50/100]
  • 31. Virtual SAN Network • NIC teaming and load-balancing algorithms: – Route based on Port ID • active/passive with explicit failover – Route based on IP Hash • active/active with LACP port channel – Route based on Physical NIC Load • active/active with LACP port channel [Diagram: distributed switch with Management, Virtual Machines, vMotion and Virtual SAN port groups (100/150/250/500 shares) over two uplinks to multi-chassis link-aggregation-capable switches]
  • 32. VMware Virtual SAN Interoperability Technologies and Products
  • 34. Configuring VMware Virtual SAN • Radically simple configuration procedure 34: 1. Set up the Virtual SAN network 2. Enable Virtual SAN on the cluster 3. Select Manual or Automatic mode 4. If Manual, create disk groups
  • 35. Configure Network 35 • Configure the new dedicated Virtual SAN network – vSphere Web Client network template configuration feature.
  • 36. Enable Virtual SAN • One click away! – With Virtual SAN configured in Automatic mode, all empty local disks are claimed by Virtual SAN for the creation of the distributed vsanDatastore. – With Virtual SAN configured in Manual mode, the administrator must manually select disks to add to the distributed vsanDatastore by creating disk groups. 36
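For automation-minded readers, the following pyVmomi sketch shows one way this step could be scripted. It is a hedged example: the connection details and inventory path are placeholders, and the exact property names (vsanConfig, autoClaimStorage) are assumptions to verify against the vSphere 5.5 API reference before use.

```python
# Hedged sketch: enable Virtual SAN in Automatic mode on an existing cluster via pyVmomi.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password")                       # placeholder credentials
cluster = si.content.searchIndex.FindByInventoryPath(
    "/Datacenter/host/VSAN-Cluster")                    # placeholder inventory path

spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=True)))                    # Automatic mode: claim empty local disks

task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```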
  • 37. Virtual SAN Datastore • A single Virtual SAN datastore is created and mounted, using storage from all hosts and disk groups in the cluster. • The Virtual SAN datastore is automatically presented to all hosts in the cluster. • The Virtual SAN datastore enforces thin-provisioned storage allocation by default. 37
  • 38. Virtual SAN Capabilities • Virtual SAN currently surfaces five unique storage capabilities to vCenter. 38
  • 39. Number of Failures to Tolerate • Number of failures to tolerate – Defines the number of host, disk or network failures a storage object can tolerate. For “n” failures tolerated, “n+1” copies of the object are created and “2n+1” hosts contributing storage are required. 39 [Diagram: with “Number of failures to tolerate = 1”, a vmdk is mirrored (RAID-1) across esxi-01 and esxi-02, each serving ~50% of I/O, with a witness on esxi-03]
  • 40. Number of Disk Stripes Per Object • Number of disk stripes per object – The number of HDDs across which each replica of a storage object is distributed. Higher values may result in better performance. 40 [Diagram: with “Number of failures to tolerate = 1” + “Stripe Width = 2”, each RAID-1 replica is a RAID-0 stripe of two components spread across hosts, plus a witness]
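Taken together, the two policies above determine an object's layout. The small calculation below is illustrative only (witness components are deliberately left out of the count): FTT=n yields n+1 replicas on at least 2n+1 storage-contributing hosts, and each replica is striped across the configured number of disks.

```python
def object_layout(failures_to_tolerate: int, stripe_width: int = 1) -> dict:
    """Replica and component counts implied by FTT and stripe width (witnesses excluded)."""
    replicas = failures_to_tolerate + 1
    return {
        "replicas": replicas,
        "min_hosts_contributing_storage": 2 * failures_to_tolerate + 1,
        "stripe_components_per_replica": stripe_width,
        "data_components_total": replicas * stripe_width,
    }

print(object_layout(1))                   # FTT=1: 2 replicas, 3 hosts minimum
print(object_layout(1, stripe_width=2))   # FTT=1, SW=2: each replica striped across 2 disks
```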
  • 41. Managing Failure Scenarios • Through policies, VMs on Virtual SAN can tolerate multiple failures: – Disk failure – degraded event – SSD failure – degraded event – Controller failure – degraded event – Network failure – absent event – Server failure – absent event • VMs continue to run • Parallel rebuilds minimize performance impact; rebuild starts: – SSD failure – immediately – HDD failure – immediately – Controller failure – immediately – Network failure – after 60 minutes – Host failure – after 60 minutes 41
  • 42. Virtual SAN Storage Capabilities • Force provisioning – If yes, the object will be provisioned even if the policy specified in the storage policy is not satisfiable with the resources currently available. • Flash read cache reservation (%) – Flash capacity reserved as read cache for the storage object, specified as a percentage of the logical size of the object. • Object space reservation (%) – Percentage of the logical size of the storage object that will be reserved (thick provisioned) upon VM provisioning. The rest of the storage object is thin provisioned. 42
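Pulling the five capabilities together, a policy can be thought of as a small set of key/value rules. The snippet below is a plain-Python illustration, not the SPBM API; the key names are descriptive stand-ins rather than official rule identifiers, and the values reflect the defaults recommended on the next slide except for failures to tolerate.

```python
# Illustrative representation of a VM storage policy built from the five
# Virtual SAN capabilities described above.

vm_storage_policy = {
    "hostFailuresToTolerate": 1,         # n+1 replicas, 2n+1 hosts contributing storage
    "stripeWidth": 1,                    # leave at 1 unless flash cannot satisfy IOPS needs
    "flashReadCacheReservationPct": 0,   # % of the object's logical size reserved as read cache
    "objectSpaceReservationPct": 0,      # % of logical size thick-provisioned; rest stays thin
    "forceProvisioning": False,          # if True, provision even when the policy cannot be met
}
```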
  • 43. VM Storage Policies Recommendations • Number of disk stripes per object – Should be left at 1, unless the IOPS requirements of the VM are not being met by the flash layer. • Flash read cache reservation – Should be left at 0, unless there is a specific performance requirement to be met by a VM. • Object space reservation (proportional capacity) – Should be left at 0, unless thick provisioning of virtual machines is required. • Force provisioning – Should be left disabled, unless the VM needs to be provisioned even if not in compliance with the policy. 43
  • 44. Failure Handling Philosophy • Traditional SANs – A physical drive needs to be replaced to get back to full redundancy – Hot-spare disks are set aside to take the role of failed disks immediately – In both cases: 1:1 replacement of the disk • Virtual SAN – The entire cluster is a “hot spare”; we always want to get back to full redundancy – When a disk fails, many small components (stripes or mirrors of objects) fail – New copies of these components can be spread around the cluster for balancing – Replacing the physical disk just adds back resources
  • 45. Understanding Failure Events • Degraded events trigger the immediate recovery operations. – Trigger the immediate recovery of objects and components – Not configurable • The following detected I/O errors are always deemed degraded: – Magnetic disk failures – Flash-based device failures – Storage controller failures • The following detected failures are always deemed absent: – Network failures – Network Interface Card (NIC) failures – Host failures 45
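The mapping between failure type, event type, and rebuild timing (from this slide and the failure-scenarios slide above) can be summarized as a simple table; the Python below is purely illustrative.

```python
# Failure type -> (event classification, rebuild behavior), per the two slides above.

FAILURE_EVENTS = {
    "magnetic_disk":      ("degraded", "rebuild immediately"),
    "flash_device":       ("degraded", "rebuild immediately"),
    "storage_controller": ("degraded", "rebuild immediately"),
    "network":            ("absent",   "rebuild after 60 minutes"),
    "host":               ("absent",   "rebuild after 60 minutes"),
}

for failure, (event, action) in FAILURE_EVENTS.items():
    print(f"{failure}: {event} -> {action}")
```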
  • 46. Maintenance Mode – Planned Downtime • 3 maintenance mode options: – Ensure accessibility – Full data migration – No data migration
  • 47. For more information, visit: http://www.vmware.com/products/virtual-san

Editor's notes

  1. With Software-Defined Storage, we're taking the operational model we pioneered in compute and extending it to storage. Software-Defined Storage allows businesses to more efficiently manage their storage infrastructure with software. How?
CLICK First, by abstracting and pooling physical storage resources to create flexible logical pools of storage in the virtual data plane. We see three main pools going forward: the SAN/NAS pool (enabled by VVOL), hypervisor-converged (enabled by Virtual SAN) and cloud.
CLICK Second, by providing VM-level data services like replication, snapshots, caching, etc. from a broad partner ecosystem.
CLICK Lastly, by enabling an application-centric approach based on a common policy-based control plane. Storage requirements are captured for each individual VM in simple, intuitive policies that follow the VM through its life cycle on any infrastructure. This policy-based management framework allows for seamless automation and orchestration, with the Virtual SAN software dynamically making adjustments to underlying storage pools to ensure application-driven policies are compliant and SLAs are met.
CLICK Integration and interoperability with our storage ecosystem is a key element of our strategy. Across all elements of SDS we plan to enable integration points through APIs that will allow our partners to build value-added capabilities on top of our platform. Above is a list of partners that we have been working with to make the Software-Defined Storage solution a reality for our customers. For example, EMC's ViPR technology abstracts and pools third-party external storage to create a virtual control plane for heterogeneous external storage. This is a great example of how Software-Defined Storage ecosystem vendors leverage the VMware platform to give customers more choice and the ability to transform their storage model.
Software-Defined Storage uses virtualization software to create a fundamentally new approach to storage that removes unnecessary complexity, puts the application in charge, and delivers many of the same benefits we see from the SDDC, including simplicity, high performance, and increased efficiency.
T: Today, we're excited to announce Virtual SAN…
  2. BEN TALKING: Abstracts and pools server-side disks and flash => shared datastore CLICK Decouples software from hardware // Converts physical to virtual Embedded in ESXi kernel to create high performance storage tier running on x86 servers Policy-based management framework automates routine tasks Creates a resilient, scalable storage tier that is easy to use Gives users the flexibility to configure the storage they need T: Virtual SAN is a true Software-Defined Storage product that runs on standard x86 servers, giving users deployment flexibility…
  3. We announced the public beta of Virtual SAN at Vmworld last year and it’s been a great success story. We had over 10,000 registered participants We’ve seen a lot of excitement and response from customers. The team has over-achieved. We promised we’d deliver vSAN in the first half of 2014. As you know, that usually means June 32nd. But I’m glad to announce that we’re almost ready and will be releasing vSAN ahead of schedule in Q1. We also promised an 8-node solution for the first release, but I’m proud to announce that we’re going to support 16 nodes at GA. Finally, to thank our Beta Customers, we’re offering a 20% discount on their first purchase.
  4. BEN TALKING: 2 ways to deploy => ready node or component based VSAN is completely HW independent Flexibility of configuration to optimize for performance or capacity Ready Node: VMW working with OEM server vendors => “VIRTUAL SAN Ready Nodes” Servers designed to make it easy to run Virtual SAN Build Your Own: VMW certifying VSAN to run on many different types of hardware Servers, magnetic disks, solid state drives and controllers. Gives you the flexibility to choose… build storage system based on your needs VMware believes that a true Software-Defined Storage product gives users the flexibility when constructing storage architectures T: VMware has been working with a broad array of ecosystem vendors to make this a reality…
  5. BEN TALKING: We have built a robust, global ecosystem around Virtual SAN. It includes all major server manufacturers and systems solutions, a broad range of hardware components such as controllers and disks, and a variety of data protection solutions. As part of the SDDC approach Pat laid out, VMware offers customers great flexibility of choice. T: In addition to being hardware independent, VSAN has a policy-based management framework built in to simplify storage
  6. BEN TALKING: The SPBM framework allows you to define storage requirements based on application needs. CLICK It is simple => capacity, performance and availability CLICK VSAN matches requirements to underlying capabilities. Unlike traditional external storage => provisioning done at the array layer Automation: policies governed by SLAs CLICK Orchestration: software abstracts underlying hardware End result => No more LUNs or Volumes… T: To give you a better idea, let me show you how all of this works together (DEMO) John: You mentioned a policy-based framework. Help me understand how that works, as I believe that is a fairly new concept when it comes to storage.
  7. BEN TALKING: Beyond the big numbers on this page…. …Virtual SAN scales to the needs of your environment Powerful storage tier running on heterogeneous server hardware Most importantly…scales to the needs of customers. 32 node VSAN cluster 4.4 PB of capacity 2M IOPS 3,200 VMs Not a toy Ideal and viable storage tier for vSphere environments VSAN is high performance, scalable and resilient… and runs on heterogeneous hardware   JOHN TALKING That’s great, Ben. Couldn’t you just add more hardware to any other storage technologies in the market today to increase capacity? T: What is impressive about Virtual SAN is not just its maximum capacity or IOPS… it is its efficiency and how it gets to these numbers…
  8. BEN TALKING Yes… Virtual SAN scales to 32 nodes and 2M IOPs, but it does so in a predictable and linear fashion This is particularly helpful if you are trying to forecast storage capacity…. … or have a latent application in need of more performance Virtual SAN gives you the ability to granularly scale-up or scale-out your cluster Add more resources to achieve an intended outcome One customer quote I liked from the beta was … “We can customize IO and capacity on demand” Eliminates costly overprovisioning Pause… As customers look for every edge possible about efficiency, Virtual SAN delivers on this This gives you the control to have Google-like and Amazon-like efficiency within your private cloud On the left… Linear and Predictable performance Scales with your environment Same functionality across different types of workloads On the right… High VM density in VDI environments. Performance isn’t a constraint VSAN has VM densities comparable to an all-flash array  
  9. (SLIDE AUTOMATICALLY BUILDS) BEN TALKING: Interoperability a key differentiator for Virtual SAN Makes the product easy to use for our customers [GO AROUND TO TALK THROUGH PRODUCTS] High degree of convenience … makes storage simple for customers   John: This is great to hear that Virtual SAN is resilient and interoperates with other VMware products. Could you show me how this works? BEN: Sure T: Let me show you how this works in the product
  10. Drivers on the right – Arrow – Bubbles (with range): $2.5GB, 50% TCO reduction, 5–10x OPEX. Align costs with revenue. Take advantage of decreasing HW prices.
  11. Increase the performance. Get better economics. Save on CPU resources.
The cost of an I/O, in CPU cycles and overhead, is important. Gray and Shenoy derive some rules of thumb for I/O costs: a disk I/O costs 5,000 instructions and 0.1 instructions per byte; the CPU cost of a Systems Area Network (SAN) message is 3,000 clocks and 1 clock per byte; a network message costs 10,000 instructions and 10 instructions per byte.
For an 8KB I/O, which is a standard I/O size for Unix systems, that works out to: Disk: 5,000 + 800 = 5,800 instructions; SAN: 3,000 + 8,000 = 11,000 clocks; Network: 10,000 + 80,000 = 90,000 instructions.
Thus it is obvious why IDCs implement local disks in general preference to SANs or networks. Not only is it cheaper economically, it is much cheaper in CPU resources. Looked at another way, this simply confirms what many practitioners already have ample experience with: the EDC architecture doesn't scale easily or well.
Two I/O-intensive techniques are RAID 5 and RAID 6. In RAID 5, writing a block typically requires four disk accesses: two to read the existing data and parity and two more to write the new data and parity (RAID 6 requires even more). Not surprisingly, Google avoids RAID 5 or RAID 6 and favors mirroring, typically mirroring each chunk of data at least three times and many more times if it is hot. This effectively increases the IOPS per chunk of data at the expense of capacity, which is much cheaper than additional bandwidth or cache.
  12. SSD Interface PCIe vs SAS vs SATA – not really a decision point for performance, as the corresponding IOPS performance will dictate the interface selection.
  13. Speaker notes: vCenter is a requirement for management since VSAN is fully integrated into vSphere. A minimum of 3 nodes and a maximum of 8 nodes (though there is some discussion around a higher node count in later versions). SSD must make up 10% of all storage, but it could be larger than that. We are also recommending a dedicated 10Gb network for VSAN, in fact a NIC team of 2 x 10Gb NICs for availability purposes.
vCenter Server version 5.5 – central point of management. 3 vSphere hosts minimum, running ESXi version 5.5 or later. Not all hosts need to have local storage; some can be just compute nodes. Maximum of 8 nodes in a cluster in version 1.0; greater than 8 nodes planned for future releases.
Local storage: combination of HDD and SSD. SSDs are used as a read cache and write buffer; HDDs are used as the persistent store. SAS/SATA controller: the RAID controller must work in “pass-thru” or “HBA” mode (no RAID). 1Gb or 10Gb network (preferred) for cluster communication/replication.
We have not completed any real characterization yet, but it is expected that the CPU/memory overhead of VSAN is in the region of 10%. VSAN supports the concept of compute nodes – ESXi hosts which do not present any storage, but still have access to, and can run VMs on, the distributed datastore.
Best practices: minimum 3 nodes with storage; have a balanced cluster using identical host configurations; regarding boot image: no stateless, preferred is to use SD card/USB/satadom.
  14. Largest storage capacities: 5 disk groups × 7 HDDs × 4TB × 8 hosts = 1.1 PB; 5 disk groups × 7 HDDs × 4TB × 16 hosts = 2.2 PB
  15. Enable multicast. Disabling IGMP snooping vs. configuring IGMP snooping for selective traffic: VSAN VMkernel multicast traffic should be isolated to a layer 2 non-routable VLAN. Layer 2 multicast traffic can be limited to specific port groups using IGMP snooping. We do not recommend implementing multicast flooding across all ports as a best practice. We do not require layer 3 multicast.