Storage for Virtual Environments
Stephen Foskett, Foskett Services and Gestalt IT
Live Footnotes: #VirtualStorage
Agenda
Introducing the Virtual Data Center
This Hour’s Focus: What Virtualization Does Introducing storage and server virtualization The future of virtualization The virtual data center Virtualization confounds storage Three pillars of performance Other issues Storage features for virtualization What’s new in VMware
Virtualization of Storage, Server and Network Storage has been stuck in the Stone Age since the Stone Age! Fake disks, fake file systems, fixed allocation Little integration and no communication Virtualization is a bridge to the future Maintains functionality for existing apps Improves flexibility and efficiency
A Look at the Future
Server Virtualization is On the Rise Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
Server Virtualization is a Pile of Lies! What the OS thinks it’s running on… What the OS is actually running on… Physical Hardware VMkernel Binary Translation, Paravirtualization, Hardware Assist Guest OS VM Guest OS VM Scheduler and Memory Allocator vNIC vSwitch NIC Driver vSCSI/PV VMDK VMFS I/O Driver
And It Gets Worse Outside the Server!
The Virtual Data Center of Tomorrow Management Applications The Cloud™ Applications Legacy Applications Applications Applications CPU Network Backup Storage
The Real Future of IT Infrastructure Orchestration Software
Three Pillars of VM Performance
Confounding Storage Presentation Storage virtualization is nothing new… RAID and NAS virtualized disks Caching arrays and SANs masked volumes New tricks: Thin provisioning, automated tiering, array virtualization But, we wrongly assume this is where it ends Volume managers and file systems Databases Now we have hypervisors virtualizing storage VMFS/VMDK = storage array? Virtual storage appliances (VSAs)
Begging for Converged I/O 4G FC Storage 1 GbE Network 1 GbE Cluster How many I/O ports and cables does a server need? Typical server has 4 ports, 2 used Application servers have 4-8 ports used! Do FC and InfiniBand make sense with 10/40/100 GbE? When does commoditization hit I/O? Ethernet momentum is unbeatable Blades and hypervisors demand greater I/O integration and flexibility Other side of the coin – need to virtualize I/O
Driving Storage Virtualization Server virtualization demands storage features Data protection with snapshots and replication Allocation efficiency with thin provisioning+ Performance and cost tweaking with automated sub-LUN tiering Improved locking and resource sharing Flexibility is the big one Must be able to create, use, modify and destroy storage on demand Must move storage logically and physically Must allow OS to move too
“The I/O Blender” Demands New Architectures Shared storage is challenging to implement Storage arrays “guess” what’s coming next based on allocation (LUN) taking advantage of sequential performance Server virtualization throws I/O into a blender – All I/O is now random I/O!
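To make the “I/O blender” concrete, here is a minimal Python sketch (all VM names and block addresses are hypothetical) of how several guests issuing purely sequential reads against their own virtual disks arrive at the shared datastore as one interleaved, effectively random stream:

```python
import itertools

# Hypothetical sketch: three VMs each read their own VMDK sequentially.
# Each VMDK lives at a different offset inside one shared datastore LUN.
vm_streams = {
    "vm-a": (1_000_000 + i * 8 for i in itertools.count()),   # 8-block sequential reads
    "vm-b": (5_000_000 + i * 8 for i in itertools.count()),
    "vm-c": (9_000_000 + i * 8 for i in itertools.count()),
}

def io_blender(streams, count=12):
    """Interleave per-VM sequential requests the way a hypervisor scheduler might."""
    names = list(streams)
    for i in range(count):
        vm = names[i % len(names)]          # round-robin between runnable VMs
        yield vm, next(streams[vm])         # next sequential block for that VM

for vm, lba in io_blender(vm_streams):
    print(f"{vm}: LBA {lba}")
# Each VM's own trace is sequential, but the LUN sees large jumps between
# consecutive requests -- a random-looking workload from the array's view.
```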
Server Virtualization Requires SAN and NAS Server virtualization has transformed the data center and storage requirements VMware is the #1 driver of SAN adoption today! 60% of virtual server storage is on SAN or NAS 86% have implemented some server virtualization Server virtualization has enabled and demanded centralization and sharing of storage on arrays like never before! Source: ESG, 2008
Keys to the Future For Storage Folks Ye Olde Seminar Content!
Primary Production Virtualization Platform Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
Storage Features for Virtualization
Which Features Are People Using? Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
What’s New in vSphere 4 and 4.1 VMware vSphere 4 (AKA ESX/ESXi 4) is a major upgrade for storage Lots of new features like thin provisioning, PSA, any-to-any Storage VMotion, PVSCSI Massive performance upgrade (400k IOPS!) vSphere 4.1 is equally huge for storage Boot from SAN vStorage APIs for Array Integration (VAAI) Storage I/O control (SIOC)
What’s New in vSphere 5 VMFS-5 – Scalability and efficiency improvements Storage DRS – Datastore clusters and improved load balancing Storage I/O Control – Cluster-wide and NFS support Profile-Driven Storage – Provisioning, compliance and monitoring FCoE Software Initiator iSCSI Initiator GUI Storage APIs – Storage Awareness (VASA) Storage APIs – Array Integration (VAAI 2) – Thin Stun, NFS, T10 Storage vMotion - Enhanced with mirror mode vSphere Storage Appliance (VSA) vSphere Replication – New in SRM
And Then, There’s VDI… Virtual desktop infrastructure (VDI) takes everything we just worried about and amplifies it: Massive I/O crunches Huge duplication of data More wasted capacity More user visibility More backup trouble
What’s next: Vendor Showcase and Networking Break
Technical Considerations - Configuring Storage for VMs The mechanics of presenting and using storage in virtualized environments
This Hour’s Focus: Hypervisor Storage Features Storage vMotion VMFS Storage presentation: Shared, raw, NFS, etc. Thin provisioning Multipathing (VMware Pluggable Storage Architecture) VAAI and VASA Storage I/O control and storage DRS
Storage vMotion Introduced in ESX 3 as “Upgrade vMotion” ESX 3.5 used a snapshot while the datastore was in motion vSphere 4 used changed-block tracking (CBT) and recursive passes vSphere 5 Mirror Mode mirrors writes to in-progress vMotions and also supports migration of vSphere snapshots and Linked Clones Can be offloaded for VAAI-Block (but not NFS)
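A rough sketch of the mirror-mode idea, under the assumption that it can be reduced to “mirror new writes to both copies while one bulk pass moves the remaining blocks”; this is illustrative Python, not VMware’s implementation:

```python
# Conceptual model of "mirror mode" migration (vs. the older CBT/recursive-pass scheme).
class MirroredDisk:
    def __init__(self, source, destination):
        self.source = source              # dict: block number -> data
        self.destination = destination
        self.migrated = set()             # blocks already present at the destination

    def write(self, block, data):
        # Guest writes during migration are mirrored to both copies, so no
        # "changed block" needs to be re-copied in a later pass.
        self.source[block] = data
        self.destination[block] = data
        self.migrated.add(block)

    def migrate(self):
        # Single bulk pass: copy anything not already mirrored by a write.
        for block, data in self.source.items():
            if block not in self.migrated:
                self.destination[block] = data
                self.migrated.add(block)

src = {0: "a", 1: "b", 2: "c"}
dst = {}
disk = MirroredDisk(src, dst)
disk.write(1, "b2")       # guest write during migration, mirrored to both copies
disk.migrate()            # one pass finishes the move
assert dst == {0: "a", 1: "b2", 2: "c"}
```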
vSphere 5: What’s New in VMFS 5 Max VMDK size is still 2 TB – 512 bytes Virtual (non-passthru) RDM still limited to 2 TB Max LUNs per host is still 256
Hypervisor Storage Options: Shared Storage The common/workstation approach VMware: VMDK image in VMFS datastore Hyper-V: VHD image in CSV datastore Block storage (direct or FC/iSCSI SAN) Why? Traditional, familiar, common (~90%) Prime features (Storage VMotion, etc.) Multipathing, load balancing, failover* But… Overhead of two storage stacks (5-8%) Harder to leverage storage features Often shares storage LUN and queue Difficult storage management VM Host Guest OS VMFS VMDK DAS or SAN Storage
Hypervisor Storage Options: Shared Storage on NAS Skip VMFS and use NAS NFS or SMB is the datastore Wow! Simple – no SAN Multiple queues Flexible (on-the-fly changes) Simple snap and replicate* Enables full VMotion Link aggregation (trunking) is possible But… Less familiar (ESX 3.0+) CPU load questions Limited to 8 NFS datastores (ESX default) Snapshot consistency for multiple VMDK VM Host Guest OS NAS Storage VMDK
Hypervisor Storage Options: Guest iSCSI Skip VMFS and use iSCSI directly Access a LUN just like any physical server VMware ESX can even boot from iSCSI! Ok… Storage folks love it! Can be faster than ESX iSCSI Very flexible (on-the-fly changes) Guest can move and still access storage But… Less common to VM folks CPU load questions No Storage VMotion (but doesn’t need it) VM Host Guest OS iSCSI Storage LUN
Hypervisor Storage Options: Raw Device Mapping (RDM) Guest VMs access storage directly over iSCSI or FC VMs can even boot from raw devices Hyper-V pass-through LUN is similar Great! Per-server queues for performance Easier measurement The only method for clustering Supports LUNs larger than 2 TB (60 TB passthru in vSphere 5!) But… Tricky VMotion and dynamic resource scheduling (DRS) No Storage VMotion More management overhead Limited to 256 LUNs per data center VM Host Guest OS I/O Mapping File SAN Storage
Hypervisor Storage Options: Direct I/O VMware ESX VMDirectPath - Guest VMs access I/O hardware directly Leverages AMD IOMMU or Intel VT-d Great! Potential for native performance Just like RDM but better! But… No VMotion or Storage VMotion No ESX fault tolerance (FT) No ESX snapshots or VM suspend No device hot-add No performance benefit in the real world! VM Host Guest OS I/O Mapping File SAN Storage
Which VMware Storage Method Performs Best? Mixed random I/O CPU cost per I/O VMFS, RDM (p), or RDM (v) Source: “Performance Characterization of VMFS and RDM Using a SAN”, VMware Inc., ESX 3.5, 2008
vSphere 5: Policy or Profile-Driven Storage Allows storage tiers to be defined in vCenter based on SLA, performance, etc. Used during provisioning, cloning, Storage vMotion, Storage DRS Leverages VASA for metrics and characterization All HCL arrays and types (NFS, iSCSI, FC) Custom descriptions and tagging for tiers Compliance status is a simple binary report
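A minimal sketch of the profile-compliance check described above, assuming a profile is just a set of required capabilities matched against what a datastore advertises through VASA (the datastore names and capability labels are invented for illustration):

```python
# Illustrative only: profile-driven placement reduced to a set-membership test.
datastore_capabilities = {
    "ds-gold-01":   {"replicated", "thin", "ssd-tier"},
    "ds-bronze-01": {"thin"},
}

vm_profiles = {
    "sql-prod": {"replicated", "ssd-tier"},   # capabilities the VM's tier requires
    "dev-box":  {"thin"},
}

def compliant(profile, datastore):
    # Compliance is binary: the datastore either advertises every required
    # capability or it does not.
    return profile <= datastore_capabilities[datastore]

print(compliant(vm_profiles["sql-prod"], "ds-gold-01"))    # True
print(compliant(vm_profiles["sql-prod"], "ds-bronze-01"))  # False
```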
Native VMware Thin Provisioning VMware ESX 4 allocates storage in 1 MB chunks as capacity is used Similar support enabled for virtual disks on NFS in VI 3 Thin provisioning existed for block, could be enabled on the command line in VI 3 Present in VMware desktop products vSphere 4 fully supports and integrates thin provisioning Every version/license includes thin provisioning Allows thick-to-thin conversion during Storage VMotion In-array thin provisioning also supported (we’ll get to that…)
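The allocate-on-first-write behavior can be sketched in a few lines of Python; the 1 MB chunk size comes from the slide above, while the class and variable names are illustrative:

```python
# Conceptual sketch of thin provisioning: capacity is only consumed in
# 1 MB chunks the first time a chunk is written.
CHUNK = 1 * 1024 * 1024  # 1 MB allocation unit

class ThinDisk:
    def __init__(self, virtual_size):
        self.virtual_size = virtual_size   # what the guest OS sees
        self.allocated = set()             # chunk indexes actually backed by storage

    def write(self, offset, length):
        first = offset // CHUNK
        last = (offset + length - 1) // CHUNK
        self.allocated.update(range(first, last + 1))

    @property
    def consumed_bytes(self):
        return len(self.allocated) * CHUNK

disk = ThinDisk(virtual_size=100 * 1024**3)   # guest sees a 100 GB disk
disk.write(offset=0, length=4096)             # one 4 KB write...
print(disk.consumed_bytes)                    # ...consumes a single 1 MB chunk: 1048576
```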
Four Types of VMware ESX Volumes Note: FT is not supported What will your array do? VAAI helps… Friendly to on-array thin provisioning
Storage Allocation and Thin Provisioning VMware tests show no performance impact from thin provisioning after zeroing
Pluggable Storage Architecture:Native Multipathing VMware ESX includes multipathing built in Basic native multipathing (NMP) is round-robin fail-over only – it will not load balance I/O across multiple paths or make more intelligent decisions about which paths to use Pluggable Storage Architecture (PSA) VMware NMP Third-Party MPP VMware SATP Third-Party SATP VMware PSP Third-Party PSP
Pluggable Storage Architecture: PSP and SATP vSphere 4 Pluggable Storage Architecture allows third-party developers to replace ESX’s storage I/O stack ESX Enterprise+ Only There are two classes of third-party plug-ins: Path-selection plug-ins (PSPs) optimize the choice of which path to use, ideal for active/passive type arrays Storage array type plug-ins (SATPs) allow load balancing across multiple paths in addition to path selection for active/active arrays EMC PowerPath/VE for vSphere does everything
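As a rough illustration of what a path-selection plug-in decides, here is a toy round-robin policy in Python, loosely in the spirit of VMW_PSP_RR; the path names, the per-path I/O count, and the interface are all invented and do not represent the real ESX plug-in API:

```python
# Toy path-selection policy: rotate among healthy paths every N I/Os.
class RoundRobinPSP:
    def __init__(self, paths, ios_per_path=1000):
        self.paths = paths                 # e.g. ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]
        self.ios_per_path = ios_per_path   # switch paths after this many I/Os
        self.current = 0
        self.count = 0

    def select_path(self, active_paths):
        # Skip dead paths (in a real stack, path state comes from the SATP).
        while self.paths[self.current] not in active_paths:
            self.current = (self.current + 1) % len(self.paths)
            self.count = 0
        path = self.paths[self.current]
        self.count += 1
        if self.count >= self.ios_per_path:
            self.current = (self.current + 1) % len(self.paths)
            self.count = 0
        return path

psp = RoundRobinPSP(["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"], ios_per_path=2)
active = {"vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"}
print([psp.select_path(active) for _ in range(6)])  # alternates every 2 I/Os
```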
Storage Array Type Plug-ins (SATP) ESX native approaches Active/Passive Active/Active Pseudo Active Storage Array Type Plug-Ins VMW_SATP_LOCAL – Generic local direct-attached storage VMW_SATP_DEFAULT_AA – Generic for active/active arrays VMW_SATP_DEFAULT_AP – Generic for active/passive arrays VMW_SATP_LSI – LSI/NetApp arrays from Dell, HDS, IBM, Oracle, SGI VMW_SATP_SVC – IBM SVC-based systems (SVC, V7000, Actifio) VMW_SATP_ALUA – Asymmetric Logical Unit Access-compliant arrays VMW_SATP_CX – EMC/Dell CLARiiON  and Celerra (also VMW_SATP_ALUA_CX) VMW_SATP_SYMM – EMC Symmetrix DMX-3/DMX-4/VMAX, Invista VMW_SATP_INV – EMC Invista and VPLEX VMW_SATP_EQL – Dell EqualLogic systems Also, EMC PowerPath and HDS HDLM and vendor-unique plugins not detailed in the HCL
Path Selection Plug-ins (PSP) VMW_PSP_MRU – Most-Recently Used (MRU) – Supports hundreds of storage arrays VMW_PSP_FIXED – Fixed - Supports hundreds of storage arrays VMW_PSP_RR – Round-Robin - Supports dozens of storage arrays DELL_PSP_EQL_ROUTED – Dell EqualLogic iSCSI arrays Also, EMC PowerPath and other vendor unique
vStorage APIs for Array Integration (VAAI) VAAI integrates advanced storage features with VMware Basic requirements: A capable storage array ESX 4.1+ A software plug-in for ESX Not every implementation is equal Block zeroing can be very demanding for some arrays Zeroing might conflict with full copy
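To show why offload matters, here is a conceptual Python comparison of a host-driven clone versus an array-offloaded “full copy”; real VAAI uses SCSI primitives such as XCOPY and WRITE SAME, so this models only the I/O savings, not the protocol:

```python
# Without offload, the host reads and writes every block over the fabric;
# with offload, the host issues one command and the array moves the data.

def clone_without_vaai(array, src_lun, dst_lun, blocks):
    io_count = 0
    for b in range(blocks):
        data = array[src_lun][b]     # read travels host <-> array
        array[dst_lun][b] = data     # write travels host <-> array
        io_count += 2
    return io_count                  # 2 * blocks host-side I/Os

def clone_with_vaai(array, src_lun, dst_lun, blocks):
    # One "extended copy"-style request; the array copies internally.
    array[dst_lun][:blocks] = array[src_lun][:blocks]
    return 1                         # a single command from the host

array = {"lun0": list(range(1024)), "lun1": [0] * 1024}
print(clone_without_vaai(array, "lun0", "lun1", 1024))  # 2048 host I/Os
print(clone_with_vaai(array, "lun0", "lun1", 1024))     # 1 offloaded command
```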
VAAI Support Matrix
vSphere 5: VAAI 2 Block (FC/iSCSI) T10 compliance is improved - No plug-in needed for many arrays File (NFS) NAS plugins come from vendors, not VMware
vSphere 5: vSphere Storage APIs – Storage Awareness (VASA) VASA is a communication mechanism for vCenter to detect array capabilities RAID level, thin provisioning state, replication state, etc. Two locations in vCenter Server: “System-Defined Capabilities” – per-datastore descriptors Storage views and SMS APIs
Storage I/O Control (SIOC) Storage I/O Control (SIOC) is all about fairness: Prioritization and QoS for VMFS Re-distributes unused I/O resources Minimizes “noisy neighbor” issues ESX can provide quality of service for storage access to virtual machines Enabled per-datastore When a pre-defined latency threshold is exceeded (default 30 ms), it begins to throttle VM I/O Monitors queues on storage arrays and per-VM I/O latency But: vSphere 4.1 with Enterprise Plus Disabled by default but highly recommended! Block storage only (FC or iSCSI) Whole-LUN only (no extents) No RDM
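The throttling behavior can be approximated with a short sketch: when sampled latency crosses the threshold, shrink the device queue and split the remaining slots by share value. The numbers and the halving rule below are illustrative assumptions, not VMware’s actual algorithm:

```python
# SIOC-style congestion control, heavily simplified.
LATENCY_THRESHOLD_MS = 30   # default trigger latency from the slide above

def adjust_queue_depth(current_depth, observed_latency_ms, min_depth=4, max_depth=64):
    if observed_latency_ms > LATENCY_THRESHOLD_MS:
        return max(min_depth, current_depth // 2)    # back off under congestion
    return min(max_depth, current_depth + 1)         # slowly recover otherwise

def divide_slots(queue_depth, vm_shares):
    # Split the available device-queue slots in proportion to per-VM shares.
    total = sum(vm_shares.values())
    return {vm: max(1, queue_depth * s // total) for vm, s in vm_shares.items()}

depth = 64
for latency in (12, 45, 50, 20):                     # sampled datastore latency in ms
    depth = adjust_queue_depth(depth, latency)
print(depth)                                         # shrinks after the 45/50 ms samples
print(divide_slots(depth, {"prod-db": 2000, "test-vm": 500}))
```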
Storage I/O Control in Action
Virtual Machine Mobility Moving virtual machines is the next big challenge Physical servers are difficult to move around and between data centers Pent-up desire to move virtual machines from host to host and even to different physical locations VMware DRS would move live VMs around the data center The “Holy Grail” for server managers Requires networked storage (SAN/NAS)
vSphere 5: Storage DRS Datastore clusters aggregate multiple datastores VM and VMDK placement metrics: Space - Capacity utilization and availability (80% default) Performance – I/O latency (15 ms default) When thresholds are crossed, vSphere will rebalance all VMs and VMDKs according to Affinity Rules Storage DRS works with either VMFS/block or NFS datastores Maintenance Mode evacuates a datastore
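A small sketch of the Storage DRS decision using the default thresholds quoted above (80% space, 15 ms latency); the datastore names, sizes, and the “most free space wins” placement rule are assumptions for illustration:

```python
# Rebalance a datastore cluster when a member crosses either threshold.
SPACE_THRESHOLD = 0.80
LATENCY_THRESHOLD_MS = 15.0

datastores = {
    "ds-gold-01": {"capacity_gb": 2048, "used_gb": 1780, "latency_ms": 9.0},
    "ds-gold-02": {"capacity_gb": 2048, "used_gb": 900,  "latency_ms": 22.0},
    "ds-gold-03": {"capacity_gb": 2048, "used_gb": 700,  "latency_ms": 6.0},
}

def needs_rebalance(ds):
    util = ds["used_gb"] / ds["capacity_gb"]
    return util > SPACE_THRESHOLD or ds["latency_ms"] > LATENCY_THRESHOLD_MS

def best_target(datastores):
    # Prefer the datastore with the most free space among those under both thresholds.
    ok = {n: d for n, d in datastores.items() if not needs_rebalance(d)}
    return max(ok, key=lambda n: ok[n]["capacity_gb"] - ok[n]["used_gb"])

print([n for n, d in datastores.items() if needs_rebalance(d)])  # ['ds-gold-01', 'ds-gold-02']
print(best_target(datastores))                                   # 'ds-gold-03'
```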
What’s next: Lunch
Expanding the Conversation Converged I/O, storage virtualization and new storage architectures
This Hour’s Focus: Non-Hypervisor Storage Features Converged networking Storage protocols (FC, iSCSI, NFS) Enhanced Ethernet (DCB, CNA, FCoE) I/O virtualization Storage for virtual storage Tiered storage and SSD/flash Specialized arrays Virtual storage appliances (VSA)
Introduction: Converging on Convergence Data centers rely more on standard ingredients What will connect these systems together? IP and Ethernet are logical choices
Drivers of Convergence
Which Storage Protocol to Use? Server admins don’t know/care about storage protocols and will want whatever they are familiar with Storage admins have preconceived notions about the merits of various options: FC is fast, low-latency, low-CPU, expensive NFS is slow, high-latency, high-CPU, cheap iSCSI is medium, medium, medium, medium
vSphere Protocol Performance
vSphere CPU Utilization
vSphere Latency
Microsoft Hyper-V Performance
Which Storage Protocols Do People Use? Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
The Upshot: It Doesn’t Matter Use what you have and are familiar with! FC, iSCSI, NFS all work well Most enterprise production VM data is on FC, many smaller shops using iSCSI or NFS Either/or? - 50% use a combination For IP storage Network hardware and config matter more than protocol (NFS, iSCSI, FC) Use a separate network or VLAN Use a fast switch and consider jumbo frames For FC storage 8 Gb FC/FCoE is awesome for VMs Look into NPIV Look for VAAI
The Storage Network Roadmap
Serious Performance 10 GbE is faster than most storage interconnects iSCSI and FCoE both can perform at wire-rate
Latency is Critical Too Latency is even more critical in shared storage FCoE with 10 GbE can achieve well over 500,000 4K IOPS (if the array and client can handle it!)
Benefits Beyond Speed 10 GbE takes performance off the table (for now…) But performance is only half the story: Simplified connectivity New network architecture Virtual machine mobility 1 GbE Cluster 4G FC Storage 1 GbE Network 10 GbE (Plus 6 Gbps extra capacity)
Enhanced 10 Gb Ethernet
SCSI expects a lossless transport with guaranteed delivery
Ethernet expects higher-level protocols to take care of issues
“Data Center Bridging” is a project to create lossless Ethernet
AKA Data Center Ethernet (DCE), Converged Enhanced Ethernet (CEE)
iSCSI and NFS are happy with or without DCB
DCB is a work in progress
FCoE requires PFC (Qbb or PAUSE), DCBX (Qaz)
QCN (Qau) is still not ready
Key DCB standards: Priority Flow Control (PFC) 802.1Qbb; Congestion Management (QCN) 802.1Qau; Bandwidth Management (ETS) 802.1Qaz; PAUSE 802.3x; Data Center Bridging Exchange Protocol (DCBX); Traffic Classes 802.1p/Q
FCoE CNAs for VMware ESX No Intel (OpenFCoE) or Broadcom support in vSphere 4…
vSphere 5: FCoE Software Initiator Dramatically expands the FCoE footprint from just a few CNAs Based on Intel OpenFCoE? – Shows as “Intel Corporation FCoE Adapter”
I/O Virtualization: Virtual I/O Extends I/O capabilities beyond physical connections (PCIe slots, etc) Increases flexibility and mobility of VMs and blades Reduces hardware, cabling, and cost for high-I/O machines Increases density of blades and VMs
I/O Virtualization: IOMMU (Intel VT-d) IOMMU gives devices direct access to system memory AMD IOMMU or Intel VT-d Similar to AGP GART VMware VMDirectPath leverages IOMMU Allows VMs to access devices directly May not improve real-world performance System Memory IOMMU MMU I/O Device CPU
Does SSD Change the Equation? RAM and flash promise high performance… But you have to use it right
Flash is Not A Disk Flash must be carefully engineered and integrated Cache and intelligence to offset write penalty Automatic block-level data placement to maximize ROI IF a system can do this, everything else improves Overall system performance Utilization of disk capacity Space and power efficiency Even system cost can improve!
The Tiered Storage Cliché Cost and Performance Optimized for Savings!

Más contenido relacionado

La actualidad más candente

IBM SONAS and VMware vSphere 5 scale-out cloud foundation: A reference guide ...
IBM SONAS and VMware vSphere 5 scale-out cloud foundation: A reference guide ...IBM SONAS and VMware vSphere 5 scale-out cloud foundation: A reference guide ...
IBM SONAS and VMware vSphere 5 scale-out cloud foundation: A reference guide ...IBM India Smarter Computing
 
Safe checkup - vmWare vSphere 5.0 22feb2012
Safe checkup - vmWare vSphere 5.0  22feb2012Safe checkup - vmWare vSphere 5.0  22feb2012
Safe checkup - vmWare vSphere 5.0 22feb2012M.Ela International Srl
 
White paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware EnvironmentsWhite paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware EnvironmentsthinkASG
 
Oracle on vSphere best practices
Oracle on vSphere best practices Oracle on vSphere best practices
Oracle on vSphere best practices Filip Verloy
 
Q2 Sirius Lunch & Learn - vSphere 6 & Windows 2003 EoL
Q2 Sirius Lunch & Learn - vSphere 6 & Windows 2003 EoLQ2 Sirius Lunch & Learn - vSphere 6 & Windows 2003 EoL
Q2 Sirius Lunch & Learn - vSphere 6 & Windows 2003 EoLAndrew Miller
 
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp Storage
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp StorageVMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp Storage
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp StorageVMworld
 
Accelerating virtualized Oracle 12c performance with vSphere 5.5 advanced fea...
Accelerating virtualized Oracle 12c performance with vSphere 5.5 advanced fea...Accelerating virtualized Oracle 12c performance with vSphere 5.5 advanced fea...
Accelerating virtualized Oracle 12c performance with vSphere 5.5 advanced fea...Principled Technologies
 
VMware vSphere technical presentation
VMware vSphere technical presentationVMware vSphere technical presentation
VMware vSphere technical presentationaleyeldean
 
VMware HA deep Dive
VMware HA deep DiveVMware HA deep Dive
VMware HA deep DiveEric Sloof
 
Presentazione Corso VMware vSphere 6.5
Presentazione Corso VMware vSphere 6.5Presentazione Corso VMware vSphere 6.5
Presentazione Corso VMware vSphere 6.5PRAGMA PROGETTI
 
Efficient Data Protection – Backup in VMware environments
Efficient Data Protection – Backup in VMware environmentsEfficient Data Protection – Backup in VMware environments
Efficient Data Protection – Backup in VMware environmentsKingfin Enterprises Limited
 
VMworld 2013: Part 1: Getting Started with vCenter Orchestrator
VMworld 2013: Part 1: Getting Started with vCenter Orchestrator VMworld 2013: Part 1: Getting Started with vCenter Orchestrator
VMworld 2013: Part 1: Getting Started with vCenter Orchestrator VMworld
 
Configuring v sphere 5 profile driven storage
Configuring v sphere 5 profile driven storageConfiguring v sphere 5 profile driven storage
Configuring v sphere 5 profile driven storagevirtualsouthwest
 
What’s new in Veeam Availability Suite v9
What’s new in Veeam Availability Suite v9What’s new in Veeam Availability Suite v9
What’s new in Veeam Availability Suite v9Digicomp Academy AG
 
VMware Overview
VMware OverviewVMware Overview
VMware OverviewMadhu Bala
 
EMC FAST VP for Unified Storage Systems
EMC FAST VP for Unified Storage Systems EMC FAST VP for Unified Storage Systems
EMC FAST VP for Unified Storage Systems EMC
 
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011IBM Sverige
 
VMworld 2013: VMware vSphere High Availability - What's New and Best Practices
VMworld 2013: VMware vSphere High Availability - What's New and Best PracticesVMworld 2013: VMware vSphere High Availability - What's New and Best Practices
VMworld 2013: VMware vSphere High Availability - What's New and Best PracticesVMworld
 
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущееxen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущееDenis Gundarev
 

La actualidad más candente (20)

IBM SONAS and VMware vSphere 5 scale-out cloud foundation: A reference guide ...
IBM SONAS and VMware vSphere 5 scale-out cloud foundation: A reference guide ...IBM SONAS and VMware vSphere 5 scale-out cloud foundation: A reference guide ...
IBM SONAS and VMware vSphere 5 scale-out cloud foundation: A reference guide ...
 
Safe checkup - vmWare vSphere 5.0 22feb2012
Safe checkup - vmWare vSphere 5.0  22feb2012Safe checkup - vmWare vSphere 5.0  22feb2012
Safe checkup - vmWare vSphere 5.0 22feb2012
 
White paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware EnvironmentsWhite paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware Environments
 
Oracle on vSphere best practices
Oracle on vSphere best practices Oracle on vSphere best practices
Oracle on vSphere best practices
 
RHT Design for Security
RHT Design for SecurityRHT Design for Security
RHT Design for Security
 
Q2 Sirius Lunch & Learn - vSphere 6 & Windows 2003 EoL
Q2 Sirius Lunch & Learn - vSphere 6 & Windows 2003 EoLQ2 Sirius Lunch & Learn - vSphere 6 & Windows 2003 EoL
Q2 Sirius Lunch & Learn - vSphere 6 & Windows 2003 EoL
 
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp Storage
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp StorageVMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp Storage
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp Storage
 
Accelerating virtualized Oracle 12c performance with vSphere 5.5 advanced fea...
Accelerating virtualized Oracle 12c performance with vSphere 5.5 advanced fea...Accelerating virtualized Oracle 12c performance with vSphere 5.5 advanced fea...
Accelerating virtualized Oracle 12c performance with vSphere 5.5 advanced fea...
 
VMware vSphere technical presentation
VMware vSphere technical presentationVMware vSphere technical presentation
VMware vSphere technical presentation
 
VMware HA deep Dive
VMware HA deep DiveVMware HA deep Dive
VMware HA deep Dive
 
Presentazione Corso VMware vSphere 6.5
Presentazione Corso VMware vSphere 6.5Presentazione Corso VMware vSphere 6.5
Presentazione Corso VMware vSphere 6.5
 
Efficient Data Protection – Backup in VMware environments
Efficient Data Protection – Backup in VMware environmentsEfficient Data Protection – Backup in VMware environments
Efficient Data Protection – Backup in VMware environments
 
VMworld 2013: Part 1: Getting Started with vCenter Orchestrator
VMworld 2013: Part 1: Getting Started with vCenter Orchestrator VMworld 2013: Part 1: Getting Started with vCenter Orchestrator
VMworld 2013: Part 1: Getting Started with vCenter Orchestrator
 
Configuring v sphere 5 profile driven storage
Configuring v sphere 5 profile driven storageConfiguring v sphere 5 profile driven storage
Configuring v sphere 5 profile driven storage
 
What’s new in Veeam Availability Suite v9
What’s new in Veeam Availability Suite v9What’s new in Veeam Availability Suite v9
What’s new in Veeam Availability Suite v9
 
VMware Overview
VMware OverviewVMware Overview
VMware Overview
 
EMC FAST VP for Unified Storage Systems
EMC FAST VP for Unified Storage Systems EMC FAST VP for Unified Storage Systems
EMC FAST VP for Unified Storage Systems
 
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011
 
VMworld 2013: VMware vSphere High Availability - What's New and Best Practices
VMworld 2013: VMware vSphere High Availability - What's New and Best PracticesVMworld 2013: VMware vSphere High Availability - What's New and Best Practices
VMworld 2013: VMware vSphere High Availability - What's New and Best Practices
 
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущееxen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
 

Similar a Storage for Virtual Environments 2011 R2

Rearchitecting Storage for Server Virtualization
Rearchitecting Storage for Server VirtualizationRearchitecting Storage for Server Virtualization
Rearchitecting Storage for Server VirtualizationStephen Foskett
 
Virtualization Changes Storage
Virtualization Changes StorageVirtualization Changes Storage
Virtualization Changes StorageStephen Foskett
 
Storage Virtualization Introduction
Storage Virtualization IntroductionStorage Virtualization Introduction
Storage Virtualization IntroductionStephen Foskett
 
What’s new in vSphere 5 and vCenter Server Heartbeat – Customer Presentation
What’s new in vSphere 5 and vCenter Server Heartbeat – Customer PresentationWhat’s new in vSphere 5 and vCenter Server Heartbeat – Customer Presentation
What’s new in vSphere 5 and vCenter Server Heartbeat – Customer PresentationSuministros Obras y Sistemas
 
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan ShettyTrack 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan ShettyEMC Forum India
 
Presentazione HPE @ VMUGIT UserCon 2015
Presentazione HPE @ VMUGIT UserCon 2015Presentazione HPE @ VMUGIT UserCon 2015
Presentazione HPE @ VMUGIT UserCon 2015VMUG IT
 
HDS and VMware vSphere Virtual Volumes (VVol)
HDS and VMware vSphere Virtual Volumes (VVol) HDS and VMware vSphere Virtual Volumes (VVol)
HDS and VMware vSphere Virtual Volumes (VVol) Hitachi Vantara
 
Presentation integration vmware with emc storage
Presentation   integration vmware with emc storagePresentation   integration vmware with emc storage
Presentation integration vmware with emc storagesolarisyourep
 
VMWARE Professionals - Availability and Resiliency
VMWARE Professionals -  Availability and ResiliencyVMWARE Professionals -  Availability and Resiliency
VMWARE Professionals - Availability and ResiliencyPaulo Freitas
 
It's the End of Data Storage As We Know It (And I Feel Fine)
It's the End of Data Storage As We Know It (And I Feel Fine)It's the End of Data Storage As We Know It (And I Feel Fine)
It's the End of Data Storage As We Know It (And I Feel Fine)Stephen Foskett
 
Storage Changes in VMware vSphere 4.1
Storage Changes in VMware vSphere 4.1Storage Changes in VMware vSphere 4.1
Storage Changes in VMware vSphere 4.1Scott Lowe
 
Vsphere 4-partner-training180
Vsphere 4-partner-training180Vsphere 4-partner-training180
Vsphere 4-partner-training180Suresh Kumar
 
Iocg Whats New In V Sphere
Iocg Whats New In V SphereIocg Whats New In V Sphere
Iocg Whats New In V SphereAnne Achleman
 
VMware vSphere Storage Enhancements
VMware vSphere Storage EnhancementsVMware vSphere Storage Enhancements
VMware vSphere Storage EnhancementsAnne Achleman
 
A Winning Combination: IBM Storage and VMware
A Winning Combination: IBM Storage and VMwareA Winning Combination: IBM Storage and VMware
A Winning Combination: IBM Storage and VMwarePaula Koziol
 
V sphere 5.1-storage-features-&-futures
V sphere 5.1-storage-features-&-futuresV sphere 5.1-storage-features-&-futures
V sphere 5.1-storage-features-&-futuressubtitle
 
Where Does VMware Integration Occur?
Where Does VMware Integration Occur?Where Does VMware Integration Occur?
Where Does VMware Integration Occur?Scott Lowe
 

Similar a Storage for Virtual Environments 2011 R2 (20)

Rearchitecting Storage for Server Virtualization
Rearchitecting Storage for Server VirtualizationRearchitecting Storage for Server Virtualization
Rearchitecting Storage for Server Virtualization
 
Virtualization Changes Storage
Virtualization Changes StorageVirtualization Changes Storage
Virtualization Changes Storage
 
Storage Virtualization Introduction
Storage Virtualization IntroductionStorage Virtualization Introduction
Storage Virtualization Introduction
 
What’s new in vSphere 5 and vCenter Server Heartbeat – Customer Presentation
What’s new in vSphere 5 and vCenter Server Heartbeat – Customer PresentationWhat’s new in vSphere 5 and vCenter Server Heartbeat – Customer Presentation
What’s new in vSphere 5 and vCenter Server Heartbeat – Customer Presentation
 
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan ShettyTrack 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
 
Presentazione HPE @ VMUGIT UserCon 2015
Presentazione HPE @ VMUGIT UserCon 2015Presentazione HPE @ VMUGIT UserCon 2015
Presentazione HPE @ VMUGIT UserCon 2015
 
vSphere
vSpherevSphere
vSphere
 
HDS and VMware vSphere Virtual Volumes (VVol)
HDS and VMware vSphere Virtual Volumes (VVol) HDS and VMware vSphere Virtual Volumes (VVol)
HDS and VMware vSphere Virtual Volumes (VVol)
 
Presentation integration vmware with emc storage
Presentation   integration vmware with emc storagePresentation   integration vmware with emc storage
Presentation integration vmware with emc storage
 
VMWARE Professionals - Availability and Resiliency
VMWARE Professionals -  Availability and ResiliencyVMWARE Professionals -  Availability and Resiliency
VMWARE Professionals - Availability and Resiliency
 
3487570
34875703487570
3487570
 
It's the End of Data Storage As We Know It (And I Feel Fine)
It's the End of Data Storage As We Know It (And I Feel Fine)It's the End of Data Storage As We Know It (And I Feel Fine)
It's the End of Data Storage As We Know It (And I Feel Fine)
 
Storage Changes in VMware vSphere 4.1
Storage Changes in VMware vSphere 4.1Storage Changes in VMware vSphere 4.1
Storage Changes in VMware vSphere 4.1
 
Vsphere 4-partner-training180
Vsphere 4-partner-training180Vsphere 4-partner-training180
Vsphere 4-partner-training180
 
Iocg Whats New In V Sphere
Iocg Whats New In V SphereIocg Whats New In V Sphere
Iocg Whats New In V Sphere
 
VMware vSphere Storage Enhancements
VMware vSphere Storage EnhancementsVMware vSphere Storage Enhancements
VMware vSphere Storage Enhancements
 
A Winning Combination: IBM Storage and VMware
A Winning Combination: IBM Storage and VMwareA Winning Combination: IBM Storage and VMware
A Winning Combination: IBM Storage and VMware
 
V sphere 5.1-storage-features-&-futures
V sphere 5.1-storage-features-&-futuresV sphere 5.1-storage-features-&-futures
V sphere 5.1-storage-features-&-futures
 
Where Does VMware Integration Occur?
Where Does VMware Integration Occur?Where Does VMware Integration Occur?
Where Does VMware Integration Occur?
 
DBA Fundamentals VC
DBA Fundamentals VCDBA Fundamentals VC
DBA Fundamentals VC
 

Más de Stephen Foskett

What’s the Deal with Containers, Anyway?
What’s the Deal with Containers, Anyway?What’s the Deal with Containers, Anyway?
What’s the Deal with Containers, Anyway?Stephen Foskett
 
Out of the Lab and Into the Datacenter - Which Technologies Are Ready?
Out of the Lab and Into the Datacenter - Which Technologies Are Ready?Out of the Lab and Into the Datacenter - Which Technologies Are Ready?
Out of the Lab and Into the Datacenter - Which Technologies Are Ready?Stephen Foskett
 
The Four Horsemen of Storage System Performance
The Four Horsemen of Storage System PerformanceThe Four Horsemen of Storage System Performance
The Four Horsemen of Storage System PerformanceStephen Foskett
 
Gestalt IT - Why It’s Time to Stop Thinking In Terms of Silos
Gestalt IT - Why It’s Time to Stop Thinking In Terms of SilosGestalt IT - Why It’s Time to Stop Thinking In Terms of Silos
Gestalt IT - Why It’s Time to Stop Thinking In Terms of SilosStephen Foskett
 
"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011
"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011
"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011Stephen Foskett
 
State of the Art Thin Provisioning
State of the Art Thin ProvisioningState of the Art Thin Provisioning
State of the Art Thin ProvisioningStephen Foskett
 
Eleven Essential Attributes For Email Archiving
Eleven Essential Attributes For Email ArchivingEleven Essential Attributes For Email Archiving
Eleven Essential Attributes For Email ArchivingStephen Foskett
 
Email Archiving Solutions Whats The Difference
Email Archiving Solutions Whats The DifferenceEmail Archiving Solutions Whats The Difference
Email Archiving Solutions Whats The DifferenceStephen Foskett
 
Deep Dive Into Email Archiving Products
Deep Dive Into Email Archiving ProductsDeep Dive Into Email Archiving Products
Deep Dive Into Email Archiving ProductsStephen Foskett
 
Extreme Tiered Storage Flash, Disk, And Cloud
Extreme Tiered Storage Flash, Disk, And CloudExtreme Tiered Storage Flash, Disk, And Cloud
Extreme Tiered Storage Flash, Disk, And CloudStephen Foskett
 
The Right Approach To Cloud Storage
The Right Approach To Cloud StorageThe Right Approach To Cloud Storage
The Right Approach To Cloud StorageStephen Foskett
 
Storage Decisions Nirvanix Introduction
Storage Decisions Nirvanix IntroductionStorage Decisions Nirvanix Introduction
Storage Decisions Nirvanix IntroductionStephen Foskett
 
Solve 3 Enterprise Storage Problems Today
Solve 3 Enterprise Storage Problems TodaySolve 3 Enterprise Storage Problems Today
Solve 3 Enterprise Storage Problems TodayStephen Foskett
 

Más de Stephen Foskett (17)

The Zen of Storage
The Zen of StorageThe Zen of Storage
The Zen of Storage
 
What’s the Deal with Containers, Anyway?
What’s the Deal with Containers, Anyway?What’s the Deal with Containers, Anyway?
What’s the Deal with Containers, Anyway?
 
Out of the Lab and Into the Datacenter - Which Technologies Are Ready?
Out of the Lab and Into the Datacenter - Which Technologies Are Ready?Out of the Lab and Into the Datacenter - Which Technologies Are Ready?
Out of the Lab and Into the Datacenter - Which Technologies Are Ready?
 
The Four Horsemen of Storage System Performance
The Four Horsemen of Storage System PerformanceThe Four Horsemen of Storage System Performance
The Four Horsemen of Storage System Performance
 
Gestalt IT - Why It’s Time to Stop Thinking In Terms of Silos
Gestalt IT - Why It’s Time to Stop Thinking In Terms of SilosGestalt IT - Why It’s Time to Stop Thinking In Terms of Silos
Gestalt IT - Why It’s Time to Stop Thinking In Terms of Silos
 
"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011
"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011
"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011
 
State of the Art Thin Provisioning
State of the Art Thin ProvisioningState of the Art Thin Provisioning
State of the Art Thin Provisioning
 
Eleven Essential Attributes For Email Archiving
Eleven Essential Attributes For Email ArchivingEleven Essential Attributes For Email Archiving
Eleven Essential Attributes For Email Archiving
 
Email Archiving Solutions Whats The Difference
Email Archiving Solutions Whats The DifferenceEmail Archiving Solutions Whats The Difference
Email Archiving Solutions Whats The Difference
 
Storage School 1
Storage School 1Storage School 1
Storage School 1
 
Storage School 2
Storage School 2Storage School 2
Storage School 2
 
Deep Dive Into Email Archiving Products
Deep Dive Into Email Archiving ProductsDeep Dive Into Email Archiving Products
Deep Dive Into Email Archiving Products
 
Extreme Tiered Storage Flash, Disk, And Cloud
Extreme Tiered Storage Flash, Disk, And CloudExtreme Tiered Storage Flash, Disk, And Cloud
Extreme Tiered Storage Flash, Disk, And Cloud
 
The Right Approach To Cloud Storage
The Right Approach To Cloud StorageThe Right Approach To Cloud Storage
The Right Approach To Cloud Storage
 
Storage Decisions Nirvanix Introduction
Storage Decisions Nirvanix IntroductionStorage Decisions Nirvanix Introduction
Storage Decisions Nirvanix Introduction
 
Solve 3 Enterprise Storage Problems Today
Solve 3 Enterprise Storage Problems TodaySolve 3 Enterprise Storage Problems Today
Solve 3 Enterprise Storage Problems Today
 
Cloud Storage Benefits
Cloud Storage BenefitsCloud Storage Benefits
Cloud Storage Benefits
 

Último

Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Farhan Tariq
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxLoriGlavin3
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfNeo4j
 
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...panagenda
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxLoriGlavin3
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Mark Goldstein
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfpanagenda
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch TuesdayIvanti
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesThousandEyes
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxLoriGlavin3
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersRaghuram Pandurangan
 

Último (20)

Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdf
 
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information Developers
 

Storage for Virtual Environments 2011 R2

  • 1.
  • 2.
  • 5. This Hour’s Focus:What Virtualization Does Introducing storage and server virtualization The future of virtualization The virtual datacenter Virtualization confounds storage Three pillars of performance Other issues Storage features for virtualization What’s new in VMware
  • 6. Virtualization of Storage, Serverand Network Storage has been stuck in the Stone Age since the Stone Age! Fake disks, fake file systems, fixed allocation Little integration and no communication Virtualization is a bridge to the future Maintains functionality for existing apps Improves flexibility and efficiency
  • 7. A Look at the Future
  • 8. Server Virtualization is On the Rise Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
  • 9. Server Virtualization is a Pile of Lies! What the OS thinks it’s running on… What the OS is actually running on… Physical Hardware VMkernel Binary Translation, Paravirtualization, Hardware Assist Guest OS VM Guest OS VM Scheduler and Memory Allocator vNIC vSwitch NIC Driver vSCSI/PV VMDK VMFS I/O Driver
  • 10. And It Gets Worse Outside the Server!
  • 11. The Virtual Data Center of Tomorrow Management Applications The Cloud™ Applications Legacy Applications Applications Applications CPU Network Backup Storage
  • 12. The Real Future of IT Infrastructure Orchestration Software
  • 13. Three Pillars of VM Performance
  • 14. Confounding Storage Presentation Storage virtualization is nothing new… RAID and NAS virtualized disks Caching arrays and SANs masked volumes New tricks: Thin provisioning, automated tiering, array virtualization But, we wrongly assume this is where it ends Volume managers and file systems Databases Now we have hypervisors virtualizing storage VMFS/VMDK = storage array? Virtual storage appliances (VSAs)
  • 15. Begging for Converged I/O 4G FC Storage 1 GbE Network 1 GbE Cluster How many I/O ports and cables does a server need? Typical server has 4 ports, 2 used Application servers have 4-8 ports used! Do FC and InfiniBand make sense with 10/40/100 GbE? When does commoditization hit I/O? Ethernet momentum is unbeatable Blades and hypervisors demand greater I/O integration and flexibility Other side of the coin – need to virtualize I/O
  • 16. Driving Storage Virtualization Server virtualization demands storage features Data protection with snapshots and replication Allocation efficiency with thin provisioning+ Performance and cost tweaking with automated sub-LUN tiering Improved locking and resource sharing Flexibility is the big one Must be able to create, use, modify and destroy storage on demand Must move storage logically and physically Must allow OS to move too
  • 17. “The I/O Blender” Demands New Architectures Shared storage is challenging to implement Storage arrays “guess” what’s coming next based on allocation (LUN) taking advantage of sequential performance Server virtualization throws I/O into a blender – All I/O is now random I/O!
  • 18. Server Virtualization Requires SAN and NAS Server virtualization has transformed the data center and storage requirements VMware is the #1 driver of SAN adoption today! 60% of virtual server storage is on SAN or NAS 86% have implemented some server virtualization Server virtualization has enabled and demanded centralization and sharing of storage on arrays like never before! Source: ESG, 2008
  • 19. Keys to the Future For Storage Folks Ye Olde Seminar Content!
  • 20. Primary Production Virtualization Platform Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
  • 21. Storage Features for Virtualization
  • 22. Which Features Are People Using? Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
  • 23. What’s New in vSphere 4 and 4.1 VMware vSphere 4 (AKA ESX/ESXi 4) is a major upgrade for storage Lots of new features like thin provisioning, PSA, any-to-any Storage VMotion, PVSCSI Massive performance upgrade (400k IOPS!) vSphere 4.1 is equally huge for storage Boot from SAN vStorage APIs for Array Integration (VAAI) Storage I/O control (SIOC)
  • 24. What’s New in vSphere 5 VMFS-5 – Scalability and efficiency improvements Storage DRS – Datastore clusters and improved load balancing Storage I/O Control – Cluster-wide and NFS support Profile-Driven Storage – Provisioning, compliance and monitoring FCoE Software Initiator iSCSI Initiator GUI Storage APIs – Storage Awareness (VASA) Storage APIs – Array Integration (VAAI 2) – Thin Stun, NFS, T10 Storage vMotion - Enhanced with mirror mode vSphere Storage Appliance (VSA) vSphere Replication – New in SRM
  • 25. And Then, There’s VDI… Virtual desktop infrastructure (VDI) takes everything we just worried about and amplifies it: Massive I/O crunches Huge duplication of data More wasted capacity More user visibility More backup trouble
  • 26. What’s next Vendor Showcase and Networking Break
  • 27. Technical Considerations - Configuring Storage for VMs The mechanics of presenting and using storage in virtualized environments
  • 28. This Hour’s Focus:Hypervisor Storage Features Storage vMotion VMFS Storage presentation: Shared, raw, NFS, etc. Thin provisioning Multipathing (VMware Pluggable Storage Architecture) VAAI and VASA Storage I/O control and storage DRS
  • 29. Storage vMotion Introduced in ESX 3 as “Upgrade vMotion” ESX 3.5 used a snapshot while the datastore was in motion vSphere 4 used changed-block tracking (CBT) and recursive passes vSphere 5 Mirror Mode mirrors writes to in-progress vMotions and also supports migration of vSphere snapshots and Linked Clones Can be offloaded for VAAI-Block (but not NFS)
  • 30. vSphere 5: What’s New in VMFS 5 Max VMDK size is still 2 TB – 512 bytes Virtual (non-passthru) RDM still limited to 2 TB Max LUNs per host is still 256
  • 31. Hypervisor Storage Options:Shared Storage The common/ workstation approach VMware: VMDK image in VMFS datastore Hyper-V: VHD image in CSV datastore Block storage (direct or FC/iSCSI SAN) Why? Traditional, familiar, common (~90%) Prime features (Storage VMotion, etc) Multipathing, load balancing, failover* But… Overhead of two storage stacks (5-8%) Harder to leverage storage features Often shares storage LUN and queue Difficult storage management VM Host Guest OS VMFS VMDK DAS or SAN Storage
  • 32. Hypervisor Storage Options:Shared Storage on NAS Skip VMFS and use NAS NFS or SMB is the datastore Wow! Simple – no SAN Multiple queues Flexible (on-the-fly changes) Simple snap and replicate* Enables full Vmotion Link aggregation (trunking) is possible But… Less familiar (ESX 3.0+) CPU load questions Limited to 8 NFS datastores (ESX default) Snapshot consistency for multiple VMDK VM Host Guest OS NAS Storage VMDK
  • 33. Hypervisor Storage Options:Guest iSCSI Skip VMFS and use iSCSI directly Access a LUN just like any physical server VMware ESX can even boot from iSCSI! Ok… Storage folks love it! Can be faster than ESX iSCSI Very flexible (on-the-fly changes) Guest can move and still access storage But… Less common to VM folks CPU load questions No Storage VMotion (but doesn’t need it) VM Host Guest OS iSCSI Storage LUN
  • 34. Hypervisor Storage Options:Raw Device Mapping (RDM) Guest VM’s access storage directly over iSCSI or FC VM’s can even boot from raw devices Hyper-V pass-through LUN is similar Great! Per-server queues for performance Easier measurement The only method for clustering Supports LUNs larger than 2 TB (60 TB passthru in vSphere 5!) But… Tricky VMotion and dynamic resource scheduling (DRS) No storage VMotion More management overhead Limited to 256 LUNs per data center VM Host Guest OS I/O Mapping File SAN Storage
  • 35. Hypervisor Storage Options:Direct I/O VMware ESX VMDirectPath - Guest VM’s access I/O hardware directly Leverages AMD IOMMU or Intel VT-d Great! Potential for native performance Just like RDM but better! But… No VMotion or Storage VMotion No ESX fault tolerance (FT) No ESX snapshots or VM suspend No device hot-add No performance benefit in the real world! VM Host Guest OS I/O Mapping File SAN Storage
  • 36. Which VMware Storage Method Performs Best? Mixed random I/O CPU cost per I/O VMFS, RDM (p), or RDM (v) Source: “Performance Characterization of VMFS and RDM Using a SAN”, VMware Inc.,ESX 3.5, 2008
  • 37. vSphere 5: Policy or Profile-Driven Storage. Allows storage tiers to be defined in vCenter based on SLA, performance, etc., and is used during provisioning, cloning, Storage vMotion, and Storage DRS. Leverages VASA for metrics and characterization; works with all HCL arrays and types (NFS, iSCSI, FC); supports custom descriptions and tagging for tiers. Compliance status is a simple binary report
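A rough sketch of the idea, assuming a hypothetical profile and datastore capability tags (this is not the vCenter API): a profile is a set of required capabilities, and compliance is the simple binary check described above.

```python
# Illustrative profile-compliance check; names and capabilities are hypothetical.

REQUIRED = {"replication": True, "tier": "gold"}   # hypothetical storage profile

datastores = {
    "FC-Gold-01": {"replication": True,  "tier": "gold",   "thin": True},
    "NFS-Bronze": {"replication": False, "tier": "bronze", "thin": True},
}

def compliant(capabilities, profile):
    # Binary result, mirroring the simple compliant/non-compliant report
    return all(capabilities.get(k) == v for k, v in profile.items())

for name, caps in datastores.items():
    print(name, "compliant" if compliant(caps, REQUIRED) else "non-compliant")
```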
  • 38. Native VMware Thin Provisioning. VMware ESX 4 allocates storage in 1 MB chunks as capacity is used. Similar support was enabled for virtual disks on NFS in VI 3, thin provisioning existed for block and could be enabled on the command line in VI 3, and it is present in VMware desktop products. vSphere 4 fully supports and integrates thin provisioning: every version/license includes it, and it allows thick-to-thin conversion during Storage vMotion. In-array thin provisioning is also supported (we’ll get to that…)
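A minimal model of thin allocation, assuming a hypothetical ThinDisk class (not the VMFS code): backing space is claimed in 1 MB chunks only when a chunk is first written, so allocated capacity trails provisioned capacity.

```python
# Sketch of thin allocation in 1 MB chunks; not the VMFS implementation.

CHUNK = 1024 * 1024   # 1 MB allocation unit, as mentioned for ESX 4

class ThinDisk:
    def __init__(self, provisioned_bytes):
        self.provisioned = provisioned_bytes
        self.chunks = set()               # indexes of chunks actually allocated

    def write(self, offset, length):
        # Allocate every chunk touched by this write, if not already allocated
        first, last = offset // CHUNK, (offset + length - 1) // CHUNK
        self.chunks.update(range(first, last + 1))

    @property
    def allocated(self):
        return len(self.chunks) * CHUNK

disk = ThinDisk(40 * 1024**3)             # a "40 GB" virtual disk
disk.write(0, 10 * 1024**2)               # guest writes 10 MB at the start
print(disk.allocated // CHUNK, "MB allocated of",
      disk.provisioned // CHUNK, "MB provisioned")
```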
  • 39. Four Types of VMware ESX Volumes (comparison table). Note: FT is not supported. What will your array do? VAAI helps… Some formats are friendly to on-array thin provisioning
  • 40. Storage Allocation and Thin Provisioning. VMware tests show no performance impact from thin provisioning after zeroing
  • 41. Pluggable Storage Architecture: Native Multipathing. VMware ESX includes multipathing built in. Basic native multipathing (NMP) is round-robin fail-over only: it will not load balance I/O across multiple paths or make more intelligent decisions about which paths to use. (Slide diagram: the Pluggable Storage Architecture (PSA), with the VMware NMP alongside third-party MPPs, and VMware or third-party SATPs and PSPs beneath them)
  • 42. Pluggable Storage Architecture: PSP and SATP. The vSphere 4 Pluggable Storage Architecture allows third-party developers to replace ESX’s storage I/O stack (ESX Enterprise+ only). There are two classes of third-party plug-ins: path-selection plug-ins (PSPs) optimize the choice of which path to use, ideal for active/passive-type arrays; storage array type plug-ins (SATPs) allow load balancing across multiple paths in addition to path selection for active/active arrays. EMC PowerPath/VE for vSphere does everything
  • 43. Storage Array Type Plug-ins (SATP). ESX native approaches: active/passive, active/active, pseudo-active. Storage array type plug-ins: VMW_SATP_LOCAL – generic local direct-attached storage; VMW_SATP_DEFAULT_AA – generic for active/active arrays; VMW_SATP_DEFAULT_AP – generic for active/passive arrays; VMW_SATP_LSI – LSI/NetApp arrays from Dell, HDS, IBM, Oracle, SGI; VMW_SATP_SVC – IBM SVC-based systems (SVC, V7000, Actifio); VMW_SATP_ALUA – Asymmetric Logical Unit Access-compliant arrays; VMW_SATP_CX – EMC/Dell CLARiiON and Celerra (also VMW_SATP_ALUA_CX); VMW_SATP_SYMM – EMC Symmetrix DMX-3/DMX-4/VMAX, Invista; VMW_SATP_INV – EMC Invista and VPLEX; VMW_SATP_EQL – Dell EqualLogic systems. Also, EMC PowerPath, HDS HDLM, and vendor-unique plug-ins not detailed in the HCL
  • 44. Path Selection Plug-ins (PSP). VMW_PSP_MRU – Most Recently Used (MRU), supports hundreds of storage arrays; VMW_PSP_FIXED – Fixed, supports hundreds of storage arrays; VMW_PSP_RR – Round-Robin, supports dozens of storage arrays; DELL_PSP_EQL_ROUTED – Dell EqualLogic iSCSI arrays. Also, EMC PowerPath and other vendor-unique plug-ins
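For illustration, here is a toy Python model of the three native policies (the real PSPs live inside the ESX storage stack; the path names are hypothetical): Fixed sticks to a preferred path, MRU stays on whatever path it last used, and Round-Robin rotates I/O across the active paths.

```python
# Conceptual models of the Fixed, MRU, and Round-Robin policies (illustration only).

from itertools import cycle

paths = ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]   # hypothetical path names

def fixed(preferred, available):
    """Fixed: always the preferred path while it is up, else any survivor."""
    return preferred if preferred in available else available[0]

class MostRecentlyUsed:
    """MRU: stick to the current path until it fails, then stay on the new one."""
    def __init__(self, available):
        self.current = available[0]

    def select(self, available):
        if self.current not in available:
            self.current = available[0]
        return self.current

round_robin = cycle(paths)   # RR: rotate I/O across all active paths
print([next(round_robin) for _ in range(4)])
print(fixed(paths[1], paths))
```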
  • 45. vStorage APIs for Array Integration (VAAI). VAAI integrates advanced storage features with VMware. Basic requirements: a capable storage array, ESX 4.1+, and a software plug-in for ESX. Not every implementation is equal: block zeroing can be very demanding for some arrays, and zeroing might conflict with full copy
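A back-of-the-envelope sketch of why offload matters (illustrative arithmetic, not a benchmark; the command sizes are rough assumptions): without VAAI the host itself pushes every zeroed block across the fabric, while an offloaded block-zero sends only a handful of small commands and lets the array do the work.

```python
# Rough comparison of fabric traffic with and without a block-zero offload.

BLOCK = 1024 * 1024                        # reason in 1 MB blocks

def host_side_zero_bytes(num_blocks):
    return num_blocks * BLOCK              # data the host must transmit itself

def offloaded_zero_bytes(num_blocks, max_blocks_per_cmd=2048):
    # Approximate the offload as a few small commands; the array performs the
    # actual zeroing, so almost no data crosses the fabric. Sizes are assumptions.
    commands = -(-num_blocks // max_blocks_per_cmd)   # ceiling division
    return commands * 512

blocks_100gb = 100 * 1024
print(f"Host-side zeroing: {host_side_zero_bytes(blocks_100gb) / 2**30:.0f} GiB over the wire")
print(f"Offloaded zeroing: ~{offloaded_zero_bytes(blocks_100gb)} bytes of commands")
```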
  • 47. vSphere 5: VAAI 2. Block (FC/iSCSI): T10 compliance is improved, so no plug-in is needed for many arrays. File (NFS): NAS plug-ins come from vendors, not VMware
  • 48. vSphere 5: vSphere Storage APIs – Storage Awareness (VASA). VASA is a communication mechanism for vCenter to detect array capabilities: RAID level, thin provisioning state, replication state, etc. Two locations in vCenter Server: “System-Defined Capabilities” (per-datastore descriptors) and the storage views and SMS APIs
  • 49. Storage I/O Control (SIOC). Storage I/O Control is all about fairness: prioritization and QoS for VMFS, redistribution of unused I/O resources, and minimizing “noisy neighbor” issues. ESX can provide quality of service for storage access to virtual machines. Enabled per-datastore; when a pre-defined latency level is exceeded on a VM, it begins to throttle I/O (default 30 ms); monitors queues on storage arrays and per-VM I/O latency. But: requires vSphere 4.1 with Enterprise Plus; disabled by default but highly recommended! Block storage only (FC or iSCSI); whole-LUN only (no extents); no RDM
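A simplified illustration of the SIOC concept (not VMware’s actual control loop; the VM names and share values are hypothetical): once observed latency crosses the congestion threshold, each VM’s device queue depth is trimmed in proportion to its shares.

```python
# Share-proportional throttling sketch, triggered by a latency threshold.

CONGESTION_THRESHOLD_MS = 30               # default latency threshold noted above

def adjust_queue_depths(latency_ms, shares, max_depth=32, min_depth=4):
    if latency_ms <= CONGESTION_THRESHOLD_MS:
        return {vm: max_depth for vm in shares}          # no congestion, no throttle
    total = sum(shares.values())
    # Congested: hand out queue slots in proportion to configured shares
    return {vm: max(min_depth, round(max_depth * s / total))
            for vm, s in shares.items()}

# Hypothetical VMs and share values on one congested datastore
print(adjust_queue_depths(45, {"sql01": 2000, "web01": 1000, "test01": 500}))
```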
  • 50. Storage I/O Control in Action
  • 51. Virtual Machine Mobility. Moving virtual machines is the next big challenge. Physical servers are difficult to move around and between data centers, and there is pent-up desire to move virtual machines from host to host and even to different physical locations. VMware DRS would move live VMs around the data center: the “Holy Grail” for server managers. Requires networked storage (SAN/NAS)
  • 52. vSphere 5: Storage DRS. Datastore clusters aggregate multiple datastores. VM and VMDK placement metrics: space (capacity utilization and availability, 80% default) and performance (I/O latency, 15 ms default). When thresholds are crossed, vSphere will rebalance all VMs and VMDKs according to affinity rules. Storage DRS works with either VMFS/block or NFS datastores. Maintenance Mode evacuates a datastore
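A conceptual sketch of the trigger logic described above (the datastore names and numbers are made up, and the real placement engine is far more involved): a datastore that exceeds either threshold becomes a rebalancing source.

```python
# Toy Storage DRS trigger check using the default thresholds mentioned above.

SPACE_THRESHOLD = 0.80        # 80% space utilization default
LATENCY_THRESHOLD_MS = 15     # 15 ms I/O latency default

datastores = {
    "ds-gold-01": {"used_pct": 0.91, "latency_ms": 9},
    "ds-gold-02": {"used_pct": 0.55, "latency_ms": 22},
    "ds-gold-03": {"used_pct": 0.40, "latency_ms": 6},
}

def needs_rebalance(stats):
    return (stats["used_pct"] > SPACE_THRESHOLD
            or stats["latency_ms"] > LATENCY_THRESHOLD_MS)

sources = [ds for ds, s in datastores.items() if needs_rebalance(s)]
targets = [ds for ds, s in datastores.items() if not needs_rebalance(s)]
print("rebalance from", sources, "onto", targets)
```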
  • 54. Expanding the Conversation: converged I/O, storage virtualization, and new storage architectures
  • 55. This Hour’s Focus: Non-Hypervisor Storage Features. Converged networking; storage protocols (FC, iSCSI, NFS); enhanced Ethernet (DCB, CNAs, FCoE); I/O virtualization; storage for virtual servers; tiered storage and SSD/flash; specialized arrays; virtual storage appliances (VSAs)
  • 56. Introduction: Converging on Convergence. Data centers rely more on standard ingredients. What will connect these systems together? IP and Ethernet are logical choices
  • 58. Which Storage Protocol to Use? Server admins don’t know or care about storage protocols and will want whatever they are familiar with. Storage admins have preconceived notions about the merits of the various options: FC is fast, low-latency, low-CPU, expensive; NFS is slow, high-latency, high-CPU, cheap; iSCSI is medium, medium, medium, medium
  • 63. Which Storage Protocols Do People Use? Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
  • 64. The Upshot: It Doesn’t Matter. Use what you have and are familiar with! FC, iSCSI, and NFS all work well. Most enterprise production VM data is on FC, with many smaller shops using iSCSI or NFS. Either/or? 50% use a combination. For IP storage: network hardware and configuration matter more than the protocol (NFS, iSCSI, FC); use a separate network or VLAN; use a fast switch and consider jumbo frames. For FC storage: 8 Gb FC/FCoE is awesome for VMs; look into NPIV; look for VAAI
  • 66. Serious Performance. 10 GbE is faster than most storage interconnects, and iSCSI and FCoE can both perform at wire rate
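Some rough wire-rate arithmetic behind that claim (payload math only; protocol overhead, encoding, and full-duplex operation are ignored for simplicity, so real numbers differ):

```python
# Back-of-the-envelope wire-rate figures for Ethernet links.

for name, gbps in [("1 GbE", 1), ("10 GbE", 10)]:
    mbytes_per_s = gbps * 1000 / 8                 # decimal MB/s
    iops_4k = gbps * 1_000_000_000 / (4096 * 8)    # 4 KiB transfers per second
    print(f"{name}: ~{mbytes_per_s:.0f} MB/s, ~{iops_4k:,.0f} 4K transfers/s per direction")
```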
  • 67. Latency is Critical Too. Latency is even more critical in shared storage. FCoE with 10 GbE can achieve well over 500,000 4K IOPS (if the array and client can handle it!)
  • 68. Benefits Beyond Speed. 10 GbE takes performance off the table (for now…), but performance is only half the story: simplified connectivity, new network architecture, and virtual machine mobility. (Slide diagram: a 1 GbE cluster link, a 1 GbE network link, and a 4G FC storage link consolidated onto 10 GbE, plus 6 Gbps of extra capacity)
  • 70. SCSI expects a lossless transport with guaranteed delivery
  • 71. Ethernet expects higher-level protocols to take care of issues
  • 72. “Data Center Bridging” is a project to create lossless Ethernet
  • 73. AKA Data Center Ethernet (DCE), Converged Enhanced Ethernet (CEE)
  • 74. iSCSI and NFS are happy with or without DCB
  • 75. DCB is a work in progress
  • 76. FCoE requires PFC (Qbb or PAUSE), DCBX (Qaz)
  • 77. QCN (Qau) is still not ready. (Slide diagram of the relevant standards: Priority Flow Control (PFC) 802.1Qbb; Congestion Management (QCN) 802.1Qau; Bandwidth Management (ETS) 802.1Qaz; PAUSE 802.3x; Data Center Bridging Exchange Protocol (DCBX); Traffic Classes 802.1p/Q)
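To show how these pieces fit together, here is an illustrative DCB configuration model in Python (the priority assignments and bandwidth shares are common conventions and assumptions, not requirements): PFC makes one priority lossless for FCoE, while ETS divides bandwidth among the traffic classes.

```python
# Illustrative DCB configuration model; values are conventions, not a mandate.

dcb_config = {
    "priorities": {
        0: {"traffic": "LAN default", "pfc": False},
        3: {"traffic": "FCoE",        "pfc": True},   # lossless class via per-priority PAUSE (802.1Qbb)
        4: {"traffic": "iSCSI",       "pfc": False},  # iSCSI works with or without PFC
    },
    "ets_bandwidth_pct": {"LAN default": 40, "FCoE": 40, "iSCSI": 20},  # 802.1Qaz shares
}

assert sum(dcb_config["ets_bandwidth_pct"].values()) == 100
for prio, cfg in dcb_config["priorities"].items():
    print(f"priority {prio}: {cfg['traffic']}, PFC {'on' if cfg['pfc'] else 'off'}")
```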
  • 78. FCoE CNAs for VMware ESX: no Intel (OpenFCoE) or Broadcom support in vSphere 4…
  • 79. vSphere 5: FCoE Software Initiator Dramatically expands the FCoE footprint from just a few CNAs Based on Intel OpenFCoE? – Shows as “Intel Corporation FCoE Adapter”
  • 80. I/O Virtualization: Virtual I/O Extends I/O capabilities beyond physical connections (PCIe slots, etc) Increases flexibility and mobility of VMs and blades Reduces hardware, cabling, and cost for high-I/O machines Increases density of blades and VMs
  • 81. I/O Virtualization: IOMMU (Intel VT-d). An IOMMU gives devices direct access to system memory (AMD IOMMU or Intel VT-d; similar to the AGP GART). VMware VMDirectPath leverages the IOMMU to allow VMs to access devices directly, but it may not improve real-world performance. (Slide diagram: CPU behind the MMU and I/O device behind the IOMMU, both reaching system memory)
  • 82. Does SSD Change the Equation? RAM and flash promise high performance… But you have to use it right
  • 83. Flash is Not a Disk. Flash must be carefully engineered and integrated: cache and intelligence to offset the write penalty, and automatic block-level data placement to maximize ROI. IF a system can do this, everything else improves: overall system performance, utilization of disk capacity, space and power efficiency; even system cost can improve!
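A toy block-placement policy for the idea above (real arrays use far richer heuristics and move data automatically over time): keep the hottest blocks on a small flash tier and leave the rest on disk.

```python
# Hot-block placement sketch: busiest blocks go to a limited flash tier.

access_counts = {"blk0": 500, "blk1": 3, "blk2": 250, "blk3": 7, "blk4": 90}
FLASH_SLOTS = 2                 # flash capacity, expressed in blocks

hot_first = sorted(access_counts, key=access_counts.get, reverse=True)
placement = {blk: ("flash" if i < FLASH_SLOTS else "disk")
             for i, blk in enumerate(hot_first)}
print(placement)                # the two busiest blocks land on flash
```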
  • 84. The Tiered Storage Cliché (diagram labels: cost and performance; optimized for savings!)
  • 86. Three Approaches to SSD for VMs. EMC Project Lightning promises to deliver all three!
  • 87. Storage for Virtual Servers (Only!). A new breed of storage solutions just for virtual servers: highly integrated (vCenter, VMkernel drivers, etc.), high-performance (SSD cache), and mostly from startups (for now). Tintri – NFS-based caching array; Virsto + EvoStor – Hyper-V software, moving to VMware
  • 88. Virtual Storage Appliances (VSA). What if the SAN were pulled inside the hypervisor? A VSA is a virtual storage array running as a guest VM. Great for a lab or PoC, though some are not for production. You can build a whole data center in a hypervisor, including LAN, SAN, clusters, etc. (Slide diagram: physical server resources (CPU, RAM) under a hypervisor hosting guest VMs and a virtual storage appliance serving a virtual SAN and virtual LAN)
  • 89. vSphere 5: vSphere Storage Appliance (VSA). Aimed at the SMB market. Two deployment options: 2x replicates storage (4:2); 3x replicates round-robin (6:3). Uses local (DAS) storage and enables HA and vMotion with no SAN or NAS. Uses NFS for storage access, and also manages IP addresses for HA
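Quick capacity math for the two layouts, reading the 4:2 and 6:3 figures above as raw-to-usable (a sketch under that assumption): every exported datastore is kept twice, so usable capacity is half of raw in either configuration.

```python
# Usable-capacity arithmetic for the two VSA layouts described above.

def vsa_usable_tb(nodes, local_tb_per_node):
    raw = nodes * local_tb_per_node
    return raw / 2            # one replica per datastore (the 4:2 or 6:3 layout)

for nodes in (2, 3):
    print(f"{nodes} nodes x 2 TB local -> {vsa_usable_tb(nodes, 2):.0f} TB usable")
```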
  • 91. Whew! Let’s Sum Up Server virtualization changes everything Throw your old assumptions about storage workloads and presentation out the window We (storage folks) have some work to do New ways of presenting storage to the server Converged I/O (Ethernet!) New demand for storage virtualization features New architectural assumptions
  • 92. Thank You! Stephen Foskett stephen@fosketts.net twitter.com/sfoskett +1(508)451-9532 FoskettServices.com blog.fosketts.net GestaltIT.com

Editor's notes

  1. Mirror Mode paper: http://www.usenix.org/events/atc11/tech/final_files/Mashtizadeh.pdf and http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-2-storage-vmotion.html
  2. http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-1-vmfs-5.html
  3. Up to 256 FC or iSCSI LUNs. ESX multipathing: load balancing, failover, failover between FC and iSCSI.* Beware of block sizes greater than 256 KB! If you want virtual disks greater than 256 GB, you must use a VMFS block size larger than 1 MB. Align your virtual disk starting offset to your array (by booting the VM and using diskpart, Windows PE, or UNIX fdisk).*
  4. Link Aggregation Control Protocol (LACP) for trunking/EtherChannel. Use the “fixed” path policy, not LRU. Up to 8 (or 32) NFS mount points. Turn off access time updates. Thin provisioning? Turn on AutoSize and watch out
  5. http://www.techrepublic.com/blog/datacenter/stretch-your-storage-dollars-with-vsphere-thin-provisioning/2655 and http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf
  6. http://virtualgeek.typepad.com/virtual_geek/2011/07/vstorage-apis-for-array-integration-vaai-vsphere-5-edition.html and http://blogs.vmware.com/vsphere/2011/07/new-enhanced-vsphere-50-storage-features-part-3-vaai.html
  7. http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_SIOC.pdf. Recommended latency thresholds: FC storage, 20-30 ms; SAS storage, 20-30 ms; SATA storage, 30-50 ms; SSD storage, 15-20 ms. See also http://www.yellow-bricks.com/2010/10/19/storage-io-control-best-practices/
  8. Same references and latency thresholds as note 7.
  9. http://www.slideshare.net/esloof/vsphere-5-whats-new-storage-drs and http://blogs.vmware.com/vsphere/2011/07/vsphere-50-storage-features-part-5-storage-drs-initial-placement.html
  10. http://www.ntpro.nl/blog/archives/1804-vSphere-5-Whats-New-Storage-Appliance-VSA.html
  11. http://jpaul.me/?p=2072