Simplify, Virtualize and Protect Your Datacenter: Cost Savings and Business Continuity with VMware's Latest vSphere Solution (10/13/09, copyright I/O Continuity Group)
Cloud Computing: What does it mean?
Cloud Computing and Economic Recovery
Datacenter Challenges
Conclusion
Traditional DAS (Direct-Attached Storage): the popular method for deploying applications was to install each one on a dedicated server, with each server separately attached to its own external SCSI storage array, which strands capacity. A parallel SCSI-3 connection provides throughput of approximately 200 MB/s after overhead. This model requires high storage maintenance, makes scalability and provisioning difficult, and different vendor platforms cannot share the same external array.
SAN-attached Storage: FC SANs offer a shared, high-speed, dedicated block-level infrastructure independent of the LAN. Servers with NICs and FC HBAs connect through FC SAN switches (200/400/800 MB/s) to shared FC storage arrays and tape libraries; alternatively, an IP SAN with iSCSI uses Ethernet switches. With shared storage, applications are able to run anywhere.
Physical servers represent the Before illustration, running one application per server. VMware Converter can migrate physical machines to virtual machines running on ESX, shown in the After illustration.
What is a Virtual Machine? (Diagram: a virtual machine presents virtual hardware to a regular operating system running regular applications, with the VM's files kept on shared storage.)
ESX Architecture (Diagram: the ESX hypervisor shares CPU, memory, disk, and NIC hardware resources among the virtual machines.)
Storage Overview. VMware supports these storage types:
- Locally attached: internal or external DAS
- Fibre Channel: high-speed SCSI on a SAN
- iSCSI / IP SAN: SCSI over standard TCP/IP
- NAS: file-level share on the LAN (NFS)
Datastore formats are VMware VMFS, NFS, and Raw Device Mappings (RDMs).
ESX Datastore and VMFS. Datastores are logical storage units created on a physical LUN (disk device) or on a disk partition; a VMFS datastore is mounted on the ESX host from the LUN presented by the storage array. Datastore formats are VMFS or NFS, and an RDM (Raw Device Mapping) gives a VM direct access to a raw LUN. Datastores hold VM files, templates and ISO images, or the RDM pointer used to access the raw device.
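Not from the original deck: a minimal sketch of how these objects look through the vSphere API, using the pyVmomi Python bindings (an assumption of this writeup, not something the presentation mentions). The vCenter hostname and credentials are placeholders, and the certificate bypass is for lab use only; the script simply lists each datastore with its format type and capacity.

```python
# Minimal pyVmomi sketch (assumed tooling): list every datastore vCenter knows
# about, with its format type (VMFS or NFS) and capacity/free space.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab use only: skip certificate checks
si = SmartConnect(host="vcenter.example.com", # placeholder vCenter and credentials
                  user="administrator", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        print(f"{s.name}: type={s.type}, "
              f"capacity={s.capacity / 2**30:.1f} GiB, "
              f"free={s.freeSpace / 2**30:.1f} GiB")
finally:
    Disconnect(si)
```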
VMware Deployment Conclusions
New vSphere Storage Features
Disk Thin Provisioning In a Nutshell
Disk Thin Provisioning Comparison:
- Without thin provisioning (thick): if you create a 500 GB virtual disk, the VM consumes the entire 500 GB allocated on the VMFS datastore.
- With thin provisioning: if you create a 500 GB virtual disk but the VM has only written 100 GB, only 100 GB of the VMFS datastore is consumed, even though 500 GB is allocated to the VM for growth.
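To make the comparison concrete, here is a hedged sketch (not part of the deck) of creating such a 500 GB thin-provisioned disk on an existing VM through the vSphere API with pyVmomi; the controller lookup and unit number are simplifying assumptions.

```python
# Sketch (assumed helper, not from the deck): add a 500 GB thin-provisioned
# virtual disk to an existing VM. Only blocks the guest actually writes will
# consume space on the VMFS datastore.
from pyVmomi import vim

def add_thin_disk(vm, size_gb=500, unit_number=1):
    disk = vim.vm.device.VirtualDisk()
    disk.capacityInKB = size_gb * 1024 * 1024
    disk.unitNumber = unit_number
    # Attach to the VM's existing SCSI controller (assumes one is present).
    disk.controllerKey = next(
        d.key for d in vm.config.hardware.device
        if isinstance(d, vim.vm.device.VirtualSCSIController))

    backing = vim.vm.device.VirtualDiskFlatVer2BackingInfo()
    backing.diskMode = "persistent"
    backing.thinProvisioned = True        # the thin-provisioning switch
    disk.backing = backing

    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    dev_spec.device = disk
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[dev_spec]))
```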
Disk Thin Provisioning Defined
Without Thin Provisioning vs. With Thin Provisioning (Diagram). Traditional servers with direct-attached storage and dedicated disks strand capacity entirely: what you see is what you get. ESX servers attached through SAN switches to a storage array share disks; with a thick LUN the full 500 GB virtual disk is reserved (100 GB of application usage, 400 GB allocated but unused), while with a thin LUN all VMs see the capacity allocated but the array provides only what is actually used.
Thin Provisioning (Example: 120 GB allocated to thin VM disks, with only 60 GB actually used.)
Virtual Disk Thin Provisioning Configured
Thin Disk Provisioning Operations
Improved Storage Management: the datastore is now managed as an object within vCenter, providing a view of all components in the storage layout and their utilization levels. Details for each datastore reveal which ESX servers are consuming capacity.
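As an illustration of the kind of utilization reporting described above (again a pyVmomi sketch, not a tool shipped with vCenter), the datastore summary already exposes the numbers needed to spot thin-provisioning overcommitment:

```python
# Sketch: compare used vs. provisioned space per datastore. "uncommitted" is
# capacity promised to thin disks but not yet written, so used + uncommitted
# above capacity means the datastore is overcommitted.
from pyVmomi import vim

def report_datastore_usage(content):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        used = s.capacity - s.freeSpace
        provisioned = used + (s.uncommitted or 0)
        flag = "OVERCOMMITTED" if provisioned > s.capacity else "ok"
        print(f"{s.name}: capacity {s.capacity/2**30:.0f} GiB, "
              f"used {used/2**30:.0f} GiB, "
              f"provisioned {provisioned/2**30:.0f} GiB [{flag}]")
```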
Thin Provisioning Caveats
Thin Provisioning Conclusions
iSCSI Software Initiator In a Nutshell
What is iSCSI?
iSCSI Software Initiator Key Improvements
Why is Ordinary Software iSCSI Slow? (Comparison: the iSCSI protocol runs over TCP/IP with high-overhead protocol processing, while Fibre Channel Protocol runs over a high-speed dedicated FC SAN.)
vSphere Software iSCSI Configuration: changes made on the General tab are global and propagate down to each target. Bi-directional (mutual) CHAP is added so that the target also authenticates itself to the initiator.
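For reference only, a hedged sketch of the same configuration done programmatically; the method and property names here reflect my reading of the host storage API rather than anything shown in the deck, and all CHAP names and secrets are placeholders.

```python
# Hedged sketch: enable the ESX software iSCSI initiator and require mutual
# (bi-directional) CHAP at the adapter level, so it propagates to each target.
# "host" is a vim.HostSystem; names and secrets below are placeholders.
from pyVmomi import vim

def enable_sw_iscsi_mutual_chap(host):
    ss = host.configManager.storageSystem
    ss.UpdateSoftwareInternetScsiEnabled(True)     # turn on the software initiator

    # Locate the software-based iSCSI HBA that was just enabled.
    sw_hba = next(h for h in ss.storageDeviceInfo.hostBusAdapter
                  if isinstance(h, vim.host.InternetScsiHba) and h.isSoftwareBased)

    auth = vim.host.InternetScsiHba.AuthenticationProperties(
        chapAuthEnabled=True,
        chapAuthenticationType="chapRequired",
        chapName="esx-initiator", chapSecret="initiator-secret",
        mutualChapAuthenticationType="chapRequired",
        mutualChapName="array-target", mutualChapSecret="target-secret")
    ss.UpdateInternetScsiAuthenticationProperties(sw_hba.device, auth)
```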
iSCSI Performance Improvements: the software iSCSI stack shows the largest improvement.
Software iSCSI Conclusions
Dynamic Storage Growth In a Nutshell
Without Hot Disk Extend: LUN Spanning (Before: 20 GB; add a 20 GB extent; after: 40 GB spanned.) Each 20 GB extent (virtual disk) becomes a separate partition, a file system with its own drive letter, in the guest OS, and if one spanned extent is lost, the entire volume becomes corrupt.
Hot Extend VMFS Volume Growth Option (Before: 20 GB; after: the volume is grown to 40 GB, and can grow up to 2 TB.)
Dynamic Expansion Up to the VM (the expansion feature available at each layer):
- Storage level: the LUN presented as one datastore (SAN admin), expanded via Dynamic LUN Expansion
- ESX level: the datastore holding the VM virtual disks (ESX admin), expanded via Datastore Volume Growth
- VM level: the virtual disk seen by the guest OS, expanded via Hot Virtual Disk Extend
Virtual Disk Hot Extend Configuration (Example: increase a virtual disk from 2 GB to 40 GB.) After updating the VM properties, use the guest OS to extend the file system onto the newly allocated disk space. The disk must be a non-system virtual disk. Ultimate VM application capacity is not always predictable at the outset.
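The same hot-extend operation expressed as a pyVmomi sketch (the disk label and new size are placeholders; the deck itself only shows the vSphere Client dialog):

```python
# Sketch: hot-extend a non-system virtual disk while the VM is running.
# Growing is supported; shrinking is not. Afterwards, extend the file system
# inside the guest OS to use the new space.
from pyVmomi import vim

def hot_extend_disk(vm, disk_label="Hard disk 2", new_size_gb=40):
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk)
                and d.deviceInfo.label == disk_label)
    disk.capacityInKB = new_size_gb * 1024 * 1024
    dev_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[dev_spec]))
```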
Virtual Disk Hot Extend Conclusion
Storage VMotion In a Nutshell. Storage VMotion (SVM) enables live migration of virtual machine disks from one datastore to another with no disruption or downtime. This hot migration of the storage location allows easy movement of VM data. Like VMotion, Storage VMotion reduces service disruptions without server downtime: it minimizes disruption when rebalancing or retiring storage arrays, reducing or eliminating planned storage downtime, and it simplifies array migrations and upgrades and reduces I/O bottlenecks by moving virtual machine disks while the VM remains up and running.
Enhanced Storage VMotion Features
Storage VMotion Benchmarks: Changed Block Tracking replaces snapshot technology, consuming less CPU, shortening the time to migrate data, and using fewer resources in the process.
Storage VMotion New Capabilities
Storage VMotion Benefits
Storage VMotion: How it Works (Diagram: the VM's disks are copied from a source disk array on FC to a destination disk array on iSCSI while the VM continues running.)
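A sketch of the equivalent API call (the destination datastore name is a placeholder; in the deck this is driven from the vSphere Client's Migrate wizard):

```python
# Sketch: Storage VMotion via the API. Relocating with only a datastore in the
# RelocateSpec moves the VM's storage while it stays on the same host, running.
from pyVmomi import vim

def storage_vmotion(si, vm, dest_datastore_name="iscsi-ds-01"):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    dest = next(ds for ds in view.view if ds.name == dest_datastore_name)
    spec = vim.vm.RelocateSpec(datastore=dest)
    return vm.RelocateVM_Task(spec)     # returns a Task to monitor the copy
```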
Storage VMotion Pre-requisites
Storage VMotion Conclusion
Paravirtualized SCSI In a Nutshell
PV SCSI (Configure a PVSCSI drive in the VM.)
PV SCSI Key Benefits
VMware Performance Testing: Reduced CPU Usage (Chart: less CPU usage and overhead; FC HBAs offer the least overhead.)
PV SCSI Configuration
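A hedged sketch of adding a PVSCSI controller to a VM through the API (the deck shows the vSphere Client dialog; the bus number and sharing mode here are assumptions):

```python
# Sketch: add a paravirtualized SCSI (PVSCSI) controller to a VM. Data disks
# attached to this controller then use the PVSCSI driver from VMware Tools.
from pyVmomi import vim

def add_pvscsi_controller(vm, bus_number=1):
    ctrl = vim.vm.device.ParaVirtualSCSIController()
    ctrl.busNumber = bus_number
    ctrl.sharedBus = "noSharing"
    dev_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=ctrl)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[dev_spec]))
```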
PV SCSI Use Cases
PV SCSI Conclusions
Pluggable Storage Architecture In a Nutshell
Pluggable Storage Architecture: ESX 3.5 did not support third-party storage vendor multipathing software; it required the native MPIO driver, which was not optimized for dynamic load balancing and failover. vSphere ESX 4 allows storage partners to write plug-ins for their specific capabilities, providing dynamic multipathing and load balancing on active-active arrays in place of the less intelligent native multipathing (basic round-robin or failover).
Pluggable Storage Architecture (PSA) terminology:
- NMP: the generic VMware Native Multipathing plug-in, the default without a vendor plug-in
- PSP: Path Selection Plug-in
- Third-party PSP: a vendor-written path management plug-in
- SATP: a vendor Storage Array Type Plug-in
Pluggable Storage Architecture (PSA): by default, VMware provides a generic multipathing plug-in (MPP) called the NMP (Native Multipathing Plug-in).
Enhanced Multipathing with Pluggable Storage Architecture: each ESX 4 host applies one of the plug-in options based on the storage vendor's choices.
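To see which policy a host actually applied, a small reporting sketch (assumed, not from the deck) reads the host's multipath information:

```python
# Sketch: print the path selection policy the PSA applied to each LUN on a
# host (e.g. VMW_PSP_FIXED, VMW_PSP_RR, or a vendor-supplied PSP) and the
# number of paths available to it. "host" is a vim.HostSystem.
def show_multipath_policies(host):
    mp = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
    for lun in mp.lun:
        policy = lun.policy.policy if lun.policy else "unknown"
        print(f"{lun.id}: policy={policy}, paths={len(lun.path)}")
```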
VMDirectPath I/O (Experimental)
Third-party PSPs
Higher Performance API for Multipathing
EMC PowerPath/VE
PSA Conclusions
FT in a Nutshell
HA vs FT
New Fault Tolerance
Fault Tolerance (FT) Technology
FT Lockstep Technology
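A hedged sketch of turning FT on and off for a VM through the vSphere API; the task methods named below are my reading of the API rather than anything the deck shows, and all FT prerequisites covered on the following slides must already be satisfied.

```python
# Hedged sketch: enable Fault Tolerance on a VM by asking vCenter to create the
# lockstepped secondary copy on another host (host selection left to the
# cluster), and disable it again. Both calls return Tasks to monitor.
def enable_fault_tolerance(vm):
    return vm.CreateSecondaryVM_Task()

def disable_fault_tolerance(vm):
    return vm.TurnOffFaultToleranceForVM_Task()
```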
FT System Requirements
Other FT Configuration Restrictions
Other FT Configuration Guidelines
FT Conclusions
Data Recovery in a Nutshell
Data Recovery: provides faster restores to disk than tape-based backup solutions.
vSphere Data Recovery vs VCB
Data Recovery Key Components
Implementation Considerations
Next Evolution of VCB, shipping with vSphere: an improved API enables native integration with partner backup applications.
Data Recovery Conclusions
Understanding vSphere Licensing
Legacy VI3 vCenter License Server Topology (Diagram: ESX 3 servers, an Active Directory domain, a database server hosting the VirtualCenter database, the VirtualCenter server, and a VMware License Server running on a separate VM or physical server.)
New vSphere ESX License Configuration (in the navigation bar: Home -> Administration -> Licensing).
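For completeness, a sketch (assumed script) that reads the same licensing view from vCenter's built-in license service, which replaces the separate VI3 license server:

```python
# Sketch: list the license keys held by vCenter's license manager and how much
# of each is assigned, mirroring the Home -> Administration -> Licensing view.
def show_licenses(si):
    lm = si.RetrieveContent().licenseManager
    for lic in lm.licenses:
        print(f"{lic.name}: key={lic.licenseKey}, "
              f"used={lic.used} of {lic.total} {lic.costUnit}")
```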
Upgrading to vSphere License Keys
New License Count
License Downgrade Options
vSphere Upgrade Requirements
vSphere Compatibility Lists
Survey of Upgrade Timing: the majority of 140 votes are waiting at least 3-6 months before upgrading. The preference to allow some time before implementation (the survey shows around 6 months) indicates interest in broader, more mature support for vSphere 4 in the near future.
VMware Upgrade Conclusion
Q&A
VMware on SAN Design Questions
Vendor Neutral Design Benefits
Closing Remarks


Editor's Notes

1. Fighting Complexity in the Data Center. Basically, Maritz said, data centers represent expensive "pillars of complexity" for companies of all sizes, which is why they're being threatened by cloud computing. Data centers hang on, he said, because they're secure and well understood: "The beast we know and love." SMBs are even more intimidated by data center complexity and can't take advantage of many data center economies of scale, all of which makes cloud computing an even more attractive option for smaller companies.

VMware is trying to change that with its $999 Always On IT In A Box, which covers up to three servers with two processors each (which generates the $166 per processor price the company is tossing around). According to Maritz, this product dramatically lowers the complexity level for small companies with mini data centers. If something goes wrong, he said, "You don't call for help right away, you just wait for the regular weekly service visit. The software is self-healing and keeps running until the regular maintenance."

Michael Dell focused on the manageability issue: "Spending $1 to acquire infrastructure and then spending $8 to manage it is unacceptable," he said. "Variation kills." To that end, VMware is working with key cloud providers like Terremark (though not Amazon EC2) to make the network more compatible with company data centers. The idea is to let companies view all resources (internal and in the cloud) as a single "private cloud," and easily move data and applications among various data centers and cloud providers as needed. Maritz also claimed that with vSphere 4's performance improvements, virtually any application can be virtualized.

Good Is Good, But Cheap Is Sometimes Better. For SMBs, of course, the key is often cost. "I want to have the same features enterprises want," said Campbell Clinic's Lauer. "The question is always whether I can afford them. I'd love to use Site Recovery Manager (SRM) but I can't afford it. I can't do the ROI," he said, but added that other SMBs that can't afford downtime might be able to justify it. Lauer said that many of the features he was most interested in are typically included only in the enterprise editions of vSphere 4, but he expects VMware to offer them as independent upgrades for SMB customers. "I hope to see more specific features built for SMBs." Just as important for Lauer, vSphere 4 is now at a price point much closer to Microsoft's, which will help SMBs get on board with VMware, he said. Some SMBs just want to consolidate their servers, or have room to "spin up a test machine." Joe Andrews, VMware's SMB group product marketing manager, added that when you compare "high availability" functionality, VMware is now cheaper than Microsoft's virtualization solutions. Andy Woyzbun, lead analyst at Info-Tech Research, agrees that in the competition between VMware and Microsoft, "a lot of it has to do with pricing." Microsoft's virtualization offerings have been cheap enough to convince smaller companies to start using virtualization, Woyzbun said, but VMware's new packages will help keep existing customers on board. "But what about the great unwashed" who haven't yet committed to virtualization?
  2. If you missed our last seminar, the topic was broader: SANs as the underlying storage architecture and how VMware storage should be configured.
  3. The popular model for deploying applications in the past was to add a dedicated server to the datacenter. Hopefully, there was still a rack somewhere with space. No one questioned the power consumption or administration issues. This pattern of the past 15 years has led to server sprawl supporting the email, web development and social networking revolutions. It is not a sustainable model, even though hardware prices keep going down while Moore's Law keeps delivering exponential processing power. With the higher CPU processing power of servers today, one application normally cannot efficiently utilize all the server resources. In today's economy there is no room for waste.

Whether we have standalone applications (on the right) or a VMware deployment (on the left), there is still no sharing of the disk resources between these systems, making them very static and at risk of failure. Just to quickly review, this is our Before picture: the musical-chairs model where only one server can attach to an external disk shelf at a time. Servers cannot boot and applications cannot run without a disk, making it the lowest common denominator hardware component after the server. Clients communicate with applications running on servers that use local disks. Local disks can be either within the server itself or on an external array.

Why is this storage model no longer tenable? Supplying power to a server for a year is now more expensive than buying the server. Running a single mission-critical application on a large server might not utilize all the CPU and memory capacity, resulting in wasted resources. Each server would boot from disks inside the server, and these internal disks may have contained adequate space to run the application. But what if the application was a database that continually added more data records, consuming more disk space? An external array was the common solution for a server requiring more disk capacity. Whether or not it could grow into using all the disks in the external array was unpredictable. This led to stranded, unused storage capacity.
  4. To review, by having all your servers share one storage array on a high-speed dedicated SAN, you will not only have a system that runs by itself, you will also have the necessary foundation to install a virtualization solution like VMware. Notice how the VMware server can share the same storage as Linux or Windows; again, SANs are flexible.
  5. Here is a good example of a before and after virtualization illustration. You should be able to see the value proposition. There are tools that will migrate any of your physical servers to virtual servers. So instead of having one application running on each physical server, you have multiple “Virtual Servers” running on two or more high availability servers. Thereafter, if you ever need another server, you simply right click and clone a virtual server. This is on-the-fly server provisioning.
  6. It is necessary to install the guest OS and application into the VM just like a physical server. Templates or Virtual Appliances are pre-installed applications running on a pre-installed OS.
  7. The new vSphere features that improve storage efficiency are Thin Provisioning and the iSCSI software initiator. Storage control enhancements include Hot Expansion of VMFS volumes. Better flexibility is achieved through Storage VMotion, the Pluggable Storage Architecture and Paravirtualized SCSI. VMware has introduced a new version of the hypervisor, ESX 4.0, for greater resource efficiency, management and flexibility. The new features increase the disk utilization rate, provide greater control over storage resources, and enable more options for datastore protocols and virtual disk formats.

Simplify Storage Management: Alleviating Over-Subscription, Dynamic Storage Growth, Migration with Storage VMotion, vStorage Thin Provisioning, Enhanced Performance for the Software iSCSI Stack, Plug-ins for Vendor Multipathing, Paravirtualized SCSI Adapters.

Protect Data from Loss or Disaster: New VM Fault Tolerance, New Data Recovery Option, Legacy VMware Consolidated Backup, Enterprise Site Recovery Manager.
  8. Here are two diagrams comparing Thick and Thin Disk Provisioning. Notice the thick disk is a fixed amount of 500 GB of capacity dedicated to a host currently using 100 GB. If the application running on this disk never grows, 400 GB becomes stranded and unused. The thin disk appears to the VMs using it as 500 GB, but only the portion actually used is physically allocated to the VM.

http://virtualgeek.typepad.com/virtual_geek/2009/04/thin-on-thin-where-should-you-do-thin-provisioning-vsphere-40-or-array-level.html

Thin - in this format, the size of the VMDK file on the datastore is only however much is used within the VM itself. For example, if you create a 500GB virtual disk and place 100GB of data in it, the VMDK file will be 100GB in size. As I/O occurs in the guest, the vmkernel zeroes out the space needed right before the guest I/O is committed, growing the VMDK file accordingly.

Thick (otherwise known as zeroedthick) - in this format, the size of the VMDK file on the datastore is the size of the virtual disk that you create, but within the file, it is not "pre-zeroed". For example, if you create a 500GB virtual disk and place 100GB of data in it, the VMDK will appear to be 500GB at the datastore filesystem, and contains 100GB of data on disk. As I/O occurs in the guest, the vmkernel zeroes out the space needed right before the guest I/O is committed, but the VMDK file size does not grow (since it was already 500GB).

http://www.virtualpro.co.uk/2009/06/24/vmware-vsphere-thin-provisioning/

You need to factor in the I/O of the disk subsystem when working out VMs per LUN. Slower/older subsystems don't perform as well and should have a lower VM limit (hence lower I/O requirements) than faster/newer disk subsystems. IOMeter comes in handy here to benchmark new systems for a baseline. Basically, as you say, it's a black art; some people go with 1 VM per LUN, others go with 2TB LUNs and pile them up. We work in the 500-600GB range per LUN with some 'bigger' LUNs for the larger guests.

Interesting conversation, as we (EMC) have been refreshing some of the best practice guides for vSphere. We had a lot of debate on this topic (the previous versions recommended ~300-500GB as a max, and a baseline along the formula you recommend, Duncan). In the end, capacity is one of the planning vectors. Those IOps-limited cases absolutely happen (it depends on the type of VM). SCSI locking/reservation is generally not an issue during steady state, but as Hany pointed out it can be when many ESX snapshots are occurring simultaneously. I've done posts on this topic, and Duncan is also right that they are MUCH more transient than they used to be, and will continue to improve. Host-side LUN queues can also be a limiter on the number of VMs (I've done posts on this one). While 10-16 VMs per datastore is safe, it's also WAY low. There's a certain logic of not only performance, but aggregated risk. So our current thinking is that a capacity-oriented recommendation is almost impossible to give. The other thing is that between ESX snapshots and vswap, the extra space is really difficult to predict.
In general, both on performance and capacity, my guidance is:
- Spend less time on planning up front than you would on a physical deployment (people are used to storage re-configuration being DEADLY disruptive).
- Plan using standardized building blocks (so, hey, if you want n VMs per datastore and m GB per datastore, fine; just know you can go bigger with things like spanned VMFS).
- Monitor both capacity and performance; the nice new managed datastore objects and storage view reports help here.
- Use Storage VMotion (along with dynamic, non-disruptive array reconfiguration) liberally to resolve point issues.
With all that, are we (I) making this more complicated than it needs to be?
  9. Capacity over-commitment is comparable to the common practice in the airline industry of overbooking flights. Many people make reservations but do not show up for their flights. By booking more reservations than available seats, the airlines save money on otherwise-empty seats. In the storage industry, a request for more disk than an application may ever need creates a similar waste problem. Thin provisioning of storage allows an administrator to over-allocate the amount of storage resources currently available in order to optimize utilization rates. A thin virtual disk is assigned only the amount of space in the datastore actually needed for the virtual disk. Virtual disks on NFS are effectively thin provisioned by default.
  10. http://virtualgeek.typepad.com/virtual_geek/2009/04/thin-on-thin-where-should-you-do-thin-provisioning-vsphere-40-or-array-level.html Eagerzeroedthick - in this format, the size of the VMDK file on the datastore is the size of the virtual disk that you create, and within the file, it is "pre-zeroed". For example, if you create a 500GB virtual disk and place 100GB of data in it, the VMDK will appear to be 500GB at the datastore filesystem, and contains 100GB of data and 400GB of zeros on disk. As I/O occurs in the guest, the vmkernel does not need to zero the blocks prior to the I/O occurring. This results in improved I/O latency and fewer back-end storage I/O operations during normal I/O, but significantly more back-end storage I/O operations up front during the creation of the VM.
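For reference, all three virtual disk formats can be created from the ESX service console with vmkfstools. This is a minimal sketch; the datastore, folder and file names are made up for illustration:

# 500 GB thin disk: space is consumed only as the guest writes data
vmkfstools -c 500G -d thin /vmfs/volumes/datastore1/vm1/vm1_data.vmdk
# Same size as zeroedthick (the default "Thick" format described above)
vmkfstools -c 500G -d zeroedthick /vmfs/volumes/datastore1/vm1/vm1_data.vmdk
# Same size as eagerzeroedthick: fully allocated and zeroed up front (the format FT expects)
vmkfstools -c 500G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1_data.vmdk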
  11. Capacity over-commitment is comparable to the common practice in the airline industry of overbooking flights. Many people make reservations but do not show up for their flights. By booking more reservations than available seats, the airlines save money on otherwise-empty seats. In the storage industry, a request for more disk than an application may ever need creates a similar waste problem. Thin provisioning of storage allows an administrator to over-allocate the amount of storage resources currently available in order to optimize utilization rates. A thin virtual disk is assigned only the amount of space in the datastore actually needed for the virtual disk. Virtual disks on NFS are effectively thin provisioned by default.
  12. Virtual disk thin provisioning is not the same as storage hardware "thin provisioning". With vSphere it is possible to do virtual disk thin provisioning at the datastore level, in addition to thin provisioning at the storage array level.
  13. Thin Provisioning is wizard-driven and easy to use when creating a new VM or cloning a VM, plus it works with Storage VMotion. Thin Provisioning entails much more than just creating thin virtual disks. It requires:
- Intelligence about how much over-commitment to allow
- Reporting on usage over time
- Alerting about a pending out-of-space condition
- Providing several options to create more space
vSphere provides all this and more to make it safe to use Thin Provisioning and over-commit space.
  14. Each VM and ESX host in the inventory has a tab showing storage information and allowing users to set alarms on storage use – setting capacity alarms becomes extremely important when thin provisioning is used, because over-committed datastores can run out of space as thin disks expand. The vCenter inventory also adds new high-level datastore details and a new storage topology map.
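As a quick sanity check outside of vCenter, free space on VMFS datastores can also be inspected from the classic ESX service console; a minimal sketch:

# vdf includes VMFS volumes, which the plain df command does not show
vdf -h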
  15. One of the most widely applicable storage features is native thin provisioning. VMware ESX 4 will allocate storage in 1 MB chunks as capacity is used. This isn't really completely new – similar support was enabled by default for virtual disks on NFS in VI 3, and thin provisioning could be enabled on the command line for block-based storage as well. It was also present in VMware's desktop products, including my own copy of Fusion. And ESX allows thick-to-thin conversion during Storage VMotion. The difference with vSphere 4 is that thin provisioning is fully supported and integrated into every version of ESX. Although many storage arrays now also offer thin storage, the addition of native, integrated thin provisioning right in ESX is huge. This alone will be a major capacity (and thus cost) savings feature! VMware claims 50% storage savings in their lab tests.
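One way to see the effect on a running system, assuming standard service console tools and hypothetical paths, is to compare a thin disk's provisioned size with the space it actually consumes on the VMFS volume:

# ls shows the provisioned size; du shows the blocks actually allocated so far
ls -lh /vmfs/volumes/datastore1/vm1/vm1-flat.vmdk
du -h /vmfs/volumes/datastore1/vm1/vm1-flat.vmdk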
  16. http://www.vmware.com/files/pdf/vsphere_performance_wp.pdf iSCSI Support Improvements: vSphere 4 includes significant updates to the iSCSI stack for both software iSCSI (that is, in which the iSCSI initiator runs at the ESX layer) and hardware iSCSI (that is, in which ESX leverages a hardware-optimized iSCSI HBA). These changes offer dramatic improvements in both the performance and the functionality of software and hardware iSCSI, and deliver a significant reduction in CPU overhead for software iSCSI. Efficiency gains for the iSCSI stack can result in 7-26 percent CPU savings for reads and 18-52 percent for writes. It's harder to claim it as a new feature, but the iSCSI software initiator has also been tweaked and tuned to use less CPU time and deliver better throughput. The iSCSI configuration process has also been smoothed out, so one no longer needs a live Service Console connection in order to communicate with an iSCSI target. And changes made in the General tab are now global, so they'll propagate down to each target. Bi-directional CHAP is also added, so the target can now be authenticated in addition to the initiator. vSphere also includes a paravirtualized SCSI driver (PVSCSI), which works like vmxnet to present a higher-performance virtual SCSI adapter within certain supported guest OSes.
  17. To put the iSCSI protocol in context, it runs on TCP/IP, the common Ethernet LAN protocol. In 2008, new iSCSI storage purchases exceeded FC storage purchases due to its affordability. Software iSCSI is implemented in OS software through a standard NIC. Hardware iSCSI is implemented through hardware on an HBA, offloading CPU processing to a PCI card and offering better performance. Software iSCSI is less expensive than purchasing a separate HBA, but the tradeoff is lower performance. An ESX host can connect using only one of these two iSCSI implementations. Therefore, vSphere has improved the software iSCSI stack to compensate. http://searchstorage.techtarget.com/generic/0,295582,sid5_gci1281317,00.html
  18. vSphere has released a new iSCSI software initiator that is far more efficient in its use of ESX CPU cycles to drive storage I/O. The entire iSCSI software initiator stack was rewritten and tuned to optimize cache affinity, enhance the VMkernel TCP/IP stack and make better use of internal locks. Compared to ESX 3, the new vSphere iSCSI software initiator, which relies on the host's native NIC, is optimized for virtualization I/O and provides significant improvements in CPU usage and I/O throughput.

Do we still need to configure a Service Console port for the iSCSI initiator? http://transcenture.blogspot.com/2009/06/it-news-vsphere-4-and-vcenter-4.html We no longer need a Service Console port for the software iSCSI initiator; the vmkiscsid daemon no longer runs in the Service Console. There have been improvements to the new iSCSI stack in the kernel and also with the use of TCP/IP2, which has multi-threading capabilities. TCP/IP2 support is based on FreeBSD 6.1, with IPv6 and improved locking and threading capabilities.

Lock identification algorithm: starts with an atomic instruction that writes back a different value to the lock; otherwise the lock acquisition was unsuccessful. http://www.patentstorm.us/patents/7117481/description.html Cache affinity: http://www.vmware.com/files/pdf/perf-vsphere-cpu_scheduler.pdf Histograms: you can get histograms of latency, I/O block sizes and so on. http://www.anandtech.com/weblog/default.aspx?bcategory=15
  19. The iSCSI protocol encapsulates SCSI commands and data blocks in TCP packets over an IP LAN.
  20. The configuration steps to create an iSCSI datastore are simplified in vSphere storage provisioning. The new iSCSI stack no longer requires a Service Console connection to communicate with an iSCSI target. Bi-directional CHAP authentication improves data access security, allowing the initiator to authenticate the target at the same time the target is authenticating the initiator. It is also possible to override the settings in order to configure unique parameters for each array discovered by the initiator.
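For hosts still managed from the service console, the software initiator can also be enabled and rescanned from the command line. A minimal sketch; the vmhba number varies by host, so vmhba33 here is only an example:

# Enable the software iSCSI initiator
esxcfg-swiscsi -e
# Confirm the software iSCSI adapter is listed
esxcfg-scsidevs -a
# Rescan the adapter after adding discovery targets and CHAP settings in the vSphere Client
esxcfg-rescan vmhba33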
  21. Again, the key here is lower CPU utilization, which has typically been much higher for software iSCSI. Even hardware iSCSI is more efficient with vSphere, but clearly software iSCSI CPU utilization is the most improved. In conclusion, this bodes well for applications with heavier workloads running on iSCSI.
  22. I am frequently asked the question about how to grow a VMware virtual disk (VMDK) and have it be recognized by the operating system.   If you are trying to simply extend a non-system volume within Windows (ie, anything other than the C: drive), then the process is pretty simple (refer to MS KB 325590 ).   But when you are trying to grow a C: with windows, you need to get around the limitation of extending the system partition.  This is just one more instance where VMware shows how powerful and flexible it truly is.
  23. VMFS volumes can now grow (and, in some cases, shrink) online without resorting to spanning to a new LUN. Under vSphere 4, VMFS volumes can grow to take advantage of expanded LUNs (up to 2 TB per LUN). To grow a virtual disk, go into the VM Properties, select the hard disk (virtual disk) and increase the "Provisioned Size" field. Then the guest OS must run diskpart (W2K3), or use the W2K8 command-line utility, to extend the file system. The old method still works as well, and multi-LUN spanned VMFS volumes can grow when any of their LUNs is expanded.
  24. To increase the size of a virtual disk, enter the VM's Properties (right-click the virtual machine icon and select Edit Settings). Select the desired hard disk in the Hardware panel. In the resulting Disk Provisioning panel, enter the new size for the hard disk. After increasing the virtual disk size, the guest operating system disk management tools (diskpart for W2K3, or the extend utility for W2K8) will allow the file system on this disk to use the newly allocated disk space. A system disk cannot be grown on the fly (without using Converter) because the partition holding the running OS cannot be extended from within that OS. The workaround is to create a second VM and add a disk to it (select "Add Existing Disk", not a new disk) pointing to the original C: drive, extend it there as a data disk, then shut down the second VM and reattach the disk to the original VM.
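As an illustration only (the sizes, datastore path and volume number are hypothetical), the same grow operation can be done with vmkfstools on the ESX host, followed by diskpart inside a Windows 2003 guest for a non-system volume:

# Grow the virtual disk to a new total size of 60 GB (run against the .vmdk descriptor, typically with the VM powered off)
vmkfstools -X 60G /vmfs/volumes/datastore1/vm1/vm1.vmdk

# Inside the Windows guest, extend the NTFS volume into the new space
diskpart
  list volume
  select volume 1
  extend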
  25. VI 3.5 Storage VMotion (SVM) enables live migration of virtual machine disks from one datastore to another with no disruption or downtime. Just as VMware VMotion allows IT administrators to minimize service disruption due to planned server downtime, Storage VMotion allows them to minimize disruption by reducing the planned storage downtime previously required for rebalancing or retiring storage arrays. Storage VMotion simplifies array migration and upgrade tasks, and reduces I/O bottlenecks by moving virtual machines while the VM remains up and running. It provides a hot migration of the storage location on which the VM home resides.

Bringing this live migration capability to the 3.5 release of ESX Server required a few tradeoffs. For one, the only fully supported storage source and target are Fibre Channel datastores. In addition, Storage VMotion is not fully integrated within VirtualCenter and can be invoked only through the Remote Command Line option. Options and instructions on how to download the RCLI and invoke Storage VMotion can be found at the following link: http://www.vmware.com/download/download.do?downloadGroup=VI-RCLI

In vSphere, Storage VMotion is now fully supported (it was experimental before) and has a much-improved switchover time. For very I/O-intensive VMs, this improvement can be 100x. Storage VMotion leverages a new and more efficient block-copy mechanism called Changed Block Tracking, reducing CPU and memory resource consumption on the ESX host by up to two times. The mechanism behind this function has changed: 3.x used a snapshot to do Storage VMotion, while in vSphere enhanced Storage VMotion flows as follows:
1. Copy the VM home to the new location
2. Start changed block tracking
3. Pre-copy the disk to the destination (multiple iterations)
4. Copy all remaining disk blocks
5. Fast suspend/resume the VM to start running on the new home and disks
6. Delete the original VM and disks
Furthermore, in vSphere there is support for moving VMDKs from thick to thin formats and for migrating RDMs to VMDKs.
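On VI 3.5 the RCLI invocation looks roughly like the sketch below; treat the exact flags as an assumption to verify against the RCLI documentation (the vCenter host, datacenter and datastore names are made up, and on Windows the command is svmotion.pl):

# Interactive mode prompts for the VM and the destination datastore
svmotion --interactive

# Non-interactive form: move the VM whose config lives on [old_ds] to new_ds
svmotion --url=https://vc01.example.com/sdk --datacenter=DC1 \
         --vm="[old_ds] vm1/vm1.vmx:new_ds"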
  26. VMware Storage VMotion is a state-of-the-art solution that enables you to perform live migration of virtual machine disk files across heterogeneous storage arrays with complete transaction integrity and no interruption in service for critical applications. By implementing VMware Storage VMotion in your virtual infrastructure, you gain the ability to perform proactive storage migrations, simplify array refreshes/retirements, improve virtual machine storage performance, and free up valuable storage capacity in your data center.

Purchasing new storage and arrays and coming out of lease or maintenance contracts have traditionally caused cumbersome, time-consuming and disruptive migrations. Storage VMotion helps you eliminate service disruptions with live, automated migration of virtual machine disk files from existing storage to their new destination. Non-disruptive migration of virtual machine disk files to different classes of storage enables cost-effective management of virtual machine disks based on usage and priority policies as part of a strategy for tiered storage.

Managing storage LUN allocations to support dynamic virtual machine environments can be a very time-consuming process that requires extensive coordination between application owners, virtual machine owners and storage administrators, often resulting in downtime for critical applications. All too often, IT organizations are forced to over-allocate precious storage resources in order to deal with I/O bottlenecks resulting from unusually active virtual machine I/O or poorly configured LUNs. Storage VMotion gives you a better way to optimize storage I/O performance through non-disruptive movement of virtual machine disk files to alternative LUNs that are better architected to deliver the required performance.

Inadequate storage allocation for a virtual machine is likely to create application performance issues, but until now there has been no efficient way to reclaim unused or "stranded" storage capacity. Furthermore, increasing or decreasing storage allocation requires several manual steps, resulting in significant service downtime. Storage VMotion enables efficient storage utilization, avoiding performance problems before they occur by non-disruptively moving virtual machines to larger-capacity storage LUNs as virtual machine disk files approach their total available LUN size limits. Unused storage capacity can be reclaimed and allocated to more deserving virtual machine applications.

Chad Sakac has a great post on his Virtual Geek blog titled The Case For And Against Stretched ESX Clusters. In this post Chad discusses the possibilities of configuring ESX clusters between two different physical data centers, that is, spanning the SAN across a wide area network so that VMs can be VMotioned between sites. The concept is a frequently discussed desire of many administrators, and Chad brings to light some great points for and against this design with specific configuration details about making it work with VMware ESX.
  27. The vertical axis is measuring CPU processing cycles. So the lower the bar the better.
  28. Enable live migration of virtual machine disk files across storage arrays. VMware Storage VMotion lets you relocate virtual machine disk files between and across shared storage locations while maintaining continuous service availability and complete transaction integrity.
- Reduce IT costs and improve flexibility with server consolidation
- Decrease downtime and improve reliability with business continuity and disaster recovery
- Increase energy efficiency by running fewer servers and dynamically powering down unused servers with our green IT solutions
  29. Like thin provisioning, Storage VMotion has been elevated to first-class status, supported just about everywhere you’d want it. It’s in all the likely spots within vCenter. Storage VMotion gives serious storage flexibility now, enabling (almost) any-to-any migration of VMFS volumes: Pick up a Fibre Channel, iSCSI, or NFS disk image and move it to another datastore running any of those protocols to convert live. And you can do thick-to-thin provisioning at the same time. Under the hood, the whole infrastructure has been revised. Storage VMotion leverages VMware’s change block tracking instead of disk snapshots now, speeding up the migration process and reducing the (formerly excessive) memory and CPU requirements of Storage VMotion in 3.5. This is the same technology leveraged by vSphere’s High Availability features, by the way.
  30. VMware Storage VMotion is a state-of-the-art solution that enables you to perform live migration of virtual machine disk files across heterogeneous storage arrays with complete transaction integrity and no interruption in service for critical applications. By implementing VMware Storage VMotion in your virtual infrastructure, you gain the ability to perform proactive storage migrations, simplify array refreshes/retirements, improve virtual machine storage performance, and free up valuable storage capacity in your data center.
  31. RDMs come in two versions: Physical and Virtual Mode. Physical mode means direct access to the raw LUN, which allows you to use snapshotting tools outside of VI3; however, the VMDK itself would not be snapshotable with VI3's own tools, which is why the option is greyed out. Virtual Mode RDM allows you to take snapshots in VirtualCenter, but then you "lose" your direct low-level access to the LUN.
  32. http://www.vmware.com/files/pdf/vsphere_performance_wp.pdf VMware Paravirtualized SCSI (PVSCSI) Emulated versions of hardware storage adapters from BusLogic and LSILogic were the only choices available in earlier ESX releases. The advantage of this full virtualization is that most operating systems ship drivers for these devices. However, this precludes the use of performance optimizations that are possible in virtualized environments. To this end, ESX 4.0 ships with a new virtual storage adapter – Paravirtualized SCSI (PVSCSI). PVSCSI adapters are high-performance storage adapters that offer greater throughput and lower CPU utilization for virtual machines. They are best suited for environments in which guest applications are very I/O intensive. PVSCSI adapter extends to the storage stack performance gains associated with other paravirtual devices such as the network adapter VMXNET available in earlier versions of ESX. As with other device emulations, PVSCSI emulation improves efficiency by: • Reducing the cost of virtual interrupts • Batching the processing of I/O requests • Batching I/O completion interrupts A further optimization, which is specific to virtual environments, reduces the number of context switches between the guest and Virtual Machine Monitor. Efficiency gains from PVSCSI can result in additional 2x CPU savings for Fibre Channel (FC), up to 30 percent CPU savings for iSCSI.
  33. Paravirtualized SCSI is configured at the VM level. It requires supported hardware and works with certain Windows and Red Hat guests. ("Message Signaled Interrupts" use in-band rather than out-of-band PCI memory space, for lower interrupt latency.) http://blogs.sun.com/gnunu/entry/how_does_msi_x_work MSI, Message Signaled Interrupts, uses an in-band PCI memory-space message to raise an interrupt, instead of the conventional out-of-band PCI INTx pin. When a system wants to use MSI, it sets up the MSI PCI capability control registers: simply put, it writes the address register (32-bit or 64-bit) and the data register, and sets the enable bit of the MSI control register. When the device chip wants to send an interrupt, it writes the data in the data register to the address specified in the address register. MSI-X is an extension to MSI for supporting more vectors: MSI can support at most 32 vectors while MSI-X can support up to 2048. Using MSI can lower interrupt latency by giving every kind of interrupt its own vector/handler. When the kernel sees the message, it vectors directly to the interrupt service routine associated with the address/data. The address/data (vector) is allocated by the system, while the driver registers a handler with the vector. By allocating the vector area generally for all kinds of PCI devices, the system reaches a general solution for reporting interrupts quickly.
  34. However, to take advantage of PVSCSI, a VM's virtual disk configuration might need to change. Because VMware does not support PVSCSI on the operating system boot partition, VMs will need to be configured with separate virtual disks (.vmdk) for the boot drive and the data drive(s). Note that all the posts and articles referenced mention that PVSCSI works on a .vmdk containing the boot partition; it's just that VMware officially does not support it. So the challenge for using PVSCSI is to migrate services and applications that exist on VMs containing both the boot partition and the data on a single .vmdk. Although separate boot and data partitions are the de facto standard for physical servers, the convenience of VMs has led to a single-.vmdk configuration in a lot of IT shops. Caveat: PVSCSI does not support boot disks with ESX 4.0, but tests suggest it works fine even though it is not formally supported by VMware.
  35. VMware recommends that you create a primary adapter for use with a disk that will host the system software (boot disk) and a separate PVSCSI adapter for the disk that will store user data, such as a database or mailbox. The primary adapter will be the default for the guest operating system on the virtual machine. For example, for virtual machines with Microsoft Windows 2008 guest operating systems, LSI Logic is the default primary adapter.
  36. Since PVSCSI adapters are not supported for boot devices (they work, just not supported by VMware), you will need to add a second hard drive to use the PVSCSI adapter. When setting up a new virtual environment on vSphere for a client, it wasn't clear where exactly that option is located. It seemed that when adding a second hard drive, it just used the existing SCSI adapter. On the VMware KB site, I found KB article 1010398, which covers the steps to set that up. Below are the details from the VMware KB site. The most important step is #12: you NEED to select a SCSI adapter that starts from SCSI (1:0) through SCSI (3:15). Selecting the next available SCSI interface, e.g. SCSI (0:1), uses the boot volume's SCSI adapter.
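If the new disk is placed on its own controller as recommended, the VM's .vmx file ends up with entries along these lines. This is a sketch only; the disk file name is hypothetical and the entries are normally created by the vSphere Client rather than edited by hand:

scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "vm1_data.vmdk"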
  37. Multipathing in ESX has always been a difficult topic to get to grips with. If you're running I/O-intensive apps in your VMware infrastructure, you want to ensure you spread the I/O load across all fibre paths. In ESX 3.5 this was a difficult thing to achieve, especially with active-passive storage systems. Have a look at the following video, which does a great job of explaining the issues and shows the different behaviours encountered when setting the multipath options at the GUI versus the service console. It looks like things will get a whole lot easier with vSphere if you're using EMC storage. EMC are launching PowerPath/VE, which they claim will manage the multipathing intelligently and remove the complexity. Another good explanation is below. http://www.vmware.com/pdf/vi3_35_25_roundrobin.pdf

Understanding Round-Robin Load Balancing: ESX Server hosts can use multipathing for failover. When one path from the ESX Server host to the SAN becomes unavailable, the host switches to another path. ESX Server hosts can also use multipathing for load balancing. To achieve better load balancing across paths, administrators can specify that the ESX Server host should switch paths under certain circumstances. Different settable options determine when the ESX Server host switches paths and what paths are chosen.
- When to switch – Specify that the ESX Server host should attempt a path switch after a specified number of I/O blocks have been issued on a path or after a specified number of read or write commands have been issued on a path. If another path exists that meets the specified path policy for the target, the active path to the target is switched to the new path. The --custom-max-commands and --custom-max-blocks options specify when to switch.
- Which target to use – Specify that the next path should be on the preferred target, the most recently used target, or any target. The --custom-target-policy option specifies which target to use.
- Which HBA to use – Specify that the next path should be on the preferred HBA, the most recently used HBA, the HBA with the minimum outstanding I/O requests, or any HBA. The --custom-HBA-policy option specifies which HBA to use.
  38. The point of a SAN is not only fast access to data but also high availability. SANs are fault tolerant, meaning there are duplicate components in case something fails. This is known as No-Single-Point-of-Failure architecture. Let’s count the paths from each host to each Logical Device.
  39. VMware NMP is Native Multipathing, which uses the PSA (depending on the storage hardware) for its native load balancing and failover mechanisms. For example, VMware's native path selection policies are Round Robin, Most Recently Used (MRU) and Fixed. Only "Enterprise Plus" licensees will get to use it, but the vSphere family also sports a new pluggable storage architecture (PSA) which will initially be leveraged to deliver vendor-specific multipath support. Note that the native multipath support in vSphere continues to be a basic round-robin or fail-over system; it will not dynamically load balance I/O across multiple paths or make more intelligent decisions about which paths to use.

vSphere 4's Pluggable Storage Architecture allows third-party developers to replace ESX's storage I/O stack (source: VMware). As you may gather from this VMware illustration (but would probably miss since it's not all that comprehensible), there are two classes of third-party plug-ins:
- Basic path-selection plug-ins (PSPs) merely optimize the choice of which path to use, ideal for active/passive type arrays
- Full storage array type plug-ins (SATPs) allow load balancing across multiple paths in addition to path selection for active/active arrays

EMC also announced PowerPath/VE for vSphere, integrating their popular multi-platform path management software directly into ESX. It's not clear at this point whether PowerPath will require an Enterprise Plus license (or if it will come with one) or if it will work with all editions, but I'm sure that will be clarified soon. My EMC contacts do tell me that PowerPath/VE is licensed on a per-socket basis (like VMware of yore) and that EMC sales reps have some room to get creative on licensing.
  40. The Pluggable Storage Architecture (PSA) is a VMkernel layer responsible for managing multiple storage paths. PSA is a collection of VMKernel APIs that allow third-party vendors to insert code directly into the ESX storage I/O path. This code is referred to as multipathing plug-ins (MPPs). PSA allows third-party vendors to design their own load-balancing techniques and failover mechanisms for particular storage array types. PSA allows storage vendors to add support for new arrays into ESX without having to provide internal information or intellectual property about the array to VMware. VMware, by default, provides a generic MPP called Native Multipathing Plug-in (NMP). PSA coordinates the operation of the NMP and any additional third-party MPP. VMware uses PSA for its native load balancing and failover mechanisms. Examples for VMware’s path selection policies (PSPs) are Round Robin, Most Recently Used (MRU) and Fixed. VMware also uses PSA to provide support for a number of storage arrays. In the example above, VMW_SATP_CX is VMware’s Storage Array Type Plugin (SATP) for the CX3-40 storage array. VMW_SATP_LOCAL is VMware’s generic SATP for locally attached storage.
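To see how these pieces map onto a particular host, ESX 4 exposes the NMP configuration through esxcli. A minimal sketch from the console or vCLI; the device identifier is hypothetical:

# List the SATPs and path selection policies loaded on this host
esxcli nmp satp list
esxcli nmp psp list
# Show which SATP and PSP each LUN is currently claimed by
esxcli nmp device list
# Example: switch one device to Round Robin path selection
esxcli nmp device setpolicy --device naa.6006016012345678 --psp VMW_PSP_RR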
  41. If HBA1 fails the SATP plug-in redirects I/O to another available path. The selected path will be one that is available and identified by the PSP plug-in.
  42. http://www.vmguru.nl/wordpress/wp-content/uploads/2009/04/microsoft-powerpoint-vsphere-4-partner-sales-training-technical-ther.pdf VMDirectPath for Virtual Machines — VMDirectPath I/O device access enhances CPU efficiency in handling workloads that require constant and frequent access to I/O devices by allowing virtual machines to directly access the underlying hardware devices. Other virtualization features, such as VMotion™, hardware independence and sharing of physical I/O devices will not be available to the virtual machines using this feature. VMDirectPath I/O for networking I/O devices is fully supported with the Intel 82598 10 Gigabit Ethernet Controller and Broadcom 57710 and 57711 10 Gigabit Ethernet Controller. It is experimentally supported for storage I/O devices with the QLogic QLA25xx 8Gb Fibre Channel, the Emulex LPe12000 8Gb Fibre Channel, and the LSI 3442e-R and 3801e (1068 chip based) 3Gb SAS adapters. http://professionalvmware.com/2009/08/vmdirectpath-paravirtual-scsi-vsphere-vm-options-and-you/
  43. vSphere claims a 3x increase, to “over 300,000 I/O operations per second”, with 400,000 IOPS in some workloads. VMware ESX can now host just about any application with high workload demands. One question is whether these IOPS improvements require the new VMDirectPath I/O for Storage, which binds a physical Fibre Channel HBA to a single guest OS, or if they’re generalized across all systems.
  44. EMC also announced PowerPath/VE for vSphere, integrating their popular multi-platform path management software directly into ESX. It's not clear at this point whether PowerPath will require an Enterprise Plus license (or if it will come with one) or if it will work with all editions, but I'm sure that will be clarified soon. My EMC contacts do tell me that PowerPath/VE is licensed on a per-socket basis (like VMware of yore) and that EMC sales reps have some room to get creative on licensing. CLI-based "rpowermt" commands are supported on 32-bit W2K3 and 64-bit RHEL5 U2 OSes. A GUI interface is also available.
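Once the PowerPath/VE plug-in is installed on the host, path status is checked remotely with the rpowermt CLI. The invocation below is an assumption to verify against EMC's PowerPath/VE documentation, and the host name is hypothetical:

# Display all devices and their paths as seen by PowerPath/VE on one ESX host
rpowermt display dev=all host=esx01.example.com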
  45. Fault Tolerance (FT) is a new feature in vSphere that takes VMware’s High Availability technology to the next level by providing continuous protection for a virtual machine (VM) in case of a host failure. It is based on the Record and Replay technology that was introduced with VMware Workstation that lets you record a VM’s activity and later play it back. The feature works by creating a secondary VM on another ESX host that shares the same virtual disk file as the primary VM and then transferring the CPU and virtual device inputs from the primary VM (record) to the secondary VM (replay) via a FT logging NIC so it is in sync with the primary and ready to take over in case of a failure. While both the primary and secondary VMs receive the same inputs, only the primary VM produces output (such as disk writes and network transmits). The secondary VM’s output is suppressed by the hypervisor and is not on the network until it becomes a primary VM, so essentially both VMs function as a single VM.
  46. FT can be used with any application and any guest operating system; applications running in the guest are completely unaware of FT. This new feature is only included in the Advanced, Enterprise and Enterprise Plus editions of vSphere. It eliminates the need for VMware customers to use Microsoft Cluster Server (MSCS) to provide continuous availability for critical applications. In fact, VMware's documentation states the following as a use case for FT: cases where high availability might be provided through MSCS, but MSCS is too complicated to configure and maintain. While FT is a very useful feature, it does have some limitations and strict usage requirements. On the host side it requires specific, newer processor models from AMD and Intel that support lockstep technology. You might be wondering what lockstep technology is. Simply put, lockstep is a technique used to achieve high reliability in a system by using a second identical processor to monitor and verify the operation of the first processor. Both processors receive the same inputs, so the operation state of both processors is identical, or operating in "lockstep", and the results are checked for discrepancies. If the operations are not identical and a discrepancy is found, the error is flagged and the system performs additional tests to see if a CPU is failing.
  47. For the Fault Tolerance feature to work you need specific processors that support lockstep technology; you can read this KB article to find out which ones have this feature. VMware collaborated with AMD and Intel to provide an efficient VMware FT capability on modern x86 processors. The collaboration required changes in both the performance counter architecture and the virtualization hardware assists of both Intel and AMD. These changes could only be included in recent processors from both vendors: 3rd-generation AMD Opteron based on the AMD Barcelona, Budapest and Shanghai processor families, and Intel Xeon processors based on the Penryn and Nehalem microarchitectures and their successors.
- Intel Xeon based on 45nm Core 2 microarchitecture: 3100 Series, 3300 Series, 5200 Series (DP), 5400 Series, 7400 Series
- Intel Xeon based on Core i7 microarchitecture: 5500 Series
- AMD 3rd Generation Opteron: 1300 Series, 2300 Series (DP), 8300 Series (MP)
This technology is integrated into certain AMD and Intel CPUs and is what the Fault Tolerance feature relies on to sync the CPU operations of a VM between two hosts so they are in identical states (VMware calls it vLockstep). This includes the AMD Barcelona quad-core processors first introduced in September 2007 and the Intel Harpertown family processors first introduced in November 2007. The vSphere Availability Guide references a KB article (#1008027) on compatible processors that will presumably be published when vSphere is GA. More information on compatible processor models can be found at Eric Sloof's NTPRO.NL blog and at Gabrie van Zanten's blog, Gabe's Virtual World. Below are the official requirements from VMware's documentation:
  48. 
- Ensure datastores are not using RDM (Raw Device Mapping) in physical compatibility mode. RDM in virtual compatibility mode is supported.
- Does not support simultaneous Storage VMotion without disabling FT first.
- Does not support NPIV (N-Port ID Virtualization).
- Thick eager-zeroed virtual disks on VMFS3-formatted disks (thin or sparsely allocated disks will be converted to thick when FT is enabled).
- Team at least two NICs on separate physical switches (one for VMotion, one for FT, and one NIC as shared failover for both).
- At least gigabit NICs used (10 Gbit NICs support Jumbo Frames).
- Configure a No-Single-Point-of-Failure environment (multipathing, redundant switches, NIC teaming, etc.).
- Ensure primary and secondary ESX hosts and VMs are in an HA cluster.
- Configure at least 3 ESX hosts in the HA cluster for every single host failure.
- DRS is not supported with FT, since only manual VMotion is supported.
- Enable Host Certificate checking (enabled by default) before adding the ESX host to vCenter Server.
- Primary and secondary hosts must be running the same build of ESX.
- VMs cannot use more than one vCPU (SMP is not supported).
- VMs cannot use NPT/EPT (Nested Page Tables/Extended Page Tables), hot-plug devices or USB.
- No VM snapshots are supported with FT.
- No vStorage API/VCB or VMware Data Recovery backups of FT VMs (these require snapshots, which are not supported with FT).
- VM hardware must be upgraded to v7.
- No support for paravirtualized guest OSes.
- Remove MSCS clustering of VMs before protecting with FT.
  49. Data Recovery provides quick, simple and cost-effective backup and recovery. Participating VMs can run 32-bit or 64-bit guest OSes. VMs can be backed up regardless of their power state. You can select a backup window and retention period; this determines the number of point-in-time copies. Backed-up VMs are deduplicated as they are copied to disk. Configuration video: http://www.virtualpro.co.uk/2009/04/26/vsphere-vmware-data-recovery-demo-video/

What is OVF? Open Virtualization Format. http://www.vmware.com/appliances/learn/ovf.html With the rapid adoption of virtualization, there is a great need for a standard way to package and distribute virtual machines. VMware and other leaders in the virtualization field have created the Open Virtualization Format (OVF), a platform-independent, efficient, extensible, and open packaging and distribution format for virtual machines. OVF enables efficient, flexible, and secure distribution of enterprise software, facilitating the mobility of virtual machines and giving customers vendor and platform independence. Customers can deploy an OVF-formatted virtual machine on the virtualization platform of their choice. With OVF, customers' experience with virtualization is greatly enhanced, with more portability, platform independence, verification, signing, versioning, and licensing terms. OVF lets you:
- Improve your user experience with streamlined installations
- Offer customers virtualization platform independence and flexibility
- Create complex pre-configured multi-tiered services more easily
- Efficiently deliver enterprise software through portable virtual machines
- Offer platform-specific enhancements and easier adoption of advances in virtualization through extensibility
The portability and interoperability inherent in OVF will enable the growth of the virtual appliance market as well as virtualization as a whole.
  50. Data Recovery is an agentless, disk-based backup-and-recovery solution for VMs, designed for SMBs. It provides faster restores than backup solutions that write to tape. It has a simple interface with minimal options, leveraging shared storage disk space as the destination for the VM backups. The Data Recovery tool is a Linux virtual appliance with a management user interface integrated into the vSphere Client as a plug-in. Licensing is based on the number of ESX hosts being backed up, and it writes to disk only, not tape. Multiple restore points for each VM are displayed so that you can select a copy (point in time) from which to restore the VM.
  51. The Virtual Appliance is a pre-configured VM, so it requires no new hardware or configuration. The VM requires VMware tools installed in the Guest OS. There is little overhead using “Changed Block Tracking” to control the backup jobs. Support is available for all types of storage subsystems and protocols. Disks are configured with deduplication to preserve disk space and reduce the complexity of tape libraries and backup software. vCenter is the central management point controlling the wizard-driven configuration. It is supported in HA, VMotion and DRS configurations.
  52. The product will automatically synthesize multiple restore points when a point-in-time restore is performed (file or VM). It consumes ESX host resources for snapshots (compute and storage).
  53. Next evolution of VCB, shipping with vSphere:
- Improved API enables native integration with partner backup applications
- Deployable on Windows and Linux platforms
- Supports all storage architectures
Enhanced functionality:
- Supports incremental, differential and full VM image backup options
- Supports file-level backup and restore
- Supports Windows and Linux guests
Customer benefits:
- Easy backup integration with VI
- Efficient backups
- Easy restore
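For context, the legacy VCB workflow this replaces was driven from a Windows proxy with the vcbMounter command. The line below is a rough sketch; treat the flags as an assumption to check against the VCB documentation, and note that the server name, credentials and paths are made up:

# Export a full VM image to the proxy's backup directory via the VCB framework
vcbMounter -h vc01.example.com -u backupadmin -p secret -a name:vm1 -r D:\mnt\vm1-fullvm -t fullvm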
  54. With the release of VMware vSphere 4, VMware has introduced a fully redesigned licensing experience. The key improvements and enhancements in VMware licensing are as follows:
- License keys are simple 25-character strings instead of complex text files.
- License administration is built directly into VMware vCenter Server. There is no separate license server which must be installed and monitored.
- Customers receive only one single license key for a given vSphere edition (e.g. Advanced). There are no separate license keys for advanced features, such as VMware VMotion.
- The same vSphere license key can be used on many VMware ESX hosts. Each license key encodes a CPU quantity which determines the total number of ESX hosts that can use the license key.
- vSphere no longer uses the VMware Infrastructure 3 license server or licensing model.
  55. Effective April 1, 2008 The VMware Single-Processor Licensing Policy clarifies the licensing of VMware ESX and VMware Infrastructure Suites (VI). Customers may install VMware ESX and VI licenses on single-processor physical hosts that are included on VMware's Hardware Compatibility List . This policy applies to servers with two sockets that are populated with a single-processor. Each processor may contain up to six cores. Please note that licenses for VMware ESX and VI are still sold in minimum increments of two processors. With this policy, VMware is clarifying that a two processor license may now be split and used on two single-processor physical hosts. To install licenses in single-processor increments, you must use the Centralized License Server model for generating and managing VMware license files, as described in the Help page for Create License File – Select Licensing Model. With this licensing model, a single license file holding your licenses is stored on a license server. The license server manages an entire license pool, making licenses available to one or more hosts.
  56. VMware is moving away from the License Server, reverting to a 25-character license key which includes all features. If you upgrade to add something, like DRS, you receive a new replacement license key. The License Server stays in place if you upgrade to vSphere, impacting only ESX 3 hosts; the new licensing scheme impacts only ESX 4 hosts. A 16-CPU/socket license key can be assigned to multiple ESX 4 hosts until the limit is reached. Hardware vendors can bundle licenses differently.
  57. For most customers, your vSphere keys will be automatically delivered and available in the new licensing portal shortly after vSphere becomes generally available. Certain combinations of existing Support and Subscription contracts require additional processing and may result in later delivery of your vSphere keys. If you are a customer affected by a potential delay, VMware will notify you shortly after vSphere becomes generally available and then keep you informed via email on the status of your upgraded license keys. For questions about your licenses, choose one of our Support contact options. Examples of contracts impacted by the above include previous license upgrades (e.g., from VI3 Standard to VI3 Enterprise) and separately purchased feature licenses (e.g., VMware VMotion). As a result of the new licensing model for VMware vSphere, the VI3 licensing portal will display quantities according to the new model. The license quantities displayed will be changed from license counts (where each license represents 2 CPUs) to single-CPU counts. This means most of the numbers you see will be doubled from the numbers displayed in the past. Licenses which are not counted in CPUs (e.g., VMware vCenter Server instances) will not be affected by this change, and their counts will remain the same. Your actual license entitlement does not change.
  58. If you have purchased VMware vSphere 4.0 licenses and wish to convert some or all of them to increase your available VMware Infrastructure 3 licenses, you may downgrade the desired number of the licenses on the vSphere license portal . Once you have completed the downgrade request, you will be able to access your newly generated VI3 license information in the VI3 License Portal . For example, you purchased 20 CPUs of vSphere Advanced and need to run 10 CPUs of VI3. You may perform a downgrade to 10 CPUs of VI3 Standard using the vSphere Licensing Portal. You will then be able to access 10 additional CPUs in the VI3 Licensing Portal. Please note your total entitlement will not change. In other words, you may deploy no more than 20 CPUs in total, but can mix and match between vSphere and VI3. Not all vSphere products are downgradable. Please see the downgrade options table (below) for specific mappings.
  59. Upgrading a VMware Infrastructure 3.x environment to VMware vSphere 4 involves more than just upgrading vCenter Server and upgrading your ESX/ESXi hosts (as if that wasn’t enough). You should also plan on upgrading your virtual machines. VMware vSphere introduces a new hardware version (version 7), and vSphere also introduces a new paravirtualized network driver (VMXNET3) as well as a new paravirtualized SCSI driver (PVSCSI). To take advantage of these new drivers as well as other new features, you’ll need to upgrade your virtual machines. This process I describe below works really well.
  60. VMware has published a vSphere Pre-requisites Checklist PDF that will be a very handy document to have when planning the upgrade of a current VMware Infrastructure. The following is part of the document's introduction and explains its purpose: "This Pre-requisites Checklist is intended for those involved in planning, designing, and upgrading an existing VMware Infrastructure to VMware vSphere. The intended audience includes the roles listed below: Solution Architects responsible for driving architecture-level decisions; Consultants, Partners, and IT personnel who require knowledge for deploying and upgrading the vSphere infrastructure. It is assumed that they have knowledge and familiarity with VMware Infrastructure and have access to the VMware Infrastructure and VMware vSphere product documentation for reference." The document is considered a draft version and will be updated when vSphere is generally available (GA) later this quarter (late May?). Even in its current state, the PDF walks administrators and architects through all possible planning and upgrade scenarios for both vCenter and ESX hosts, and points out a lot of the potential gotchas of a vSphere 4 migration.
  61. Interestingly enough, a Virtualization Pro blog post seems to confirm that a majority of VMware users are waiting at least 6 months to upgrade to vSphere. "Upgrading production servers to vSphere: When and why" analyzes the results of several polls conducted by the author, Eric Siebert. Although the majority of responses do not indicate a decision to wait because of the technical limitations I indicate here, the preference to allow some time before implementation allows for a fully supported data center on vSphere 4 in the near future.
  62. Today, I/O Continuity Group supports vendor-neutral solutions, meaning we focus on methodology and technology, supporting best-of-breed products and fitting all pieces of the puzzle into a well-rounded solution. This is technology as it is, flaws and all, without any particular brand loyalty. If the network design employs a fundamental philosophy of using the best equipment for a particular function regardless of vendor, then change is greatly simplified. But it does mean that you should have a migration plan. A vendor-neutral design philosophy forces you to use open, non-proprietary protocols. Without such a design philosophy, it is often impossible to introduce equipment from a new vendor.