Introduction to vSphere Storage &
VM Management
Day 4
VMware vSphere:
Install, Configure, Manage
Content
• Virtual Storage
• NFS
• iSCSI
• Clone, Template, Snapshot
• vApp
• Content Library
Virtual Storage
Module Lessons
Storage Concepts
iSCSI Storage
NFS Datastores
VMFS Datastores
Virtual SAN Datastores
Virtual Volumes
Storage Concepts
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Describe VMware vSphere® storage technologies and datastores
• Describe the storage device naming convention
Basic Storage Overview
ESXi hosts can use several storage technologies: direct-attached
storage, Fibre Channel, FCoE, iSCSI, and NAS.
On these technologies, two file-system-level datastore types are built:
VMFS and NFS.
Storage Protocol Overview

Storage Protocol | Boot from SAN | vSphere vMotion | vSphere HA | vSphere DRS | Raw Device Mapping
Fibre Channel    | Yes           | Yes             | Yes        | Yes         | Yes
FCoE             | Yes           | Yes             | Yes        | Yes         | Yes
iSCSI            | Yes           | Yes             | Yes        | Yes         | Yes
NFS              | No            | Yes             | Yes        | Yes         | No
DAS              | Yes           | No              | No         | No          | Yes
Virtual Volumes  | No            | Yes             | Yes        | Yes         | No
Virtual SAN      | No            | Yes             | Yes        | Yes         | No
About Datastores
A datastore is a logical storage unit
that can use disk space on one
physical device or span several
physical devices.
Datastores are used to hold virtual
machine files, templates, and ISO
images.
Types of datastores:
• VMFS
• NFS
• Virtual SAN
• Virtual Volumes
About VMFS5
VMFS5:
• Allows concurrent access to
shared storage.
• Can be dynamically expanded.
• Uses a 1 MB block size, good
for storing large virtual disk
files.
• Uses subblock addressing,
good for storing small files: the
subblock size is 8 KB.
• Provides on-disk, block-level
locking.
About NFS
NFS:
• Is storage shared over the
network at the file system
level
• Supports NFS versions 3 and 4.1 over TCP/IP
Virtual SAN Overview
Virtual SAN™ is hypervisor-converged, software-defined storage for
virtual environments.
By clustering host-attached hard disks (HDDs) and/or solid-state drives
(SSDs) across 3 to 64 hosts, Virtual SAN creates an aggregated
datastore shared by virtual machines.
About Virtual Volumes
Virtual Volumes provides data services such as replication, snapshots,
caching, encryption, and deduplication at the virtual machine level,
accessed through a protocol endpoint (PE).
• Native representation of VMDKs on
SAN/NAS: No LUNs or volume management.
• Works with existing SAN/NAS systems.
• A new control path for data operations at the
VM/VMDK level.
• Snapshots, replications, and other operations
at the VM level on external storage.
• Automates control of per-VM service levels.
• Protocol endpoint provides standard protocol
access to storage.
• Storage containers can span an entire array.
About Raw Device Mapping
RDM enables you to
store virtual machine
data directly on a LUN.
A mapping file (-rdm.vmdk), stored on a VMFS datastore, points to the
raw LUN.
By comparison, a regular virtual disk consists of a .vmdk descriptor
and a -flat.vmdk data file on a VMFS or NFS datastore. An RDM consists
of a .vmdk descriptor and a -rdm.vmdk mapping file on VMFS that points
to a raw LUN, which the guest operating system formats with its own
file system (for example, NTFS or ext4).
Storage Device Naming Conventions
Storage devices are identified in several ways:
• Runtime name: Uses the convention vmhbaN:C:T:L. This name is not
persistent through reboots.
• Target: Identifies the iSCSI target address and port.
• LUN: A unique identifier assigned to an individual hard disk device
or to a collection of devices. A logical unit is addressed by the SCSI
protocol or by SAN protocols that encapsulate SCSI, such as iSCSI or
Fibre Channel.
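For reference, these identifiers can be inspected from the ESXi Shell;
a minimal check (device names vary by environment):
• List devices with their identifiers and display names:
– esxcli storage core device list
• List paths, including each path's runtime name (vmhbaN:C:T:L):
– esxcli storage core path list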
Physical Storage Considerations
You should discuss vSphere storage needs with your storage
administration team, including the following items:
• LUN sizes
• I/O bandwidth
• I/O requests per second that a LUN is capable of
• Disk cache parameters
• Zoning and masking
• Identical LUN presentation to each VMware ESXi™ host
• Active-active or active-passive arrays
• Export properties for NFS datastores
Review of Learner Objectives
You should be able to meet the following objectives:
• Describe VMware vSphere® storage technologies and datastores
• Describe the storage device naming convention
iSCSI Storage
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Describe uses of IP storage with ESXi
• Describe iSCSI components and addressing
• Configure iSCSI initiators
iSCSI Components
iSCSI Addressing
iSCSI target name:
iqn.1992-08.com.mycompany:stor1-47cf3c25
or
eui.fedcba9876543210
iSCSI alias: stor1
IP address: 192.168.36.101
iSCSI initiator name:
iqn.1998-01.com.vmware:train1-64ad4c29
or
eui.1234567890abcdef
iSCSI alias: train1
IP address: 192.168.36.88
iSCSI Initiators
Setting Up iSCSI Adapters
You must set up a software or hardware iSCSI adapter before an ESXi
host can work with a SAN.
Supported iSCSI adapter types (vmhba):
• Software adapter
• Hardware adapter:
• Independent hardware adapter
• Dependent hardware adapter
ESXi Network Configuration for IP Storage
A VMkernel port must
be created for ESXi to
access software iSCSI.
The same port can be
used to access
NAS/NFS storage.
To optimize your
vSphere networking
setup, separate iSCSI
networks from
NAS/NFS networks:
• Physical separation is
preferred.
• If physical separation is
not possible, use
VLANs.
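As an illustration, a VMkernel port for iSCSI can also be created from
the ESXi Shell; the vSwitch, port group, and IP values below are
placeholders:
– esxcli network vswitch standard portgroup add -v vSwitch0 -p iSCSI
– esxcli network ip interface add -i vmk1 -p iSCSI
– esxcli network ip interface ipv4 set -i vmk1 -t static
-I 192.168.36.88 -N 255.255.255.0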
Creating Datastores and Discovering iSCSI Targets
Based on the environment and
storage needs, you can create
VMFS, NFS, or virtual
datastores as repositories for
virtual machines.
The iSCSI adapter discovers
storage resources on the
network and determines which
ones are available for access.
An ESXi host supports the
following discovery methods:
• Static
• Dynamic, also called
SendTargets
The SendTargets response returns the IQN and all available IP
addresses (for example, a request to the target at 192.168.36.101:3260
returns its IQN and addresses).
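For example, a dynamic discovery address can be added to a software
iSCSI adapter from the ESXi Shell (the adapter name and target address
are placeholders):
– esxcli iscsi adapter discovery sendtarget add -A vmhba33
-a 192.168.36.101:3260
– esxcli storage core adapter rescan -A vmhba33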
iSCSI Security: CHAP
iSCSI initiators use
CHAP for authentication
purposes.
By default, CHAP is not
configured.
ESXi supports two types
of CHAP authentication:
• Unidirectional
• Bidirectional
ESXi also supports per-target CHAP authentication.
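As a sketch, unidirectional CHAP can be configured on a software iSCSI
adapter as follows; the adapter name, user name, and secret are
placeholders:
– esxcli iscsi adapter auth chap set -A vmhba33 --direction=uni
--level=required --authname=train1 --secret=MySecret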
Multipathing with iSCSI Storage
Software or dependent hardware
iSCSI:
• Use multiple NICs.
• Connect each NIC to a separate
VMkernel port.
• Associate VMkernel ports
with the iSCSI initiator.
Independent Hardware iSCSI:
• Use two or more hardware iSCSI
adapters.
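For software iSCSI, the VMkernel ports are then bound to the iSCSI
adapter, for example (adapter and interface names are placeholders):
– esxcli iscsi networkportal add -A vmhba33 -n vmk1
– esxcli iscsi networkportal add -A vmhba33 -n vmk2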
Review of Learner Objectives
You should be able to meet the following objectives:
• Describe uses of IP storage with ESXi
• Describe iSCSI components and addressing
• Configure iSCSI initiators
NFS Datastores
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Describe NFS components
• Describe the differences between NFS v3 and NFS v4.1
• Configure and manage NFS datastores
NFS Components
An NFS datastore uses the following components:
• A NAS device or a server with storage (for example, 192.168.81.72)
• A directory to share with the ESXi host over the network
• An ESXi host (for example, 192.168.81.33) with a NIC mapped to a
virtual switch
• A VMkernel port defined on the virtual switch
Configuring an NFS Datastore
Create a VMkernel port:
• For better performance and security, separate your NFS network from the
iSCSI network.
Provide the following information:
• NFS version: v3 or v4.1
• Datastore name
• NFS server names or IP addresses
• Folder on the NFS server, for example, /templates or /nfs_share
• Hosts that mount the datastore
• Whether to mount the NFS file system read-only
• Authentication parameters
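An NFS datastore can also be mounted from the ESXi Shell; the server,
share, and volume names below are examples:
– esxcli storage nfs add -H 192.168.81.72 -s /templates -v Templates
For NFS v4.1, the equivalent namespace is used:
– esxcli storage nfs41 add -H 192.168.81.72 -s /templates -v Templates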
NFS v3 and NFS v4.1
NFS v3:
• ESXi managed multipathing
• AUTH_SYS (root) authentication
• VMware proprietary file locking
• Client-side error tracking
NFS v4.1:
• Native multipathing and session
trunking
• Optional Kerberos authentication
• Built-in file locking
• Server-side error tracking
NFS Version Compatibility with Other vSphere Technologies

vSphere Technology                                  | NFS v3 | NFS v4.1
vSphere vMotion and vSphere Storage vMotion         | Yes    | Yes
vSphere HA                                          | Yes    | Yes
vSphere Fault Tolerance                             | Yes    | Yes
vSphere DRS and vSphere DPM                         | Yes    | Yes
Stateless ESXi and Host Profiles                    | Yes    | Yes
vSphere Storage DRS and vSphere Storage I/O Control | Yes    | No
Site Recovery Manager                               | Yes    | No
Virtual Volumes                                     | Yes    | No
NFS Datastore Best Practices
Best practices:
• Configure an NFS array to allow only one NFS protocol.
• Use either NFS v3 or NFS v4.1 to mount the same NFS share across all ESXi
hosts.
• Exercise caution when mounting an NFS share. Mounting an NFS share as
NFS v3 on one ESXi host and as NFS v4.1 on another host can lead to data
corruption.
NFS v3 locking is not compatible with NFS v4.1:
• NFS v3 uses proprietary client-side cooperative locking. NFS v4.1 uses server-
side locking.
NFS Datastore Name and Configuration
Viewing IP Storage Information
You can view the details of the VMFS or NFS datastores that you
created.
Unmounting an NFS Datastore
Unmounting an NFS datastore causes the files on the datastore to
become inaccessible to the ESXi host.
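For example, NFS datastores can be listed and unmounted from the ESXi
Shell (the volume name is a placeholder):
– esxcli storage nfs list
– esxcli storage nfs remove -v Templates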
Multipathing and NFS 4.1 Storage
One recommended configuration
for NFS version 4.1 multipathing:
• Configure one VMkernel port.
• Use adapters attached to the same
physical switch to configure NIC
teaming.
• Configure the NFS server with
multiple IP addresses:
– IP addresses can be on the same
subnet.
• To better utilize multiple links,
configure NIC teams with the IP hash
load-balancing policy.
Enabling Session Trunking and Multipathing
Multiple IP addresses are configured for each NFS v4.1 datastore (for
example, 192.168.0.203 and 192.168.0.204).
Review of Learner Objectives
You should be able to meet the following objectives:
• Describe NFS components
• Describe the differences between NFS v3 and NFS v4.1
• Configure and manage NFS datastores
VMFS Datastores
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Create a VMFS datastore
• Increase the size of a VMFS datastore
• Delete a VMFS datastore
Using VMFS Datastores with ESXi Hosts
Use VMFS datastores whenever possible:
• VMFS is optimized for storing and accessing large files.
• A VMFS datastore can have a maximum volume size of 64 TB.
Use RDMs if the following conditions are true of your virtual machine:
• It is taking storage array-level snapshots.
• It is clustered to a physical machine.
• It has large amounts of data that you do not want to convert into a virtual disk.
Creating and Viewing VMFS Datastores
VMFS datastores serve as repositories for virtual machines.
Using the New Datastore wizard, you can create VMFS datastores on
any SCSI-based storage devices that the host discovers, including Fibre
Channel, iSCSI, and local storage devices.
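For reference, a VMFS5 datastore can also be created from the ESXi
Shell, assuming a partition already exists on the device; the label
and device path are placeholders:
– vmkfstools -C vmfs5 -S MyDatastore /vmfs/devices/disks/<device>:1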
Browsing Datastore Contents
Managing Overcommitted Datastores
A datastore becomes overcommitted when the total provisioned space of
thin-provisioned disks is greater than the size of the datastore.
Actively monitor your datastore capacity:
• Alarms assist through notifications:
– Datastore disk overallocation
– Virtual machine disk usage
• Use reporting to view space usage.
Actively manage your datastore capacity:
• Increase the datastore capacity when necessary.
• Use VMware vSphere® Storage vMotion® to mitigate space usage problems
on a particular datastore.
Increasing the Size of a VMFS Datastore
In general, before making any
changes to your storage
allocation:
• Perform a rescan to ensure that
all hosts see the most current
storage.
• Record the unique identifier.
Increase a VMFS datastore’s
size to give it more space or
possibly to improve
performance.
Ways to dynamically increase
the size of a VMFS datastore:
• Add an extent (LUN).
• Expand the datastore within its
extent.
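As a sketch, after the backing LUN has been expanded on the array, the
datastore can be grown into the larger extent from the ESXi Shell (the
device path is a placeholder):
– esxcli storage core adapter rescan --all
– vmkfstools --growfs /vmfs/devices/disks/<device>:1
/vmfs/devices/disks/<device>:1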
Deleting or Unmounting a VMFS Datastore
An unmounted datastore
remains intact, but can no
longer be seen from the
hosts that you specify. The
datastore continues to
appear on other hosts,
where it remains mounted.
A deleted datastore is
destroyed and disappears
from all hosts that have
access to it. All virtual
machine files on the
datastore are permanently
removed.
Multipathing Algorithms
Arrays provide various
features. Some offer active-
active storage processors.
Others offer active-passive
storage processors.
vSphere offers native path
selection, load-balancing, and
failover mechanisms.
Third-party vendors can
create their own software to
be installed on ESXi hosts.
The third-party software
enables hosts to properly
interact with the storage
arrays.
Configuring Storage Load Balancing
Path selection policies
exist for:
• Scalability:
– Round Robin:
A multipathing
policy that performs
load balancing
across paths
• Availability:
– MRU
– Fixed
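The policy in effect can be viewed and changed per device from the
ESXi Shell (the device identifier is a placeholder):
– esxcli storage nmp device list
– esxcli storage nmp device set -d naa.<id> -P VMW_PSP_RR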
Review of Learner Objectives
You should be able to meet the following objectives:
• Create a VMFS datastore
• Increase the size of a VMFS datastore
• Delete a VMFS datastore
Virtual SAN Datastores
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Explain the purpose of a VMware Virtual SAN™ datastore
• Describe the architecture and requirements of Virtual SAN configuration
• Describe the steps for configuring Virtual SAN
• Explain how to create and use Virtual SAN storage policies
About Virtual SAN
A single, aggregated Virtual SAN datastore is created, using storage
(SSDs and HDs) from multiple hosts and multiple disks in a cluster of
3 to 64 hosts.
Virtual SAN Requirements
• Not every node in a Virtual SAN cluster needs local storage.
• Hosts with no local storage can still use the distributed datastore.
Requirements per host:
• Server on the vSphere HCL
• Network: 1 Gb or 10 Gb NIC
• Controller: SAS/SATA RAID controller that works in passthrough or
HBA mode
• Cache: at least one PCI/SAS/SATA SSD
• Data: at least one PCI/SAS/SATA HD/SSD
Configuring a Virtual SAN Datastore
A Virtual SAN datastore is configured in a few steps.
1. Configure the VMkernel network for Virtual SAN.
2. Enable Virtual SAN on the cluster.
3. Create disk groups (manual or automatic).
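For example, a host's Virtual SAN network tagging and cluster
membership can be checked from the ESXi Shell (the VMkernel interface
name is a placeholder):
– esxcli vsan network ipv4 add -i vmk2
– esxcli vsan network list
– esxcli vsan cluster get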
Disk Groups
Virtual SAN disk groups composed of
flash-based devices and magnetic
disks require:
• One flash device for cache:
– Maximum of one flash device per disk group
• At least one HD/SSD for capacity:
– Up to seven capacity devices per disk group
• Maximum of five disk groups per host
Viewing Cluster Summary
In the VMware vSphere® Web Client, the Summary tab of the Virtual
SAN cluster displays the general Virtual SAN configuration information.
Using Virtual SAN
Capabilities define the capacity, performance, and availability
characteristics of the underlying physical storage. The Virtual SAN
cluster presents these capabilities to vCenter Server, where they can be
consumed by virtual machines.
Requirements outline the needs of a virtual machine.
Virtual machine storage policies specify the virtual machine requirements
so that the virtual machine can be placed appropriately on the Virtual
SAN datastore.
The flow: capabilities are presented from Virtual SAN, VM requirements
are based on those capabilities, and policies are created that contain
the VM requirements.
Objects in Virtual SAN Datastores
In a Virtual SAN datastore, files are grouped into four types of objects:
• Namespaces
• Virtual disks
• Snapshots
• Swap files
Virtual Machine Storage Policies
• Virtual machine storage
policies are built before VM
deployment to reflect the
requirements of the
application running in the
virtual machine.
• The policy is based on the
Virtual SAN capabilities.
• Select the appropriate
policy for the virtual
machine based on its
requirements.
• Storage objects for the
virtual machine are then
created that meet the policy
requirements.
Configuring Virtual Machine Storage Policies
Policies can define mirroring (for availability) and striping (for
performance) of a virtual machine's storage objects.
Viewing a Virtual Machine’s Virtual SAN Datastore
The consumption of Virtual SAN storage is based on the virtual
machine’s storage policy.
The virtual machine’s hard
disk view:
• Summarizes the total storage
size and used storage space
• Displays the virtual machine
storage policy
• Shows the location of disk files
on a Virtual SAN datastore
Disk Management (1)
Disk management in vSphere Web Client:
• Easily map the location of magnetic disks and flash-based devices.
• Mark disks and control disk LEDs.
Disk Management (2)
• Light LED on failures:
– When a solid-state disk (SSD) or a magnetic disk (MD) encounters a permanent
error, Virtual SAN automatically turns the disk LED on.
• Turn disk LED on or off:
– User might need to locate a disk, so Virtual SAN supports manually turning an SSD or
MD LED on or off.
• Marking a disk as SSD:
– Some SSDs might not be recognized as SSDs by ESXi.
– Disks can be tagged or untagged as SSDs for cache.
• Marking a disk as HDD:
– Some SSDs or MDs might not be recognized by ESXi as HDDs.
– Disks can be tagged or untagged as HDDs.
– SSDs must be marked as HDDs in order to be used for capacity.
Adding Disks to a Disk Group
Disk groups can be expanded by adding data disks to a node and adding
these disks to a particular disk group.
The vSphere Web Client shows any unclaimed disk in the disk
maintenance window.
Removing Disks from a Disk Group
Individual disks can be removed from a disk group.
Ensure that data is evacuated before the disk is removed. Alternatively,
you may place the host in maintenance mode.
Virtual SAN Cluster Member Maintenance Mode Options
Before you shut down, reboot, or disconnect a host that is a member of a
Virtual SAN cluster, you must place the host in maintenance mode.
When you place a host in maintenance mode, you can select a specific
evacuation mechanism.
When any member node of a Virtual SAN cluster enters maintenance
mode, the cluster capacity is automatically reduced because the member
node no longer contributes storage to the cluster.
Option               | Action
Ensure Accessibility | Moves enough components to ensure operational
                     | integrity of objects.
Full Data Migration  | All components are evacuated from the host.
No Data Migration    | No action is taken, which can result in
                     | degraded objects.
Removing a Host from a Virtual SAN Cluster
To remove a host that is participating in a Virtual SAN cluster:
1. Place the host in maintenance mode.
2. Delete the disk groups associated with the host.
3. Remove the host from the cluster.
Review of Learner Objectives
You should be able to meet the following objectives:
• Explain the purpose of a Virtual SAN datastore
• Describe the architecture and requirements of Virtual SAN configuration
• Describe the steps for configuring Virtual SAN
• Explain how to create and use Virtual SAN storage policies
Virtual Volumes
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Describe the benefits of software-defined storage
• Describe per-virtual machine storage policy management
• Explain how VMDK data operations are offloaded to storage arrays through the
use of VMware vSphere® API for Storage Awareness™
Next-Generation Storage
Next-generation storage is required to meet certain criteria:
• Lower the cost of storage.
• Reduce manual processes around storage management.
• Handle explosive data growth.
• Respond to new data access and analysis requirements.
Using the Hypervisor to Transform Storage
The hypervisor abstracts and pools SAN/NAS arrays, x86 server disks,
and cloud object storage into SAN/NAS, hypervisor-converged, and
object-based pools (the virtualized data plane); automates
service-level agreements through virtual machine-centric policies (the
policy-based control plane); and provides virtual machine-level data
services such as replication and snapshots (virtual data services).
Why Virtual Volumes
Customers have major concerns about storage.
Storage management is too complex:
• "Setting up storage requires too much time."
• "Data operations are LUN-centric. We want virtual machine-focused
operations."
Cost of ownership is too high:
• "We overprovision storage."
• "Our storage budget keeps going up."
SLAs are too difficult to ensure:
• "SLAs cannot ensure predictable performance."
• "Troubleshooting is very hard."
VMDKs as Native Objects
In the traditional model, VMDKs live inside LUN-backed datastores.
With Virtual Volumes, VMDKs become native objects on the array, and
VMDK data operations such as replication, snapshots, caching,
encryption, and deduplication are offloaded to storage arrays.
Storage Array Requirements
Virtual Volumes requires that the following criteria be met to
function properly:
• The storage array must be compatible with vSphere API for Storage
Awareness (VASA) 2.0.
• The array must implement vSphere API for Storage Awareness to create
the storage provider for virtual volumes, delivered as firmware, a
virtual appliance, or a physical appliance.
• The array must use APIs to handle offloaded data services on the
virtual volumes.
• The array must enable fine-grained capabilities.
• The array must publish a VASA provider that runs on the array
through a URL.
Storage Administration
With Virtual Volumes, there is no need to configure LUNs or NFS
shares:
• Set up a single I/O access point, called a protocol endpoint, to
establish a data path from virtual machines to virtual volumes.
• Set up a logical entity, called a storage container, to group
virtual volumes for easy management.
Protocol Endpoints
The protocol endpoint is set up by the
storage administrator.
The protocol endpoint is part of the physical
storage fabric. It is treated like a LUN.
The protocol endpoint supports typical SCSI
and NFS commands.
Virtual volumes are bound and unbound
to a protocol endpoint: ESXi or VMware
vCenter Server™ initiates
the bind and unbind operation.
Existing multipathing policies and NFS
topology requirements can be applied.
Storage Containers
In vCenter Server, the storage containers are
represented by virtual datastores:
• A storage container is configured by the storage
administrator.
• A storage container is a logical grouping of
virtual volumes.
• A storage container’s capacity is limited only by
the hardware capacity.
• You must set up at least one storage container
per storage system. You can have multiple
storage containers per array.
• You assign capabilities to storage containers.
Using Virtual Volumes
A vendor provider is a storage provider based on vSphere API for
Storage Awareness that allows the array to export its capabilities and
present them to vSphere.
A protocol endpoint is a replacement for the traditional LUN and can be
accessed with typical NFS or SCSI methods.
Virtual Volumes datastores are created on the protocol endpoint:
• Virtual volumes are objects created on the datastore.
1. Register a storage provider in vCenter Server.
2. Discover protocol endpoints (iSCSI, NFS, and so on).
3. Create Virtual Volumes datastores.
Bidirectional Discovery Process
Protocol endpoint:
1. The storage administrator sets up a protocol endpoint.
2. The ESXi host discovers the protocol endpoint during a scan.
3. vSphere API for Storage Awareness is used to bind virtual volumes
to the protocol endpoint.
Storage container:
1. The storage administrator sets up a storage container of defined
capacity and capability.
2. The VASA provider discovers the storage container and reports it to
vCenter Server.
3. Virtual volumes are created in a Virtual Volumes datastore.
Storage-Based Policy Management (1)
Storage-based policy management helps ensure that virtual machines
receive their required performance, capacity, and availability.
Per-virtual machine storage policies capture capacity, performance,
and availability requirements. Policies are set based on application
needs, and the external storage automates control of service levels.
Storage-Based Policy Management (2)
Storage policies represent service levels demanded by virtual machines.
Review of Learner Objectives
You should be able to meet the following objectives:
• Describe the benefits of software-defined storage
• Describe per-virtual machine storage policy management
• Explain how VMDK data operations are offloaded to storage arrays through the
use of VMware vSphere API for Storage Awareness
Key Points
• You use VMFS datastores to hold virtual machine files.
• Shared storage is integral to vSphere features such as vSphere vMotion,
vSphere HA, and vSphere DRS.
• Virtual SAN enables low-end configurations to use vSphere HA, vSphere
vMotion, and vSphere Storage vMotion without requiring external shared
storage.
• Virtual SAN clusters direct-attached server disks to create shared storage
designed for virtual machines.
• Virtual Volumes is a storage management approach that enables
administrators to differentiate virtual machine services per application.
• Key components of the Virtual Volumes functionality include virtual volumes,
VASA providers, storage containers, protocol endpoints, and virtual datastores.
Questions?
Troubleshooting Storage
Storage Connectivity and
Configuration
If a virtual machine cannot access its virtual disks, the cause of the
problem might be anywhere from the virtual machine to physical storage.
The storage stack spans the virtual disk, its backing datastore type
(VMFS or NFS), and the transport (direct attached, FC, FCoE, iSCSI, or
NFS over Ethernet).
Review of vSphere Storage Architecture
VMFS datastores are built on LUNs (direct attached or reached over
FC/Ethernet), Virtual Volumes (VVOL) datastores on storage containers,
and VSAN datastores on a VSAN cluster.
Review of iSCSI Storage
If the VMware ESXi™ host has iSCSI storage connectivity issues, check
the iSCSI configuration on the ESXi host and, if necessary, the iSCSI
hardware configuration.
iSCSI target name:
iqn.1992-08.com.acme:storage1
iSCSI alias: storage1
IP address: 192.168.36.101
iSCSI initiator name:
iqn.1998-01.com.vmware:train1
iSCSI alias: train1
IP address: 192.168.36.88
Storage Problem 1
Problem: IP storage is not reachable by an ESXi host.
Initial checks using the command line look at connectivity on the host:
• Verify that the ESXi host can see the LUN:
– esxcli storage core path list
• Check whether a rescan restores visibility to the LUNs:
– esxcli storage core adapter rescan -A vmhba##
• Check how many datastores exist and how full they are:
– df -h | grep VMFS
Identifying Possible Causes
If the ESXi host accessed IP storage in the past, and no recent changes
were made to the host configuration, you might take a bottom-up
approach to troubleshooting.
Possible causes, from the ESXi host down to the hardware (storage
network and storage array):
• The VMkernel interface for IP storage is misconfigured.
• IP storage is not configured correctly on the ESXi host.
• iSCSI TCP port 3260 is unreachable.
• A firewall is interfering with iSCSI traffic.
• NFS storage is not configured correctly.
• VMFS datastore metadata is inconsistent.
• The iSCSI storage array is not supported.
• The LUN is not presented to the ESXi host.
• The physical hardware is not functioning correctly.
• Poor iSCSI storage performance is observed.
Possible Cause: Hardware-Level Problems
Check the VMware Compatibility Guide to see if the iSCSI HBA or iSCSI
storage array is supported.
Verify that the LUN is presented correctly to the ESXi host:
• The LUN is in the same storage group as all the ESXi hosts.
• The LUN is configured correctly for use with the ESXi host.
• The LUN is not set to read-only on the array.
• The host ID on the array for the ESXi LUN is less than 255.
If the storage device is malfunctioning, use hardware diagnostic tools to
identify the faulty component.
Possible Cause: Poor iSCSI Storage Performance
Adhere to best practices for your IP storage networks:
• Avoid oversubscribing your links.
• Isolate iSCSI traffic from NFS traffic and any other network traffic.
Monitor device latency metrics (device, kernel, and guest average
latencies):
• Use the esxtop or resxtop command: Enter d in the window.
Possible Cause: VMkernel Interface Misconfiguration
A misconfigured VMkernel interface for IP storage affects any IP storage,
whether iSCSI or NFS:
• To test configuration from the ESXi host, ping the iSCSI target IP address:
– For example, ping 172.20.13.14
• 172.20.13.14 is the IP address of the iSCSI target.
• If the ping command fails, ensure that the IP settings are correct.
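On ESXi, vmkping can be used instead of ping to force the test traffic
out through a specific VMkernel interface (the interface name is a
placeholder):
– vmkping -I vmk1 172.20.13.14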
Possible Cause: iSCSI HBA Misconfiguration
The iSCSI initiator might be configured incorrectly on the ESXi host.
Use VMware vSphere® Web Client to check the configured components:
• iSCSI initiator name
• iSCSI target address
and port number
• CHAP
Verify that the VMkernel port bindings are configured properly.
Possible Cause: Port Unreachable
Failure could occur because iSCSI TCP port 3260 is unreachable.
• From the ESXi host, use the nc (netcat) command to reach port 3260 on
the iSCSI storage array.
– nc -z IPaddr 3260
• IPaddr is the IP address of the iSCSI storage array.
Resolve this problem by checking paths between the host and hardware:
• Verify that the iSCSI storage array is configured properly and is active.
• Verify that a firewall is not interfering with iSCSI traffic.
Possible Cause: VMFS Metadata Inconsistency
Verify that your VMware vSphere® VMFS datastore metadata is
consistent:
• Use the vSphere On-disk Metadata Analyzer to check VMFS metadata
consistency:
– voma -m vmfs
-d /vmfs/devices/disks/naa.00000000000000000000000000:1
-s /tmp/analysis.txt
A file system’s metadata must be checked under the following conditions:
• Disk replacement
• Reports of metadata errors in the vmkernel.log file
• Inability to access files on the VMFS volume that are not in use by any other
host
If you encounter VMFS inconsistencies, perform these tasks:
1. Recreate the VMFS datastore and restore files from your last backup to the
VMFS datastore.
2. If necessary, complete a support request.
Possible Cause: NFS Misconfiguration
If your virtual machines reside on NFS datastores, verify that your NFS
configuration is correct.
Verify the following components:
• The VMkernel port is configured with an IP address.
• The directory is shared with the ESXi host over the network.
• The mount permission (Read/Write or Read-Only) and ACLs are correct.
• The ESXi host has a NIC mapped to the virtual switch.
• The NFS server name or IP address is correct.
NFS Version Compatibility with Other vSphere Technologies
vSphere Technology                                  | NFS v3 | NFS v4.1
vSphere vMotion and vSphere Storage vMotion         | Yes    | Yes
vSphere High Availability                           | Yes    | Yes
vSphere Fault Tolerance                             | Yes    | Yes
vSphere DRS and vSphere DPM                         | Yes    | Yes
Stateless ESXi and Host Profiles                    | Yes    | Yes
vSphere Storage DRS and vSphere Storage I/O Control | Yes    | No
Site Recovery Manager                               | Yes    | No
vSphere Virtual Volumes                             | Yes    | No
NFS Dual Stack Not Supported
NFS v3 and v4.1 use different locking semantics:
• NFS v3 uses proprietary client-side cooperative locking.
• NFS v4.1 uses server-side locking.
The best practices are:
• Configure an NFS array to allow only one NFS protocol.
• Use either NFS v3 or NFS v4.1 to mount the same NFS share across
all ESXi hosts.
Data corruption might occur if hosts attempt to access the same NFS
share using different NFS client versions.
Viewing Session Information
You use the esxcli storage nfs41 list command to view the
volume name, IP address, and other information.
Review of Learner Objectives
You should be able to meet the following objectives:
• Discuss vSphere storage architecture
• Identify possible causes of problems in various types of datastores
• Analyze common storage connectivity and configuration problems and discuss
possible causes
• Solve storage connectivity problems, correct misconfigurations, and restore
LUN visibility
Multipathing
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Review multipathing
• Identify common causes of missing paths, including PDL and APD conditions
• Solve missing path problems between hosts and storage devices
Review of iSCSI Multipathing
If your ESXi host has iSCSI multipathing issues, check the multipathing
configuration on the ESXi host and, if necessary, the iSCSI hardware
configuration.
Storage Problem 2
Problem: One or more paths to a LUN are lost.
Initial checks of LUN paths are performed using the esxcli command:
• Find detailed information regarding multiple paths to the LUNs:
– esxcli storage core path list
• List LUN multipathing information:
– esxcli storage nmp device list
• Check whether a rescan restores visibility to the LUNs:
– esxcli storage core adapter rescan -A vmhba##
• Retrieve SMART data about a specified SSD device:
– esxcli storage core device smart get -d device_name
Identifying Possible Causes
If you see errors in /var/log/vmkernel.log that refer to a permanent
device loss (PDL) or all paths down (APD) condition, take a bottom-up
approach to troubleshooting, from the ESXi host down to the hardware
(storage network and storage array).
Possible causes:
• For iSCSI storage, NIC teaming is misconfigured.
• The path selection policy for a storage device is misconfigured.
• A PDL condition has occurred.
• An APD condition has occurred.
PDL Condition
A storage device is in a PDL state when it becomes permanently
unavailable to the ESXi host.
Possible causes of an unplanned PDL:
• The device is unintentionally removed.
• The device’s unique ID changes.
• The device experiences an unrecoverable hardware error.
• The device ran out of space, causing it to become inaccessible.
vSphere Web Client displays pertinent information when a device is in a
PDL state:
• The operational state of the device changes to Lost Communication.
• All paths appear as Dead.
• Datastores on the device are unavailable.
Recovering from an Unplanned PDL
If the LUN was not in use when the PDL condition occurred, the LUN is
removed automatically after the PDL condition clears.
If the LUN was in use, manually detach the device and remove the LUN
from the ESXi host.
When storage reconfiguration is complete, perform these steps:
1. Reattach the storage device.
2. Mount the datastore.
3. Restore from backups if necessary.
4. Restart the virtual machines.
APD Condition
An APD condition occurs when a storage device becomes unavailable to
your ESXi host for an unspecified amount of time:
• This condition is transient. The device is expected to be available again.
An APD condition might have several causes:
• The storage device is removed in an uncontrolled manner from the host.
• The storage device fails:
– The VMkernel cannot detect how long the loss of device access will last.
• Network connectivity fails, which brings down all paths to iSCSI storage.
vSphere Web Client displays pertinent information when an APD
condition occurs:
• The operational state of the device changes to Dead or Error.
• All paths appear as Dead.
• Datastores on the device are unavailable.
Recovering from an APD Condition
The APD condition must be resolved at the storage array or fabric layer
to restore connectivity to the host:
• All affected ESXi hosts might require a reboot.
vSphere vMotion migration of unaffected virtual machines might not
succeed:
• Management agents might be affected by the APD condition.
To avoid APD problems, the ESXi host has a default APD handling
feature:
• Global setting: Misc.APDHandlingEnable
– By default, set to 1, which enables storage APD handling
• Timeout setting: Misc.APDTimeout
– By default, set to 140, the number of seconds that a device can be in APD before
failing
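These advanced options can be viewed or changed from the ESXi Shell
(shown here with their default values):
– esxcli system settings advanced list -o /Misc/APDHandlingEnable
– esxcli system settings advanced set -o /Misc/APDTimeout -i 140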
Possible Cause: NIC Teaming Misconfiguration
Verify that NIC teaming is configured properly.
Possible Cause: Path Selection Policy Misconfiguration
Verify that the path selection policy for a storage device is configured
properly.
Possible Cause: NFSv3 and v4.1 Misconfiguration
Virtual machines on an NFS 4.1 datastore fail after the NFS 4.1 share
recovers from an APD state.
The error message "The lock protecting VM.vmdk has been lost" is
displayed.
This issue occurs because NFSv3 and v4.1 are two different protocols
with different behaviors. After the grace period (array vendor-specific),
the NFS server flushes the client state.
This behavior is expected in NFSv4 servers.
Possible Cause: Fault in APD Handling
When an APD event occurs, LUNs connected to ESXi might remain
inaccessible after paths to the LUNs recover.
The 140-second APD timeout expires even though paths to storage are
recovered.
This issue is due to a fault in APD handling:
• When this issue occurs, a LUN has paths available and is online following an
APD event, but the APD timer continues upcounting until the LUN enters APD
Timeout state.
• After the initial APD event, the datastore is inaccessible as long as active
workloads are associated with the datastore in question.
To solve this problem, upgrade ESXi to version 6.0 Update 1. If you are
unable to upgrade, use one of the workaround options:
• Perform the procedure to kill all outstanding I/O to the LUN.
• Reboot all hosts with volumes in the APD Timeout state.
Virtual Machine Management
Module Lessons
Creating Templates and Clones
Modifying Virtual Machines
Creating Virtual Machine Snapshots
Creating vApps
Working with Content Libraries
Creating Templates and Clones
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Create a template
• Deploy a virtual machine from a template
• Clone a virtual machine
• Enable guest operating system customization by VMware vCenter Server™
Using a Template
A template is a master copy of a virtual machine. It is used to create and
provision new virtual machines.
Creating a Template
Clone the virtual machine to a template:
• The virtual machine can be powered on or powered off.
Convert the virtual machine to a template:
• The virtual machine must be powered off.
Clone a template:
• Used to create a new template based on an existing template.
Deploying a Virtual Machine from a Template
To deploy a virtual machine, you must provide such information as the
virtual machine name, inventory location, host, datastore, and guest
operating system customization data.
Updating a Template
Update a template to include new
patches, make system changes,
and install new applications:
1. Convert the template to a virtual
machine.
2. Place the virtual machine on an
isolated network to prevent user
access.
3. Make appropriate changes to the
virtual machine.
4. Convert the virtual machine to a
template.
Cloning a Virtual Machine
Cloning a virtual machine
creates a virtual machine that
is an exact copy of the original:
• Cloning is an alternative to
deploying a virtual machine.
• The virtual machine being
cloned can be powered on or
powered off.
Customizing the Guest Operating System
Use the Guest Operating System Customization wizard to make virtual
machines created from the same template or clone unique.
Customizing a guest operating system enables you to change:
• Computer name
• Network settings
• License settings
• Windows Security Identifier
During cloning or deploying virtual machines from a template:
• You can create a specification to prepare the guest operating systems of virtual
machines.
• Specifications can be stored in the database.
• You can edit specifications in the Customization Specifications Manager.
• Windows and Linux operating systems are supported.
Review of Learner Objectives
You should be able to meet the following objectives:
• Create a template
• Deploy a virtual machine from a template
• Clone a virtual machine
• Enable guest operating system customization by VMware vCenter Server™
Modifying Virtual
Machines
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Describe virtual machine settings and options
• Add a hot-pluggable device
• Dynamically increase the size of a virtual disk
• Add a raw device mapping (RDM) to a virtual machine
Modifying Virtual Machine Settings
You can modify a virtual
machine’s configuration in its
Edit Settings dialog box:
• Add virtual hardware:
– Some hardware can be added
while the virtual machine is
powered on.
• Remove virtual hardware:
– Some hardware can be removed only when the virtual machine is
powered off.
• Set virtual machine options.
• Control a virtual machine’s
CPU and memory resources.
Hot-Pluggable Devices
Hot-pluggable devices, such as USB controllers, Ethernet adapters, and
hard disk devices, can be added while the virtual machine is running.
With supported guest operating systems, the CPU and memory hot-plug
options also enable you to add CPU and memory resources while the
virtual machine is powered on.
Creating an RDM
An RDM (a -rdm.vmdk file) enables a virtual machine to gain direct
access to a physical LUN.
Encapsulating disk information in the RDM enables the VMkernel to lock
the LUN so that only one virtual machine can write to the LUN.
You must define the following items when creating an RDM:
• Target LUN: LUN that the RDM will map to
• Mapped datastore:
Stores the RDM file
with the virtual
machine or on a
different datastore
• Compatibility mode
• Virtual device node
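For reference, an RDM mapping file can also be created with vmkfstools
(paths are placeholders; -r creates a virtual compatibility mapping,
-z a physical compatibility mapping):
– vmkfstools -r /vmfs/devices/disks/naa.<id>
/vmfs/volumes/datastore1/vm1/vm1-rdm.vmdk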
Dynamically Increasing a Virtual Disk’s Size
You can increase the size of a virtual disk that belongs to a powered-on
virtual machine:
• The virtual disk must be
in persistent mode.
• It must not contain
snapshots.
Dynamically increase a
virtual disk from, for
example, 2 GB to 20 GB.
Increases the size
of the existing
virtual disk file.
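From the command line, vmkfstools can also extend a virtual disk; with
this method the virtual machine must be powered off (the path and new
size are placeholders):
– vmkfstools -X 20G /vmfs/volumes/datastore1/vm1/vm1.vmdk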
Inflating a Thin-Provisioned Disk
Thin-provisioned virtual disks can be converted to a thick,
eager-zeroed format.
To inflate a thin-provisioned disk:
• The virtual machine must be powered off.
• Right-click the virtual machine's .vmdk file and select Inflate.
Or you can use VMware vSphere® Storage vMotion® and select a
thick-provisioned disk as the destination.
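The same conversion can be done with vmkfstools while the virtual
machine is powered off (the path is a placeholder):
– vmkfstools -j /vmfs/volumes/datastore1/vm1/vm1.vmdk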
Virtual Machine Options
On the VM Options tab, you can set or change virtual machine options
to run VMware Tools™ scripts, control user access to the remote
console, configure startup behavior, and more.
Options include:
• General options: VM display name, .vmx configuration file location,
VM directory, and guest operating system type
• VMware Tools options: schedule VMware Tools scripts, customize power
button actions, and check for updates
• Boot options: delay power on, boot into BIOS, and retry after a
failed boot
Troubleshooting a Failed VMware Tools Installation on a
Guest Operating System
Problems:
• VMware Tools installation errors before completion.
• VMware Tools installation fails to complete.
• Unable to complete VMware Tools for Windows or Linux installation.
• VMware Tools hangs when installing or reinstalling.
Solutions:
1. Verify that the guest operating system that you are trying to install is fully
certified by VMware.
2. Verify that the correct operating system is selected.
3. Verify that the ISO image is not corrupted.
4. If installing on a Windows operating system, ensure that you are not
experiencing problems with your Windows registry.
5. If installing on a 64-bit Linux guest operating system, verify that no
dependencies are missing.
Review of Learner Objectives
You should be able to meet the following objectives:
• Describe virtual machine settings and options
• Add a hot-pluggable device
• Dynamically increase the size of a virtual disk
• Add a raw device mapping (RDM) to a virtual machine
Creating Virtual Machine Snapshots
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Take a snapshot of a virtual machine and manage multiple snapshots
• Delete virtual machine snapshots
• Consolidate snapshots
Virtual Machine Snapshots
Snapshots enable you to preserve the state of the virtual machine so that
you can repeatedly return to the same state.
Virtual Machine Snapshot Files
A snapshot consists of a set of files: the memory state file (.vmsn),
the description file (-00000#.vmdk), and the delta file
(-00000#-delta.vmdk).
The snapshot list file (.vmsd) keeps track of the virtual machine's
snapshots.
Taking a Snapshot
You can take a snapshot while a virtual machine is powered on, powered
off, or suspended.
A snapshot captures the state of the virtual machine: memory state,
settings state, and disk state.
Virtual machine snapshots are not recommended as a virtual machine
backup strategy.
When a snapshot is taken, pending transactions are committed to the
.vmdk disk file.
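As an aside, snapshots can also be taken from the ESXi Shell with
vim-cmd; the VM ID comes from the inventory list, and the arguments
(name, description, include-memory flag, quiesce flag) are examples:
– vim-cmd vmsvc/getallvms
– vim-cmd vmsvc/snapshot.create 42 before-patch "pre-update state" 1 0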
Managing Snapshots
The Snapshot Manager enables you
to review all snapshots for the active
virtual machine and act on them
directly.
Actions you can perform:
• Revert to a snapshot.
• Delete one or all snapshots.
Deleting a Virtual Machine Snapshot (1)
If you delete a snapshot one or more levels above You Are Here, the
snapshot state is deleted. The snap01 data is committed into the
previous state (base disk) and the foundation for snap02 is retained.
Before: base disk (5 GB) -> snap01 delta (1 GB) -> snap02 delta (2 GB)
(You are here)
After: base disk (5 GB) + snap01 data -> snap02 delta (2 GB)
(You are here)
Deleting a Virtual Machine Snapshot (2)
If you delete the current snapshot, the changes are committed to its
parent. The snap02 data is committed into snap01 data, and the snap02
-delta.vmdk file is deleted.
Before: base disk (5 GB) -> snap01 delta (1 GB) -> snap02 delta (2 GB)
(You are here)
After: base disk (5 GB) -> snap01 delta (1 GB) + snap02 data
(You are here)
Deleting a Virtual Machine Snapshot (3)
If you delete a snapshot one or more levels below You Are Here,
subsequent snapshots are deleted and you can no longer return to those
states. The snap02 data is deleted.
Before: base disk (5 GB) -> snap01 delta (1 GB) (You are here) ->
snap02 delta (2 GB)
After: base disk (5 GB) -> snap01 delta (1 GB) (You are here)
Deleting All Virtual Machine Snapshots
The delete-all-snapshots mechanism uses storage space efficiently. The
size of the base disk does not increase. Just like a single snapshot
deletion, changed blocks in the snapshot overwrite their counterparts in
the base disk.
Before: base disk (5 GB) -> snap01 delta (1 GB) -> snap02 delta (2 GB)
(You are here)
After: base disk (5 GB) + snap01/02 data (You are here)
About Snapshot Consolidation
Snapshot consolidation is a method to commit a chain of snapshots to
the base disks, when the Snapshot Manager shows that no snapshots
exist, but the delta files still remain on the datastore.
Snapshot consolidation is intended to resolve problems that might occur
with snapshots:
• The snapshot descriptor file is committed correctly, but the Snapshot Manager
incorrectly shows that all the snapshots are deleted.
• The snapshot files (-delta.vmdk) are still part of the virtual machine.
• Snapshot files continue to expand until the virtual machine runs out of
datastore space.
The Snapshot Manager displays no snapshots. However, a warning on
the Monitor > Issues tab of the virtual machine notifies the user that a
consolidation is required.
Discovering When to Consolidate
Performing Snapshot Consolidation
After the snapshot consolidation warning appears, the user can use the
vSphere Web Client to consolidate the snapshots:
• Select Snapshots > Consolidate to reconcile snapshots.
• All snapshot delta disks are committed to the base disks.
Review of Learner Objectives
You should be able to meet the following objectives:
• Take a snapshot of a virtual machine and manage multiple snapshots
• Delete virtual machine snapshots
• Consolidate snapshots
Creating vApps
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Describe a vApp
• Build a vApp
• Use a vApp to manage virtual machines
• Deploy and export a vApp
Managing Virtual Machines with a vApp
A vApp is an object in the vCenter Server inventory:
• A vApp is a container for one or more virtual machines.
• A vApp can be used to package and manage multitiered applications.
vApp Characteristics
You can configure several
vApp settings by right-clicking
the vApp:
• CPU and memory allocation
• IP allocation policy
You can also configure the
virtual machine startup and
shutdown order.
Exporting and Deploying vApps
Exporting the vApp as an OVF
template:
• Share with others.
• Use for archive purposes.
Deploying the OVF template:
• Deploy multitier vApps.
• Deploy OVF from VMware Virtual
Appliance Marketplace.
Review of Learner Objectives
You should be able to meet the following objectives:
• Describe a vApp
• Build a vApp
• Use a vApp to manage virtual machines
• Deploy and export a vApp
Working with Content Libraries
Learner Objectives
By the end of this lesson, you should be able to meet the following
objectives:
• Describe the types of content libraries
• Recognize how to import content into a content library
• Identify how to publish a content library for external use
About the Content Library
A content library is a repository of OVF templates and other files that can
be shared and synchronized across vCenter Server systems.
Benefits of Content Libraries
Benefits include metadata, sharing and consistency, storage
efficiency, and secure subscription.
Types of Content Library
Three types of content library are available:
• Local: a library of content that you control
• Published: a local library that makes content available for
subscription
• Subscribed: a library that syncs with a published library
A subscribed library downloads content in one of two ways:
• Automatic: immediately download all library content.
• On-demand: download library content only when needed. This saves
storage backing space; only metadata is retrieved at first, and
content is downloaded as needed when creating virtual machines or
synchronizing content.
Subscribing to vCloud Director 5.5 Catalogs
You can subscribe a content library in vCenter Server 6 to content
catalogs in VMware vCloud Director® 5.5.
The subscription process is the same as with a published content
library:
• Uses the published subscription URL
• Static user name (always vcsp) and an optional password
Publish and Subscribe
Interactions between the publisher and subscriber can include
connectivity, security, and actionable files such as templates. The
subscriber connects by using the published URL, and the Content
Library Service and Transfer Service on each vCenter Server system
handle the exchange.
Synchronization and Versioning
Synchronization is used to resolve versioning discrepancies between
the publisher and the subscribing content libraries. The Content
Library Service instances communicate over the VMware Content
Subscription Protocol (VCSP), and the Transfer Service instances move
content over HTTP/NFC.
Content Library Requirements and Limitations
• Single storage backing and datastore (64 TB maximum)
• License to scale based on content library usage
• Maximum of 256 library items
• Synchronization occurs once every 24 hours
• Maximum of 5 concurrently synchronized library items for each
subscribed library
Creating a Content Library
You can create a content library in the vSphere Web Client and populate
it with templates to use to deploy virtual machines or vApps in your
virtual environment.
Selecting Storage for the Content Library
You select storage for the content library based on the type of library you
are creating.
Populating Content Libraries with Content
You populate a content library with templates that you can use to
provision new virtual machines.
To add templates to a content library, use one of the following methods:
• Clone a virtual machine to a template in the content library.
• Clone a template from the vSphere inventory or from another content library.
• Clone a vApp.
• Import a template from a URL.
• Import an OVF file from your local file system.
Importing Items into the Content Library
Your source for importing items into a content library can be a file
stored on your local machine or a file stored on a Web server.
Use the import action to bring OVF packages and other file types into
the content library.
Deploying a Virtual Machine to a Content Library
You can clone virtual machines or virtual machine templates to templates
in the content library and use them later to provision virtual machines on
a virtual data center, a data center, a cluster, or a host.
Publishing a Content Library for External Use
You can publish a content library for external use and add password
protection by editing the content library settings:
• Users access the library through the system-generated subscription
URL.
Review of Learner Objectives
You should be able to meet the following objectives:
• Describe the types of content libraries
• Recognize how to import content into a content library
• Identify how to publish a content library for external use
Key Points
• vCenter Server provides features for provisioning virtual machines, such as
templates and cloning.
• By deploying virtual machines from a template, you can create many virtual
machines easily and quickly.
• You can use vSphere vMotion to move virtual machines while they are
powered on.
• You can use vSphere Storage vMotion to move virtual machines from one
datastore to another datastore.
• You can use virtual machine snapshots to preserve the state of the virtual
machine so that you can return to the same state repeatedly.
• A vApp is a container for one or more virtual machines. The vApp can be used
to package and manage related applications.
• Content libraries provide simple and effective management for virtual machine
templates, vApps, and other types of files for vSphere administrators.
Questions?
ProfessionalVMware BrownBag VCP5 Section3: Storage
 
VMWare VSphere4 Documentation Notes
VMWare VSphere4 Documentation NotesVMWare VSphere4 Documentation Notes
VMWare VSphere4 Documentation Notes
 
Guaranteeing Storage Performance by Mike Tutkowski
Guaranteeing Storage Performance by Mike TutkowskiGuaranteeing Storage Performance by Mike Tutkowski
Guaranteeing Storage Performance by Mike Tutkowski
 
file-storage-100.pdf
file-storage-100.pdffile-storage-100.pdf
file-storage-100.pdf
 
V sphere virtual volumes technical overview
V sphere virtual volumes technical overviewV sphere virtual volumes technical overview
V sphere virtual volumes technical overview
 
XenServer Design Workshop
XenServer Design WorkshopXenServer Design Workshop
XenServer Design Workshop
 
VMworld 2016: Virtual Volumes Technical Deep Dive
VMworld 2016: Virtual Volumes Technical Deep DiveVMworld 2016: Virtual Volumes Technical Deep Dive
VMworld 2016: Virtual Volumes Technical Deep Dive
 
VMware vSphere 4.1 deep dive - part 1
VMware vSphere 4.1 deep dive - part 1VMware vSphere 4.1 deep dive - part 1
VMware vSphere 4.1 deep dive - part 1
 
Net app ecmlp2495163
Net app ecmlp2495163Net app ecmlp2495163
Net app ecmlp2495163
 
Rearchitecting Storage for Server Virtualization
Rearchitecting Storage for Server VirtualizationRearchitecting Storage for Server Virtualization
Rearchitecting Storage for Server Virtualization
 
Adrian Stoian - Manage Private and Public Cloud Services with System Center 2...
Adrian Stoian - Manage Private and Public Cloud Services with System Center 2...Adrian Stoian - Manage Private and Public Cloud Services with System Center 2...
Adrian Stoian - Manage Private and Public Cloud Services with System Center 2...
 
VMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep DiveVMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep Dive
 
2017 VMUG Storage Policy Based Management
2017 VMUG Storage Policy Based Management2017 VMUG Storage Policy Based Management
2017 VMUG Storage Policy Based Management
 
VMworld Europe 2014: Virtual SAN Architecture Deep Dive
VMworld Europe 2014: Virtual SAN Architecture Deep DiveVMworld Europe 2014: Virtual SAN Architecture Deep Dive
VMworld Europe 2014: Virtual SAN Architecture Deep Dive
 
Lxp storage iSCSI Best Practice
Lxp storage iSCSI Best PracticeLxp storage iSCSI Best Practice
Lxp storage iSCSI Best Practice
 
SoNAS
SoNASSoNAS
SoNAS
 
Decisions behind hypervisor selection in CloudStack 4.3
Decisions behind hypervisor selection in CloudStack 4.3Decisions behind hypervisor selection in CloudStack 4.3
Decisions behind hypervisor selection in CloudStack 4.3
 
Partner Presentation vSphere6-VSAN-vCloud-vRealize
Partner Presentation vSphere6-VSAN-vCloud-vRealizePartner Presentation vSphere6-VSAN-vCloud-vRealize
Partner Presentation vSphere6-VSAN-vCloud-vRealize
 
Denver VMUG nov 2011
Denver VMUG nov 2011Denver VMUG nov 2011
Denver VMUG nov 2011
 

Último

Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Victor Rentea
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
panagenda
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
WSO2
 

Último (20)

Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
 
Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
Vector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptxVector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptx
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
CNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In PakistanCNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In Pakistan
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
 

VMware Advance Troubleshooting Workshop - Day 4

  • 16. Physical Storage Considerations You should discuss vSphere storage needs with your storage administration team, including the following items: • LUN sizes • I/O bandwidth • I/O requests per second that a LUN is capable of • Disk cache parameters • Zoning and masking • Identical LUN presentation to each VMware ESXi™ host • Active-active or active-passive arrays • Export properties for NFS datastores
  • 17. Review of Learner Objectives You should be able to meet the following objectives: • Describe VMware vSphere® storage technologies and datastores • Describe the storage device naming convention
  • 19. Learner Objectives By the end of this lesson, you should be able to meet the following objectives: • Describe uses of IP storage with ESXi • Describe iSCSI components and addressing • Configure iSCSI initiators
  • 21. iSCSI Addressing iSCSI target name: iqn.1992-08.com.mycompany:stor1-47cf3c25 or eui.fedcba9876543210 iSCSI alias: stor1 IP address: 192.168.36.101 iSCSI initiator name: iqn.1998-01.com.vmware:train1-64ad4c29 or eui.1234567890abcdef iSCSI alias: train1 IP address: 192.168.36.88
  • 23. Setting Up iSCSI Adapters You set up software or hardware adapters before an ESXi host can work with a SAN. Supported iSCSI adapter types (vmhba): • Software adapter • Hardware adapter: • Independent hardware adapter • Dependent hardware adapter
  • 24. ESXi Network Configuration for IP Storage A VMkernel port must be created for ESXi to access software iSCSI. The same port can be used to access NAS/NFS storage. To optimize your vSphere networking setup, separate iSCSI networks from NAS/NFS networks: • Physical separation is preferred. • If physical separation is not possible, use VLANs.
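  As a quick illustration, the VMkernel port for software iSCSI can also be created from the ESXi Shell. This is a minimal sketch; the vSwitch, port group, uplink, and IP values are placeholders, not values from the course:
      esxcli network vswitch standard add --vswitch-name=vSwitch1
      esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
      esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI
      esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
      esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.36.88 --netmask=255.255.255.0 --type=static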
  • 25. Creating Datastores and Discovering iSCSI Targets Based on the environment and storage needs, you can create VMFS, NFS, or virtual datastores as repositories for virtual machines. The iSCSI adapter discovers storage resources on the network and determines which ones are available for access. An ESXi host supports the following discovery methods: • Static • Dynamic, also called SendTargets The SendTargets response returns the IQN and all available IP addresses. iSCSI Target: 192.168.36.101:3260 SendTargets Request SendTargets Response 192.168.36.101:3260
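  For example, enabling the software iSCSI adapter and adding a dynamic (SendTargets) discovery address might look like the following sketch; the adapter name vmhba33 is a placeholder, and the target address reuses the slide's example:
      esxcli iscsi software set --enabled=true                 # enable the software iSCSI adapter
      esxcli iscsi adapter list                                # note the adapter name, e.g. vmhba33
      esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.36.101:3260
      esxcli storage core adapter rescan --adapter=vmhba33     # discover the returned targets and LUNs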
  • 26. iSCSI Security: CHAP iSCSI initiators use CHAP for authentication purposes. By default, CHAP is not configured. ESXi supports two types of CHAP authentication: • Unidirectional • Bidirectional ESXi also supports per-target CHAP authentication.
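  A hedged sketch of requiring unidirectional CHAP on a software iSCSI adapter; the adapter name, CHAP name, and secret are placeholders, and option spellings can be confirmed with --help on your build:
      esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=required --authname=train1 --secret=ExampleSecret1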
  • 27. Multipathing with iSCSI Storage Software or dependent hardware iSCSI: • Use multiple NICs. • Connect each NIC to a separate VMkernel port. • Associate VMkernel ports with the iSCSI initiator. Independent Hardware iSCSI: • Use two or more hardware iSCSI adapters.
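  Port binding for software iSCSI multipathing can be sketched as follows, assuming vmk1 and vmk2 each have exactly one active uplink (a prerequisite for binding); the adapter name is a placeholder:
      esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
      esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
      esxcli iscsi networkportal list --adapter=vmhba33        # verify both VMkernel ports are bound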
  • 28. Review of Learner Objectives You should be able to meet the following objectives: • Describe uses of IP storage with ESXi • Describe iSCSI components and addressing • Configure iSCSI initiators
  • 30. Learner Objectives By the end of this lesson, you should be able to meet the following objectives: • Describe NFS components • Describe the differences between NFS v3 and NFS v4.1 • Configure and manage NFS datastores
  • 31. NFS Components (Diagram): a NAS device or a server with storage, with a directory to share with the ESXi host over the network; a VMkernel port defined on a virtual switch; and an ESXi host with a NIC mapped to that virtual switch (example addresses: 192.168.81.72 and 192.168.81.33).
  • 32. Configuring an NFS Datastore Create a VMkernel port: • For better performance and security, separate your NFS network from the iSCSI network. Provide the following information: • NFS version: v3 or v4.1 • Datastore name • NFS server names or IP addresses • Folder on the NFS server, for example, /templates and /nfs_share • Select hosts that will mount the datastore • Whether to mount the NFS file system read-only • Authentication parameters
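  From the ESXi Shell, mounting NFS v3 and NFS v4.1 datastores might look like this sketch; the volume names are placeholders, while the server addresses and folders reuse examples from the slides:
      esxcli storage nfs add --host=192.168.81.72 --share=/templates --volume-name=Templates
      esxcli storage nfs41 add --hosts=192.168.0.203,192.168.0.204 --share=/nfs_share --volume-name=NFS41-Share   # multiple server addresses enable session trunking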
  • 33. NFS v3 and NFS v4.1 NFS v3: • ESXi managed multipathing • AUTH_SYS (root) authentication • VMware proprietary file locking • Client-side error tracking NFS v4.1: • Native multipathing and session trunking • Optional Kerberos authentication • Built-in file locking • Server-side error tracking
  • 34. NFS Version Compatibility with Other vSphere Technologies (NFS v3 / NFS v4.1):
      – vSphere vMotion and vSphere Storage vMotion: Yes / Yes
      – vSphere HA: Yes / Yes
      – vSphere Fault Tolerance: Yes / Yes
      – vSphere DRS and vSphere DPM: Yes / Yes
      – Stateless ESXi and Host Profiles: Yes / Yes
      – vSphere Storage DRS and vSphere Storage I/O Control: Yes / No
      – Site Recovery Manager: Yes / No
      – Virtual Volumes: Yes / No
  • 35. NFS Datastore Best Practices Best practices: • Configure an NFS array to allow only one NFS protocol. • Use either NFS v3 or NFS v4.1 to mount the same NFS share across all ESXi hosts. • Exercise caution when mounting an NFS share. Mounting an NFS share as NFS v3 on one ESXi host and as NFS v4.1 on another host can lead to data corruption. NFS v3 locking is not compatible with NFS v4.1: • NFS v3 uses proprietary client-side cooperative locking. NFS v4.1 uses server-side locking.
  • 36. NFS Datastore Name and Configuration
  • 37. Viewing IP Storage Information You can view the details of the VMFS or NFS datastores that you created.
  • 38. Unmounting an NFS Datastore Unmounting an NFS datastore causes the files on the datastore to become inaccessible to the ESXi host.
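  A minimal CLI sketch of the same operation, assuming no virtual machines on the datastore are in use; the volume name is a placeholder:
      esxcli storage nfs list                            # confirm the volume name and mount state
      esxcli storage nfs remove --volume-name=Templates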
  • 39. Multipathing and NFS 4.1 Storage One recommended configuration for NFS version 4.1 multipathing: • Configure one VMkernel port. • Use adapters attached to the same physical switch to configure NIC teaming. • Configure the NFS server with multiple IP addresses: – IP addresses can be on the same subnet. • To better utilize multiple links, configure NIC teams with the IP hash load-balancing policy. (Diagram: ESXi host with vmnic0 and vmnic1 teamed to the same physical switch.)
  • 40. Enabling Session Trunking and Multipathing Multiple IP addresses are configured for each NFS v4.1 datastore (for example, 192.168.0.203 and 192.168.0.204).
  • 41. Review of Learner Objectives You should be able to meet the following objectives: • Describe NFS components • Describe the differences between NFS v3 and NFS v4.1 • Configure and manage NFS datastores
  • 43. Learner Objectives By the end of this lesson, you should be able to meet the following objectives: • Create a VMFS datastore • Increase the size of a VMFS datastore • Delete a VMFS datastore
  • 44. Using VMFS Datastores with ESXi Hosts Use VMFS datastores whenever possible: • VMFS is optimized for storing and accessing large files. • A VMFS datastore can have a maximum volume size of 64 TB. Use RDMs if the following conditions are true of your virtual machine: • It is taking storage array-level snapshots. • It is clustered to a physical machine. • It has large amounts of data that you do not want to convert into a virtual disk.
  • 45. Creating and Viewing VMFS Datastores VMFS datastores serve as repositories for virtual machines. Using the New Datastore wizard, you can create VMFS datastores on any SCSI-based storage devices that the host discovers, including Fibre Channel, iSCSI, and local storage devices.
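  To verify the result from the ESXi Shell, the following commands list datastores and the devices backing them; datastore1 is a placeholder name:
      esxcli storage filesystem list           # mounted file systems, their types, and capacity
      esxcli storage vmfs extent list          # the device and partition backing each VMFS datastore
      vmkfstools -P /vmfs/volumes/datastore1   # capacity, free space, and extent details for one datastore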
  • 47. Managing Overcommitted Datastores A datastore becomes overcommitted when the total provisioned space of thin-provisioned disks is greater than the size of the datastore. Actively monitor your datastore capacity: • Alarms assist through notifications: – Datastore disk overallocation – Virtual machine disk usage • Use reporting to view space usage. Actively manage your datastore capacity: • Increase the datastore capacity when necessary. • Use VMware vSphere® Storage vMotion® to mitigate space usage problems on a particular datastore.
  • 48. Increasing the Size of a VMFS Datastore In general, before making any changes to your storage allocation: • Perform a rescan to ensure that all hosts see the most current storage. • Record the unique identifier. Increase a VMFS datastore’s size to give it more space or possibly to improve performance. Ways to dynamically increase the size of a VMFS datastore: • Add an extent (LUN). • Expand the datastore within its extent.
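  After the storage team grows the underlying LUN, the expansion can also be driven from the CLI. This is a sketch only, assuming the VMFS partition has already been grown to use the new space; the device ID is a placeholder:
      esxcli storage core adapter rescan --all
      vmkfstools --growfs /vmfs/devices/disks/naa.xxxxxxxx:1 /vmfs/devices/disks/naa.xxxxxxxx:1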
  • 49. Deleting or Unmounting a VMFS Datastore An unmounted datastore remains intact, but can no longer be seen from the hosts that you specify. The datastore continues to appear on other hosts, where it remains mounted. A deleted datastore is destroyed and disappears from all hosts that have access to it. All virtual machine files on the datastore are permanently removed.
  • 50. Multipathing Algorithms Arrays provide various features. Some offer active-active storage processors. Others offer active-passive storage processors. vSphere offers native path selection, load-balancing, and failover mechanisms. Third-party vendors can create their own software to be installed on ESXi hosts. The third-party software enables hosts to properly interact with the storage arrays. (Diagram: ESXi hosts connected through switches to storage processors SP A and SP B on the storage array.)
  • 51. Configuring Storage Load Balancing Path selection policies exist for: • Scalability: – Round Robin: A multipathing policy that performs load balancing across paths • Availability: – MRU – Fixed
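  For example, the policy on one device might be inspected and switched to Round Robin as follows; the device ID is a placeholder:
      esxcli storage nmp device list --device=naa.xxxxxxxx                   # shows the current PSP and path states
      esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR   # VMW_PSP_MRU and VMW_PSP_FIXED are the availability policies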
  • 52. Review of Learner Objectives You should be able to meet the following objectives: • Create a VMFS datastore • Increase the size of a VMFS datastore • Delete a VMFS datastore
  • 54. Learner Objectives By the end of this lesson, you should be able to meet the following objectives: • Explain the purpose of a VMware Virtual SAN™ datastore • Describe the architecture and requirements of Virtual SAN configuration • Describe the steps for configuring Virtual SAN • Explain how to create and use Virtual SAN storage policies
  • 55. About Virtual SAN A single Virtual SAN datastore is created, using storage from multiple hosts and multiple disks in the cluster. (Diagram: a 3-64 host vSphere cluster whose local HDs, flash devices, and SSDs are aggregated into one Virtual SAN datastore.)
  • 56. Virtual SAN Requirements • Not every node in a Virtual SAN cluster needs local storage. • Hosts with no local storage can still use the distributed datastore. Per contributing host: a server on the vSphere HCL; a 1 Gb or 10 Gb NIC for the Virtual SAN network; a SAS/SATA RAID controller that works in passthrough or HBA mode; and at least one PCI/SAS/SATA SSD (cache) plus at least one PCI/SAS/SATA HD/SSD (data).
  • 57. Configuring a Virtual SAN Datastore A Virtual SAN datastore is configured in a few steps (see the command sketch after this slide): 1. Configure the VMkernel network for Virtual SAN. 2. Enable Virtual SAN on the cluster. 3. Create disk groups (manual or automatic).
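  The same steps have ESXi Shell equivalents, sketched below with placeholder interface names, UUID, and device IDs; in practice the vSphere Web Client workflow is the usual path:
      esxcli vsan network ipv4 add -i vmk2            # tag a VMkernel port for Virtual SAN traffic
      esxcli vsan cluster join -u <uuid>              # join an existing cluster (UUID placeholder)
      esxcli vsan storage add -s naa.aaa -d naa.bbb   # create a disk group from one SSD and one data disk
      esxcli vsan cluster get                         # confirm cluster membership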
  • 58. Disk Groups Virtual SAN disk groups composed of flash-based devices and magnetic disks require: • One flash device: – Maximum of one flash device per disk group • One HD/SSD: – Supports up to seven devices per disk group • Maximum of five disk groups per host Disk Groups
  • 59. Viewing Cluster Summary In the VMware vSphere® Web Client, the Summary tab of the Virtual SAN cluster displays the general Virtual SAN configuration information.
  • 60. Using Virtual SAN Capabilities define the capacity, performance, and availability characteristics of the underlying physical storage. The Virtual SAN cluster presents these capabilities to vCenter Server, where they can be consumed by virtual machines. Requirements outline the needs of a virtual machine. Virtual machine storage policies specify the virtual machine requirements so that the virtual machine can be placed appropriately on the Virtual SAN datastore. Capabilities presented from Virtual SAN. VM requirements based on capabilities. Create policies that contain VM requirements.
  • 61. Objects in Virtual SAN Datastores In a Virtual SAN datastore, files are grouped into four types of objects: • Namespaces • Virtual disks • Snapshots • Swap files
  • 62. Virtual Machine Storage Policies • Virtual machine storage policies are built before VM deployment to reflect the requirements of the application running in the virtual machine. • The policy is based on the Virtual SAN capabilities (capacity, availability, performance). • Select the appropriate policy for the virtual machine based on its requirements. • Storage objects for the virtual machine are then created that meet the policy requirements.
  • 63. Configuring Virtual Machine Storage Policies (Screenshot: policy rules controlling mirroring and striping of a storage object.)
  • 64. Viewing a Virtual Machine’s Virtual SAN Datastore The consumption of Virtual SAN storage is based on the virtual machine’s storage policy. The virtual machine’s hard disk view: • Summarizes the total storage size and used storage space • Displays the virtual machine storage policy • Shows the location of disk files on a Virtual SAN datastore
  • 65. Disk Management (1) Disk management in vSphere Web Client: • Easily map the location of magnetic disks and flash-based devices. • Mark disks and control disk LEDs.
  • 66. Disk Management (2) • Light LED on failures: – When a solid-state disk (SSD) or a magnetic disk (MD) encounters a permanent error, Virtual SAN automatically turns the disk LED on. • Turn disk LED on or off: – User might need to locate a disk, so Virtual SAN supports manually turning an SSD or MD LED on or off. • Marking a disk as SSD: – Some SSDs might not be recognized as SSDs by ESXi. – Disks can be tagged or untagged as SSDs for cache. • Marking a disk as HDD: – Some SSDs or MDs might not be recognized by ESXi as HDDs. – Disks can be tagged or untagged as HDDs. – SSDs must be marked as HDDs in order to be used for capacity.
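  Tagging a device as SSD when ESXi does not detect it can be sketched with a SATP claim rule; the device ID is a placeholder:
      esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.xxxxxxxx --option=enable_ssd
      esxcli storage core claiming reclaim -d naa.xxxxxxxx     # reclaim the device so the rule takes effect
      esxcli storage core device list --device=naa.xxxxxxxx    # "Is SSD: true" confirms the tag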
  • 67. Adding Disks to a Disk Group Disk groups can be expanded by adding data disks to a node and adding these disks to a particular disk group. The vSphere Web Client shows any unclaimed disk in the disk maintenance window.
  • 68. Removing Disks from a Disk Group Individual disks can be removed from a disk group. Ensure that data is evacuated before the disk is removed. Alternatively, you may place the host in maintenance mode.
  • 69. Virtual SAN Cluster Member Maintenance Mode Options Before you shut down, reboot, or disconnect a host that is a member of a Virtual SAN cluster, you must place the host in maintenance mode. When you place a host in maintenance mode, you can select a specific evacuation mechanism. When any member node of a Virtual SAN cluster enters maintenance mode, the cluster capacity is automatically reduced because the member node no longer contributes storage to the cluster. Options: – Ensure Accessibility: moves enough components to ensure operational integrity of objects. – Full Data Migration: all components are evacuated from the host. – No Data Migration: no action is taken, which can result in degraded objects.
  • 70. Removing a Host from a Virtual SAN Cluster To remove a host that is participating in a Virtual SAN cluster: 1. Place the host in maintenance mode. 2. Delete the disk groups associated with the host. 3. Remove the host from the cluster.
  • 71. Review of Learner Objectives You should be able to meet the following objectives: • Explain the purpose of a Virtual SAN datastore • Describe the architecture and requirements of Virtual SAN configuration • Describe the steps for configuring Virtual SAN • Explain how to create and use Virtual SAN storage policies
  • 73. Learner Objectives By the end of this lesson, you should be able to meet the following objectives: • Describe the benefits of software-defined storage • Describe per-virtual machine storage policy management • Explain how VMDK data operations are offloaded to storage arrays through the use of VMware vSphere® API for Storage Awareness™
  • 74. Next-Generation Storage Next-generation storage is required to meet certain criteria across compute, management, network/security, and storage/availability: lower the cost of storage, reduce manual processes around storage management, handle explosive data growth, and respond to new data access and analysis requirements.
  • 75. Using the Hypervisor to Transform Storage (Diagram): abstract and pool SAN/NAS arrays, x86 server disks, and cloud object storage into a virtualized data plane; automate service-level agreements through virtual machine-centric policies (policy-based control plane); and deliver virtual machine-level data services such as vSphere Replication and snapshots (virtual data services).
  • 76. Why Virtual Volumes Customers have major concerns about storage. Storage management is too complex: “Setting up storage requires too much time.” “Data operations are LUN-centric. We want virtual machine-focused operations.” Cost of ownership is too high: “We overprovision storage.” “Our storage budget keeps going up.” SLAs are too difficult to ensure: “SLAs cannot ensure predictable performance.” “Troubleshooting is very hard.”
  • 77. Virtual Volumes (Diagram): in the traditional model, VMDKs live inside LUNs or NFS mount points; with Virtual Volumes, VMDKs become native objects on the array, and VMDK data operations such as replication, snapshots, caching, encryption, and deduplication are offloaded to the storage array.
  • 78. Storage Array Requirements Virtual volumes require that the following criteria be met to function properly: • A storage array compatible with vSphere API for Storage Awareness 2.0. • Must implement vSphere API for Storage Awareness to create the storage provider for virtual volumes: – Firmware – Virtual appliance – Physical appliance • Use APIs to handle offloaded data services on the virtual volumes. • Enable fine capabilities. • Publish a VASA provider that runs on the array through a URL.
  • 79. Storage Administration There is no need to configure LUNs or NFS shares. Set up a single I/O access point, called a protocol endpoint, to establish a data path from virtual machines to virtual volumes. Set up a logical entity, called a storage container, to group virtual volumes for easy management.
  • 80. Protocol Endpoints The protocol endpoint is set up by the storage administrator. The protocol endpoint is part of the physical storage fabric. It is treated like a LUN. The protocol endpoint supports typical SCSI and NFS commands. Virtual volumes are bound to and unbound from a protocol endpoint: ESXi or VMware vCenter Server™ initiates the bind and unbind operation. Existing multipathing policies and NFS topology requirements can be applied.
  • 81. Storage Containers In vCenter Server, the storage containers are represented by virtual datastores: • A storage container is configured by the storage administrator. • A storage container is a logical grouping of virtual volumes. • A storage container’s capacity is limited only by the hardware capacity. • You must set up at least one storage container per storage system. You can have multiple storage containers per array. • You assign capabilities to storage containers.
  • 82. Using Virtual Volumes A vendor provider is a storage provider based on vSphere API for Storage Awareness that allows the array to export its capabilities and present them to vSphere. A protocol endpoint is a replacement for the traditional LUN and can be accessed with typical NFS or SCSI methods. Virtual Volumes datastores are created on the protocol endpoint: • Virtual volumes are objects created on the datastore. Register a storage provider in vCenter Server. Discover protocol endpoints (iSCSI, NFS, and so on). Create Virtual Volumes datastores.
  • 83. Bidirectional Discovery Process Protocol endpoint: the storage administrator sets up a protocol endpoint; the ESXi host discovers the protocol endpoint during a scan; vSphere API for Storage Awareness is used to bind virtual volumes to the protocol endpoint. Storage container: the storage administrator sets up a storage container of defined capacity and capability; the VASA provider discovers the storage container and reports it to vCenter Server; virtual volumes are created in a Virtual Volumes datastore.
  • 84. Storage-Based Policy Management (1) Storage-based policy management helps ensure that virtual machines receive their required performance, capacity, and availability. Per-virtual machine storage policies are set based on application needs, and the external storage automates control of those service levels.
  • 85. Storage-Based Policy Management (2) Storage policies represent service levels demanded by virtual machines.
  • 86. Review of Learner Objectives You should be able to meet the following objectives: • Describe the benefits of software-defined storage • Describe per-virtual machine storage policy management • Explain how VMDK data operations are offloaded to storage arrays through the use of VMware vSphere API for Storage Awareness
  • 87. Key Points • You use VMFS datastores to hold virtual machine files. • Shared storage is integral to vSphere features such as vSphere vMotion, vSphere HA, and vSphere DRS. • Virtual SAN enables low-end configurations to use vSphere HA, vSphere vMotion, and vSphere Storage vMotion without requiring external shared storage. • Virtual SAN clusters direct-attached server disks to create shared storage designed for virtual machines. • Virtual Volumes is a storage management approach that enables administrators to differentiate virtual machine services per application. • Key components of the Virtual Volumes functionality include virtual volumes, VASA providers, storage containers, protocol endpoints, and virtual datastores. Questions?
  • 90. Review of vSphere Storage Architecture If a virtual machine cannot access its virtual disks, the cause of the problem might be anywhere from the virtual machine to physical storage. (Diagram: datastore types with their transports and backings — VMFS over FC, FCoE, iSCSI, or direct attached storage, backed by LUNs; NFS over Ethernet, backed by a file system; VSAN over direct attached or FC/Ethernet, backed by the VSAN cluster; VVOL backed by a storage container.)
  • 91. Review of iSCSI Storage If the VMware ESXi™ host has iSCSI storage connectivity issues, check the iSCSI configuration on the ESXi host and, if necessary, the iSCSI hardware configuration. iSCSI target name: iqn.1992-08.com.acme:storage1 iSCSI alias: storage1 IP address: 192.168.36.101 iSCSI initiator name: iqn.1998-01.com.vmware:train1 iSCSI alias: train1 IP address: 192.168.36.88
  • 92. Storage Problem 1 IP storage is not reachable by an ESXi host. Initial checks using the command line look at connectivity on the host: • Verify that the ESXi host can see the LUN: – esxcli storage core path list • Check whether a rescan restores visibility to the LUNs: – esxcli storage core adapter rescan -A vmhba## • Check how many datastores exist and how full they are: – df -h | grep VMFS
  • 93. Identifying Possible Causes If the ESXi host accessed IP storage in the past, and no recent changes were made to the host configuration, you might take a bottom-up approach to troubleshooting. Possible causes, from the ESXi host down to the hardware (storage network, storage array): the VMkernel interface for IP storage is misconfigured; IP storage is not configured correctly on the ESXi host; iSCSI TCP port 3260 is unreachable; a firewall is interfering with iSCSI traffic; NFS storage is not configured correctly; VMFS datastore metadata is inconsistent; the iSCSI storage array is not supported; the LUN is not presented to the ESXi host; the physical hardware is not functioning correctly; poor iSCSI storage performance is observed.
  • 94. Possible Cause: Hardware-Level Problems Check the VMware Compatibility Guide to see if the iSCSI HBA or iSCSI storage array is supported. Verify that the LUN is presented correctly to the ESXi host: • The LUN is in the same storage group as all the ESXi hosts. • The LUN is configured correctly for use with the ESXi host. • The LUN is not set to read-only on the array. • The host ID on the array for the ESXi LUN is less than 255. If the storage device is malfunctioning, use hardware diagnostic tools to identify the faulty component.
  • 95. Possible Cause: Poor iSCSI Storage Performance Adhere to best practices for your IP storage networks: • Avoid oversubscribing your links. • Isolate iSCSI traffic from NFS traffic and any other network traffic. Monitor device latency metrics: • Use the esxtop or resxtop command: enter d in the window and watch the device, kernel, and guest average latency columns (DAVG, KAVG, and GAVG).
  • 96. Possible Cause: VMkernel Interface Misconfiguration A misconfigured VMkernel interface for IP storage affects any IP storage, whether iSCSI or NFS: • To test configuration from the ESXi host, ping the iSCSI target IP address: – For example, ping 172.20.13.14 • 172.20.13.14 is the IP address of the iSCSI target. • If the ping command fails, ensure that the IP settings are correct.
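  On ESXi, vmkping is the usual tool because it sources the ping from a VMkernel interface; vmk1 is a placeholder, and the target address reuses the slide's example:
      vmkping 172.20.13.14                      # basic reachability through the VMkernel TCP/IP stack
      vmkping -I vmk1 172.20.13.14              # force a specific VMkernel interface
      vmkping -I vmk1 -d -s 8972 172.20.13.14   # do-not-fragment test for a 9000-byte MTU path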
  • 97. Possible Cause: iSCSI HBA Misconfiguration The iSCSI initiator might be configured incorrectly on the ESXi host. Use VMware vSphere® Web Client to check the configured components: • iSCSI initiator name • iSCSI target address and port number • CHAP Verify that the VMkernel port bindings are configured properly.
  • 98. Possible Cause: Port Unreachable Failure could occur because iSCSI TCP port 3260 is unreachable. • From the ESXi host, use the nc (netcat) command to reach port 3260 on the iSCSI storage array: – nc -z IPaddr 3260 • IPaddr is the IP address of the iSCSI storage array. Resolve this problem by checking paths between the host and hardware: • Verify that the iSCSI storage array is configured properly and is active. • Verify that a firewall is not interfering with iSCSI traffic.
  • 99. Possible Cause: VMFS Metadata Inconsistency Verify that your VMware vSphere® VMFS datastore metadata is consistent: • Use the vSphere On-disk Metadata Analyzer to check VMFS metadata consistency: – voma -m vmfs -d /vmfs/devices/disks/naa.00000000000000000000000000:1 -s /tmp/analysis.txt A file system’s metadata must be checked under the following conditions: • Disk replacement • Reports of metadata errors in the vmkernel.log file • Inability to access files on the VMFS volume that are not in use by any other host If you encounter VMFS inconsistencies, perform these tasks: 1. Recreate the VMFS datastore and restore files from your last backup to the VMFS datastore. 2. If necessary, complete a support request.
  • 100. Possible Cause: NFS Misconfiguration If your virtual machines reside on NFS datastores, verify that your NFS configuration is correct: the VMkernel port is configured with an IP address; the NFS server name or IP address is correct; the directory is shared with the ESXi host over the network; mount permissions (read/write or read-only) and ACLs are correct; and the ESXi host has a NIC mapped to the virtual switch.
  • 101. NFS Version Compatibility with Other vSphere Technologies vSphere Technologies NFS v3 NFS v4.1 VMware vSphere® vMotion®/VMware vSphere® Storage vMotion® Yes Yes VMware vSphere® High Availability Yes Yes VMware vSphere® Fault Tolerance Yes Yes VMware vSphere® Distributed Resource Scheduler™/VMware vSphere® Distributed Power Management™ Yes Yes Stateless ESXi/Host Profiles Yes Yes VMware vSphere® Storage DRS™/VMware vSphere® Storage I/O Control Yes No VMware Site Recovery Manager™ Yes No VMware vSphere® Virtual Volumes™ Yes No
  • 102. NFS Dual Stack Not Supported NFS v3 and v4.1 use different locking semantics: • NFS v3 uses proprietary client-side cooperative locking. • NFS v4.1 uses server-side locking. The best practices are: • Configure an NFS array to allow only one NFS protocol. • Use either NFS v3 or NFS v4.1 to mount the same NFS share across all ESXi hosts. Data corruption might occur if hosts attempt to access the same NFS share using different NFS client versions.
  • 103. Viewing Session Information You use the esxcli storage nfs41 list command to view the volume name, IP address, and other information.
  • 104. Review of Learner Objectives You should be able to meet the following objectives: • Discuss vSphere storage architecture • Identify possible causes of problems in various types of datastores • Analyze common storage connectivity and configuration problems and discuss possible causes • Solve storage connectivity problems, correct misconfigurations, and restore LUN visibility
  • 106. Learner Objectives By the end of this lesson, you should be able to meet the following objectives: • Review multipathing • Identify common causes of missing paths, including PDL and APD conditions • Solve missing path problems between hosts and storage devices
  • 107. Review of iSCSI Multipathing If your ESXi host has iSCSI multipathing issues, check the multipathing configuration on the ESXi host and, if necessary, the iSCSI hardware configuration.
  • 108. Storage Problem 2 One or more paths to a LUN are lost. Initial checks of LUN paths are performed using the esxcli command: • Find detailed information regarding multiple paths to the LUNs: – esxcli storage core path list • List LUN multipathing information: – esxcli storage nmp device list • Check whether a rescan restores visibility to the LUNs: – esxcli storage core adapter rescan -A vmhba## • Retrieve SMART data about a specified SSD device: – esxcli storage core device smart get -d device_name
  • 109. Identifying Possible Causes If you see errors in /var/log/vmkernel.log that refer to a permanent device loss (PDL) or all paths down (APD) condition, take a bottom-up approach to troubleshooting. Possible causes, from the ESXi host down to the hardware (storage network, storage array): for iSCSI storage, NIC teaming is misconfigured; the path selection policy for a storage device is misconfigured; a PDL condition has occurred; an APD condition has occurred.
  • 110. PDL Condition A storage device is in a PDL state when it becomes permanently unavailable to the ESXi host. Possible causes of an unplanned PDL: • The device is unintentionally removed. • The device’s unique ID changes. • The device experiences an unrecoverable hardware error. • The device ran out of space, causing it to become inaccessible. vSphere Web Client displays pertinent information when a device is in a PDL state: • The operational state of the device changes to Lost Communication. • All paths appear as Dead. • Datastores on the device are unavailable.
  • 111. Recovering from an Unplanned PDL If the LUN was not in use when the PDL condition occurred, the LUN is removed automatically after the PDL condition clears. If the LUN was in use, manually detach the device and remove the LUN from the ESXi host. When storage reconfiguration is complete, perform these steps: 1. Reattach the storage device. 2. Mount the datastore. 3. Restore from backups if necessary. 4. Restart the virtual machines.
  • 112. APD Condition An APD condition occurs when a storage device becomes unavailable to your ESXi host for an unspecified amount of time: • This condition is transient. The device is expected to be available again. An APD condition can have several causes: • The storage device is removed in an uncontrolled manner from the host. • The storage device fails: – The VMkernel cannot detect how long the loss of device access will last. • Network connectivity fails, which brings down all paths to iSCSI storage. vSphere Web Client displays pertinent information when an APD condition occurs: • The operational state of the device changes to Dead or Error. • All paths appear as Dead. • Datastores on the device are unavailable.
  • 113. Recovering from an APD Condition The APD condition must be resolved at the storage array or fabric layer to restore connectivity to the host: • All affected ESXi hosts might require a reboot. vSphere vMotion migration of unaffected virtual machines cannot be attempted: • Management agents might be affected by the APD condition. To avoid APD problems, the ESXi host has a default APD handling feature: • Global setting: Misc.APDHandlingEnable – By default, set to 1, which enables storage APD handling • Timeout setting: Misc.APDTimeout – By default, set to 140, the number of seconds that a device can be in APD before failing
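  These advanced options can be read and set from the ESXi Shell; the 180-second value below is purely illustrative, not a recommendation from the course:
      esxcli system settings advanced list --option=/Misc/APDHandlingEnable
      esxcli system settings advanced list --option=/Misc/APDTimeout
      esxcli system settings advanced set --option=/Misc/APDTimeout --int-value=180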
  • 114. Possible Cause: NIC Teaming Misconfiguration Verify that NIC teaming is configured properly.
  • 115. Possible Cause: Path Selection Policy Misconfiguration Verify that the path selection policy for a storage device is configured properly.
  • 116. Possible Cause: NFSv3 and v4.1 Misconfiguration Virtual machines on an NFS 4.1 datastore fail after the NFS 4.1 share recovers from an APD state, and the error message “The lock protecting VM.vmdk has been lost” is displayed. This issue occurs because NFSv3 and v4.1 are two different protocols with different behaviors. After the grace period (array vendor-specific), the NFS server flushes the client state. This behavior is expected in NFSv4 servers.
  • 117. Possible Cause: Fault in APD Handling When an APD event occurs, LUNs connected to ESXi might remain inaccessible after paths to the LUNs recover. The 140-second APD timeout expires even though paths to storage have recovered. This issue is due to a fault in APD handling: • When this issue occurs, a LUN has paths available and is online following an APD event, but the APD timer keeps counting up until the LUN enters the APD Timeout state. • After the initial APD event, the datastore is inaccessible as long as active workloads are associated with the datastore in question. To solve this problem, upgrade ESXi to version 6.0 Update 1. If you are unable to upgrade, use one of the workaround options: • Perform the procedure to kill all outstanding I/O to the LUN. • Reboot all hosts with volumes in the APD Timeout state.
  • 119. Module Lessons Creating Templates and Clones Modifying Virtual Machines Creating Virtual Machine Snapshots Creating vApps Working with Content Libraries
  • 121. Learner Objectives By the end of this lesson, you should be able to meet the following objectives: • Create a template • Deploy a virtual machine from a template • Clone a virtual machine • Enable guest operating system customization by VMware vCenter Server™
  • 122. Using a Template A template is a master copy of a virtual machine. It is used to create and provision new virtual machines.
  • 123. Creating a Template Clone the virtual machine to a template: • The virtual machine can be powered on or powered off. Convert the virtual machine to a template: • The virtual machine must be powered off. Clone a template: • Used to create a new template based on one that existed previously.
  • 124. Deploying a Virtual Machine from a Template To deploy a virtual machine, you must provide such information as the virtual machine name, inventory location, host, datastore, and guest operating system customization data.
  • 125. Updating a Template Update a template to include new patches, make system changes, and install new applications: 1. Convert the template to a virtual machine. 2. Place the virtual machine on an isolated network to prevent user access. 3. Make appropriate changes to the virtual machine. 4. Convert the virtual machine to a template.
  • 126. Cloning a Virtual Machine Cloning a virtual machine creates a virtual machine that is an exact copy of the original: • Cloning is an alternative to deploying a virtual machine. • The virtual machine being cloned can be powered on or powered off.
  • 127. Customizing the Guest Operating System Use the Guest Operating System Customization wizard to make virtual machines created from the same template or clone unique. Customizing a guest operating system enables you to change: • Computer name • Network settings • License settings • Windows Security Identifier During cloning or deploying virtual machines from a template: • You can create a specification to prepare the guest operating systems of virtual machines. • Specifications can be stored in the database. • You can edit specifications in the Customization Specifications Manager. • Windows and Linux operating systems are supported.
  • 128. Review of Learner Objectives You should be able to meet the following objectives: • Create a template • Deploy a virtual machine from a template • Clone a virtual machine • Enable guest operating system customization by VMware vCenter Server™
  • 130. Learner Objectives By the end of this lesson, you should be able to meet the following objectives: • Describe virtual machine settings and options • Add a hot-pluggable device • Dynamically increase the size of a virtual disk • Add a raw device mapping (RDM) to a virtual machine
  • 131. Modifying Virtual Machine Settings You can modify a virtual machine’s configuration in its Edit Settings dialog box: • Add virtual hardware: – Some hardware can be added while the virtual machine is powered on. • Remove virtual hardware: – Some hardware can be removed only when the virtual machine is powered off • Set virtual machine options. • Control a virtual machine’s CPU and memory resources.
  • 132. Hot-Pluggable Devices The CPU hot-plug option enables you to add CPU resources to a running virtual machine: • Examples of hot-pluggable devices: USB controllers, Ethernet adapters, and hard disk devices. With supported guest operating systems, you can also add CPU and memory while the virtual machine is powered on.
  • 133. Creating an RDM An RDM (a -rdm.vmdk file) enables a virtual machine to gain direct access to a physical LUN. Encapsulating disk information in the RDM enables the VMkernel to lock the LUN so that only one virtual machine can write to the LUN. You must define the following items when creating an RDM: • Target LUN: LUN that the RDM will map to • Mapped datastore: Stores the RDM file with the virtual machine or on a different datastore • Compatibility mode • Virtual device node
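  The mapping file can also be created from the ESXi Shell with vmkfstools; the device ID, datastore, and file names are placeholders, and the mapping file must live on a VMFS datastore:
      vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxx /vmfs/volumes/datastore1/myvm/myvm-rdmp.vmdk   # physical compatibility mode
      vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxx /vmfs/volumes/datastore1/myvm/myvm-rdm.vmdk    # virtual compatibility mode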
  • 134. Dynamically Increasing a Virtual Disk’s Size You can increase the size of a virtual disk that belongs to a powered-on virtual machine: • The virtual disk must be in persistent mode. • It must not contain snapshots. Dynamically increase a virtual disk from, for example, 2 GB to 20 GB. Increases the size of the existing virtual disk file.
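  The same growth can be done with vmkfstools for a powered-off virtual machine; the path and size are placeholders, and the file system inside the guest OS must still be extended afterward:
      vmkfstools -X 20g /vmfs/volumes/datastore1/myvm/myvm.vmdk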
  • 135. Inflating a Thin-Provisioned Disk Thin-provisioned virtual disks can be converted to a thick, eager-zeroed format. To inflate a thin-provisioned disk: • The virtual machine must be powered off. • Right-click the virtual machine’s .vmdk file and select Inflate. Or you can use VMware vSphere® Storage vMotion® and select a thick-provisioned disk as the destination.
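  The CLI equivalent is vmkfstools --inflatedisk (short form -j), pointed at the disk's descriptor file; the path is a placeholder:
      vmkfstools --inflatedisk /vmfs/volumes/datastore1/myvm/myvm.vmdk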
  • 136. Virtual Machine Options On the VM Options tab, you can set or change virtual machine options such as the VM display name, the guest operating system type, the location of the VM directory and .vmx file, running VMware Tools™ scripts, controlling user access to the remote console, and configuring startup behavior.
  • 137. VMware Tools Options Schedule VMware Tools scripts, customize power button actions, and configure update checks.
  • 138. Boot Options Delay power on. Boot into BIOS. Retry after failed boot.
  • 139. Troubleshooting a Failed VMware Tools Installation on a Guest Operating System Problems: • VMware Tools installation errors out before completion. • VMware Tools installation fails to complete. • Unable to complete VMware Tools for Windows or Linux installation. • VMware Tools hangs when installing or reinstalling. Solutions: 1. Verify that the guest operating system that you are trying to install is fully certified by VMware. 2. Verify that the correct operating system is selected. 3. Verify that the ISO image is not corrupted. 4. If installing on a Windows operating system, ensure that you are not experiencing problems with your Windows registry. 5. If installing on a 64-bit Linux guest operating system, verify that no dependencies are missing.
  • 140. Review of Learner Objectives You should be able to meet the following objectives: • Describe virtual machine settings and options • Add a hot-pluggable device • Dynamically increase the size of a virtual disk • Add a raw device mapping (RDM) to a virtual machine
  • 142. Learner Objectives By the end of this lesson, you should be able to meet the following objectives: • Take a snapshot of a virtual machine and manage multiple snapshots • Delete virtual machine snapshots • Consolidate snapshots
  • 143. Virtual Machine Snapshots Snapshots enable you to preserve the state of the virtual machine so that you can repeatedly return to the same state.
  • 144. Virtual Machine Snapshot Files A snapshot consists of a set of files: the memory state file (.vmsn), the description file (-00000#.vmdk), and the delta file (-00000#-delta.vmdk). The snapshot list file (.vmsd) keeps track of the virtual machine’s snapshots.
  • 145. Taking a Snapshot You can take a snapshot while a virtual machine is powered on, powered off, or suspended. A snapshot captures the state of the virtual machine: memory state, settings state, and disk state (pending transactions are committed to disk). Virtual machine snapshots are not recommended as a virtual machine backup strategy.
  • 146. Managing Snapshots The Snapshot Manager enables you to review all snapshots for the active virtual machine and act on them directly. Actions you can perform: • Revert to a snapshot. • Delete one or all snapshots.
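  Snapshots can also be driven from the ESXi Shell with vim-cmd; the VM id 42 is a placeholder found via getallvms, and the two trailing booleans (include memory, quiesce) are shown as 0 0:
      vim-cmd vmsvc/getallvms                                    # find the VM id
      vim-cmd vmsvc/snapshot.create 42 pre-patch "before patching" 0 0
      vim-cmd vmsvc/snapshot.get 42                              # display the snapshot tree
      vim-cmd vmsvc/snapshot.removeall 42                        # delete all snapshots, committing the deltas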
  • 147. Deleting a Virtual Machine Snapshot (1) If you delete a snapshot one or more levels above You Are Here, the snapshot state is deleted. The snap01 data is committed into the previous state (base disk) and the foundation for snap02 is retained. base disk (5GB) snap01 delta (1GB) base disk (5GB) + snap01 data snap02 delta (2GB) You are here.
• 148. Deleting a Virtual Machine Snapshot (2)
If you delete the current snapshot, the changes are committed to its parent. The snap02 data is committed into the snap01 data, and the snap02 -delta.vmdk file is deleted.
(Diagram: base disk (5 GB) → snap01 delta (1 GB) → snap02 delta (2 GB, You Are Here) becomes base disk (5 GB) → snap01 delta (1 GB) + snap02 delta (2 GB).)
• 149. Deleting a Virtual Machine Snapshot (3)
If you delete a snapshot one or more levels below You Are Here, subsequent snapshots are deleted and you can no longer return to those states. The snap02 data is deleted.
(Diagram: base disk (5 GB) → snap01 delta (1 GB, You Are Here) → snap02 delta (2 GB); snap02 is discarded.)
• 150. Deleting All Virtual Machine Snapshots
The delete-all-snapshots mechanism uses storage space efficiently. The size of the base disk does not increase. Just as with a single snapshot deletion, changed blocks in the snapshots overwrite their counterparts in the base disk. A scripted sketch follows.
(Diagram: base disk (5 GB) → snap01 delta (1 GB) → snap02 delta (2 GB, You Are Here) becomes base disk (5 GB) + snap01/02 data.)
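A minimal pyVmomi sketch covering both deletion cases from these slides, assuming an already-retrieved `vm` object; the snapshot name "snap01" echoes the slide example:

```python
# Minimal pyVmomi sketch: delete one snapshot, or all snapshots, of a VM.
def find_snapshot(snap_list, name):
    """Walk the snapshot tree (vim.vm.SnapshotTree nodes) to find a snapshot."""
    for node in snap_list:
        if node.name == name:
            return node.snapshot
        found = find_snapshot(node.childSnapshotList, name)
        if found:
            return found
    return None

if vm.snapshot:  # vm.snapshot is None when the VM has no snapshots
    snap = find_snapshot(vm.snapshot.rootSnapshotList, "snap01")
    if snap:
        # Commits this snapshot's data to its parent, as slide 148 describes.
        snap.RemoveSnapshot_Task(removeChildren=False)
    # The delete-all mechanism from this slide:
    vm.RemoveAllSnapshots_Task()
```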
• 151. About Snapshot Consolidation
Snapshot consolidation is a method to commit a chain of snapshots to the base disks when the Snapshot Manager shows that no snapshots exist but the delta files still remain on the datastore.
Snapshot consolidation is intended to resolve problems that might occur with snapshots:
• The snapshot descriptor file is committed correctly, but the Snapshot Manager incorrectly shows that all the snapshots are deleted.
• The snapshot files (-delta.vmdk) are still part of the virtual machine.
• Snapshot files continue to expand until the virtual machine runs out of datastore space.
• 152. Discovering When to Consolidate
The Snapshot Manager displays no snapshots. However, a warning on the Monitor > Issues tab of the virtual machine notifies the user that a consolidation is required.
• 153. Performing Snapshot Consolidation
After the snapshot consolidation warning appears, the user can use the vSphere Web Client to consolidate the snapshots (see the sketch below):
• Select Snapshots > Consolidate to reconcile the snapshots.
• All snapshot delta disks are committed to the base disks.
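A minimal pyVmomi sketch of the same check-and-consolidate flow, assuming an already-retrieved `vm` object:

```python
# Minimal pyVmomi sketch: detect and resolve the consolidation condition.
# vm.runtime.consolidationNeeded mirrors the warning shown on the
# Monitor > Issues tab of the virtual machine.
if vm.runtime.consolidationNeeded:
    task = vm.ConsolidateVMDisks_Task()  # commits orphaned -delta.vmdk files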
  • 154. Review of Learner Objectives You should be able to meet the following objectives: • Take a snapshot of a virtual machine and manage multiple snapshots • Delete virtual machine snapshots • Consolidate snapshots
  • 156. Learner Objectives By the end of this lesson, you should be able to meet the following objectives: • Describe a vApp • Build a vApp • Use a vApp to manage virtual machines • Deploy and export a vApp
  • 157. Managing Virtual Machines with a vApp A vApp is an object in the vCenter Server inventory: • A vApp is a container for one or more virtual machines. • A vApp can be used to package and manage multitiered applications.
• 158. vApp Characteristics
You can configure several vApp settings by right-clicking the vApp:
• CPU and memory allocation
• IP allocation policy
You can also configure the virtual machine startup and shutdown order. A power-operation sketch follows.
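The configured startup and shutdown order is honored by vApp-level power operations. A minimal pyVmomi sketch, assuming an already-retrieved `vim.VirtualApp` object named `vapp` (an assumption, not shown on the slide):

```python
# Minimal pyVmomi sketch: power a vApp on and off as a unit.
# vApp power operations start and stop member VMs in the configured order.
on_task = vapp.PowerOnVApp_Task()
# ... later ...
off_task = vapp.PowerOffVApp_Task(force=False)  # graceful stop, reverse order
```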
• 159. Exporting and Deploying vApps
Exporting the vApp as an OVF template:
• Share it with others.
• Use it for archive purposes.
Deploying the OVF template:
• Deploy multitier vApps.
• Deploy an OVF template from the VMware Virtual Appliance Marketplace.
  • 160. Review of Learner Objectives You should be able to meet the following objectives: • Describe a vApp • Build a vApp • Use a vApp to manage virtual machines • Deploy and export a vApp
  • 162. Learner Objectives By the end of this lesson, you should be able to meet the following objectives: • Describe the types of content libraries • Recognize how to import content into a content library • Identify how to publish a content library for external use
  • 163. About the Content Library A content library is a repository of OVF templates and other files that can be shared and synchronized across vCenter Server systems.
  • 164. Benefits of Content Libraries Metadata Sharing and Consistency Storage Efficiency Secure Subscription
• 165. Types of Content Library
Three types of content library are available:
• Local: a library of content that you control.
• Published: a local library that makes its content available for subscription.
• Subscribed: a library that synchronizes with a published library.
A subscribed library can synchronize its content in one of two ways:
• Automatic (library content): immediately downloads all library content.
• On-demand (metadata only): downloads library content only when it is needed. This saves storage backing space; only metadata is retrieved, and content is downloaded as needed when creating virtual machines or synchronizing content.
• 166. Subscribing to vCloud Director 5.5 Catalogs
You can subscribe a content library to VMware vCloud Director® 5.5 content catalogs. The subscription process is the same as with a published content library:
• Uses the published URL.
• Uses a static user name (always vcsp) and a password.
(Diagram: vCenter Server 6 subscribing to content catalogs in vCloud Director 5.5.)
• 167. Publish and Subscribe
Interactions between the publisher and subscriber can include connectivity, security, and actionable files.
(Diagram: two vCenter Server systems; the subscriber uses the subscription URL and an optional password to subscribe; templates and other files move through the Content Library Service and Transfer Service on each side.)
• 168. Synchronization and Versioning
Synchronization is used to resolve versioning discrepancies between the publisher and the subscribing content libraries.
(Diagram: the Content Library Service and Transfer Service on each vCenter Server communicate over the VMware Content Subscription Protocol (VCSP) and HTTP/NFC.)
• 169. Content Library Requirements and Limitations
• Single storage backing and datastore (64 TB maximum).
• License to scale based on content library usage.
• Maximum of 256 library items.
• Synchronization occurs once every 24 hours.
• Maximum of 5 concurrently synchronized library items for each subscribed library.
• 170. Creating a Content Library
You can create a content library in the vSphere Web Client and populate it with templates that you can use to deploy virtual machines or vApps in your virtual environment. A scripted sketch follows.
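The content library is managed through the vSphere Automation API rather than pyVmomi. A hedged sketch of creating a local library over the REST endpoints, assuming a vCenter version that exposes them; the vCenter address, credentials, and datastore ID are all hypothetical:

```python
# Hedged sketch: create a local content library over the vSphere Automation
# REST API. Endpoint paths and the datastore ID are assumptions, not taken
# from this course material.
import requests

VC = "https://vcenter.example.com"   # hypothetical vCenter address
s = requests.Session()
s.verify = False                     # lab only; use valid certificates in production
s.post(f"{VC}/rest/com/vmware/cis/session", auth=("user", "pass"))

spec = {"create_spec": {
    "name": "TemplateLibrary",
    "type": "LOCAL",
    "storage_backings": [{"type": "DATASTORE",
                          "datastore_id": "datastore-101"}]}}  # hypothetical ID
r = s.post(f"{VC}/rest/com/vmware/content/local-library", json=spec)
print(r.json())                      # the response carries the new library's ID
```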
  • 171. Selecting Storage for the Content Library You select storage for the content library based on the type of library you are creating.
• 172. Populating Content Libraries with Content
You populate a content library with templates that you can use to provision new virtual machines. To add templates to a content library, use one of the following methods:
• Clone a virtual machine to a template in the content library.
• Clone a template from the vSphere inventory or from another content library.
• Clone a vApp.
• Import a template from a URL.
• Import an OVF file from your local file system.
• 173. Importing Items into the Content Library
Your source for importing items into a content library can be a file stored on your local machine or a file stored on a web server. Click the import icon to import OVF packages and other file types into the content library.
  • 174. Deploying a Virtual Machine to a Content Library You can clone virtual machines or virtual machine templates to templates in the content library and use them later to provision virtual machines on a virtual data center, a data center, a cluster, or a host.
  • 175. Publishing a Content Library for External Use You can publish a content library for external use and add password protection by editing the content library settings: • Users access the library through the subscription URL that is system generated.
  • 176. Review of Learner Objectives You should be able to meet the following objectives: • Describe the types of content libraries • Recognize how to import content into a content library • Identify how to publish a content library for external use
  • 177. Key Points • vCenter Server provides features for provisioning virtual machines, such as templates and cloning. • By deploying virtual machines from a template, you can create many virtual machines easily and quickly. • You can use vSphere vMotion to move virtual machines while they are powered on. • You can use vSphere Storage vMotion to move virtual machines from one datastore to another datastore. • You can use virtual machine snapshots to preserve the state of the virtual machine so that you can return to the same state repeatedly. • A vApp is a container for one or more virtual machines. The vApp can be used to package and manage related applications. • Content libraries provide simple and effective management for virtual machine templates, vApps, and other types of files for vSphere administrators. Questions?