White Paper
Abstract
This white paper explains the best practices for deploying EMC®
RecoverPoint for demonstration purposes as a virtual machine under ESX
server 4.01 or later using the VMware®
DirectPath feature.
June 2012
Deploying and Implementing RecoverPoint in a
Virtual Machine for demonstration and proof of
concept purposes
2 Deploying and implementing RecoverPoint in a virtual machine
Copyright © 2012 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its
publication date. The information is subject to change without notice.
The information in this publication is provided “as is”. EMC Corporation
makes no representations or warranties of any kind with respect to the
information in this publication, and specifically disclaims implied
warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any EMC software described in this
publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC
Corporation Trademarks on EMC.com.
VMware is a registered trademark of VMware, Inc. All other trademarks
used herein are the property of their respective owners.
Part Number h8969.3
Table of Contents
Executive summary ................................................................ 4
  Audience ....................................................................... 4
Why Virtualize RecoverPoint? ..................................................... 4
VMware considerations for vRPA/D deployment ...................................... 5
  Moving vRPA/Ds to different ESX servers ........................................ 5
  vRPA/D Cluster Deployment Type ................................................. 5
  VMware DirectPath and PowerPath ................................................ 6
Prerequisites for vRPA/D Deployment .............................................. 6
  Pre-requisites ................................................................. 6
Deploying vRPA/D ................................................................. 8
  Preparing the VMware Hypervisor (ESX Server) ................................... 8
  Deploying ESX server ........................................................... 8
  Configuring VMware DirectPath devices on ESX server ............................ 8
  Deploying vRPA/D Cluster using Deployment Manager ............................. 15
  Moving the vRPA/D ............................................................. 29
Conclusion ...................................................................... 31
References ...................................................................... 31
Executive summary
With the rapid growth of the virtualization world, today's solutions, which consist of both physical
hardware and software code elements, are expected to also function in the virtual cloud.
The RecoverPoint solution consists of both physical hardware, known as a RecoverPoint Appliance
(RPA), and the application code that runs on it. When implemented as a virtualized instance, it is known
as a virtual RPA with DirectPath, or vRPA/D.
Installing RecoverPoint as a virtual instance requires both specific hardware (which will be discussed
thoroughly in the “Prerequisites for vRPA/D Deployment” chapter) and a current RecoverPoint ISO image.
Deploying vRPA/D is currently intended only for demo or proof of concept (POC) purposes; EMC does not
guarantee that vRPA/D performance characteristics are equivalent to those of a physical RecoverPoint appliance.
The purpose of this document is to explain and demonstrate the steps involved in deploying vRPA/D,
that is, running the RecoverPoint software in a VMware virtual machine using a specified QLogic HBA.
This document describes the recommended way to build a vRPA/D based on research and development
activities in EMC Labs. The content included in this document provides a simple to deploy guide for
vRPA/D and is to be used only for demo and POC purposes.
Audience
This white paper is intended for customers, ESN certified partners, and EMC internal staff who are VMware
and RecoverPoint professionals, or other similarly trained readers.
Note: vRPA/D is to be used only for demo or proof of concept (POC) purposes; it is not intended for
production use:
• EMC does not provide any support for vRPA/D
• EMC doesn’t guarantee that the performance of vRPA/D has any relationship to the performance of a RecoverPoint
appliance
• Issues will be fixed according to engineering case evaluation
Why Virtualize RecoverPoint?
Virtualizing RecoverPoint can provide some new beneficial features, which derive from VMware's
consolidated virtual environment, such as:
• RecoverPoint "Cluster in a box" – With ESX you can run multiple RecoverPoint instances, so you
can set up two RecoverPoint sites on a single VMware ESX server
• Thin provisioning of both memory and CPU resources – multiple RecoverPoint instances can share
CPU and memory resources dynamically, without the need to preallocate the full CPU and
memory capacity, by utilizing VMware thin-provisioning technologies (such as memory page
sharing, ballooning, and swapping)
• Simple RPA backup and snapshot – because RecoverPoint is only a set of VM files, it is
faster to clone it, and you can even utilize VMware hot snapshots, which allow safe point-in-time
protection of your RecoverPoint instance (for example, before changing major
RecoverPoint configurations or upgrading the RecoverPoint code)
• In combination with RecoverPoint's "Virtual WWN" feature, a vRPA/D can be roamed across
multiple VMware ESX servers, as long as you understand the limitations that DirectPath imposes
(such as the requirement for identical HBA adapters and the lack of vMotion support)
VMware considerations for vRPA/D deployment
Moving vRPA/Ds to different ESX servers
Due to VMware DirectPath feature limitations (such as the unique reservation of PCI ports on a specific
ESX server), there can be implications for, or failures of, a vRPA/D that must be understood when
considering VMware-based failover scenarios. Both use cases below will require additional user
configuration to assure correct binding of the new ESX server PCI slot as a DirectPath FC adapter device.
If you have such a configuration and you need additional assistance, please send an email to RecoverPoint-vRPA-
DirectPath@emc.com and we will help as time permits. If you are a customer, please have your Account
Representative send this email.
• vMotion as part of the VMware Cluster failover will require manual steps with vSphere as
shown in “Moving the vRPA/D” chapter
• VMware Site Recovery Manager Failover
vRPA/D Cluster Deployment Type
There can be various deployments of RecoverPoint vRPA/D clusters over VMware ESX hosts. Table 1
shows the decision matrix for the available vRPA/D deployments:
Table 1 vRPA/D Deployment matrix

Deployment type: vRPA/D "Both Sites in a box"
• Configuration: Both RecoverPoint sites reside on a single ESX host
• Pros: Requires a single ESX server for both RecoverPoint clusters
• Cons: Requires high-powered hardware; the ESX server acts as a single point of failure for all
vRPA/Ds in both clusters

Deployment type: vRPA/D "Site per box"
• Configuration: Each RecoverPoint site's vRPA/Ds are managed on their own site ESX server
• Pros: Requires only 2 ESX servers for the entire vRPA/D cluster
• Cons: Each ESX server is a single point of failure for a site

Deployment type: vRPA/D Cluster (recommended configuration)
• Configuration: vRPA/Ds are spread among multiple ESX hosts to ensure redundancy and performance
• Pros: Best-performing deployment for vRPA/Ds; best redundancy at both the site and cluster failure
levels; can use commodity hardware
• Cons: Requires a minimum of 4 ESX servers, 2 at each site
VMware DirectPath and PowerPath
VMware DirectPath provides the virtual machine with direct and exclusive access to physical Fibre
Channel host bus adapters in the ESX server. These HBAs are separate from the HBAs that the ESX
server uses to access its own Fibre Channel storage. When you use DirectPath, some other VMware
functions are limited, such as:
• vMotion and Storage vMotion
• Fault Tolerance
• Snapshots and VM suspend
• Device Hot Add
Note: vRPA/D cannot be used with PowerPath/VE
Prerequisites for vRPA/D Deployment
The main feature that allows RecoverPoint virtualization comes from VMware technology first
introduced in ESX 4.01, named "VMware DirectPath".
This feature offloads server I/O device communication from the hypervisor, allowing
virtual machines to access a specific physical I/O device (HBA or NIC) using "pass-through"
communication instead of the usual VMware virtualized drivers.
The RecoverPoint appliance hardware (Gen 4) specifications (a Dell R610-derived 1U server with 8GB
RAM, dual quad-core CPUs, two 146GB internal hard disks, and two 8Gb quad-port QLA2564 FC HBAs)
introduce high physical resource demands (to support both new features and higher storage
performance); under virtualization, these demands may consume fewer resources (assuming average
performance utilization and multiple vRPA/D instances leveled correctly against the overall memory and CPU load).
Following are the detailed hardware and software components that are required for a vRPA/D
deployment.
Pre-requisites
The following pre-requisites are necessary to deploy a vRPA/D configuration:
• Hardware for the ESX Server
o Any hardware on the VMware HCL that supports ESX/ESXi 4.0, 4.1, or 5.0
• VMware DirectPath server architecture:
o Intel VT-d (Xeon 5500 systems and Nehalem processors)
o AMD platforms with I/O Virtualization Technology (AMD IOMMU)
• VMware DirectPath FC HBA:
o QLogic FC HBAs – QLA24xx/25xx
o Only these models are supported; others may not work.
Note: Both ESX and ESXi support a maximum of 8 VMware DirectPath supported HBAs, which
caps the maximum number of RecoverPoint VMs per ESX server at 8 if dual-port
HBAs are installed, or 16 if quad-port FC HBAs are used.
• Physical Memory:
The following recommended memory settings can vary according to the total memory load of all
the running vRPA/D instances on the ESX server, with the help of VMware's advanced memory
management capabilities, which require "VMware Tools"
The following values represent recommended “minimum / optimal” values of physical memory
which will be required for a given amount of deployed RecoverPoint VMs on a single ESX server:
• For 1 VM instance: 4GB / 8GB
• For 2 VM instances: 8GB / 16GB
• For 3 VM instances: 12GB / 16GB
• For 4 VM instances: 16GB / 24GB
Note: For more than 4 RecoverPoint VMs per single ESX server, you must obey the
hardware limitations of the running ESX server system according to the manufacturer's technical
specifications and the supported maximum memory for the running ESX server version
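As a quick sanity check, the sizing above can be captured in a few lines of Python. This is only a restatement of the "minimum / optimal" figures quoted in this paper, not an EMC-published formula:

```python
# Recommended "minimum / optimal" physical memory (GB) for a given number of
# vRPA/D instances on a single ESX server, as quoted in this white paper.
RECOMMENDED_GB = {1: (4, 8), 2: (8, 16), 3: (12, 16), 4: (16, 24)}

def esx_memory_for(vrpa_count):
    """Return (minimum_gb, optimal_gb) for 1-4 vRPA/D VMs per ESX server."""
    if vrpa_count not in RECOMMENDED_GB:
        raise ValueError("beyond 4 VMs, consult the ESX server's hardware limits")
    return RECOMMENDED_GB[vrpa_count]

print(esx_memory_for(2))  # (8, 16)
```

Note that optimal memory does not scale linearly with VM count; VMware's memory page sharing and ballooning let instances share physical memory.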
• Storage:
vRPA/D only supports EMC storage arrays and SCSI-based LUNs. Note that the VMAX 20K and 40K
have FTS, which enables non-EMC storage arrays to be attached to the VMAX. Also note that VPLEX
supports over 35 non-EMC storage array families.
o Choosing EMC VMAX SAN storage, an EMC VPLEX platform, or EMC VNX/CLARiiON SAN
storage allows vRPA/D to support an "array-based splitter" (aka the "Symmetrix Splitter",
"VPLEX Splitter", or "VNX/CLARiiON Splitter") as well as the "host-based splitter" (aka
"Kdriver")
o Non-EMC SAN storage is not supported.
The SAN array should have enough provisioned free space to allocate for RecoverPoint
volumes (including two Repository volumes, the pairing LUNs, and the Journal volumes,
according to RecoverPoint documentation and best practices)
• FC SAN Switch:
o A RecoverPoint supported FC SAN based switch (with applicable installed license)
Note: If the RecoverPoint splitter technology is the "Fabric Splitter" type, make sure the
switch is configured according to the RecoverPoint documentation for
"Fabric Splitter" deployments
• Software:
• VMware Virtual Server OS (hosting the RecoverPoint VM) can be:
o ESX 4.0.1 / 4.1 / 5.0
o ESXi 4.0.1 / 4.1 / 5.0
• EMC RecoverPoint 3.4 or 3.5
• A license for EMC RecoverPoint 3.4 or 3.5 (see the section “Requesting a RecoverPoint license”
below)
• Storage array license: if you are using the VNX/CLARiiON array splitter, you will need to install
an enabler in your VNX/CLARiiON array to support it – see the applicable RecoverPoint
documentation
• SAN FC switch license: if you are using a fabric-based splitter, a specific license may need to
be installed in addition to a supported switch firmware version – see the applicable
RecoverPoint documentation
Deploying vRPA/D
Preparing the VMware Hypervisor (ESX Server)
Verifying that Virtualization is enabled in server BIOS
The VMware virtualization hypervisor (VMware ESX) requires that "Virtualization Technology" be
enabled in the server BIOS.
Figure 1 shows this option as "Enabled", which satisfies the VMware ESX server installation prerequisite.
Example: on DELL servers, after powering on the server, press F2 to enter the system BIOS console,
then navigate to the "Processor Settings" section in the main BIOS screen.
Figure 1 - DELL BIOS menu to enable virtualization support by CPU
Deploying ESX server
Proceed with normal installation of your ESX server setup.
Configuring VMware DirectPath devices on ESX server
Upon successful completion of ESX server installation, the ESX server performs its first full reboot. At this
point, the ESX server is up and running and ready to setup the pass-through option for the vRPA/D PCI
devices (required for use by VMware DirectPath).
Table 2 shows the VMware DirectPath maximum values for both ESX 4.x and
ESXi 4.x:
Table 2 VMware DirectPath maximum values
• VMware DirectPath PCI devices per VM: 2 (in 4.01) / 4 (in 4.1 and 5.0)
• VMware DirectPath SCSI targets per VM: 60 (array initiator targets, not LUNs)
• VMware DirectPath physical devices per ESX server: 8 (physical HBA cards)
vRPA/D supports both dual-port and quad-port HBAs. It is recommended that quad-port HBAs be used,
since you can:
• Increase the count of available VMware DirectPath HBAs (and RecoverPoint VM counts per single
ESX server) on a server with limited PCI slots
• Utilize the ESX server for both VMware DirectPath (vRPA/D) and regular ESX-to-SAN connectivity (by
using only 2 of the 4 ports on the HBA for vRPA/D)
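The port arithmetic behind the dual-port versus quad-port recommendation can be worked out directly: with at most 8 DirectPath HBAs per ESX server and each vRPA/D bound to two FC ports, the VM cap follows. A small sketch, using only the figures quoted in this paper:

```python
MAX_DIRECTPATH_HBAS = 8  # DirectPath-enabled physical HBA cards per ESX server
PORTS_PER_VRPA = 2       # each vRPA/D is bound to two FC ports

def max_vrpas_per_esx(ports_per_hba):
    """Upper bound on vRPA/D VMs per ESX server, limited by DirectPath ports."""
    return (MAX_DIRECTPATH_HBAS * ports_per_hba) // PORTS_PER_VRPA

print(max_vrpas_per_esx(2))  # dual-port HBAs -> 8 VMs
print(max_vrpas_per_esx(4))  # quad-port HBAs -> 16 VMs
```

In practice the cap may be lower, since memory, CPU, and the per-VM PCI device limits in Table 2 also apply.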
Enabling the VMware DirectPath devices
1) Using the VI Client, connect to either the ESX server directly or to the managing vCenter
server.
2) Select the ESX server in question, go to the "Configuration" tab and, under Advanced
"Settings" on the far right side of the screen, choose "Configure Passthrough".
3) A full list of the devices available for VMware DirectPath use is then presented in a pop-up
window titled "Mark devices for Passthrough".
4) Select the HBA ports as appropriate (see Figure 2, which demonstrates enabling an HBA for the
VMware DirectPath feature)
Figure 2 - Selecting DirectPath PCI Devices for vRPA/D
5) An ESX reboot is required for this setting to take effect.
Install RecoverPoint as VM
1) Download the current RecoverPoint ISO from Powerlink, if you are a customer the operation must be
performed by your Account Representative.
2) Select appropriate machine(s) that run ESX
3) Install the physical HBA card(s) into these machines
4) Deploy a “New Virtual Machine” using VMware wizard
a. Give it an appropriate name such as vRPA/D1
5) Select the VM type as "Debian GNU/Linux 5 (64-bit)"
6) Assign the relevant virtual hardware resources to the new VM as described below:
• 8GB RAM (minimum of 4GB)
• 4 vCPU (minimum of 2 vCPU)
• 2 x vNIC (WAN & LAN connectivity and management)
• 70GB Hard disk (the initial utilized disk space for the OS is 8GB)
Figure 3 - vRPA/D VM hardware resources view
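For reference, the same resource assignment appears in the VM's .vmx configuration file roughly as below. The key names are standard VMware ones; the disk file name is a hypothetical placeholder:

```
guestOS = "debian5-64"             # "Debian GNU/Linux 5 (64-bit)"
memsize = "8192"                   # 8GB RAM
numvcpus = "4"                     # 4 vCPUs
ethernet0.present = "TRUE"         # LAN / management vNIC
ethernet1.present = "TRUE"         # WAN vNIC
scsi0:0.present = "TRUE"
scsi0:0.fileName = "vRPAD1.vmdk"   # 70GB virtual disk (hypothetical name)
```

The VMware "New Virtual Machine" wizard writes these entries for you; the fragment is shown only to make the resource mapping explicit.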
7) Attach the RecoverPoint install image/CD using one of the following options:
a. Mount a local bootable RecoverPoint DVD (mounting the physical DVD/CD drive on the ESX
server hardware) using a DVD burned from the ISO you downloaded in Step 1.
b. Mount a copied bootable RecoverPoint image from the desktop you are working on (by clicking
the "cd icon" in the virtual console) or from another datastore (if you previously copied it
over) using the ISO image downloaded in Step 1.
8) VMware Tools – since the RecoverPoint code does not support VMware Tools, you must skip
this step.
Note: It is important to provision sufficient virtual resources or else the RecoverPoint deployment may
fail to complete and errors will be triggered.
Binding VMware DirectPath ports for the vRPA/D
Once the vRPA/D has been installed as a VM, power down the VM and "Add" a
new "PCI device" from the list of the available VMware DirectPath device ports.
Figure 4 shows the Virtual Machine Properties of a VM configured to expose two VMware DirectPath
HBAs (QLogic)
Figure 4 - Binding available DirectPath HBA into vRPA/D VM
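Under the covers, the "Add PCI device" step records pciPassthru entries in the VM's .vmx file. A sketch is below; the PCI addresses shown are hypothetical examples, and the vSphere Client fills in the matching vendor, device, and system IDs automatically:

```
pciPassthru0.present = "TRUE"
pciPassthru0.id = "04:00.0"   # hypothetical PCI address of the first QLogic port
pciPassthru1.present = "TRUE"
pciPassthru1.id = "04:00.1"   # hypothetical address of the second port
```

These entries are what tie the vRPA/D to specific physical PCI slots, which is why moving the VM to another ESX server requires rebinding, as discussed in "Moving the vRPA/D".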
Pre Configuring vRPA/D – RPA Network settings
The following steps provide the minimum connectivity configuration that will later allow deploying a
RecoverPoint cluster using the "RecoverPoint Deployment Manager".
1) While connected through the VI Client, open a “Console” session (virtual KVM) to the vRPA/D virtual
machine.
2) After logging into the RecoverPoint management console (using the "boxmgmt" user), you are prompted
to enter a temporary IP address, subnet, and default gateway – proceed with temporary IP network
settings (as shown in Figure 5)
Figure 5 - Pre Configure fresh vRPA/D installation
Note: In this environment, a default gateway was not required. RecoverPoint can then be configured
either via the GUI or CLI wizards.
Pre Configuring vRPA/D – FC Port settings
1) Review the current WWNs registered by the RecoverPoint vRPA/D using the RecoverPoint CLI
"Main Menu" by entering the following menu sequence:
[3] "Diagnostics" -> [2] "Fibre Channel Diagnostics" -> [2] "View Fibre Channel Details"
Note: Although QLogic HBAs have their own WWNs, the RecoverPoint appliance layers its own native
WWNs on top of them.
Figure 7 - vRPA/D Native WWN mapping
2) The RecoverPoint WWNs will also appear in the FC switch as KASHYA ports (Figure 8 reflects
example output from a Brocade FC switch)
Figure 8 - vRPA/D FC Ports view in FC Switch
Figure 6 - Review vRPA/D FC Detail menu
3) To review the array controllers, in the "Fibre Channel Diagnostics" menu, select
option [3], "Detect Fibre Channel Targets".
Figure 9 displays the WWNs of CX4 ports that have been zoned to the vRPA/D.
Figure 9 - Detecting target WWN via vRPA/D
Zoning the vRPA/D to the Storage Array
A vRPA/D is bound to the splitter environment being used (host-based or array-based).
Example: with the VNX/CLARiiON array-based splitter, the required zoning must include zoning each
vRPA/D HBA port to both of the EMC array controller ports (on CLARiiON, this refers to SPA and SPB).
Note: VMAX 10K support requires RecoverPoint v3.4.1 or later; VPLEX, VMAX 20K, and VMAX 40K
require RecoverPoint v3.5 or later.
For VNX/CLARiiON arrays the zoning should include:
• vRPA/D HBA0 ports -> both array controllers' ports
• vRPA/D HBA1 ports -> both array controllers' ports
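On a Brocade switch, for example, that zoning could be expressed with standard Fabric OS zoning commands. All alias, zone, configuration, and WWN values below are hypothetical placeholders for your environment; use the vRPA/D native WWNs reported in "View Fibre Channel Details":

```
alicreate "vRPAD1_HBA0", "50:01:24:80:00:0d:ab:10"   ; hypothetical vRPA/D port WWN
alicreate "CX4_SPA0",    "50:06:01:60:41:e0:1b:22"   ; hypothetical SPA port WWN
zonecreate "z_vRPAD1_HBA0_SPA0", "vRPAD1_HBA0; CX4_SPA0"
cfgadd "demo_cfg", "z_vRPAD1_HBA0_SPA0"
cfgenable "demo_cfg"
```

One such zone would be created per vRPA/D HBA port and array controller port pair, per the bullet list above.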
vRPA/D Initiator Registration & Storage Allocation
Once the vRPA/D port initiators are zoned and successfully logged into the storage array, those initiators
need to be manually registered. The example below shows the equivalent registration steps on a
CLARiiON (for VMAX, consult the Symmetrix Technical Notes on EMC Powerlink; for VPLEX Local and
VPLEX Metro, consult the VPLEX Technical Notes on EMC Powerlink).
1) Register the newly discovered initiators as a "New Host" with its own IP address. The initiators
for the vRPA/D need to be registered with an Initiator Type of "RecoverPoint Appliance" and
a "Failover Mode" equal to "4". (Figure 10 shows an example of adding vRPA/D initiators
as RecoverPoint appliance initiators.)
Figure 10 - vRPA/D FC port registration in Array management
2) Once the initiators are registered to the new vRPA/D, the vRPA/D can be added to a
Navisphere/Unisphere Storage Group as a host in order to access the required storage/LUNs.
3) The vRPA/D(s) require LUN masking access in the same manner as a physical RPA would;
the bullets below summarize the core requirement for each LUN type (for further details, see the
RecoverPoint Admin Guide available on EMC Powerlink):
a. Journal volumes – must be exposed only to the applicable site vRPA/Ds
b. Repository volume – must be exposed only to the applicable site vRPA/Ds
c. Replicated volume copies – must be exposed to both the applicable site vRPA/Ds and the
site Hosts
4) All of the masked LUNs can be easily verified using the vRPA/D “Diagnostics” menu using the
management CLI of RecoverPoint code.
Figure 11 - Verifying LUN masking via vRPA/D "Diagnostics" menu
Note: Figure 11 describes such a verification attempt for two masked LUNs (a 4GB production LUN and
a second 50GB LUN), both of which are correctly exposed to the vRPA/D
Deploying vRPA/D Cluster
Upon successfully configuring vRPA/D storage and network connectivity, we can proceed to a full-scale
deployment of RecoverPoint using the RecoverPoint Deployment Manager wizard, which provides the
safest and most automated deployment of RecoverPoint appliances.
Deploying vRPA/D Cluster using Deployment Manager
A vRPA/D cluster deployment is handled in the same manner as a regular physical RPA cluster.
RecoverPoint Deployment Manager is used for RecoverPoint deployment and provides the most
automated and error-free deployment method.
Below is the full procedure for vRPA/D Cluster deployment using RecoverPoint Deployment Manager Tool.
1) Execute the RecoverPoint Deployment Manager wizard; you will first be asked to log into the RP
Deployment Manager.
Figure 12 - RP Deployment Manager: Authentication screen
Note: The RP Deployment Manager also contains wizards for RPA upgrades and replacement.
2) Select the “RecoverPoint Installer Wizard” to begin the vRPA/D network identity configuring
(IP Address, Subnet Mask, Default Gateway, Management IP Addresses and the RPA Cluster
details).
Figure 13 - RP Deployment Manager: Deployment wizard
3) Review the prerequisites for the installation. At this stage, after completing all of the previous
steps for the vRPA/D, all of the prerequisites should be satisfied (see Figure 14).
Figure 14 - RP Deployment Manager: vRPA/D Prerequisites
4) The next screen will prompt for an installation structure file; create a new file or use an existing
saved configuration file.
Note: Figure 15 shows a consolidated view of the settings required when configuring a vRPA/D cluster
(the number of sites, the number of cluster nodes at each site, and the type of replication between sites).
Figure 15 - RP Deployment Manager: Environment Settings screen
5) Upon completion of the previous installer screen, you will be required to configure the vRPA/D
network (Management and WAN) details for vRPA/D Site A, including the site's vRPA/D
instances (Figure 16 shows an example configuration of two vRPA/Ds in Site A)
Figure 16 - RP Deployment Manager: Configuring vRPA/D Site A networks
6) The next wizard screen (Figure 17) requires answering the "Advanced settings" questions that
relate to the splitter type in use and other environment variables specific to the storage array type in
use.
Figure 17 - Configuring vRPA/D Sites advanced settings screen
7) Upon completion of the previous installer screen, you will be required to configure the vRPA/D
network (Management and WAN) details for vRPA/D Site B, including the site's vRPA/D
instances (Figure 18 shows an example configuration of two vRPA/Ds in Site B)
Figure 18 - RP Deployment Manager: Configuring vRPA/D Site B networks
8) Upon completion of the previous step, you will be instructed to approve the overall vRPA/D
configuration and the vRPA/D sites to which it applies. This step locks the required vRPA/D site
configurations and prepares them to be applied to each of the related vRPA/D instances (see Figure
19).
Figure 19 - RP Deployment Manager: Applying configuration
Note: If only one of the sites is to be installed at this stage, the wizard provides a checkbox to
confirm whether or not the other site is already installed.
9) The next wizard screen confirms the previously applied settings (see Figure 20)
Figure 20 - RP Deployment Manager: result screen of applying vRPA/D Configuration
10) Upon successful confirmation in the previous step, the installer begins the vRPA/D storage
configuration wizard, showing the managed vRPA/D WWNs (see Figure 21)
Figure 21 - RP Deployment Manager: Site A Zoning and LUN Masking configuration
11) The wizard then runs the vRPA/D SAN diagnostics, providing the list of available LUNs to be
used as the vRPA/D cluster Repository volume for Site A (equivalent to a traditional cluster's
quorum disk). You will be required to select the desired LUN to act as the Site A Repository
volume (see Figure 22)
Figure 22 - RP Deployment Manager: Site A Repository volume selection
12) Completing the repository volume selection in the previous step displays the storage
configuration summary for Site A (see Figure 23)
Figure 23 - RP Deployment Manager: Site Summary screen
13) The installer wizard proceeds through the same storage configuration sequence as for Site A,
this time for the remote/target site (Site B)
14) Upon completion of the storage configuration for Site B, a summary screen appears, indicating
the success of the installer process and allowing deployment of the RecoverPoint
Management Application from a given site (see Figure 24)
Figure 24 - RP Deployment Manager: Success summary of vRPA/D Cluster
Configuring the RecoverPoint Splitters
Note: This procedure assumes that the splitters were installed correctly. To configure the RecoverPoint
splitter, perform the following steps:
1) Open the "RecoverPoint Management Application", right-click the "Splitters" object,
and choose "Add New Splitter".
2) From the list of the available splitters, choose the applicable splitters which will be required to
allow RecoverPoint replication (Figure 25 shows an example of discovered VNX/CLARiiON splitters
for both vRPA/D sites) and click “Next”
Figure 25 - Configuring vRPA/D splitters screen
3) Proceed with the on-screen instructions (for the VNX/CLARiiON-based array splitter, you will be
asked to provide the array "login credentials" or to select "Configure login credential
later" for both sites) and, upon completion of the splitter information, click "Finish" (Figure 26 shows
a summary of successfully added VNX/CLARiiON splitters)
Figure 26 - RecoverPoint validated splitters
Configuring RecoverPoint CGs with vRPA/Ds
Configuring a RecoverPoint Consistency Group (CG) using vRPA/Ds is possible because the
virtualization layer is transparent to the application management.
The consistency group wizard navigates through the required CG elements, such as the CG name, the
preferred RPA, the policy attributes for each copy, the volumes to be used as the source/replica in the
Replication Sets, and the relevant Journal volumes.
Once the entire consistency group configuration has been completed, a summary screen will be shown
before initiating the new replication (see Figure 27)
Figure 27 - Configured vRPA/D CG summary screen
Upon completing the CG wizard, we can review the replication status for the given CG. Figure
28 shows the initial synchronization completion for a RecoverPoint CLR configuration, where the
"Production Source" copy has "Direct Access", while both replica copies ("Local Replica" and
"Remote Replica") show a "No Access" state
Figure 28 - RecoverPoint CLR replication topology
More in-depth replication analysis is available through RecoverPoint's Management GUI in the
"Statistics" tab (see Figure 29)
Figure 29 - RecoverPoint statistics panel to indicate replication state
Replacing a vRPA/D with the RPA Replacement Wizard
Replacing a vRPA/D within a clustered RecoverPoint configuration requires the RecoverPoint Deployment
Manager wizard.
The procedure below walks through the steps needed to replace a vRPA/D using the Deployment
Manager wizard.
1) Launch the RecoverPoint Deployment Manager wizard
2) Select the “RPA Replacement Wizard” option, and click “Next” (see Figure 30)
Figure 30 - RP Deployment Manager: choosing vRPA/D replacement option
Note: This procedure will import the vRPA/D into the existing configuration, providing the new vRPA/D
with the same configuration and management details as the previous/failed vRPA/D.
3) Highlight the required failed vRPA/D (which is about to be replaced) as shown in Figure 31.
Note: Notice the checkbox at the bottom of the screen that prompts the user to confirm whether or not
the replacement vRPA/D has been configured with the required RecoverPoint code and network identity
to allow an automatic replacement.
4) When the new/replacement vRPA/D is online and configured with the required temporary network
connectivity, check the checkbox at the bottom of the screen to allow the wizard to proceed, and
click "Next"
Figure 31 - RPA Replacement wizard: select failed vRPA/D
5) Confirm the status of the replacement RPA, by checking the bottom screen checkbox (shown in
Figure 32) and click “Next”
Figure 32 - RPA Replacement wizard: Confirm failed vRPA/D
6) The next screen requires approval for cloning (spoofing) the failed vRPA/D's WWN
configuration onto the new vRPA/D. Spoofing the WWNs removes the requirement for new zoning
at the SAN level.
Note: If new WWNs are introduced, they will need to be zoned accordingly.
Figure 33 - RPA Replacement wizard: validating storage configuration
7) The wizard automatically runs through the validation process for the storage and SAN
configurations (before the final "apply changes" phase applies the settings to the new vRPA/D).
8) Once all of these changes have been applied, the wizard provides a summary of the steps
completed as part of replacing the faulted vRPA/D and resuming cluster operations with the new
vRPA/D (shown in Figure 34).
Figure 34 - vRPA/D Replacement wizard: Applying configuration screen
RecoverPoint Splitters
There are five options to choose from when considering the RecoverPoint splitter:
• Windows Host Splitter (for RecoverPoint/CL and RecoverPoint/EX with RecoverPoint 3.5, and for
RecoverPoint/SE, RecoverPoint/EX and RecoverPoint/CL with RecoverPoint 3.4)
• VMAX-based Splitter
• VPLEX-based Splitter
• VNX/CLARiiON-based Splitter
• Brocade/Cisco Intelligent Fabric Splitter
The choice of RecoverPoint splitter depends on the environment. In this example,
RecoverPoint uses the array-based VNX/CLARiiON splitter.
The RecoverPoint splitter is enabled directly on the array through the FLARE or VNX Operating
Environment. For a Symmetrix VMAX and VPLEX the splitter is already enabled. The following
displays a list of all of the software features that are enabled on one of the CX4 arrays used in this
example.
Figure 35 - RecoverPoint splitter view in CLARiiON management GUI
The Software tab under the "Properties" section of the CX4 array is the only place where the
RecoverPoint splitter can be viewed from the Navisphere perspective. There is nothing else to tune or
configure on the CLARiiON array in relation to RecoverPoint.
As with other Layered Applications, the RecoverPoint Splitter is pre-installed as part of the FLARE code,
but is not visible or available to the user until the RecoverPoint Splitter enabler key is installed. This
enabler key can be installed via the Navisphere Service Taskbar.
When an array-based splitter is used, the maximum size volume (LUN) that can be replicated is 32 TB. In
environments where an array-based splitter is not being used, the maximum size for a replicated LUN
is 2 TB. The VMAX splitter is supported on the VMAX series, the VPLEX splitter is supported on VPLEX Local
and VPLEX Metro, and the VNX/CLARiiON splitter is supported on VNX series, CX3 and CX4 arrays. (The
VNX/CLARiiON splitter does not support VNXe, AX4-5, or pre-CX3 storage arrays.)
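As a rough illustration, the size limits above can be encoded in a simple check. This is a sketch for illustration only, not an EMC tool; the splitter names and function names are assumptions:

```python
# Illustrative sketch: encode the replicated-LUN size limits described above
# so a configuration check can flag oversized LUNs before replication is set up.

ARRAY_BASED_SPLITTERS = {"VMAX", "VPLEX", "VNX/CLARiiON"}

TB = 1024 ** 4  # bytes per terabyte (binary)

def max_replicated_lun_bytes(splitter: str) -> int:
    """Return the maximum replicated LUN size for a given splitter type.

    Array-based splitters support volumes up to 32 TB; all other splitters
    (host-based, intelligent-fabric) are limited to 2 TB.
    """
    return 32 * TB if splitter in ARRAY_BASED_SPLITTERS else 2 * TB

def lun_is_replicable(splitter: str, lun_size_bytes: int) -> bool:
    """True when the LUN fits under the splitter's replication limit."""
    return lun_size_bytes <= max_replicated_lun_bytes(splitter)
```

For example, a 3 TB LUN passes the check with the VNX/CLARiiON splitter but fails with a host-based splitter.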
WWN Spoofing
When moving or replacing a vRPA/D it is possible to retain the previous vRPA/D's WWNs
and apply them to the new vRPA/D.
A RecoverPoint appliance generates its own WWNs during installation, based in part on the underlying
HBA WWN. The key to enabling easy mobility of a vRPA/D is to hardcode the WWNs so that they don't
change when ported to a new set of HBAs (in the same host or a different one).
Doing this allows a vRPA/D to move to another host with different HBAs without the need for additional
zoning or LUN masking. The process is as follows:
Hard Coding the WWNs
1. Connect to the vRPA/D console (via SSH or the VI Client), then carry out the following steps.
2. Enter the Diagnostics Menu
3. Enter the Fibre Channel Diagnostic Menu
4. Select the View Fibre Channel Details option.
5. If using SSH copy and paste the WWNs out to a text file for later use.
6. Navigate back through the menus and enter the Cluster Operations Menu
7. Detach the vRPA/D from the cluster.
8. Once detached, go into the Setup menu, select option 1 to Modify, and specify the site of the
vRPA/D you want to modify.
9. Select option 3 to set the WWN Name / Port Pair Addresses.
10. Specify the vRPA/D you want to change, and the number of HBA ports that the RPA uses.
11. Using the WWN details copied earlier, paste in the WWN and Node WWN details for each HBA
port in sequence.
12. Once done, back up three levels in the menu tree and select option 5 to Apply the configuration.
13. This displays a summary of the entire cluster configuration, where you can verify the WWNs you
just hardcoded for the relevant vRPA/D.
14. Confirm that you want to apply the configuration, and then enter the site and box number to apply
the details to.
15. Finally, reattach the vRPA/D to the cluster, which will cause the vRPA/D to reboot.
16. Confirm that the cluster resumes normal operation.
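Copying WWNs between a text file and the console (steps 5 and 11 above) is error-prone. A small helper along these lines can normalize and validate the values before they are pasted back; this is a hypothetical illustration, not part of RecoverPoint:

```python
import re

# Hypothetical helper: normalize WWNs copied out of the Fibre Channel
# Details screen so they can be pasted back in a consistent format.

_WWN_RE = re.compile(r"^[0-9a-f]{2}(:[0-9a-f]{2}){7}$")

def normalize_wwn(raw: str) -> str:
    """Return a WWN as eight colon-separated lowercase hex octets.

    Accepts input with or without colon separators, in any letter case.
    Raises ValueError if the input is not a 64-bit WWN.
    """
    hexdigits = raw.strip().lower().replace(":", "")
    if len(hexdigits) != 16 or not all(c in "0123456789abcdef" for c in hexdigits):
        raise ValueError(f"not a valid WWN: {raw!r}")
    wwn = ":".join(hexdigits[i:i + 2] for i in range(0, 16, 2))
    assert _WWN_RE.match(wwn)  # sanity check on the output format
    return wwn
```

For example, `normalize_wwn("500143801234ABCD")` returns `"50:01:43:80:12:34:ab:cd"`, and malformed input raises an error instead of silently producing a bad entry.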
Moving the vRPA/D
There are several ways to relocate a vRPA/D among ESX servers, as shown below:
Ø Manual move using vMotion as part of a vSphere Cluster (applicable for vSphere 4.01 and
later)
Ø Automated failover using a vSphere Cluster as part of an HA/DRS failover policy (valid
only for vSphere 4.1 and later)
Ø Automated failover using SRM (Compatible)
Note: It is recommended to configure your vRPA/D with spoofed WWNs when you consider
moving or failing over the vRPA/D to other ESX servers, because each ESX server has its own
uniquely attached HBA WWNs, which can otherwise result in a failure of the vRPA/D code.
Manual move using vMotion as part of vSphere Cluster
1. Verify that the new ESX server has identical HBAs to the old ESX server (where the vRPA/D is
currently hosted); otherwise, the vRPA/D will fail to start on the new ESX server.
2. Move the vRPA/D using a simple drag and drop in vCenter, keeping the storage locations as they
were.
3. Re-configure the vRPA/D to assign the correct set of physical HBAs that you want the RPA to use in
the new host. A vRPA/D uses VMware DirectPath to get direct access to the required QLogic HBAs,
so remove the two DirectPath HBA assignments that were used in the original host, and on the new
host assign access to two of its HBAs.
4. Once complete, power on the virtual machine, and validate that the vRPA/D comes up cleanly by
observing the VM state in the vSphere GUI or using the RecoverPoint GUI under the “RPA” tab.
Note: This process can be done in advance of setting up the vRPA/D cluster, or it can be done afterwards
if you decide to enable this behavior at a later date.
This feature might be useful if:
• You want to do some maintenance on the physical host, and want the RecoverPoint cluster to
continue running on the remaining vRPA/Ds while this is happening.
• You want to upgrade the hardware that a vRPA/D runs on by moving it to another machine with
better processors or faster HBAs as long as this new hardware still adheres to the support list
shown above.
• The customer wants to migrate their RecoverPoint appliances from physical to virtual, in which
case they can hardcode the WWNs from the physical RPA into the vRPA/D, allowing for a quick and
easy transfer.
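Step 1 of the manual move (and the hardware-upgrade case above) hinges on the destination host presenting identical HBAs. The pre-move check can be sketched as follows, using hypothetical inventory data rather than a VMware API call:

```python
from collections import Counter

# Illustrative pre-move check: before relocating a vRPA/D, confirm the
# destination ESX server presents the same HBA models as the source host.
# The model strings below are hypothetical examples.

def hbas_match(source_hbas, dest_hbas):
    """True when both hosts expose the same multiset of HBA models.

    Using a Counter means the check also catches a host that has the
    right model but the wrong number of ports/adapters.
    """
    return Counter(source_hbas) == Counter(dest_hbas)

# Example with hypothetical inventories gathered from each host:
source_host = ["QLE2562", "QLE2562"]
dest_host = ["QLE2562", "QLE2562"]
print(hbas_match(source_host, dest_host))
```

If the check fails, the move should not proceed, since the vRPA/D would fail to start on the new ESX server.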
Automated Failover using vSphere Cluster as part of HA/DRS Failover policy
In vSphere 4.1, VMware introduced a new vMotion feature named dvMotion (the acronym
for DirectPath vMotion), which can be used to provide automated failover of a vRPA/D using the vMotion
engine. The details are complex; if you are interested, please send an email to the vRPA/D team at
RecoverPoint-vRPA-DirectPath@emc.com
Note: This feature relies on the vSphere 4.1 experimental "dvMotion" feature.
Automated Failover using SRM
The VMware Site Recovery Manager (SRM) product enables automated failover of VMware sites and
clusters. It is highly suggested to use the compatible SRM functionality with ESX4i and later. vSphere 5
introduced improved vMotion and SRM capabilities; refer to the appropriate VMware documentation for full details.
Comments and getting help
Product and technical support are available as follows:
Product information: For documentation and release notes, or for information about licensing and
service, go to the RecoverPoint landing page on Powerlink (RecoverPoint Family), or send an email to
RecoverPointDealSupportDesk@emc.com
RecoverPoint licensing information
To request a license for your vRPA/D configuration do the following:
Go to Powerlink, and in the top-level menu navigate to Request Support -> Create Service Request.
• Mark it as “this is a: technical problem”
• Enter “N/A” as the customer site ID
• Enter contact name
• Select product as RecoverPoint
• In the Problem Summary enter “License Request for vRPA/D”
• In the Problem Description enter the following information:
o "This is a license request for vRPA/D"
o The version of RecoverPoint (3.4 or 3.5) required.
o State if you require a RecoverPoint/SE, RecoverPoint/EX or RecoverPoint/CL license.
o State the replicated capacity that is required (1 to 300 TB). (300 TB is the maximum replicated capacity that can be requested.)
o State if you need local, remote or both local and remote replication.
o State how many RPAs you will need (the number of RecoverPoint virtual machines).
o State where the VMs will be installed.
o State if this is for an internal lab or for a proof of concept.
§ If this is for a POC, please provide the name of the customer.
o Provide your full contact information, including name, address, phone, and email.
• Submit.
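For repeated requests, the Problem Description fields above can be assembled programmatically. The following sketch is purely illustrative (the function and its parameter names are assumptions, not an EMC-provided tool):

```python
# Hypothetical helper: assemble the Problem Description fields listed above
# into a single block of text for the PowerLink service request.

def build_license_request(version, edition, capacity_tb, replication,
                          rpa_count, location, purpose, contact):
    """Return the Problem Description text for a vRPA/D license request."""
    # The maximum replicated capacity that can be requested is 300 TB.
    if not 1 <= capacity_tb <= 300:
        raise ValueError("replicated capacity must be between 1 and 300 TB")
    lines = [
        "This is a license request for vRPA/D",
        f"RecoverPoint version required: {version}",
        f"License type: {edition}",
        f"Replicated capacity: {capacity_tb} TB",
        f"Replication needed: {replication}",
        f"Number of RPAs (RecoverPoint virtual machines): {rpa_count}",
        f"Install location: {location}",
        f"Purpose: {purpose}",
        f"Contact: {contact}",
    ]
    return "\n".join(lines)
```

Calling it with the required details yields a complete description block that can be pasted into the service request form.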
Conclusion
This white paper contains the information needed to install and operate RecoverPoint as a virtual machine. If
you have issues, comments, or questions about this document, include the relevant page numbers and
any other information that will help us locate the content you are addressing. Send comments to:
RecoverPoint-vRPA-DirectPath@emc.com
References
If you are having difficulty with vRPA/D, ensure that you read these references before sending an email.
EMC references
• Introduction to EMC RecoverPoint 3.5 New Features and Functions
• EMC RecoverPoint Family Overview
VMware references
• Configuration Examples and Troubleshooting for VMDirectPath
• Configuring VMDirectPath I/O pass-through devices on an ESX host
• PCI Passthrough with PCIe devices behind a non-ACS switch in vSphere
• VMware Tools Installation Guide For Operating System Specific Packages
• Performance Best Practices for VMware vSphere® 4.0
• Installing VMware Tools in a Linux virtual machine using a Compiler
• Configuration Maximums - ESX 4.1
• Configuration Maximums - ESX 4.0

Citrix ready-webinar-xtremioCitrix ready-webinar-xtremio
Citrix ready-webinar-xtremio
 
EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES
EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES
EMC FORUM RESEARCH GLOBAL RESULTS - 10,451 RESPONSES ACROSS 33 COUNTRIES
 
EMC with Mirantis Openstack
EMC with Mirantis OpenstackEMC with Mirantis Openstack
EMC with Mirantis Openstack
 
Modern infrastructure for business data lake
Modern infrastructure for business data lakeModern infrastructure for business data lake
Modern infrastructure for business data lake
 
Force Cyber Criminals to Shop Elsewhere
Force Cyber Criminals to Shop ElsewhereForce Cyber Criminals to Shop Elsewhere
Force Cyber Criminals to Shop Elsewhere
 
Pivotal : Moments in Container History
Pivotal : Moments in Container History Pivotal : Moments in Container History
Pivotal : Moments in Container History
 
Data Lake Protection - A Technical Review
Data Lake Protection - A Technical ReviewData Lake Protection - A Technical Review
Data Lake Protection - A Technical Review
 
Mobile E-commerce: Friend or Foe
Mobile E-commerce: Friend or FoeMobile E-commerce: Friend or Foe
Mobile E-commerce: Friend or Foe
 
Virtualization Myths Infographic
Virtualization Myths Infographic Virtualization Myths Infographic
Virtualization Myths Infographic
 
Intelligence-Driven GRC for Security
Intelligence-Driven GRC for SecurityIntelligence-Driven GRC for Security
Intelligence-Driven GRC for Security
 
The Trust Paradox: Access Management and Trust in an Insecure Age
The Trust Paradox: Access Management and Trust in an Insecure AgeThe Trust Paradox: Access Management and Trust in an Insecure Age
The Trust Paradox: Access Management and Trust in an Insecure Age
 
EMC Technology Day - SRM University 2015
EMC Technology Day - SRM University 2015EMC Technology Day - SRM University 2015
EMC Technology Day - SRM University 2015
 
EMC Academic Summit 2015
EMC Academic Summit 2015EMC Academic Summit 2015
EMC Academic Summit 2015
 
Data Science and Big Data Analytics Book from EMC Education Services
Data Science and Big Data Analytics Book from EMC Education ServicesData Science and Big Data Analytics Book from EMC Education Services
Data Science and Big Data Analytics Book from EMC Education Services
 
Using EMC Symmetrix Storage in VMware vSphere Environments
Using EMC Symmetrix Storage in VMware vSphere EnvironmentsUsing EMC Symmetrix Storage in VMware vSphere Environments
Using EMC Symmetrix Storage in VMware vSphere Environments
 
2014 Cybercrime Roundup: The Year of the POS Breach
2014 Cybercrime Roundup: The Year of the POS Breach2014 Cybercrime Roundup: The Year of the POS Breach
2014 Cybercrime Roundup: The Year of the POS Breach
 

Último

How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxOnBoard
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024Results
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 

Último (20)

How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 

vRPA/D Cluster Deployment Type ............................................................................................... 5
VMware DirectPath and PowerPath ............................................................................................ 6
Prerequisites for vRPA/D Deployment ......................................................................6
Pre-requisites ............................................................................................................................. 6
Deploying vRPA/D .................................................................................................8
Preparing the VMware Hypervisor (ESX Server) .......................................................................... 8
Deploying ESX server ................................................................................................................. 8
Configuring VMware DirectPath devices on ESX server ............................................................... 8
Deploying vRPA/D Cluster using Deployment Manager ............................................................. 15
Moving the vRPA/D .................................................................................................................. 29
Conclusion ......................................................................................................... 31
References ......................................................................................................... 31
Executive summary

With the rapid growth of the virtualization world, today's solutions, which consist of both physical hardware and software elements, are expected to also function in the virtual cloud. The RecoverPoint solution consists of physical hardware, known as a RecoverPoint Appliance (RPA), and the application code that runs on it. When implemented as a virtualized instance, it is known as a virtual RPA with DirectPath, or vRPA/D. Installing RecoverPoint as a virtual instance requires both specific hardware (discussed thoroughly in the "Prerequisites for vRPA/D Deployment" chapter) and a current RecoverPoint ISO image.

Deploying vRPA/D is currently intended only for demo or proof of concept (POC) purposes; EMC does not guarantee that vRPA/D performance characteristics are equivalent to RecoverPoint's performance.

The purpose of this document is to explain and demonstrate the steps involved in deploying a vRPA/D, which runs the RecoverPoint software in a VMware virtual machine using a specified QLogic HBA. This document describes the recommended way to build a vRPA/D, based on research and development activities in EMC Labs. The content provides a simple deployment guide for vRPA/D and is to be used only for demo and POC purposes.

Audience

This white paper is intended for customers, ESN certified partners, and EMC internal staff who are VMware and RecoverPoint professionals, or other similarly trained audiences.

Note: vRPA/D is to be used only for demo or proof of concept (POC) purposes; it is not intended for production use:
• EMC does not provide any support for vRPA/D
• EMC does not guarantee that the performance of vRPA/D has any relationship to the performance of a RecoverPoint appliance
• Issues will be fixed according to engineering case evaluation

Why Virtualize RecoverPoint?
Virtualizing RecoverPoint provides new beneficial features that derive from VMware's consolidated virtual environment, such as:
• RecoverPoint "Cluster in a box" – with ESX you can run multiple RecoverPoint instances, so you can set up two RecoverPoint sites on a single VMware ESX server
• Thin provisioning of both memory and CPU resources – multiple RecoverPoint instances can share CPU and memory resources dynamically, without the need to pre-allocate the full capacity of CPU and memory, utilizing VMware thin-provisioning technologies (such as memory page sharing, ballooning, and swapping)
• Simple RPA backup and snapshot – because a RecoverPoint instance is only a set of VM files, it is faster to clone it, and you can even use VMware hot snapshots, which allow safe point-in-time protection of your RecoverPoint instance (for example, before changing major RecoverPoint configurations or upgrading the RecoverPoint code)
• In collaboration with RecoverPoint's "Virtual WWN" feature, a vRPA/D can be roamed across multiple VMware ESX servers, as long as you understand the limitations that DirectPath imposes (such as the requirement for identical HBA adapters and the lack of vMotion support)

VMware considerations for vRPA/D deployment

Moving vRPA/Ds to different ESX servers

Due to VMware DirectPath feature limitations (such as the unique reservation of PCI ports on a specific ESX server), there can be implications for, or failures of, a vRPA/D that must be understood when considering VMware-based failover scenarios. Both of the following use cases require additional user configuration to ensure correct binding of the new ESX server's PCI slot as a DirectPath FC adapter device:
• vMotion as part of a VMware cluster failover, which requires manual steps in vSphere as shown in the "Moving the vRPA/D" chapter
• VMware Site Recovery Manager failover

If you have such a configuration and need additional assistance, send an email to RecoverPoint-vRPA-DirectPath@emc.com and we will help as time permits. If you are a customer, please have your Account Representative send this email.

vRPA/D Cluster Deployment Type

There can be various deployments of RecoverPoint vRPA/D clusters over VMware ESX hosts.
Table 1 shows the decision matrix for the available vRPA/D deployments:

Table 1 – vRPA/D deployment matrix

vRPA/D "Both Sites in a box"
  Configuration: Both RecoverPoint sites reside on a single ESX host
  Pros: Requires a single ESX server for both RecoverPoint clusters
  Cons: Requires high-powered hardware; the ESX server is a single point of failure for all vRPA/Ds in both clusters

vRPA/D "Site per box"
  Configuration: Each RecoverPoint site's vRPA/Ds run on their own site ESX server
  Pros: Requires only 2 ESX servers for the entire vRPA/D cluster
  Cons: Each ESX server is a single point of failure for a site

vRPA/D Cluster (recommended configuration)
  Configuration: vRPA/Ds are spread among multiple ESX hosts to ensure redundancy and performance
  Pros: Best-performing deployment for vRPA/Ds; best redundancy at both the site and cluster failure level; can use commodity hardware
  Cons: Requires a minimum of 4 ESX servers, 2 in each site
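The decision matrix above can be encoded as a small helper that, given the number of ESX servers available in each site, lists which deployment types from Table 1 are feasible. This is an illustrative sketch only; the function name and thresholds are our own, derived from the table, and are not part of any RecoverPoint or VMware tooling.

```python
def feasible_deployments(site_a_hosts, site_b_hosts):
    """Return the vRPA/D deployment types from Table 1 that the
    available ESX servers can support, most resilient first."""
    options = []
    # Recommended: a minimum of 4 ESX servers, 2 in each site.
    if site_a_hosts >= 2 and site_b_hosts >= 2:
        options.append("vRPA/D Cluster")
    # One ESX server per site avoids a single host owning both sites.
    if site_a_hosts >= 1 and site_b_hosts >= 1:
        options.append("Site per box")
    # A single (high-powered) host can run both sites for a demo.
    if site_a_hosts + site_b_hosts >= 1:
        options.append("Both Sites in a box")
    return options
```

For example, with one ESX server per site only the "Site per box" and "Both Sites in a box" layouts remain feasible.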
VMware DirectPath and PowerPath

VMware DirectPath provides the virtual machine with direct and exclusive access to physical Fibre Channel host bus adapters in the ESX server. These HBAs are separate from the HBAs that the ESX server uses to access its own Fibre Channel storage. When you use DirectPath, some other VMware functions are limited, such as:
• vMotion and Storage vMotion
• Fault Tolerance
• Snapshots and VM suspend
• Device hot add

Note: vRPA/D cannot be used with PowerPath/VE.

Prerequisites for vRPA/D Deployment

The main feature that enables RecoverPoint virtualization comes from VMware technology first introduced in ESX 4.0.1, named "VMware DirectPath". This feature offloads server I/O device communication from the hypervisor, allowing virtual machines to access a specific physical I/O device (HBA or NIC) through "pass-through" communication instead of the former VMware virtualized drivers.

The RecoverPoint appliance hardware (Gen 4) specifications (a Dell R610-derived 1U server with 8GB RAM, dual quad-core CPUs, two 146GB internal hard disks, and two 8Gb quad-port QLA2564 FC HBAs) introduce high physical resource demands (to support both new features and higher storage performance). Virtualization can consume fewer resources, assuming average performance utilization and that multiple vRPA/D instances are balanced correctly against the overall memory and CPU load.

Following are the detailed hardware and software components that are required for a vRPA/D deployment.
Pre-requisites

The following pre-requisites are necessary to deploy a vRPA/D configuration:
• Hardware for the ESX server:
  o Any hardware on the VMware HCL that supports ESX/ESXi 4.0, 4.1, or 5.0
• VMware DirectPath server architecture:
  o Intel VT-d (Xeon 5500 systems and Nehalem processors)
  o AMD platforms with I/O Virtualization Technology (AMD IOMMU)
• VMware DirectPath FC HBA:
  o QLogic FC HBAs – QLA24xx/25xx only; others may not work

Note: Both ESX and ESXi support a maximum of 8 VMware DirectPath HBAs, which caps the number of RecoverPoint VMs per ESX server at 8 if dual-port HBAs are installed, or 16 if quad-port FC HBAs are used.

• Physical memory: The following recommended memory settings can vary according to the total memory load of all running vRPA/D instances on the ESX server, with the help of VMware advanced memory management capabilities (which require "VMware Tools").
The following values represent recommended "minimum / optimal" amounts of physical memory for a given number of deployed RecoverPoint VMs on a single ESX server:
• For 1 VM instance: 4GB / 8GB
• For 2 VM instances: 8GB / 16GB
• For 3 VM instances: 12GB / 16GB
• For 4 VM instances: 16GB / 24GB

Note: For more than 4 RecoverPoint VMs per single ESX server, you must observe the hardware limitations of the ESX server system according to the manufacturer's technical specifications and the maximum memory supported by the running ESX server version.

• Storage: vRPA/D supports only EMC storage arrays and SCSI-based LUNs. Note that the VMAX 20K and 40K provide FTS, which enables non-EMC storage arrays to be attached to the VMAX; also note that VPLEX supports over 35 non-EMC storage array families.
  o Choosing EMC VMAX SAN storage, an EMC VPLEX platform, or EMC VNX/CLARiiON SAN storage allows vRPA/D to support the array-based splitter (the "Symmetrix splitter", "VPLEX splitter", or "VNX/CLARiiON splitter") as well as the host-based splitter ("Kdriver")
  o Choosing non-EMC SAN storage is not possible.
The SAN array should have enough provisioned free space to allocate the RecoverPoint volumes (including two repository volumes, the pairing LUNs, and the journal volumes, according to RecoverPoint documentation and best practices).

• FC SAN switch:
  o A RecoverPoint-supported FC SAN switch (with the applicable license installed)

Note: If the RecoverPoint splitter technology is the "fabric splitter" type, make sure the required switch configuration is set according to the RecoverPoint documentation for fabric splitter deployments.

• Software:
  o The VMware virtual server OS (hosting the RecoverPoint VM) can be:
    - ESX 4.0.1 / 4.1 / 5.0
    - ESXi 4.0.1 / 4.1 / 5.0
  o EMC RecoverPoint 3.4 or 3.5
  o A license for EMC RecoverPoint 3.4 or 3.5 (see the section "Requesting a RecoverPoint license" below)
  o Storage array license: if you are using the VNX/CLARiiON array splitter, you will need to install an enabler on your VNX/CLARiiON array to support it – see the applicable RecoverPoint documentation
  o SAN FC switch license: if you are using a fabric-based splitter, a specific license may need to be installed, in addition to a supported switch firmware version – see the applicable RecoverPoint documentation
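The memory guidance and the HBA-count cap above can be captured in a short sizing sketch. This is our own illustration, not RecoverPoint tooling; in particular, the two-FC-ports-per-vRPA/D assumption is ours, chosen because it reproduces the 8 VM (dual-port) and 16 VM (quad-port) figures stated in the note.

```python
# Recommended (minimum_GB, optimal_GB) host memory for a given count of
# vRPA/D instances on a single ESX server, from the list above.
MEMORY_GUIDANCE = {1: (4, 8), 2: (8, 16), 3: (12, 16), 4: (16, 24)}

MAX_DIRECTPATH_HBAS = 8   # per ESX/ESXi host
PORTS_PER_VRPA = 2        # assumption: two FC ports bound to each vRPA/D

def host_memory_gb(vrpa_count):
    """Return (minimum, optimal) host memory in GB for up to 4 vRPA/Ds."""
    if vrpa_count not in MEMORY_GUIDANCE:
        raise ValueError("beyond 4 VMs, consult the server's hardware limits")
    return MEMORY_GUIDANCE[vrpa_count]

def max_vrpa_per_esx(ports_per_hba, hba_cards=MAX_DIRECTPATH_HBAS):
    """Upper bound on vRPA/D instances given the 8-HBA DirectPath limit."""
    cards = min(hba_cards, MAX_DIRECTPATH_HBAS)
    return (cards * ports_per_hba) // PORTS_PER_VRPA
```

With dual-port HBAs this yields a cap of 8 vRPA/Ds per ESX server, and with quad-port HBAs a cap of 16, matching the note above.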
Deploying vRPA/D

Preparing the VMware Hypervisor (ESX Server)

Verifying that virtualization is enabled in the server BIOS

The VMware virtualization hypervisor (VMware ESX) requires that the server BIOS has "Virtualization Technology" enabled. Figure 1 shows this option as "Enabled", which satisfies the VMware ESX server installation prerequisite.

Example: On Dell servers, after powering on the server, press F2 to enter the system BIOS console and navigate to the "Processor Settings" section in the main BIOS screen.

Figure 1 – Dell BIOS menu to enable virtualization support by the CPU

Deploying ESX server

Proceed with a normal installation of your ESX server setup.

Configuring VMware DirectPath devices on ESX server

Upon successful completion of the ESX server installation, the ESX server performs its first full reboot. At this point the ESX server is up and running and ready for setup of the pass-through option on the vRPA/D PCI devices (required for use by VMware DirectPath). Table 2 shows the VMware DirectPath maximum values for both ESX 4.x and ESXi 4.x:

Table 2 – VMware DirectPath maximum values
VMware DirectPath PCI devices per VM: 2 (in 4.0.1) / 4 (in 4.1 and 5.0)
VMware DirectPath SCSI targets per VM: 60 (array initiator targets, not LUNs)
VMware DirectPath physical devices per ESX: 8 (physical HBA cards)
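A planned configuration can be checked against the Table 2 maximums with a short sketch like the following. The function name and structure are illustrative only (not VMware tooling); the limits themselves come straight from Table 2.

```python
# VMware DirectPath maximums from Table 2.
MAX_PCI_DEVICES_PER_VM = {"4.0.1": 2, "4.1": 4, "5.0": 4}
MAX_SCSI_TARGETS_PER_VM = 60
MAX_PHYSICAL_DEVICES_PER_ESX = 8

def check_directpath_plan(esx_version, pci_per_vm, scsi_targets_per_vm,
                          physical_hbas):
    """Return a list of Table 2 limits that a planned config violates."""
    problems = []
    limit = MAX_PCI_DEVICES_PER_VM.get(esx_version)
    if limit is None:
        problems.append("unsupported ESX version: %s" % esx_version)
    elif pci_per_vm > limit:
        problems.append("more than %d DirectPath PCI devices per VM" % limit)
    if scsi_targets_per_vm > MAX_SCSI_TARGETS_PER_VM:
        problems.append("more than 60 SCSI targets per VM")
    if physical_hbas > MAX_PHYSICAL_DEVICES_PER_ESX:
        problems.append("more than 8 physical DirectPath HBAs per ESX")
    return problems
```

An empty result means the plan stays within the published maximums; note in particular that a VM on ESX 4.0.1 may bind only 2 DirectPath PCI devices, versus 4 on 4.1 and 5.0.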
vRPA/D supports both dual-port and quad-port HBAs. Quad-port HBAs are recommended because you can:
• Increase the count of available VMware DirectPath HBA ports (and the RecoverPoint VM count per ESX server) on a server with limited PCI slots
• Use the ESX server for both VMware DirectPath (vRPA/D) and regular ESX-to-SAN connectivity (by using only 2 of the 4 ports on the HBA for vRPA/D)

Enabling the VMware DirectPath devices
1) Connect with the VI Client to either the ESX server or the managing vCenter server.
2) Select the ESX server in question, go to the "Configuration" tab, and under Advanced "Settings", on the far right side of the screen, choose "Configure Passthrough".
3) A full list of the devices available for VMware DirectPath use is presented in a pop-up window titled "Mark devices for passthrough".
4) Select the HBA ports as appropriate (see Figure 2, which demonstrates enabling an HBA for the VMware DirectPath feature).

Figure 2 – Selecting DirectPath PCI devices for vRPA/D

5) An ESX reboot is required for this setting to take effect.

Installing RecoverPoint as a VM
1) Download the current RecoverPoint ISO from Powerlink; if you are a customer, this operation must be performed by your Account Representative.
2) Select the appropriate machine(s) running ESX.
3) Install the physical HBA card(s) into these machines.
4) Deploy a "New Virtual Machine" using the VMware wizard:
a. Give it an appropriate name, such as vRPA/D1.
5) Select the VM type "Debian GNU/Linux 5 (64-bit)".
6) Assign the relevant virtual hardware resources to the new VM, as described below:
• 8 GB RAM (minimum of 4 GB)
• 4 vCPUs (minimum of 2 vCPUs)
• 2 x vNIC (WAN and LAN connectivity and management)
• 70 GB hard disk (the initial utilized disk space for the OS is 8 GB)
Figure 3 - vRPA/D VM hardware resources view
7) Attach the RecoverPoint install image/CD using one of the following options:
a. Mount a local bootable RecoverPoint DVD (mounting the physical DVD/CD drive on the ESX server hardware), burned from the ISO you downloaded in Step 1.
b. Mount a bootable RecoverPoint image copied to the desktop you are working on (by clicking the "cd icon" in the virtual console), or from another datastore (if you previously copied it over), using the ISO image downloaded in Step 1.
8) VMware Tools - since RecoverPoint code does not support VMware Tools, you must skip this step.
Note: It is important to provision sufficient virtual resources; otherwise the RecoverPoint deployment may fail to complete and errors will be triggered.

Binding VMware DirectPath ports to the vRPA/D
Once the vRPA/D has been installed as a VM, power down the VM and "Add" a new "PCI device" from the list of available VMware DirectPath device ports. Figure 4 shows the Virtual Machine Properties of a VM configured to expose two VMware DirectPath HBAs (QLogic).
Figure 4 - Binding an available DirectPath HBA to the vRPA/D VM

Pre-configuring the vRPA/D - RPA network settings
The following steps provide the minimum connectivity configuration that will later allow deploying a RecoverPoint cluster using the RecoverPoint Deployment Manager.
1) While connected through the VI Client, open a "Console" session (virtual KVM) to the vRPA/D virtual machine.
2) After logging into the RecoverPoint management console (using the "boxmgmt" user), you are prompted to enter a temporary IP address, subnet and default gateway; proceed with the temporary IP network settings (as shown in Figure 5).
Figure 5 - Pre-configuring a fresh vRPA/D installation
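Before typing the temporary settings into the boxmgmt prompt, it can help to sanity-check them. The sketch below is not part of RecoverPoint - it is a hypothetical helper using Python's standard `ipaddress` module to verify that the address is a usable host IP and that any gateway supplied sits on the same subnet.

```python
import ipaddress

def check_temp_settings(ip, netmask, gateway=None):
    """Return True if the IP is a usable host address and the optional
    gateway is reachable on the local subnet."""
    iface = ipaddress.ip_interface(f"{ip}/{netmask}")
    # The network and broadcast addresses are not valid host IPs.
    if iface.ip in (iface.network.network_address, iface.network.broadcast_address):
        return False
    # A gateway (when one is required) must be on the same subnet.
    if gateway is not None and ipaddress.ip_address(gateway) not in iface.network:
        return False
    return True

print(check_temp_settings("10.10.1.50", "255.255.255.0"))               # True
print(check_temp_settings("10.10.1.50", "255.255.255.0", "10.10.2.1"))  # False
```

The addresses above are placeholders; substitute your own temporary management addressing.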
Note: In this environment, a default gateway was not required. RecoverPoint can then be configured via either the GUI or the CLI wizards.

Pre-configuring the vRPA/D - FC port settings
1) Review the current WWNs registered by the RecoverPoint vRPA/D from the RecoverPoint CLI "Main Menu" by entering the following menu sequence: [3] "Diagnostics" > [2] "Fibre Channel Diagnostics" > [2] "View Fibre Channel Details"
Figure 6 - Review vRPA/D FC Details menu
Note: Although QLogic HBAs have their own WWNs, the RecoverPoint appliance layers its own native WWNs on top of them.
Figure 7 - vRPA/D native WWN mapping
2) The RecoverPoint WWNs also appear in the FC switch as KASHYA ports (Figure 8 shows example output from a Brocade FC switch).
Figure 8 - vRPA/D FC ports view in the FC switch
3) To review the array controllers, in the "Fibre Channel Diagnostics" menu select option [3], "Detect Fibre Channel Targets". Figure 9 displays the WWNs of the CX4 ports that have been zoned to the vRPA/D.
Figure 9 - Detecting target WWNs via the vRPA/D

Zoning the vRPA/D to the storage array
A vRPA/D is bound to the splitter environment being used (host-based or array-based).
Example: with the VNX/CLARiiON array-based splitter, the required zoning must include zoning each vRPA/D HBA port to both of the EMC array controller ports (on CLARiiON, SPA and SPB).
Note: VMAX 10K support requires RecoverPoint v3.4.1 or later; VPLEX, VMAX 20K and VMAX 40K require RecoverPoint v3.5 or later.
For VNX/CLARiiON arrays the zoning should include:
• vRPA/D HBA0 ports -> both array controllers' ports
• vRPA/D HBA1 ports -> both array controllers' ports

vRPA/D Initiator Registration & Storage Allocation
Once the vRPA/D port initiators are zoned and successfully logging into the storage array, those initiators need to be manually registered. The example below shows the equivalent registration steps on a CLARiiON (for VMAX, consult the Symmetrix Technical Notes on EMC Powerlink; for VPLEX Local and VPLEX Metro, consult the VPLEX Technical Notes on EMC Powerlink).
1) Register the newly discovered initiators as a "New Host" with its own IP address. The initiators for the vRPA/D need to be registered with an Initiator Type of "RecoverPoint Appliance" and a "Failover Mode" of "4". (Figure 10 shows an example of adding vRPA/D initiators as RecoverPoint appliance initiators.)
Figure 10 - vRPA/D FC port registration in the array management GUI
2) Once the initiators are registered to the new vRPA/D, the vRPA/D can be added as a host to a Navisphere/Unisphere Storage Group in order to access the required storage/LUNs.
3) The vRPA/D(s) require LUN masking access in the same manner as a physical RPA would. The bullets below summarize the core requirement for each LUN type (for further details, see the RecoverPoint Administrator's Guide, available on EMC Powerlink):
a. Journal volumes - must be exposed only to the applicable site's vRPA/Ds
b. Repository volume - must be exposed only to the applicable site's vRPA/Ds
c. Replicated volume copies - must be exposed to both the applicable site's vRPA/Ds and the site's hosts
4) All of the masked LUNs can easily be verified from the vRPA/D "Diagnostics" menu in the RecoverPoint management CLI.
Figure 11 - Verifying LUN masking via the vRPA/D "Diagnostics" menu
Note: Figure 11 shows such a verification for two masked LUNs (a 4 GB production LUN and a second, 50 GB LUN), both of which are exposed correctly to the vRPA/D.
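The masking rules in a-c above lend themselves to a quick audit. The sketch below is a hypothetical helper (the host and appliance names are made up); it encodes only what the three bullets state: journals and the repository are exposed solely to the site's vRPA/Ds, while replica copies are also exposed to the site's hosts.

```python
def allowed_consumers(lun_type, site_vrpads, site_hosts):
    """Who may see a LUN of the given type, per rules a-c."""
    if lun_type in ("journal", "repository"):
        return set(site_vrpads)                      # rules a and b
    if lun_type == "replica_copy":
        return set(site_vrpads) | set(site_hosts)    # rule c
    raise ValueError("unknown LUN type: " + lun_type)

def masking_violations(lun_type, exposed_to, site_vrpads, site_hosts):
    """Return the set of consumers that should NOT see this LUN."""
    return set(exposed_to) - allowed_consumers(lun_type, site_vrpads, site_hosts)

vrpads = ["vRPAD1", "vRPAD2"]
hosts = ["esx-prod-01"]

# Exposing a journal volume to a host violates rule (a):
print(masking_violations("journal", ["vRPAD1", "esx-prod-01"], vrpads, hosts))
# Replica copies may legitimately be exposed to both:
print(masking_violations("replica_copy", ["vRPAD1", "esx-prod-01"], vrpads, hosts))
```

In practice you would compare this against the Storage Group membership reported by Navisphere/Unisphere and the LUN list shown in the vRPA/D "Diagnostics" menu.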
Deploying a vRPA/D Cluster
Once vRPA/D storage and network connectivity have been configured successfully, we can proceed to a full-scale deployment of RecoverPoint using the RecoverPoint Deployment Manager wizard, which provides the safest and most fully automated deployment of RecoverPoint appliances.

Deploying a vRPA/D Cluster using Deployment Manager
A vRPA/D cluster deployment is handled in the same manner as a regular physical RPA cluster. RecoverPoint Deployment Manager is used for RecoverPoint deployment and provides the most automated and error-free deployment method. Below is the full procedure for vRPA/D cluster deployment using the RecoverPoint Deployment Manager tool.
1) Launch the RecoverPoint Deployment Manager wizard; you will first be asked to log into the RP Deployment Manager.
Figure 12 - RP Deployment Manager: Authentication screen
Note: The RP Deployment Manager also contains wizards for RPA upgrades and replacement.
2) Select the "RecoverPoint Installer Wizard" to begin configuring the vRPA/D network identity (IP address, subnet mask, default gateway, management IP addresses and the RPA cluster details).
Figure 13 - RP Deployment Manager: Deployment wizard
3) Review the prerequisites for the installation. At this stage, after completing all of the previous steps for the vRPA/D, all of the prerequisites should be satisfied (see Figure 14).
Figure 14 - RP Deployment Manager: vRPA/D prerequisites
4) The next screen prompts for an installation structure file; create a new file or use an existing saved configuration file.
Note: Figure 15 shows a consolidated view of the settings required when configuring a vRPA/D cluster (i.e. the number of sites, the number of cluster nodes at each site and the type of replication between sites).
Figure 15 - RP Deployment Manager: Environment Settings screen
5) Upon completion of the previous installer screen, you will be required to configure the vRPA/D network (Management and WAN) details for vRPA/D Site A, including the site's vRPA/D instances (Figure 16 shows an example configuration of two vRPA/Ds in Site A).
Figure 16 - RP Deployment Manager: Configuring vRPA/D Site A networks
6) The next wizard screen (Figure 17) requires answering the "Advanced settings" questions, which relate to the splitter type in use and other environment variables specific to the storage array types in use.
Figure 17 - Configuring vRPA/D sites' advanced settings screen
7) Upon completion of the previous installer screen, you will be required to configure the vRPA/D network (Management and WAN) details for vRPA/D Site B, including the site's vRPA/D instances (Figure 18 shows an example configuration of two vRPA/Ds in Site B).
Figure 18 - RP Deployment Manager: Configuring vRPA/D Site B networks
8) Upon completion of the previous step, you will be asked to approve the overall vRPA/D configuration and the vRPA/D sites it applies to. This step will lock the required vRPA/D sites'
configuration and prepare them to be applied on each of the related vRPA/D instances (see Figure 19 for this step's screen).
Figure 19 - RP Deployment Manager: Applying configuration
Note: If only one of the sites is to be installed at this stage, the wizard provides a checkbox to confirm whether or not the other site is already installed.
9) The next wizard screen provides the installer's confirmation of the previously applied settings (see Figure 20).
Figure 20 - RP Deployment Manager: result screen after applying the vRPA/D configuration
10) Upon successful confirmation in the previous step, the installer begins the vRPA/D storage configuration wizard, showing the managed vRPA/D WWNs (see Figure 21).
Figure 21 - RP Deployment Manager: Site A zoning and LUN masking configuration
11) The wizard then runs the vRPA/D SAN diagnostics, providing the list of available LUNs to be used as the vRPA/D cluster Repository volume for Site A (equivalent to a traditional cluster's quorum disk). You will be required to select the desired LUN to act as the site's Repository volume (see Figure 22).
Figure 22 - RP Deployment Manager: Site A Repository volume selection
12) Completing the repository volume selection in the previous step displays the summary of the storage configuration for Site A (see Figure 23).
Figure 23 - RP Deployment Manager: Site summary screen
13) The installer wizard proceeds through exactly the same sequence of storage configuration details as for Site A, this time for the remote/target site (Site B).
14) Upon completion of the storage configuration for Site B, a summary screen appears to indicate the success of the installer process; it also allows launching the RecoverPoint Management Application from a given site (see Figure 24).
Figure 24 - RP Deployment Manager: Success summary of the vRPA/D cluster
Configuring the RecoverPoint Splitters
Note: This procedure assumes that the splitters were installed correctly. To configure the RecoverPoint splitter, perform the following steps:
1) Open the RecoverPoint Management Application, right-click the "Splitters" object and choose "Add New Splitter".
2) From the list of available splitters, choose the applicable splitters required to allow RecoverPoint replication (Figure 25 shows an example of discovered VNX/CLARiiON splitters for both vRPA/D sites) and click "Next".
Figure 25 - Configuring vRPA/D splitters screen
3) Proceed with the on-screen instructions (for the VNX/CLARiiON array-based splitter, you will be asked to provide the array "login credentials" or to select "Configure login credentials later" for both sites) and, upon completion of the splitter information, click "Finish" (Figure 26 shows a success summary of the added VNX/CLARiiON splitters).
Figure 26 - RecoverPoint validated splitters

Configuring RecoverPoint CGs with vRPA/Ds
Configuring a RecoverPoint Consistency Group (CG) using vRPA/Ds is possible because the virtualization layer is transparent to the application management.
The consistency group wizard navigates through the required CG elements, such as the CG name, the preferred RPA, the policy attributes for each copy, the volumes to be used as the source/replica in the replication sets, and the relevant journal volumes. Once the entire consistency group configuration has been completed, a summary screen is shown before the new replication is initiated (see Figure 27).
Figure 27 - Configured vRPA/D CG summary screen
Upon completing the CG wizard, we can review the replication status for the given CG. Figure 28 shows the initial synchronization completion for a RecoverPoint CLR configuration, where the "Production Source" copy has "Direct Access" while both replica copies ("Local Replica" and "Remote Replica") show a "No Access" state.
Figure 28 - RecoverPoint CLR replication topology
More in-depth replication analysis is available through RecoverPoint's Management GUI on the "Statistics" tab (see Figure 29).
Figure 29 - RecoverPoint statistics panel indicating replication state

Replacing a vRPA/D with the RPA Replacement Wizard
Replacing a vRPA/D within a clustered RecoverPoint configuration requires the RecoverPoint Deployment Manager wizard. The procedure below guides you through the steps needed to replace a vRPA/D using the Deployment Manager wizard.
1) Launch the RecoverPoint Deployment Manager wizard.
2) Select the "RPA Replacement Wizard" option and click "Next" (see Figure 30).
Figure 30 - RP Deployment Manager: choosing the vRPA/D replacement option
Note: This procedure imports the vRPA/D into the existing configuration, providing the new vRPA/D with the same configuration and management details as the previous/failed vRPA/D.
3) Highlight the failed vRPA/D (which is about to be replaced), as shown in Figure 31.
Note: Notice the checkbox at the bottom of the screen that prompts the user to confirm whether or not the replacement vRPA/D has been configured with the required RP code and network identity to allow an automatic replacement.
4) When the new/replacement vRPA/D is online and configured with the required temporary network connectivity, check the checkbox at the bottom of the screen to allow the wizard to proceed, and click "Next".
Figure 31 - RPA Replacement wizard: select the failed vRPA/D
5) Confirm the status of the replacement RPA by checking the checkbox at the bottom of the screen (shown in Figure 32) and click "Next".
Figure 32 - RPA Replacement wizard: Confirm the failed vRPA/D
6) The next screen requires approval for cloning (spoofing) the failed vRPA/D's WWN configuration onto the new vRPA/D. By spoofing the WWNs, no new zoning is required at the SAN level.
Notice: If new WWNs are introduced, they will need to be zoned accordingly!
Figure 33 - RPA Replacement wizard: validating storage configuration
7) The wizard automatically runs through the validation process of the storage and SAN configurations (before the final "apply changes" phase for the settings on the new vRPA/D).
8) Once all of those changes have been applied, the wizard provides a summary of the steps completed as part of replacing the faulted vRPA/D and resuming cluster operations with the new vRPA/D (shown in Figure 34).
Figure 34 - vRPA/D Replacement wizard: Applying configuration screen

RecoverPoint Splitters
There are five options to choose from when considering the RecoverPoint splitter:
• Windows host splitter (for RecoverPoint/CL and RecoverPoint/EX with RecoverPoint 3.5, and for RecoverPoint/SE, RecoverPoint/EX and RecoverPoint/CL with RecoverPoint 3.4)
• VMAX-based splitter
• VPLEX-based splitter
• VNX/CLARiiON-based splitter
• Brocade/Cisco intelligent fabric splitter
Choosing a RecoverPoint splitter depends on many environmental factors. In this example, RecoverPoint uses the array-based VNX/CLARiiON splitter. Installing the "RecoverPoint Splitter" enabler in the FLARE or VNX Operating Environment enables this feature directly on the array. For Symmetrix VMAX and VPLEX, the splitter is already enabled. The following displays a list of all of the software features that are enabled on one of the CX4 arrays used in this example.
Figure 35 - RecoverPoint splitter view in the CLARiiON management GUI
The Software tab under the "Properties" section of the CX4 array is the only place where the RecoverPoint splitter can be viewed from the Navisphere perspective. There is nothing else to tune or configure on the CLARiiON array in relation to RecoverPoint. As with other layered applications, the RecoverPoint splitter is pre-installed as part of the FLARE code, but is not visible or available to the user until the RecoverPoint splitter enabler key is installed. This enabler key can be installed via the Navisphere Service Taskbar.
When an array-based splitter is used, the maximum size of a volume (LUN) that can be replicated is 32 TB. In environments where an array-based splitter is not being used, the maximum size for a replicated LUN is 2 TB.
The VMAX splitter is supported on the VMAX series; the VPLEX splitter is supported on VPLEX Local and VPLEX Metro; and the VNX/CLARiiON splitter is supported on the VNX series and on CX3 and CX4 arrays. (The VNX/CLARiiON splitter does not support VNXe, AX4-5 or pre-CX3 storage arrays.)

WWN Spoofing
When moving or replacing a vRPA/D, it is possible to retain the previous vRPA/D's WWNs and apply them to the new vRPA/D. A RecoverPoint appliance generates its own WWNs during installation, based in part on the underlying HBA WWN. The trick to enabling easy mobility of a vRPA/D is to hardcode the WWNs so that they do not change when ported to a new set of HBAs (in the same host or a different one). Doing this allows a vRPA/D to move to another host with different HBAs without the need for additional zoning or LUN masking. The process looks like this:

Hard-coding the WWNs
1. On the vRPA/D console (connected via SSH or via the VI Client), carry out the following steps.
2. Enter the Diagnostics Menu.
3. Enter the Fibre Channel Diagnostics Menu.
4. Select the View Fibre Channel Details option.
5. If using SSH, copy and paste the WWNs out to a text file for later use.
6. Navigate back through the menus and enter the Cluster Operations Menu.
7. Detach the vRPA/D from the cluster.
8. Once detached, go into the Setup menu, then option 1 to Modify, then specify the site of the vRPA/D you want to modify.
9. Select option 3 to set the WWN Name / Port Pair Addresses.
10. Specify the vRPA/D you want to change, and the number of HBA ports that the RPA uses.
11. Using the WWN details copied earlier, paste in the WWN and Node WWN details for each HBA port in sequence.
12. Once done, back up three levels in the menu tree and select option 5 to Apply the configuration.
13. This gives you a summary of the entire cluster configuration, where you can see the WWNs you just hardcoded for the relevant vRPA/D.
14. Confirm that you want to apply the configuration, then enter the site and box number to apply the details to.
15. Finally, reattach the vRPA/D to the cluster, which will cause the vRPA/D to reboot.
16. Confirm that the cluster resumes normal operation.

Moving the vRPA/D
There are various ways to relocate a vRPA/D among ESX servers, as shown below:
Ø Manual move using vMotion as part of a vSphere cluster (applicable to vSphere 4.01 and later)
Ø Automated failover using a vSphere cluster as part of an HA/DRS failover policy (valid only for vSphere 4.1 and later)
Ø Automated failover using SRM (compatible)
Note: It is recommended to configure your vRPA/D with spoofed WWNs when you plan to move or fail over the vRPA/D to other ESX servers, because each ESX server has its own unique attached HBA WWNs, which can otherwise cause the vRPA/D code to fail.

Manual move using vMotion as part of a vSphere cluster
1.
Verify that the new ESX server has an HBA identical to that of the old ESX server (where the vRPA/D is currently hosted); otherwise, the vRPA/D will fail to start on the new ESX server.
2. Move the vRPA/D using a simple drag and drop in vCenter, keeping the storage locations as they were.
3. Re-configure the vRPA/D to assign the correct set of physical HBAs that you want the RPA to use in the new host. A vRPA/D uses VMware DirectPath to get direct access to the required QLogic HBAs, so remove the two HBAs that were being used in the original host, and on the new host assign access to two new HBAs.
4. Once complete, power on the virtual machine and validate that the vRPA/D comes up cleanly by observing the VM state in the vSphere GUI or by using the RecoverPoint GUI under the "RPA" tab.
Note: This process can be done in advance of setting up the vRPA/D cluster, or afterwards if you decide to enable this behavior at a later date. This feature might be useful if:
• You want to do some maintenance on the physical host, and want the RecoverPoint cluster to keep running on all vRPA/Ds while this is happening.
• You want to upgrade the hardware that a vRPA/D runs on by moving it to another machine with better processors or faster HBAs, as long as the new hardware still adheres to the support list shown above.
• The customer wants to migrate their RecoverPoint appliances from physical to virtual, in which case they can hardcode the WWNs from the physical RPA into the vRPA/D, allowing for a quick and easy transfer.

Automated failover using a vSphere cluster as part of an HA/DRS failover policy
vSphere 4.1 introduced a new vMotion feature named dvMotion (DirectPath vMotion), which can be used to provide automated failover of a vRPA/D using the vMotion engine. The details are complex; if you are interested, send an email to the vRPA/D team at RecoverPoint-vRPA-DirectPath@emc.com
Note: This feature relies on the experimental vSphere 4.1 "dvMotion" feature.

Automated failover using SRM
The VMware Site Recovery Manager product enables automated failover of VMware sites and clusters. It is highly suggested to use the compatible SRM functionality with ESXi 4 and later. vSphere 5 introduced improved vMotion and SRM capabilities; refer to the appropriate VMware documentation for full details.

Comments and getting help
Product and technical support are available as follows:

Product information
For documentation and release notes, or for information about licensing and service, go to the RecoverPoint landing page on Powerlink (RecoverPoint Family) or send an email to RecoverPointDealSupportDesk@emc.com

RecoverPoint licensing information
To request a license for your vRPA/D configuration, do the following:
Go to Powerlink and, in the top-level menu, navigate to Request Support -> Create Service Request.
• Mark it as "this is a: technical problem"
• Enter "N/A" as the customer site ID
• Enter a contact name
• Select RecoverPoint as the product
• In the Problem Summary, enter "License Request for vRPA/D"
• In the Problem Description, enter the following information:
o "This is a license request for vRPA/D"
o The version of RecoverPoint (3.4 or 3.5) required
o State whether you require a RecoverPoint/SE, RecoverPoint/EX or RecoverPoint/CL license
o State the replicated capacity that is required, from 1 to 300 TB (300 TB is the maximum replicated capacity that can be requested)
o State whether you need local, remote, or both local and remote replication
o State how many RPAs you will need (the number of RecoverPoint virtual machines)
o State where the VMs will be installed
o State whether this is for an internal lab or for a proof of concept
§ If this is for a POC, please provide the name of the customer
o Provide your full contact information, including name, address, phone and email
• Submit.

Conclusion
This white paper contains enough information to install and operate RecoverPoint as a virtual machine. If you have issues, comments, or questions about this document, include the relevant page numbers and any other information that will help us locate the content you are addressing. Send comments to: RecoverPoint-vRPA-DirectPath@emc.com

References
If you are having difficulty with vRPA/D, make sure you read these references before sending an email.
EMC references
• Introduction to EMC RecoverPoint 3.5 New Features and Functions
• EMC RecoverPoint Family Overview
VMware references
• Configuration Examples and Troubleshooting for VMDirectPath
• Configuring VMDirectPath I/O pass-through devices on an ESX host
• PCI Passthrough with PCIe devices behind a non-ACS switch in vSphere
• VMware Tools Installation Guide For Operating System Specific Packages
• Performance Best Practices for VMware vSphere® 4.0
• Installing VMware Tools in a Linux virtual machine using a Compiler
• Configuration Maximums - ESX 4.1
• Configuration Maximums - ESX 4.0