1. IBM Power Systems
PowerVM: Virtualization without limits
© 2012 IBM Corporation
César Diniz Maciel
Executive IT Specialist
IBM Global Techline
cmaciel@us.ibm.com
3. IBM Power Systems
PowerVM: Virtualization Without Limits
Sold with more than 70% of Power Systems
Improves IT resource utilization
Reduces IT infrastructure costs
Simplifies management
4.
IBM Power Systems
Why virtualize workloads with PowerVM?
Creating a virtualized workload with PowerVM is simple:
– Create a new PowerVM logical partition (LPAR) or virtual machine (VM)
– Install the operating system (AIX, IBM i or Linux) in the VM
– Install the workload application(s) in the VM
– Configure the operating system and applications as required
At this point, the completed virtualized workload can be stored, copied,
archived or modified just like any other file
The benefits of virtualizing workloads with PowerVM in this way include:
– Rapid provisioning – deploying the ready-to-run workload is a quick and easy process
– Scalability – deploying multiple copies of the same workload type is simplified
– Recoverability – bringing a workload back online after an outage is fast and reliable
– Consolidation – many diverse workloads can be hosted on the same server
All of these benefits save system administrator time and resources
– In addition, workload consolidation offers significant IT infrastructure cost reductions
5. Source: Does Your Virtualization Platform Matter? Getting the Most Out of Your IT Platforms
IBM Power Systems
Power is “Optimized for efficiency”
PowerVM is the only hypervisor that delivers on the promise of efficiency as you scale your infrastructure
The more you use PowerVM, the lower your cost per unit of work.
– Data normalized to a Medium VMware deployment
– PowerVM cost per VM declines 19.3% as the environment is scaled from Medium to Very Large in size
– Competing virtualization cost per VM increased up to 1.92x over the same scale
PowerVM versus competitive virtualization study: 61,000 customers surveyed.
Source: Virtualization; Solitaire Interglobal Ltd (All rights reserved); April 2012.
6. IBM Power Systems
PowerVM Editions (Q4 2012 features)
Editions: Express / Standard / Enterprise
Concurrent VMs: 2 per server (Express); 20 per core*, up to 1000 (Standard and Enterprise)
Features:
• Virtual I/O Server
• NPIV
• Suspend/Resume
• Shared Processor Pools
• Thin Provisioning
• Live Partition Mobility
• Active Memory Sharing
• Shared Storage Pools Enhancements
• VIOS Performance Advisor
• Linked Clones
• Partition Mobility Performance Improvements
* Requires eFW760   ** Requires VMControl
7. IBM Power Systems
What is the VIOS?
A special-purpose appliance partition
– Provides I/O virtualization
– Advanced POWER Virtualization enabler
First available in 2004
Built on top of AIX, but not an AIX partition
IBM i first attached to VIOS in 2008 with IBM i 6.1
VIOS is licensed with PowerVM
8.
IBM Power Systems
Two I/O Server Options
IBM i hosting (IBM i client partitions on the POWER6/7 hypervisor):
• Built into IBM i
• Host disk, optical, tape
• Consolidate Ethernet traffic
• Same technology as hosting AIX, Linux, and iSCSI
VIOS hosting:
• Virtual I/O Server (part of PowerVM)
• Host disk, optical, tape
• Bridge Ethernet traffic
• Attach external storage
• Advanced virtualization functions
• Also virtualizes for AIX and Linux on POWER5
9. IBM Power Systems
IBM PowerVM Virtual Ethernet
PowerVM Ethernet switch
– Part of the PowerVM Hypervisor
– Moves data between LPARs
IBM i 7.1 TR2 LPAR
– Bridges traffic to and from external networks
[Diagram: the VLAN-aware Ethernet switch in the PowerVM Hypervisor connects virtual CMN adapters in an IBM i 7.1 TR2 bridging partition (with bridge-ID LINDs) and in client partitions 1 and 2 to an external Ethernet switch]
10. IBM Power Systems
Why use the VIOS?
I/O Capacity Utilization
Storage Allocation Flexibility
Ethernet Flexibility
Memory Sharing
Suspend/Resume
Mobility
11. IBM Power Systems
Progression of Virtual Storage Devices on VIOS
The ability to share virtual SCSI disks backed by a Physical Volume (PV) or
a Logical Volume (LV) has been available from the beginning.
VIO server 1.2 gave the ability to share the CDROM drive with client LPARs
through Virtual Optical devices.
VIO Server 1.5 added the ability to create “file-backed” virtual devices in
addition to virtual SCSI devices backed by a PV or LV.
Using the cpvdi command a virtual device image can now be copied from
one virtual target device (VTD) to a different VTD. This feature was added
under VIO 1.5.2.1-FP11.1.
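The progression above can be sketched as a small model of virtual target devices backed by a physical volume, a logical volume, or a file, plus a cpvdi-style image copy. This is an illustrative sketch: the class, attribute and method names are hypothetical, not VIOS APIs.

```python
# Illustrative model of VIOS virtual target devices (VTDs). On a real VIOS
# the equivalents are the mkvdev and cpvdi commands.
class VirtualTargetDevice:
    BACKINGS = {"PV", "LV", "FILE"}  # physical volume, logical volume, file-backed

    def __init__(self, name, backing, size_gb):
        if backing not in self.BACKINGS:
            raise ValueError(f"unsupported backing store: {backing}")
        self.name, self.backing, self.size_gb = name, backing, size_gb
        self.blocks = {}  # sparse block map standing in for the disk image

    def copy_to(self, other):
        """Copy this device's image to another VTD, as cpvdi does."""
        if other.size_gb < self.size_gb:
            raise ValueError("target VTD is smaller than the source image")
        other.blocks = dict(self.blocks)

src = VirtualTargetDevice("vtscsi0", "LV", size_gb=20)
src.blocks[0] = b"boot"
dst = VirtualTargetDevice("vtscsi1", "FILE", size_gb=40)
src.copy_to(dst)  # a file-backed VTD can receive an LV-backed image
```

The point of the model is that once a device image is just data behind a VTD, its backing store (PV, LV, or file) is interchangeable from the client's point of view.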
12. IBM Power Systems
vSCSI (Classic)
[Diagram: POWER6 with IBM i 6.1.1; a VIOS with an FC HBA serves IBM i, AIX, and Linux client partitions through the hypervisor; the IBM i client sees device type 6B22]
• Assign storage to the physical HBA in the VIOS
• Hostconnect is created as an open storage or AIX host type; requires 512-byte-per-sector LUNs to be assigned to the hostconnect
• Cannot migrate existing direct-connect LUNs on IBM i
• May migrate LUNs on AIX and Linux if they have a UDID
• Many storage options supported
13. IBM Power Systems
vSCSI (Classic) Storage Device Virtualizer
[Diagram: POWER6 with IBM i 6.1.1; storage is assigned to the VIOS over an FC HBA; a vSCSI server adapter (vhostXXX) in the VIOS maps hdisk1/hdisk2 to the vSCSI client adapter in the IBM i client, which sees device type 6B22]
• Storage is assigned to the VIOS partition
• Within the VIOS you map the hdisk (LUN) to the vhost corresponding to the client partition
• Storage management allocation is done from both the external storage box and the VIOS
• Flexible disk sizes: up to 2 TB minus 512 bytes on IBM i
• AIX supports disks up to 16 TB
• 16 disks per vSCSI adapter on IBM i
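The hdisk-to-vhost mapping described above can be sketched as follows. The function and dictionary are hypothetical stand-ins for the VIOS `mkvdev -vdev hdiskN -vadapter vhostX` command, with the 16-disks-per-adapter limit for IBM i taken from the slide.

```python
# Illustrative sketch of vSCSI mapping: LUNs (hdisks) owned by the VIOS are
# mapped to the vhost adapter that corresponds to a client partition.
IBM_I_DISKS_PER_VSCSI_ADAPTER = 16  # limit stated on the slide

def map_hdisk(mappings, vhost, hdisk, client_os="IBM i"):
    """Record hdisk -> vhost, enforcing the IBM i per-adapter disk limit."""
    disks = mappings.setdefault(vhost, [])
    if client_os == "IBM i" and len(disks) >= IBM_I_DISKS_PER_VSCSI_ADAPTER:
        raise RuntimeError(f"{vhost}: IBM i supports only 16 disks per vSCSI adapter")
    disks.append(hdisk)
    return mappings

mappings = {}
for n in range(16):                       # fill one adapter to its limit
    map_hdisk(mappings, "vhost0", f"hdisk{n}")
```

A seventeenth disk for the same IBM i client would need a second vSCSI client/server adapter pair.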
14. IBM Power Systems
Virtual Optical Media
File-backed device that works like an optical device (think of it as an ISO image).
With read-only virtual media the same virtual optical device can be presented to multiple client
partitions simultaneously
You can easily boot from and install partitions remotely without needing to swap
physical CD/DVDs or set up a Network Installation Manager (NIM) server. It is also easier to boot a
partition into maintenance mode to repair problems
Easier to maintain a complete library of all the software needed for the managed system: various
software packages as well as all the necessary software levels to support each partition
Client partitions can use blank file-backed virtual optical media (read/write devices) for backup
purposes
These file-backed optical devices can then be backed up from the VIO server to other types
of media (tape, physical CD/DVD, TSM server, etc.)
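A minimal sketch of the media-repository behaviour described above, assuming a simple model where a read-only image may be loaded in many clients at once while read/write media may be loaded in only one. All names are illustrative; the real VIOS commands are mkrep, mkvopt and loadopt.

```python
# Illustrative model of a virtual media repository on the VIOS.
class MediaRepository:
    def __init__(self):
        self.media = {}    # media name -> "ro" (ISO image) or "rw" (blank media)
        self.loaded = {}   # media name -> set of client partitions it is loaded in

    def add(self, name, access):
        self.media[name] = access
        self.loaded[name] = set()

    def load(self, name, client):
        """Present the media to a client's virtual optical device."""
        if self.media[name] == "rw" and self.loaded[name]:
            raise RuntimeError("read/write media can be loaded in one client only")
        self.loaded[name].add(client)

repo = MediaRepository()
repo.add("aix_install.iso", "ro")   # read-only install image
repo.add("backup_vol1", "rw")       # blank media for client backups
repo.load("aix_install.iso", "lpar1")
repo.load("aix_install.iso", "lpar2")  # fine: read-only, shared simultaneously
repo.load("backup_vol1", "lpar1")
```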
15. IBM Power Systems
vSCSI Tape and Optical
[Diagram: POWER6 with IBM i 6.1.1; the VIOS maps physical tape (rmt1), optical (cd1), or file-backed virtual optical devices through vhostXXX to the IBM i client, which sees them as TAP01 and OPT01]
• Storage is assigned to the VIOS partition
• Within the VIOS you map physical tape, physical optical, or file-backed virtual optical to the vhost corresponding to the client partition
• Only SAS tape drives supported; no autoloaders or libraries
16. IBM Power Systems
Performance – Does Virtualization Perform?
[Chart: database ASP response time in ms (0 to 16) plotted against OPS (0 to 60,000), comparing DS5K attached through the VIOS with direct-attach DS5K]
17. IBM Power Systems
N-Port ID Virtualization (Fibre Channel adapter virtualization)
[Diagram: POWER6 with IBM i 6.1.1; a VIOS with an 8 Gb/s HBA serves multiple IBM i client partitions through the hypervisor]
• Hypervisor assigns 2 unique WWPNs to each virtual fibre adapter
• Requires 520-byte-per-sector LUNs to be assigned to the iSeries hostconnect on DS8K
• Can migrate existing direct-connect LUNs
• DS8100, DS8300, DS8700, DS8800, DS5100 and DS5300 supported
• AIX and Linux support the same devices that are supported with direct attach
Virtual address example: C001234567890001
Note: an NPIV (N_Port) capable switch is required to connect the VIOS to the SAN to use NPIV.
18. IBM Power Systems
N-Port ID Virtualization (Fibre Channel adapter virtualization)
• Multiple VFC server adapters may map to the same physical adapter port
• Each VFC server adapter connects to one VFC client adapter; each VFC client adapter gets two unique WWPNs
• Client WWPNs stay the same regardless of the physical port they are connected to
• Support for dynamically changing the physical-port-to-virtual-port mapping
• Clients can discover and manage physical devices on the SAN
• VIOS can't access or emulate storage; it just provides clients access to the SAN
• Support for concurrent microcode download to the physical FC adapter
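The mapping rules above can be sketched as a toy model: each VFC client adapter receives two unique WWPNs, each VFC server adapter serves exactly one client adapter, and several server adapters may share one physical port. The names and the WWPN base value are illustrative; on a real VIOS the mapping is created with the vfcmap command.

```python
# Illustrative model of NPIV virtual Fibre Channel mapping.
import itertools

_wwpn_counter = itertools.count(0xC0507600000A0000)  # hypothetical base value

def create_client_adapter():
    """Return the pair of unique WWPNs the hypervisor assigns to a VFC client."""
    return (f"{next(_wwpn_counter):016X}", f"{next(_wwpn_counter):016X}")

def vfcmap(mapping, server_adapter, client_adapter, fcs_port):
    """Map one VFC server adapter to one client adapter on a physical port."""
    if server_adapter in mapping:
        raise RuntimeError(f"{server_adapter} already serves a client adapter")
    mapping[server_adapter] = (client_adapter, fcs_port)

mapping = {}
c1, c2 = create_client_adapter(), create_client_adapter()
vfcmap(mapping, "vfchost0", c1, "fcs0")
vfcmap(mapping, "vfchost1", c2, "fcs0")  # sharing the physical port is allowed
```

Because the WWPNs belong to the client adapter, not the physical port, remapping a client to a different port leaves its SAN zoning unchanged.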
19. IBM Power Systems
Virtual SCSI model versus N-Port ID Virtualization
[Diagram: with Virtual SCSI (POWER5 or POWER6), the VIOS owns the FC adapters and the SAN disks (EMC, DS8000) and presents them to AIX clients as generic SCSI disks. With NPIV (POWER6), the VIOS shares the FC adapter and the AIX clients see the actual EMC and DS8000 devices on the SAN.]
21. IBM Power Systems
NPIV Configuration – Limitations
Single client adapter per physical port per partition
– Intended to avoid a single point of failure
– Documentation only – not enforced
Maximum of 64 active client connections per physical port
– It is possible to map more than 64 clients to a single adapter port
– May be less due to other VIOS resource constraints
32K unique WWPN pairs per system platform
– Removing an adapter does not reclaim WWPNs
• Can be manually reclaimed through the CLI (mksyscfg, chhwres…)
• “virtual_fc_adapters” attribute
– If exhausted, need to purchase an activation code for more
Device limitations
– Maximum of 128 visible target ports
• Not all visible target ports will necessarily be active
• Redundant paths to a single DS8000 node
• Device-level port configuration
• Inactive target ports still require client adapter resources
– Maximum of 64 target devices
• Any combination of disk and tape
• Tape libraries and tape drives are counted separately
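A sketch of a configuration check against the limits listed above (64 active clients per physical port, 128 visible target ports, 64 target devices). The function and its arguments are hypothetical; as noted above, some of these limits are documented rather than enforced.

```python
# Illustrative validator for the NPIV limits on this slide.
LIMITS = {"clients_per_port": 64, "target_ports": 128, "target_devices": 64}

def check_npiv_config(clients_on_port, visible_target_ports, target_devices):
    """Return a list of limit violations (empty when the config is in bounds)."""
    problems = []
    if clients_on_port > LIMITS["clients_per_port"]:
        problems.append("more than 64 active clients on a physical port")
    if visible_target_ports > LIMITS["target_ports"]:
        problems.append("more than 128 visible target ports")
    if target_devices > LIMITS["target_devices"]:
        problems.append("more than 64 target devices (disk + tape combined)")
    return problems

# A configuration comfortably inside all three limits passes cleanly.
assert check_npiv_config(60, 96, 40) == []
```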
22. IBM Power Systems
NPIV Performance
[Chart: NPIV versus direct attach (DS8300); application response time (0 to 0.01) plotted against CPW users (0 to 120) for an NPIV run and a direct-attach run]
23.
IBM Power Systems
Virtualizing disk storage with IBM i or VIOS
Single IBM i or VIOS host
provides access to SAN or
internal storage
– AIX, IBM i, or Linux client partitions
– Protect data via RAID-5, RAID-6, or
RAID-10
Redundant VIOS hosts provide multiple paths to
attached SAN storage with MPIO
– AIX, IBM i, and Linux client partitions
– One set of disk
Redundant IBM i or VIOS hosts provide
access to SAN or internal storage
– AIX, IBM i, and Linux client partitions
– Client LPAR protects data via mirroring
– Two sets of disk and adapters
24. IBM Power Systems
Redundant VIOS with NPIV
[Diagram: POWER6; an IBM i partition (SYSBAS and IASP) with client VFC adapters connected to server VFC adapters in two VIOS partitions, each with physical FC connections to the SAN]
Step 1: Configure virtual and physical FC adapters
– Best practice is to make the VIOS redundant, or to separate individual VIOS partitions, so that a single hardware failure cannot take down both VIOS partitions.
Step 2: Configure SAN fabric and storage
– Zone LUNs to the virtual WWPNs.
– Each disk sees a path through 2 VIOS partitions.
Notes:
• Up to 8 paths per LUN supported on IBM i
• Up to 32 paths per LUN on AIX
• Not all paths have to go through separate VIOS partitions
• New multi-path algorithm in IBM i 7.1 TR2
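The path arithmetic behind the notes above can be sketched as paths per LUN = VIOS partitions × fabric ports per VIOS, capped at 8 for IBM i and 32 for AIX. This is an illustrative helper only, not how MPIO actually enumerates paths.

```python
# Illustrative path count for a redundant-VIOS NPIV configuration.
MAX_PATHS = {"IBM i": 8, "AIX": 32}  # limits from the slide

def paths_per_lun(vios_count, ports_per_vios, client_os):
    """Paths the client sees to one LUN, checked against the OS limit."""
    paths = vios_count * ports_per_vios
    if paths > MAX_PATHS[client_os]:
        raise ValueError(f"{paths} paths exceeds the {client_os} limit")
    return paths

# Two VIOS partitions, two fabric ports each: four paths per LUN.
assert paths_per_lun(2, 2, "IBM i") == 4
```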
25. IBM Power Systems
VIOS – Storage attach
Three categories of storage attachment to IBM i through VIOS
1) Supported (IBM storage)
- tested by IBM; IBM supports the solution and owns resolution
- IBM will deliver the fix
2) Tested / Recognized (3rd party storage including EMC and Hitachi)
- IBM / storage vendor collaboration, solution was tested (by vendor, IBM, or both);
- CSA in place, states that IBM and storage vendor will work together to resolve the issue
- IBM or storage vendor will deliver the fix
3) Other
- not tested by IBM; may not have been tested at all
- no commitment / obligation to provide a fix
Category #3 (Other) was introduced in the last few years; previously, “other” storage
invalidated the VIOS warranty. IBM Service has committed to provide a limited level of
problem determination for service requests / issues involving “other” storage, to the
extent of trying to isolate the problem as being within VIOS or IBM i, or external to
VIOS or IBM i (i.e., a storage problem). There is no guarantee that a fix will be provided,
even if the problem is identified as a VIOS or IBM i issue.
26. IBM Power Systems
IBM PowerVM Virtual Ethernet
PowerVM Ethernet switch
– Part of the PowerVM Hypervisor
– Moves data between LPARs
Shared Ethernet Adapter
– Part of the VIO server
– Logical device
– Bridges traffic to and from external networks
Additional capabilities
– VLAN aware
– Link aggregation for external networks
– SEA failover for redundancy
[Diagram: the VLAN-aware Ethernet switch in the PowerVM Hypervisor; a Shared Ethernet Adapter in the Virtual I/O Server bridges the virtual CMN adapters of clients 1 and 2 to an external Ethernet switch]
27. IBM Power Systems
Shared Ethernet Adapter with Load Sharing
On the VIOS Version 2.2.1.0, or later, you can use the Shared Ethernet Adapter failover with load
sharing configuration to use the bandwidth of the backup Shared Ethernet Adapter without any
impact to reliability.
In the Shared Ethernet Adapter failover with load sharing configuration, the primary and the backup
Shared Ethernet Adapters negotiate the set of virtual local area network (VLAN) IDs that they are
responsible for bridging. After successful negotiation, each Shared Ethernet Adapter bridges the
assigned trunk adapters and the associated VLANs. Thus, both the primary and the backup Shared
Ethernet Adapter bridge the workload for their respective VLANs. If a failure occurs, the active
Shared Ethernet Adapter bridges all trunk adapters and the associated VLANs. This action helps to
avoid disruption in network services.
Note that it is not load balancing – Different VLANs go through each SEA. If you have a single
trunk adapter with a single VLAN, there will be no load sharing. You need at least two trunk
adapters per SEA for load sharing to work.
You configure Load Sharing using the same mkvdev command with the attribute -attr
ha_mode=sharing. Existing SEA can be changed to Load Sharing using the chdev command.
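The negotiation described above can be sketched as a simple split of trunk adapters (and their VLANs) between the primary and backup SEA; with a single trunk adapter there is nothing to share, matching the note above. This is a hypothetical model, not the actual SEA protocol.

```python
# Illustrative sketch of SEA failover with load sharing (ha_mode=sharing).
def negotiate_load_sharing(trunk_adapters):
    """Return the trunk adapters each SEA bridges: (primary's, backup's)."""
    if len(trunk_adapters) < 2:
        # Only one trunk adapter: plain failover, primary bridges everything.
        return list(trunk_adapters), []
    mid = len(trunk_adapters) // 2
    return list(trunk_adapters[:mid]), list(trunk_adapters[mid:])

def on_failover(primary_set, backup_set):
    """If one SEA fails, the surviving SEA bridges all trunk adapters."""
    return primary_set + backup_set

# Two trunk adapters, each carrying its own VLAN: the load is shared.
primary, backup = negotiate_load_sharing(["ent4:vlan10", "ent5:vlan20"])
```

Note that the unit of sharing is the trunk adapter / VLAN set, not individual packets, which is why this is load sharing rather than load balancing.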
29. IBM Power Systems
PowerVM Active Memory Sharing
Supports over-commitment of logical memory
with overflow going to a paging device
Intelligently flow memory from one partition to
another for increased utilization and flexibility
Memory from a shared physical memory pool
is dynamically allocated among logical
partitions as needed to optimize overall
memory usage
Designed for partitions with variable memory
requirements
PowerVM Enterprise Edition on POWER6 and
Power7 processor-based systems
– Partitions must use VIOS for I/O virtualization
[Diagram: a POWER server running the PowerVM Hypervisor with AMS; a Virtual I/O Server provides the paging device; dedicated-memory partitions sit alongside shared-memory / shared-CPU partitions]
Reduce memory costs by improving memory utilization on Power Servers.
* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
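A sketch of the over-commitment arithmetic: the logical memory of the shared partitions may exceed the physical pool, with the overflow backed by the VIOS paging device. The numbers follow the shared-memory-model example on the next slide; the function itself is illustrative.

```python
# Illustrative accounting for an Active Memory Sharing pool.
def ams_overcommit(pool_gb, shared_partitions_gb):
    """Summarize logical memory versus the physical shared pool."""
    logical = sum(shared_partitions_gb)
    paged_out = max(0, logical - pool_gb)  # overflow goes to the paging device
    return {"logical_gb": logical, "pool_gb": pool_gb, "paged_out_gb": paged_out}

# A 24 GB shared pool backing five partitions with 28 GB of logical memory:
state = ams_overcommit(24, [4, 4, 10, 4, 6])
```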
30. IBM Power Systems
Shared memory model
[Diagram: physical memory of 36 GB divided into the AMS shared memory pool (24 GB), a free memory pool (2.5 GB), hypervisor memory (1.5 GB), and dedicated memory (8 GB). The dedicated-memory partition (LPAR 6, 8 GB) maps straight through the hypervisor. The shared-memory partitions (LPARs 1–5, with 4, 4, 10, 4 and 6 GB of logical memory) draw on the 24 GB pool; because their 28 GB of logical memory exceeds the pool, the excess is paged out. Each partition's current memory is shown as used and free portions.]
31. IBM Power Systems
Active Memory Deduplication
[Diagram: a system mixing dedicated-processor LPARs (Finance, Planning) with a VIOS and micro-partitions (LPARs #1–#6) over the micro-partition processor pool and an AMS paging service partition, with per-LPAR memory sizes from 5 to 90 GB. Memory pages (M) of the AMS LPARs are deduplicated; dedicated-memory LPARs are marked “No AMS” / “No AMD” and keep their own regular pages.]
AMS LPARs running the same OS and applications have the same pages in each LPAR, so only 1 copy is needed.
• Hypervisor detects identical pages via lightweight checksums
• Changes the mapping to share a common page
• Includes AIX / IBM i / Linux – requires POWER7 with FW 740 and VIOS 2.2
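A sketch of the deduplication idea above: pages are checksummed, and identical pages across AMS partitions are mapped to a single physical copy. `zlib.crc32` stands in for the firmware's lightweight checksum; the structure is illustrative, not the hypervisor's implementation.

```python
# Illustrative model of Active Memory Deduplication.
import zlib

def deduplicate(pages):
    """Map each (lpar, page_no) to one shared physical slot per unique content."""
    slots, mapping = {}, {}
    for key, content in pages.items():
        # Checksum narrows candidates; the exact content confirms the match.
        digest = (zlib.crc32(content), content)
        mapping[key] = slots.setdefault(digest, len(slots))
    return mapping, len(slots)

# Three LPARs booted from the same OS image share their kernel page.
pages = {("lpar1", 0): b"kernel", ("lpar2", 0): b"kernel",
         ("lpar3", 0): b"kernel", ("lpar1", 1): b"appdata"}
mapping, physical_pages = deduplicate(pages)  # 4 logical pages, 2 physical
```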
34. IBM Power Systems
Active Memory Expansion
POWER7 advantage for AIX 6.1 and 7.1
Expand memory beyond physical limits
More effective server consolidation
• Run more application workload / users per partition
• Run more partitions and more workload per server
[Diagram: each partition's true memory is topped up with expanded memory – effectively up to 100% more memory]
35. IBM Power Systems
Active Memory Expansion – Planning Tool
Tool included in AIX 6.1 TL4 SP2
Run the tool in the partition of interest for memory expansion.
Input the desired expanded memory size. The tool outputs different real memory and CPU
resource combinations to achieve the desired effective memory.

# amepat
Active Memory Expansion Modeled Statistics:
-------------------------------------------
Modeled Expanded Memory Size : 8.00 GB

Expansion   True Memory    Modeled Memory    CPU Usage
Factor      Modeled Size   Gain              Estimate
---------   ------------   ---------------   ---------
1.21        6.75 GB        1.25 GB [ 19%]    0.00
1.31        6.25 GB        1.75 GB [ 28%]    0.20
1.41        5.75 GB        2.25 GB [ 39%]    0.35
1.51        5.50 GB        2.50 GB [ 45%]    0.58
1.61        5.00 GB        3.00 GB [ 60%]    1.46

Active Memory Expansion Recommendation:
---------------------------------------
The recommended AME configuration for this workload is to configure
the LPAR with a memory size of 5.50 GB and to configure a memory
expansion factor of 1.51. This will result in a memory expansion of
45% from the LPAR's current memory size. With this configuration,
the estimated CPU usage due to Active Memory Expansion is
approximately 0.58 physical processors, and the estimated overall
peak CPU resource required for the LPAR is 3.72 physical processors.

This sample partition has fairly good expansion potential. A nice “sweet spot” for this partition appears to be 45% expansion:
• 2.5 GB gained memory
• Using about 0.58 cores of additional CPU resource
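The arithmetic behind the amepat table above: for a chosen expansion factor, the modeled true memory is the expanded size divided by the factor, and the gain is the difference (amepat rounds its table rows to convenient sizes, so printed values differ slightly). The helper below is illustrative, not part of AIX.

```python
# Illustrative Active Memory Expansion sizing arithmetic.
def ame_model(expanded_gb, factor):
    """True memory, gain, and gain percentage for one expansion factor."""
    true_gb = expanded_gb / factor          # real memory the LPAR needs
    gain_gb = expanded_gb - true_gb         # memory "created" by compression
    return {"true_gb": round(true_gb, 2),
            "gain_gb": round(gain_gb, 2),
            "gain_pct": round(100 * gain_gb / true_gb)}

# The recommended point above: 5.5 GB of true memory expanded to 8 GB.
row = ame_model(8.0, 8.0 / 5.5)  # the factor that yields 5.5 GB true memory
```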
36. IBM Power Systems
Active Memory Expansion – POWER7+ HW accelerator
POWER7+ uses an on-chip hardware accelerator to do some of the
compression / decompression work. There is a knee-of-curve
relationship between the CPU resource required and the amount of memory expansion:
– Even with the POWER7+ hardware accelerator, some CPU resource is required
– The more memory expansion done, the more CPU resource is required
The knee varies depending on how compressible the memory contents are.
[Chart: % CPU utilization for expansion versus amount of memory expansion; curves for POWER7 and POWER7+]
37. IBM Power Systems
LPAR Suspend/Resume – Customer Value
Resource balancing for long-running batch jobs
– e.g. suspend lower priority and/or long running workloads to free resources.
Planned CEC outages for maintenance/upgrades
– Suspend/resume may be used in place of or in conjunction with partition mobility.
– Suspend/resume may require less time and effort than manual database shutdown
and restart, for example.
Requirements:
• All I/O is virtualized
• HMC version 7 release 7.3
• Firmware: Ax730_xxx
• IBM i 7.1 TR2, AIX 6.1 TL5 and 7.1, and Linux
• VIOS 2.2.1.0 FP24 SP2
38. IBM Power Systems
Live Partition Mobility
Move a running partition from one POWER6/7 server to another with no application downtime
• Movement to a different server with no loss of service
• Rebalance processing power across servers when and where you need it
• Reduce planned downtime by moving workloads to another server during system maintenance
[Diagram: a partition moving between two servers over a virtualized SAN and network infrastructure]
Live Partition Mobility requires the purchase of the optional PowerVM Enterprise Edition.
AIX and Linux support LPM on POWER6 systems. IBM i requires POWER7 systems.
39. IBM Power Systems
Partition Mobility: Active and Inactive LPARs
Active Partition Mobility
Active Partition Migration is the actual movement of a running LPAR from one
physical machine to another without disrupting the operation of the OS and
applications running in that LPAR.
Applicability
Workload consolidation (e.g. many to one)
Workload balancing (e.g. move to larger system)
Planned CEC outages for maintenance/upgrades
Impending CEC outages (e.g. hardware warning received)
Ability to move from Power7 servers to Power8 servers (when available)
without an outage
Inactive Partition Mobility
Inactive Partition Migration transfers a partition that is logically ‘powered off’ (not
running) from one system to another.
Suspended Partition Mobility
Suspended Partition Migration transfers a partition that is suspended from one
system to another.
40. IBM Power Systems
Requirements – AIX/Linux
Software
• HMC version 7 release 3.2, or later (for POWER7, the minimum level supported with the system)
• Firmware 01Ex320 (POWER6), or later – all POWER7 systems are supported (except the Power 755 and Power 775)
• PowerVM Enterprise Edition
• VIOS 1.5.1.1 or later (VIOS 2 or later on POWER7)
• Supported client operating systems: AIX 5.3 TL7 SP1, AIX 6 and AIX 7; Red Hat Enterprise Linux 5 U1 or later; SUSE Linux 10 or later
I/O
• All I/O through the VIOS: vSCSI, NPIV, Virtual Ethernet
• External storage: storage must be SAN-attached to both source and target systems
Hardware
• POWER6 or POWER7 processor-based systems
• Both source and destination on the same Ethernet network
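The requirements above can be collapsed into a checklist sketch; the keys and messages are illustrative, and the real validation is performed by the HMC during the migration's validate phase.

```python
# Illustrative Live Partition Mobility prerequisite checklist.
REQUIRED = {
    "powervm_enterprise": "PowerVM Enterprise Edition licensed",
    "all_io_virtualized": "all I/O through the VIOS (vSCSI, NPIV, vEth)",
    "san_both_systems": "storage SAN-attached to source and target",
    "same_ethernet_network": "source and destination on same network",
}

def validate_lpm(config):
    """Return the human-readable requirements the partition still misses."""
    return [text for key, text in REQUIRED.items() if not config.get(key)]

ready = {"powervm_enterprise": True, "all_io_virtualized": True,
         "san_both_systems": True, "same_ethernet_network": True}
assert validate_lpm(ready) == []  # nothing missing: migration can proceed
```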
41. IBM Power Systems
Requirements – IBM i
Software
• HMC version 7 release 7.5
• Firmware service pack 730_51, 740_40, or later
• PowerVM Enterprise Edition
• VIOS 2.2.1.4
• Supported client operating system: IBM i 7.1 TR4
I/O
• All I/O through the VIOS: vSCSI, NPIV, Virtual Ethernet
• External storage: same storage attached to both source and destination
Hardware
• POWER7 tower / rack hardware
• Both source and destination on the same Ethernet network
42. IBM Power Systems
Live Partition Mobility
[Diagram: two POWER7 systems managed by one HMC, with shared access to a storage subsystem. On each system a VIOS provides a Mover Service partition with a VASI adapter, a Shared Ethernet Adapter, and vtscsi/vhost mappings; the IBM i client partition (vscsi0, virtual Ethernet) migrates from System #1, where it ends as a suspended shell partition, to System #2. Steps: validate the environment for appropriate resources; create a shell partition and virtual SCSI devices on the destination; migrate memory pages while the LPAR keeps running; once enough memory has moved, finish the migration on the destination system and delete the source definitions.]
Partition Mobility now supported on POWER7 with IBM i 7.1 TR4.
43. IBM Power Systems
VIOS 2.2 – Integrated Storage Virtualization
Integrated storage virtualization increases platform value.
[Diagram: server administrators manage heterogeneous virtual servers (CPU, memory, storage) through redundant VIOS partitions; storage administrators manage NAS systems (SOFS, NetApp, EMC, other NAS) and SAN systems (IBM, EMC, Hitachi, other SAN)]
Integrated storage capabilities in the VIOS:
• Storage mobility
• Storage aggregation
• Snapshots & clones
• Thin provisioning
Storage system capabilities:
• Storage pooling / SAN virtualization
• Migration / file virtualization
• Copy services / caching
• Geo mirroring / thin provisioning
Integrated server & storage management decreases complexity and the number of management domains.
44. IBM Power Systems
PowerVM – VIOS Shared Storage Pool
Extending Storage Virtualization Beyond a Single System
vSCSI Classic – storage virtualization:
• Storage pooled at the VIOS for a single system
• Enables dynamic storage allocation
• Supports local and SAN storage, IBM and non-IBM storage
vSCSI NextGen – clustered storage virtualization:
• Storage pool spans multiple VIOSs and servers
• Enabler for federated management
• Location transparency
• Advanced capabilities
• Supports SAN and NAS, IBM and non-IBM storage
45. IBM Power Systems
Power Systems Software – October Content
PowerVM v2.2.2
• Shared Storage Pool Improvements
• Scaling and reliability improvements, plus more statistics for operational control
• Up to 16 VIOS nodes in the shared storage pool cluster
• Linked clones implementation
• Support for 20 VMs per core
• Provides more flexibility for small workloads, allowing entitlement down to 0.05 of a core
• LPM concurrency improvements (2x improvement from 8 to 16 transfers)
• 3x performance improvement for LPM operations
• Partition remote restart support (official, not RPQ-based) – all partitions
• New VIOS Performance Advisor analyzes VIOS performance and recommends VIOS
system changes to optimize performance
46. IBM Power Systems
Resources and references
Techdocs – http://www.ibm.com/support/techdocs
(presentations, tips & techniques, white papers, etc.)
IBM PowerVM Virtualization Introduction and Configuration - SG24-7940
http://www.redbooks.ibm.com/abstracts/sg247940.html?Open
IBM PowerVM Virtualization Managing and Monitoring - SG24-7590
http://www.redbooks.ibm.com/abstracts/sg247590.html?Open
IBM PowerVM Virtualization Active Memory Sharing – REDP4470
http://www.redbooks.ibm.com/abstracts/redp4470.html?Open
IBM System p Advanced POWER Virtualization (PowerVM) Best Practices -
REDP4194
http://www.redbooks.ibm.com/abstracts/redp4194.html?Open
Power Systems: Virtual I/O Server and Integrated Virtualization Manager
commands (iphcg.pdf)
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphcg/iphcg.pdf
47. IBM Power Systems
Trademarks and Disclaimers
© IBM Corporation 1994-2008. All rights reserved.
References in this document to IBM products or services do not imply that IBM intends to make them available in every country.
Trademarks of International Business Machines Corporation in the United States, other countries, or both can be found on the World Wide Web at
http://www.ibm.com/legal/copytrade.shtml.
Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, other
countries, or both.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered
trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency which is now part of the Office of Government Commerce.
ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Cell Broadband Engine and Cell/B.E. are trademarks of Sony Computer Entertainment, Inc., in the United States, other countries, or both and are used under license
therefrom.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Information is provided "AS IS" without warranty of any kind.
The customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual
environmental costs and performance characteristics may vary by customer.
Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does
not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including
vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other
claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.
All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance,
function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here
to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any
user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage
configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements
equivalent to the ratios stated here.
Prices are suggested U.S. list prices and are subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your
geography.