The Pendulum Swings Back - Understanding Converged and Hyperconverged Integrated Systems, presented Oct 17, 2017 at IBM Systems Technical University, New Orleans LA
1. IBM Power Systems and IBM Storage Technical University
The Pendulum Swings Back – Understanding Converged and Hyperconverged Integrated Systems
Tony Pearson, Master Inventor and Senior IT Architect, IBM Corporation
2. Abstract
In the early days of IT, storage was internal to its server. Over time, storage outgrew its container, and we started to have externally attached storage, with benefits like RAID and clustered servers for high availability. Then SANs, LANs and WANs took the main stage, allowing for greater connectivity and distance.
But now, it seems the pendulum is swinging back with
converged and hyperconverged integrated systems.
This session will provide the motivations, advantages and
disadvantages of these new configurations.
3. This week with Tony Pearson
Day | Time | Topic
Monday | 10:15am | Business Continuity – The seven tiers of business continuity and disaster recovery
Monday | 1:45pm | IBM’s Cloud Storage Options
Monday | 4:30pm | Introduction to IBM Cloud Object Storage System and its Applications (powered by Cleversafe)
Tuesday | 10:15am | The Pendulum Swings Back – Understanding Converged and Hyperconverged Environments
Tuesday | 11:30am | New generation of storage tiering: Simpler management, lower costs and increased performance
Tuesday | 3:15pm | Introduction to IBM Cloud Object Storage System and its Applications (powered by Cleversafe)
Wednesday | 9:00am | IBM Spectrum Scale for File and Object Storage
4. The Pendulum Swings on Infrastructure Design
Internal Storage
• Personal
Information
Managers (PIM)
• Mainframe
• AS/400
Advantages
• Simple, self-contained
Disadvantages
• Scalability limited to what can fit inside the hardware container
• Single Point of Failure (SPOF): if your server is down, you lose access to the data inside
• Backups, security and other policy enforcement are done individually, on a system-by-system basis
• Depreciation applies to server and storage together
5. The Pendulum Swings to External Storage
External Storage
• Mainframe
• AS/400
• Linux, UNIX,
Windows
Advantages
• More room for storage growth
• Two or more servers can directly attach to external storage
• High-availability clusters
• Centralized features, snapshots and tape drives for backups
• RAID and shared cache for data protection and performance
• Separate depreciation schedules
Disadvantages
• Scalability limited to the number of hosts attached
• Limited distance for external cables
6. The Pendulum Swings to Networked Storage
Networked Storage (SAN, LAN)
• SAN and NAS attached flash, disk and tape systems
• IBM Spectrum Storage
Advantages
• Many more hosts can be attached
• Greater distances enable Disaster Recovery
• Fewer, larger systems like tape libraries are easier to manage
Disadvantages
• SANs and LANs require different skill sets
• OS-specific and device-specific management tools
7. The Pendulum Swings to Converged Systems
Converged Systems
• Best-of-breed switches, servers and storage hardware packaged into a single rack
Advantages
• Converged Systems can also connect to existing SAN/LAN
• General purpose or workload-specific
• Fewer servers required with virtualization
• Portability to Cloud
Disadvantages
• Lose some of the gains from SAN/LAN
• Backup and Disaster Recovery?
• Islands of processing and data?
8. Converged Systems – Introducing VersaStack
• Vblock / VxBlock – Cisco and EMC
• FlexPod – Cisco and NetApp
• PureSystems – IBM POWER + IBM Storage
• VersaStack – Cisco and IBM:
– Cisco Nexus and MDS switches
– Cisco UCS x86 servers
– Cisco UCS Director software
– FlashSystem 900, V9000, A9000
– SVC, Storwize V7000/F, Storwize V5000/F
9. VersaStack
Easy – Reduce Provisioning Time:
• Seamless Integration
• Simplified Deployment
• Process Automation
• 84% reduced provisioning times¹
Efficient – Store More for Less:
• 10x performance acceleration and 5x data reduction
• 62% lower infrastructure cost² with data reduction guarantee³
Versatile – Unified Management:
• Scale up, scale out architecture
• Flexible Cloud Capabilities
• Dynamic Infrastructure
10. The Data Center Network
• Local Area Network – Network Interface Card (NIC): 10/100/1000 Mbps, 1GbE, 10GbE
• Storage Area Network – Host Bus Adapter (HBA): 4, 8, 16, 32, 64, 128 Gbps
• Data Center Network – Converged Network Adapter (CNA): 10GbE, 25GbE, 40GbE, 100GbE with Data Center Bridging (DCB) for data, voice and video
• Block protocols: iSCSI, FCoE
• File protocols: NFS, SMB, FTP, HDFS
• Object protocols: HTTP, Amazon S3, OpenStack Swift
With over 50 million ports, Fibre Channel is not going away anytime soon. Projections indicate a slow decline of only 3% per year over the next few years.
11. VersaStack Scalable Storage Options
Storage built with IBM Spectrum Virtualize software:
• Storwize V5000/V5030F w/ Cisco UCS Mini – Entry to Mid-Size Business, ROBO
• Storwize V7000 and V7000F w/ Cisco UCS – Medium to Large Enterprise
• SAN Volume Controller w/ Cisco UCS – Mixed storage environments
Storage built with IBM FlashCore Technology:
• FlashSystem V9000/900 w/ Cisco UCS – Highest levels of performance
• FlashSystem A9000 w/ Cisco UCS (IBM Spectrum Accelerate) – VDI environments
Unstructured data:
• IBM Cloud Object Storage w/ Cisco UCS
Unified Management with Cisco UCS Director
www.ibm.com/versastack
12. IBM Spectrum Virtualize – Key Features
Easy Tier
• Automatically moves extents between Flash, Enterprise and Nearline disk
Thin Provisioning and Real-time Compression
• Inline compression for active primary workloads
• Intel QuickAssist co-processor
• Up to 80% savings – more effective than data deduplication for active workloads
• Ideal for databases, VMs, CAD/CAM engineering blueprints, etc.
Data-at-Rest Encryption
• AES 256-bit encryption implemented in FlashSystem V9000, SVC and Storwize controllers
• Does not require Self-Encrypting Drives (SED)
• Supports internal and externally virtualized storage
• Works with all other features, including Real-time Compression and Easy Tier: data is compressed first, then encrypted
• Encryption keys stored on USB memory sticks or IBM SKLM server
• No performance impact to applications
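The compress-first, then-encrypt ordering matters: once data is encrypted it looks random and no longer compresses. A toy Python sketch illustrates the effect (the XOR keystream below is a stand-in for illustration only, not the AES-256 that Spectrum Virtualize actually uses):

```python
import hashlib
import zlib


def keystream_xor(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR with a SHA-256-derived keystream so the
    # output looks random. Illustration only -- NOT real AES-256.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))


data = b"SELECT * FROM orders WHERE region = 'WEST';\n" * 200  # compressible

# Compress first, then encrypt (the ordering described on the slide):
good = keystream_xor(zlib.compress(data), b"key")

# Encrypt first, then try to compress: ciphertext is incompressible.
bad = zlib.compress(keystream_xor(data, b"key"))

print(len(data), len(good), len(bad))
```

Running this shows `good` shrinking to a small fraction of the input while `bad` ends up slightly larger than the original, which is why the encryption step sits below compression in the stack.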
13. Real-time Compression implementation on Spectrum Virtualize
IBM Random Access Compression Engine™
Benefits
• Hardware-assisted real-time compression
• Compressed data in cache to increase hit ratios
• More capacity savings than data deduplication for active data
• Compress existing data without downtime
• Compress before encryption to optimize the benefits of both
I/O stack (from diagram): Upper cache → Stretch Cluster forwarding (Metro Mirror, HyperSwap) → FlashCopy → Global Mirror → Compression (offloaded to Intel® QuickAssist FPGA) → Thin Provisioning → Lower cache → Encryption
5x effective capacity!
14. Storwize V7000 Upgrade Options
Start with 1 Control Enclosure and add up to 40U of Expansion Enclosures; cluster up to 4 Control Enclosures together into a single system
Block-only: FCP, FCoE and iSCSI; up to 2,944 drives
24-Bay in 2U, 2.5-inch (SFF):
• 400/800/1600/3200 GB and 1.92/3.84/7.68/15.36 TB SSD
• 300/600 GB 15K RPM SAS
• 600/900/1200/1800 GB 10K RPM SAS
• 1 and 2 TB 7,200 RPM NL-SAS
12-Bay in 2U, 3.5-inch (LFF):
• 2/3/4/6/8/10 TB 7,200 RPM NL-SAS
92-Bay in 5U:
• 1.6 TB to 15.36 TB SSD
• 600 GB 15K RPM SAS
• 1.2 and 1.8 TB 10K RPM SAS
• 6/8/10 TB NL-SAS
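As a sanity check on the 2,944-drive maximum, one reading that is consistent with the numbers (an assumption, not stated on the slide) is four clustered control enclosures, each driving 40U of the 5U/92-bay expansion enclosures:

```python
# Hypothetical breakdown of the Storwize V7000 2,944-drive maximum:
# 4 clustered control enclosures, each allowed 40U of expansions,
# filled with 5U enclosures holding 92 drives apiece.
control_enclosures = 4
expansions_per_control = 40 // 5          # eight 5U enclosures fit in 40U
bays_per_expansion = 92
max_drives = control_enclosures * expansions_per_control * bays_per_expansion
print(max_drives)  # → 2944
```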
15. Storwize V5000 Gen2 models
Storwize V5010 – 1 Control Enclosure; add up to 20U of Expansion Enclosures; supports thin provisioning, FlashCopy, Easy Tier and remote mirroring; up to 392 internal drives
Storwize V5020 – 1 Control Enclosure; add up to 20U of Expansion Enclosures; supports thin provisioning, FlashCopy, Easy Tier, remote mirroring and encryption; up to 392 internal drives
Storwize V5030 and V5030F – up to 2 Control Enclosures; add up to 40U of Expansion Enclosures per controller; supports thin provisioning, FlashCopy, Easy Tier, remote mirroring, encryption, compression and external virtualization; up to 1,008 internal drives
16. SVC with FlashSystem 900 vs. FlashSystem V9000
SVC with FlashSystem 900 – two SAN Volume Controller nodes (node-pair or I/O Group) plus a FlashSystem 900 (dual controller):
• More options to choose from
• Capacity-based license
• Requires SAN infrastructure for most configurations
FlashSystem V9000:
• Simplified options
• Enclosure-based license
• Can be direct-attached or SAN
17. Components of FlashSystem A9000
A9000 “The Pod”: 3 servers + 1 FlashSystem 900
Module | Usable | Effective*
1.2 TB | 12 TB | 60 TB
2.9 TB | 29 TB | 150 TB
5.7 TB | 57 TB | 300 TB
* Based on an estimated 5.26x reduction ratio
Data Type | Dedupe | Compress | Combined
Virtual Desktop (VDI) | 16.7x | 2x | 33x
KVM – Linux guests | 1.9x | 3.8x | 7.2x
Database Restore + Test | 1.02x | 4.2x | 4.2x
Pattern removal, dedupe and HW-based compression
FCP and iSCSI supported
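The “Combined” column in the data-reduction table is simply the product of the dedupe and compression ratios (the slide rounds the results slightly differently in a couple of cells):

```python
# Combined reduction = dedupe ratio x compression ratio,
# matching the A9000 workload table above (slide values are rounded).
workloads = [
    ("Virtual Desktop (VDI)",   16.7, 2.0),
    ("KVM - Linux guests",       1.9, 3.8),
    ("Database Restore + Test", 1.02, 4.2),
]
for name, dedupe, compress in workloads:
    print(f"{name}: {dedupe * compress:.1f}x combined")
```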
18. VersaStack Solution for IBM Cloud Object Storage
Cisco UCS S3260 Storage Server:
• Dual nodes in 4U space
• 56 hot-swappable 3.5” LFF HDDs
• 4, 6, 8, or 10 TB 7,200 RPM NL-SAS (28 drives per COS Slicestor)
Cisco UCS 6300 Fabric Interconnect:
• Low-latency, lossless 10 and 40 GbE
Cisco UCS C220 M4 servers:
• 1U with 36 cores, 24 DDR4 memory DIMMs
• For COS Manager and Accessers
Cisco Validated Design (CVD)
784 to 1,960 TB usable capacity
19. VersaStack + CloudCenter – End-to-End Hybrid Cloud
Any Application. Any Data. Anywhere.
• CloudCenter – model, benchmark, deploy and manage applications on-premises (flash, disk) and off-premises (IBM Cloud and other clouds)
• UCS Director with Application Centric Infrastructure (ACI)
• IBM Spectrum CDM – automation and self service
API-driven use cases: Instant Recovery, Self-service Test/Dev, DevOps, Automated DR
20. The Pendulum Swings to Hyperconverged Systems
Hyperconverged Systems
• Cheap commodity servers with internal storage
• Software-Defined Storage to access data
Advantages
• Industry-standard server and storage hardware
• Servers can now hold sufficient flash and disk capacity
• Easy to re-purpose servers as needed
Disadvantages
• Avoiding a SPOF requires 2 or more copies of data across independent servers
• Some systems offer only basic RAID
• Requires a high-speed Ethernet or InfiniBand network for connectivity
• Distance and scalability issues on some deployments
• Server and storage depreciation move in lockstep
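The SPOF disadvantage above has a capacity cost: with no shared RAID array, a hyperconverged cluster keeps N replicas of each block on independent servers, so usable capacity is raw capacity divided by the replication factor. A minimal sketch, assuming illustrative numbers (2 or 3 copies are typical):

```python
# Usable capacity under server-level replication: raw capacity across
# all nodes divided by the number of copies kept for redundancy.
def usable_tb(raw_tb_per_node: float, nodes: int, copies: int) -> float:
    return raw_tb_per_node * nodes / copies

print(usable_tb(10, 8, 2))   # → 40.0  (80 TB raw, 2 copies)
print(usable_tb(10, 8, 3))   # 3 copies cuts usable capacity further
```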
21. Hyperconvergence Packaging Options
Solution: Nutanix NX series, CS series on POWER; SimpliVity OmniCube; EVO:RAIL
Appliance: Supermicro® Hyperconverged Appliance with IBM Spectrum Accelerate
Software: VMware VSAN; IBM Spectrum Accelerate; IBM Spectrum Scale FPO; Nutanix MXP software
22. VMware Virtual SAN (VSAN)
• VSAN cluster consists of 3-64 VMware ESXi hosts; at least 2 must have disk groups
• Each host has 0 to 5 disk groups; a disk group is 1 SSD plus 1-7 HDDs
• 70% of SSD used as read cache, 30% as write cache
• IP network used to make three copies (replication) of data
• L2 Multicast required
• 1GbE can be used; Jumbo Frames and 10GbE recommended
• Only members in the cluster can access the data
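The cluster rules above can be sketched as a validation function. This is a hedged illustration of the stated constraints (3-64 hosts, at least 2 hosts contributing disk groups, 0-5 groups per host, each group 1 SSD plus 1-7 HDDs); the function name and data shape are my own, not a VMware API:

```python
# Validate a VSAN-style layout against the slide's rules.
# hosts: list of per-host disk-group lists; each group is (ssds, hdds).
def validate_vsan_cluster(hosts):
    if not 3 <= len(hosts) <= 64:                 # 3-64 ESXi hosts
        return False
    if sum(1 for h in hosts if h) < 2:            # >=2 hosts with disk groups
        return False
    for groups in hosts:
        if len(groups) > 5:                       # 0-5 disk groups per host
            return False
        for ssds, hdds in groups:
            if ssds != 1 or not 1 <= hdds <= 7:   # 1 SSD + 1-7 HDDs per group
                return False
    return True

cluster = [[(1, 4), (1, 7)], [(1, 2)], []]        # 3 hosts, one diskless
print(validate_vsan_cluster(cluster))             # → True
```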
24. HPE SimpliVity OmniCube
Legacy stack: servers + VMware, storage switch, HA shared storage, SSD array, backup appliance, WAN optimization, cloud gateway, storage caching, backup apps
Converged: pre-integrated storage and server resources
Hyperconverged: converge the entire stack into a single resource pool – cloud economics with web-scale enterprise capabilities
1-8 nodes per datacenter, 32 max federated
25. Nutanix Distributed File System (NDFS)
• A Virtual Storage Controller VM on each node serves virtual machines/virtual disks from local flash and HDD
• Enterprise storage features: snapshots, clones, replication, compression, thin provisioning, deduplication
• Data management: data locality, tiering, balancing, tunable resilience
• Hypervisor agnostic: vSphere, Hyper-V, Acropolis
• 3-64 nodes
Acropolis Hypervisor (AHV) is Nutanix’s version of Linux KVM for x86 and POWER systems
26. IBM CS-series Models for Nutanix
IBM CS821 (1U)
Two 10-core 2.09 GHz
POWER8 CPUs
Up to 160 threads
Up to 256 GB memory
Up to 7.68 TB flash
Nutanix AHV hypervisor
Little endian Linux
IBM CS822 (2U)
Two 11-core 2.89 GHz
POWER8 CPUs
Up to 176 threads
Up to 512 GB memory
Up to 15.36 TB flash
Nutanix AHV hypervisor
Little endian Linux
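The thread counts on this slide follow directly from POWER8’s SMT8 mode, which runs up to 8 hardware threads per core:

```python
# POWER8 cores support up to 8 hardware threads (SMT8), giving the
# per-model thread counts quoted above for the CS-series.
for model, sockets, cores_per_socket in [("CS821", 2, 10), ("CS822", 2, 11)]:
    threads = sockets * cores_per_socket * 8
    print(model, threads, "threads")  # CS821 160, CS822 176
```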
27. IBM Hyperconverged Systems Powered by Nutanix – Architecture
Nutanix Acropolis for Power Systems:
• App Mobility Fabric – workload mobility | HA | DR | VM placement | resource scheduling
• Distributed Storage Fabric – compression | dedupe | protection
• Acropolis Hypervisor – CentOS KVM-based
• OpenPOWER servers (CS821, CS822) with direct-attached SSD storage, top-of-rack switches, Open Virtual Switch and a virtual overlay network
Nutanix Prism Infrastructure Management (console, CLI, REST-based infrastructure services):
• Hardware Management – monitor | alert | topology | inventory | disk mgmt | F/W update | rolling update | “Light Path” diagnostics
• Virtualization Management – VM lifecycle mgmt | live migration | dynamic VM reconfig | VM-HA | VM-based backup
Nutanix Prism Central Multi-Cluster Infrastructure Management:
• Cross-cluster management of homogeneous and heterogeneous clusters, including additional Nutanix on x86 clusters
Guest VMs – Linux guest support: Ubuntu 16.04, CentOS 7.x
28. IBM CS Models versus Dell XC630-10
Hyperconverged systems; 3-year EDB with support; CS822 has 22 cores, Dell XC630-10 has 24 cores. Details on benchmarks in speaker notes.
Benchmark 1: CS822 74,826 tps (1.8x), 530 tps per $K (2.2x) vs. Dell 42,059 tps, 232 tps per $K
Benchmark 2: CS822 387,062 tps (1.8x), 2,560 tps per $K (2.3x) vs. Dell 210,339 tps, 1,110 tps per $K
Benchmark 3: CS822 13,705 tps (1.3x), 66 tps per $K (1.7x) vs. Dell 10,506 tps, 39 tps per $K
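Each “(Nx)” factor on this slide is just the CS822 figure divided by the Dell XC630-10 figure, which can be recomputed directly (one ratio, 530/232, rounds to 2.3x rather than the slide’s 2.2x, so the slide presumably used unrounded inputs):

```python
# Recompute the speed-up factors: CS822 value / Dell XC630-10 value.
pairs = [
    (74826, 42059), (387062, 210339), (13705, 10506),   # tps
    (530, 232), (2560, 1110), (66, 39),                 # tps per $K
]
for cs822, dell in pairs:
    print(f"{cs822 / dell:.1f}x")
```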
29. 451 Research on Hyperconverged Infrastructure (HCI)
451 Research polled 100 enterprise companies that evaluated Hyperconverged Infrastructure:
• 44% chose not to adopt HCI for enterprise use
• 55% faced a substantial refresh or upgrade of their datacenter network
• 78% were looking at eight or fewer nodes
• 65% prefer the Fibre Channel protocol
• Half of those who chose to integrate HCI in production expressed dissatisfaction with system capabilities
• No large HCI enterprise installations available for reference
• Most HCI vendors do not allow the publication of independent, third-party evaluations of their performance and scalability claims
• HCI modules are not interchangeable, requiring a long-term commitment to a single vendor for both hardware and software
30. IBM Storage Portfolio – XIV and IBM Spectrum Accelerate
Portfolio (diagram): IBM Spectrum Scale / Elastic Storage Server; IBM Spectrum Virtualize / FlashSystem; DS8000; All-Flash and Flash/Disk Hybrid
XIV Gen3 – pre-built system:
• FCP and iSCSI volumes
• OpenStack Cinder
• VMware VAAI, VASA, SRA, VVols
• Hyper-Scale manager, mobility, consistency
• Real-time Compression
• Microsoft and Hyper-V integration
• Data-at-Rest Encryption
IBM Spectrum Accelerate – software deployed on client-choice x86 servers:
• Hyperconvergence
• iSCSI volumes
• OpenStack Cinder
• VMware VAAI, VASA, SRA
• Hyper-Scale manager, mobility, consistency
32. IBM Spectrum Accelerate for Block-Level Hyperconvergence
• Enables the IT administrator to single-handedly manage the entire data center stack
• Allows hardware standardization of network, compute, storage, power and environmentals
• Leverages existing data center services and maintenance contracts
• Simplifies the architecture when lacking specialized, domain-specific skill sets
• iSCSI volumes can also be used by bare-metal servers and other hypervisors
• Available as software-only or as the Supermicro® Hyperconverged Appliance pre-built system
Diagram: three servers on an Ethernet interconnect, each running a hypervisor with an IBM Spectrum Accelerate VM; guest VMs (VM 1-VM 6) access their volumes over iSCSI
33. IBM Storage Portfolio – File and Object Store
IBM Spectrum Scale / Elastic Storage Server (All-Flash and Flash/Disk Hybrid) – data management:
• Offers a global name space of file and object access storage
• Space-efficient snapshots, Information Lifecycle Management (ILM), Active File Management (AFM) and remote mirroring
• Based on technology from IBM General Parallel File System (GPFS)
IBM Cloud Object Storage System – object store
IBM Spectrum Archive – physical tape:
• Drastically lowers the cost for long-term data retention
• Based on technology from IBM Linear Tape File System (LTFS)
34. IBM Spectrum Scale™ – Supported Topologies
Shared Pools – NSD Servers:
• Access files on direct, twin-tailed or SAN-attached disk (FCP, iSCSI, IB)
• OpenStack drivers
• Can be enabled as “Protocol Nodes”
Share-Nothing Pools – File Placement Optimization (FPO) Servers (hyperconverged):
• For AIX, Linux-x86 and Linux on POWER
• Access files on direct-attached (internal) disk
• Export files to other FPO servers
NSD Clients (Linux, AIX and Windows):
• Access files via SAN, TCP/IP or RDMA network
External Clients:
• Access data via NAS, HDFS and object protocols over an IP network
35. Spectrum Scale File Placement Optimization (FPO) for clustered file and object storage
• Enables the IT administrator to single-handedly manage the entire data center stack
• Allows hardware standardization of network, compute, storage, power and environmentals
• Bare-metal deployments for AIX and Linux
• Supports Linux KVM hypervisors, Docker and LXC containers
Diagram: three servers on an Ethernet or InfiniBand interconnect, each running Spectrum Scale; applications (App 1-App 6) access files through POSIX interfaces
36. How much of your environment can benefit from these systems?
• High-end z and POWER: many mission-critical workloads are best served on z Systems mainframes or high-end POWER servers
• IBM PureSystems: handles database, web, and analytics workloads
• VersaStack: handles all x86 workloads (all bare metal native OS, hypervisors and containers)
• Hyperconverged: limited to Windows and Linux VMs – VMware? Hyper-V? Linux KVM? Acropolis?
37. The Pendulum Swings to meet Client Requirements
• Internal Storage
• External Storage – FlashSystem; SVC, Storwize family
• Networked Storage (SAN, LAN) – IBM Flash, Disk and Tape storage systems; IBM Spectrum Storage
• Converged Systems – best-of-breed server, storage and network hardware
• Hyperconverged Systems – commodity servers with internal storage; Software-Defined Storage to access data
41. IBM Tucson Executive Briefing Center
• Tucson, Arizona is
home for storage
hardware and
software design and
development
• IBM Tucson
Executive Briefing
Center offers:
– Technology
briefings
– Product
demonstrations
– Solution workshops
• Take a video tour!
– http://youtu.be/CXrpoCZAazg
https://www.ibm.com/it-infrastructure/services/client-centers
ccenter@us.ibm.com
42. About the Speaker
Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line. Tony joined IBM Corporation in
1986 in Tucson, Arizona, USA, and has lived there ever since. In his current role, Tony presents briefings on storage topics
covering the entire IBM Storage product line, IBM Spectrum Storage software products, and topics related to Cloud Computing,
Analytics and Cognitive Solutions. He interacts with clients, speaks at conferences and events, and leads client workshops to
help clients with strategic planning for IBM’s integrated set of storage management software, hardware, and virtualization
solutions.
Tony writes the “Inside System Storage” blog, which is read by thousands of clients, IBM sales reps and IBM Business Partners
every week. This blog was rated one of the top 10 blogs for the IT storage industry by “Networking World” magazine, and #1
most read IBM blog on IBM’s developerWorks. The blog has been published in a series of books, Inside System Storage: Volume I through V.
Over the past years, Tony has worked in development, marketing and consulting for various storage hardware and software
products. Tony has a Bachelor of Science degree in Software Engineering, and a Master of Science degree in Electrical
Engineering, both from the University of Arizona. Tony is an inventor or co-inventor of 19 patents in the field of electronic data
storage.
9000 S. Rita Road
Bldg 9032 Floor 1
Tucson, AZ 85744
+1 520-799-4309 (Office)
tpearson@us.ibm.com
Tony Pearson
Master Inventor
Senior IT Architect
IBM Storage
45. Notice and disclaimers continued
Information concerning non-IBM products was obtained from the
suppliers of those products, their published announcements or
other publicly available sources. IBM has not tested those
products in connection with this publication and cannot confirm
the accuracy of performance, compatibility or any other claims
related to non-IBM products. Questions on the capabilities of
non-IBM products should be addressed to the suppliers of those
products. IBM does not warrant the quality of any third-party
products, or the ability of any such third-party products to
interoperate with IBM’s products. IBM expressly disclaims all
warranties, expressed or implied, including but not limited
to, the implied warranties of merchantability and fitness for
a particular purpose.
The provision of the information contained herein is not intended
to, and does not, grant any right or license under any IBM
patents, copyrights, trademarks or other intellectual
property right.
IBM, the IBM logo, ibm.com, AIX, BigInsights, Bluemix, CICS,
Easy Tier, FlashCopy, FlashSystem, GDPS, GPFS,
Guardium, HyperSwap, IBM Cloud Managed Services, IBM
Elastic Storage, IBM FlashCore, IBM FlashSystem, IBM
MobileFirst, IBM Power Systems, IBM PureSystems, IBM
Spectrum, IBM Spectrum Accelerate, IBM Spectrum Archive,
IBM Spectrum Control, IBM Spectrum Protect, IBM Spectrum
Scale, IBM Spectrum Storage, IBM Spectrum Virtualize, IBM
Watson, IBM z Systems, IBM z13, IMS, InfoSphere, Linear
Tape File System, OMEGAMON, OpenPower, Parallel
Sysplex, Power, POWER, POWER4, POWER7, POWER8,
Power Series, Power Systems, Power Systems Software,
PowerHA, PowerLinux, PowerVM, PureApplication, RACF,
Real-time Compression, Redbooks, RMF, SPSS, Storwize,
Symphony, SystemMirror, System Storage, Tivoli,
WebSphere, XIV, z Systems, z/OS, z/VM, z/VSE, zEnterprise
and zSecure are trademarks of International Business
Machines Corporation, registered in many jurisdictions
worldwide. Other product and service names might
be trademarks of IBM or other companies. A current list of
IBM trademarks is available on the Web at "Copyright and
trademark information" at:
www.ibm.com/legal/copytrade.shtml.
Linux is a registered trademark of Linus Torvalds in the United
States, other countries, or both. Java and all Java-based
trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.