SlideShare una empresa de Scribd logo
1 de 58
© 2006 IBM Corporation
This presentation is intended for the education of IBM and Business Partner sales personnel. It should not be distributed to customers.
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
IBM System x iDataPlex™:
Internet Scale Computing
XTW01
Topic 4
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Topic 4 - Course Overview
The objectives of this course of study are:
> List three emerging technologies for an iDataPlex solution
> List three business goals that iDataPlex addressed
> Identify elements of the iDataPlex rack design
> Match the server offering to its characteristics
2
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Topic 4 - Course Agenda
> *Introducing IBM iDataPlex*
> Rack Unit
> iDataPlex Nodes Overview
> iDataPlex Management
3
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
An IBM Portfolio that Covers the Spectrum of
Business Needs
BladeCenter
Enterprise eX5
iDataPlex
Enterprise Racks & Towers
Scale,
Power,
Density,
Optimizatio
n
Consolidation,
Virtualization
Infrastructure
Simplification,
Application
Serving
Scale Out
ScaleUp
*This slide contains animations that will launch with slide.
4
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Introducing IBM System x iDataPlex
> High volume, low cost computing for
HPC and Cloud
> Innovative, flexible design for Internet-
scale computing
> Up to 5X compute density for efficient
space utilization
> Front-accessible and intelligent
components simplify deployment,
serviceability and manageability for
Internet-scale computing
> Reduce cooling costs dramatically;
minimize or even eliminate data center
air conditioning expense
5
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Tough Challenges Require Real Innovation
Decrease Server
Power
Consumption
by 50%
Eliminate
Data Center Air
Conditioning
Increase
Compute Density
by 10x
“More than 70% of the world’s global 1000 organizations will have to modify
their data center facilities significantly during the next five years.”
-- Gartner, September 2007
Appropriate density for a given power envelope
Scale data center computing with no new
construction
Attack largest operational expense items
Cap carbon footprint for data centers
Rising energy costs & rising energy demand
Power & thermal issues inhibit operations
6
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Fast, Cool, Dense, Flexible. TCO without compromise!
Why iDataPlex?
iDataPlex is:
> Optimized both mechanically as a half-depth server
solution and component-wise for maximum power and
cooling efficiency
> Designed to maximize utilization of Data Center floor
space, power and cooling infrastructure with Industry-
standards based server platform.
> Easy-to-maintain solution with individually serviceable
servers, front access hard drives/cabling,
> Configurable for customer-specific compute, storage,
or I/O needs and delivered pre-configured for rapid
deployment
> Common tools across the System x portfolio for
management at the node, rack or Data Center level
7
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Topic 4 - Course Agenda
> Introducing IBM iDataPlex
> *The Rack Unit*
> iDataPlex Nodes Overview
> iDataPlex Management
8
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Increases Density for Space Savings
Optional Rear Door Heat eXchanger
iDataPlexTypical Enterprise Rack
Full-fan
air depth
Half-fan
air depth
42U Enterprise Rack
1
42U Enterprise Rack
2
> Optimized rack design more than doubles server density per rack
> Air flow efficiency equals power savings of up to 20% and 30-40% lower
airflow impedance
> IBM Rear Door Heat eXchanger provides the ultimate in cooling savings
Top-down view
9
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
iDataPlex Rack -vs.- Standard 19” Rack
Top View Top View
Standard 19” Rack iDataPlex Rack
600mm
(840mmw/RDHx)
1200mm
446x400mm
446x400mm
RDHX
iDataPlex Rack design:
 Rack is rotated 90°
– Half-depth, front-access servers.
 Low airflow impedance.
 Servers located side-by-side.
 Doubles server density in similar
footprint.
– Greater cross-section for RDHX.
 102U Rack
– 84U for Nodes, etc.
– 16U for switches, etc.(vertical)
– 2U Rack Management Appliance
(horizontal) slots
 Space-saving footprint
– 1200mm wide x 600mm deep.
444x750mm
640mm
1050mm
10
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
iDataPlex – Designed for Data Center Flexibility
PDUs
(rear)
3U Chassis
2U Chassis
iDataPlex Rear Door Heat
ExchangerServer Tray
Storage Drives & Network
Options
Dual IOTray
Storage Tray
Broad portfolio of customizable components that adjust to your computing
needs
Switches
(front)
*This slide contains animations that will launch with slide.
11
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Rear Door Heat eXchanger
Swings to
provide
access to
rear PDUs
Lock handle to
close/ open
door
Industry Standard hose
fittings
No Leaks
Sealed Internal
coils
Perforated Door for clear airflow
IBM patent pending hex
airflow design
 Liquid cooling at the rack is
75%-95% more efficient than air
cooling by a CRAC
 Can eliminate rack heat exhaust
 No electrical or moving parts
 No condensation
 Chilled water or evaporative
liquid
12
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
A New Way to Cool the Data Center
Provides Up to 100% Heat Extraction, and can even cool the room!
IR Thermal Camera View of RDHx with:
top photo: water flow off
bottom photo: water flow on
iDataPlex rack can run at room
temperature (no air conditioning
required) when used with the
optional Rear Door Heat
eXchanger.
These photos were taken in the
Thermal Lab at IBM, showing the
iDataPlex rack without (top
image) and with (bottom image)
the Rear Door Heat eXchanger
closed and operational.
13
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Easily Serviceable Integrated x86 Packaged Design
> All-front access eliminates accessing rear of rack
> Swappable server trays in chassis
> Chassis docks directly into power
> Chassis guides keep upper servers in place
> Rack-side pockets for cables provide highly efficient cable
routing
Shared dual domain power
supply with hot dock ports
so planars dock into power
connector
Tool-less server
and chassis
changes done
easily with
innovative server
latch locks
High Efficiency
shared 80mm fans
provide superior
cooling at a low
wattage draw and
low noise levels
Flex Node Technology
Innovative design so planar
trays are independent &
swappable for multiple
configurations
Asset Tags -
Confirm Asset I/Ds
Chassis Guides
& Rail kits -
Individual
Chassis Rails for
reliable fit & safe
servicing
2U Flex Chassis
Designed for Data Center Serviceability
14
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
iDataPlex Rack Design
23.6
”
47.2””
AIR
FLOW
AIR
FLOW
19” 19”
Top View
Energy efficiency
> Optimizes airflow for cooling efficiency with half-depth rack
> Reduces pressure drop to improve chilled air efficiency
Leadership density
> Dual column / Half-depth rack
> Standard two-floor tile rack footprint
> Up to 168 physical nodes in 8 square feet
Flexibility
> Matches US & European data center floor tile standards
> Compatible with standard forced air environments
Ease of use
> All service and cabling from the front
15
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Topic 4 - Course Agenda
> Introducing IBM iDataPlex
> Rack Unit
> *iDataPlex Nodes Overview*
> iDataPlex Management
16
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Current iDataPlex Server Offerings
Processor: Quad Core Intel Xeon 5500
Quick Path Interconnect up to 6.4 GT/s
Memory:16 DIMM DDR3 - 128 GB max
Memory Speed: up to 1333 MHz
PCIe: x16 electrical/ x16 mechanical
Chipset: Tylersburg-36D
iDataPlex dx360 M2
High-performance Dual-Socket
Storage: 12 3.5” HDD up to 24 TB per node / 672TB per rack
Proc: 6 or 4 Core Intel Xeon 5600
Memory: 16 DIMM / 128 GB max
Chipset: Westmere
iDataPlex 3U Storage Rich
File Intense Dual-Socket
Processor: 6 & 4 Core Intel Xeon 5600
Quick Path Interconnect up to 6.4 GT/s
Memory:16 DIMM DDR3 - 128 GB max
Memory Speed: up to 1333 MHz
PCIe: x16 electrical/ x16 mechanical
Chipset: Westmere 12 MB cache
iDataPlex dx360 M3
High-performance Dual-Socket
Processor: 6 & 4 Core Intel Xeon 5600
2 NVIDIA M1060, M2050,
M2070, M2070Q
Quick Path Interconnect up to 6.4 GT/s
Memory:16 DIMM DDR3 - 192 GB max
Memory Speed: up to 1333 MHz
PCIe: x16 electrical/ x16 mechanical
Chipset: Westmere 12 MB cache
iDataPlex dx360 M3 Refresh
Exa-scale Hybrid CPU + GPU
17
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Key Features Benefits
42 GPU servers in a
standard rack footprint
• 10X Increased compute performance saving 65% less acquisition costs
for 4X the density
• 3.7X increase in Flops/Watt offers an power and cooling efficiency of
72% lower power consumption
Support for up to two
NVIDIA GPUs
• Expanded I/O capabilities fits 49 Teraflops of performance in a rack
with 61% more density than outboard solutions
• Flexibility to configure I/O intense configurations with networking and
storage along with compute CPU intense service without compromising
density
Storage Performance
Options
• Higher capacity/higher performance SAS controller
Memory capacity up
to 192 GB per server
• Expanded memory options with 16GB DDR-3 1333MHz memory
Fast, Cool, Dense, Flexible. TCO without compromise!
Accelerate performance without compromising density through maximum flexibility.
Target Workloads:
Workloads: Risk Analytics, Seismic/Petro Exploration, Medical Imaging, Digital Content Creation, Online Gaming
Industries: FSS (Financial, Insurance), Public (Life Science, Education, Government) Industrial (Petro, Auto, Comm (M&E, DCC)
Customers: looking for value hardware to perform massive parallel computing in data center with space, power & cooling constraints
How to Beat HP/Dell:
1.Better flexibility and density
in power and cooling
constrained data centers
2.Energy Efficiency –
maximizes power & cooling
efficiency for GPU servers
3.Ultimate space and OPEX
savings with the Rear Door
Heat Exchanger
How to Beat HP/Dell:
1. Better flexibility and density
in power and cooling
constrained data centers
2. Energy Efficiency –
maximizes power & cooling
efficiency for GPU servers
3. Ultimate space and OPEX
savings with the Rear Door
Heat Exchanger
iDataPlex dx360 M3
18
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
4- 2.5” SS SAS 6Gbps
(or SATA, or 3.5”, or SSD…)
InfiniBand DDR
(or QDR, or 10GbE…)
NVIDIA Tesla M2050 #1
(or NVIDIA Tesla M1060,or FX3800, or Fusion IO, or)
NVIDIA Tesla M2050 #2
(or NVIDIA Quadro FX3800, or Fusion IO, or…)
Server level value
>Each server is individually serviceable
>Each GPU is individually replaceable
>6Gbps SAS drives and controller for maximum performance
>Service and support for server and GPU from IBM
dx360 M3 Server GPU I/O Tray - Front View
19
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Why GPU’s
> Applications run faster because it is using the
high-performance of parallel cores on the GPU
> Many codes and algorithms in HPC and other
applications are parallel floating point math
problems
> GPU’s do general purpose scientific and
engineering computing
> GPGPU = General Purpose Graphics Processing
Unit
Visualization
– Method of representing large amounts of complex
data in ways that are easier to understand, analyze
and support decision making.
Acceleration
– Augment or supplement the server CPU’s to
achieve greater overall performance and/or
efficiency by leveraging massive parallelism and
the superior floating point capability and on card
memory
x86 Intel CPU + GPU work
together in heterogeneous
computing model
–The sequential part of the
application runs on the CPU
–The computationally-intensive
part runs on the GPU
20
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Do More with Maximum Performance Density
49 Teraflops of Sustained performance
4X increased performance per rack
10X increased performance per node
65% Less acquisition costs
3.7X increase in Flops/Watt
dx360 M3
Xeon X5670
2.93GHz / 6C / 95W
1008
Cores
672
Cores
Xeon X5570
2.93GHz /
4C / 95W
dx360 M2
38,136
Cores
Xeon X5670
2.93GHz / 6C / 95W
Fermi M2050
448C / 225W
dx360 M3 Refresh
21
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
16x PCI-E
Riser card
slot
Virtual
Media Key
Mini PCI-E SAS
Card Slot
JP1
Clear
CMOS
Hard drive carrier
16 max.(8x per) DDR3 DIMMs
(2GB, 4GB, & 8GB) 1.5V and
1.35V
DIMM Bank 1
DIMM Bank 2
CPU 1-Supports
DIMM Bank 0
(Intel Westmere
EP / Nehalem EP)
CPU 0-Supports
DIMM Bank 1
(Intel Westmere EP)
SATA
ports 6x
JP2
Boot
Block
Enable
Battery
8X RAID Rise
Card slot
Ethernet Ports &
Management
dx360 M3 1U Flex Node (MT 6391) Interior View
22
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
System x iDataPlex dx360 M3
Tailored for Your Business Needs
iDataPlex flexibility with better performance, efficiency and more options!
1U Drive Tray
1U Compute
Node
3U Storage Chassis
Maximize Storage Density
3U, 1 Node Slot & Triple Drive Tray
HDD: 12 (3.5” Drives) up to 24TB
I/O: PCIe for networking + PCIe for RAID
Compute + Storage
Balanced Storage and Processing
2U, 1 Node Slot & Drive Tray
HDD: up to 5 (3.5”)
Compute Intensive
Maximum Processing
2U, 2 Compute Nodes
750W N+N
Redundant
Power Supply
900W
Power
Supply
1U GPU I/O Tray
550W
Power
Supply
Acceleration Compute + I/O
Maximum Component Flexibility
2U, 1 Node Slot
I/O: up to 2 PCIe, HDD up to 8 (2.5”)
23
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
dx360 M3 Front View
Two Ethernet
ports
Power-connector
button
Power-on LED
Hard disk drive
activity LED
Locator LED
System-error LED
Serial Video USB
ports
System-
management
Ethernet
24
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Intel®
5520
Chipset
PCI Express* 2.0
ICH 9/10 Intel®
82599 10GbE
Controller
Intel®
Data Center
Manager
Intel®
Node Manager
Technology
New Processor,
Proven Platform
Intel®
Xeon®
5600
Intel®
Xeon®
5600
> Intel® Xeon® 5600 Processor Series
 32nm Technology with 2nd Generation High-k Process
> Intel 5520 (Tylersburg) Chipset
 IOH (Northbridge) + ICH10 (Southbridge)
 Up to 6.4 GT/sec speeds
 Dual x16 Gen2 or Quad x8 PCI Express 2.0
graphics card support
> Integrated DDR3 three-channel memory
controller
 2nd
CPU required to use all 16 DIMMs
> Two Intel QuickPath interconnect (Intel QPI)
links per components
 High Speed Serial link between CPUs/chipset
 Up to 25.6 GB/sec bandwidth per link
> Single die four or six core
> Three cache levels:
 32 KB data / 32 KB instruction L1 cache per core
 256 KB L2 memory cache per core
 Shared 8MB L3 cache (12MB max.)
dx360 M3 Processor Subsystem
QPI
Up to 25.6 GB/sec
bandwidth per link
Up to 16 slots
DDR3 Memory
25
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
dx360 M3 Refresh - Westmere-EP SKU
Server / Workstation
2S
X5670
2.93 GHz 6C
X5660
2.80 GHz 6C
X5650
2.66 GHz 6C
E5640
2.66 GHz 4C
E5630
2.53 GHz 4C
E5620
2.40 GHz 4C
L5640
2.26 GHz 6C
L5630
2.13 GHz 4C
95W
95W
95W
80W
80W
80W
60W
40W
StandardAdvancedBasic
E5507
2.26 GHz 4C
E5506
2.13 GHz 4C
E5503
2.00 GHz 2C
80W
80W
80W
•5.86 GT/s QPI
•12MB L3 Cache
•Turbo/HT enabled
•DDR3-1066
•Turbo 4C:
NA/NA/1/1/2/2
•Nehalem-EP remains
for Value SKUs.
• 4.8 GT/s QPI
•4MB L3 Cache on 4C
•DDR3-800
Freq Optimized
X5667
3.06 GHz 4C
95W
X5677
3.46 GHz 4C
X5680
3.33 GHz 6C
130W 130W
LV
• up to 5.86 GT/s QPI
and 12MB cache
•60W Turbo 2/2/3/3/4/4
•40W Turbo
NA/NA/1/1/2/2
•Turbo/HT disabled on
1.6 GHz
•6.4 GT/s QPI
•12MB L3 Cache
•Turbo/HT enabled
•DDR3 -1333
•130W Turbo
6C: 1/1/2/2/3/3
4C: NA/NA/1/1/2/2
•95W Turbo
6C: 2/2/3/3/4/4
4C: NA/NA/2/2/3/3
L5609
1.86 GHz 4C <=40W
Support for 130 W not cost
efficient for iDataPlex dx360 M3
26
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Intel®
Xeon®
5600
Intel®
Xeon®
5600
95W
80W
60W
40W
Better performance/Watt
Lower power consumption
Lower Power CPUs
Up to 10% reduction in
memory power†
Intelligent Power
Technology
Integrated Power Gates and
Automated Low Power States with
Six Cores
Lower Power
DDR3 Memory
Nehalem Micro-architecture + 32nm CPU + Enhanced Power Mgt
= Greater Energy Efficiency
Nehalem Micro-architecture + 32nm CPU + Enhanced Power Mgt
= Greater Energy Efficiency
CPU Power Management
Optimized power consumption
through more efficient Turbo Boost
and memory power management
Building on Xeon® 5500 Leadership Capabilities
CTO
Westmere-EP Energy Efficiency
27
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
dx360 M3 Planar Building Block Diagram
Tylersburg
36D
SAS LSI 1064 Opt.
RAID Mini-Slot
PCI-E x4 Gen1
CSI
16 x DDR3
800/1066/1333MHz
VRD11.1VRD11.1
CSI
LGA 1366
6.4 GT/s
CSI
VRD11.1VRD11.1
Intel Xeon
5600
LGA 1366
6.4 GT/s
6.4 GT/sDDR3
DDR3
DDR3
8 DIMMs
DDR3
DDR3
DDR3
8 DIMMs
LPC
PCI-E x16 slot
PCI-E x16 Gen2
PCI-E x1
ICH10
6 SATA 3 USB
CLINK
ESI
Video
Maxim
VSC45
2
COM
RJ45RJ45
10/1003000MB/s 480Mb/s
2GB/s
8GB/s
IMM
PCI-E x8 Gen2
RAID PCI-E x8 slot
4GB/s
PCI-E x4
Intel
Zoar
Intel
Zoar
RJ45RJ45
GbE
RJ45RJ45
GbE
Intel NIC
Intel Xeon
5600
28
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
dx360 M3 Processor Installation
Intel Xeon Processor
Align Tabs/notches
> Intel Xeon Processors
 Same clock rate, cache size/type, and identical
in the core frequencies
> LGA 1366 socket (Socket B)
 Pads of bare gold-plated copper
− (No pins on CPU)
 Load plate with locking lever
 Align notches to ensure proper installation
Attention: Follow the instructions carefully to install the
CPU. Do not use any tools or sharp objects to lift the
locking lever on the CPU socket. Do not press CPU into
the socket. Make sure that the CPU is oriented and
aligned correctly in the socket before closing the CPU
retainer.
Attention: Follow the instructions carefully to install the
CPU. Do not use any tools or sharp objects to lift the
locking lever on the CPU socket. Do not press CPU into
the socket. Make sure that the CPU is oriented and
aligned correctly in the socket before closing the CPU
retainer.
29
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
dx360 M3 Heatsink Requirements
Dust Cover Heat Sink Filler
Heat Sink
P/N # Heatsink ASM for CPU #1 and #2
46M5518 dx360 M3 Processor Heat sink
Plus
Attention: Do not set the heat sink once removed from
the plastic cover. Do not touch thermal material on
the bottom of the heat or CPU. Touching thermal
material will contaminate it.
Attention: Do not set the heat sink once removed from
the plastic cover. Do not touch thermal material on
the bottom of the heat or CPU. Touching thermal
material will contaminate it.
30
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
dx360 M3 Memory Subsystem
> Supports registered DDR3 LP ECC memory
 Active Memory features, including advanced Chipkill memory protection
– 16x better error correction thank standard ECC memory
Choice of standard 1.5v or 1.35v consumes 10% less energy
Sixteen DIMM slots
 DIMM sizes:
– 2 GB, 4 GB, 8 GB or 16 GBRDIMM sizes:
 Maximum capacity
– 256 GB (max. x16GB DIMMs)
31
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
DIMM 1
DIMM 2
DIMM 4
DIMM 5
DIMM 7
DIMM 8
Example of Memory Bank 1 and CPU 1
Ch 2
DIMMs
Ch 1
DIMMs
Ch 0
DIMMs
DIMM 9
DIMM 6
DIMM 3
dx360 M3 Memory Installation
CPU must be
populated for access
to it’s DIMMS
Intel®
Xeon®
processor
32
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
> Standard with onboard ICH10 SATA II controller
 Support 5 drives (depending on the configuration) internal simple-swap (SS) SATA II
drives, or 4 SS SSDs. Hot-swap SAS or SATA HDDs, or simple-swap SAS HDDs,
require an optional adapter
> ServeRAID-BR10il v2 controller
 Support (no cache) for up 3Gbps7 (x4 PCIe)
> RAID-0/1/1E to 4 HDDs or SSDs.
> ServeRAID-M1015 SAS/SATA controller
 6Gbps (x8 PCIe) RAID-0/1/10 (no cache) for up to 16 drives (limited by available bays)
> ServeRAID M1000 Series Advance Feature Key adds RAID-5 with SED support
> ServeRAID-M5014 SAS/SATA controller offers
 6Gbps (x8 PCIe)
> Enhanced performance with 256MB of cache memory
 RAID-0/1/10/5/50 for up to 16 drives (limited by available bays).
> ServeRAID-M5015 SAS/SATA controller
 6Gbps (x8 PCIe)
> enhanced performance with 512MB of cache memory and battery backup, and
supports RAID-0/1/10/5/50 for up to 16 drives (limited by available bays).
> ServeRAID M5000 Series Advance Feature Key adds RAID-6/60 with SED support to
the M5014 and M5015
> ServeRAID M5000 Series Battery Key adds battery backup support to the M5014.
 Support either SAS or SATA, hot-swap or simple-swap, 3.5-inch or 2.5-inch drives.
Drives cannot be intermixed.
dx360 M3 Disk Controller
Simple-swap drive
Hot-swap drive
33
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
dx360 M3 Hard Drive Flexible Configurations
2U Chassis with 2 1U Server trays
 2 3.5-inch SS SATA HDDs (1 per server)—using the onboard controller
 2 3.5-inch SS SAS HDDs (1 per server)—requires ServeRAID-BR10il v2
 4 2.5-inch SS SAS HDDs (2 per server)—requires ServeRAID-BR10il v2 or M1015
 4 2.5-inch SS SATA HDDs (2 per server)—using the onboard controller, ServeRAID-BR10il v2
 4 2.5-inch SS SATA SSDs (2 per server)—using the onboard controller, ServeRAID-BR10il v2, or IBM 6GB SSD HBA Card
2U Chassis with 1 Server and one 1U Storage tray
 5 3.5-inch SS SATA HDDs (1 per server, 4 per tray)—5 using the onboard controller
 4 3.5-inch SS SAS HDDs (0 per server, 4 per tray)—requires ServeRAID-BR10il v2, M1015 or M5015
 4 3.5-inch SS SATA HDDs (4 per tray)—requires ServeRAID-BR10il v2, M1015 or M5015
 8 2.5-inch SS SAS HDDs (0 per server, 8 per tray)—requires ServeRAID-M1015 or M5015
 8 2.5-inch SS SATA HDDs (0 per server, 8 per tray)—requires ServeRAID-M1015 or M5015
 8 2.5-inch SS SATA SSDs (0 per server, 8 per tray)—requires ServeRAID-M1015, M5015, or IBM 6GB SSD HBA Card
2U Chassis with 1 Server and one 1U I/O tray
 2 3.5-inch SS SATA HDDs (1 per server, 1 per tray)—using the onboard controller or ServeRAID-BR10il v2
 2 3.5-inch SS SAS HDDs (1 per server, 1 per tray)—requires ServeRAID-BR10il v2, or M1015
 4 2.5-inch SS SAS/SATA HDDs (2 per server, 2 per tray)—requires ServeRAID-M1015
 4 2.5-inch SS SSDs (2 per server, 2 per tray)—requires ServeRAID-M1015 or IBM 6GB SSD HBA Card
3U Chassis with 1 Server and one 2U Storage tray
 12 3.5-inch HS SATA HDDs (0 per server, 12 per tray)—requires ServeRAID-M1015, or M5015
 12 3.5-inch HS SAS HDDs (0 per server, 12 per tray)—requires ServeRAID-M1015, or M5015
34
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
NVIDIA Graphic Adapters Options
iDataPlex Server Platform
NVIDIA Tesla M1060
 1 GT200 GPU
4 GB Memory
190 Watt
Double Wide / Dual Slot
Passive Cooling
FRU PN 43V5909
NVIDIA Quadro FX3800
1 GB Memory
107 Watt
Single Wide / Single Slot
FRU PN 43V5925
NVIDIA Tesla M2050
1 Fermi GPU
3 GB Memory
225 Watt
Double Wide / Dual Slot
Passive Cooling
FRU PN 43V5894
NVIDIA Tesla M2070/M2070Q
1 Fermi GPU
6 GB Memory
225 Watt
Double Wide / Dual Slot
Passive Cooling
Cuda, OpenCL, OpenGL
FRU PN 43V5935 M2070
43V5943 M2070Q
35
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Tesla T10: The Processor Inside
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
> 8 TP per TPA (240 Total)
> Full scalar processor with integer
and floating point units
> 16K of RAM for Shared Memory
Thread Processor (TP)
FP Integer
Multi-banked
Register File
SpcOps
ALUs
Thread Processor Array (TPA)
30 TPAs = 240 Processors
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP ArrayShared Memory
Double Precision
Special Function Unit (SFU)
TP ArrayShared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP ArrayShared Memory
Double Precision
Special Function Unit (SFU)
TP ArrayShared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP Array Shared Memory
Double Precision
Special Function Unit (SFU)
TP ArrayShared Memory
Double Precision
Special Function Unit (SFU)
TP ArrayShared Memory
36
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
NVIDIA Tesla M2050 and M2070/M2070Q
Tesla M2050 and Tesla M2070/M2070Q
Computing Processor Module
9.75 inches
4.37 inches
vented bracket
37
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
2 PCIe x16
(1 per side)
PCIE x8
GPU 1GPU 1
GPU 2GPU 2
HBA
dx360 M3 – New 3-slot Riser and I/O Tray
38
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
New Innovations
> 448 CUDA Cores
> NVIDIA Parallel DataCache
> NVIDIA GigaThread
> ECC Support
DRAMI/FDRAMI/FHOSTI/FHOSTI/FGigaThreadDRAMI/FDRAMI/F
DRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/F
L2L2
The Soul of a Supercomputer in the body of a GPU
Tesla T20 Series Architecture Structure (1 of 5)
39
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
448 CUDA Cores
> Optimized performance and
accuracy with up to 8X faster
double precision
> Compliant with industry
standards for floating point
arithmetic
> Versatile accelerators for a
wide variety of applications
> Valued by HPC clients
running linear algebra and
numerical simulation
applications
DRAMI/FDRAMI/FHOSTI/FHOSTI/FGigaThreadDRAMI/FDRAMI/F
DRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/F
L2L2
The Soul of a Supercomputer in the body of a GPU
Tesla T20 Series Architecture Structure (2 of 5)
40
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
NVIDIA Parallel DataCache
> First GPU architecture to support
a true cache hierarchy in
combination with one-chip shared
memory
> Improves bandwidth and reduces
latency through L1 cache’s
configurable shared memory
> Fast, coherent data sharing across
the GPU through unified L2 cache
> Clients running physics solvers,
raid tracing or sparse matrix
multiplication algorithms benefit
greatly from this cache hierarchy
DRAMI/FDRAMI/FHOSTI/FHOSTI/FGigaThreadDRAMI/FDRAMI/F
DRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/F
L2L2
The Soul of a Supercomputer in the body of a GPU
Tesla T20 Series Architecture Structure (3 of 5)
41
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Tesla T20 Series Architecture Structure (4 of 5)
NVIDIA GigaThread
> Increased efficiency with
concurrent kernel execution
> Dedicated, bi-directional data
transfer engines
> Intelligently manage tens of
thousands of threads
DRAMI/FDRAMI/FHOSTI/FHOSTI/FGigaThreadDRAMI/FDRAMI/F
DRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/F
L2L2
The Soul of a Supercomputer in the body of a GPU
42
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Tesla T20 Series Architecture Structure (5 of 5)
ECC Support
> First GPU architecture to support
ECC (error checking and correction)
> Detects and corrects errors before
system is affected
> Protects register files, shared
memories, L1 and L2 cache and
DRAM
DRAMI/FDRAMI/FHOSTI/FHOSTI/FGigaThreadDRAMI/FDRAMI/F
DRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/F
L2L2
The Soul of a Supercomputer in the body of a GPU
43
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Tesla M2070 and Tesla M2070Q Board Configuration
44
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Emulex PCI-e HBA
Brocade PCIe HBAs
dx360 M3 I/O Support
High IOPS SS Class SSD PCIe
HBA
QLogic PCIe HBA
QLogic CNA
Brocade CNA
45
> High-performance PCIe Host Bus Adapter (HBAs)
 10Gb Ethernet, Fibre Channel, InfiniBand and GPU cards
> Up to 3 PCIe adapter slots per chassis
 Five differ riser cards available:
– 1U single-slot (front) PCIe slot
♦ Supports all configurations
– 2U two-slot (front PCIe slot
♦ Supports any two adapters with max. one
GPU/GPGPU adapter
– 2U three-slot (front PCIe slot
♦ Supports 2 GPU/GPGPU adapters and any other
PCIe card
 Third PCIe slot backside of riser card
♦ 2U I/O Rich configurations using PCIE tray
– 1 single-slot (rear) PCIe slot
♦ All 2U configurations
– 2U single-slot (rear) PCIe slot
♦ 3U configurations only
NVIDIA Tesla M2050/
M2070/M2070Q
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
> Flexibility to match power supply with workload
 550W High Efficiency Non-redundant Power Support
− Maximum Efficiency for lower power requirements
 More efficiency by running higher on the power
curve
 900W High Efficiency Power Supply available for non-
redundant higher power requirements
 750W N+N Redundant Power Supply
− Optional at the chassis level to tailor rack-level
applications that require redundant power in some or
all nodes
− Power envelope fits most applications, oversubscription
/ throttling for extremes
 Straightforward, cost-effective solution for
both node and line-feed redundancy
 Meets customer requirement for rack-level line feed
redundancy
 Meets requirement for node-level power protection for
storage-rich, VM & Enterprise environments
> Maintains maximum floor space density with
the iDataPlex rack
 New power supplies in same form factor as existing for
2U & 3U chassis power supply
iDataPlex 2U/3U Power Supply Option
900W HE
550W HE
750W N+N
AC 1
AC 2
PS 1 750W Max
PS 2 750W Max
C
A
B
750W Total in redundant mode
200-240V only
Redundant supply block diagram
46
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Fan unit comprised of four 80 mm
fans (partially removed) Redundant power
supply cable
iDataPlex Efficiency in Power and Cooling
Direct Dock Power connector
47
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Topic 4 - Course Agenda
> Introducing IBM iDataPlex
> Rack Unit
> iDataPlex Nodes Overview
> *iDataPlex Management *
48
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
The Total Systems Management Experience
Integrated Management Module (IMM)
• Standards-based hardware which combines
diagnostic and remote control
UEFI—next generation BIOS
• Richer management experience and future-ready
Hardware and
firmware advances
which are standard
across all new
systems
ToolsCenter
• Consolidated, integrated suite of management
tools
• Powerful bootable media creator
Redesigned system
tool portfolio for
single-system
management and
scripting
IBM Systems Director
• Platform management that is easy and efficient
• Management of physical and virtual resources
across heterogeneous systems
IBM Systems platform
solution for System x,
BladeCenter, Power
Systems, System z and
storage
IBM Tivoli
Upward integration into
Tivoli Service Management
End-to-end stack to deliver future-proof systems management today
*This slide contains animations that will launch with slide. 49
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
50
iDataPlex/Intelligent Cluster Partner Ecosystem
Technology Collaboration PartnersSolution Collaboration Partners
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
An Affordable Fabric - Reliable Switches
Trusted Industry Switch Suppliers
>1GbE, 10GbE and InfiniBand
>Up to 384 Ethernet connections
Affordable Price-Performance
>Highest throughput with lowest latency
>Easy provisioning
>Repurpose connections
Scalable & Interoperable
>Interconnect with existing network
>Manage server changes automatically
>Interoperable with CISCO Management
*This slide contains animations that will launch with slide.
51
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
"Despicable Me represents a breakthrough in the emerging model of collaborative,
geographically distributed digital movie making. Thanks to the capacity of IBM's rendering
technology and the skills of our artists, we were able to bring our creative vision to life
through the completion of a wonderfully entertaining film.”
Chris Meledandri, founder of Illumination Entertainment
IBM System x iDataPlexIBM System x iDataPlex
 Supported a 330 person global team
 Created 142 terabytes of data
 6500 processor cores
 Cut data center floor space in half
 Reduced energy consumption by 40%
The Real Star of ‘Despicable Me’ – System x iDataPlex
52
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
43 iDataPlex Systems43 iDataPlex Systems listed with Supercomputing
Leadership in TOP 500TOP 500
November 2010Semiannual independent ranking of the top 500 supercomputers in the world. www.top500.org
53
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
2009 - 2010 iDataPlex Awards
IBM’s LargestIBM’s Largest
Green Data CenterGreen Data Center
“Top Server
Product of 2009”
SearchDataCenter.com
"IBM is building what the customers
are telling them to build."
IBM TOP 5 Technologies for Corporate
Environmental Innovation Program
16 of top 50 in Green 500
43 Places on TOP500 list
iDataPlex dx360 M3
2010 Readers' Choice:
“Best HPC Server Product or Technology”
Awarded
at Super
Computing
2010
2010 Silicon Valley Leadership Group (SVLG) Chill Off 2 Competition
Results proved the IBM Rear Door Heat Exchanger had the “best energy efficiency.”
Solution also considered “Lowest Facilities Energy Consumption” and “Most Economical.”
SVLG Chill Off competitors included: APC, LCP+, Knurr, Liebert, Sun and DirectTouch
54
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Topic 4 - Course Summary
Having completed this topic you should be able to:
> List three emerging technologies for an iDataPlex solution
> List three goals that iDataPlex addressed
> Identify elements of the iDataPlex rack design
> Match the server offering to its characteristics
55
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Glossary of Acronyms
Active Energy Manager
BIOS (Basic Input/Output System)
Compute
Compute Unified Device Architecture
(CUDA)
Dynamic System Analysis (DSA)
Extreme Cluster Administration Toolkit
(xCAT)
Flex Node Technology
Graphical Processing Unit (GPU)
InfiniBand
Integrated Management Module (IMM)
Nehalem EP (Efficient Performance)
Predictive Failure Analysis (PFA)
QPI (Quick Path Interconnect)
Reliability, Availability, and Serviceability
(RAS)
Teraflops
Tesla M1060 and M2050
Thread Processing Array (TPA)
Thread Processing Cluster (TPC)
Thread Processing (TP)
UEFI (Unified Extensible Firmware
Interface)
56
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
Additional Resources
IBM STG SMART Zone for more education:
> Internal: http://lt.be.ibm.com/smartzone/modulartechnical
> BP: http://www.ibm.com/services/weblectures/dlv/partnerworld
IBM System x iDataPlex home page
> http://www-03.ibm.com/systems/x/hardware/idataplex/
Rear Door Heat eXchanger Installation and Maintenance Guide
> http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5075220
IBM System x iDataPlex dx360 User's Guide
> http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5077374
IBM System x iDataPlex dx360 Problem Determination and Service Guide
> http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5077375
IBM PDU+ Installation and Maintenance Guide
> http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5073026
IBM ServerProven
> http://www.ibm.com/servers/eserver/serverproven/compat/us/nos/ematrix.shtml 57
IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM
Corporation
End of Presentation
58

Más contenido relacionado

La actualidad más candente

Flex system client_presentation
Flex system client_presentationFlex system client_presentation
Flex system client_presentationNatalija Pavic
 
Future of Power: PureFlex and IBM i - Erik Rex
Future of Power: PureFlex and IBM i - Erik RexFuture of Power: PureFlex and IBM i - Erik Rex
Future of Power: PureFlex and IBM i - Erik RexIBM Danmark
 
Ibm pure systems sales bootcamp
Ibm pure systems sales bootcampIbm pure systems sales bootcamp
Ibm pure systems sales bootcampsolarisyougood
 
IBM i 7.1 & TRs CEC 2012
IBM i 7.1 & TRs CEC 2012IBM i 7.1 & TRs CEC 2012
IBM i 7.1 & TRs CEC 2012COMMON Europe
 
IBM Power Event, Keynote Presentation Doug Davis
IBM Power Event, Keynote Presentation Doug DavisIBM Power Event, Keynote Presentation Doug Davis
IBM Power Event, Keynote Presentation Doug DavisIBM Danmark
 
November flex and pure flex announcements.ppt&token=mtm1mjkynzewmze4mw==&loca...
November flex and pure flex announcements.ppt&token=mtm1mjkynzewmze4mw==&loca...November flex and pure flex announcements.ppt&token=mtm1mjkynzewmze4mw==&loca...
November flex and pure flex announcements.ppt&token=mtm1mjkynzewmze4mw==&loca...Simon Womack
 
The benefits of IBM FlashSystems
The benefits of IBM FlashSystemsThe benefits of IBM FlashSystems
The benefits of IBM FlashSystemsLuca Comparini
 
Xtw01t5v011311 disk storage
Xtw01t5v011311 disk storageXtw01t5v011311 disk storage
Xtw01t5v011311 disk storagepgnguyen44
 
Aix The Future of UNIX
Aix The Future of UNIX Aix The Future of UNIX
Aix The Future of UNIX xKinAnx
 
Xiv cloud machine_webinar_090414
Xiv cloud machine_webinar_090414Xiv cloud machine_webinar_090414
Xiv cloud machine_webinar_090414Jinesh Shah
 
IBM BladeCenter Fundamentals Introduction
IBM BladeCenter Fundamentals Introduction IBM BladeCenter Fundamentals Introduction
IBM BladeCenter Fundamentals Introduction Dsunte Wilson
 
Big data and ibm flashsystems
Big data and ibm flashsystemsBig data and ibm flashsystems
Big data and ibm flashsystemssolarisyougood
 
Flash Ahead: IBM Flash System Selling Point
Flash Ahead: IBM Flash System Selling PointFlash Ahead: IBM Flash System Selling Point
Flash Ahead: IBM Flash System Selling PointCTI Group
 
IBM flash systems
IBM flash systems IBM flash systems
IBM flash systems Solv AS
 
FlashSystems 2016 update
FlashSystems 2016 updateFlashSystems 2016 update
FlashSystems 2016 updateJoe Krotz
 

La actualidad más candente (20)

Flex system client_presentation
Flex system client_presentationFlex system client_presentation
Flex system client_presentation
 
Future of Power: PureFlex and IBM i - Erik Rex
Future of Power: PureFlex and IBM i - Erik RexFuture of Power: PureFlex and IBM i - Erik Rex
Future of Power: PureFlex and IBM i - Erik Rex
 
Ibm pure systems sales bootcamp
Ibm pure systems sales bootcampIbm pure systems sales bootcamp
Ibm pure systems sales bootcamp
 
IBM i 7.1 & TRs CEC 2012
IBM i 7.1 & TRs CEC 2012IBM i 7.1 & TRs CEC 2012
IBM i 7.1 & TRs CEC 2012
 
IBM Power Event, Keynote Presentation Doug Davis
IBM Power Event, Keynote Presentation Doug DavisIBM Power Event, Keynote Presentation Doug Davis
IBM Power Event, Keynote Presentation Doug Davis
 
November flex and pure flex announcements.ppt&token=mtm1mjkynzewmze4mw==&loca...
November flex and pure flex announcements.ppt&token=mtm1mjkynzewmze4mw==&loca...November flex and pure flex announcements.ppt&token=mtm1mjkynzewmze4mw==&loca...
November flex and pure flex announcements.ppt&token=mtm1mjkynzewmze4mw==&loca...
 
IBM I and blade center update 2009
IBM I and blade center update 2009IBM I and blade center update 2009
IBM I and blade center update 2009
 
The benefits of IBM FlashSystems
The benefits of IBM FlashSystemsThe benefits of IBM FlashSystems
The benefits of IBM FlashSystems
 
Xtw01t5v011311 disk storage
Xtw01t5v011311 disk storageXtw01t5v011311 disk storage
Xtw01t5v011311 disk storage
 
Aix The Future of UNIX
Aix The Future of UNIX Aix The Future of UNIX
Aix The Future of UNIX
 
Xiv cloud machine_webinar_090414
Xiv cloud machine_webinar_090414Xiv cloud machine_webinar_090414
Xiv cloud machine_webinar_090414
 
IBM BladeCenter Fundamentals Introduction
IBM BladeCenter Fundamentals Introduction IBM BladeCenter Fundamentals Introduction
IBM BladeCenter Fundamentals Introduction
 
Big data and ibm flashsystems
Big data and ibm flashsystemsBig data and ibm flashsystems
Big data and ibm flashsystems
 
Flash Ahead: IBM Flash System Selling Point
Flash Ahead: IBM Flash System Selling PointFlash Ahead: IBM Flash System Selling Point
Flash Ahead: IBM Flash System Selling Point
 
Iasp Enablement
Iasp EnablementIasp Enablement
Iasp Enablement
 
IBM PureSystems
IBM PureSystemsIBM PureSystems
IBM PureSystems
 
Presentation Why I Final 7 15 09
Presentation Why I Final 7 15 09Presentation Why I Final 7 15 09
Presentation Why I Final 7 15 09
 
IBM PureFlex System
IBM PureFlex SystemIBM PureFlex System
IBM PureFlex System
 
IBM flash systems
IBM flash systems IBM flash systems
IBM flash systems
 
FlashSystems 2016 update
FlashSystems 2016 updateFlashSystems 2016 update
FlashSystems 2016 update
 

Similar a Xtw01t4v011311 i dataplex

NeXtScale HPC seminar
NeXtScale HPC seminarNeXtScale HPC seminar
NeXtScale HPC seminarIBM Danmark
 
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex systemIbm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex systemIBM Switzerland
 
Dell Technologies - The Complete ISG Hardware Portfolio
Dell Technologies - The Complete ISG Hardware PortfolioDell Technologies - The Complete ISG Hardware Portfolio
Dell Technologies - The Complete ISG Hardware PortfolioDell Technologies
 
MT47 Modernize infrastructure for a modern data center
MT47 Modernize infrastructure for a modern data centerMT47 Modernize infrastructure for a modern data center
MT47 Modernize infrastructure for a modern data centerDell EMC World
 
22by7 and DellEMC Tech Day July 20 2017 - Power Edge
22by7 and DellEMC Tech Day July 20 2017 - Power Edge22by7 and DellEMC Tech Day July 20 2017 - Power Edge
22by7 and DellEMC Tech Day July 20 2017 - Power EdgeSashikris
 
Presentation blade center 101
Presentation   blade center 101Presentation   blade center 101
Presentation blade center 101xKinAnx
 
Presentation ibm system x values proposition with vm ware
Presentation   ibm system x values proposition with vm warePresentation   ibm system x values proposition with vm ware
Presentation ibm system x values proposition with vm waresolarisyourep
 
Fujitsu World Tour 2017 - Compute Platform For The Digital World
Fujitsu World Tour 2017 - Compute Platform For The Digital WorldFujitsu World Tour 2017 - Compute Platform For The Digital World
Fujitsu World Tour 2017 - Compute Platform For The Digital WorldFujitsu India
 
Jan 2011 Presentation
Jan 2011 PresentationJan 2011 Presentation
Jan 2011 PresentationRamanDua
 
N6200 Release Deck
N6200 Release DeckN6200 Release Deck
N6200 Release Deckrichswain
 
Webinář: Dell VRTX - datacentrum vše-v-jednom za skvělou cenu / 7.10.2013
Webinář: Dell VRTX - datacentrum vše-v-jednom za skvělou cenu / 7.10.2013Webinář: Dell VRTX - datacentrum vše-v-jednom za skvělou cenu / 7.10.2013
Webinář: Dell VRTX - datacentrum vše-v-jednom za skvělou cenu / 7.10.2013Jaroslav Prodelal
 
Webinář: Provozujte datacentrum v kanceláři (Dell VRTX) / 5.9.2013
Webinář: Provozujte datacentrum v kanceláři (Dell VRTX) / 5.9.2013Webinář: Provozujte datacentrum v kanceláři (Dell VRTX) / 5.9.2013
Webinář: Provozujte datacentrum v kanceláři (Dell VRTX) / 5.9.2013Jaroslav Prodelal
 
Ibm power systems facts and features power 8
Ibm power systems facts and features  power 8 Ibm power systems facts and features  power 8
Ibm power systems facts and features power 8 Diego Alberto Tamayo
 
IBM Special Announcement session Intel #IDF2013 September 10, 2013
IBM Special Announcement session Intel #IDF2013 September 10, 2013IBM Special Announcement session Intel #IDF2013 September 10, 2013
IBM Special Announcement session Intel #IDF2013 September 10, 2013Cliff Kinard
 

Similar a Xtw01t4v011311 i dataplex (20)

NeXtScale HPC seminar
NeXtScale HPC seminarNeXtScale HPC seminar
NeXtScale HPC seminar
 
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex systemIbm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
 
Dell Technologies - The Complete ISG Hardware Portfolio
Dell Technologies - The Complete ISG Hardware PortfolioDell Technologies - The Complete ISG Hardware Portfolio
Dell Technologies - The Complete ISG Hardware Portfolio
 
MT47 Modernize infrastructure for a modern data center
MT47 Modernize infrastructure for a modern data centerMT47 Modernize infrastructure for a modern data center
MT47 Modernize infrastructure for a modern data center
 
IBM System x iDataPlex dx360 M4
IBM System x iDataPlex dx360 M4IBM System x iDataPlex dx360 M4
IBM System x iDataPlex dx360 M4
 
22by7 and DellEMC Tech Day July 20 2017 - Power Edge
22by7 and DellEMC Tech Day July 20 2017 - Power Edge22by7 and DellEMC Tech Day July 20 2017 - Power Edge
22by7 and DellEMC Tech Day July 20 2017 - Power Edge
 
Presentation blade center 101
Presentation   blade center 101Presentation   blade center 101
Presentation blade center 101
 
Presentation ibm system x values proposition with vm ware
Presentation   ibm system x values proposition with vm warePresentation   ibm system x values proposition with vm ware
Presentation ibm system x values proposition with vm ware
 
Xiv overview
Xiv overviewXiv overview
Xiv overview
 
DS8800 Client Presentation
DS8800 Client PresentationDS8800 Client Presentation
DS8800 Client Presentation
 
Fujitsu World Tour 2017 - Compute Platform For The Digital World
Fujitsu World Tour 2017 - Compute Platform For The Digital WorldFujitsu World Tour 2017 - Compute Platform For The Digital World
Fujitsu World Tour 2017 - Compute Platform For The Digital World
 
IBM NeXtScale nx360 M4
IBM NeXtScale nx360 M4IBM NeXtScale nx360 M4
IBM NeXtScale nx360 M4
 
Jan 2011 Presentation
Jan 2011 PresentationJan 2011 Presentation
Jan 2011 Presentation
 
Vortrag nimz-rpg-021110
Vortrag nimz-rpg-021110Vortrag nimz-rpg-021110
Vortrag nimz-rpg-021110
 
N6200 Release Deck
N6200 Release DeckN6200 Release Deck
N6200 Release Deck
 
Webinář: Dell VRTX - datacentrum vše-v-jednom za skvělou cenu / 7.10.2013
Webinář: Dell VRTX - datacentrum vše-v-jednom za skvělou cenu / 7.10.2013Webinář: Dell VRTX - datacentrum vše-v-jednom za skvělou cenu / 7.10.2013
Webinář: Dell VRTX - datacentrum vše-v-jednom za skvělou cenu / 7.10.2013
 
Webinář: Provozujte datacentrum v kanceláři (Dell VRTX) / 5.9.2013
Webinář: Provozujte datacentrum v kanceláři (Dell VRTX) / 5.9.2013Webinář: Provozujte datacentrum v kanceláři (Dell VRTX) / 5.9.2013
Webinář: Provozujte datacentrum v kanceláři (Dell VRTX) / 5.9.2013
 
Ibm power systems facts and features power 8
Ibm power systems facts and features  power 8 Ibm power systems facts and features  power 8
Ibm power systems facts and features power 8
 
IBM Special Announcement session Intel #IDF2013 September 10, 2013
IBM Special Announcement session Intel #IDF2013 September 10, 2013IBM Special Announcement session Intel #IDF2013 September 10, 2013
IBM Special Announcement session Intel #IDF2013 September 10, 2013
 
IBM System x iDataPlex dx360 M3 Product Guide
IBM System x iDataPlex dx360 M3 Product GuideIBM System x iDataPlex dx360 M3 Product Guide
IBM System x iDataPlex dx360 M3 Product Guide
 

Último

2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...Martijn de Jong
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxKatpro Technologies
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUK Journal
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Servicegiselly40
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 

Último (20)

2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 

Xtw01t4v011311 i dataplex

  • 1. © 2006 IBM Corporation This presentation is intended for the education of IBM and Business Partner sales personnel. It should not be distributed to customers. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation IBM System x iDataPlex™: Internet Scale Computing XTW01 Topic 4
  • 2. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Topic 4 - Course Overview The objectives of this course of study are: > List three emerging technologies for an iDataPlex solution > List three business goals that iDataPlex addressed > Identify elements of the iDataPlex rack design > Match the server offering to its characteristics 2
  • 3. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Topic 4 - Course Agenda > *Introducing IBM iDataPlex* > Rack Unit > iDataPlex Nodes Overview > iDataPlex Management 3
  • 4. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation An IBM Portfolio that Covers the Spectrum of Business Needs BladeCenter Enterprise eX5 iDataPlex Enterprise Racks & Towers Scale, Power, Density, Optimizatio n Consolidation, Virtualization Infrastructure Simplification, Application Serving Scale Out ScaleUp *This slide contains animations that will launch with slide. 4
  • 5. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Introducing IBM System x iDataPlex > High volume, low cost computing for HPC and Cloud > Innovative, flexible design for Internet- scale computing > Up to 5X compute density for efficient space utilization > Front-accessible and intelligent components simplify deployment, serviceability and manageability for Internet-scale computing > Reduce cooling costs dramatically; minimize or even eliminate data center air conditioning expense 5
  • 6. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Tough Challenges Require Real Innovation Decrease Server Power Consumption by 50% Eliminate Data Center Air Conditioning Increase Compute Density by 10x “More than 70% of the world’s global 1000 organizations will have to modify their data center facilities significantly during the next five years.” -- Gartner, September 2007 Appropriate density for a given power envelope Scale data center computing with no new construction Attack largest operational expense items Cap carbon footprint for data centers Rising energy costs & rising energy demand Power & thermal issues inhibit operations 6
  • 7. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Fast, Cool, Dense, Flexible. TCO without compromise! Why iDataPlex? iDataPlex is: > Optimized both mechanically as a half-depth server solution and component-wise for maximum power and cooling efficiency > Designed to maximize utilization of Data Center floor space, power and cooling infrastructure with Industry- standards based server platform. > Easy-to-maintain solution with individually serviceable servers, front access hard drives/cabling, > Configurable for customer-specific compute, storage, or I/O needs and delivered pre-configured for rapid deployment > Common tools across the System x portfolio for management at the node, rack or Data Center level 7
  • 8. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Topic 4 - Course Agenda > Introducing IBM iDataPlex > *The Rack Unit* > iDataPlex Nodes Overview > iDataPlex Management 8
  • 9. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Increases Density for Space Savings Optional Rear Door Heat eXchanger iDataPlexTypical Enterprise Rack Full-fan air depth Half-fan air depth 42U Enterprise Rack 1 42U Enterprise Rack 2 > Optimized rack design more than doubles server density per rack > Air flow efficiency equals power savings of up to 20% and 30-40% lower airflow impedance > IBM Rear Door Heat eXchanger provides the ultimate in cooling savings Top-down view 9
  • 10. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation iDataPlex Rack -vs.- Standard 19” Rack Top View Top View Standard 19” Rack iDataPlex Rack 600mm (840mmw/RDHx) 1200mm 446x400mm 446x400mm RDHX iDataPlex Rack design:  Rack is rotated 90° – Half-depth, front-access servers.  Low airflow impedance.  Servers located side-by-side.  Doubles server density in similar footprint. – Greater cross-section for RDHX.  102U Rack – 84U for Nodes, etc. – 16U for switches, etc.(vertical) – 2U Rack Management Appliance (horizontal) slots  Space-saving footprint – 1200mm wide x 600mm deep. 444x750mm 640mm 1050mm 10
  • 11. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation iDataPlex – Designed for Data Center Flexibility PDUs (rear) 3U Chassis 2U Chassis iDataPlex Rear Door Heat ExchangerServer Tray Storage Drives & Network Options Dual IOTray Storage Tray Broad portfolio of customizable components that adjust to your computing needs Switches (front) *This slide contains animations that will launch with slide. 11
  • 12. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Rear Door Heat eXchanger Swings to provide access to rear PDUs Lock handle to close/ open door Industry Standard hose fittings No Leaks Sealed Internal coils Perforated Door for clear airflow IBM patent pending hex airflow design  Liquid cooling at the rack is 75%-95% more efficient than air cooling by a CRAC  Can eliminate rack heat exhaust  No electrical or moving parts  No condensation  Chilled water or evaporative liquid 12
  • 13. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation A New Way to Cool the Data Center Provides Up to 100% Heat Extraction, and can even cool the room! IR Thermal Camera View of RDHx with: top photo: water flow off bottom photo: water flow on iDataPlex rack can run at room temperature (no air conditioning required) when used with the optional Rear Door Heat eXchanger. These photos were taken in the Thermal Lab at IBM, showing the iDataPlex rack without (top image) and with (bottom image) the Rear Door Heat eXchanger closed and operational. 13
  • 14. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Easily Serviceable Integrated x86 Packaged Design > All-front access eliminates accessing rear of rack > Swappable server trays in chassis > Chassis docks directly into power > Chassis guides keep upper servers in place > Rack-side pockets for cables provide highly efficient cable routing Shared dual domain power supply with hot dock ports so planars dock into power connector Tool-less server and chassis changes done easily with innovative server latch locks High Efficiency shared 80mm fans provide superior cooling at a low wattage draw and low noise levels Flex Node Technology Innovative design so planar trays are independent & swappable for multiple configurations Asset Tags - Confirm Asset I/Ds Chassis Guides & Rail kits - Individual Chassis Rails for reliable fit & safe servicing 2U Flex Chassis Designed for Data Center Serviceability 14
  • 15. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation iDataPlex Rack Design 23.6 ” 47.2”” AIR FLOW AIR FLOW 19” 19” Top View Energy efficiency > Optimizes airflow for cooling efficiency with half-depth rack > Reduces pressure drop to improve chilled air efficiency Leadership density > Dual column / Half-depth rack > Standard two-floor tile rack footprint > Up to 168 physical nodes in 8 square feet Flexibility > Matches US & European data center floor tile standards > Compatible with standard forced air environments Ease of use > All service and cabling from the front 15
  • 16. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Topic 4 - Course Agenda > Introducing IBM iDataPlex > Rack Unit > *iDataPlex Nodes Overview* > iDataPlex Management 16
  • 17. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Current iDataPlex Server Offerings Processor: Quad Core Intel Xeon 5500 Quick Path Interconnect up to 6.4 GT/s Memory:16 DIMM DDR3 - 128 GB max Memory Speed: up to 1333 MHz PCIe: x16 electrical/ x16 mechanical Chipset: Tylersburg-36D iDataPlex dx360 M2 High-performance Dual-Socket Storage: 12 3.5” HDD up to 24 TB per node / 672TB per rack Proc: 6 or 4 Core Intel Xeon 5600 Memory: 16 DIMM / 128 GB max Chipset: Westmere iDataPlex 3U Storage Rich File Intense Dual-Socket Processor: 6 & 4 Core Intel Xeon 5600 Quick Path Interconnect up to 6.4 GT/s Memory:16 DIMM DDR3 - 128 GB max Memory Speed: up to 1333 MHz PCIe: x16 electrical/ x16 mechanical Chipset: Westmere 12 MB cache iDataPlex dx360 M3 High-performance Dual-Socket Processor: 6 & 4 Core Intel Xeon 5600 2 NVIDIA M1060, M2050, M2070, M2070Q Quick Path Interconnect up to 6.4 GT/s Memory:16 DIMM DDR3 - 192 GB max Memory Speed: up to 1333 MHz PCIe: x16 electrical/ x16 mechanical Chipset: Westmere 12 MB cache iDataPlex dx360 M3 Refresh Exa-scale Hybrid CPU + GPU 17
  • 18. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Key Features Benefits 42 GPU servers in a standard rack footprint • 10X Increased compute performance saving 65% less acquisition costs for 4X the density • 3.7X increase in Flops/Watt offers an power and cooling efficiency of 72% lower power consumption Support for up to two NVIDIA GPUs • Expanded I/O capabilities fits 49 Teraflops of performance in a rack with 61% more density than outboard solutions • Flexibility to configure I/O intense configurations with networking and storage along with compute CPU intense service without compromising density Storage Performance Options • Higher capacity/higher performance SAS controller Memory capacity up to 192 GB per server • Expanded memory options with 16GB DDR-3 1333MHz memory Fast, Cool, Dense, Flexible. TCO without compromise! Accelerate performance without compromising density through maximum flexibility. Target Workloads: Workloads: Risk Analytics, Seismic/Petro Exploration, Medical Imaging, Digital Content Creation, Online Gaming Industries: FSS (Financial, Insurance), Public (Life Science, Education, Government) Industrial (Petro, Auto, Comm (M&E, DCC) Customers: looking for value hardware to perform massive parallel computing in data center with space, power & cooling constraints How to Beat HP/Dell: 1.Better flexibility and density in power and cooling constrained data centers 2.Energy Efficiency – maximizes power & cooling efficiency for GPU servers 3.Ultimate space and OPEX savings with the Rear Door Heat Exchanger How to Beat HP/Dell: 1. Better flexibility and density in power and cooling constrained data centers 2. Energy Efficiency – maximizes power & cooling efficiency for GPU servers 3. Ultimate space and OPEX savings with the Rear Door Heat Exchanger iDataPlex dx360 M3 18
  • 19. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation 4- 2.5” SS SAS 6Gbps (or SATA, or 3.5”, or SSD…) InfiniBand DDR (or QDR, or 10GbE…) NVIDIA Tesla M2050 #1 (or NVIDIA Tesla M1060,or FX3800, or Fusion IO, or) NVIDIA Tesla M2050 #2 (or NVIDIA Quadro FX3800, or Fusion IO, or…) Server level value >Each server is individually serviceable >Each GPU is individually replaceable >6Gbps SAS drives and controller for maximum performance >Service and support for server and GPU from IBM dx360 M3 Server GPU I/O Tray - Front View 19
  • 20. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Why GPU’s > Applications run faster because it is using the high-performance of parallel cores on the GPU > Many codes and algorithms in HPC and other applications are parallel floating point math problems > GPU’s do general purpose scientific and engineering computing > GPGPU = General Purpose Graphics Processing Unit Visualization – Method of representing large amounts of complex data in ways that are easier to understand, analyze and support decision making. Acceleration – Augment or supplement the server CPU’s to achieve greater overall performance and/or efficiency by leveraging massive parallelism and the superior floating point capability and on card memory x86 Intel CPU + GPU work together in heterogeneous computing model –The sequential part of the application runs on the CPU –The computationally-intensive part runs on the GPU 20
  • 21. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Do More with Maximum Performance Density 49 Teraflops of Sustained performance 4X increased performance per rack 10X increased performance per node 65% Less acquisition costs 3.7X increase in Flops/Watt dx360 M3 Xeon X5670 2.93GHz / 6C / 95W 1008 Cores 672 Cores Xeon X5570 2.93GHz / 4C / 95W dx360 M2 38,136 Cores Xeon X5670 2.93GHz / 6C / 95W Fermi M2050 448C / 225W dx360 M3 Refresh 21
  • 22. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation 16x PCI-E Riser card slot Virtual Media Key Mini PCI-E SAS Card Slot JP1 Clear CMOS Hard drive carrier 16 max.(8x per) DDR3 DIMMs (2GB, 4GB, & 8GB) 1.5V and 1.35V DIMM Bank 1 DIMM Bank 2 CPU 1-Supports DIMM Bank 0 (Intel Westmere EP / Nehalem EP) CPU 0-Supports DIMM Bank 1 (Intel Westmere EP) SATA ports 6x JP2 Boot Block Enable Battery 8X RAID Rise Card slot Ethernet Ports & Management dx360 M3 1U Flex Node (MT 6391) Interior View 22
  • 23. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation System x iDataPlex dx360 M3 Tailored for Your Business Needs iDataPlex flexibility with better performance, efficiency and more options! 1U Drive Tray 1U Compute Node 3U Storage Chassis Maximize Storage Density 3U, 1 Node Slot & Triple Drive Tray HDD: 12 (3.5” Drives) up to 24TB I/O: PCIe for networking + PCIe for RAID Compute + Storage Balanced Storage and Processing 2U, 1 Node Slot & Drive Tray HDD: up to 5 (3.5”) Compute Intensive Maximum Processing 2U, 2 Compute Nodes 750W N+N Redundant Power Supply 900W Power Supply 1U GPU I/O Tray 550W Power Supply Acceleration Compute + I/O Maximum Component Flexibility 2U, 1 Node Slot I/O: up to 2 PCIe, HDD up to 8 (2.5”) 23
  • 24. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation dx360 M3 Front View Two Ethernet ports Power-connector button Power-on LED Hard disk drive activity LED Locator LED System-error LED Serial Video USB ports System- management Ethernet 24
  • 25. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Intel® 5520 Chipset PCI Express* 2.0 ICH 9/10 Intel® 82599 10GbE Controller Intel® Data Center Manager Intel® Node Manager Technology New Processor, Proven Platform Intel® Xeon® 5600 Intel® Xeon® 5600 > Intel® Xeon® 5600 Processor Series  32nm Technology with 2nd Generation High-k Process > Intel 5520 (Tylersburg) Chipset  IOH (Northbridge) + ICH10 (Southbridge)  Up to 6.4 GT/sec speeds  Dual x16 Gen2 or Quad x8 PCI Express 2.0 graphics card support > Integrated DDR3 three-channel memory controller  2nd CPU required to use all 16 DIMMs > Two Intel QuickPath interconnect (Intel QPI) links per components  High Speed Serial link between CPUs/chipset  Up to 25.6 GB/sec bandwidth per link > Single die four or six core > Three cache levels:  32 KB data / 32 KB instruction L1 cache per core  256 KB L2 memory cache per core  Shared 8MB L3 cache (12MB max.) dx360 M3 Processor Subsystem QPI Up to 25.6 GB/sec bandwidth per link Up to 16 slots DDR3 Memory 25
  • 26. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation dx360 M3 Refresh - Westmere-EP SKU Server / Workstation 2S X5670 2.93 GHz 6C X5660 2.80 GHz 6C X5650 2.66 GHz 6C E5640 2.66 GHz 4C E5630 2.53 GHz 4C E5620 2.40 GHz 4C L5640 2.26 GHz 6C L5630 2.13 GHz 4C 95W 95W 95W 80W 80W 80W 60W 40W StandardAdvancedBasic E5507 2.26 GHz 4C E5506 2.13 GHz 4C E5503 2.00 GHz 2C 80W 80W 80W •5.86 GT/s QPI •12MB L3 Cache •Turbo/HT enabled •DDR3-1066 •Turbo 4C: NA/NA/1/1/2/2 •Nehalem-EP remains for Value SKUs. • 4.8 GT/s QPI •4MB L3 Cache on 4C •DDR3-800 Freq Optimized X5667 3.06 GHz 4C 95W X5677 3.46 GHz 4C X5680 3.33 GHz 6C 130W 130W LV • up to 5.86 GT/s QPI and 12MB cache •60W Turbo 2/2/3/3/4/4 •40W Turbo NA/NA/1/1/2/2 •Turbo/HT disabled on 1.6 GHz •6.4 GT/s QPI •12MB L3 Cache •Turbo/HT enabled •DDR3 -1333 •130W Turbo 6C: 1/1/2/2/3/3 4C: NA/NA/1/1/2/2 •95W Turbo 6C: 2/2/3/3/4/4 4C: NA/NA/2/2/3/3 L5609 1.86 GHz 4C <=40W Support for 130 W not cost efficient for iDataPlex dx360 M3 26
  • 27. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Intel® Xeon® 5600 Intel® Xeon® 5600 95W 80W 60W 40W Better performance/Watt Lower power consumption Lower Power CPUs Up to 10% reduction in memory power† Intelligent Power Technology Integrated Power Gates and Automated Low Power States with Six Cores Lower Power DDR3 Memory Nehalem Micro-architecture + 32nm CPU + Enhanced Power Mgt = Greater Energy Efficiency Nehalem Micro-architecture + 32nm CPU + Enhanced Power Mgt = Greater Energy Efficiency CPU Power Management Optimized power consumption through more efficient Turbo Boost and memory power management Building on Xeon® 5500 Leadership Capabilities CTO Westmere-EP Energy Efficiency 27
  • 28. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation dx360 M3 Planar Building Block Diagram Tylersburg 36D SAS LSI 1064 Opt. RAID Mini-Slot PCI-E x4 Gen1 CSI 16 x DDR3 800/1066/1333MHz VRD11.1VRD11.1 CSI LGA 1366 6.4 GT/s CSI VRD11.1VRD11.1 Intel Xeon 5600 LGA 1366 6.4 GT/s 6.4 GT/sDDR3 DDR3 DDR3 8 DIMMs DDR3 DDR3 DDR3 8 DIMMs LPC PCI-E x16 slot PCI-E x16 Gen2 PCI-E x1 ICH10 6 SATA 3 USB CLINK ESI Video Maxim VSC45 2 COM RJ45RJ45 10/1003000MB/s 480Mb/s 2GB/s 8GB/s IMM PCI-E x8 Gen2 RAID PCI-E x8 slot 4GB/s PCI-E x4 Intel Zoar Intel Zoar RJ45RJ45 GbE RJ45RJ45 GbE Intel NIC Intel Xeon 5600 28
  • 29. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation dx360 M3 Processor Installation Intel Xeon Processor Align Tabs/notches > Intel Xeon Processors  Same clock rate, cache size/type, and identical in the core frequencies > LGA 1366 socket (Socket B)  Pads of bare gold-plated copper − (No pins on CPU)  Load plate with locking lever  Align notches to ensure proper installation Attention: Follow the instructions carefully to install the CPU. Do not use any tools or sharp objects to lift the locking lever on the CPU socket. Do not press CPU into the socket. Make sure that the CPU is oriented and aligned correctly in the socket before closing the CPU retainer. Attention: Follow the instructions carefully to install the CPU. Do not use any tools or sharp objects to lift the locking lever on the CPU socket. Do not press CPU into the socket. Make sure that the CPU is oriented and aligned correctly in the socket before closing the CPU retainer. 29
  • 30. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation dx360 M3 Heatsink Requirements Dust Cover Heat Sink Filler Heat Sink P/N # Heatsink ASM for CPU #1 and #2 46M5518 dx360 M3 Processor Heat sink Plus Attention: Do not set the heat sink once removed from the plastic cover. Do not touch thermal material on the bottom of the heat or CPU. Touching thermal material will contaminate it. Attention: Do not set the heat sink once removed from the plastic cover. Do not touch thermal material on the bottom of the heat or CPU. Touching thermal material will contaminate it. 30
  • 31. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation dx360 M3 Memory Subsystem > Supports registered DDR3 LP ECC memory  Active Memory features, including advanced Chipkill memory protection – 16x better error correction thank standard ECC memory Choice of standard 1.5v or 1.35v consumes 10% less energy Sixteen DIMM slots  DIMM sizes: – 2 GB, 4 GB, 8 GB or 16 GBRDIMM sizes:  Maximum capacity – 256 GB (max. x16GB DIMMs) 31
  • 32. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation DIMM 1 DIMM 2 DIMM 4 DIMM 5 DIMM 7 DIMM 8 Example of Memory Bank 1 and CPU 1 Ch 2 DIMMs Ch 1 DIMMs Ch 0 DIMMs DIMM 9 DIMM 6 DIMM 3 dx360 M3 Memory Installation CPU must be populated for access to it’s DIMMS Intel® Xeon® processor 32
  • 33. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation > Standard with onboard ICH10 SATA II controller  Support 5 drives (depending on the configuration) internal simple-swap (SS) SATA II drives, or 4 SS SSDs. Hot-swap SAS or SATA HDDs, or simple-swap SAS HDDs, require an optional adapter > ServeRAID-BR10il v2 controller  Support (no cache) for up 3Gbps7 (x4 PCIe) > RAID-0/1/1E to 4 HDDs or SSDs. > ServeRAID-M1015 SAS/SATA controller  6Gbps (x8 PCIe) RAID-0/1/10 (no cache) for up to 16 drives (limited by available bays) > ServeRAID M1000 Series Advance Feature Key adds RAID-5 with SED support > ServeRAID-M5014 SAS/SATA controller offers  6Gbps (x8 PCIe) > Enhanced performance with 256MB of cache memory  RAID-0/1/10/5/50 for up to 16 drives (limited by available bays). > ServeRAID-M5015 SAS/SATA controller  6Gbps (x8 PCIe) > enhanced performance with 512MB of cache memory and battery backup, and supports RAID-0/1/10/5/50 for up to 16 drives (limited by available bays). > ServeRAID M5000 Series Advance Feature Key adds RAID-6/60 with SED support to the M5014 and M5015 > ServeRAID M5000 Series Battery Key adds battery backup support to the M5014.  Support either SAS or SATA, hot-swap or simple-swap, 3.5-inch or 2.5-inch drives. Drives cannot be intermixed. dx360 M3 Disk Controller Simple-swap drive Hot-swap drive 33
  • 34. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation dx360 M3 Hard Drive Flexible Configurations 2U Chassis with 2 1U Server trays  2 3.5-inch SS SATA HDDs (1 per server)—using the onboard controller  2 3.5-inch SS SAS HDDs (1 per server)—requires ServeRAID-BR10il v2  4 2.5-inch SS SAS HDDs (2 per server)—requires ServeRAID-BR10il v2 or M1015  4 2.5-inch SS SATA HDDs (2 per server)—using the onboard controller, ServeRAID-BR10il v2  4 2.5-inch SS SATA SSDs (2 per server)—using the onboard controller, ServeRAID-BR10il v2, or IBM 6GB SSD HBA Card 2U Chassis with 1 Server and one 1U Storage tray  5 3.5-inch SS SATA HDDs (1 per server, 4 per tray)—5 using the onboard controller  4 3.5-inch SS SAS HDDs (0 per server, 4 per tray)—requires ServeRAID-BR10il v2, M1015 or M5015  4 3.5-inch SS SATA HDDs (4 per tray)—requires ServeRAID-BR10il v2, M1015 or M5015  8 2.5-inch SS SAS HDDs (0 per server, 8 per tray)—requires ServeRAID-M1015 or M5015  8 2.5-inch SS SATA HDDs (0 per server, 8 per tray)—requires ServeRAID-M1015 or M5015  8 2.5-inch SS SATA SSDs (0 per server, 8 per tray)—requires ServeRAID-M1015, M5015, or IBM 6GB SSD HBA Card 2U Chassis with 1 Server and one 1U I/O tray  2 3.5-inch SS SATA HDDs (1 per server, 1 per tray)—using the onboard controller or ServeRAID-BR10il v2  2 3.5-inch SS SAS HDDs (1 per server, 1 per tray)—requires ServeRAID-BR10il v2, or M1015  4 2.5-inch SS SAS/SATA HDDs (2 per server, 2 per tray)—requires ServeRAID-M1015  4 2.5-inch SS SSDs (2 per server, 2 per tray)—requires ServeRAID-M1015 or IBM 6GB SSD HBA Card 3U Chassis with 1 Server and one 2U Storage tray  12 3.5-inch HS SATA HDDs (0 per server, 12 per tray)—requires ServeRAID-M1015, or M5015  12 3.5-inch HS SAS HDDs (0 per server, 12 per tray)—requires ServeRAID-M1015, or M5015 34
  • 35. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation NVIDIA Graphic Adapters Options iDataPlex Server Platform NVIDIA Tesla M1060  1 GT200 GPU 4 GB Memory 190 Watt Double Wide / Dual Slot Passive Cooling FRU PN 43V5909 NVIDIA Quadro FX3800 1 GB Memory 107 Watt Single Wide / Single Slot FRU PN 43V5925 NVIDIA Tesla M2050 1 Fermi GPU 3 GB Memory 225 Watt Double Wide / Dual Slot Passive Cooling FRU PN 43V5894 NVIDIA Tesla M2070/M2070Q 1 Fermi GPU 6 GB Memory 225 Watt Double Wide / Dual Slot Passive Cooling Cuda, OpenCL, OpenGL FRU PN 43V5935 M2070 43V5943 M2070Q 35
  • 36. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Tesla T10: The Processor Inside Double Precision Special Function Unit (SFU) TP Array Shared Memory > 8 TP per TPA (240 Total) > Full scalar processor with integer and floating point units > 16K of RAM for Shared Memory Thread Processor (TP) FP Integer Multi-banked Register File SpcOps ALUs Thread Processor Array (TPA) 30 TPAs = 240 Processors Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP ArrayShared Memory Double Precision Special Function Unit (SFU) TP ArrayShared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP ArrayShared Memory Double Precision Special Function Unit (SFU) TP ArrayShared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP Array Shared Memory Double Precision Special Function Unit (SFU) TP ArrayShared Memory Double Precision Special Function Unit (SFU) TP ArrayShared Memory 36
  • 37. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation NVIDIA Tesla M2050 and M2070/M2070Q Tesla M2050 and Tesla M2070/M2070Q Computing Processor Module 9.75 inches 4.37 inches vented bracket 37
  • 38. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation 2 PCIe x16 (1 per side) PCIE x8 GPU 1GPU 1 GPU 2GPU 2 HBA dx360 M3 – New 3-slot Riser and I/O Tray 38
  • 39. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation New Innovations > 448 CUDA Cores > NVIDIA Parallel DataCache > NVIDIA GigaThread > ECC Support DRAMI/FDRAMI/FHOSTI/FHOSTI/FGigaThreadDRAMI/FDRAMI/F DRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/F L2L2 The Soul of a Supercomputer in the body of a GPU Tesla T20 Series Architecture Structure (1 of 5) 39
  • 40. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation 448 CUDA Cores > Optimized performance and accuracy with up to 8X faster double precision > Compliant with industry standards for floating point arithmetic > Versatile accelerators for a wide variety of applications > Valued by HPC clients running linear algebra and numerical simulation applications DRAMI/FDRAMI/FHOSTI/FHOSTI/FGigaThreadDRAMI/FDRAMI/F DRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/F L2L2 The Soul of a Supercomputer in the body of a GPU Tesla T20 Series Architecture Structure (2 of 5) 40
  • 41. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation NVIDIA Parallel DataCache > First GPU architecture to support a true cache hierarchy in combination with one-chip shared memory > Improves bandwidth and reduces latency through L1 cache’s configurable shared memory > Fast, coherent data sharing across the GPU through unified L2 cache > Clients running physics solvers, raid tracing or sparse matrix multiplication algorithms benefit greatly from this cache hierarchy DRAMI/FDRAMI/FHOSTI/FHOSTI/FGigaThreadDRAMI/FDRAMI/F DRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/F L2L2 The Soul of a Supercomputer in the body of a GPU Tesla T20 Series Architecture Structure (3 of 5) 41
  • 42. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Tesla T20 Series Architecture Structure (4 of 5) NVIDIA GigaThread > Increased efficiency with concurrent kernel execution > Dedicated, bi-directional data transfer engines > Intelligently manage tens of thousands of threads DRAMI/FDRAMI/FHOSTI/FHOSTI/FGigaThreadDRAMI/FDRAMI/F DRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/F L2L2 The Soul of a Supercomputer in the body of a GPU 42
  • 43. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Tesla T20 Series Architecture Structure (5 of 5) ECC Support > First GPU architecture to support ECC (error checking and correction) > Detects and corrects errors before system is affected > Protects register files, shared memories, L1 and L2 cache and DRAM DRAMI/FDRAMI/FHOSTI/FHOSTI/FGigaThreadDRAMI/FDRAMI/F DRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/FDRAMI/F L2L2 The Soul of a Supercomputer in the body of a GPU 43
  • 44. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Tesla M2070 and Tesla M2070Q Board Configuration 44
  • 45. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Emulex PCI-e HBA Brocade PCIe HBAs dx360 M3 I/O Support High IOPS SS Class SSD PCIe HBA QLogic PCIe HBA QLogic CNA Brocade CNA 45 > High-performance PCIe Host Bus Adapter (HBAs)  10Gb Ethernet, Fibre Channel, InfiniBand and GPU cards > Up to 3 PCIe adapter slots per chassis  Five differ riser cards available: – 1U single-slot (front) PCIe slot ♦ Supports all configurations – 2U two-slot (front PCIe slot ♦ Supports any two adapters with max. one GPU/GPGPU adapter – 2U three-slot (front PCIe slot ♦ Supports 2 GPU/GPGPU adapters and any other PCIe card  Third PCIe slot backside of riser card ♦ 2U I/O Rich configurations using PCIE tray – 1 single-slot (rear) PCIe slot ♦ All 2U configurations – 2U single-slot (rear) PCIe slot ♦ 3U configurations only NVIDIA Tesla M2050/ M2070/M2070Q
  • 46. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation > Flexibility to match power supply with workload  550W High Efficiency Non-redundant Power Support − Maximum Efficiency for lower power requirements  More efficiency by running higher on the power curve  900W High Efficiency Power Supply available for non- redundant higher power requirements  750W N+N Redundant Power Supply − Optional at the chassis level to tailor rack-level applications that require redundant power in some or all nodes − Power envelope fits most applications, oversubscription / throttling for extremes  Straightforward, cost-effective solution for both node and line-feed redundancy  Meets customer requirement for rack-level line feed redundancy  Meets requirement for node-level power protection for storage-rich, VM & Enterprise environments > Maintains maximum floor space density with the iDataPlex rack  New power supplies in same form factor as existing for 2U & 3U chassis power supply iDataPlex 2U/3U Power Supply Option 900W HE 550W HE 750W N+N AC 1 AC 2 PS 1 750W Max PS 2 750W Max C A B 750W Total in redundant mode 200-240V only Redundant supply block diagram 46
  • 47. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Fan unit comprised of four 80 mm fans (partially removed) Redundant power supply cable iDataPlex Efficiency in Power and Cooling Direct Dock Power connector 47
  • 48. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Topic 4 - Course Agenda > Introducing IBM iDataPlex > Rack Unit > iDataPlex Nodes Overview > *iDataPlex Management * 48
  • 49. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation The Total Systems Management Experience Integrated Management Module (IMM) • Standards-based hardware which combines diagnostic and remote control UEFI—next generation BIOS • Richer management experience and future-ready Hardware and firmware advances which are standard across all new systems ToolsCenter • Consolidated, integrated suite of management tools • Powerful bootable media creator Redesigned system tool portfolio for single-system management and scripting IBM Systems Director • Platform management that is easy and efficient • Management of physical and virtual resources across heterogeneous systems IBM Systems platform solution for System x, BladeCenter, Power Systems, System z and storage IBM Tivoli Upward integration into Tivoli Service Management End-to-end stack to deliver future-proof systems management today *This slide contains animations that will launch with slide. 49
  • 50. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation 50 iDataPlex/Intelligent Cluster Partner Ecosystem Technology Collaboration PartnersSolution Collaboration Partners
  • 51. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation An Affordable Fabric - Reliable Switches Trusted Industry Switch Suppliers >1GbE, 10GbE and InfiniBand >Up to 384 Ethernet connections Affordable Price-Performance >Highest throughput with lowest latency >Easy provisioning >Repurpose connections Scalable & Interoperable >Interconnect with existing network >Manage server changes automatically >Interoperable with CISCO Management *This slide contains animations that will launch with slide. 51
  • 52. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation "Despicable Me represents a breakthrough in the emerging model of collaborative, geographically distributed digital movie making. Thanks to the capacity of IBM's rendering technology and the skills of our artists, we were able to bring our creative vision to life through the completion of a wonderfully entertaining film.” Chris Meledandri, founder of Illumination Entertainment IBM System x iDataPlexIBM System x iDataPlex  Supported a 330 person global team  Created 142 terabytes of data  6500 processor cores  Cut data center floor space in half  Reduced energy consumption by 40% The Real Star of ‘Despicable Me’ – System x iDataPlex 52
  • 53. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation 43 iDataPlex Systems43 iDataPlex Systems listed with Supercomputing Leadership in TOP 500TOP 500 November 2010Semiannual independent ranking of the top 500 supercomputers in the world. www.top500.org 53
  • 54. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation 2009 - 2010 iDataPlex Awards IBM’s LargestIBM’s Largest Green Data CenterGreen Data Center “Top Server Product of 2009” SearchDataCenter.com "IBM is building what the customers are telling them to build." IBM TOP 5 Technologies for Corporate Environmental Innovation Program 16 of top 50 in Green 500 43 Places on TOP500 list iDataPlex dx360 M3 2010 Readers' Choice: “Best HPC Server Product or Technology” Awarded at Super Computing 2010 2010 Silicon Valley Leadership Group (SVLG) Chill Off 2 Competition Results proved the IBM Rear Door Heat Exchanger had the “best energy efficiency.” Solution also considered “Lowest Facilities Energy Consumption” and “Most Economical.” SVLG Chill Off competitors included: APC, LCP+, Knurr, Liebert, Sun and DirectTouch 54
  • 55. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Topic 4 - Course Summary Having completed this topic you should be able to: > List three emerging technologies for an iDataPlex solution > List three goals that iDataPlex addressed > Identify elements of the iDataPlex rack design > Match the server offering to its characteristics 55
  • 56. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Glossary of Acronyms Active Energy Manager BIOS (Basic Input/Output System) Compute Compute Unified Device Architecture (CUDA) Dynamic System Analysis (DSA) Extreme Cluster Administration Toolkit (xCAT) Flex Node Technology Graphical Processing Unit (GPU) InfiniBand Integrated Management Module (IMM) Nehalem EP (Efficient Performance) Predictive Failure Analysis (PFA) QPI (Quick Path Interconnect) Reliability, Availability, and Serviceability (RAS) Teraflops Tesla M1060 and M2050 Thread Processing Array (TPA) Thread Processing Cluster (TPC) Thread Processing (TP) UEFI (Unified Extensible Firmware Interface) 56
  • 57. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation Additional Resources IBM STG SMART Zone for more education: > Internal: http://lt.be.ibm.com/smartzone/modulartechnical > BP: http://www.ibm.com/services/weblectures/dlv/partnerworld IBM System x iDataPlex home page > http://www-03.ibm.com/systems/x/hardware/idataplex/ Rear Door Heat eXchanger Installation and Maintenance Guide > http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5075220 IBM System x iDataPlex dx360 User's Guide > http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5077374 IBM System x iDataPlex dx360 Problem Determination and Service Guide > http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5077375 IBM PDU+ Installation and Maintenance Guide > http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5073026 IBM ServerProven > http://www.ibm.com/servers/eserver/serverproven/compat/us/nos/ematrix.shtml 57
  • 58. IBM Systems & Technology Group Education & Sales Enablement © 2011 IBM Corporation End of Presentation 58

Notas del editor

  1. {DESCRIPTION} This screen displays a right-align front view image of the IBM blades and rack servers. {TRANSCRIPT} Welcome to IBM iDataPlex™ Internet Scale Computing. This is Topic 4 in a series of topics on System x Technical Principles.
  2. {DESCRIPTION} This screen lists the topic objectives. {TRANSCRIPT} The objectives of this course of study are: List three emerging technologies for an iDataPlex solution List three goals that iDataPlex addressed Identify elements of the iDataPlex rack design Match the server offering to its characteristics
  3. {DESCRIPTION} This screen lists the topic agenda and a 3-D image of a hand releasing a ball. {TRANSCRIPT} This topic introduces IBM System x™ iDataPlex™ innovative design solution for large-scale data centers, list advantages of iDataPlex optimized rack design, and data center power and cooling efficiencies, and identify iDataPlex flexible configuration in its 2U and 3U chassis.. Finally, we will discuss iDataPlex management features.
  4. {DESCRIPTION} This screen displays four circles and each circle is connected to an image of a server portfolio: (clockwise – Enterprise eX5, BladeCenter, Enterprise rack and towers and Enterprise eX5. {TRANSCRIPT} IBM System x™ iDataPlex™, is a flexible, massive scale-out data center server solution built on industry standard components for customers who are looking for compute density and energy efficiency. iDataPlex application positioning against the BladeCenter, rack, and high-end scale-up eX5 technology servers is an important aspect to consider. The iDataPlex is positioned for HPC, and Grid computing, whereas the high-end servers are excellent in server consolidation and virtualization solutions. BladeCenter offerings are well positioned for scale-out solutions in infrastructure simplification and application serving. iDataPlex is the right choice for customers that are: Facing power, cooling and density challenges Having software redundancy built into their application and are comfortable with lower hardware redundancy Focusing on lowering their capital expense and operating expense
  5. {DESCRIPTION} This screen displays a right-align front view image of the iDataPlex 100U rack. {TRANSCRIPT} The iDataPlex product was build on market needs, draws on all IBM’s capability and positions us as a leader. It is a word with three meanings: i for internet, Data is for DataCenter and Plex means multiple. IBM iDataPlex is a data center solution for high performance computing (HPC) cluster and corporate batch processing customers experiencing limitations of electrical power, cooling, physical space, or a combination of these. By providing a &quot;big picture&quot; approach to the design, iDataPlex uses innovative ways to integrate Intel-based processing at the node, rack, and data center levels to maximize power and cooling efficiencies while providing the compute density needed. A key component of the iDataPlex solution is its optimized rack design — which doubles server density per rack. It is built with industry-standard components to create flexible configurations of servers, chassis, and networking switches that integrate easily. This allows customers to configure customized solutions for applications to meet their specific business needs for computing power, storage intensity, and the right I/O and networking. It also provides an ease to service management and quick access without having to remove chassis and other components.
  6. {DESCRIPTION} This screen displays left-align view image of a group of IBM tower servers, a thermostat with a hand turning the dial, and a energy efficient bulb. {TRANSCRIPT} In order to meet the demands and to build upon customers’ requirements and their source: Expense frameworks, power &amp; cooling issues, with flexibility and the ability to grow and deploy, there are three important areas to focus: Increase compute density by 10x, Eliminate Data Center Air conditioning, and Decrease server power consumption by 50%.
  7. {DESCRIPTION} This screen displays a right-align front view image of the iDataPlex rack. {TRANSCRIPT} IBM continue to lead the industry in x86 innovation by contributing great investments into IBM iDataPlex to solve the needs of large-scale datacenters. IBM iDataPlex racks and nodes are designed specifically to address data center space and power-constraint challenges— using up to 40% less power than similarly configured standard 1U servers with an innovative half-depth design that provides better power and cooling efficiency. Customers can go green and save with iDataPlex efficient and cost-effective design. It’s design maximizes the amount of computing that can be deployed in the data center within limited floor space, power and cooling envelopes. iDataPlex supports the full range of iDataPlex configurations, the servers are easy to maintain, with individually serviceable servers and front access to all hard drives and cabling. The flexible design allows the chassis and racks to be configured to meet specific customer requirements, whether maximum compute density, more storage or I/O density, or a combination to create the specific rack-level computing environment the client needs. The iDataPlex rack is delivered as a pre-integrated solution, so the servers can be deployed and get to work quickly. Finally, iDataPlex servers have common firmware and management with the System x portfolio, providing robust and consistent management across the data center. In environments where higher rack power levels have caused customers to relocate their servers in an effort to maintain cooling, resulting in using up valuable and expensive raised floor space. IBM anticipated this trend and developed the IBM Rear Door Heat eXchanger for IBM Enterprise Racks. The Rear Door Heat eXchanger&apos;s liquid cooling design removes the heat generated from a fully populated rack and then releases cooler air which exits the rear of the rack. This simple, cost effective, easily installable solution can save on valuable floor space, reduce the heat load to the data center environment and eliminate hot spots within the data center.
  8. {DESCRIPTION} This screen lists the topic agenda and a 3-D image of a hand releasing a ball. {TRANSCRIPT} Our next topic is the iDataPlex optimized rack design, and data center power and cooling efficiencies.
  9. {DESCRIPTION} This screen contains diagrams that illustrates the air depth in a traditional enterprise rack environment versus an iDataPlex rack environment. {TRANSCRIPT} In today’s fast-paced IT environment, overcrowded data centers are becoming more and more common—which means IT managers are simply running out of room to expand. An iDataPlex solution can help with these problems with its unique rack design optimized to save floor space. The innovative rack architecture more than doubles the server density over standard 1U racks, so you can pack more processing power into a highly efficient, compact system without adding more floor space to your data center. Important points to consider: iDataPlex optimized rack design, doubles server density per rack, maximize number of servers in data center because airflow and cooling issues are solved, and save with great floor space utilization Air flow efficiency equals fan power savings with its shallow depth rack reduces the amount of air needed to cool by half, and cuts cooling costs 20% compared to equivalent compute power in an enterprise rack And with the Rear Door Heat eXchanger provides the ultimate in cooling savings — v irtually eliminates heat exhaust from the rack
  10. {DESCRIPTION} This screen displays a front view image of the iDataPlex 100U rack and a traditional 42-rack {TRANSCRIPT} As mentioned earlier, a typical iDataPlex solution consists of multiple fully populated rack installations. The groundbreaking iDataPlex solution offers increased density in its rack cabinet design. In that sense, the iDataPlex rack is essentially two 42 units racks connected together and provides additional vertical bays. It uses the dimensions of a standard 42 unit enterprise rack but can hold 102 units of equipment, populated with up to 84 servers, plus 16 1U vertical slots for switches, appliances, and power distribution units (PDUs). It also contains 2 1U horizontal slots at the bottom for the iDataPlex Rack management or other low-power/infrequent-access devices. This added density addresses the major problems that prevent most data centers today from reaching their full capacity: insufficient electrical power and excess heat. iDataPlex’s efficiency results in more density within the same infrastructure, even in a standard rack, allowing you to g et “more on the floor” with iDataPlex. The iDataPlex rack is shallower in depth compared to a standard 42U server rack as shown in diagram in the lower left. The iDataPlex rack is 600 mm deep (840 mm with the Rear Door Heat eXchanger) compared to the 42U rack which is 1050 mm deep. The shallow depth of the rack and the iDataPlex nodes is part of the reason that the cooling efficiency of iDataPlex is higher than the traditional rack design, because air travels a much shorter distance to cool the internals of the server compared to airflow in a traditional rack. The increased air pressure resulting from the shorter distance through the rack and the 2U and 3U chassis four larger fans makes for one of the most efficient air-cooled solutions on the market. This allows racks to be positioned much closer together, actually eliminating the need for “hot aisles” between rows of fully populated racks. And, all this is before adding the effects of including the innovative and incredibly effective Rear Door Heat eXchanger.
  11. {DESCRIPTION} This screen displays a front view image of the iDataPlex 100U rack and a traditional 42-rack {TRANSCRIPT} The iDataPlex solution is made up of many racks of servers that have been custom designed and built to meet the customer&apos;s needs for maximum compute power, hybrid CPU and GPU acceleration, storage intensity, and the right I/O and networking. This includes compute nodes, local storage, networking, power distribution, cooling, and management. Each customized solution is integrated and tested by IBM during the manufacturing process. When the iDataPlex is delivered, it is ready to plug in the power feed and network connection, and deploy the software to each node. This means that iDataPlex provides a custom-designed and factory-integrated solution that provides easy deployment and simplified management. The flexible design of iDataPlex provides cost-efficient servers in configurations to meet many needs. Each node design has a common power supply and fan assembly for all models, to minimize costs and maximize the benefits of standardization. The basis for the flex nodes is an industry standard motherboard based on the SSI specification. In addition to flexibility at the server level, iDataPlex offers flexibility at the rack level. It can be cabled either through the bottom, if it&apos;s set on a raised floor, or from the ceiling. Front-access cabling and Direct Dock Power enable you to make changes in networking, power connections, and storage quickly and easily. The rack also supports multiple networking topologies including Ethernet, InfiniBand, and Fibre Channel. As you can see, iDataPlex offers a flexible set of configurations created from common building blocks. These configurations are either computationally dense, I/O rich, or storage rich. This modular approach to server design keeps costs low while providing a wide range of node types.
  12. {DESCRIPTION} This screen displays an image of the iDataPlex rack with the Rear Door Heat eXchanger attached, and displays close-up images of the hex airflow design, standard hose fitting featuring sealed internal coils, and its swing door ability. {TRANSCRIPT} With the optional IBM Rear Door Heat eXchanger as part of an iDataPlex solution, can provide a high-density data center environment that can alleviate the cooling challenges. The Rear Door Heat eXchanger is a water-cooled door that is mounted to the rear of the IBM iDataPlex rack to cool the air that is heated and exhausted by the devices inside the rack. The Rear Door Heat eXchanger&apos;s sealed coil design when filled with above dew-point, delivers chilled, conditioned water through a supply hose to the heat exchanger. A return hose delivers warmed water back from the heat exchanger. That means the air exiting the rear of the rack can actually be cooler than the air going into the rack. The IBM Rear Door Heat eXchanger requires a cooling distribution unit (CDU) in your data center. It connects to your water system with two quick-connects on the bottom of the door. The door swings open so you can still access the PDUs at the rear of the rack without unmounting the heat exchanger. Service clearance is the same as for a standard rear door installation. The heat exchanger does not require electricity.
  13. {DESCRIPTION} This screen displays a right-align rear view image of the IBM iDataPlex dx360 M3 100U rack, and contains two images that displays thermal cooling – top image with the Rear Door Heat eXchanger attached and bottom image is without the Rear Door Heat eXchanger. {TRANSCRIPT} The innovative iDataPlex design does more than just save power and space—it also helps save cooling costs. With further adjustments, the Rear Door Heat eXchanger can help cool the room. In fact, it can even go beyond that, to the point of helping to cool the data center itself and reducing the need for Computer Room Air Conditioning units (CRACs). With the addition of the optional Rear Door Heat eXchanger water-cooled door provides energy savings from not having to cool air with fans or blowers elsewhere in the computer room, as is done with conventional computer room air conditioner (CRAC) units. Because of the design of the iDataPlex rack, the Rear Door Heat eXchanger has a large surface area for the number of servers it cools, making it very efficient. It can greatly reduce, or even eliminate the requirement for additional cooling in the server room, freeing space that is occupied by the numerous CRAC units that are usually required. For customers who are able to cool their data centers with water, the Rear Door Heat eXchanger can withdraw 100% or more of the heat coming from a 100,000 BTU (approximately 30 kilowatt-hours (kWh)) rack of servers to alleviate the cooling challenge that many data centers are having. By selecting the correct water inlet temperature and water flow rate, you can achieve optimal heat removal. The images shown are thermal images taken of a person standing beside an iDataPlex rack under test in the IBM Thermal Lab. The top image shows water flow off and the bottom image shows water flow on when the heat exchanger was operational. Even without water-cooling, the iDataPlex solution is still at least 20% cooler than the conventional rack approach.
  14. {DESCRIPTION} This screen displays a right-align front view image of the IBM iDataPlex 100U rack. {TRANSCRIPT} For ease of serviceability, all access to hard drives, planar, and I/O is from the front of the rack. There is no need to access the rear of the iDataPlex rack for any servicing except for the Rear Door Heat eXchanger. Additional easy to service points are: Swappable server trays in chassis Blade-like design with chassis docking into power connector Chassis guides keep upper servers in place Rack-side pockets for cables provide highly efficient cable routing. Again, all cables, except power (PDUs), are routed out the front of the chassis and other components creating an ease to service management Flexible support options from self maintenance to 24x7x4 response time One phone number for all support
  15. {DESCRIPTION} This screen displays a right-align front view image of the IBM iDataPlex dx360 M3 100U rack, and a diagram that illustrates the air flow of 100U rack. {TRANSCRIPT} iDataPlex innovative rack solution is designed with emphasis on: Energy efficiency – Optimizes airflow for cooling efficiency with half-depth rack – Reduces pressure drop to improve chilled air efficiency Leadership density – Dual column / Half-depth rack – Standard two-floor tile rack footprint – Up to 168 physical nodes in 8 square feet Flexibility – Matches US &amp; European data center floor tile standards – Compatible with standard forced air environments Ease of use – All service and cabling from the front
  16. {DESCRIPTION} This screen lists the topic agenda and a 3-D image of a hand releasing a ball. {TRANSCRIPT} Next, we will introduce the iDataPlex nodes.
  17. {DESCRIPTION} This screen displays images of the IBM iDataPlex dx360 M2, dx360 M3, dx360 M3 Refresh and dx360 M3 3U Storage server. {TRANSCRIPT} The iDataPlex portfolio continues to evolve to meet the computing requirements in the data center of today and tomorrow. IBM introduced the dx360 M2 in March 2009, based on Intel Nehalem processors which provides maximum performance while maintaining outstanding performance per watt with the highly efficient iDataPlex design. In March 2010, IBM introduced the dx360 M3, increasing our performance and efficiency with the new Intel Westmere processors and new server capabilities which we will go into more detail on in the next few charts. In May 2010, IBM introduced a 3-slot riser card that supports 2 NVIDIA Graphics Processing Units (GPU) and a high bandwidth adapter. We also have a 3U chassis available with the dx360 M3 server, which provides up to 12 3.5” SAS or SATA hard disk drives, up to 24TB per server for large capacity local storage. The iDataPlex portfolio also comes with 3-year customer replaceable unit and onsite limited warranty. Again, within the iDataPlex rack we can mix these offerings to provide the specific rack-level solution that the client is looking for.
  18. {DESCRIPTION} This screen highlights the IBM’s marketing strategy, and displays an interior image of the dx360 M3 2U I/O located in the right-hand side of the screen. {TRANSCRIPT} IBM System x dx360 M3 Refresh provides new options that significantly increase the flexibility of iDataPlex. Starting with our new I/O capabilities, which will support up to 2 very large x16 PCIe adapters such as the new NVIDIA M2050, M2070 or M2070Q GPU cards, plus a high bandwidth network adapter, and high capacity at up to 6Gbps performance. In addition, dx360 M3 offers increased memory capacity to 192GB per server. IBM System x iDataPlex Acceleration node architecture is the next-generation data center solution for clients who find limitations in their EXA-scale computing environments. By delivering customized solutions that help reduce overall data center costs, IBM addresses the business growth challenges in large-scale data centers. iDataPlex incorporates innovative ways to integrate a hybrid Intel-based processor with NVIDIA GPU acceleration for efficiency at the node to drive more density in the rack, and TCO advantage in the data center.
  19. {DESCRIPTION} This screen displays a front view image of the dx360 M3 new I/O tray with red arrows identifying component located in the front of the unit. {TRANSCRIPT} IBM System x dx360 M3 new Graphical Processing Unit (GPU) I/O tray featuring a 3 slot riser card, allowing 2 full height full length 1.5 wide cards (such as the NVIDIA M2050, M2070 or M2070Q ) in the top of the chassis with x16 connectivity. In addition there is an open x8 slot designed to accommodate a high bandwidth adapter such as InfiniBand or 10Gb Ethernet, or Converged Network adapter. The dx360 M3 GPU I/O tray also has an internal slot that will accommodate a RAID adapter, providing full 6Gbps performance for up to four 2.5” drives. As mentioned earlier, when compared to outboard solutions, each iDataPlex GPU server is individually serviceable. In the event of a problem with a GPU card, sparing of GPUs becomes much simpler, as each card can be replaced individually, instead of an outboard unit that contains 4 cards. The significant I/O capabilities also provide for maximum local storage performance with RAID. Also, GPU’s will be provided as part of the Intelligent Cluster integrated solution from IBM, so when there is an issue there is only one number to call for resolution.
  20. {DESCRIPTION} This screen displays a graphic illustrating the cores for the CPU and GPU. {TRANSCRIPT} The use of the GPU to do mathematical computations is one way to meet these increasing application demands. GPU computing is the use of the CPU and the GPU together. The IBM iDataPlex dx360 M3 is powered by both Intel Xeon CPUs and NVIDIA Tesla GPUs, and is designed to be clustered with other dx360 M3 modular servers to form a supercomputer. GPUs have evolved from just doing graphics to becoming general purpose processors that can do scientific computing. Graphics is after all a mathematical problem, a sub-set of scientific computing, and using CPUs and GPUs together is about choosing the right processor for the right job. Whereas a CPU is great for sequential computing, a GPU is best for parallel computing. An example from every day is Excel (or photo-editing). Whereas launching the application is completely sequential and should run on the CPU, the mathematical computations in Excel (the image editing filters in photo editing) run best on the GPU. Supercomputers are measured by the industry standard Linpack benchmark that tests double precision performance. Today it takes 8 racks – or about 600 CPUs - just to be 500 th in the top500. A simple 1 rack GPU cluster gets you on the top500 (about 430 or so). Eight racks of GPUs, the same number of racks just to get a CPU on the top500, gets you into the top 25 fastest computers in the world. And for the same performance a GPU cluster consumes 1/6 the power so it’s more efficient and costs less to run.
  21. {DESCRIPTION} This screen displays two images of the iDataPlex 100U racks and also shows the stages in growth when implementing GPUs in the iDataPlex products. {TRANSCRIPT} Compared to first-generation Intel® Xeon® processor-based iDataPlex servers, the dx360 M3 server with 2 GPUs improves performance density in the data center for massive parallel computations after software porting. IBM dx360 M3 Graphical Processing Unit (GPU) I/O capabilities can increase node density with GPU acceleration by consolidating for an example, eight dx360 servers (using Westmere CPU) with a capacity of 1 TeraFlop into one dx360 server with two NVIDIA GPUs for the same capacity 1TeraFlop — delivering 72% less power consumption in flops and wattage. Therefore reducing acquisition costs by 65% with nearly 10 times more performance per server.
  22. {DESCRIPTION} This screen displays an image that identifies the location of the internal connectors on the dx360 M3 Flex Node system board tray. {TRANSCRIPT} The dx360 M3 is a 1U server, available as machine type 6391, that fits into both the 2U Flex node and 3U chassis, each supporting two trays containing various combinations of server(s), storage, and I/O. Each tray in the dx360 M3 can be configured with a two-socket Intel Xeon 5600 or 5500 processors with 16 memory slots, or up to 128GB of memory capacity. Each tray also has room for two 2.5-inch disks and two PCI Express 2.0 slots for linking in InfiniBand or Ethernet connectivity above and beyond the two Gigabit Ethernet ports on the system board and the 100Mbit Ethernet port for the on-board service processor.
  23. {DESCRIPTION} This screen displays a front view image of the dx360 M3 2U and 3U servers’ system board and 1U expansion trays. {TRANSCRIPT} IBM 2U flex chassis and a 3U chassis can be configured for high-capacity storage requirements to meet a large variety of business needs through an extensive portfolio. Both chassis can be ordered in several different configurations: The Compute Intensive server is a system-board tray designed with one PCI-E adapter connector and one 3.5-inch hard disk drive bay or two 2.5-inch hot-swappable hard disk drive bays (depending on the configuration it is attached to). The 2U Compute + Storage server consists of one system-board tray with the 1U storage expansion unit that is installed in a 2U chassis. The storage expansion unit provides four additional 3.5-inch hard disk drive bays for the system-board tray, for a combined total of five. You can configure the 2U storage server with up to five 3.5-inch hard disk drives. The Acceleration Compute + Input/output server consists of one system-board tray and an I/O expansion tray that is installed in a 2U chassis. You can configure up to eight 2.5-inch hard disk drives, and up to two PCI-E adapters. The 3U storage server consists of one system-board tray and a triple storage expansion unit that is installed in a 3U chassis. The 3U chassis supports up to twelve 3.5-inch hot-swappable hard disk drives and one PCIe adapter. Additional option cards are supported via a riser slot in the system board. When using a 3U storage server configuration, the hard disk drive bay in the system-board tray is not used.
  24. {DESCRIPTION} This screen displays a front view image of the dx360 M3 and a close-up image of the Light Path Diagnostic Panel. {TRANSCRIPT} In addition to the multiple configuration options, located in the center of each system-board tray are the controls, connectors, and LEDs. Reading from left to right, the server has one RS232 serial port, one VGA port (wired to an onboard Matrox G200 graphics adapter supporting resolutions up to 1280x1024), two USB 2.0 ports, one 10/100 Mbps RJ45 connector for dedicated systems management (wired to the Integrated Management Module (IMM)), and two 1 Gbps Ethernet interfaces based on the Intel 82575 controller.
  25. {DESCRIPTION} This screen displays a topology Intel 5520 Tylersburg Chipset illustrating the connection of the two processors, eighteen DIMM slots, PCIe slots, and the I/O Controller Hub (ICH). {TRANSCRIPT} IBM dx360 M3 processor subsystem contains an Intel 5520 (Tylersburg) Chipset supporting Intel’s latest Intel® Xeon 5600 Series processors at up to 6.4 GT (Giga-Transfers) per second via two separate point-to-point QuickPath Interconnect (Intel QPI) links. The Xeon 5600 series are based on the 32nm Technology with 2 nd Generation High-K Process and are capable of supporting 4- and 6-cores per processor package. The Intel QPI is designed for increased bandwidth and low latency. It can achieve data transfer speeds as high as 25.6 GB/sec. The Intel 5520 Chipset delivers dual x16 Gen2 or quad x8 PCI Express 2.0 graphics card support. In addition, it contains an integrated memory controller that supports three channels of lower power DDR3 memory with up to 1333 MHz providing three cache levels hierarchy, 32 KB data / 32 KB instruction of L1 cache, 256 KB of L2 memory cache per core, and a fully shared 8 MB of L3 cache (with a maximum 12MB) that is shared among all cores to match the needs of various applications.
  26. {DESCRIPTION} This screen lists the iDataPlex dx360 M3 advanced, standard and basic processor SKUs. {TRANSCRIPT} The dx360 M3 will support all of the new Westmere-EP 5600 series CPU’s up through the 95W bin. It is important to understand that not all Westmere processors from Intel are 6 core processors . The Advanced line-up at the top have three 6-core processors with speeds up to 2.93GHz, it also includes a 4-core (3.06GHz) processor. In the Standard line-up, all the processors are 4-core like Nehalem 5500 series but have increased from 8 to 12MB cache over Nehalem. The Basic lineup are actually Nehalem-EP 5500 series processors, continuing on from the previous generation. On the right are the Low Voltage 6-core, 60W processor and 4-core 40W processors. These processors are tailored for clients who are willing to pay a premium to get the lowest power draw possible. The dx360 M3 does not support 130W SKUs. The 130W processors would provide a small performance improvement over the top bin 95W processor, but with increased processor cost. However, the significant power and cooling increase of the 130W processors (70W per server, nearly 6KW/rack), and redesigning the server for 130W processors would reduce the efficiency of the server for 95W and below deployments. The over-riding values of an iDataPlex solution in data centers for clients is memory bandwidth and highest efficiency at the lowest cost, and the small performance benefit of the 130W processors would not be justifiable. Another thing to note is the memory bandwidth. Only the Advanced CPU’s provide 1333MHz memory bandwidth, whereas the Standard and Low Power provide 1066MHz bandwidth and the Basic provide 800MHz bandwidth. Although not listed on this chart, the full lineup of Intel Nehalem 5500 series processors remains available and supported on the dx360 M3 as well. To stay current with the latest support processor visit the IBM ServerProven Web site.
  27. {DESCRIPTION} This screen displays the Intel Xeon architecture block diagram that illustrated key features that help enhanced the dx360 M3 processor subsystem. {TRANSCRIPT} Intel Xeon® 5600 features and benefits are build on the Xeon® 5500 leadership capabilities. This new CPU and platform architecture delivers better performance/wattage and lower power consumption than its predecessor. The foundational improvements to the server platform architecture complement the new microarchitecture for dramatic improvement in native platform performance such as: Quick Path Interconnect, integrated memory controller and native DDR3 memory (improve memory access speed, lower latency, with more memory capacity). It also supports PCIe2, and 10Gb Ethernet. With this new CPU comes outstanding innovations in processor technologies like the Intel® Intelligent Power Technology, Integrated Power Gates and Automated Low-Power States technologies help lowers energy costs by automatically putting processor and memory into the lowest available power state to meet the current workload while minimizing impact on performance. And it offers CPU Power Management to optimized power consumption through more efficient Turbo Boost and memory power management.
  28. {DESCRIPTION} This screen displays an architecture block diagram identifies the flow pattern of the connectors/components of the dx360 M3 planar board. {TRANSCRIPT} This block diagram illustrates the functional paths of the major components on the dx360 M3 system board. The Intel 5520 (Tylersburg) IOH chipset provides the interface between the processors, and the PCI Express buses that interface to the ICH10 South Bridge. The ICH10 in turns interface with the IMM, the optional mini-RAID connector, SATA ports, and USB busses.
  29. {DESCRIPTION} This screen displays a front view image of the dx360 M3 LGA 1366 socket, processor, and close-up on the CPU align notch. {TRANSCRIPT} The dx360 M3 has two Intel land grid array (LGA) 1366 sockets, also known as Socket B that is used as the physical interface for Intel Xeon processors. Unlike the pin grid array (PGA) interface found on most AMD and older Intel processors, there are no pins on the chip; in place of the pins are pads of bare gold-plated copper that is soldered on the system board. The advantage of this architecture is that it is now the system board that has the pins, rather than the CPU. The risk of bent pins since the pins are spring-loaded and locate onto a surface, rather than into a hole. Also, the CPU is pressed into place by a &quot;load plate&quot;, rather than human fingers directly. The installing technician lifts the hinged &quot;load plate&quot;, inserts the processor, closes the load plate over the top of the processor, and press down a locking lever. To prevent damage make sure that the alignment notches on the CPU matches the align tab on the socket. The pressure of the locking lever on the load plate clamps the processor&apos;s 1366 gold-plated copper contact points firmly down onto the system board’s 1366 pins, ensuring a good connection. The load plate only covers the edges of the top surface of the CPU. When installing both processors make sure that CPU 1 and 2 are identical (number of cores, cache size and type, clock speed, internal and external clock frequencies).
  30. {DESCRIPTION} This screen displays a front view image of the CPU heat sink and dust cover and heat sink filler. {TRANSCRIPT} With each CPU installation a heat sink cooling device is required. The heat sink is placed on top of the CPU and secured by four screws to the system board. If an optional CPU is not installed, a CPU dust cover and heatsink filler must be installed in that CPU socket. The dust cover will help prevent dust from falling down in to the pins on the system board that could affect your processor performance, and the CPU heatsink filler is required to balance air flow impedance.
  31. {DESCRIPTION} This screen provides a single processor topology of the dx360 M3 memory subsystem featuring three DIMM channels with three memory DIMMs in each channel. {TRANSCRIPT} The dx360 M3 system board supports registered double data rate III (DDR3) LP (low-profile) DIMMs and provides Active Memory features, including advanced Chipkill memory protection, for up to 16X better error correction than standard error-correction code (ECC) memory. In addition to offering triple the memory bandwidth of DDR2 or fully-buffered memory, DDR3 memory also uses less energy. DDR2 memory already offered up to 37% lower energy use than fully buffered memory. Now, a generation later, DDR3 memory is even more efficient, using 10-15% less energy than DDR2 memory. The dx360 M3 supports up to 256GB of memory in 16 DIMM slots using 2GB, 4GB, 8GB or 16 GB (registered DIMM) RDIMMs. The dx360 M3 also supports either standard 1.5V DIMMs or 1.35V DIMMs that consume 10% less energy.
  32. {DESCRIPTION} This screen provides a single processor topology of the dx360 M3 memory subsystem featuring three DIMM channels with three memory DIMMs in each channel. {TRANSCRIPT} Redesign in the architecture of the Xeon 5600 and 5500Series processors bring radical changes in the way memory works in these servers. For example, the Xeon 5600 and 5500 Series processors integrate the memory controller inside the processor, resulting in two memory controllers in a 2-socket system. Each memory controller has three memory channels. Depending on the type of memory, population of memory, and processor model, the memory may be clocked at 1333MHz, 1066MHz or 800MHz. For each CPU, a minimum of two DIMMs must be installed. The system board tray support three single-ranked and dual-ranked DIMMs per channel or two quad-ranked DIMMs per channel. Additional DIMMs may be installed one a time as needed. However, when populating DIMM slots using quad rank DIMMs, only 12 DIMM slots are supported. A DIMM or DIMM filler must occupy each DIMM socket before the server is turned on. Each CPU has its own memory DIMM bank. If only one processor is installed, only the first eight DIMM slots can be used. Adding a second processor not only doubles the amount of memory available for use, but also doubles the number of memory controllers, thus doubling the system memory bandwidth. If you add a second processor, but no additional memory for the second processor, the second processor has to access the memory from the first processor “remotely,” resulting in longer latencies and lower performance. The latency to access remote memory is almost 75% higher than local memory access. So, the goal should be to always populate both processors with memory. It is important to populate all three memory channels in each processor. The relative memory bandwidth decreases as the number of channels populated decreases. This is because the bandwidth of all the memory channels is utilized to support the capability of the processor. So, as the channels are decreased, the burden to support the requisite bandwidth is increased on the remaining channels, causing them to become a bottleneck. If 1.35V and 1.5V DIMMs are mixed, all DIMMS will run at 1.5V. If Chipkill and non-Chipkill DIMMS are used, all memory will run in non-Chipkill mode.
  33. {DESCRIPTION} This screen displays images of a simple-swap and hot-swap hard disk drives, and list supported disk controllers for the dx360 M3 disk subsystem. {TRANSCRIPT} All iDataPlex models includes an integrated six-port SATA II controller. This controller supports up to five (depending on the configuration) internal simple-swap (SS) SATA II drives, or 4 SS SSDs. Hot-swap SAS or SATA HDDs, or simple-swap SAS HDDs, require an optional adapter. The integrated 3Gbps (x4 PCIe) ServeRAID-BR10il v2 controller offers hardware RAID-0/1/1E support (no cache) for up to 4 HDDs or SSDs. The 6Gbps (x8 PCIe) ServeRAID-M1015 SAS/SATA controller supports RAID-0/1/10 (no cache) for up to 16 drives (limited by available bays). The IBM ServeRAID M1000 Series Advance Feature Key adds RAID-5 with SED support. The 6Gbps (x8 PCIe) ServeRAID-M5014 SAS/SATA controller offers enhanced performance with 256MB of cache memory, and supports RAID-0/1/10/5/50 for up to 16 drives (limited by available bays). The 6Gbps (x8 PCIe) ServeRAID-M5015 SAS/SATA controller offers enhanced performance with 512MB of cache memory and battery backup, and supports RAID-0/1/10/5/50 for up to 16 drives (limited by available bays). The IBM ServeRAID M5000 Series Advance Feature Key adds RAID-6/60 with SED support to the M5014 and M5015. The IBM ServeRAID M5000 Series Battery Key adds battery backup support to the M5014. The ServeRAID controllers provide SAS data transfer speeds of up to 3Gb per second 8 in each direction (full-duplex), for an aggregate speed of 6Gbps. The serial design of the SAS bus allows maximum performance to be maintained as additional drives are added. Both controllers support either SAS or SATA, hot-swap or simple-swap, 3.5-inch or 2.5-inch drives. However these drives cannot be intermixed. All drives must be the same type, the same physical size, and use the same interface. Note: SATA II drives also operate at a data transfer speed of up to 300MB per second (but in halfduplex mode). This throughput is similar to that of Ultra320 SCSI, with lower latency.
  34. {DESCRIPTION} This screen display images of a simple-swap and hot-swap hard disk drives, and provide bullet highlights of the dx360 M3 disk subsystem. {TRANSCRIPT} The iDataPlex nodes offer a wide array of flexible storage options as shown here, supporting between 1 to 12 (3.5-inch) hot-swap SAS or SATA drives or up to 2 to 8 (2.5-inch) simple-swap SAS or SATA hard disk drives, or up to 8 2.5-inch solid-state drives (SSDs), offering high-performance with high availability from 50GB to 24TB of storage per chassis, depending on the chassis used and the configuration. The 2.5-inch drives consume approximately half the power of 3.5-inch drives. 2.5-inch solid-state drives use approximately 1/5 the power of 2.5-inch HDDs, with triple the reliability and higher read performance than HDDs.
  35. {DESCRIPTION} This screen displays images of the NVIDIA Tesla M2050, NIVIDIA M1060, and NVIDIA Quadro FX3800 adapters. {TRANSCRIPT} The dx360 M2 introduced support for graphics adapters back in 2009 with the NVIDIA Quadro FX3800, and the dx360 M3 has continued to evolve with newer adapter capabilities with the NVIDIA Tesla M1060, Tesla M2050 followed by the new Tesla M2070 and M2070Q as part of the iDataPlex solution. The NVIDIA Quadro FX3800 has 192 Compute Unified Device Architecture (CUDA) cores for parallel computation and dedicated 1 GB of GDDR3 memory onboard. The 256-bit memory interface allows for a total memory bandwidth of 51.2 GB per second. It is a single-wide PCIe card and its maximum power consumption is 108 watts. The NVIDIA Tesla cards have one Tesla GPU onboard that implements NVIDIA’s Fermi architecture. It is the first implementation of GPUs with the sole purpose of accelerating your application by using the general purpose GPU model (GPGPU). Primary areas are simulations in many different areas that rely heavily on floating-point calculations. Note: Two M1060, M2050, or M2070/M2070Q GPUs can work together on a common workload for double the performance.
  36. {DESCRIPTION} This screen the Tesla T10 series processor inside for the Tread Processor (TP) and the Thread Processor Array (TPA). {TRANSCRIPT} Let’s take a look at the technical perspectives of the latest GPU adapters – starting with the NVIDIA Tesla M1060. The Tesla T10 GPU contains 30 Thread Processor Arrays or TPAs for a total of 240 Thread Processors or “cores”. The M1060 has a 512-bit memory interface to 4 GB of GDDR3 memory with a maximum bandwidth of up to 102 GB per second. The M1060 is a double-wide PCIe card and has a maximum power consumption of about 190W. It provides up to 933 Gflops single-precision floating-point performance (peak) or 78 Gflops double-precision (peak). NVIDIA Tesla M1060 delivers supercomputing performance while requiring less power and space. Featuring the revolutionary NVIDIA CUDA parallel computing architecture and powered by 240 parallel processing cores. The Tesla M1060 shatters the performance per watt expectations to help you solve the toughest computing problems faster.
  37. {DESCRIPTION} This screen displays a close image of the NVIDIA Tesla M2050 and the Tesla M2070/M2070Q adapter. {TRANSCRIPT} The Tesla M2050 and Tesla M2070 and M2070Q computing processor board shown here conform to the PCI Express, double-wide, full height (4.376 inches by 9.75 inches) form factor computing module based on the NVIDIA Fermi GPU. This module comprises a computing subsystem with a GPU and high speed memory. The Tesla 20 Series image is shown without the bracket that is located to your lower right. The vented bracket is shipped standard with all adapters.
  38. {DESCRIPTION} This screen displays 3-D images of the dx360 M3. {TRANSCRIPT} The 3-D drawings illustrates the internals of the new configuration for I/O featuring two GPUs. To your left is the 3 slot riser, that contains 2 full x16 slots on the top, one on either side, and located at the bottom a x8 slot for high bandwidth adapter, enabling clients the flexibility and the performance and the number of I/O slots that they demand in tomorrow&apos;s workloads. The NVIDIA GPU adapters interface to the dx360 M3 through the industry standard PCI-E bus which allows GPUs to quickly and easily integrate into standard servers configurations. All NVIDIA PCI Express adapters with on-board GPUs require a x16 mechanical PCIe slot for installation. The NVIDIA PCIe adapters that are used to connect external GPU enclosures to the dx360 M3 system come in both x16 and x8 interface, although the x16 PCIe adapter is preferred due to performance. PCIe cables is required to connect the NVIDIA PCIe adapters to an NVIDIA external GPU enclosure. When installing the NVIDIA adapter ensure it is completely and evenly inserted into the PCIe slot and all of the retention mechanisms of the system should be used and verified to make sure the card is held in place firmly.
  39. {DESCRIPTION} This screen the Tesla T20 series processor inside CUDA Core. {TRANSCRIPT} In the Tesla T20 or Fermi architecture, each streaming multiprocessor or SM (like the Thread processor array in the T10 series) has 448 Compute Unified Device Architecture (CUDA) cores (or thread processors) -- four times as many as the previous Tesla GPU architecture. All 448 cores share the common resources of their streaming multiprocessor. The GPU consists of hundreds of cores that are extremely good at sharing data among themselves, and they can collaborate and get a task done really fast. That&apos;s why the GPU cores are more effective at running applications which have high mathematical computation and high data throughput. The GPU also supports a lot of memory - shared memories, constant caches, texture caches, and newly added the L1 and L2 caches. Also, in order to make the cores in the GPU much more accessible and easily available to programmers, the GPU has a thread schedule, the NVIDIA Giga Thread. This essentially enables a programmer to just launch millions of threads and then the GPU thread scheduler takes care of actually managing the threads and scheduling them on the cores. We will skip the ECC feature for now.
  40. {DESCRIPTION} This screen the Tesla T20 series processor inside CUDA Core. {TRANSCRIPT} Double precision arithmetic is at the heart of numerically intensive HPC applications such as linear algebra, numerical simulation, and quantum chemistry. The T20 architecture has been specifically designed to offer unprecedented performance in double precision; up to 16 double precision fused multiply-add operations can be performed per SM, per clock, a dramatic improvement over the Tesla T10 architecture. T20 improves on the scheduler in previous GPU architectures by issuing two instructions per clock cycle instead of one. Each streaming multiprocessor can manage 48 warps of 32 threads each for a total of 1,536 active threads of execution. With 14 streaming multiprocessors, a T20-class GPU can handle 21,504 parallel threads. Using this elegant hierarchical model of instruction issuing, T20 achieves very high efficiency. The M2050, M2070, and M2070Q generate up to 515 gigaflops double-precision (1,030 gigaflops single-precision) peak performance. This is all IEEE compliant; in fact, it is compliant to the 754 2008 standard, which is the latest standard, and it&apos;s powered by a fuse multiply add. A fuse multiply add is a highly accurate mathematical operation because it doesn&apos;t have intermediate results which are rounded. What this ends up being is a processor which is extremely valued by high performance computing customers who want to run very high precision and very computationally intensive applications.
  41. {DESCRIPTION} This screen the Tesla T20 series processor inside CUDA Core. {TRANSCRIPT} The 2nd part of this architecture that is extremely important is the memory. Besides the L1 and L2 caches, NVIDIA has always had a shared memory. In this diagram, the shared memory has increased from 16KB to up to 48KB, with the addition of an L1 cache, which is again among 32 cores, and an L2 cache. The architecture itself is a dual issue architecture. This means that you can issue instructions from 2 different threads at the same time. This action leads to more flexibility for the compiler to find parallelism in the core. And really the cache is really held in non-uniform access and applications that have non-uniform access. So anything like finite element analysis, any CAE application, raid tracing, sparse matrix multiplication: all of these benefit greatly by the cache hierarchy.
  42. {DESCRIPTION} This screen the Tesla T20 series processor inside CUDA Core. {TRANSCRIPT} The Giga Thread scheduler, as I mentioned earlier, helps in enabling the GPU to take care of thread scheduling. For example, programmers are doing a matrix multiplication. They are multiplying 2 one million by one million matrices; which have to launch one million threads, each of which does a single multiplication between 2 elements of the 2 matrices. However for the hardware, the GPU, actually takes care of scheduling these threads on the cores, and actually takes care of any kind of dependencies or conflict between the thread. The other added feature in the Fermi architecture is the ability to do concurrent kernel executions. What this means is you can essentially launch multiple functions or multiple tasks on the GPU. These tasks are scheduled in parallel where possible by the GPU hardware. Secondly, a new DMA engine was added to the Fermi architecture. In the past, the GPU could communicate with the CPU over a single bi-directional bus; now there are 2 bi-directional buses and 2 bi-directional DMA engines which enable overlapping load, computation, and store them at the same time between the GPU and the CPU.
  43. {DESCRIPTION} This screen the displays Tesla T20 series processor inside CUDA Core. {TRANSCRIPT} Finally, there is the Error Correction Code (ECC) support; as I mentioned earlier, this is a first for any GPU architecture. It provides full ECC, it detects and corrects a single error, and it detects and flags a double error. This is a really important feature for 40 nanometer technologies and Fermi is a 40 nanometer implementation. The ECC implementation protects both the internal register files and shared memories, and L1 and L2 caches, and the external memory on the GPU board, which is connected by the GDDR5 interface. This is an extremely important differentiator for NVIDIA&apos;s GPUs.
  44. {DESCRIPTION} This screen the displays a chart that compares the Tesla T20 series board configurations. {TRANSCRIPT} There is only one configuration for the Tesla M2050 and Tesla M2070. Notice that the specifications only differs with the memory – everything else is the same: The Tesla M2050 module offers 3 GB of GDDR5 memory on board. The Tesla M2070 and Tesla M2070Q module offers 6 GB of GDDR5 memory on board. Both of these products can be configured by the OEM or by the end user to enable or disable ECC or error correcting codes that can fix single-bit errors and report double-bit errors. Enabling ECC will cause some of the memory to be used for the ECC bits, so the user available memory will decrease to approximately 2.62 GB for a Tesla M2050 and approximately 5.25 GB for a Tesla M2070 and Tesla M2070Q. With the Tesla M2070 and M2070Q adds more memory for GPU computing with the same cooling and power at lower price point. In addition the Tesla M2070Q adds Quadro sw (Quadro Software) for professional graphics visualization. The Tesla M2070Q GPU combines Tesla’s high performance computing and the NVIDIA Quadro® professional-class advanced visualization in the same GPU. What this means is that Tesla M2070Q is capable of visualization type applications in addition to it&apos;s high performance computing acceleration capabilities. An example of a Quadro software application that the Tesla M2070Q is capable of is Microsoft&apos;s RemoteFX remote visualization app. The standard Tesla M2070 does not support Microsoft&apos;s RemoteFX. Tesla M2070Q is the ideal solution for customers, who want to deploy high performance computing, advanced and remote visualization in a datacenter.
  45. {DESCRIPTION} This screen displays images of the QLogic PCIe and CNA, Emulex PCIe HBA, Brocade PCIe and CAN, and IBM High IOPS SS Class SSD HBA. It also displays a table that provides a list of the newly supported host bus adapter (HBA) cards and optional RAID cards. {TRANSCRIPT} The System x iDataPlex dx360 M3 provides I/O flexibility and offer potential investment protection by supporting high-performance PCIe Host Bus Adapter cards (HBAs), such as 10Gb Ethernet, Fibre Channel, InfiniBand and GPU cards. The PCI Express (PCI-E), is a high performance, general purpose I/O interconnect used for a variety of computing and communication platforms. It maintains key PCI features, but is a fully serial interface rather than the parallel bus architecture found in the conventional PCI. PCI-E can be used for universal connectivity as a chip-to-chip interconnect, I/O interconnect for adapter cards, or an I/O attach point to other interconnects. Depending upon the configuration, dx360 M3 supports up to three high-speed PCIe adapter slots per chassis through the use of a riser card. There are five different riser cards available for iDataPlex: 1U single-slot for front PCIe slot; used for installation of one PCIe card. Supported in all configurations 2U two-slot for front PCIe slot; used for installation of two PCIe cards. Any two adapters are support, with a maximum of one GPU or GPGPU adapter 2U three-slot for front PCIe slot, with dx360 M3 only; used for installation of two GPU or GPGPU adapters and any other PCIe card. The third PCIe slot is on the backside of the riser card. The card has a PCIe switch onboard and requires a separate power cable. Supported only in 2U I/O-rich configurations using the PCIe tray 1U single-slot for rear PCIe slot; used for installation of any PCIe storage controller card. Supported in all 2U configurations when using the dx360 M3 2U single-slot for rear PCIe slot; used for installation of any PCIe storage controller card. Supported in 3U configurations only
  46. {DESCRIPTION} This screen displays a rear view image of the dx360 M3 and an image the 550W/750W/900W power supplies. {TRANSCRIPT} Each iDataPlex chassis includes a power supply option and low power-consuming fans that provide operating power and cooling for all components within the chassis. The dx360 M3 offers power supply flexibility with a Higher Efficiency 550 watt non-redundant power supply for lower-power grid deployments, or an optional Higher Efficiency 900 watt power supply for non-redundant requirements, capable of reducing power consumption up to 8%, or with two separate 750 watt AC in to 12VDC power supplies to create an N+N configuration. *Note that the 750W N+N power supply actually runs at 900W when both sides are working properly and only drops to 750W if one of the sides drops off. With the redundant power option, customers can still take advantage of all the optimization for software-resilient workloads, and now take advantage of iDataPlex efficiency for non-grid applications where they desire. The 550 watt supply is in the same form factor as the 900W non-redundant supply, with 2 discrete supplies inside the container that are bussed together and 2 discrete line feeds to split power to separate PDU’s. Note, when deploying a full rack of redundant power will require doubling the PDU count, but the vertical slots in the iDataPlex rack can easily accommodate these. Whether the customer’s requirement is line feed maintenance, node protection, or just increased reliability, iDataPlex can now deliver a solution.
  47. {DESCRIPTION} This screen displays a rear view image of the dx360 M3 with fan unit slightly removed. It also displays the 4 fan unit. {TRANSCRIPT} The chassis fan assembly is comprised of four large 80mm fans per 2U or 3U chassis for more efficiency and lower noise than the eight small 40mm fans used in standard 1U servers. The fan assembly is also non-redundant and shared between the two chassis elements (that is, server-server or server-storage enclosures). The fans cannot be replaced individually because the fan assembly is a single unit. The chassis must be removed from the rack in order to service or replace the fan assembly or power supplies. The power supply will provide its own fan(s) for cooling and control the system fans for system cooling. The power supply used in each iDataPlex chassis can be more than 92% efficient depending on the load, and consume 40% less power when compared to a traditional 1U server. The fans shared between two nodes in a Compute Chassis are 70% more efficient in terms of power consumption, compared to those in a traditional 1U server. The iDataPlex uses Direct Dock Power to power the nodes in the chassis. Direct Dock Power allows the chassis to be inserted without having to connect power cables. With the innovation of Direct Dock Power, you can simply push the chassis into the rack, and the power supply will connect to the PDU. You do not have to access the rear of the rack when installing or working with servers. The chassis in turns use industry standard power cords that are attached to the iDataPlex rack. When the chassis is installed in an iDataPlex rack, it is automatically connected to power through a PDU that is mounted to the rack rail—eliminating the need to access the rear of rack to attach the power cord.
  48. {DESCRIPTION} This screen lists the topic agenda and a 3-D image of a hand releasing a ball. {TRANSCRIPT} Our final topic is on iDataPlex Management.
  49. 5. {DESCRIPTION} This screen displays images of the systems management stack: Tivoli software, IBM systems Director, ToolsCenter, IMM, and UEFI. {TRANSCRIPT} The new generation of IBM System x iDataPlex servers offers a high level of systems management capabilities which is a complete end-to-end stack designed to deliver future-proof management today. It begins with hardware and firmware, featuring the Integration Management Module (IMM) and the Unified Extensible Firmware Interface (UEFI) introduced with out new generated Intel-equipped servers in March 2009. It then builds upon that with our IBM’s ToolsCenter consolidation of tools with some additional important capabilities. On top of that is our advanced management software, IBM Systems Director which allows the servers to be managed either locally or remotely and manages both physical and virtual systems. Systems Director comes standard, is simple to start, allows for an automated, fast deployment, provides a single interface for the entire infrastructure, and seamlessly plugs into many existing enterprise management solutions. At the very top you have IBM Tivoli software enterprise level server management with IBM Tivoli or others. IBM iDataPlex servers also features Dynamic System Analysis, Automatic Server Restart, Wake on LAN® support, and PXE support. To include support for Moab cluster Suite, an intelligent management middleware that provides simple Web-based job management, graphical cluster administration and management reporting tools, and xCat (which stands for Extreme Cluster Administration Toolkit), an open source Linux/AIX/Windows Scale-out Cluster Management Solution.
  50. {DESCRIPTION} This screen lists IBM Business Partner’s icons, and four images of IBM System x systems: HS22 blade, 42U rack with console, IBM BladeCenter S chassis and iDataPlex 100U rack. {TRANSCRIPT} IBM needs partners to be successful and deliver the solutions customers are looking for. We believe open standards benefits our clients and is a major driver of IT innovation and integration. We will continue to engage our partners on innovative concepts like cloud computing, on driving open management standards, and on academic initiatives that will prepare the next generation of IT professionals. And at a client level, we work closely with our partners to deliver full solutions to our client’s needs.
  51. {DESCRIPTION} This screen displays a right-align view image of the IBM iDataPlex dx360 M3 100U rack with a data center switch being highlighted from the rack. {TRANSCRIPT} There are a wide range of switches to select from based on your computing needs.
  52. {DESCRIPTION} This screen displays a image of one of the Despicable Me characters. {TRANSCRIPT} This is just one of iDataPlex many customer success stories. Illumination Entertainment collaborated with Mac Guff Ligne, a Paris-based digital production studio, to complete the 12 months of intensive graphics and 3-D animation rendering, amounting to up to 500,000 frames per week.  To complete the project, the team needed to quickly design and build a dedicated server farm capable of meeting these demanding workloads across its 330 person team of artists, producers and support staff.  The production team also needed efficient space, and an IT solution that was easy to configure, manage and expand. To avoid the potentially high air conditioning costs associated with operating a data center 24 hours a day, 7 days a week, the company also wanted an energy efficient technology platform. Illumination tapped IBM and its Paris-based Business Partner Serviware to build a server farm based on IBM&apos;s iDataPlex system.  With this system&apos;s efficient design and flexible configuration, the company was able to meet the intense computing requirements for the film and save room by doubling the number of systems that can run in a single IBM rack. The entire space used to house the data center amounted to four parking spots in the garage of the production facility, about half of what had initially been allotted.  The studio&apos;s iDataPlex solution included IBM&apos;s innovative Rear Door Heat eXchanger that allows the system to run with no air conditioning required, saving up to 40% of the power used in typical server configurations. Overall, the installation included 6,500 processor cores.&quot;
  53. {DESCRIPTION} This screen lists iDataPlex position among the Top500 list. {TRANSCRIPT} System x iDataPlex continues to prove leadership across SuperComputer deployments as shown in the latest Top500 list.
  54. {DESCRIPTION} This screen displays iDataPlex awards. {TRANSCRIPT} The slide lists iDataPlex’s 2009 – 2010 awards and achievements, among these awards IBM iDataPlex dx360 M3 was cited the 2010 Reader’s Choice: Best HPC Server Product or Technology during the 2010 Supercomputing Conference, held in New Orleans, La. The annual awards are highly coveted as prestigious recognition of achievement by the HPC community.
  55. {DESCRIPTION} This screen lists the topic summary. {TRANSCRIPT} Having completed this Topic you should now be able to: List three emerging technologies for an iDataPlex solution List three goals that iDataPlex addressed Identify elements of the iDataPlex rack design Match the server offering to its characteristics
  56. {DESCRIPTION} This screen identifies abbreviations and acronyms used in the topic. {TRANSCRIPT} Presented is a glossary of abbreviations and acronyms used in this topic.
  57. {DESCRIPTION} This screen displays html links. {TRANSCRIPT} Listed are some additional resources that will help you learn more about the IBM System x iDataPlex solution.
  58. {DESCRIPTION} Displays the statement of “End of Presentation” in the center of the slide. {TRANSCRIPT} Thank you for participating. This concludes this topic.