2. what is a compute blade?
• Features and benefits of a modular “blade” system, but designed specifically for the needs of HPC cluster environments
• Consists of 2 pieces:
  • Blade Housing - the enclosure that holds the compute modules, mounted into the rack cabinet
  • Compute Blade Module - a complete, independent high-end server that slides into the Blade Housing
5. compute blade key points
• Each node is independent - no impact on other nodes
• 2x computing power in the same space
• 80%+ efficient power supplies, low-power CPU and drive options
• Nodes equipped with a management engine: IPMI and iKVM (see the sketch below)
• Each compute node is modular, removable, and tool-less
• Mix and match architectures in the same blade housing
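As a rough illustration of what the per-node IPMI management engine enables, here is a minimal Python sketch that shells out to the standard ipmitool utility. The host address and credentials are placeholders, and exact sensor names vary by board; this is a sketch under those assumptions, not a vendor-supplied tool.

```python
# Hypothetical sketch: querying one blade's management engine over IPMI.
# Assumes the ipmitool utility is installed; BMC_HOST, BMC_USER, and
# BMC_PASS are placeholders for a real management-LAN endpoint.
import subprocess

BMC_HOST = "10.0.0.101"   # management-LAN address of one blade (placeholder)
BMC_USER = "admin"        # placeholder credentials
BMC_PASS = "secret"

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the blade's BMC and return its output."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Because each node is independent, it can be powered and monitored
# without touching its neighbors in the same blade housing.
print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
print(ipmi("sdr", "type", "Temperature"))   # temperature sensor readings
```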
6. compute blade vs 1U twin
Twin system
• Nodes fixed into enclosure
• Must take both nodes down even if servicing only one node
• Single shared power supply
Our Compute Blade
• Individual removable nodes
• Nodes can run outside of the housing for testing and serviceability
• Dedicated 80%+ efficient power supply
• Mix and match CPU architectures
7. blade product highlights
• High density without sacrificing performance
• High reliability - independent power supplies and removable blade modules
• Easy serviceability - each module is removable and usable without the blade housing
• Tool-less design for easy replacement of failed components
• Multiple system architectures - available with both AMD Opteron and Intel Xeon
8. compute blade - front
1 Power LED
2 Power switch
3 HDD LED
4 Slide-out ID label area
5 Quick-release handles
9. compute blade - inside
Blade modules slide out of the housing independently.
1 Power supply
2 Drive bay (1x 3.5” or 2x 2.5”)
3 Cooling fans
4 System memory
5 Processors
6 Low-profile expansion card
10. compute blade - features
• Easy-swap tool-less fan housing
• Fans and hard drives shock-mounted to prevent vibration and failure
• 80%+ efficient power supply per blade
• Thumbscrew installation for easy replacement
11. compute blade - density
Standard 1U, dual-CPU servers: max 42 servers per rack (336 cores)
Compute blade servers: max 84 servers per rack (672 cores)
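As a quick check of these figures, assuming a standard 42U rack and the dual quad-core (2 x 4 = 8 cores) servers described on the earlier slides:

```python
# Worked check of the density numbers on this slide.
RACK_UNITS = 42
CORES_PER_SERVER = 2 * 4          # dual quad-core CPUs

servers_1u    = RACK_UNITS * 1    # one server per 1U slot
servers_blade = RACK_UNITS * 2    # two blade modules in the same 1U of space

print(servers_1u,    servers_1u    * CORES_PER_SERVER)   # 42 336
print(servers_blade, servers_blade * CORES_PER_SERVER)   # 84 672
```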
12. compute blade models
                1BX5501                                    1BA2301
Processor       Dual quad-core Intel Xeon 5500 series      Dual AMD Opteron 2300 (4-core) or 2400 (6-core) series
Chipset         Intel 5500 chipset with QPI interconnect   NVIDIA MCP55 Pro with HyperTransport
System memory   Maximum of 12 DDR3 DIMMs or 96GB           Maximum of 8 DDR2 DIMMs or 64GB
Expansion slot  1x PCI-e Gen 2.0 x16                       1x PCI-e x16
LAN             2x 1Gbps RJ-45 Ethernet ports              2x 1Gbps RJ-45 Ethernet ports
InfiniBand      Optional onboard ConnectX DDR              Optional onboard ConnectX DDR
Manageability   Dedicated LAN for IPMI 2.0 and iKVM        Dedicated LAN for IPMI 2.0
Power supply    80%+ efficient power supply                80%+ efficient power supply
13. compute blade - 1BX5501
• Processor (per blade)
  • Two Intel Xeon 5500 Series processors
  • Next generation "Nehalem" microarchitecture
  • Integrated memory controller and 2x QPI interconnects per processor
  • 45nm process technology
• Chipset (per blade)
  • Intel 5500 I/O controller hub
• Memory (per blade)
  • 800MHz, 1066MHz, or 1333MHz DDR3 memory
  • Twelve DIMM sockets supporting up to 144GB of memory
• Storage (per blade)
  • Supports RAID levels 0-1 with Linux software RAID (with 2.5" drives; see the sketch after this list)
  • One 3.5" SATA2 drive bay or two 2.5" SATA2 drive bays
  • Drives shock-mounted into the enclosure to prevent vibration-related failures
  • Support for high-performance solid state drives
• Management (per blade)
  • Integrated IPMI 2.0 module
  • Integrated management controller providing iKVM and remote disk emulation
  • Dedicated RJ45 LAN for management network
• I/O connections (per blade)
  • One open PCI-Express 2.0 expansion slot running at x16
  • Two independent 10/100/1000Base-T (Gigabit) RJ-45 Ethernet interfaces
  • Two USB 2.0 ports
  • One DB-9 serial port (RS-232)
  • One VGA port
  • Optional ConnectX DDR InfiniBand CX4 connector
• Electrical requirements (per module)
  • High-efficiency power supply (greater than 80%)
  • Output power: 400W
  • Universal input voltage 100V to 240V
  • Frequency: 50Hz to 60Hz, single phase
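The Linux software RAID support mentioned under Storage is ordinary mdadm mirroring. The following Python sketch shows one possible setup, assuming the blade's two 2.5" drives appear as /dev/sda and /dev/sdb (placeholder device names); it is an illustration, not a vendor-provided procedure, and would need to run as root on the blade itself.

```python
# Minimal sketch: mirror (RAID 1) the blade's two 2.5" SATA drives with
# mdadm. Device names /dev/sda and /dev/sdb are placeholders.
import subprocess

def run(*cmd: str) -> None:
    """Echo and execute one shell command, stopping on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a two-disk RAID 1 array from the pair of 2.5" drives.
run("mdadm", "--create", "/dev/md0",
    "--level=1", "--raid-devices=2", "/dev/sda", "/dev/sdb")

# Put a filesystem on the mirror and mount it.
run("mkfs.ext4", "/dev/md0")
run("mount", "/dev/md0", "/mnt/data")
```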
15. compute blade - 1BA2301
• Processor (per node)
  • Two AMD Opteron 2300 or 2400 Series processors (4-core or 6-core)
  • Next generation "Istanbul" or "Shanghai" microarchitectures
  • Integrated memory controller per processor
  • 45nm process technology
• Chipset (per node)
  • NVIDIA MCP55 Pro
• Memory (per node)
  • 667MHz or 800MHz DDR2 memory
  • Eight DIMM sockets supporting up to 64GB of memory
• Storage (per node)
  • One 3.5" SATA2 drive bay or two 2.5" SATA2 drive bays
  • Supports RAID levels 0-1 with Linux software RAID (with 2.5" drives)
  • Drives shock-mounted into the enclosure to prevent vibration-related failures
  • Support for high-performance solid state drives
• Management (per node)
  • Integrated IPMI 2.0 module
  • Dedicated RJ45 LAN for management network
• I/O connections (per node)
  • One open PCI-Express expansion slot running at x16
  • Two independent 10/100/1000Base-T (Gigabit) RJ-45 Ethernet interfaces
  • Two USB 2.0 ports
  • One DB-9 serial port (RS-232)
  • One VGA port
  • Optional ConnectX DDR InfiniBand CX4 connector
• Electrical requirements (per node)
  • High-efficiency power supply (greater than 80%)
  • Output power: 400W (see the input-power sketch after this list)
  • Universal input voltage 100V to 240V
  • Frequency: 50Hz to 60Hz, single phase
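The electrical figures above allow a rough input-power estimate. This sketch treats the "greater than 80%" efficiency as exactly 80% (a worst-case placeholder, not a measured value) and extrapolates to the 84-module rack from the density slide, assuming every module runs at its full 400W output.

```python
# Rough worst-case input-power arithmetic from the stated figures:
# 400W output per module at 80% supply efficiency.
OUTPUT_W   = 400
EFFICIENCY = 0.80

input_w = OUTPUT_W / EFFICIENCY             # wall power drawn at full load
print(f"per module: {input_w:.0f}W input")              # 500W
print(f"84-module rack: {84 * input_w / 1000:.0f}kW")   # 42kW worst case
```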
17. availability and pricing
• Both the 1BX5501 and 1BA2301 are available and shipping now
• Systems are available online for remote testing
• For pricing and custom configurations, contact your Account Representative:
  • (866) 802-8222
  • sales@advancedclustering.com
  • http://www.advancedclustering.com/go/blade