The document describes the hardware infrastructure of the Michelangelo HPC cluster. It consists of 70 blade nodes, each with dual AMD Opteron processors and 8 GB of RAM, for a total of 280 cores and 560 GB of RAM. Storage comprises 26 TB of disk space across multiple disk arrays. The cluster uses InfiniBand and Ethernet networking and employs a diskless node design for reliability and ease of maintenance. Diagrams show the cluster spread across 3 racks connected by InfiniBand, Ethernet, and Fibre Channel switches for high-performance computing applications.
11. THE DESIGN OF MICHELANGELO
InfiniBand
Performance
Bandwidth
Latency
Industry standard
Reliable
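The bandwidth and latency listed above jointly determine how long a message transfer takes, and which one dominates depends on message size. A minimal cost-model sketch (the link figures used here are generic assumptions for an early InfiniBand fabric, not measured values from Michelangelo):

```python
def transfer_time_us(msg_bytes, latency_us, bandwidth_gbps):
    """Linear cost model: time = fixed latency + size / bandwidth."""
    bytes_per_us = bandwidth_gbps * 1e9 / 8 / 1e6  # Gbit/s -> bytes per microsecond
    return latency_us + msg_bytes / bytes_per_us

# Assumed figures for a 4x SDR InfiniBand link: ~10 Gbit/s, ~5 us latency.
small = transfer_time_us(1024, latency_us=5.0, bandwidth_gbps=10.0)
large = transfer_time_us(1024 * 1024, latency_us=5.0, bandwidth_gbps=10.0)
# Small messages are dominated by the fixed latency;
# large messages are dominated by the bandwidth term.
```

This is why both numbers matter for HPC workloads: fine-grained MPI traffic is latency-bound, while bulk data movement is bandwidth-bound.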
12. THE DESIGN OF MICHELANGELO
Diskless
Reliable
Easy and fast maintenance
Reconfigurable
Easy expansion
Single point of administration
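Diskless nodes typically network-boot their kernel and mount the root filesystem from a central server, which is what yields the single point of administration and easy reconfiguration listed above. A sketch of what a PXELINUX entry for such a node might look like (the paths, server address, and label are illustrative assumptions, not Michelangelo's actual configuration):

```
# /tftpboot/pxelinux.cfg/default -- hypothetical diskless boot entry
DEFAULT diskless
LABEL diskless
    KERNEL vmlinuz
    APPEND initrd=initrd.img root=/dev/nfs nfsroot=10.0.0.1:/exports/nodeimage ip=dhcp ro
```

Because every node boots the same image from one server, upgrading or reconfiguring the cluster amounts to changing a single shared filesystem and rebooting the nodes.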
13. THE DESIGN OF MICHELANGELO
Storage Area Network (SAN)
Performance
Reliable
Industry standard
Quality of Service (QoS)
Future expansion
Easy interface to backup systems
15. HARDWARE DESCRIPTION
Blade node
[Block diagram of the blade node; recoverable components:]
Opteron 2xx processors
4x DDR400 sockets
PCI-E extended backplane connector
PCI-X expansion
Gigabit NIC
IDE port
Combo I/O port
Service processor (IPMI controller, KVM controller)
ATI RageXL VGA controller
16. HARDWARE DESCRIPTION
Blade chassis
6+1 redundant 2100 W power supplies
3+1 system fan tray
AC 100~240 V inlets
Two Gigabit LAN Bays
One Fast Ethernet Bay
One KVM
One Service Processor