5. • 4 SandyBridge CPUs
• 4 two-port IB QDR HCAs
• Scientific Linux 6
[Diagram: QDR links between the machines and a QDR uplink to the server]
6. Why InfiniBand?
• Faster networking
• x10 ~ x40 faster than 1GbE.
• Cost-effective!
• ~$100 for an IB 10 Gbps (SDR) HCA
• We need at least 1 GB/s of transfer bandwidth (rough arithmetic below)
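A back-of-the-envelope check of the figures above (my own arithmetic, assuming 8b/10b link encoding on SDR/QDR links):

\[
\frac{\text{IB SDR}}{\text{1GbE}} = \frac{10\ \text{Gbps}}{1\ \text{Gbps}} = 10\times,
\qquad
\frac{\text{IB QDR}}{\text{1GbE}} = \frac{40\ \text{Gbps}}{1\ \text{Gbps}} = 40\times
\]
\[
\text{QDR payload} \approx 40\ \text{Gbps} \times \tfrac{8}{10} = 32\ \text{Gbps} = 4\ \text{GB/s} \gg 1\ \text{GB/s target},
\qquad
\text{SDR payload} \approx 8\ \text{Gbps} = 1\ \text{GB/s}
\]

The SDR payload figure is consistent with the ~930 MB/s IPoIB result quoted later.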
7. Why DIY?
• No vendor seems to support the SandyBridge (AVX) + IB QDR combination yet.
• Kernel 2.6.32 or later (RHEL 6 or Ubuntu 10.04+) is required for AVX support (a quick check is sketched below).
• So for now, a SandyBridge + IB system has to be DIY.
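A quick way to confirm that a given box actually exposes AVX is to look at the kernel release and the CPU flags; a minimal Python sketch (nothing here is specific to this cluster):

    import os

    # The CPU must advertise the "avx" flag, and the kernel must be new enough
    # (>= 2.6.32 per the slide above) to manage the AVX register state.
    with open("/proc/cpuinfo") as f:
        has_avx = any(line.startswith("flags") and "avx" in line.split()
                      for line in f)

    print("kernel release :", os.uname().release)
    print("cpu shows avx  :", has_avx)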
8. What you need
• HW
• IB HCA
• IB Cable
• IB Switch (optional for a small network)
• SW
• Driver suite (OFED for Linux, WinOF for Windows); see the sanity check sketched below
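Once the driver suite is installed and the HCA modules are loaded, the kernel exposes each adapter under /sys/class/infiniband. A minimal sanity-check sketch (device names such as mlx4_0 and the ports/state/rate layout follow the usual kernel IB sysfs convention, not anything specific to this setup):

    import os

    IB_SYSFS = "/sys/class/infiniband"   # present once the IB core/HCA drivers are loaded

    def read(path):
        with open(path) as f:
            return f.read().strip()

    # Print every detected HCA and the state/rate of each of its ports.
    for hca in sorted(os.listdir(IB_SYSFS)):
        ports_dir = os.path.join(IB_SYSFS, hca, "ports")
        for port in sorted(os.listdir(ports_dir)):
            state = read(os.path.join(ports_dir, port, "state"))  # e.g. "4: ACTIVE"
            rate = read(os.path.join(ports_dir, port, "rate"))    # e.g. "40 Gb/sec (4X QDR)"
            print(f"{hca} port {port}: {state}, {rate}")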
9. OFED
• OSS IB driver suite
• 1.5.3.1 as of May 2011.
• IPoIB Connected Mode (CM) is now the default connection mode (see the mode check below)
• Accelerates TCP/IP apps without modification.
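Whether an IPoIB interface is actually running in Connected Mode can be read back from sysfs; a small sketch (ib0 is just the typical name of the first IPoIB interface, substitute your own):

    IFACE = "ib0"   # hypothetical interface name; adjust to your setup

    # IPoIB exposes its mode and MTU like any other network interface.
    with open(f"/sys/class/net/{IFACE}/mode") as f:
        mode = f.read().strip()      # "connected" or "datagram"
    with open(f"/sys/class/net/{IFACE}/mtu") as f:
        mtu = f.read().strip()       # Connected Mode allows a much larger MTU (up to 65520)

    print(f"{IFACE}: mode={mode}, mtu={mtu}")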
13. IPoIB
• IP over InfiniBand
• Accelerates existing TCP/IP apps without modification (see the sketch below)
• Still consumes more CPU time (than RDMA-based transports).
• 930 MB/s bandwidth confirmed with an IB SDR HCA (in Connected Mode).
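The "without modification" point is literal: any ordinary socket program works unchanged once it talks to an address assigned to the IPoIB interface. A minimal throughput probe as a sketch (the port number and transfer size are arbitrary choices, not from the original setup):

    import socket, sys, time

    # Plain TCP throughput probe: run "server" on one node, "client <server_ip>" on
    # the other, pointing at the IP assigned to the IPoIB interface. The exact same
    # code works over 1GbE; no RDMA-specific API is involved.
    PORT = 5001
    CHUNK = 1 << 20            # 1 MiB per send/recv
    TOTAL = 4 << 30            # transfer 4 GiB in total

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, addr = srv.accept()
            with conn:
                received = 0
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)
            print(f"received {received / 2**20:.0f} MiB from {addr[0]}")

    def client(host):
        buf = b"\0" * CHUNK
        with socket.create_connection((host, PORT)) as conn:
            start = time.time()
            sent = 0
            while sent < TOTAL:
                conn.sendall(buf)
                sent += CHUNK
            elapsed = time.time() - start
        print(f"{sent / 2**20 / elapsed:.0f} MiB/s")

    if __name__ == "__main__":
        client(sys.argv[2]) if sys.argv[1] == "client" else server()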
14. SRP (SCSI RDMA Protocol)
• 700 to 800 MB/s bandwidth confirmed on an IB SDR HCA (a simple read probe is sketched below)
• iSCSI over IPoIB CM achieves the same performance, though it consumes more CPU time.
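Once the SRP initiator has logged in, the exported LUN appears as an ordinary SCSI block device, so any normal I/O code can drive it. A rough sequential-read probe as a sketch (/dev/sdb is a hypothetical device name; reading a raw block device needs root, and the page cache will flatter repeated runs):

    import os, time

    DEVICE = "/dev/sdb"      # hypothetical: whichever device the SRP LUN attached as
    CHUNK = 4 << 20          # 4 MiB per read
    TOTAL = 2 << 30          # read 2 GiB in total

    # Simple sequential read; O_DIRECT would give a fairer number by bypassing the
    # page cache, but plain reads keep the sketch short.
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        start = time.time()
        done = 0
        while done < TOTAL:
            data = os.read(fd, CHUNK)
            if not data:
                break
            done += len(data)
        elapsed = time.time() - start
    finally:
        os.close(fd)

    print(f"{done / 2**20 / elapsed:.0f} MiB/s sequential read from {DEVICE}")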
19. Pros
• Fast!
• Cost effective!
• IPoIB CM shows good performance.
• Accelerates TCP/IP apps without modification.
• Windows <-> Linux interoperation now seems stable with OFED 1.5.3.1.
20. Cons
• Hard to install OFED drivers on newer distributions and newer system configurations.
• Kernel recompilation is required for SRPT (the SRP target).
• IPoIB CM does not seem to be available on Windows yet.
21. ToDo
• InfiniBand virtualization
• e.g. 4 VMs sharing one 40 Gbps IB HCA is possible (10 Gbps per VM)
• InfiniBand-ready distributed storage system
• GlusterFS, NFS/RDMA
• Diskless boot with FlexBoot.