High Performance Communication for Oracle using InfiniBand
2. High Performance Communication for Oracle using InfiniBand
Ross Schibler, CTO, Topspin Communications, Inc.
Peter Ogilvie, Principal Member of Technical Staff, Oracle Corporation
Session id: #36568
8. Three Pain Points
- Oracle RAC: scalability within the database tier is limited by interconnect latency, bandwidth, and overhead (Gigabit Ethernet). OUCH!
- Application servers: throughput between the application tier and the database tier is limited by interconnect bandwidth and overhead (Gigabit Ethernet). OUCH!
- Shared storage: I/O requirements are driven by the number of servers instead of application performance requirements (Fibre Channel). OUCH!
10. Removes All Three Bottlenecks
- Oracle RAC: InfiniBand provides a 10 Gigabit, low-latency interconnect for the cluster.
- Application servers: the application tier can run over InfiniBand, benefiting from the same high throughput and low latency as the cluster.
- Shared storage: central server-to-storage I/O scalability through the InfiniBand switch removes I/O bottlenecks to storage and provides smoother scalability.
14. InfiniBand Nomenclature
[Diagram: each server's CPUs, memory controller, and system memory attach to a Host Channel Adapter (HCA) on the host interconnect; IB links run through a Topspin 360/90 switch (with Subnet Manager, SM) to Target Channel Adapters (TCAs) that bridge to Ethernet links and Fibre Channel links for the LAN and the storage network.]
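To make the HCA part of this nomenclature concrete, here is a small hedged sketch (using today's Linux libibverbs API, not code from the slides) that lists the Host Channel Adapters visible on a server:

```c
/* Hedged sketch: enumerate the HCAs on this host with libibverbs.
 * Each entry is a Host Channel Adapter sitting on the server's I/O bus. */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    for (int i = 0; i < num; i++)
        printf("HCA %d: %s\n", i, ibv_get_device_name(devs[i]));
    ibv_free_device_list(devs);
    return 0;
}
```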
17. Copy on Receive
[Diagram: with a conventional NIC, received data lands in an OS buffer in system memory and is then copied into the application buffer.] The data traverses the host bus three times: once as the NIC DMAs it into the OS buffer, and twice more as the kernel reads it back and writes it into the application buffer.
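For contrast, a minimal sockets sketch (an assumed example, not Oracle code): the NIC has already DMAed the packet into a kernel buffer, and recv() then performs the kernel-to-user copy that accounts for the extra bus traversals.

```c
/* Minimal sketch of conventional copy-on-receive with BSD sockets:
 * recv() copies data from the kernel socket buffer (filled by NIC DMA)
 * into app_buf, so the payload crosses the memory bus again. */
#include <sys/types.h>
#include <sys/socket.h>

ssize_t copy_on_receive(int sock_fd, char *app_buf, size_t len)
{
    return recv(sock_fd, app_buf, len, 0);   /* kernel-to-user copy */
}
```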
18. With RDMA and OS Bypass
[Diagram: the HCA places received data directly into the application buffer in system memory.] The data traverses the bus only once, saving CPU and memory cycles.
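As an illustration of what OS bypass looks like at the API level, here is a hedged sketch using the Linux verbs API (libibverbs) as it exists today; the slides describe Oracle's skgxp/uDAPL path, not this exact code. Registering the application buffer gives the HCA the key it needs to DMA data straight into it, and the posted receive points at that buffer directly, so no kernel copy or per-packet system call is involved. Queue pair creation and connection setup are omitted for brevity.

```c
/* Hedged sketch of RDMA-style receive with libibverbs (illustrative
 * only; not Oracle's skgxp code). QP and connection setup omitted. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stddef.h>

/* Pin and register the application buffer; the returned memory region
 * carries the lkey the HCA uses to DMA into it directly. */
struct ibv_mr *register_app_buffer(struct ibv_pd *pd, void *buf, size_t len)
{
    return ibv_reg_mr(pd, buf, len,
                      IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
}

/* Post a receive that points at the registered application buffer;
 * arriving data is written there by the HCA, bypassing the OS buffer. */
int post_direct_recv(struct ibv_qp *qp, struct ibv_mr *mr)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)mr->addr,
        .length = (uint32_t)mr->length,
        .lkey   = mr->lkey,
    };
    struct ibv_recv_wr wr = { .wr_id = 1, .sg_list = &sge, .num_sge = 1 };
    struct ibv_recv_wr *bad_wr = NULL;
    return ibv_post_recv(qp, &wr, &bad_wr);
}
```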
21. InfiniBand Cluster Performance Benefits
[Chart: network-level cluster performance for Oracle RAC, block transfers/sec (16 KB blocks).] InfiniBand delivers 2-3x higher block transfers/sec compared to GigE. Source: Oracle Corporation and Topspin, on dual Xeon processor nodes.
22. InfiniBand Application to Database Performance Benefits
[Chart: throughput and CPU utilization, in percent.] InfiniBand delivers 30-40% lower CPU utilization and 100% higher throughput compared to Gigabit Ethernet. Source: Oracle Corporation and Topspin.
23. Broad Scope of InfiniBand Benefits
- Application servers / network: OracleNet over SDP over IB; Ethernet gateway to the LAN
- Oracle RAC: intra-RAC IPC over uDAPL over IB
- Shared storage: FC gateway (host/LUN mapping) to SAN; DAFS over IB to NAS
Reported gains: 20% improvement in throughput; 2x improvement in throughput with 45% less CPU; 3-4x improvement in block updates/sec; 30% improvement in DB performance.
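To make the "OracleNet over SDP" path concrete: SDP preserves stream-socket semantics, so on Linux it could be selected simply by using a different address family (or transparently via a preload library), leaving application logic untouched. A hedged sketch follows, assuming OFED's AF_INET_SDP value of 27; Oracle Net itself is typically pointed at SDP through listener/tnsnames configuration rather than code changes.

```c
/* Hedged sketch: opening an SDP stream socket on Linux. AF_INET_SDP = 27
 * follows the OFED convention and is an assumption here; the resulting
 * socket behaves like TCP but runs over InfiniBand, bypassing kernel
 * TCP processing. */
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef AF_INET_SDP
#define AF_INET_SDP 27   /* assumption: OFED's SDP address family */
#endif

int open_sdp_socket(void)
{
    return socket(AF_INET_SDP, SOCK_STREAM, 0);
}
```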
24. uDAPL Optimization Timeline
[Stack layers shown: Workload, Cache Fusion, skgxp, uDAPL, CM/LM, IB HW/FW, Database.]
- Sept 2002: uDAPL functional with 6 Gb/s throughput
- Dec 2002: Oracle interconnect performance released, showing improvements in bandwidth (3x), latency (10x), and CPU reduction (3x)
- Jan 2003: added Topspin CM for improved scaling of the number of connections and reduced setup times
- Feb 2003: cache block updates show a fourfold performance improvement in a 4-node RAC
- April-August 2003: gathering OAST and industry-standard workload performance metrics; fine tuning and optimization at the skgxp, uDAPL, and IB layers
31. InfiniBand Benefits by Stress Area
Stress level varies over time with each query; InfiniBand provides substantial benefits in all three areas.
- Cluster network: extremely low latency, 10 Gig throughput
- Compute: CPU and kernel offload removes TCP overhead, freeing CPU cycles
- Server I/O: single converged 10 Gig network for cluster, storage, and LAN; central I/O scalability