Ceph Day Chicago – August 16, 2015
Deploying Flash Storage For Ceph
Rob Davis – robd@mellanox.com
© 2015 Mellanox Technologies 2
End-to-End Networking Solutions
Storage
Front / Back-End
Server / Compute Switch / Gateway
56-100G InfiniBand
10-100GbE
ICs, Adapter Cards, Switches/Gateways, Cables/Modules, Host/Fabric Software
Comprehensive End-to-End InfiniBand and Ethernet Portfolio
Metro / WAN
56-100G InfiniBand
10-100GbE
© 2015 Mellanox Technologies 3
Embedded Inside Storage Platforms Across the Industry
SMB Direct
© 2015 Mellanox Technologies 4
Disk Drive Shipments Continue to Grow
© 2015 Mellanox Technologies 5
Solid State Disk Growth
[Chart: projected SSD unit growth over time, with a marker at today]
© 2015 Mellanox Technologies 6
HDD vs. SSD cost projections
Source: Wikibon 2014, from numerous sources
© 2015 Mellanox Technologies 7
The Performance Difference is Dramatic
[Chart: access time in microseconds (log scale) by storage media technology: Hard Drives, NAND Flash, Next-Gen NVM]
© 2015 Mellanox Technologies 8
Four Year Life Cycle TCO – HDD vs. SSD
Source: Wikibon 2014, from numerous sources
© 2015 Mellanox Technologies 9
Ceph and Network Traffic
APIs enable access to object, block, and file data. Fully distributed, scale-out architecture.
Source: http://ceph.com/docs/master/rados/configuration/network-config-ref/
© 2015 Mellanox Technologies 10
Cluster Network Traffic – Sunny Day Scenarios
Write Operation: the client writes to the primary OSD, which replicates the data to the other N-1 OSDs over the cluster network. For client traffic x this is (N-1)∙x, typically 2x extra cluster network traffic.
READ Operation: the client reads directly from the primary OSD over the public network, so there is no extra cluster network traffic.
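A quick worked example of the replication formula above (illustrative numbers, not figures from the deck):

    # Illustrative sketch: cluster-network traffic generated by replicated writes.
    # With replication factor N, the primary OSD forwards N-1 copies over the
    # cluster network, so backend traffic is (N-1) times the client write traffic x.
    def replication_cluster_traffic(client_write_gbps, replicas):
        return (replicas - 1) * client_write_gbps

    # 10 Gb/s of client writes with the common 3x replication produces
    # roughly 20 Gb/s of extra traffic on the cluster network.
    print(replication_cluster_traffic(10, 3))  # -> 20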
© 2015 Mellanox Technologies 11
Flash Can Move the Bottleneck from the Disk to the Network
Network I/O should match Disk I/O
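As a rough sizing sketch of that statement (the drive count, per-drive throughput, and NIC speed below are assumed example values, not figures from the slide):

    # Illustrative sketch: compare a node's aggregate flash bandwidth to its NIC.
    ssd_count = 10
    ssd_read_gbps = 4        # ~500 MB/s per SATA SSD, expressed in Gb/s (assumed)
    nic_gbps = 40            # a single 40GbE port

    disk_io_gbps = ssd_count * ssd_read_gbps
    print(disk_io_gbps)      # 40 Gb/s of potential flash reads
    # A 10GbE link would cap this node at a quarter of its flash bandwidth;
    # 40GbE (or bonded links) keeps network I/O matched to disk I/O.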
© 2015 Mellanox Technologies 12
Where best to plug in?
© 2015 Mellanox Technologies 13
Classic Network Architecture
© 2015 Mellanox Technologies 14
New Leaf-Spine Architecture
Greater Cross-sectional Bandwidth
© 2015 Mellanox Technologies 15
Cluster Network Traffic – Recovery (Replication)
Recovery backend traffic is ~1x the lost data
Example - Time to recover
 Network-only time to move the data
 10GbE
• 20TB system @10GbE: 4.4 hrs
• 200TB system @10GbE: 44.4 hrs
 40GbE
• 20TB system @40GbE: 1.1 hrs
• 200TB system @40GbE: 11.1 hrs
 100GbE
• 20TB system @100GbE: 0.4 hrs
• 200TB system @100GbE: 4.44 hrs
[Diagram: Client and OSDs exchanging Read / Read Reply traffic during recovery over the cluster network]
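The recovery times listed above are simply data volume divided by link speed; a small sketch that reproduces the arithmetic (wire-speed only, ignoring protocol overhead and disk limits):

    # Illustrative sketch: network-only time to re-replicate lost data.
    def recovery_hours(lost_tb, link_gbps):
        bits = lost_tb * 1e12 * 8            # terabytes -> bits
        seconds = bits / (link_gbps * 1e9)   # wire-speed transfer time
        return seconds / 3600

    print(round(recovery_hours(20, 10), 1))    # ~4.4 hrs for 20TB at 10GbE
    print(round(recovery_hours(200, 40), 1))   # ~11.1 hrs for 200TB at 40GbE
    print(round(recovery_hours(200, 100), 2))  # ~4.44 hrs for 200TB at 100GbE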
© 2015 Mellanox Technologies 16
The Effect of Replication vs. Erasure Coding
© 2015 Mellanox Technologies 17
Cluster Network Traffic – Sunny Day Scenarios (Erasure Coding)
READ Operation: the primary OSD reads shards from the other K data OSDs and decodes before replying to the client: ((k-1)/k)∙x, roughly ~1x cluster network traffic.
Write Operation: the primary OSD encodes the object into K data and M coding shards and writes them to the other K+M-1 OSDs over the cluster network: ((k+m-1)/k)∙x, typically ~1.4x cluster network traffic.
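A quick numeric check of the erasure-coding formulas above (the k and m values are chosen only as an example):

    # Illustrative sketch: extra cluster-network traffic with k+m erasure coding,
    # relative to client traffic x, using the formulas from the slide.
    def ec_read_traffic(k, x=1.0):
        return (k - 1) / k * x           # primary already holds one shard locally

    def ec_write_traffic(k, m, x=1.0):
        return (k + m - 1) / k * x       # primary encodes, then ships k+m-1 shards

    print(ec_read_traffic(5))       # 0.8x, i.e. roughly ~1x
    print(ec_write_traffic(5, 3))   # 1.4x, matching the "typically ~1.4x" figure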
© 2015 Mellanox Technologies 18
Ceph with Flash is Best with a Separate Cluster Network
 High performance networks enable maximum cluster availability
• Clients, OSD, Monitors and Metadata servers communicate over multiple network layers
• Real-time requirements for heartbeat, replication, recovery and re-balancing
• Flash only adds to the network performance requirement
 Cluster (“backend”) network performance dictates cluster’s performance and scalability
• “Network load between Ceph OSD Daemons easily dwarfs the network load between Ceph Clients
and the Ceph Storage Cluster” (Ceph Documentation)
© 2015 Mellanox Technologies 19
Ceph Flash Deployment Using Cluster Network
 Cluster (Private) Network @ 40GbE
• Smooth HA, unblocked heartbeats, efficient data balancing
 Clients @ 1, 10 or 40GbE
• 40 GbE guarantees line rate for high ingress/egress clients
 IOPs and Throughput to Clients @ 1, 10, or 40GbE
2.5x Higher Throughput, 15% Higher IOPs with 40Gb Ethernet vs. 10GbE
(http://www.mellanox.com/related-docs/whitepapers/WP_Deploying_Ceph_over_High_Performance_Networks.pdf)
Cluster Network (40GbE): Admin Node and Ceph Nodes (Monitors, OSDs, MDS)
Public Network (10GbE/40GbE): Client Nodes to Ceph Nodes
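A minimal ceph.conf sketch of the public/cluster split shown above, following the Ceph network configuration reference cited earlier; the subnets are placeholders:

    [global]
    # Client-facing (public) network, e.g. 10GbE/40GbE
    public network  = 192.168.10.0/24
    # Separate high-speed cluster ("backend") network carrying replication,
    # recovery, and heartbeat traffic, e.g. 40GbE
    cluster network = 192.168.40.0/24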
© 2015 Mellanox Technologies 20
Reference Architecture for Ceph and Flash
 Red Hat, Supermicro, Seagate, Mellanox, Intel
 Extensive Ceph Performance Testing: Disk, Flash, Network, CPU
 Reference Architecture Published
40GbE Network
Setup
 Key Test Results
• 2 SSDs as fast as many disks; more flash is faster
 40GbE Advantages
• Up to 2x read throughput per server
• Up to 50% decrease in latency
http://www.redhat.com/en/resources/red-hat-ceph-storage-clusters-supermicro-storage-servers
© 2015 Mellanox Technologies 21
Ceph Flash Optimized Solution
Highlights Compared to Stock Ceph
• Read performance up to 8x better
• Write performance up to 2x better with tuning
Optimizations
• All-flash storage for OSDs
• Enhanced parallelism and lock optimization
• Optimization for reads from flash
• Improvements to Ceph messenger
Tested Configuration
• InfiniFlash Storage with IFOS 1.0 EAP3
• Up to 4 RBDs
• 2 Ceph OSD nodes, connected to InfiniFlash
• 40GbE NICs from Mellanox
SanDisk InfiniFlash
© 2015 Mellanox Technologies 22
SanDisk InfiniFlash, Maximizing Ceph Random Read IOPS
[Charts: 8KB random reads at QD=16 across 25%/50%/75%/100% read mixes; Random Read IOPS and Random Read Latency (ms), Stock Ceph vs. IF-500]
http://www.sandisk.com/about-sandisk/press-room/press-releases/2015/sandisk-creates-new-storage-category-with-infiniflash%E2%84%A2-all-flash-storage-system/
© 2015 Mellanox Technologies 23
Ceph/Flash Media & Entertainment Solution
 Turnkey Object Storage
• Built on Ceph
• Pre-configured for rapid deployment
• Mellanox 10/40GbE networking
 High-Capacity Configuration
• SSDs and 6-8TB Helium-filled drives
• Up to 2PB in 18U
 High-Performance
• Single client read 2.2 GB/s
• SSD caching + Hard Drives
 www.storagefoundry.net
© 2015 Mellanox Technologies 24
Flash Performance Creates Bottleneck in Network Software
[Charts: share of networked storage access time consumed by the storage media vs. the network and storage protocol (SW) for HDD and SSD; access time in microseconds for HD (FC, TCP), SSD (RDMA), and NVM (RDMA+)]
The Network and the Protocol MUST get faster.
© 2015 Mellanox Technologies 25
NVM Express (NVMe) – Protocol Designed for Flash
 NVMe: Optimized for flash and next-generation non-volatile memory
• Traditional SCSI interfaces were designed for spinning disk
• NVMe bypasses unneeded layers
 NVMe Flash Outperforms SAS/SATA Flash
• 2x-2.5x more bandwidth, 40-50% lower latency, up to 3x more IOPS
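For context, a fio job of the kind commonly used to compare NVMe against SAS/SATA flash on random reads; the device path, block size, and queue depth are illustrative assumptions:

    [nvme-randread]
    filename=/dev/nvme0n1    ; device under test (assumed path)
    ioengine=libaio
    direct=1                 ; bypass the page cache
    rw=randread
    bs=4k
    iodepth=32
    numjobs=4
    runtime=60
    time_based=1
    group_reporting=1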
© 2015 Mellanox Technologies 26
RDMA Enables More Efficient Networking for Ceph
Higher Bandwidth
Lower Latency
More CPU Power For
Applications
© 2015 Mellanox Technologies 27
Adding RDMA to Ceph
 RDMA Beta Included in Hammer
• Open Source: Mellanox, Red Hat, CohortFS, and Community collaboration
• Full RDMA expected in Infernalis
• New RDMA messenger layer called XioMessenger, built on Accelio (RDMA abstraction layer)
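A hedged sketch of how the experimental XioMessenger was typically switched on in those builds; the option name is an assumption from the prototype branches, not a supported setting:

    [global]
    # Assumption: selects the Accelio/XioMessenger transport in the
    # experimental RDMA builds (the stock default is the simple messenger).
    ms_type = xio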
© 2015 Mellanox Technologies 28
Accelio, High-Performance Reliable Messaging Library
 Open source
• https://github.com/accelio/accelio/ and www.accelio.org
 Faster RDMA integration into applications
 Asynchronous
 Maximizes message and CPU parallelism
• Enables >10GB/s from a single node
• Enables <10usec latency under load
 In Giant and Hammer
• http://wiki.ceph.com/Planning/Blueprints/Giant/Accelio_RDMA_Messenger
© 2015 Mellanox Technologies 29
RDMA Enables Efficient Data Movement for Windows SDS
 Without RDMA
• 5.7 GB/s throughput
• 20-26% CPU utilization
• 4 cores 100% consumed by moving data
 With Hardware RDMA
• 11.1 GB/s throughput at half the latency
• 13-14% CPU utilization
• More CPU power for applications, better ROI
[Chart: 100GbE with CPU onload vs. 100GbE with network offload (RDMA)]
CPU Onload Penalties
• Half the Throughput
• Twice the Latency
• Higher CPU Consumption
Network Offload: 2X Better Bandwidth, Half the Latency, 33% Lower CPU
See the demo: https://www.youtube.com/watch?v=u8ZYhUjSUoI
© 2015 Mellanox Technologies 30
Conclusions
 Flash storage is mainstream and approaching price parity with hard disks
 The performance difference between flash and hard disks is dramatic, and with the right network, Ceph can easily leverage it
 It's important to balance storage and network performance in OSDs
 A separate high-performance “Cluster” network is usually needed to realize the full performance advantage of flash in Ceph systems
 RDMA can further optimize Ceph flash performance
Thank You