QCT Ceph Solution – Design Consideration and Reference Architecture
Gary Lee
AVP, QCT
AGENDA
• Industry Trend and Customer Needs
• Ceph Architecture
• Technology
• Ceph Reference Architecture and QCT Solution
• Test Result
• QCT/Red Hat Ceph Whitepaper
Industry Trend and Customer Needs
Industry Trend
• Structured Data -> Unstructured/Structured Data
• Data -> Big Data, Fast Data
• Data Processing -> Data Modeling -> Data Science
• IT -> DT
• Monolithic -> Microservice
Customer Needs
• Scalable Size
• Variable Type
• Longevity Time
• Distributed Location
• Versatile Workload
• Affordable Price
• Available Service
• Continuous Innovation
• Consistent Management
• Neutral Vendor
Ceph Architecture
Ceph Storage Cluster
[Diagram: multiple storage nodes, each running Ceph on Linux with CPU, memory, SSD, HDD, and NIC, joined by a cluster network and serving object, block, and file access]
• Unified Storage: object, block, and file
• Scale-out Cluster
• Open Source Software
• Open Commodity Hardware
Ceph Software Architecture
End-to-end Data Path
[Diagram: an app or service issues block I/O via RBD, object I/O via RADOSGW, or file I/O via CephFS through the Ceph client over the public network; requests cross the RADOS/cluster network to the OSDs, then pass through file-system I/O down to disk I/O]
Ceph Hardware Architecture
[Diagram: clients and Ceph monitors on a public network (ex. 10GbE or 40GbE); Nx Ceph OSD nodes (RCT or RCC) attached to both the public network and a dedicated cluster network (ex. 10GbE or 40GbE)]
Technology
QCT Ceph Storage Server D51PH-1ULH
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 12x 3.5” SAS/SATA HDD
• 4x SATA SSD + PCIe M.2
• 1x SATADOM
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
QCT Ceph Storage Server T21P-4U
• Mono/Dual Node
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 78x (mono) or 2x 35x (dual) SSD/HDD
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 SAS Controller
• 1x PCIe x8 HHHL Card
• 1x PCIe x16 FHHL Card
• 4U
QCT Ceph Storage Server SD1Q-1ULH
• 1x Intel Xeon D SoC CPU
• 4x DDR4 Memory
• 12x SAS/SATA HDD
• 4x SATA SSD
• 2x SATA SSD for OS
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
CPU/Memory
• Standalone, without EC
• Standalone, with EC
• Hyper-converged, without EC
• High Core vs. High Frequency
• 1x OSD ~ (0.3-0.5)x Core + 2GB RAM (see the sizing sketch below)
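As a quick illustration of that rule of thumb, here is a minimal sizing sketch in Python (the 0.3-0.5 core and 2GB-per-OSD figures come from the bullet above; the 12-OSD example mirrors the D51PH-1ULH node):

```python
def osd_node_sizing(num_osds, cores_per_osd=0.5, ram_gb_per_osd=2):
    """Estimate CPU cores and RAM consumed by Ceph OSD daemons on one node,
    using the rule of thumb: (0.3-0.5)x core + ~2GB RAM per OSD."""
    return num_osds * cores_per_osd, num_osds * ram_gb_per_osd

# A 12-HDD 1U node such as the D51PH-1ULH:
cores, ram_gb = osd_node_sizing(12)
print(f"~{cores:g} cores and ~{ram_gb} GB RAM reserved for 12 OSDs")
```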
SSD/NVMe
• SSD roles:
– Journal
– Tier
– File System Cache
– Client Cache
• Journal ratios (HDDs per journal device; see the sketch below):
– HDD : SSD (SATA/SAS) = 4~5 : 1
– HDD : NVMe = 12~18 : 1
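Applying those ratios, a small sketch for counting journal devices (assuming one OSD per HDD and journals shared at the stated ratios; the results line up with the RCT-200 and RCT-400 configurations shown later):

```python
import math

def journals_needed(num_hdds, hdds_per_journal):
    """Journal devices required for a node at a given HDD-per-journal ratio."""
    return math.ceil(num_hdds / hdds_per_journal)

print(journals_needed(12, 4))   # 12 HDDs with SATA/SAS SSD journals -> 3 SSDs
print(journals_needed(35, 18))  # 35 HDDs with NVMe journals -> 2 NVMe devices
```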
NIC
10G/40G -> 25G/100G
• 2x NVMe ~ 40Gb
• 4x NVMe ~ 100Gb
• 2x SATA SSD ~ 10Gb
• 1x SAS SSD ~ 10Gb
• (20~25)x HDD ~ 10Gb
• ~100x HDD ~ 40Gb
Each line pairs a device population with the NIC speed it can roughly saturate (see the sketch below).
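One way to read these pairings is as aggregate device throughput matched against NIC speed; a rough sketch (the ~50 MB/s sustained per-HDD figure is an illustrative assumption, not from the slide):

```python
def aggregate_gbps(per_device_gbps, count):
    """Aggregate device throughput in Gb/s, to compare against a NIC speed."""
    return per_device_gbps * count

HDD_GBPS = 0.4  # assumed ~50 MB/s sustained per HDD
print(aggregate_gbps(HDD_GBPS, 25))   # ~10 Gb/s -> '(20~25)x HDD ~10Gb'
print(aggregate_gbps(HDD_GBPS, 100))  # ~40 Gb/s -> '~100x HDD ~40Gb'
```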
NIC I/O Offloading
• CPU Offload through RDMA/iWARP
• Erasure Coding Offload
• Allocate computing on different silicon areas
Erasure Coding vs. Replication
• Object Replication
– 1 Primary + 2 Replicas (or more)
– CRUSH Allocation Ruleset
• Erasure Coding
– [k+m], e.g. 4+2, 8+3
– Better Data Efficiency: k/(k+m) vs. 1/(1+replicas) (worked example below)
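A worked comparison of the two efficiency formulas above:

```python
def ec_efficiency(k, m):
    """Usable fraction of raw capacity with k data + m coding chunks."""
    return k / (k + m)

def replica_efficiency(num_replicas):
    """Usable fraction with 1 primary + num_replicas additional copies."""
    return 1 / (1 + num_replicas)

print(f"EC 4+2:         {ec_efficiency(4, 2):.0%} usable")   # 67%
print(f"EC 8+3:         {ec_efficiency(8, 3):.0%} usable")   # 73%
print(f"3x replication: {replica_efficiency(2):.0%} usable") # 33%
```

So a 4+2 pool stores twice as much usable data as 3x replication on the same raw capacity, at the cost of extra CPU for coding and longer recovery paths.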
Workload and Configuration
[Table: configuration priorities by workload profile and cluster size (Small / Medium / Large)]
• Throughput: transfer bandwidth; sequential R/W
• Capacity: cost/capacity; scalability
• IOPS: IOPS per 4K block; random R/W; hyper-converged (?); desktop virtualization
• Latency: random R/W; Hadoop (?)
Red Hat Ceph
Vendor-specific Value-added Software
• Intel ISA-L
• Intel SPDK
• Intel CAS
• Mellanox Accelio Library
Ceph Reference Architecture and QCT Solution
Design Principle
• Trade-off among Technologies
• Scalable in Architecture
• Optimized for Workload
• Affordable as Expected
Design Considerations
1. Needs for scale-out storage
2. Target workload
3. Access method
4. Storage capacity
5. Data protection methods
6. Fault domain risk tolerance
Storage Workload
[Diagram: workloads mapped on IOPS vs. MB/sec axes; transaction (OLTP) and DB toward IOPS; data warehouse (OLAP), big data, scientific (HPC), block transfer, and audio/video streaming toward MB/sec]
QCT QxStor Red Hat Ceph Storage Edition Portfolio
Workload-driven Integrated Software/Hardware Solution

Throughput optimized:
• SMALL (500TB*): QxStor RCT-200, 16x D51PH-1ULH (16U)
– 12x 8TB HDDs, 3x SSDs, 1x dual-port 10GbE per node
– 3x replica
• MEDIUM (>1PB*): QxStor RCT-400, 6x T21P-4U/Dual (24U)
– 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE per node
– 3x replica
• LARGE (>2PB*): QxStor RCT-400, 11x T21P-4U/Dual (44U)
– 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE per node
– 3x replica
Cost/Capacity optimized:
• QxStor RCC-400, Nx T21P-4U/Dual
– 2x 35x 8TB HDDs, 0x SSDs, 2x dual-port 10GbE per node
– Erasure Coding 4:2
IOPS optimized: future direction (Small/Medium); NA (Large)
* Usable storage capacity
QCT QxStor Red Hat Ceph Storage Edition
Co-engineered with the Red Hat Storage team to provide an optimized Ceph solution

Throughput-Optimized (RCT-200, RCT-400):
• RCT-200: densest 1U Ceph building block; best reliability with smaller failure domain
• RCT-400: obtains best throughput and density at once; scales at high scale (2x 280TB)
• Use case: block or object storage; 3x replication; video, audio, and image repositories; streaming media
Cost/Capacity-Optimized (RCC-400):
• Highest density: 560TB raw capacity per chassis with greatest price/performance
• Use case: typically object storage; erasure coding common for maximizing usable capacity; object archive
Ceph Solution Deployment
Using QCT QPT Bare Metal Provision Tool
[Screenshots of the deployment workflow]
QCT Solution Value Proposition
• Workload-driven
• Hardware/software pre-validated, pre-optimized and pre-integrated
• Up and running in minutes
• Balance between production (stable) and innovation (upstream)
Test Result
Testing Configuration (Throughput-Optimized)
[Diagram: 10 client nodes (S2B) and 5 Ceph nodes (S2PH) on separate 10Gb public and cluster networks]
General Configuration
• 5 Ceph nodes (S2PH), each with 2x 10Gb links
• 10 client nodes (S2B), each with 2x 10Gb links
• Public network: balanced bandwidth between client nodes and Ceph nodes
• Cluster network: offloads traffic from the public network to improve performance
Option 1 (w/o SSD)
a. 12 OSDs per Ceph storage node
b. S2PH with 2x E5-2660
c. RAM: 128 GB
Option 2 (w/ SSD)
a. 12 OSDs / 3 SSDs per Ceph storage node
b. S2PH with 2x E5-2660
c. RAM: 12 (OSDs) x 2GB = 24 GB
Testing Configuration (Capacity-Optimized)
[Diagram: 8 client nodes (S2S) on 10Gb links and 2 Ceph nodes (S2P) on 40Gb links, sharing a public network]
General Configuration
• 2 Ceph nodes (S2P), each with 2x 10Gb links
• 8 client nodes (S2S), each with 2x 10Gb links
• Public network: balanced bandwidth between client nodes and Ceph nodes
• Cluster network: offloads traffic from the public network to improve performance
Option 1 (w/o SSD)
a. 35 OSDs per Ceph storage node
b. S2P with 2x E5-2660
c. RAM: 128 GB
Option 2 (w/ SSD)
a. 35 OSDs / 2 PCIe SSDs per Ceph storage node
b. S2P with 2x E5-2660
c. RAM: 128 GB
CBT (Ceph Benchmarking Tool)
Level - Component - Test Suite:
• Raw I/O - Disk - FIO
• Network I/O - Network - iperf
• Object API I/O - librados - radosbench (see the sketch below)
• Object I/O - RGW - COSBench
• Block I/O - RBD - librbdfio
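At the librados level, a minimal sketch of driving object I/O directly through the Python rados bindings (the pool name 'testpool' and the config path are assumptions; CBT's radosbench exercises the same API at scale):

```python
import rados

# Connect using the standard config file and default admin credentials.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ioctx = cluster.open_ioctx('testpool')  # 'testpool' must already exist
try:
    ioctx.write_full('bench-obj-0', b'x' * 4096)  # write one 4KB object
    data = ioctx.read('bench-obj-0')              # read it back
    assert len(data) == 4096
finally:
    ioctx.close()
    cluster.shutdown()
```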
[Result charts: Linear Scale-Out; Linear Scale-Up; Price, in terms of Performance; Price, in terms of Capacity; Protection Scheme; Cluster Network]
QCT/Red Hat Ceph Whitepaper
QCT/Red Hat Ceph Solution Brief:
http://www.qct.io/account/download/download?order_download_id=1022&dtype=Reference%20Architecture
QCT/Red Hat Ceph Reference Architecture:
https://www.redhat.com/en/files/resources/st-performance-sizing-guide-ceph-qct-inc0347490.pdf
http://www.qct.io/Solution/Software-Defined-Infrastructure/Storage-Virtualization/QCT-and-Red-Hat-Ceph-Storage-p365c225c226c230
QCT to Offer TryCeph (Test Drive) Later
• The Red Hat Ceph Storage Test Drive lab in the QCT Solution Center provides a free hands-on experience. You'll be able to explore the features and simplicity of the product in real time.
• Concepts: Ceph feature and functional test
• Lab Exercises: Ceph Basics; Ceph Management - Calamari/CLI; Ceph Object/Block Access
QCT to Offer TryCeph (Test Drive) Later
Remote access to QCT cloud solution centers
• Easy to test, anytime and anywhere
• No facilities or logistics needed
• Configurations: RCT-200 and the newest QCT solutions
CONCLUSION
• Ceph is an open architecture
• QCT, Red Hat and Intel collaborate to provide a
– workload-driven,
– pre-integrated,
– comprehensively tested and
– well-optimized solution
• Red Hat – Open Software/Support Pioneer
  Intel – Open Silicon/Technology Innovator
  QCT – Open System/Solution Provider
• Together We Provide the Best
Thank you!
www.QuantaQCT.com
www.QCT.io
Looking for an innovative cloud solution? Come to QCT, who else?
Editor's Notes
1. Here are three SKUs, based on small, medium, and large scale. For larger scale, we suggest customers adopt RCT-400 or RCC-400. QCT is planning an SKU optimized for IOPS-intensive workloads, to launch in 2016 H2.