QCT Ceph Solution - Design Consideration and Reference Architecture
2. QCT Ceph Solution – Design Consideration and Reference Architecture
Gary Lee
AVP, QCT
2. • Industry Trend and Customer Needs
• Ceph Architecture
• Technology
• Ceph Reference Architecture and QCT Solution
• Test Result
• QCT/Red Hat Ceph Whitepaper
AGENDA
5. • Structured Data -> Unstructured/Structured Data
• Data -> Big Data, Fast Data
• Data Processing -> Data Modeling -> Data Science
• IT -> DT
• Monolithic -> Microservice
Industry Trend
6. • Scalable Size
• Variable Type
• Long Lifetime
• Distributed Location
• Versatile Workload
• Affordable Price
• Available Service
• Continuous Innovation
• Consistent Management
• Neutral Vendor
Customer Needs
8. Ceph Storage Cluster
[Architecture diagram] Multiple identical nodes, each running Ceph on Linux over commodity CPU, memory, SSD, HDD, and NIC, joined by a Cluster Network and exposing Object, Block, and File interfaces.
Unified Storage | Scale-out Cluster | Open Source Software | Open Commodity Hardware
13. • 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 12x 3.5” SAS/SATA HDD
• 4x SATA SSD + PCIe M.2
• 1x SATADOM
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
QCT Ceph Storage Server
D51PH-1ULH
14. • Mono/Dual Node
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 78x (mono) or 2x 35x (dual) SSD/HDD
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 SAS Controller
• 1x PCIe x8 HHLH Card
• 1x PCIe x16 FHHL Card
• 4U
QCT Ceph Storage Server
T21P-4U
15. • 1x Intel Xeon D SoC CPU
• 4x DDR4 Memory
• 12x SAS/SATA HDD
• 4x SATA SSD
• 2x SATA SSD for OS
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
QCT Ceph Storage Server
SD1Q-1ULH
16. • Standalone, without EC
• Standalone, with EC
• Hyper-converged, without EC
• High Core vs. High Frequency
• 1x OSD ~ (0.3-0.5)x Core + 2G RAM
CPU/Memory
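The sizing rule on this slide (one OSD needs roughly 0.3-0.5 CPU cores and 2 GB of RAM) can be turned into a quick back-of-the-envelope calculator. This is a minimal sketch of that rule only; the function name and defaults are illustrative, not part of any Ceph tooling:

```python
import math

def osd_host_requirements(num_osds, cores_per_osd=0.5, ram_gb_per_osd=2):
    """Rough per-host needs for `num_osds` OSD daemons, per the
    slide's rule of thumb: (0.3-0.5)x core + 2 GB RAM per OSD."""
    cores = math.ceil(num_osds * cores_per_osd)
    ram_gb = num_osds * ram_gb_per_osd
    return cores, ram_gb

# A 12-HDD 1U node such as the D51PH-1ULH typically runs 12 OSDs:
print(osd_host_requirements(12))                     # (6, 24)
print(osd_host_requirements(12, cores_per_osd=0.3))  # (4, 24)
```

With the conservative 0.5 cores/OSD figure, a 12-OSD node needs about 6 cores and 24 GB of RAM, comfortably within a dual E5-2600 configuration.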
19. • CPU Offload through RDMA/iWARP
• Erasure Coding Offload
• Allocate computing on different silicon areas
NIC
I/O Offloading
20. • Object Replication
– 1 Primary + 2 Replica (or more)
– CRUSH Allocation Ruleset
• Erasure Coding
– [k+m], e.g. 4+2, 8+3
– Better Data Efficiency
• k/(k+m) vs. 1/(1+replication)
Erasure Coding vs. Replication
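The data-efficiency comparison above can be checked numerically. A small sketch (note that the slide's "1/(1+replication)" counts extra copies, so 1 primary + 2 replicas gives 1/3 usable):

```python
def ec_efficiency(k, m):
    """Usable fraction of raw capacity for a k+m erasure-coded pool."""
    return k / (k + m)

def replica_efficiency(total_copies):
    """Usable fraction for replication: 1 primary + (total_copies - 1) replicas."""
    return 1 / total_copies

print(f"EC 4+2:     {ec_efficiency(4, 2):.2f}")    # 0.67
print(f"EC 8+3:     {ec_efficiency(8, 3):.2f}")    # 0.73
print(f"3x replica: {replica_efficiency(3):.2f}")  # 0.33
```

So a 4+2 erasure-coded pool yields roughly twice the usable capacity of 3x replication on the same raw hardware, which is why the cost/capacity-optimized configuration below uses erasure coding.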
21. Workload dimensions across cluster sizes (Small / Medium / Large):
• Throughput: Transfer Bandwidth, Sequential R/W
• Capacity: Cost/Capacity, Scalability
• IOPS: IOPS per 4K Block, Random R/W (e.g. Hyper-converged?, Desktop Virtualization)
• Latency: Random R/W (e.g. Hadoop?)
Workload and Configuration
28. Throughput optimized:
• SMALL (500TB*): QxStor RCT-200, 16x D51PH-1ULH (16U)
  – 12x 8TB HDDs, 3x SSDs, 1x dual-port 10GbE, 3x replica
• MEDIUM (>1PB*): QxStor RCT-400, 6x T21P-4U/Dual (24U)
  – 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE, 3x replica
• LARGE (>2PB*): QxStor RCT-400, 11x T21P-4U/Dual (44U)
  – 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE, 3x replica
Cost/Capacity optimized:
• QxStor RCC-400, Nx T21P-4U/Dual
  – 2x 35x 8TB HDDs, 0x SSDs, 2x dual-port 10GbE, Erasure Coding 4:2
IOPS optimized: future direction for Small and Medium; NA for Large
* Usable storage capacity
QCT QxStor Red Hat Ceph Storage Edition Portfolio
Workload-driven Integrated Software/Hardware Solution
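The usable-capacity tiers in the portfolio above follow directly from drive counts, 8TB drives, and 3x replication. A quick sanity check (function name is illustrative; a dual-node T21P-4U holds 2x 35 = 70 drives):

```python
def usable_tb(chassis, drives_per_chassis, drive_tb=8, replicas=3):
    """Usable capacity in TB for a replicated Ceph pool:
    raw capacity divided by the replica count."""
    return chassis * drives_per_chassis * drive_tb / replicas

print(usable_tb(16, 12))  # RCT-200 small tier: 512.0 TB (~500TB*)
print(usable_tb(6, 70))   # RCT-400 medium tier: 1120.0 TB (>1PB*)
print(usable_tb(11, 70))  # RCT-400 large tier: ~2053 TB (>2PB*)
```

The computed values line up with the advertised 500TB / >1PB / >2PB usable tiers.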
29. Throughput-Optimized (RCT-200, RCT-400):
• RCT-200: densest 1U Ceph building block; best reliability with smaller failure domain
• RCT-400: scales in large 2x 280TB increments; obtains best throughput and density at once
• Block or object storage, 3x replication
• Use case: video, audio, image repositories, and streaming media
Cost/Capacity-Optimized (RCC-400):
• Highest density, 560TB raw capacity per chassis, with greatest price/performance
• Typically object storage; erasure coding common for maximizing usable capacity
• Use case: object archive
USECASE
QCT QxStor Red Hat Ceph Storage Edition
Co-engineered with Red Hat Storage team to provide Optimized Ceph Solution
32. QCT Solution Value Proposition
• Workload-driven
• Hardware/software pre-validated, pre-optimized and
pre-integrated
• Up and running in minutes
• Balance between production (stable) and innovation
(up-streaming)
46. • The Red Hat Ceph Storage Test Drive lab in the QCT Solution Center
provides a free hands-on experience: explore the features and
simplicity of the product in real time.
• Concepts:
Ceph feature and functional test
• Lab Exercises:
Ceph Basics
Ceph Management - Calamari/CLI
Ceph Object/Block Access
QCT Offers TryCeph (Test Drive) Later
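The lab exercises listed above (Ceph basics, Calamari/CLI management, object and block access) map onto a handful of standard Ceph commands. This is an illustrative session sketch against an already-running cluster; the pool and image names (`testpool`, `testimage`) are hypothetical:

```
# Ceph basics: check cluster health and OSD layout
ceph status
ceph osd tree

# Object access: store and fetch an object with rados
ceph osd pool create testpool 64
rados -p testpool put hello /etc/hostname
rados -p testpool get hello /tmp/hello.out

# Block access: create and map an RBD image
rbd create testpool/testimage --size 1024
rbd map testpool/testimage
```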
47. Remote access
to QCT cloud solution centers
• Easy to test, anytime and anywhere
• No facilities or logistics needed
• Configurations
• RCT-200 and newest QCT solutions
QCT Offers TryCeph (Test Drive) Later
48. • Ceph is Open Architecture
• QCT, Red Hat and Intel collaborate to provide
– Workload-driven,
– Pre-integrated,
– Comprehensively tested, and
– Well-optimized solution
• Red Hat – Open Software/Support Pioneer
Intel – Open Silicon/Technology Innovator
QCT – Open System/Solution Provider
• Together We Provide the Best
CONCLUSION
Here are 3 SKUs based on small/medium/large scale.
For larger scale, we suggest customers adopt the RCT-400 or RCC-400.
QCT is planning a SKU optimized for IOPS-intensive workloads, to launch in 2016 H2.