Enterprise Flash Technology Benchmark Summary
Technology/Consulting/Managed Solutions
IOPS (CX4-120 EFD vs. CX3-80 15K FC)
Response Time (CX4-120 EFD vs. CX3-80 15K FC)
IOPS per Drive (CX4-120 EFD vs. CX3-80 15K FC) Note: FC drive IOPS exceed the theoretical maximum of 180 IOPS due to cache benefit.
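The 180 IOPS ceiling cited for a 15K FC drive can be sanity-checked from first principles. The sketch below uses typical seek and rotational latency figures for a 15K RPM drive; the 3.5 ms average seek time is an assumption for illustration, not a number from the deck.

```python
# Rough theoretical IOPS for a 15K RPM FC drive under fully random I/O.
# The average seek time is an assumed typical value, not from the deck.
avg_seek_ms = 3.5                                # assumed average seek time
rotational_latency_ms = 0.5 * 60_000 / 15_000    # half a revolution = 2.0 ms
service_time_ms = avg_seek_ms + rotational_latency_ms
iops = 1000 / service_time_ms
print(round(iops))  # ~182, close to the ~180 IOPS ceiling cited
```

Any per-drive result above this ceiling implies reads were served from array cache rather than the platters, which is exactly the "cache benefit" the note calls out.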
CX3-80 Jetstress Benchmark: Database Sizing and Throughput, Jetstress System Parameters, Disk Subsystem Performance
CX4-120 Jetstress Benchmark: Database Sizing and Throughput, Jetstress System Parameters, Disk Subsystem Performance
Conclusions
Reduced storage footprint from 24 drives to 14 drives (~42% reduction)
~12x random read performance improvement
~5x random write performance improvement
~8-10x performance improvement with a random 60% read / 40% write workload
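The footprint figure in the conclusions follows directly from the drive counts; a quick check:

```python
# Sanity-check of the drive-count reduction claimed in the conclusions.
drives_before, drives_after = 24, 14
reduction = (drives_before - drives_after) / drives_before
print(f"{reduction:.0%}")  # 42%
```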
