Industry Standard Benchmarks:
Past, Present and Future
Raghunath Nambiar
Distinguished Engineer, Cisco
RNambiar@cisco.com

Invited Talk

1
• Cisco Distinguished Engineer, Chief Architect, Big Data Solutions, Cisco
• General Chair, TPC's International Conference Series on Performance Evaluation and Benchmarking (TPCTC)
• Chairman, TPC Big Data Committee
• Industry Chair, IEEE Big Data 2013, ICPE 2014
• Board Member, TPC, WBDB, BigDataTop100

2
3
• Synthetic Benchmarks
   Simulate functions that yield an indicative measure of the subsystem performance (a minimal timing sketch follows this slide)
   Widely adopted in the industry and academic community
   Several open source tools
• Application Benchmarks
   Developed and administered by application vendors
   VMmark, SAP and Oracle application benchmarks
• Industry Standard Benchmarks
   Driven by industry standard consortia which are represented by vendors, customers, and research organizations
   Democratic procedures for all key decision making
   TPC, SPEC and SPC
4
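To make the "synthetic benchmark" idea concrete, here is a minimal sketch in Python: a tiny kernel that exercises one subsystem (memory copy) and reports an indicative throughput number. The kernel, buffer size, and repetition count are arbitrary choices for illustration and are not taken from any standard benchmark.

```python
import time

def memory_copy_kernel(size_mb: int = 256) -> float:
    """Copy a block of bytes and return the achieved bandwidth in MB/s.

    Illustrative synthetic kernel only: it stresses a single subsystem
    (memory copy) and yields an indicative throughput figure.
    """
    src = bytearray(size_mb * 1024 * 1024)
    start = time.perf_counter()
    dst = bytes(src)                      # forces a full copy of the buffer
    elapsed = time.perf_counter() - start
    assert len(dst) == len(src)
    return size_mb / elapsed

if __name__ == "__main__":
    # Take the best of a few runs, as micro-benchmarks commonly do,
    # to reduce the impact of transient system noise.
    best = max(memory_copy_kernel() for _ in range(5))
    print(f"Indicative copy bandwidth: {best:,.0f} MB/s")
```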
• Industry standard benchmarks have played, and continue to play, a crucial role in the advancement of the computing industry
• Demand for them has existed since buyers were first confronted with the choice between purchasing one system over another
• Historically, we have seen that industry standard benchmarks enable healthy competition that results in product improvements and the evolution of brand-new technologies

Better products, lower price/performance
5
Critical to Vendors, Customers and Researchers
• Vendor
 Demonstrate competitiveness of their products
 Monitor release-to-release progress of their products under development
• Customer
 Cross-vendor evaluation of technologies and products in terms of performance, price-performance, and energy efficiency
• Researcher
 Known, measurable, and repeatable workloads to develop and enhance relevant technologies
6
Major Activities
Benchmark Development Process
• Development of new benchmarks
• Publication of benchmark results
• Refinement of existing benchmarks
• Resolution of disputes and challenges
Source: Raghunath Nambiar, Meikel Poess: The Making of TPC-DS. VLDB 2006: 1049-1058

7
• The TPC is a non-profit, vendor-neutral organization, established in August 1988
• Reputation for providing the most credible performance results to the industry
• Plays the role of "consumer reports" for the computing industry
• Solid foundation for complete system-level performance
• Methodology for calculating total system price and price-performance (a small illustrative calculation follows this slide)
• Methodology for measuring the energy efficiency of a complete system
Source: Raghunath Nambiar, Matthew Lanken, Nicholas Wakou, Forrest Carman, Michael Majdalany: Transaction Processing Performance Council
(TPC): Twenty Years Later - A Look Back, a Look Ahead, First TPC Technology Conference, TPCTC 2009, Lyon, France, ISBN 978-3-642-10423-7
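To illustrate the price-performance methodology in concrete terms, here is a small sketch in Python. All figures are invented for the example; the actual rules for what must be priced (hardware, software, maintenance, allowable discounts) are defined in the TPC Pricing specification.

```python
# Illustrative only: the numbers below are made up, and the real pricing
# rules come from the TPC Pricing specification.
hardware_price = 350_000.00    # servers, storage, networking (USD)
software_price = 120_000.00    # database and OS licenses (USD)
maintenance_3yr = 80_000.00    # multi-year support (USD)
discounts = 50_000.00          # generally available discounts (USD)

total_system_price = hardware_price + software_price + maintenance_3yr - discounts
performance_tpmC = 1_200_000   # reported throughput, transactions per minute

price_performance = total_system_price / performance_tpmC
print(f"Total system price : ${total_system_price:,.2f}")
print(f"Performance        : {performance_tpmC:,} tpmC")
print(f"Price/performance  : ${price_performance:.3f} per tpmC")
```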

8
Benchmark Standards: TPC-A, TPC-B, TPC-C, TPC-D, TPC-R, TPC-H, TPC-W, TPC-App, TPC-E, TPC-DS, TPC-VMS

Common Specifications: Pricing, Energy

Developments in Progress: TPC-DI, TPC-VMC, TPC-V

[Roadmap chart legend: Obsolete, Active, Common Specifications, In Progress]

• Developed 11 benchmark standards
• 5 standards are current

What's new?
• TPC-VMS – new standard for measuring database performance in a virtualized environment
• TPC-DI – standard for measuring data integration performance; expected to be a standard in 2014
• TPC Big Data committee was formed in October 2013

Source: Raghunath Nambiar, Meikel Poess, Andrew Masland, H. Reza Taheri, Matthew Emmerton, Forrest Carman, Michael Majdalany: TPC Benchmark Roadmap 2012, 4th TPC Technology Conference, TPCTC 2012, Istanbul, Turkey, ISBN 978-3-642-36726-7
9
Universities and research organizations are encouraged to join the TPC as Associate
Members.
To join the TPC: http://www.tpc.org/information/about/join.asp

10
• The Standard Performance Evaluation Corporation (SPEC) is a non-profit organization established in 1988
• Develops standards for system-level performance measurement
• History of developing benchmarks relevant to the industry in a timely manner
• Four diverse groups: Graphics and Workstation Performance Group (GWPG), High Performance Group (HPG), Open Systems Group (OSG), and Research Group (RG)
• Represented by system and software vendors and a number of academic and research organizations

11
• SPEC CPU2006 is designed to measure the compute power of systems; it contains two benchmark suites: CINT2006 for measuring and comparing compute-intensive integer performance, and CFP2006 for measuring and comparing compute-intensive floating point performance
• SPEC MPI2007 is designed for evaluating MPI-parallel, floating point, and compute-intensive performance across a wide range of cluster and SMP hardware
• SPECjbb2013 measures server performance based on Java application features by emulating a three-tier client/server system
• SPECjEnterprise2010 measures system performance for Java Enterprise Edition based application servers, databases, and supporting infrastructure
• SPECsfs2008 is designed to evaluate the speed and request-handling capabilities of file servers utilizing the NFSv3 and CIFS protocols
• SPECpower_ssj2008 evaluates the power and performance characteristics of volume server class computers
• SPECvirt_sc2010 measures the end-to-end performance of all system components, including the hardware, virtualization platform, virtualized guest operating system, and application software
12
• The Storage Performance Council (SPC) is a vendor-neutral consortium established in 2000
• Focused on industry standards for storage system performance
• Serves as a catalyst for performance improvement in storage subsystems
• Robust methodology for measuring, auditing, and publishing performance, price-performance, and energy-efficiency metrics for storage systems
• Major systems and storage vendors are members of the SPC
13
• SPC Benchmark 1 (SPC-1) consists of a single workload designed to demonstrate the performance of a storage subsystem under OLTP workloads characterized by random reads and writes
• SPC Benchmark 1/Energy (SPC-1/E) is an extension of SPC-1 that consists of the complete set of SPC-1 performance measurement and reporting plus the measurement and reporting of energy consumption
• SPC Benchmark 2 (SPC-2) consists of three distinct workloads: large file processing, large database queries, and video on demand simulating the concurrent large-scale sequential movement of data
• SPC Benchmark 2/Energy (SPC-2/E) is an extension of SPC-2 that consists of the complete set of SPC-2 performance measurement and reporting plus the measurement and reporting of energy consumption
• SPC Benchmark 1C (SPC-1C) is based on SPC-1 for storage component products such as disk drives, host bus adapters, storage enclosures, and storage software stacks such as volume managers
• SPC Benchmark 1C/Energy (SPC-1C/E) is an extension of SPC-1C that consists of the complete set of SPC-1C performance measurement and reporting plus the measurement and reporting of energy consumption
• SPC Benchmark 2C (SPC-2C) is based on SPC-2, characterized predominantly by large I/Os organized into one or more concurrent sequential patterns, for storage component products
• SPC Benchmark 2C/Energy (SPC-2C/E) is an extension of SPC-2C that consists of the complete set of SPC-2C performance measurement and reporting plus the measurement and reporting of energy consumption

14
TPC-C Performance 1992-2010

[Chart: average tpmC per processor (log scale, 100 to 1,000,000) by publication year, 1993-2010, compared against Moore's Law. Annotated milestones, as labeled on the chart from most recent to earliest:]
• First result using solid state drives (SSD)
• First result using 15K RPM SAS SFF disk drives
• First result using 15K RPM disk drives
• First Linux result
• First multi-core result
• Intel introduces multi-threading
• TPC-C Revision 5 and first x86-64-bit result
• First storage area network (SAN) based result
• First result using 7.2K RPM disk drives
• First clustered result
• TPC-C Revision 3, first Windows result and first x86 result
• TPC-C Revision 2

Source: Nambiar R., Poess M. (2010). Transaction Performance vs. Moore's Law. Performance Evaluation, Measurement and Characterization of Complex Systems. Lecture Notes in Computer Science 6417, Springer 2011, ISBN 978-3-642-18205-1
(A rough sketch of this kind of comparison follows this slide.)
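As a rough illustration of the comparison behind this chart, the sketch below evaluates a Moore's-Law-style doubling curve against a handful of placeholder (year, tpmC-per-processor) points. The data values and the 1.5-year doubling period are assumptions for illustration only; the real series and analysis are in the Nambiar/Poess paper cited above.

```python
# Placeholder (year, average tpmC per processor) points; the published
# series is summarized in the chart and in the cited paper.
observed = {1993: 150, 2000: 5_000, 2005: 40_000, 2010: 400_000}

def moores_law(year, base_year=1993, base_value=150.0, doubling_years=1.5):
    """Reference curve that doubles every `doubling_years` years."""
    return base_value * 2 ** ((year - base_year) / doubling_years)

for year, tpmc in sorted(observed.items()):
    print(f"{year}: observed ~{tpmc:>9,} tpmC/processor, "
          f"Moore's-Law reference ~{moores_law(year):>13,.0f}")
```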

15
TPC-C Price-Performance 1992-2010

[Chart: price per NtpmC (log scale, $0.1 to $10,000) by publication year, 1993-2010, compared against Moore's Law. Annotated milestones, from earliest to most recent:]
• TPC-C Revision 2
• TPC-C Revision 3, first Windows result and first x86 result
• First clustered result
• First result using 7.2K RPM disk drives
• First storage area network (SAN) based result
• TPC-C Revision 5 and first x86-64-bit result
• Intel introduces multi-threading
• First multi-core result
• First Linux result
• First result using 15K RPM disk drives
• First result using 15K RPM SAS SFF disk drives
• First result using solid state drives (SSD)

Source: Nambiar R., Poess M. (2010). Transaction Performance vs. Moore's Law. Performance Evaluation, Measurement and Characterization of Complex Systems. Lecture Notes in Computer Science 6417, Springer 2011, ISBN 978-3-642-18205-1

16
• IT 1.0: 1980-2000
   Transaction Processing, Data Warehousing, File server, Web server, Multi-tier Applications
• IT 2.0: 2000-2010
   Internet centric, Massive scale-out systems, Virtualization, Energy efficient systems
• IT 3.0: 2010-
   Cloud, Big Data, Internet of Things, Software defined and application centric infrastructure

Industry standard committees have done a great job. Call for new standards.

17
18
34.3% of the world's population has internet access today
50% by 2020

19
[Chart: connected devices (billions), growing from 15 billion in 2012 to 50 billion in 2020]

There are 15 billion devices connected to the Internet today, about 2.2 devices for every man, woman, and child on planet Earth.
50 billion devices by 2020; a trillion or more with the Internet of Things.

Source: Cisco, webpronews.com
20
Time’s Man of the Year
1982

Source: time.com

21
If Facebook were a country …
1. China (1.339 billion)
2. India (1.218 billion)
3. Facebook (1 billion)
4. United States (311 million)
5. Indonesia (237 million)
6. Brazil (190 million)
7. Pakistan (175 million)
8. Nigeria (158 million)
9. Bangladesh (150 million)
10. Russia (142 million)
22
The third generation of the IT platform is driven by new applications and services built on cloud, mobile devices, social media, IoT and more.

[Diagram (IDC platform generations): the earliest platform with millions of users and thousands of apps; the LAN/Internet client/server and PC era (1986) with hundreds of millions of users and tens of thousands of apps; and the third platform (2011), built on mobile devices and apps, mobile broadband, big data/analytics, social business, cloud services and the Internet of "Things", with billions of users, millions of apps, and trillions of things, enabling the Intelligent Economy. Examples: recommendation engines, personalized content, crowd-sourcing.]

Source: IDC

23
• 2008: 0.5 Zettabytes
• 2011: 2.5 Zettabytes
• 2020: 35 Zettabytes

1 Zettabyte = 1,099,511,627,776 Gigabytes = 1 billion 1 TB disk drives

How many disk drives were sold in 2012? (A quick conversion sketch follows this slide.)
Source: IDC, EMC
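A quick sanity check of the conversions on this slide, sketched in Python. Note that the gigabyte figure above uses binary prefixes (1 ZB taken as 2^70 bytes), while "one billion 1 TB drives" uses decimal units (1 ZB = 10^21 bytes); the sketch shows both.

```python
# Binary interpretation: 1 ZB = 2**70 bytes, so 2**70 / 2**30 = 2**40 GB.
GB_PER_ZB_BINARY = 2 ** 40      # 1,099,511,627,776 GB, the figure on the slide
# Decimal interpretation: 1 ZB = 10**21 bytes = 10**9 TB, i.e. a billion 1 TB drives.
TB_PER_ZB_DECIMAL = 10 ** 9

digital_universe_2020_zb = 35
print(f"1 ZB = {GB_PER_ZB_BINARY:,} GB (binary prefixes)")
print(f"1 ZB = {TB_PER_ZB_DECIMAL:,} TB, i.e. one billion 1 TB disk drives")
print(f"35 ZB would need about {digital_universe_2020_zb * TB_PER_ZB_DECIMAL:,} such drives")
```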

24
Global IP Traffic

[Chart: global IP traffic reaching 616 EB, with per capita IP traffic and per capita Internet traffic trends]

In 2016, the equivalent of all movies ever made will cross global IP networks every 3 minutes
Source: Cisco

25
26
• Big Data is becoming an integral part of the IT ecosystem across all major verticals
• One of the most talked-about topics in the research and government sectors
• Big Data challenges can be summed up in 5 V's: Volume, Velocity, Variety, Value, Veracity
• Big Data is becoming the center of 3 I's: Investments, Innovation, Improvisation*
* Source: http://blogs.cisco.com/datacenter/ieee-bigdata/

27
Source: Gartner 2011

Source: McKinsey Global Institute Analysis

Source: Cisco
• TeraSort
• YCSB
• GridMix
• HiBench
• TPC-DS (at large scale ?)
• BigBench, BigDataBench

29
State of the Nature - Early 1980's
The industry began a race that has accelerated over time: the automation of daily end-user business transactions. The first application that received widespread focus was automated teller transactions (ATM), but we've seen this automation trend ripple through almost every area of business, from grocery stores to gas stations. As opposed to the batch-computing model that dominated the industry in the 1960's and 1970's, this new online model of computing had relatively unsophisticated clerks and consumers directly conducting simple update transactions against an online database system. Thus, the online transaction processing industry was born, an industry that now represents billions of dollars in annual sales.

Early Attempts at Civilized Competition
In the April 1, 1985 issue of Datamation, Jim Gray, in collaboration with 24 others from academia and industry, published (anonymously) an article titled "A Measure of Transaction Processing Power." This article outlined a test for online transaction processing which was given the title "DebitCredit." Unlike the TP1 benchmark, Gray's DebitCredit benchmark specified a true system-level benchmark in which the network and user interaction components of the workload were included. In addition, it outlined several other key features of the benchmarking process that were later incorporated into the TPC process.

The TPC Lays Down the Law
While Gray's DebitCredit ideas were widely praised by industry opinion makers, the DebitCredit benchmark had the same success in curbing bad benchmarking as Prohibition did in stopping excessive drinking. In fact, according to industry analysts like Omri Serlin, the situation only got worse. Without a standards body to supervise the testing and publishing, vendors began to publish extraordinary marketing claims on both TP1 and DebitCredit. They often deleted key requirements in DebitCredit to improve their performance results.
From 1985 through 1988, vendors used TP1 and DebitCredit (or their own interpretations of these benchmarks) to muddy the already murky performance waters. Omri Serlin had had enough. He spearheaded a campaign to see if this mess could be straightened out. By August 10, 1988, Serlin had successfully convinced eight companies to form the Transaction Processing Performance Council (TPC).

30
• Performance
• Cost of ownership
• Energy efficiency
• Floor space efficiency
• Manageability
• User experience

31
• Relevant
• Repeatable
• Understandable
• Fair
• Verifiable
• Economical
• Time to Market – a long development cycle is not acceptable
Reference: K. Huppler, The Art of Building a Good Benchmark, Performance Evaluation and Benchmarking, LNCS vol. 5895, Springer 2009
32
• Business Case
• Data Definition and Data Generation
• Workload
• Execution Rules
• Metric
• Audit Rules
• Full Disclosure Report
(A minimal skeleton illustrating how these elements fit together follows this slide.)
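The toy skeleton below sketches, in Python, how these elements relate: deterministic data generation, a workload of simple transactions, a fixed measurement interval as the execution rule, a throughput metric, and a very abbreviated disclosure of the run parameters. It is illustrative only and does not correspond to any TPC specification.

```python
import random
import time

def generate_data(rows: int) -> list[int]:
    """Data generation: a deterministic synthetic data set (fixed seed)."""
    rng = random.Random(42)
    return [rng.randint(0, 1_000_000) for _ in range(rows)]

def transaction(data: list[int]) -> int:
    """Workload: one simple 'transaction' against the generated data."""
    i = random.randrange(len(data))
    data[i] += 1
    return data[i]

def run_benchmark(duration_s: float = 2.0) -> dict:
    """Execution rules: fixed data set size and fixed measurement interval."""
    data = generate_data(100_000)
    end = time.perf_counter() + duration_s
    completed = 0
    while time.perf_counter() < end:
        transaction(data)
        completed += 1
    # Metric: throughput in transactions per minute over the interval.
    return {"duration_s": duration_s,
            "transactions": completed,
            "tpm": completed / duration_s * 60}

if __name__ == "__main__":
    result = run_benchmark()
    # Full disclosure (toy version): report what is needed to reproduce the run.
    print("Data set rows : 100,000 (seed 42)")
    print(f"Interval      : {result['duration_s']} s")
    print(f"Throughput    : {result['tpm']:,.0f} transactions/minute")
```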

33
• TPC International Technology Conference Series on Performance Evaluation and Benchmarking (TPCTC)
• Workshop Series on Big Data Benchmarking (WBDB)
• TPC Big Data Benchmark Work Group (TPC-BD)

34
TPC International Technology Conference Series on Performance Evaluation and Benchmarking (TPCTC)
 Accelerate the development of relevant benchmark standards
 Enable collaboration between industry experts and researchers
 Collocated with the International Conference on Very Large Data Bases (VLDB) since 2009
   TPCTC 2009 in conjunction with VLDB 2009, Lyon, France
   TPCTC 2010 in conjunction with VLDB 2010, Singapore
   TPCTC 2011 in conjunction with VLDB 2011, Seattle, Washington
   TPCTC 2012 in conjunction with VLDB 2012, Istanbul, Turkey
   TPCTC 2013 in conjunction with VLDB 2013, Riva del Garda, Italy
   TPCTC 2014 will be collocated with VLDB 2014 in Hangzhou, China (more information available at http://www.tpc.org/tpctc/)

35
Workshop Series on Big Data Benchmarking (WBDB)
• A first important step towards the development of a set of benchmarks providing objective measures of the effectiveness of hardware and software systems dealing with Big Data applications
• Open forum for discussing issues related to Big Data benchmarking
• WBDB Workshops
   WBDB 2012, San Jose
   WBDB 2012.in, Pune
   WBDB 2013.cn, Xi’an
   WBDB 2013, San Jose
   WBDB 2014, Potsdam, Germany (August 5-6, 2014)
• BigData100
36
37
• Evaluate big data workload(s) and make recommendations
• Four workloads under evaluation
• Additional workloads will be considered
• Accept one or more benchmarks to address various Big Data use cases
• More information: http://www.tpc.org/tpcbd/

38
• TPC is an international organization. Vendors, customers, universities, and research institutions are invited to join
• Membership Benefits
 Influence in the TPC benchmark development process
 Timely access to ongoing proceedings
 Product improvement
• Memberships
 Full Membership - participate in all aspects of the TPC's work, including development of benchmark standards and setting strategic direction
 Associate Membership - reserved for non-profits, educational institutions, market researchers, publishers, consultants, governments, and businesses that do not create, market or sell computer products or services
 Promotional Membership - for new members
• More Information: http://www.tpc.org/information/about/join.asp

39
Thank you.
