HANA Performance
Efficient Speed and Scale-out for Real-time BI
Introduction
SAP HANA enables organizations to optimize their business operations by analyzing
large amounts of data in real time. HANA runs on inexpensive, commodity hardware
and requires no proprietary add-on components. It achieves very high performance
without requiring any tuning.
A 100TB performance test was developed to demonstrate that HANA is extremely
efficient and scalable and can deliver breakthrough analytic performance for real-time
business intelligence (BI) on a very large database that is representative of the data
businesses use to analyze their operations. A 100TB [1] data set was generated in the
format that would be extracted from an SAP ERP system (e.g., data records with
multiple fields) for analysis in an SAP NetWeaver BW system. [2]
This paper describes the test environment and then presents and analyzes the test results.
The Test Environment
The performance test environment was developed to represent a Sales & Distribution
(SD) BI query environment that supports a broad range of SQL queries and business
users.
Database Schema
The star-schema design in Figure 1 shows the SD test data environment and each
table’s cardinality. No manual tuning structures were used in this design; there were no
indexes, materialized views, summary tables or other redundant structures added for
the purposes of achieving faster query performance.
[1] 100TB = raw data set size before compression.
[2] These tests did not involve BW.
Figure 1 - SD Schema
Test Data
The test data consists of one large fact table and several smaller dimension tables as
seen in Figure 1. There are 100 billion records in the fact table alone, representing 5
years’ worth of SD data. The data is hash-partitioned equally across the 16 nodes
using Customer_ID. Within a node, the data is further partitioned into one-month
intervals. This results in 60 partitions per node and approximately 104 million records
per partition. (See also Figure 2 and “Loading”.)
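To make this distribution scheme concrete, the following is a minimal sketch of how such a fact table could be declared in HANA SQL. The table and column names are hypothetical (the actual test schema in Figure 1 is not reproduced here), only two of the 60 monthly ranges are written out, and the exact partitioning clauses may differ between HANA revisions.

    -- Hypothetical SD fact table: hash-distributed on customer across 16 partitions
    -- (one per node), then range-partitioned into monthly intervals within each node.
    CREATE COLUMN TABLE SALES_FACT (
        CUSTOMER_ID  INTEGER NOT NULL,
        MATERIAL_ID  INTEGER NOT NULL,
        SALES_DATE   DATE    NOT NULL,
        QUANTITY     DECIMAL(15,2),
        REVENUE      DECIMAL(15,2)
    )
    PARTITION BY HASH (CUSTOMER_ID) PARTITIONS 16,
                 RANGE (SALES_DATE)
    (
        PARTITION '2007-01-01' <= VALUES < '2007-02-01',
        PARTITION '2007-02-01' <= VALUES < '2007-03-01',
        -- ... one partition per month, 60 in total ...
        PARTITION OTHERS
    );

Declaring distribution and partitioning with the table definition is what lets the parallel load described under "Loading" route records to the right node and partition without any separate tuning step.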
System Configuration
The test system configuration (Figure 2) is a 16-node cluster of IBM X5 servers with 8TB
of total RAM, priced at approximately $500,000. Each server has:
• 4 CPUs with 10 cores and 2 hyper-threads per core, totaling
  - 40 cores
  - 80 hyper-threads
• 512 GB of RAM
• 3.3 TB of disk storage
Figure 2 - Test Configuration
Setup
These performance tests did not use pre-caching of results or manual tuning structures
of any kind and therefore validated HANA’s load-then-query ability. A design that is
completely free of tuning structures (internal or external) is important for building a
sustainable real-time BI environment; it not only speeds implementation, but it also
provides ongoing flexibility for ad hoc querying, while eliminating the maintenance cost
that tuning structures require.
Loading
The loading was done in parallel using HANA’s IMPORT command, a single SQL
statement that names the file to load. Loading is automatically parallelized across all of
the nodes and uses the distribution and partitioning scheme defined for each table [3]
(in this case, hash distribution and monthly range partitions). The load rate was
measured at 16 million records per minute, or one million records per minute per node.
This load performance is sufficient to load 100 million records (representing one
business day’s activity) in about six minutes.
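As an illustration, a single load statement might look like the sketch below. The file path, table name, and WITH options are assumptions for illustration only; the options actually available (threads, batch size, error handling, etc.) depend on the HANA revision in use.

    -- Hypothetical bulk load of one CSV extract into the fact table
    IMPORT FROM CSV FILE '/data/export/sd_fact_2007_01.csv'
    INTO SALES_FACT
    WITH RECORD DELIMITED BY '\n'
         FIELD DELIMITED BY ','
         THREADS 40
         BATCH 200000;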
Compression
Data compression occurs during the data loading process. HANA demonstrated a
greater than 20X compression rate [4]; the 100TB SD data set was reduced to a trim
3.78TB HANA database, consuming only 236GB of RAM on each node in the cluster
(see Figure 2). The large fact table accounts for 85TB of the 100TB data set and was
compressed to occupy less than half of the available RAM per node.

[3] As defined with the CREATE TABLE command.
[4] Compression rates depend heavily on characteristics of the actual data to be
compressed. Individual results may vary.
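Readers who want to check the in-memory footprint on their own systems can query HANA’s monitoring views. The sketch below assumes the standard M_CS_TABLES view and the hypothetical SALES_FACT table used in the earlier examples; column and view availability can vary by revision.

    -- Approximate in-memory size (GB) and record count of the fact table per host
    SELECT HOST,
           TABLE_NAME,
           ROUND(SUM(MEMORY_SIZE_IN_TOTAL) / 1024 / 1024 / 1024, 2) AS MEMORY_GB,
           SUM(RECORD_COUNT) AS RECORDS
      FROM M_CS_TABLES
     WHERE TABLE_NAME = 'SALES_FACT'
     GROUP BY HOST, TABLE_NAME
     ORDER BY HOST;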
Queries
The query suite totals 20 distinct SQL queries including 11 base queries plus a number
of variations for time interval (month, quarter, etc.). The queries were chosen to
represent a mixed workload environment running on data kept in its native form (i.e., not
indexed, tuned or otherwise de-normalized to avoid joins, as would be customary in a
conventional database system). They cover a range of BI activities, from departmental to
enterprise level, including
• general reporting,
• iterative querying (drill downs),
• ranking, and
• year-over-year analysis.
The resulting queries range from moderately complex to very complex, including SQL
constructs such as
• multiple joins,
• in-list,
• fact-to-fact joins,
• sub queries,
• correlated sub queries (CSQ), and
• union all.
The queries are grouped into three general categories of BI usage:
• Reporting – calculations on business performance measures for ranges of
  materials and/or customers for a time period
• Drilldown – iterative user-initiated queries to gather details for a given individual
  or group of materials and/or customers
• Analytical – periodic deep historical analysis across customers and/or materials
Table 1 documents each query in terms of its business description, SQL constructs,
ranges and qualifiers, and time period variations. If a query is described with multiple
time periods, each time period is run as a distinct query; for instance, query R1 is run as
four distinct queries, covering monthly, quarterly, semi-annual, and annual time periods,
to show its performance relative to changes in data volume (annual data is 12X the
volume of monthly data). Constructs such as in-list qualifiers, fact-to-fact joins,
correlated sub-queries and union all are the main query complexity factors.
R1 - Sales and distribution report by month for (a) a given range of materials and (b) a
given range of customers.
  SQL constructs: star-schema query with multiple joins to dimensions, with group by
  and order by.
  Ranges & qualifiers: range of materials; range of customers.
  Time period(s): four variations: 1. one month; 2. three months; 3. six months;
  4. twelve months.

R2 - Sales and distribution report by month for a given list of customers.
  SQL constructs: star-schema query with multiple joins to dimensions, with group by
  and order by.
  Ranges & qualifiers: range of customers defined using “in list”.
  Time period(s): one month.

R3 - Sales and distribution report by month for (a) a given range of materials and (b) a
given customer country.
  SQL constructs: star-schema query with multiple joins to dimensions, group by and
  order by.
  Ranges & qualifiers: range of materials; customer country.
  Time period(s): one month.

R4 - Top 100 customers report for (a) one or several material groups and (b) one or
several customer countries.
  SQL constructs: star-schema query with multiple joins to dimensions, group by and
  order by.
  Ranges & qualifiers: one or several material groups; one or several customer countries.
  Time period(s): one month of data for two variations: 1. one country and one product
  group; 2. three countries and two product groups.

R5 - Top 100 customers report for (a) a given material group and (b) a given customer
country.
  SQL constructs: star-schema query with multiple joins to dimensions, group by and
  order by.
  Ranges & qualifiers: material group; customer country.
  Time period(s): two variations: 1. three months; 2. six months.

D1 - Sales and distribution report for (a) a single customer and (b) a given material group.
  SQL constructs: star-schema query with multiple joins to dimensions, group by and
  order by.
  Ranges & qualifiers: single customer; material group.
  Time period(s): one month.

D2 - Sales and distribution report for (a) a single customer and (b) a single material.
  SQL constructs: star-schema query with multiple joins to dimensions, group by and
  order by.
  Ranges & qualifiers: single material; single customer.
  Time period(s): one month.

D3 - Sales and distribution report for (a) a single material and (b) a given customer country.
  SQL constructs: star-schema query with multiple joins to dimensions, group by and
  order by.
  Ranges & qualifiers: single material; customer country.
  Time period(s): one month.

D4 - Sales and distribution report for (a) a single customer and (b) a given range of materials.
  SQL constructs: star-schema query with multiple joins to dimensions, group by and
  order by.
  Ranges & qualifiers: range of materials; single customer.
  Time period(s): one month.

A1 - Year-over-year (YOY) Top 100 customers analytical report: (a) top 100 customers
identified for a current time period; (b) business measures calculated for the previously
identified top 100 customers in a given historical period; (c) a final result set calculated
that combines current and historical business measures for the top 100 customers.
  SQL constructs: star-schema query with a fact-to-fact join as a correlated sub-query
  that includes multiple joins to dimensions, and then group by and order by.
  Ranges & qualifiers: range of materials; customer country.
  Time period(s): three variations: 1. one month with YOY comparison; 2. three months
  with YOY comparison; 3. six months with YOY comparison.

A2 - YOY trending report for Top 100 customers: (a) top 100 customers calculated for a
given time period of the current year; (b) top 100 customers calculated for the identical
time period of the previous year (or multiple years); (c) a final result set calculated that
combines business measures for the top 100 customers over several years.
  SQL constructs: several star-schema sub-queries, each with multiple joins, group by
  and order by, combined by a union all operator.
  Ranges & qualifiers: material group; customer country.
  Time period(s): three variations: 1. three months over two years; 2. six months over
  two years; 3. three months over five years.

Table 1 - Query Descriptions
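To make the reporting query shape in Table 1 concrete, the sketch below shows roughly what an R1-style query looks like: a star-schema join from the fact table to hypothetical customer and material dimension tables, restricted to a range of materials and customers and a one-month period, with group by and order by. It uses the same illustrative schema as the earlier sketches and is not the benchmark’s actual SQL.

    -- R1-style monthly sales and distribution report (illustrative schema and ranges)
    SELECT c.CUSTOMER_COUNTRY,
           m.MATERIAL_GROUP,
           SUM(f.QUANTITY) AS TOTAL_QUANTITY,
           SUM(f.REVENUE)  AS TOTAL_REVENUE
      FROM SALES_FACT   f
      JOIN CUSTOMER_DIM c ON c.CUSTOMER_ID = f.CUSTOMER_ID
      JOIN MATERIAL_DIM m ON m.MATERIAL_ID = f.MATERIAL_ID
     WHERE f.SALES_DATE >= '2011-06-01' AND f.SALES_DATE < '2011-07-01'
       AND f.MATERIAL_ID BETWEEN 100000 AND 200000    -- given range of materials
       AND f.CUSTOMER_ID BETWEEN 500000 AND 600000    -- given range of customers
     GROUP BY c.CUSTOMER_COUNTRY, m.MATERIAL_GROUP
     ORDER BY TOTAL_REVENUE DESC;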
Test Results
The test measured both query response time and query throughput per hour. The
queries were run first in a single stream to measure baseline query times, then in
multiple streams of 10, 20, and 25 to measure query throughput (calculated as queries
per hour) and query response time in the context of different workloads [5]. The
multi-stream tests randomized the sequence of query submissions across streams [6]
and inserted 10 milliseconds of “think” time between queries to simulate a multi-user ad
hoc BI querying environment. Individual runtimes are listed in the Appendix.
Baseline Test
As the baseline results in Figure 3 show, the majority of queries delivered sub-second
response times, and even the most complex query, which involved the entire 5-year
span of data, ran in under 4 seconds.
Figure 3 - Baseline Performance, in Seconds
The Reporting and Drill-down queries (267 milliseconds to 1.041 seconds) demonstrate
HANA’s excellent ability to aggregate data. For instance, the longest-running of these,
R1-4, ran in just over a second (1.041) but took only 2.8X longer to crunch through 12X
more data than its monthly equivalent (R1-1).
[5] A single query stream represents an application connection that supports potentially
hundreds of concurrent users.
[6] This is done to prevent a query from being submitted by different streams at exactly
the same time and to ensure a practical workload mix (e.g., drill downs happen more
frequently than quarterly reports).
The Drill-down queries (276 to 483 milliseconds) demonstrate HANA’s aggressive
support for ad hoc joins and, therefore, its ability to let users “slice and dice” without
first involving the technical staff to create supporting indexes (as would be the case
with a conventional database).
The Analytic queries (677 milliseconds to 3.8 seconds) efficiently performed
sophisticated joins (fact-to-fact, sub queries, CSQ, union all) and analysis across a
sliding time window (year-over-year). The queries with one- to six-month date ranges
(A1-1 through A2-2) ran in about 2 seconds or less. Query A2-3, which analyzed the
entire 5-year date range, ran in under four seconds. All analytic query times are well
within the timeframes needed to support iterative, speed-of-thought analysis.
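As an illustration of this analytic shape, an A2-style year-over-year query can be written as one star-schema sub-query per year, each computing the top 100 customers for the same window, combined with a union all. The sketch below again uses the hypothetical schema from the earlier examples and is not the benchmark’s actual SQL; the LIMIT syntax is assumed, and a TOP clause or window function could be used instead.

    -- A2-style YOY trending sketch: top 100 customers for the same quarter in two years
    SELECT * FROM (
        SELECT 2011 AS SALES_YEAR, f.CUSTOMER_ID, SUM(f.REVENUE) AS TOTAL_REVENUE
          FROM SALES_FACT   f
          JOIN CUSTOMER_DIM c ON c.CUSTOMER_ID = f.CUSTOMER_ID
          JOIN MATERIAL_DIM m ON m.MATERIAL_ID = f.MATERIAL_ID
         WHERE c.CUSTOMER_COUNTRY = 'DE'          -- given customer country
           AND m.MATERIAL_GROUP   = 'MG-01'       -- given material group
           AND f.SALES_DATE >= '2011-04-01' AND f.SALES_DATE < '2011-07-01'
         GROUP BY f.CUSTOMER_ID
         ORDER BY TOTAL_REVENUE DESC
         LIMIT 100
    ) AS cur
    UNION ALL
    SELECT * FROM (
        SELECT 2010 AS SALES_YEAR, f.CUSTOMER_ID, SUM(f.REVENUE) AS TOTAL_REVENUE
          FROM SALES_FACT   f
          JOIN CUSTOMER_DIM c ON c.CUSTOMER_ID = f.CUSTOMER_ID
          JOIN MATERIAL_DIM m ON m.MATERIAL_ID = f.MATERIAL_ID
         WHERE c.CUSTOMER_COUNTRY = 'DE'
           AND m.MATERIAL_GROUP   = 'MG-01'
           AND f.SALES_DATE >= '2010-04-01' AND f.SALES_DATE < '2010-07-01'
         GROUP BY f.CUSTOMER_ID
         ORDER BY TOTAL_REVENUE DESC
         LIMIT 100
    ) AS prev
    ORDER BY SALES_YEAR DESC, TOTAL_REVENUE DESC;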
Across the board, the baseline tests show that HANA scaled efficiently (better-than-
linear) as the data volumes increased for a given query.
Throughput
The throughput tests are summarized in Table 2 and show that in the face of increasing
and mixed BI workloads, HANA scales very well.
Test Case      Throughput (Queries per Hour)
1 stream       6,282
10 streams     36,600
20 streams     48,770
25 streams     52,212
Table 2 - Throughput Tests
At 25 streams, the average query response time was less than three seconds, only
2.9X higher than at baseline (see the Appendix), an indicator of HANA’s excellent
internal efficiencies and its ability to manage concurrency and mixed workloads.
A rough estimate of BI user concurrency can be derived by dividing total queries per
hour by an estimated average number of queries per user per hour. For instance, the
25-stream test’s 52,212 queries per hour divided by 20 (a brisk per-user rate of one
query every three minutes) gives a reasonable estimate of about 2,610 concurrent BI
users across a mixture of reporting, drill-down and analytic query types.
Ad Hoc Historical Queries
The performance tests were intended primarily to demonstrate HANA’s performance in
customary real-time BI query environments that focus on analyzing business
operations. However, ad hoc historical queries that analyze the entire volume of data
that is stored may occasionally need to be run (and they should not impose a severe
penalty). In addition to Query A2-3 (included in the stream tests), Queries R1-1, R4-2
and R5-1 were deemed potentially relevant for a 5-year time window and were
resubmitted, this time for the entire 5-year date range. Table 3 shows that HANA
demonstrated better-than-linear scalability, running against 5X more data with less than
a 4X increase in response time.
Time Period    R1-1        R4-2        R5-1
1 year         1.1 secs    1.4 secs    1.1 secs
2 years        1.8 secs    2.4 secs    1.8 secs
3 years        2.6 secs    3.4 secs    2.4 secs
4 years        3.3 secs    4.4 secs    3.0 secs
5 years        4.1 secs    5.4 secs    3.6 secs
Table 3 - Ad Hoc Historical Queries
Overall, all of these queries ran in a few seconds, further highlighting HANA’s ability to
support real-time BI on massive amounts of data without tuning and without massive or
proprietary hardware components.
Results Summary
In these performance tests, HANA demonstrated its ability to deliver real-time BI query
performance as workloads increased in capacity (up to 100TB of raw data), complexity
(queries with complex join constructs and significant intermediate results ran in less
than 2 seconds), and concurrency (25-stream throughput representing about 2,600
active users).
HANA performance is based on comprehensive, well-thought-out design elements,
including, for example,
• an in-memory design
• smart internal data structures (e.g., native columnar storage, advanced compression
  and powerful partitioning)
• a clever, cost-based optimizer that can meld these smart data structures into an
  efficient query plan
• efficient query execution that carries out the query plan so as to take advantage of
  internal components and hardware features (e.g., advanced algorithms, multi-level
  CPU cache optimization and hyper-threading).
Much has been written on HANA’s design, and it is not necessary to repeat it here. For
more information on HANA technical features, please refer to “SAP HANA Technical
Overview” and “SAP HANA for Next-Generation Business Applications and Real-Time
Analytics”.
Real-world Experiences
These tests were run in a simulated environment to isolate HANA performance [7];
however, you can expect your own BI query performance to be significantly better on
HANA than on your existing conventional DBMS, potentially thousands of times better.
Most importantly, SAP customer testimonials confirm HANA’s performance. Here are a
few examples.
“We have seen massive system speed improvements and increased ability to analyze
the most detailed levels of customers and products.” –Colgate Palmolive
“…replacing our enterprise-wide Oracle data mart and resulting in over 20,000X speed
improvement processing our most complex freight transportation cost calculation. ...our
stand-alone mobile applications that were previously running on Oracle are now running
on HANA. Our 2000 local sales representatives can now interact with real-time data
instead, and have the ability to make on-the-fly promotion decisions to improve sales.”
–Nongfu Spring
“…our internal technical comparison demonstrated that SAP HANA outperforms
traditional disk-based systems by a factor of 408,000...”–Mitsui Knowledge Industry
Co., Ltd.
“With SAP HANA, we see a tremendous opportunity to dramatically improve our
enterprise data warehouse solutions, drastically reducing data latency and improving
speed when we can return query results in 45 seconds from BW on HANA vs. waiting
up to 20 minutes for empty results from BW on a traditional disk-based database.” –
Shanghai Volkswagen
[7] These performance tests were designed to isolate HANA system-level performance
independent of the variety of applications that may be used in SAP environments.
Therefore, these test results should be used only as a general guideline, and results will
vary from implementation to implementation due to variability of platforms, applications
and data.
Conclusion
Real-time BI query environments should be able to support new queries as soon as
users formulate them. This speed-of-thought capability is only possible if consistently
excellent performance is freely available in the underlying database platform.
The rigorous performance tests and results presented here confirm that, without tuning
and without massive or proprietary hardware components, HANA delivers leading
performance and scale-out ability and enables real-time BI for businesses that must
support a range of analytic workloads, massive volumes of data and thousands of
concurrent users.
Appendix – Query Runtimes
*10-, 20-, 25-stream runs reflect 10ms of think time between queries.
Query Name    TESTCASE*    Average Runtime (seconds)
R1-1 1 stream 0.374301625
10 streams 0.7104476375
20 streams 1.11287894375
25 streams 1.263543115
R1-2 1 stream 0.5193995
10 streams 0.8852610125
20 streams 1.2852265875
25 streams 1.55048987
R1-3 1 stream 0.70110375
10 streams 1.098921025
20 streams 1.6092578375
25 streams 1.85689055
R1-4 1 stream 1.0411995
10 streams 1.5377812
20 streams 2.24272095
25 streams 2.53941444
R2 1 stream 0.2671005
10 streams 0.4740847375
20 streams 0.765801125
25 streams 0.94418651
R3 1 stream 0.3772025
10 streams 0.687762475
20 streams 1.03632401875
25 streams 1.28864399
R4-1 1 stream 0.67338575
10 streams 1.15797815
20 streams 1.7928188625
25 streams 2.081943225
R4-2 1 stream 0.61149025
10 streams 1.0192868125
20 streams 1.48342026875
25 streams 1.751548475
R5-1 1 stream 0.784670625
10 streams 1.3914058
20 streams 2.017145075
25 streams 2.37001971
R5-2 1 stream 0.9343955
10 streams 1.6150219
20 streams 2.326104475
25 streams 2.70342093
D1 1 stream 0.4254266875
10 streams 0.67727600625
20 streams 0.9883220375
25 streams 1.1591933925
D2 1 stream 0.2762340625
10 streams 0.49984061875
20 streams 0.7925614875
25 streams 0.9211584
D3 1 stream 0.3439448125
10 streams 0.6137608375
20 streams 0.969213509375
25 streams 1.1464113225
D4 1 stream 0.48321675
10 streams 0.88516171875
20 streams 1.345513390625
25 streams 1.56276527
A1-1 1 stream 0.67762625
10 streams 1.0925558
20 streams 1.77672375
25 streams 2.03975081
A1-2 1 stream 1.3362615
10 streams 2.0676603
20 streams 3.0319986125
25 streams 3.62353857
A1-3 1 stream 2.0605925
10 streams 3.15405455
20 streams 4.469254975
25 streams 5.01461024
A2-1 1 stream 1.5460765
10 streams 2.7397760625
20 streams 3.99316446875
25 streams 4.90314213157894
A2-2 1 stream 1.838777
10 streams 3.256667375
20 streams 4.5519178125
25 streams 5.32557834210526
A2-3 1 stream 3.816785
10 streams 6.371394
20 streams 9.88785475
25 streams 11.9014413333333