In this paper we present the initial results of our work to run BigBench on Spark. First, we evaluated the data scalability behavior of the existing MapReduce implementation of BigBench. Next, we executed the group of 14 pure HiveQL queries on Spark SQL and compared the results with the respective Hive results. Our experiments show that: (1) for both MapReduce and Spark SQL, the BigBench queries on average scale better than linearly with increasing data size, and (2) the pure HiveQL queries run faster on Spark SQL than on Hive.
http://clds.sdsc.edu/wbdb2015.ca/program
WBDB 2015 Performance Evaluation of Spark SQL using BigBench
1. Performance Evaluation of
Spark SQL using BigBench
Todor Ivanov and Max-Georg Beer
Frankfurt Big Data Lab
Goethe University Frankfurt am Main, Germany
http://www.bigdata.uni-frankfurt.de/
6th Workshop on Big Data Benchmarking 2015
June 16th – 17th, Toronto, Canada
2. Agenda
• Motivation & Research Objectives
• Towards BigBench on Spark
– Our Experience with BigBench
– Lessons Learned
• Data Scalability Experiments
– Cluster Setup & Configuration
– BigBench on MapReduce
– BigBench on Spark SQL
– Hive & Spark SQL Comparison
• Next Steps
3. Motivation
• "Towards A Complete BigBench Implementation" by Tilmann Rabl @WBDB 2014
– end-to-end, application-level,
analytical big data benchmark
– technology agnostic
– based on TPC-DS
– consists of 30 queries
• Implementation for the Hadoop Ecosystem
– https://github.com/intel-hadoop/Big-Bench
What about implementing BigBench on Spark?
[Figure: BigBench Logical Data Schema]
4. Research Objectives
• Understand and experiment with BigBench on MapReduce
• Implement & run BigBench on Spark
• Evaluate and compare both BigBench implementations
6. Towards BigBench on Spark
• Analyse the different query groups in BigBench
Evaluate the Data Scalability of the BigBench queries.
• The largest group consists of 14 pure HiveQL queries
• Spark SQL supports the HiveQL syntax
Compare the performance of Hive and Spark SQL using the HiveQL queries.
Query Types | Queries | Number of Queries
Pure HiveQL | 6, 7, 9, 11, 12, 13, 14, 15, 16, 17, 21, 22, 23, 24 | 14
Java MapReduce with HiveQL | 1, 2 | 2
Python Streaming MR with HiveQL | 3, 4, 8, 29, 30 | 5
Mahout (Java MR) with HiveQL | 5, 20, 25, 26, 28 | 5
OpenNLP (Java MR) with HiveQL | 10, 18, 19, 27 | 4
7. Lessons Learned
Our BigBench on MapReduce experiments showed:
• The OpenNLP queries (Q19, Q10) scale best with the increase of the data size.
• Q27 (OpenNLP) is not suitable for scalability comparison.
• A subset of the Python Streaming (MR) queries (Q4, Q30, Q3) show the worst scaling
behavior.
Comparing Hive and Spark SQL we observed:
• A group of Spark SQL queries (Q7, Q16, Q21, Q22, Q23 and Q24) does not scale properly with the increase of the data size. A possible reason is join optimization issues.
• For the stable HiveQL queries (Q6, Q9, Q11, Q12, Q13, Q14, Q15 and Q17), Spark SQL performs between 1.5x and 6.3x faster than Hive.
8. Our Experience with BigBench
• Validating the Spark SQL query results
– Empty query results
– Non-deterministic end results (OpenNLP and Mahout)
– No reference results are available
• BigBench Setup: https://github.com/BigData-Lab-Frankfurt/Big-Bench-Setup
– Executing single or subset of queries
– Gather execution times, row counts and sample values from result tables
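The result-gathering step can be sketched in Python. The helper names and the "qNN_result" table naming scheme are assumptions for illustration, not the actual scripts from the Big-Bench-Setup repository:

```python
import subprocess

# Hypothetical validation helpers (function names and the "qNN_result"
# table naming scheme are assumptions, not taken from Big-Bench-Setup).

def validation_sql(query_id):
    """Build the HiveQL used to collect a row count and a sample row."""
    table = "q%02d_result" % query_id  # assumed result-table naming scheme
    return {
        "rows": "SELECT COUNT(*) FROM %s" % table,
        "sample": "SELECT * FROM %s LIMIT 1" % table,
    }

def run_hive(sql):
    """Execute a statement with the Hive CLI (-S suppresses log output)."""
    out = subprocess.run(["hive", "-S", "-e", sql],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()
```

For example, run_hive(validation_sql(2)["rows"]) would return the Q2 row count for the currently loaded scale factor.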
Query # | Row Count SF 100 | Row Count SF 300 | Row Count SF 600 | Row Count SF 1000 | Sample Row
Q1 | 0 | 0 | 0 | 0 |
Q2 | 1288 | 1837 | 1812 | 1669 | 1415 41 1
Q3 | 131 | 426 | 887 | 1415 | 20 5809 1
Q4 | 73926146 | 233959972 | 468803001 | 795252823 | 0_1199 1
Q5 | logRegResult.txt: AUC = 0.50 confusion: [[0.0, 0.0], [1.0, 3129856.0]] entropy: [[-0.7, -0.7], [-0.7, -0.7]]
… | … | …
10. Cluster Setup
• Operating System: Ubuntu Server 14.04.1 LTS
• Cloudera’s Hadoop Distribution - CDH 5.2
• Replication Factor of 2 (only 3 worker nodes)
• Hive version 0.13.1
• Spark version 1.4.0-SNAPSHOT (March 27th 2015)
• BigBench & Scripts (https://github.com/BigData-Lab-Frankfurt/Big-Bench-Setup)
• 3 test repetitions
• Performance Analysis Tool (PAT) (https://github.com/intel-hadoop/PAT)
Setup Description Summary
Total Nodes: | 4 x Dell PowerEdge T420
Total Processors/Cores/Threads: | 5 CPUs / 30 Cores / 60 Threads
Total Memory: | 4 x 32 GB = 128 GB
Total Number of Disks: | 13 x 1TB, SATA, 3.5 in, 7.2K RPM, 64MB Cache
Total Storage Capacity: | 13 TB
Network: | 1 GBit Ethernet
11. Cluster Configuration
• Optimizing cluster performance can be a very time-consuming process.
• Following the best practices published by Sandy Ryza (Cloudera):
– “How-to: Tune Your Apache Spark Jobs”, http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
Component | Parameter | Configuration Value
YARN | yarn.nodemanager.resource.memory-mb | 31GB
YARN | yarn.scheduler.maximum-allocation-mb | 31GB
YARN | yarn.nodemanager.resource.cpu-vcores | 11
Spark | master | yarn
Spark | num-executors | 9
Spark | executor-cores | 3
Spark | executor-memory | 9GB
Spark | spark.serializer | org.apache.spark.serializer.KryoSerializer
MapReduce | mapreduce.map.java.opts.max.heap | 2GB
MapReduce | mapreduce.reduce.java.opts.max.heap | 2GB
MapReduce | mapreduce.map.memory.mb | 3GB
MapReduce | mapreduce.reduce.memory.mb | 3GB
Hive | hive.auto.convert.join (Q9 only) | true
Client | Java Heap Size | 2GB
13. BigBench on MapReduce
• Tested Scale Factors: 100 GB, 300 GB, 600 GB and 1TB
• Times normalized with respect to 100GB SF as baseline.
• Longer normalized times indicate slower execution with the increase of the data size.
• Shorter normalized times indicate better scalability with the increase of the data size.
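The normalization used in these charts can be sketched as follows; the runtimes in the example are illustrative, not measured values:

```python
# Normalize a query's runtimes to its 100GB-scale-factor baseline.
# A normalized time below the linear reference (SF / 100) means the
# query scales better than linearly with the data size.

def normalized_times(runtimes_by_sf):
    baseline = runtimes_by_sf[100]
    return {sf: t / baseline for sf, t in runtimes_by_sf.items()}

def linear_reference(sf):
    return sf / 100.0

# Illustrative runtimes in minutes (not measured values):
times = {100: 10.0, 300: 24.0, 600: 45.0, 1000: 70.0}
norm = normalized_times(times)
# norm[300] = 2.4 < 3.0 = linear_reference(300): better-than-linear scaling.
```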
[Chart: Normalized BigBench Times with respect to baseline 100GB Scale Factor; y-axis: Normalized Time (0-13); series: 300GB, 600GB, 1TB plus Linear 300GB/600GB/1TB reference lines.]
14. BigBench on MapReduce – worst scalability
• Tested Scale Factors: 100 GB, 300 GB, 600 GB and 1TB
• Times normalized with respect to 100GB SF as baseline.
• Group A: Q4, Q30, Q3 (Python Streaming) and Q5 (Mahout) show the worst scaling
behavior.
[Chart: Normalized BigBench + MapReduce Times with respect to baseline 100GB SF; y-axis: Normalized Time; series: 300GB, 600GB, 1TB plus Linear 300GB/600GB/1TB reference lines.]
15. Group A: Analysis of Q4 (Python) & Q5 (Mahout)
Scale Factor: 1TB | Q4 (Python Streaming) | Q5 (Mahout)
Average Runtime (minutes): | 929 | 273
Avg. CPU Utilization %: | 48.82 (User %); 3.31 (System %); 4.98 (IOwait %) | 51.50 (User %); 3.37 (System %); 3.65 (IOwait %)
Avg. Memory Utilization %: | 95.99 % | 91.85 %
6th Workshop on Big Data Benchmarking 2015 15
• Q4 is memory bound at around 96% utilization, with around 5% IOwait, i.e. the CPU waits on outstanding disk I/O requests.
• Q5 is memory bound at around 92% utilization. The actual Mahout execution runs only in the last 18 minutes of the query and utilizes very few resources.
[Charts: CPU Utilization % (IOwait %, User %, System %) over time for Q4 (Python) and Q5 (Mahout); the annotation in the Q5 chart marks where the Mahout execution starts.]
16. BigBench on MapReduce – best scalability
• Tested Scale Factors: 100 GB, 300 GB, 600 GB and 1TB
• Times normalized with respect to 100GB SF as baseline.
• Group B: Q27, Q19, Q10 (OpenNLP) and Q23 (HiveQL) show the best scaling behavior.
[Chart: Normalized BigBench + MapReduce Times with respect to baseline 100GB SF; series: 300GB, 600GB, 1TB plus Linear 300GB/600GB/1TB reference lines.]
17. Group B: Analysis of Q27 (OpenNLP)
• Q27 keeps the system underutilized and outputs non-deterministic values.
Scale Factor: 1TB | Q27 (OpenNLP)
Input Data Size / Number of Tables: | 2GB / 1 table
Average Runtime (minutes): | 0.7
Avg. CPU Utilization %: | 10.03 (User %); 1.94 (System %); 1.29 (IOwait %)
Avg. Memory Utilization %: | 27.19 %

Scale Factor | 100GB | 300GB | 600GB | 1TB
Number of rows in result table | 1 | 0 | 3 | 0
Times (minutes) | 0.91 | 0.63 | 0.98 | 0.70
[Charts: CPU Utilization % (IOwait %, User %, System %) and Memory Utilization % over time for Q27.]
18. Group B: Analysis of Q18 (OpenNLP)
• Q18 is memory bound with around 90% utilization and high CPU usage of 56%.
Scale Factor: 1TB | Q18 (OpenNLP)
Input Data Size / Number of Tables: | 71GB / 3 tables
Average Runtime (minutes): | 28
Avg. CPU Utilization %: | 55.99 (User %); 2.04 (System %); 0.31 (IOwait %)
Avg. Memory Utilization %: | 90.22 %
[Charts: CPU Utilization % (IOwait %, User %, System %) and Memory Utilization % over time for Q18.]
20. BigBench on Spark SQL – worst scalability
• Test the group of 14 pure HiveQL queries.
• Tested Scale Factors: 100 GB, 300 GB, 600 GB and 1TB
• Times normalized with respect to 100GB SF as baseline.
• Group A: Q24, Q21, Q16 and Q7 achieve the worst data scalability behavior.
• A possible reason for the Group A behavior is reported in SPARK-2211 (Join Optimization).
[Chart: Normalized BigBench + Spark SQL Times with respect to baseline 100GB SF; y-axis: Normalized Time (0-24); series: 300GB, 600GB, 1TB plus Linear 300GB/600GB/1TB reference lines.]
21. BigBench on Spark SQL – best scalability
• Test the group of 14 pure HiveQL queries.
• Tested Scale Factors: 100 GB, 300 GB, 600 GB and 1TB
• Times normalized with respect to 100GB SF as baseline.
• Group B: Q15, Q11, Q9 and Q14 achieve the best data scalability behavior.
[Chart: Normalized BigBench + Spark SQL Times with respect to baseline 100GB SF; y-axis: Normalized Time (0-24); series: 300GB, 600GB, 1TB plus Linear 300GB/600GB/1TB reference lines.]
23. Hive & Spark SQL Comparison (1)
• Calculate the Hive to Spark SQL ratio (%): ((HiveTime * 100) / SparkTime) - 100
• Group 1: Q7, Q16, Q21, Q22, Q23 and Q24 drastically increase their Spark SQL execution
time for the larger data sets.
• Complex join issues are described in SPARK-2211 (https://issues.apache.org/jira/browse/SPARK-2211).
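The ratio defined above can be expressed directly. The example uses the rounded per-query runtimes reported on the analysis slides, so the results only approximate the plotted percentages:

```python
# Hive to Spark SQL time ratio in percent: positive values mean Spark SQL
# is faster than Hive, negative values mean Hive is faster.

def hive_to_spark_ratio(hive_minutes, spark_minutes):
    return hive_minutes * 100.0 / spark_minutes - 100.0

# Q9 at SF 1TB with rounded runtimes (18 min Hive vs. 3 min Spark SQL)
# gives 500%, close to the 528% computed from the unrounded measurements.
q9_ratio = hive_to_spark_ratio(18, 3)   # 500.0
```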
Q6 Q7 Q9 Q11 Q12 Q13 Q14 Q15 Q16 Q17 Q21 Q22 Q23 Q24
100GB 150 257 152 148 259 245 156 46 70 387 71 -55 9 44
300GB 204 180 284 234 279 262 251 89 88 398 -35 -68 -24 -54
600GB 246 37 398 344 279 263 328 132 25 402 -62 -78 -55 -76
1TB 279 13 528 443 295 278 389 170 12 423 -69 -76 -64 -81
[Chart: Hive to Spark SQL Query Time Ratio (%) per query and scale factor, defined as ((HiveTime * 100) / SparkTime) - 100.]
24. Group 1: Analysis of Q7 (HiveQL)
Scale Factor: 1TB | Hive | Spark SQL
Average Runtime (minutes): | 46 | 41
Avg. CPU Utilization %: | 56.97 (User %); 3.89 (System %); 0.40 (IOwait %) | 16.65 (User %); 2.62 (System %); 21.28 (IOwait %)
Avg. Memory Utilization %: | 94.33 % | 93.78 %
• Q7 is only 13% slower on Hive than on Spark SQL.
• In Q7, Spark SQL spends around 21% of CPU time in IOwait, i.e. waiting for outstanding disk I/O requests, and effectively utilizes only around 17% of the CPU.
[Charts: CPU Utilization % (IOwait %, User %, System %) over time for Q7 on Hive and on Spark SQL.]
25. Hive & Spark SQL Comparison (2)
• Group 2: Q12, Q13 and Q17 show modest performance improvement with the increase of the data size.
[Table and chart repeated from Hive & Spark SQL Comparison (1): Hive to Spark SQL Query Time Ratio (%).]
26. Hive & Spark SQL Comparison (3)
• Group 3: Q6, Q9, Q11, Q14 and Q15 perform between 46% and 528% faster on Spark SQL
than on Hive.
[Table and chart repeated from Hive & Spark SQL Comparison (1): Hive to Spark SQL Query Time Ratio (%).]
27. Group 3: Analysis of Q9 (HiveQL)
• Spark SQL is 6 times faster than Hive.
• Hive utilizes on average 60% CPU and 78% memory, whereas Spark SQL consumes on
average 28% CPU and 61% memory.
Scale Factor: 1TB | Hive | Spark SQL
Average Runtime (minutes): | 18 | 3
Avg. CPU Utilization %: | 60.34 (User %); 3.44 (System %); 0.38 (IOwait %) | 27.87 (User %); 2.22 (System %); 4.09 (IOwait %)
Avg. Memory Utilization %: | 78.87 % | 61.27 %
[Charts: CPU Utilization % (IOwait %, User %, System %) over time for Q9 on Hive (~18 min) and on Spark SQL (~3 min).]
29. Acknowledgments
• Fields Institute – Research in Mathematical Sciences
• SPEC Research Big Data Working Group
• Tilmann Rabl (University of Toronto/Bankmark UG)
• John Poelman (IBM)
• Yi Yao Joshua & Bhaskar Gowda (Intel)
• Marten Rosselli, Karsten Tolle, Roberto V. Zicari & Raik Niemann (Frankfurt Big Data Lab)