Adam Kawa
Data Engineer @ Spotify
A Perfect
Hive Query
For A Perfect Meeting
A deal was made!
~5:00 AM, June 16th, 2013
■ End of our Summer Jam party
■ Organized by Martin Lorentzon
~5:00 AM, June 16th, 2013
■ Co-founder and Chairman of Spotify
Martin Lorentzon
■ Data Engineer at Spotify
Adam Kawa
* If Adam proves somehow that
he is the biggest fan of Timbuktu in his home country
The Deal
Martin will invite Adam
and Timbuktu, my favourite Swedish artist,
for a beer or coke or whatever to drink *
by Martin
■ Actually, Jason Michael Bosak Diakité
■ A top Swedish rap and reggae artist
- My favourite song - http://bit.ly/1htAKhM
■ Does not know anything about the deal!
Timbuktu
Question
Who is the biggest fan of Timbuktu in Poland?
The Rules
Question
Who is the biggest fan of Timbuktu in Poland?
Answers
A person who has streamed Timbuktu’s songs at
Spotify the most frequently!
The Rules
Data will tell the truth!
■ Follow tips and best practices related to
- Testing
- Sampling
- Troubleshooting
- Optimizing
- Executing
Hive Query
Hive query
should be faster
than my potential meeting with Timbuktu
Why?
Re-run the query during our meeting
Additional Requirement
by Adam
■ YARN on 690 nodes
■ Hive
- MRv2
- Join optimizations
- ORC
- Tez
- Vectorization
- Table statistics
Friends
Introduction
Datasets
USER
user_id country … date
u1 PL … 20140415
u2 PL … 20140415
u3 SE … 20140415
TRACK
json_data … date track_id
{"album":{"artistname":"Coldplay"}} … 20140415 t1
{"album":{"artistname":"Timbuktu"}} … 20140415 t2
{"album":{"artistname":"Timbuktu"}} … 20140415 t3
track_id … user_id … date
t1 … u1 … 20130311
t3 … u1 … 20130622
t3 … u2 … 20131209
t2 … u2 … 20140319
t1 ... u3 ... 20140415
STREAM
■ Information about songs streamed by users
■ 54 columns
■ 25.10 TB of compressed data since Feb 12th, 2013
STREAM
track_id … … user_id date
t1 … … u1 20130311
t3 … … u1 20130622
t3 … … u2 20131209
t2 … … u2 20140319
t1 ... ... u3 20140415
■ Information about users
■ 40 columns
■ 10.5 GB of compressed data (2014-04-15)
USER
user_id country … date
u1 PL … 20140415
u2 PL … 20140415
u3 SE … 20140415
■ Metadata information about tracks
■ 5 columns
- But json_data contains a dozen or so fields
■ 15.4 GB of data (2014-04-15)
TRACK
track_id json_data … date
t1 {"album":{"artistname":"Coldplay"}} … 20140415
t2 {"album":{"artistname":"Timbuktu"}} … 20140415
t3 {"album":{"artistname":"Timbuktu"}} … 20140415
✓ Avro and a custom serde
✓ DEFLATE as a compression codec
✓ Partitioned by day and/or hour
✗ Bucketed
✗ Indexed
Tables’ Properties
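The DDL is not shown in the deck; a hedged sketch of how stream might be declared, using the stock AvroSerDe (the deck mentions a custom serde) and an invented schema location:
CREATE EXTERNAL TABLE stream
PARTITIONED BY (date BIGINT, hour INT)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES ('avro.schema.url'='hdfs:///schemas/stream.avsc'); -- schema location invented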
HiveQL
Query
FROM stream s
Query
FROM stream s
JOIN track t ON t.track_id = s.track_id
Query
FROM stream s
JOIN track t ON t.track_id = s.track_id
JOIN user u ON u.user_id = s.user_id
Query
FROM stream s
JOIN track t ON t.track_id = s.track_id
JOIN user u ON u.user_id = s.user_id
WHERE lower(get_json_object(t.json_data, '$.album.artistname'))
= 'timbuktu'
Query
FROM stream s
JOIN track t ON t.track_id = s.track_id
JOIN user u ON u.user_id = s.user_id
WHERE lower(get_json_object(t.json_data, '$.album.artistname'))
= 'timbuktu'
AND u.country = 'PL'
Query
FROM stream s
JOIN track t ON t.track_id = s.track_id
JOIN user u ON u.user_id = s.user_id
WHERE lower(get_json_object(t.json_data, '$.album.artistname'))
= 'timbuktu'
AND u.country = 'PL'
AND s.date BETWEEN 20130212 AND 20140415
Query
FROM stream s
JOIN track t ON t.track_id = s.track_id
JOIN user u ON u.user_id = s.user_id
WHERE lower(get_json_object(t.json_data, '$.album.artistname'))
= 'timbuktu'
AND u.country = 'PL'
AND s.date BETWEEN 20130212 AND 20140415
AND u.date = 20140415 AND t.date = 20140415
Query
SELECT s.user_id, COUNT(*) AS count
FROM stream s
JOIN track t ON t.track_id = s.track_id
JOIN user u ON u.user_id = s.user_id
WHERE lower(get_json_object(t.json_data, '$.album.artistname'))
= 'timbuktu'
AND u.country = 'PL'
AND s.date BETWEEN 20130212 AND 20140415
AND u.date = 20140415 AND t.date = 20140415
GROUP BY s.user_id
Query
SELECT s.user_id, COUNT(*) AS count
FROM stream s
JOIN track t ON t.track_id = s.track_id
JOIN user u ON u.user_id = s.user_id
WHERE lower(get_json_object(t.json_data, '$.album.artistname'))
= 'timbuktu'
AND u.country = 'PL'
AND s.date BETWEEN 20130212 AND 20140415
AND u.date = 20140415 AND t.date = 20140415
GROUP BY s.user_id
ORDER BY count DESC
LIMIT 100;
Query
SELECT s.user_id, COUNT(*) AS count
FROM stream s
JOIN track t ON t.track_id = s.track_id
JOIN user u ON u.user_id = s.user_id
WHERE lower(get_json_object(t.json_data, '$.album.artistname'))
= 'timbuktu'
AND u.country = 'PL'
AND s.date BETWEEN 20130212 AND 20140415
AND u.date = 20140415 AND t.date = 20140415
GROUP BY s.user_id
ORDER BY count DESC
LIMIT 100;
Query
A line where I may have a bug?!
HiveQL
Unit Testing
hive_test
Verbose and complex Java code
■ A Hive analyst is usually not a Java developer
■ Hive queries are often ad-hoc
- Little incentive to unit test
Testing Hive Queries
■ Easy unit testing of Hive queries locally
- Implemented by me
- Available at github.com/kawaa/Beetest
Beetest
■ A unit test consists of
- select.hql - a query to test
Beetest
■ A unit test consists of
- select.hql - a query to test
- table.ddl - schemas of input tables
Beetest
■ A unit test consists of
- select.hql - a query to test
- table.ddl - schemas of input tables
- text files with input data
Beetest
■ A unit test consists of
- select.hql - a query to test
- table.ddl - schemas of input tables
- text files with input data
- expected.txt - expected output
Beetest
■ A unit test consists of
- select.hql - a query to test
- table.ddl - schemas of input tables
- text files with input data
- expected.txt - expected output
- (optional) setup.hql - any initialization query
Beetest
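select.hql itself is not reproduced in the deck; a hedged sketch of what the timbuktu test could contain (the full query from the previous section, with the date predicates dropped because every sample row carries the same date):
SELECT s.user_id, COUNT(*) AS count
FROM stream s
JOIN track t ON t.track_id = s.track_id
JOIN user u ON u.user_id = s.user_id
WHERE lower(get_json_object(t.json_data, '$.album.artistname')) = 'timbuktu'
AND u.country = 'PL'
GROUP BY s.user_id;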
table.ddl
stream(track_id STRING, user_id STRING, date BIGINT)
user(user_id STRING, country STRING, date BIGINT)
track(track_id STRING, json_data STRING, date BIGINT)
For Each Line
1. Creates a table with a given schema
- In local Hive database called beetest
table.ddl
stream(track_id STRING, user_id STRING, date BIGINT)
user(user_id STRING, country STRING, date BIGINT)
track(track_id STRING, json_data STRING, date BIGINT)
For Each Line
1. Creates a table with a given schema
- In local Hive database called beetest
2. Loads table.txt file into the table
- A list of files can also be explicitly specified
table.ddl
stream(track_id STRING, user_id STRING, date BIGINT)
user(user_id STRING, country STRING, date BIGINT)
track(track_id STRING, json_data STRING, date BIGINT)
Input Files
t1 {"album":{"artistname":"Coldplay"}} 20140415
t2 {"album":{"artistname":"Timbuktu"}} 20140415
t3 {"album":{"artistname":"Timbuktu"}} 20140415
track.txt
Input Files
u1 PL 20140415
u2 PL 20140415
u3 SE 20140415
t1 {"album":{"artistname":"Coldplay"}} 20140415
t2 {"album":{"artistname":"Timbuktu"}} 20140415
t3 {"album":{"artistname":"Timbuktu"}} 20140415
user.txt
track.txt
Input Files
u1 PL 20140415
u2 PL 20140415
u3 SE 20140415
t1 u1 20140415
t3 u1 20140415
t2 u1 20140415
t3 u2 20140415
t2 u3 20140415
t1 {"album":{"artistname":"Coldplay"}} 20140415
t2 {"album":{"artistname":"Timbuktu"}} 20140415
t3 {"album":{"artistname":"Timbuktu"}} 20140415
stream.txt
user.txt
track.txt
Expected Output File
u1 2
u2 1
expected.txt
t1 u1 20140415
t3 u1 20140415
t2 u1 20140415
t3 u2 20140415
t2 u3 20140415
stream.txt
$ ./run-test.sh timbuktu
Running Unit Test
$ ./run-test.sh timbuktu
INFO: Generated query filename: /tmp/beetest-test-1934426026-query.hql
INFO: Running: hive --config local-config -f /tmp/beetest-test-
1934426026-query.hql
Running Unit Test
$ ./run-test.sh timbuktu
INFO: Generated query filename: /tmp/beetest-test-1934426026-query.hql
INFO: Running: hive --config local-config -f /tmp/beetest-test-
1934426026-query.hql
INFO: Loading data to table beetest.stream
…
INFO: Total MapReduce jobs = 2
INFO: Job running in-process (local Hadoop)
…
Running Unit Test
$ ./run-test.sh timbuktu
INFO: Generated query filename: /tmp/beetest-test-1934426026-query.hql
INFO: Running: hive --config local-config -f /tmp/beetest-test-
1934426026-query.hql
INFO: Loading data to table beetest.stream
…
INFO: Total MapReduce jobs = 2
INFO: Job running in-process (local Hadoop)
…
INFO: Execution completed successfully
INFO: Table beetest.output_1934426026 stats: …
INFO: Time taken: 34.605 seconds
Running Unit Test
$ ./run-test.sh timbuktu
INFO: Generated query filename: /tmp/beetest-test-1934426026-query.hql
INFO: Running: hive --config local-config -f /tmp/beetest-test-
1934426026-query.hql
INFO: Loading data to table beetest.stream
…
INFO: Total MapReduce jobs = 2
INFO: Job running in-process (local Hadoop)
…
INFO: Execution completed successfully
INFO: Table beetest.output_1934426026 stats: …
INFO: Time taken: 34.605 seconds
INFO: Asserting: timbuktu/expected.txt and /tmp/beetest-test-1934426026-
output_1934426026/000000_0
INFO: Test passed!
Running Unit Test
Bee test
Be happy!
HiveQL
Sampling
■ TABLESAMPLE can generate samples of
- buckets
- HDFS blocks
- first N records from each input split
Sampling In Hive
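For reference, the three variants look roughly like this (a hedged sketch; the bucket count and sample sizes are illustrative). The next slides explain why none of them fits this use case:
SELECT * FROM stream TABLESAMPLE(BUCKET 1 OUT OF 32 ON user_id) s; -- bucket sampling
SELECT * FROM stream TABLESAMPLE(0.1 PERCENT) s; -- HDFS block sampling
SELECT * FROM stream TABLESAMPLE(1000 ROWS) s; -- first N rows per input split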
✗ Bucket sampling
- Table must be bucketed by a given column
✗ HDFS block sampling
- Only CombineHiveInputFormat is supported
TABLESAMPLE Limitations
SELECT * FROM user TABLESAMPLE(K ROWS)
SELECT * FROM track TABLESAMPLE(L ROWS)
SELECT * FROM stream TABLESAMPLE(M ROWS)
SELECT … stream JOIN track ON … JOIN user ON …
✗ No guarantee that JOIN returns anything
Rows TABLESAMPLE
■ DataFu + Pig offer interesting sampling algorithms
- Weighted Random Sampling
- Reservoir Sampling
- Consistent Sampling By Key
■ HCatalog can provide integration
DataFu And Pig
DEFINE SBK datafu.pig.sampling.SampleByKey('0.01');
stream_s = FILTER stream BY SBK(user_id);
user_s = FILTER user BY SBK(user_id);
Consistent Sampling By Key
Threshold
DEFINE SBK datafu.pig.sampling.SampleByKey('0.01');
stream_s = FILTER stream BY SBK(user_id);
user_s = FILTER user BY SBK(user_id);
stream_user_s = JOIN user_s BY user_id,
stream_s BY user_id;
✓ Guarantees that JOIN returns rows after sampling
- Assuming that the threshold is high enough
Consistent Sampling By Key
Threshold
DEFINE SBK datafu.pig.sampling.SampleByKey('0.01');
stream_s = FILTER stream BY SBK(user_id);
user_s = FILTER user BY SBK(user_id);
user_s_pl = FILTER user_s BY country == 'PL';
stream_user_s_pl = JOIN user_s_pl BY user_id,
stream_s BY user_id;
✗ Does not guarantee that Polish users will be included
Consistent Sampling By Key
Threshold
✗ None of the sampling methods meet all of my
requirements
Try and see
■ Use SampleByKey and 1% of a day’s worth of data
- Maybe it will be good enough?
What’s Next?
1. Finished successfully!
2. Returned 3 Polish users with 4 Timbuktu tracks
- My username was not included :(
Running Query
?
HiveQL
Understanding The Plan
■ Shows how Hive translates queries into MR jobs
- Useful, but tricky to understand
EXPLAIN
■ 330 lines
■ 11 stages
- Conditional operators and backup stages
- MapReduce jobs, Map-Only jobs
- MapReduce Local Work
■ 4 MapReduce jobs
- Two of them needed for JOINs
EXPLAIN The Query
Optimized Query
2 MapReduce jobs in total
hive.auto.convert.join.noconditionaltask.size
> filesize(user) + filesize(track)
No Conditional Map Join
Runs many Map joins in a Map-Only job
[HIVE-3784]
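A hedged sketch of the settings behind this behaviour (the threshold value is illustrative):
SET hive.auto.convert.join=true; -- convert joins to map joins automatically
SET hive.auto.convert.join.noconditionaltask=true; -- no conditional backup stages
SET hive.auto.convert.join.noconditionaltask.size=100000000; -- must exceed filesize(user) + filesize(track)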
1. Local Work
- Fetch records from small table(s) from HDFS
- Build hash table
- If not enough memory, then abort
Map Join Procedure
1. Local Work
- Fetch records from small table(s) from HDFS
- Build hash table
- If not enough memory, then abort
2. MapReduce Driver
- Add hash table to the distributed cache
Map Join Procedure
1. Local Work
- Fetch records from small table(s) from HDFS
- Build hash table
- If not enough memory, then abort
2. MapReduce Driver
- Add hash table to the distributed cache
3. Map Task
- Read hash table from the distributed cache
- Stream through the (supposedly) largest table
- Match records and write to output
Map Join Procedure
1. Local Work
- Fetch records from small table(s) from HDFS
- Build hash table
- If not enough memory, then abort
2. MapReduce Driver
- Add hash table to the distributed cache
3. Map Task
- Read hash table from the distributed cache
- Stream through the (supposedly) largest table
- Match records and write to output
4. No Reduce Task
Map Join Procedure
■ Table has to be downloaded from HDFS
- Problem if this step takes long
■ Hash table has to be uploaded to HDFS
- Replicated via the distributed cache
- Fetched by hundreds of nodes
Map Join Cons
Grouping After Joining
Runs as a single
MR job
[HIVE-3952]
Optimized Query
2 MapReduce jobs in total
HiveQL
Running At Scale
2014-05-05 21:21:13 Starting to launch local task
to process map join; maximum memory = 2GB
^Ckawaa@sol:~/timbuktu$ date
Mon May 5 21:50:26 UTC 2014
Day’s Worth Of Data
✗ Dimension tables too big to read quickly from HDFS
✓ Add a preprocessing step
- Apply projection and filtering in a Map-only job
- Makes the script ugly with intermediate tables (sketch below)
Too Big Tables
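A hedged sketch of such a preprocessing step (the intermediate table names are invented):
CREATE TABLE track_timbuktu AS -- Map-only job
SELECT track_id
FROM track
WHERE date = 20140415
AND lower(get_json_object(json_data, '$.album.artistname')) = 'timbuktu';

CREATE TABLE user_pl AS -- Map-only job
SELECT user_id
FROM user
WHERE date = 20140415
AND country = 'PL';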
✓ 15s to build hash tables!
✗ 8m 23s to run 2 Map-only jobs to prepare tables
- “Investment”
Day’s Worth Of Data
■ Map tasks
FATAL [main] org.apache.hadoop.mapred.YarnChild:
Error running child : java.lang.OutOfMemoryError: Java heap space
Day’s Worth Of Data
■ Increase JVM heap size
■ Decrease the size of in-memory map output buffer
- mapreduce.task.io.sort.mb
Possible Solution
■ Increase JVM heap size
■ Decrease the size of in-memory map output buffer
- mapreduce.task.io.sort.mb
(Temporary) Solution
My query generates a small amount of intermediate data
- MAP_OUTPUT_* counters
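A hedged sketch of these two knobs (the values are illustrative, not the ones from the talk):
SET mapreduce.map.memory.mb=2048; -- YARN container size
SET mapreduce.map.java.opts=-Xmx1792m; -- bigger JVM heap
SET mapreduce.task.io.sort.mb=64; -- smaller map output buffer, safe here since MAP_OUTPUT_* counters are small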
✓ Works!
✗ 2-3x slower than two Regular joins
Second Challenge
Metric (the slowest job)   Regular Join   Map Join
Shuffled Bytes             12.5 GB        0.56 MB
CPU                        120 M          331 M
GC                         13 M           258 M
GC / CPU                   0.11           0.78
Wall-clock                 11m 47s        31m 30s
Heavy Garbage Collection!
■ Increase the container and heap sizes
- 4096 MB and -Xmx3584m (sketch after the table)
Fixing Heavy GC
Metric                       Regular Join   Map Join
GC / CPU (the slowest job)   0.11           0.03
CPU (whole query)            1d 17m         23h 8m
Wall-clock (whole query)     20m 15s        14m 42s
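A hedged sketch of the fix with the values quoted above (the property names are my assumption):
SET mapreduce.map.memory.mb=4096; -- container size
SET mapreduce.map.java.opts=-Xmx3584m; -- JVM heap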
■ 1st MapReduce Job
- Heavy map phase
- No GC issues any more
- Tweak input split size
- Lightweight shuffle
- Lightweight reduce
- Use 1-2 reduce tasks
Where To Optimize?
■ 1st MapReduce Job
- Heavy map phase
- No GC issues any more
- Tweak input split size
- Lightweight shuffle
- Lightweight reduce
- Use 1-2 reduce tasks
■ 2nd MapReduce Job
- Very small
- Enable uberization [HIVE-5857]
Where To Optimize?
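A hedged sketch of the knobs named above (property names and values are assumptions):
SET mapreduce.input.fileinputformat.split.maxsize=536870912; -- bigger splits, fewer map tasks
SET mapred.reduce.tasks=2; -- lightweight reduce phase
SET mapreduce.job.ubertask.enable=true; -- uberize the tiny 2nd job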
Current Best Time
2 months of data
50 min 2 sec
10th place
?
Changes are needed!
File Format
ORC
■ The biggest table has 52 “real” columns
- Only 2 of them are needed
- 2 columns are “partition” columns
Observation
✓ Improves query speed
- Only reads required columns
- Especially important when reading remotely
My Benefits Of ORC
✗ Need to convert Avro to ORC
✓ ORC consumes less space
- Efficient encoding and compression
ORC Storage Consumption
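The conversion itself is not shown; a hedged sketch with a reduced column set (the table name is invented):
CREATE TABLE stream_orc (track_id STRING, user_id STRING)
PARTITIONED BY (date BIGINT)
STORED AS ORC;

SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE stream_orc PARTITION (date)
SELECT track_id, user_id, date FROM stream;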
Storage And Access: 16x
Wall Clock Time: 3.5x
CPU Time: 32x
■ Use Snappy compression
- ZLIB was used to optimize more for storage than
processing time
Possible Improvements
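A hedged sketch; the orc.compress table property picks the codec:
CREATE TABLE stream_orc_snappy (track_id STRING, user_id STRING)
PARTITIONED BY (date BIGINT)
STORED AS ORC TBLPROPERTIES ("orc.compress" = "SNAPPY"); -- ZLIB is the default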
Computation
Apache Tez
■ Tez will probably not help much, because…
- Only two MapReduce jobs
- Heavy work done in map tasks
- Little intermediate data written to HDFS
Thoughts?
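For reference, switching the engine is a single setting, assuming Tez is installed on the cluster:
SET hive.execution.engine=tez; -- and back: SET hive.execution.engine=mr;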
Wall Clock Time: 1.4x, 2.4x
Wall Clock Time: 8x
✓ Natural DAG
- No “empty” map task
- No necessity to write intermediate data to HDFS
- No job scheduling overhead
Benefits Of Tez
✓ Dynamic graph reconfiguration
- Decrease reduce task parallelism at runtime
- Pre-launch reduce tasks at runtime
Benefits Of Tez
✓ Smart number of map tasks based on e.g.
- Min/max limits on input split size
- Capacity utilization
Benefits Of Tez
Container Reuse
Container Reuse
The more congested the queue/cluster, the bigger the benefits of reuse
Container Reuse
No scheduling overhead to run a new reduce task
Container Reuse
Thinner tasks help avoid stragglers
Container + Objects Reuse
Finished within 1.5 sec
Warm!
■ Much easier to follow!
Map 1 <- Map 4 (BROADCAST_EDGE), Map 5 (BROADCAST_EDGE)
Reducer 2 <- Map 1 (SIMPLE_EDGE)
Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
■ A dot file for graphviz is also generated
EXPLAIN
■ Improved version of Map Join
■ Hash table is built in the cluster
- Parallelism
- Data locality
- No local and HDFS write barriers
■ Hash table is broadcast to downstream tasks
- Tez containers can cache and reuse it
Broadcast Join
■ Try running the query that was interrupted by ^C
- On 1 day of data to introduce overhead
Broadcast Join
Metric       Avro + MR                       Tez + ORC
Wall-clock   UNKNOWN (^C after 30 minutes)   11m 16s
✓ Automatically rescales the in-memory buffer for intermediate key-value pairs
✓ Reduced in-memory size of map-join hash tables
Memory-Related Benefits
■ At scale, Tez Application Master can be busy
- DEBUG log level slows everything down
- Sometimes needs more RAM
■ A couple of tools are missing
- e.g. counters, Web UI
- CLI, log parser, jstat are your friends
Lessons Learned
Feature
Vectorization
✓ Processes data in batches of 1024 rows
- Instead of one row at a time
✓ Reduces CPU usage
✓ Scales very well especially with large datasets
- Kicks in after a number of function calls
✓ Supported by ORC
Vectorization
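Turning it on is one setting (a hedged sketch; requires ORC input, as noted above):
SET hive.vectorized.execution.enabled=true; -- process batches of 1024 rows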
Powered By Vectorization: 1.4x
✗ Not yet fully supported by vectorization
✓ Still less GC
✓ Still better use of memory to CPU bandwidth
✗ Cost of switching from vectorized to row-mode
✓ It will be revised soon by the community
Vectorized Map Join
■ Ensure that vectorization is fully or partially
supported for your query
- Datatypes
- Operators
■ Run your own benchmarks and profilers
Lessons Learned
Feature
Table Statistics
✓ Table, partition and column statistics
✓ Possibility to
- collect them when loading data
- generate them for existing tables
- use them to produce an optimal query plan
✓ Supported by ORC
Table Statistics
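A hedged sketch of generating and using statistics for an existing table (table and column names follow the earlier examples):
ANALYZE TABLE stream_orc PARTITION (date) COMPUTE STATISTICS;
ANALYZE TABLE stream_orc PARTITION (date) COMPUTE STATISTICS FOR COLUMNS user_id, track_id;
SET hive.stats.autogather=true; -- collect stats when loading data
SET hive.stats.fetch.column.stats=true; -- use column stats when planning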
Powered By Statistics
Period       ORC/Tez                Vectorized ORC/Tez     Vectorized ORC/Tez + Stats
2 weeks      173 sec / 1.22K maps   225 sec / 1.22K maps   129 sec / 0.9K maps
6.5 months   618 sec / 13.3K maps   653 sec / 13.3K maps   468 sec / 9.4K maps
14 months    931 sec / 24.8K maps   676 sec / 24.8K maps   611 sec / 17.9K maps
Current Best Time
14 months of data
10 min 11 sec
?
Results
Challenge
■ Who is the biggest fan of Timbuktu in Poland?
Results!
■ Who is the biggest fan of Timbuktu in Poland?
Results!
That’s all!
■ Deals at 5 AM might be the best idea ever
■ Hive can be a single solution for all SQL queries
- Both interactive and batch
- Regardless of input dataset size
■ Tez is really MapReduce v2
- Very flexible and smart
- Improves what was inefficient with MapReduce
- Easy to deploy
Summary
■ Gopal Vijayaraghavan
- Technical answers to Tez-related questions
- Installation scripts
github.com/t3rmin4t0r/tez-autobuild
■ My colleagues for technical review and feedback
- Piotr Krewski, Rafal Wojdyla, Uldis Barbans,
Magnus Runesson
Special Thanks
Questions?
Thank you!