This session covers our experience using the Spark and Shark frameworks to run real-time queries on top of Cassandra data. We will start by surveying the current Cassandra analytics landscape, including Hadoop and HIVE, and touch on the use of custom input formats to extract data from Cassandra. We will then dive into Spark and Shark, two memory-based cluster computing frameworks, and show how they enable often dramatic improvements in query speed and productivity over today's standard solutions.
2. Who is this guy?
• Staff Engineer, Compute and Data Services, Ooyala
• Building multiple web-scale real-time systems on top of C*, Kafka, Storm, etc.
• Scala/Akka guy
• Very excited by open source and big data projects - will share some today
• @evanfchan
3. Agenda
• Ooyala and Cassandra
• What problem are we trying to solve?
• Spark and Shark
• Our Spark/Cassandra Architecture
• Demo
6. Company Overview
• Founded in 2007, commercially launched in 2009
• 230+ employees in Silicon Valley, LA, NYC, London, Paris, Tokyo, Sydney & Guadalajara
• Global footprint: 200M unique users, 110+ countries, and more than 6,000 websites
• Over 1 billion videos played per month and 2 billion analytic events per day
• 25% of U.S. online viewers watch video powered by Ooyala
8. We are a large Cassandra user
• 12 clusters ranging in size from 3 to 115 nodes
• Total of 28TB of data managed over ~200 nodes
• Largest cluster - 115 nodes, 1.92PB storage, 15TB RAM
• Over 2 billion C* column writes per day
• Powers all of our analytics infrastructure
9. What problem are we trying to solve?
Lots of data, complex queries, answered really quickly... but how??
12. To nuggets of truth...
• Quickly
• Painlessly
• At scale?
13. Today: Precomputed aggregates
• Video metrics computed along several high cardinality dimensions
• Very fast lookups, but inflexible and hard to change
• Most computed aggregates are never read
• What if we need more dynamic queries?
– Top content for mobile users in France
– Engagement curves for users who watched recommendations
– Data mining, trends, machine learning
14. The static-dynamic continuum
• 100% Precomputation: super-fast lookups; inflexible and wasteful; best for the 80% most common queries
• 100% Dynamic: always compute results from raw data; flexible but slow
15. Where we want to be
Partly dynamic
• Pre-aggregate the most common queries
• Flexible, fast dynamic queries
• Easily generate many materialized views
16. Industry Trends
• Fast execution frameworks
– Impala
• In-memory databases
– VoltDB, Druid
• Streaming and real-time
• Higher-level, productive data frameworks
– Cascading, Hive, Pig
17. Why Spark and Shark?
“Lightning-fast in-memory cluster computing”
18. Introduction to Spark
• In-memory distributed computing framework
• Created by the UC Berkeley AMPLab in 2010
• Targeted problems that MR is bad at:
– Iterative algorithms (machine learning)
– Interactive data mining
• More general purpose than Hadoop MR
• Active contributions from ~15 companies
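
As a toy illustration of the iterative case (not from the original deck): once the working set is cached, each pass reads from executor memory instead of re-scanning the source. The file path, parsing, and threshold loop below are made up for illustration.

import org.apache.spark.SparkContext

// Minimal sketch: cache a parsed dataset once, then iterate over it in memory.
val sc = new SparkContext("local[4]", "iterative-sketch")
val points = sc.textFile("hdfs://.../points.csv")
               .map(_.split(",").map(_.toDouble))
               .cache()                      // keep the parsed data in executor memory

var threshold = 0.0
for (i <- 1 to 10) {
  // each pass scans the cached RDD, not HDFS
  val above = points.filter(p => p.sum > threshold).count()
  println(s"pass $i: $above points above threshold $threshold")
  threshold += 1.0
}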
21. Developers love it
• “I wrote my first aggregation job in 30 minutes”
• High level “distributed collections” API
• No Hadoop cruft
• Full power of Scala, Java, Python
• Interactive REPL shell
• EASY testing!!
• Low latency - quick development cycles
22. Spark word count example
val file = spark.textFile("hdfs://...")
val counts = file.flatMap(line => line.split(" "))
                 .map(word => (word, 1))
                 .reduceByKey(_ + _)
For comparison, the same word count as a Hadoop MapReduce job:

package org.myorg;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

  public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String line = value.toString();
      StringTokenizer tokenizer = new StringTokenizer(line);
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    Job job = new Job(conf, "wordcount");

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);

    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.waitForCompletion(true);
  }
}
23. The Spark Ecosystem
• Spark (core)
• Bagel - Pregel on Spark
• HIVE on Spark (Shark)
• Spark Streaming - discretized stream processing
• Tachyon - in-memory caching DFS
24. Shark - HIVE on Spark
• 100% HiveQL compatible
• 10-100x faster than HIVE, answers in seconds
• Reuse UDFs, SerDes, StorageHandlers
• Can use DSE / CassandraFS for the Hive Metastore
• Easy Scala/Java integration via Spark - easier than writing UDFs
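
As a rough illustration of that last point, here is a minimal sketch of driving Shark from Scala instead of writing UDFs. It assumes Shark's Scala entry points (SharkEnv.initWithSharkContext, sql, sql2rdd) and the convention in Shark releases of this era that a table name ending in _cached is kept in memory; the table and column names are made up, so treat it as a sketch rather than the actual demo code.

import shark.{SharkContext, SharkEnv}

// Sketch only: video_events, top_content_cached, and content_id are illustrative names.
val sc: SharkContext = SharkEnv.initWithSharkContext("shark-from-scala")

// Plain HiveQL; a *_cached table is held in Spark's memory by Shark.
sc.sql("CREATE TABLE top_content_cached AS " +
       "SELECT content_id, COUNT(*) AS plays FROM video_events GROUP BY content_id")

// sql2rdd hands the result back as a Spark RDD, so further processing stays in Scala.
val hot = sc.sql2rdd("SELECT content_id, plays FROM top_content_cached WHERE plays > 1000")
println(hot.count())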
25. Our new analytics architecture
How we integrate Cassandra and Spark/Shark
26. From raw events to fast queries
Raw events → Ingestion → C* event store → Spark workers → View 1 / View 2 / View 3 → Spark / Shark → predefined queries and ad-hoc HiveQL
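
To make the left half of that pipeline concrete, here is a hedged sketch of pulling raw events out of C* into a Spark RDD and caching a simple view. It uses Cassandra's stock ColumnFamilyInputFormat and ConfigHelper (Cassandra 1.x era) as a stand-in for the custom input format this talk describes; the keyspace, column family, host, and the toy aggregation are all illustrative.

import java.nio.ByteBuffer
import java.util.SortedMap

import org.apache.cassandra.db.IColumn
import org.apache.cassandra.hadoop.{ColumnFamilyInputFormat, ConfigHelper}
import org.apache.cassandra.thrift.{SlicePredicate, SliceRange}
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

val sc = new SparkContext("local[4]", "cassandra-to-view")

// Hadoop job configuration tells the input format where and what to read.
val job = new Job()
val conf = job.getConfiguration
ConfigHelper.setInputInitialAddress(conf, "127.0.0.1")
ConfigHelper.setInputRpcPort(conf, "9160")
ConfigHelper.setInputPartitioner(conf, "Murmur3Partitioner")
ConfigHelper.setInputColumnFamily(conf, "analytics", "raw_events")

// Empty start/finish means "all columns", capped at 100 per row for the sketch.
val predicate = new SlicePredicate().setSlice_range(
  new SliceRange(ByteBuffer.wrap(Array[Byte]()), ByteBuffer.wrap(Array[Byte]()), false, 100))
ConfigHelper.setInputSlicePredicate(conf, predicate)

// Each element is (row key, columns for that row).
val rows = sc.newAPIHadoopRDD(conf, classOf[ColumnFamilyInputFormat],
  classOf[ByteBuffer], classOf[SortedMap[ByteBuffer, IColumn]])

// Materialize a trivial view (events per row key) and keep it cached for queries.
val eventsPerKey = rows.mapValues(_.size).cache()
println(eventsPerKey.count())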
33. Tips for InputFormat Development
• Know which target platforms you are developing for
– Which API to write against? New? Old? Both?
• Be prepared to spend time tuning your split computation
– Low latency jobs require fast splits
• Consider sorting row keys by token for data locality
• Implement predicate pushdown for HIVE SerDes
– Use your indexes to reduce the size of the dataset
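
A skeletal sketch of where those concerns land in a new-API InputFormat. Everything Cassandra-specific here (TokenRange, describeRing, estimateSize) is a placeholder rather than a real client API; only the Hadoop interfaces are real.

import java.util.{List => JList}
import scala.collection.JavaConverters._

import org.apache.hadoop.io.Text
import org.apache.hadoop.mapreduce.{InputFormat, InputSplit, JobContext, RecordReader, TaskAttemptContext}

// Placeholder for whatever the cluster client returns when asked for token ranges.
case class TokenRange(startToken: String, endToken: String, replicas: Seq[String])

// A split that remembers which replicas own its token range, so the scheduler can
// place tasks for data locality.
class TokenRangeSplit(val range: TokenRange, val estimatedLength: Long)
    extends InputSplit with java.io.Serializable {
  override def getLength: Long = estimatedLength
  override def getLocations: Array[String] = range.replicas.toArray
}

class SketchCassandraInputFormat extends InputFormat[Text, Text] {

  // Split computation runs on the driver before any task starts, so for low-latency
  // jobs it has to be cheap: one describe-ring call, sorted by token, rather than
  // per-split round trips to the cluster.
  override def getSplits(context: JobContext): JList[InputSplit] = {
    val ranges = describeRing()          // placeholder: fetch token ranges once
    ranges.sortBy(_.startToken)          // contiguous, token-sorted ranges help locality
          .map(r => new TokenRangeSplit(r, estimateSize(r)): InputSplit)
          .asJava
  }

  override def createRecordReader(split: InputSplit,
                                  context: TaskAttemptContext): RecordReader[Text, Text] =
    throw new UnsupportedOperationException("sketch only")

  // Placeholders standing in for real cluster metadata calls.
  private def describeRing(): Seq[TokenRange] = Seq.empty
  private def estimateSize(r: TokenRange): Long = 0L
}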
36. Spark: Under the hood
The driver coordinates parallel Map → Dataset → Reduce → Map pipelines across the worker nodes, with one executor process per node.
37. Fault Tolerance
• Cached dataset lives in the Java heap only - what if the process dies?
• Spark lineage - automatic recomputation from source, but this is expensive!
• Can also replicate the cached dataset to survive single-node failures
• Persist materialized views back to C*, then load into cache -- now the recovery path is much faster
• Persistence also enables multiple processes to hold the cached dataset
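
A minimal sketch of the replication option using Spark's built-in storage levels; the toy "view" built here is illustrative only.

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.storage.StorageLevel

// MEMORY_ONLY_2 keeps two in-memory copies of every partition, so a single node
// failure does not force recomputation from source via lineage.
val sc = new SparkContext("local[4]", "fault-tolerance-sketch")
val view = sc.parallelize(1 to 1000000)
             .map(i => (i % 100, 1))
             .reduceByKey(_ + _)
             .persist(StorageLevel.MEMORY_ONLY_2)
println(view.count())

// For the faster recovery path described above, the same view would also be written back
// to C* (through an OutputFormat or the application's own writer) and reloaded into the
// cache after a failure, instead of being recomputed from raw events.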
39. Shark Demo
• Local Shark node, 1 core, MBP
• How to create a table from C* using our InputFormat
• Creating a cached Shark table
• Running fast queries
40. Backup Slides
• THE NEXT FEW SLIDES ARE STRICTLY BACKUP IN CASE THE LIVE DEMO DOESN’T WORK