The How and Why of Fast Data Analytics with Apache Spark
with Justin Pihony (@JustinPihony)
Today’s agenda:
▪ Concerns
▪ Why Spark?
▪ Spark basics
▪ Common pitfalls
▪ We can help!
Target Audience
Concerns
▪ Am I too small?
▪ Will switching be too costly?
▪ Can I utilize my current infrastructure?
▪ Will I be able to find developers?
▪ Are there enough resources available?
Why Spark?
grep?
Why Spark?
Tiny Code (Spark, Scala):

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCount {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("wordcount")
        val sc = new SparkContext(conf)
        sc.textFile(args(0))              // read input lines
          .flatMap(_.split(" "))          // split into words
          .map(word => (word, 1))         // pair each word with a count
          .reduceByKey(_ + _)             // sum the counts per word
          .saveAsTextFile(args(1))        // write the results
      }
    }

Big Code (Hadoop MapReduce, Java):

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class WordCount {
      public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          String line = value.toString();
          StringTokenizer tokenizer = new StringTokenizer(line);
          while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
          }
        }
      }

      public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
      }
    }
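For context, either job is launched from the command line; a minimal sketch for the Spark version (the jar name, master URL, and paths are hypothetical):

    spark-submit \
      --class WordCount \
      --master spark://master:7077 \
      wordcount.jar \
      hdfs:///input/books.txt hdfs:///output/wordcounts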
Why Spark?
▪ Readability
▪ Expressiveness
▪ Fast
▪ Testability
▪ Interactive
▪ Fault Tolerant
▪ Unify Big Data
The MapReduce Explosion
“Spark will kill MapReduce,
but save Hadoop.”
- http://insidebigdata.com/2015/12/08/big-data-industry-predictions-2016/
Big Data Unified API
▪ Spark Core
▪ Spark SQL
▪ Spark Streaming
▪ MLlib (machine learning)
▪ GraphX (graph)
▪ DataFrames
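To make the "unified" claim concrete, here is a minimal sketch, assuming the Spark 1.x-era API this 2016 deck targets, of the Core RDD API and the DataFrame/SQL API sharing one SparkContext (the file path and column names are illustrative):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object UnifiedSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("unified").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        // Spark Core: a plain RDD of lines
        val lines = sc.textFile("events.log")

        // DataFrames / Spark SQL: the same data, now with a schema,
        // running on the same engine and the same SparkContext
        val df = lines.map(line => (line, line.length)).toDF("line", "length")
        df.filter($"length" > 80).show()

        sc.stop()
      }
    }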
Who Is Using Spark?
Yahoo!
Spark Mechanics
(diagram: one Driver coordinating three Workers)
Spark Mechanics
(diagram: the Driver hosts the Spark Context, which dispatches work to the three Workers)
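In code, a minimal sketch of these mechanics (the master URL is hypothetical): the driver program creates the SparkContext, and work defined through it is split into tasks that run on the workers.

    import org.apache.spark.{SparkConf, SparkContext}

    object DriverSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("mechanics-demo")
          .setMaster("spark://master:7077") // hypothetical cluster; "local[*]" runs on one machine
        val sc = new SparkContext(conf)     // the driver owns the SparkContext

        // This computation is broken into tasks and scheduled across the workers.
        val total = sc.parallelize(1 to 1000000).map(_.toLong).sum()
        println(total)

        sc.stop()
      }
    }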
Spark Context
▪ Task creator
▪ Scheduler
▪ Data locality
▪ Fault tolerance
RDD
▪ Resilient Distributed Dataset
▪ Transformations
- map
- filter
- …
▪ Actions
- collect
- count
- reduce
- …
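A minimal sketch of the distinction, assuming a spark-shell session where sc is already defined: transformations only describe a new RDD, and nothing executes until an action asks for a result.

    val nums    = sc.parallelize(1 to 100)   // base RDD
    val evens   = nums.filter(_ % 2 == 0)    // transformation: lazy
    val squares = evens.map(n => n * n)      // transformation: still lazy
    val total   = squares.reduce(_ + _)      // action: triggers the computation
    // If a partition is lost, Spark rebuilds it from this lineage (the "resilient" in RDD).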
Expressive and Interactive
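For instance, in the spark-shell REPL (where sc comes preconfigured) an exploratory word count collapses to a single line; the file path here is hypothetical:

    scala> sc.textFile("README.md").flatMap(_.split(" ")).countByValue().take(5)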
Built-in UI
Common Pitfalls
▪ Functional
▪ Out of memory
▪ Debugging
▪ …
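A hedged sketch of two of these pitfalls, with hypothetical names and paths: collect() funnels an entire RDD back to the driver (a classic out-of-memory trigger), and because Spark serializes closures and ships them to the workers, capturing a non-serializable object fails at runtime.

    val big = sc.textFile("hdfs:///logs/huge.log")

    // Pitfall: out of memory on the driver
    // val everything = big.collect()  // materializes the whole dataset in driver memory
    val preview = big.take(10)         // pull back only what you actually need

    // Pitfall: functional closures are serialized to the workers
    class Parser { def parse(s: String): String = s.trim } // not Serializable
    val parser = new Parser
    // big.map(parser.parse)           // fails: Task not serializable
    big.map(_.trim)                    // keep closures free of non-serializable state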
Concerns
▪ Am I too small?
▪ Will switching from MapReduce be too costly?
▪ Can I utilize my current infrastructure?
▪ Will I be able to find developers?
▪ Are there enough resources available?
Q & A
EXPERT SUPPORT
Why Contact Typesafe for Your Apache Spark Project?
Ignite your Spark project with 24/7 production SLA, unlimited expert support and on-site training:
• Full application lifecycle support for Spark Core, Spark SQL & Spark Streaming
• Deployment to Standalone, EC2, Mesos clusters
• Expert support from a dedicated Spark team
• Optional 10-day “getting started” services package
Typesafe is a partner with Databricks, Mesosphere and IBM.
©Typesafe 2016 – All Rights Reserved
Are you tired of struggling with your existing data analytics applications?

When MapReduce first emerged, it was a great boon to the big data world, but modern big data processing demands have outgrown that framework.

That’s where Apache Spark steps in, boasting speeds 10-100x faster than Hadoop and setting the world record in large-scale sorting. Spark’s general abstraction means it can expand beyond simple batch processing, making it capable of such things as blazing-fast iterative algorithms and exactly-once streaming semantics. This, combined with its interactive shell, makes it a powerful tool for everybody, from data tinkerers to data scientists to data developers.

Published in: Software