Flink internals

Kostas Tzoumas
Flink committer & Co-founder, data Artisans
ktzoumas@apache.org
@kostas_tzoumas
Welcome 
§ Last talk: how to program PageRank in Flink, 
and Flink programming model 
§ This talk: how Flink works internally 
§ Again, a big bravo to the Flink community 
2
Recap: 
Using Flink 
3
DataSet and transformations 
Input X First Y Second 
Operator X Operator Y 
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

DataSet<String> input = env.readTextFile(input);

DataSet<String> first = input
  .filter(str -> str.contains("Apache Flink"));

DataSet<String> second = first
  .filter(str -> str.length() > 40);

second.print();
env.execute();
4
Available transformations 
§ map 
§ flatMap 
§ filter 
§ reduce 
§ reduceGroup 
§ join 
§ coGroup 
§ aggregate 
§ cross 
§ project 
§ distinct 
§ union 
§ iterate 
§ iterateDelta 
§ repartition 
§ … 
5
Other API elements & tools 
§ Accumulators and counters 
• Int, Long, Double counters 
• Histogram accumulator 
• Define your own 
§ Broadcast variables 
§ Plan visualization 
§ Local debugging/testing mode 
6
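As a concrete illustration of the accumulators and broadcast variables listed above, here is a hedged sketch against the Java DataSet API; the function, dataset, and accumulator names (CountingFilter, "whitelist", "long-lines") are invented for this example, and the exact wiring of print()/execute() may differ slightly between Flink versions.

import java.util.List;
import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.common.accumulators.IntCounter;
import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.configuration.Configuration;

// Counts "long" lines with an accumulator and keeps only lines contained
// in a broadcast whitelist.
public static class CountingFilter extends RichFilterFunction<String> {
  private final IntCounter longLines = new IntCounter();
  private List<String> whitelist;

  @Override
  public void open(Configuration parameters) {
    getRuntimeContext().addAccumulator("long-lines", longLines);        // register the counter
    whitelist = getRuntimeContext().getBroadcastVariable("whitelist");  // fetch the broadcast set
  }

  @Override
  public boolean filter(String line) {
    if (line.length() > 40) longLines.add(1);
    return whitelist.contains(line);
  }
}

// Wiring it up:
DataSet<String> lines = env.readTextFile(path);
DataSet<String> allowed = env.fromElements("Apache Flink", "Apache Hadoop");
lines.filter(new CountingFilter())
     .withBroadcastSet(allowed, "whitelist")   // makes "allowed" available inside open()
     .print();
JobExecutionResult res = env.execute();
Integer longLineCount = res.getAccumulatorResult("long-lines");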
Data types and grouping 
public static class Access {
  public int userId;
  public String url;
  ...
}

public static class User {
  public int userId;
  public int region;
  public Date customerSince;
  ...
}

DataSet<Tuple2<Access, User>> campaign = access.join(users)
  .where("userId").equalTo("userId");

DataSet<Tuple3<Integer, String, String>> someLog;
someLog.groupBy(0, 1).reduceGroup(...);
§ Bean-style Java classes & field names 
§ Tuples and position addressing 
§ Any data type with key selector function 
7
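The third bullet above (any data type with a key selector function) can look like the following hedged sketch, reusing the Access POJO from this slide; the bucketing key is made up purely for illustration.

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.operators.UnsortedGrouping;

// Group by a computed key instead of a field name or tuple position.
UnsortedGrouping<Access> byBucket = access.groupBy(
  new KeySelector<Access, Integer>() {
    @Override
    public Integer getKey(Access a) {
      return a.userId % 10;   // illustrative computed key (a hash bucket)
    }
  });
// byBucket.reduceGroup(...) / .aggregate(...) then work exactly as with names or positions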
Other API elements 
§ Hadoop compatibility 
• Supports all Hadoop data types, input/output 
formats, Hadoop mappers and reducers 
§ Data streaming API 
• DataStream instead of DataSet 
• Similar set of operators 
• Currently in alpha but moving very fast 
§ Scala and Java APIs (mirrored) 
§ Graph API (Spargel) 
8
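For the streaming API mentioned above (still alpha at the time of this talk), a rough sketch of the same grep-style program on a DataStream follows; class and method names are those of the early streaming API and may differ in later versions (e.g., keyBy instead of groupBy), and the socket host/port are placeholders.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<String> text = env.socketTextStream("localhost", 9999);   // placeholder source

DataStream<String> filtered = text
  .filter(str -> str.contains("Apache Flink"));   // same operators as on the DataSet

filtered.print();
env.execute();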
Intro to 
internals 
9
DataSet<String> text = env.readTextFile(input);

DataSet<Tuple2<String, Integer>> result = text
  .flatMap((str, out) -> {
    for (String token : str.split("\\W")) {
      out.collect(new Tuple2<>(token, 1));
    }
  })
  .groupBy(0)
  .aggregate(SUM, 1);

[Diagram: the program is handed to the Flink Client & Optimizer, which submits it to the Job Manager; the Job Manager distributes the work across the Task Managers. Example input lines: "O Romeo, Romeo, wherefore art thou Romeo?" and "Nor arm, nor face, nor any other part"; example output: (O, 1), (Romeo, 3), (wherefore, 1), (art, 1), (thou, 1), (nor, 3), (arm, 1), (face, 1), (any, 1), (other, 1), (part, 1)]
10
If you want to know one thing about Flink, it is that you don't need to know the internals of Flink.
11
Philosophy 
§ Flink “hides” its internal workings from the 
user 
§ This is good 
• User does not worry about how jobs are 
executed 
• Internals can be changed without breaking 
changes 
§ … and bad 
• Execution model more complicated to explain 
compared to MapReduce or Spark RDD 
12
Recap: DataSet 
Input X First Y Second 
Operator X Operator Y 
13 
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

DataSet<String> input = env.readTextFile(input);

DataSet<String> first = input
  .filter(str -> str.contains("Apache Flink"));

DataSet<String> second = first
  .filter(str -> str.length() > 40);

second.print();
env.execute();
Common misconception 
Input X First Y Second 
§ Programs are not executed eagerly 
§ Instead, system compiles program to an 
execution plan and executes that plan 
14
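One way to see this laziness in practice is to dump the compiled plan before running it; getExecutionPlan() on the ExecutionEnvironment returns the plan as JSON. A hedged sketch (the behavior of print() and plan dumping has shifted across Flink versions):

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

DataSet<String> second = env.readTextFile(path)
  .filter(str -> str.contains("Apache Flink"))
  .filter(str -> str.length() > 40);
second.print();                              // only declares a sink – nothing has run yet

System.out.println(env.getExecutionPlan());  // JSON dump of the compiled execution plan
// env.execute();                            // this call is what actually runs the plan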
DataSet<String> 
§ Think of it as a PCollection<String>, or a 
Spark RDD[String] 
§ With a major difference: it can be produced/ 
recovered in several ways 
• … like a Java collection 
• … like an RDD 
• … perhaps it is never fully materialized (because 
the program does not need it to) 
• … implicitly updated in an iteration 
§ And this is transparent to the user 
15
Example: grep

Romeo, Romeo, where art thou Romeo?

[Diagram: Load Log -> Search for str1 / str2 / str3 -> Grep 1 / 2 / 3]
16
Staged (batch) execution

Romeo, Romeo, where art thou Romeo?

[Diagram: Load Log -> Search for str1 / str2 / str3 -> Grep 1 / 2 / 3, executed stage by stage]

Stage 1: Create/cache Log
Subsequent stages: Grep log for matches
Caching in-memory and disk if needed
17
Pipelined execution

Romeo, Romeo, where art thou Romeo?

[Diagram: Load Log -> Search for str1 / str2 / str3 -> Grep 1 / 2 / 3, with all operators deployed at once and records streamed through the pipeline]

Stage 1: Deploy and start operators
Data transfer in-memory and disk if needed

Note: the Log DataSet is never "created"!
18
Benefits of pipelining
§ 25 node cluster
§ Grep log for 3 terms
§ Scale data size from 100GB to 1TB

[Chart: time to complete grep (sec) on the y-axis (0–2500) vs. data size (GB) on the x-axis (0–1000); curve labeled "Pipelined with Flink"; annotation: "Cluster memory exceeded"]
19
20
Drawbacks of pipelining
§ Long pipelines may be active at the same time, leading to memory fragmentation
• FLINK-1101: Changes memory allocation from static to adaptive
§ Fault tolerance is harder to get right
• FLINK-986: Adds intermediate data sets (similar to RDDs) as first-class citizens to the Flink runtime. Will lead to fine-grained fault tolerance among other features.
21
Example: Iterative processing 
DataSet<Page> pages = ...
DataSet<Neighborhood> edges = ...

DataSet<Page> oldRanks = pages;
DataSet<Page> newRanks;

for (int i = 0; i < maxIterations; i++) {
  newRanks = update(oldRanks, edges);
  oldRanks = newRanks;
}
DataSet<Page> result = newRanks;

DataSet<Page> update(DataSet<Page> ranks, DataSet<Neighborhood> adjacency) {
  return ranks
    .join(adjacency)
    .where("id").equalTo("id")
    .with((page, adj, out) -> {
      for (long n : adj.neighbors)
        out.collect(new Page(n, df * page.rank / adj.neighbors.length));
    })
    .groupBy("id")
    .reduce((a, b) -> new Page(a.id, a.rank + b.rank));
}
22
Iterate by unrolling 
Client 
Step Step Step Step Step 
§ for/while loop in client submits one job per 
iteration step 
§ Data reuse by caching in memory and/or disk 
23
Iterate natively 
DataSet<Page> pages = ...
DataSet<Neighborhood> edges = ...

IterativeDataSet<Page> pagesIter = pages.iterate(maxIterations);
DataSet<Page> newRanks = update(pagesIter, edges);
DataSet<Page> result = pagesIter.closeWith(newRanks);

[Diagram: the initial solution becomes the partial solution; each step applies the step function to the partial solution (X) together with other datasets and produces a new partial solution (Y) that replaces it; after the last iteration, closeWith() returns the iteration result]
24
Iterate natively with deltas 
[Diagram: the delta iteration starts from an initial workset and an initial solution set; each step (A, B) computes a new workset and a delta set from the current workset, the partial solution, and other datasets; the deltas are merged into the solution set, which becomes the iteration result]

DeltaIteration<...> pagesIter = pages.iterateDelta(initialDeltas, maxIterations, 0);

DataSet<...> newRanks = update(pagesIter, edges);
DataSet<...> result = pagesIter.closeWith(newRanks, deltas);

See http://data-artisans.com/data-analysis-with-flink.html 25
Native, unrolling, and delta 
26
Dissecting 
Flink 
27
28
The growing Flink stack
29

[Stack diagram: on top, the APIs — Java API (batch), Scala API (batch), Java API (streaming), Python API (upcoming), Graph API, Apache MRQL — sit on the Common API, the Flink Optimizer, and the Flink Stream Builder. Below, the Flink Local Runtime runs in an embedded environment (Java collections), a local environment (single node, for debugging), or a remote environment (regular execution on a standalone or YARN cluster), and can also run on Apache Tez. Data storage: files, HDFS, S3, JDBC, Kafka, RabbitMQ, Redis, Azure tables, …]
Stack without Flink Streaming
30

Focus on regular (batch) processing…

[Same stack diagram as on slide 29, minus the streaming components: Scala and Java batch APIs, Python API (upcoming), Graph API, and Apache MRQL on top of the Common API and Flink Optimizer; the Flink Local Runtime runs embedded (Java collections), locally (single node, for debugging), or remotely (standalone or YARN cluster, Apache Tez); data storage: files, HDFS, S3, JDBC, Azure tables, …]
Program lifecycle
31

[Same stack diagram as on slide 29, annotated with numbered steps 1–5 showing the stages a program passes through, from the API down to the execution environment and data storage]

val source1 = …
val source2 = …

val maxed = source1
  .map(v => (v._1, v._2, math.max(v._1, v._2)))

val filtered = source2
  .filter(v => (v._1 > 4))

val result = maxed
  .join(filtered).where(0).equalTo(0)
  .filter(_._1 > 3)
  .groupBy(0)
  .reduceGroup {……}
Flink Optimizer
32

[Same stack diagram as on slide 29, highlighting the Flink Optimizer layer]

§ The optimizer is the component that selects an execution plan for a Common API program
§ Think of an AI system manipulating your program for you :)
§ But don’t be scared – it works
• Relational databases have been doing this for decades – Flink ports the technology to API-based systems
A simple program 
33 
DataSet<Tuple5<Integer, String, String, String, Integer>> orders = …
DataSet<Tuple2<Integer, Double>> lineitems = …

DataSet<Tuple2<Integer, Integer>> filteredOrders = orders
  .filter(...)
  .project(0, 4).types(Integer.class, Integer.class);

DataSet<Tuple3<Integer, Integer, Double>> lineitemsOfOrders = filteredOrders
  .join(lineitems)
  .where(0).equalTo(0)
  .projectFirst(0, 1).projectSecond(1)
  .types(Integer.class, Integer.class, Double.class);

DataSet<Tuple3<Integer, Integer, Double>> priceSums = lineitemsOfOrders
  .groupBy(0, 1).aggregate(Aggregations.SUM, 2);

priceSums.writeAsCsv(outputPath);
Two execution plans 
34

[Two physical plans for the same program, both built from the two data sources (orders.tbl, lineitem.tbl), a Filter and a Map, a hybrid hash join (filtered orders on the build side, lineitems on the probe side), and a sort-based GroupReduce. In one plan the build side is broadcast and the probe side forwarded, with a Combine before a GroupReduce that is hash-partitioned on [0,1]; in the other, both join inputs are hash-partitioned on [0] and the join result is forwarded to the GroupReduce.]

Best plan depends on relative sizes of input files
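The choice between these plans is normally left to the optimizer, but the DataSet API also accepts size hints from the user. A hedged sketch building on the program from the previous slide (joinWithTiny/joinWithHuge are part of the Java DataSet API; whether a hint actually changes the chosen plan is still up to the optimizer and the Flink version):

// Hint that the right-hand input is much larger, which makes broadcasting
// the small filteredOrders side (the broadcast plan above) attractive:
DataSet<Tuple3<Integer, Integer, Double>> lineitemsOfOrders = filteredOrders
  .joinWithHuge(lineitems)
  .where(0).equalTo(0)
  .projectFirst(0, 1).projectSecond(1)
  .types(Integer.class, Integer.class, Double.class);

// The converse hint, joinWithTiny(...), marks the argument as the small side.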
Flink Local Runtime
35

[Same stack diagram as on slide 29, highlighting the Flink Local Runtime layer]

§ Local runtime, not the distributed execution engine
§ Aka: what happens inside every parallel task
Flink runtime operators 
§ Sorting and hashing data 
• Necessary for grouping, aggregation, 
reduce, join, cogroup, delta iterations 
§ Flink contains tailored implementations 
of hybrid hashing and external sorting in 
Java 
• Scale well with both abundant and restricted 
memory sizes 
36
Internal data representation 
37

How is intermediate data internally represented?

[Diagram: in the map task’s JVM heap, records such as "O Romeo, Romeo, wherefore art thou Romeo?" are serialized into binary buffers; the buffers are sorted locally and transferred over the network into the reduce task’s JVM heap, where they hold serialized records like (art, 1), (O, 1), (Romeo, 1), (Romeo, 1)]
Internal data representation
§ Two options: Java objects or raw bytes
§ Java objects
• Easier to program
• Can suffer from GC overhead
• Hard to de-stage data to disk, may suffer from "out of memory" exceptions
§ Raw bytes
• Harder to program (custom serialization stack, more involved runtime operators)
• Solves most of the memory and GC problems
• Overhead from object (de)serialization
§ Flink follows the raw byte approach
38
Memory in Flink 
public class WC {
  public String word;
  public int count;
}

[Diagram: the JVM heap is divided into an unmanaged heap holding user-code objects, a managed heap holding a pool of memory pages (including empty pages) used for sorting, hashing, and caching, and network buffers used for shuffling and broadcasts]
39
Memory in Flink (2)
§ Internal memory management
• Flink initially allocates 70% of the free heap as byte[] segments
• Internal operators allocate() and release() these segments
§ Flink has its own serialization stack
• All accepted data types are serialized into these segments
§ Easy to reason about memory, (almost) no OutOfMemory errors, reduces the pressure on the GC (smooth performance)
40
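A tiny illustrative sketch of the allocate()/release() idea described above — this is not Flink's actual MemorySegment code, just the shape of a pool of pre-allocated byte[] pages; the page size and pool sizing are assumptions.

import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative only: pages are allocated once up front and then reused,
// so steady-state operation creates (almost) no garbage on the JVM heap.
public class PagePool {
  public static final int PAGE_SIZE = 32 * 1024;        // assumed page size
  private final Deque<byte[]> freePages = new ArrayDeque<>();

  public PagePool(long poolBytes) {                      // e.g. ~70% of the free heap
    for (long i = 0; i < poolBytes / PAGE_SIZE; i++) {
      freePages.push(new byte[PAGE_SIZE]);
    }
  }

  public byte[] allocate() {
    if (freePages.isEmpty()) {
      // a real runtime would spill to disk here instead of failing with OutOfMemory
      throw new IllegalStateException("page pool exhausted");
    }
    return freePages.pop();
  }

  public void release(byte[] page) {
    freePages.push(page);                                // pages are returned, never freed
  }
}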
Operating on serialized data 
Microbenchmark 
§ Sorting 1GB worth of (long, double) tuples 
§ 67,108,864 elements 
§ Simple quicksort 
41
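To make "operating on serialized data" concrete, here is a hedged, self-contained sketch in the spirit of the microbenchmark above (not Flink code, and scaled down from the 67 million elements): the (long, double) records are stored as 16-byte slots in a single buffer, and the quicksort compares keys and swaps records directly on the bytes instead of on Java objects.

import java.nio.ByteBuffer;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative only: sort fixed-size serialized records in place.
public class SerializedSort {
  static final int REC = 16;   // 8-byte long key + 8-byte double value

  public static void main(String[] args) {
    int n = 1_000_000;                                   // scaled down from the slide's 67,108,864
    ByteBuffer buf = ByteBuffer.allocate(n * REC);
    for (int i = 0; i < n; i++) {
      buf.putLong(ThreadLocalRandom.current().nextLong());
      buf.putDouble(ThreadLocalRandom.current().nextDouble());
    }
    quicksort(buf, 0, n - 1);
  }

  // Simple quicksort over the serialized records, keyed on the long field.
  static void quicksort(ByteBuffer buf, int lo, int hi) {
    if (lo >= hi) return;
    long pivot = key(buf, (lo + hi) >>> 1);
    int i = lo, j = hi;
    while (i <= j) {
      while (key(buf, i) < pivot) i++;
      while (key(buf, j) > pivot) j--;
      if (i <= j) { swap(buf, i, j); i++; j--; }
    }
    quicksort(buf, lo, j);
    quicksort(buf, i, hi);
  }

  static long key(ByteBuffer buf, int idx) {
    return buf.getLong(idx * REC);                       // compare without deserializing the record
  }

  static void swap(ByteBuffer buf, int a, int b) {
    for (int k = 0; k < REC; k++) {                      // swap the raw 16-byte slots
      byte tmp = buf.get(a * REC + k);
      buf.put(a * REC + k, buf.get(b * REC + k));
      buf.put(b * REC + k, tmp);
    }
  }
}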
Flink distributed execution
42

[Same stack diagram as on slide 29, highlighting the execution environments]

§ Pipelined
• Same engine for Flink and Flink streaming
§ Pluggable
• Local runtime can be executed on other engines
• E.g., Java collections and Apache Tez
Closing 
43
Summary 
§ Flink decouples API from execution 
• Same program can be executed in many different 
ways 
• Hopefully users do not need to care about this and 
still get very good performance 
§ Unique Flink internal features 
• Pipelined execution, native iterations, optimizer, 
serialized data manipulation, good disk destaging 
§ Very good performance 
• Known issues are being actively worked on
44
Stay informed 
§ flink.incubator.apache.org 
• Subscribe to the mailing lists! 
• http://flink.incubator.apache.org/community.html#mailing-lists 
§ Blogs 
• flink.incubator.apache.org/blog 
• data-artisans.com/blog 
§ Twitter 
• follow @ApacheFlink 
45
46
That’s it, time for beer 
47
