Kostas Tzoumas
Flink committer
co-founder, data Artisans
kostas@data-artisans.com
@kostas_tzoumas
Apache
Flink
Apache Flink
2
flink.apache.org
github.com/apache/flink
user@flink.apache.org
Cool squirrel logo (also open source)
The Flink Community
 Over 70 contributors from academia &
industry; growing fast
 Apache Top-Level Project since December 2014
3
Using Flink
4
DataSet and transformations
(Diagram: Input → Operator X → First → Operator Y → Second)
ExecutionEnvironment env =
ExecutionEnvironment.getExecutionEnvironment();
DataSet<String> input = env.readTextFile(input);
DataSet<String> first = input
.filter(str -> str.contains("Apache Flink"));
DataSet<String> second = first
.filter(str -> str.length() > 40);
second.print();
env.execute();
5
DataSet
 Central notion of the programming API
 Files and other data sources are read into
DataSets
• DataSet<String> text = env.readTextFile(…)
 Transformations on DataSets produce DataSets
• DataSet<String> first = text.map(…)
 DataSets are printed to files or on stdout
• first.writeAsCsv(…)
 Execution is triggered with env.execute()
6
WordCount
7
text
.flatMap((line, out) -> {
String[] tokens = line.toLowerCase().split(" ");
for (String token : tokens) {
if (token.length() > 0) {
out.collect(new Tuple2<String, Integer>(token, 1));
}
}
})
.groupBy(0)
.sum(1);
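For completeness, a hedged sketch of a full driver around the fragment above; the input path and the anonymous-class form (instead of the slide's lambda) are assumptions, and imports are omitted as elsewhere in these slides:

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<String> text = env.readTextFile("hdfs:///path/to/input"); // hypothetical path
DataSet<Tuple2<String, Integer>> counts = text
  .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
    @Override
    public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
      for (String token : line.toLowerCase().split(" ")) {
        if (token.length() > 0) {
          out.collect(new Tuple2<String, Integer>(token, 1));
        }
      }
    }
  })
  .groupBy(0)   // group by the word (field 0)
  .sum(1);      // sum the counts (field 1)
counts.print();
env.execute();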
Transitive Closure
8
IterativeDataSet<Tuple2<Long,Long>> paths = edges.iterate (10);
DataSet<Tuple2<Long,Long>> nextPaths = paths
.join(edges).where(1).equalTo(0)
.with((left, right) -> new Tuple2<Long, Long>(left.f0, right.f1))
.union(paths)
.distinct();
DataSet<Tuple2<Long, Long>> tc = paths.closeWith(nextPaths);
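A hedged usage sketch for the snippet above; the environment setup and the tiny hard-coded edge list are illustrative assumptions:

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
// Edges 1 -> 2, 2 -> 3, 3 -> 4; the closure adds e.g. 1 -> 3, 1 -> 4, 2 -> 4
DataSet<Tuple2<Long, Long>> edges = env.fromElements(
  new Tuple2<Long, Long>(1L, 2L),
  new Tuple2<Long, Long>(2L, 3L),
  new Tuple2<Long, Long>(3L, 4L));
// ... run the iteration shown above to obtain tc, then:
tc.print();
env.execute();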
Scala API
9
case class Word (word: String, frequency: Int)
text
.flatMap { line => line.split(" ").map(word => Word(word, 1)) }
.groupBy("word").sum("frequency")
case class Path (from: Long, to: Long)
val tc = edges.iterate(10) { paths: DataSet[Path] =>
val next = paths
.join(edges).where("to").equalTo("from") {
(path, edge) => Path(path.from, edge.to)
}
.union(paths).distinct()
next
}
WordCount
Transitive Closure
Data sources
Batch API
 Files
• HDFS, Local file system,
MapR file system
• Text, Csv, Avro, Hadoop
input formats
 JDBC
 HBase
 Collections
Stream API
 Files
 Socket streams
 Kafka
 RabbitMQ
 Flume
 Collections
 Implement your own
• SourceFunction.collect
10
Available transformations
 map
 flatMap
 filter
 reduce
 reduceGroup
 join
 coGroup
 aggregate
 cross
 project
 distinct
 union
 iterate
 iterateDelta
 repartition
 …
11
Data types and grouping
 Bean-style Java classes & field names
 Tuples and position addressing
 Any data type with key selector function (see the sketch below)
public static class Access {
public int userId;
public String url;
...
}
public static class User {
public int userId;
public int region;
public Date customerSince;
...
}
DataSet<Tuple2<Access,User>> campaign = access.join(users)
.where("userId").equalTo("userId");
DataSet<Tuple3<Integer,String,String>> someLog;
someLog.groupBy(0,1).reduceGroup(...);
12
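For the key selector bullet, a minimal hedged sketch that groups the Access records from the example above by a computed key and counts accesses per user; the reduce logic is illustrative only:

access
  .groupBy(new KeySelector<Access, Integer>() {
    @Override
    public Integer getKey(Access a) { return a.userId; }  // any derived key works here
  })
  .reduceGroup(new GroupReduceFunction<Access, Tuple2<Integer, Integer>>() {
    @Override
    public void reduce(Iterable<Access> values, Collector<Tuple2<Integer, Integer>> out) {
      int userId = -1;
      int count = 0;
      for (Access a : values) {
        userId = a.userId;
        count++;
      }
      out.collect(new Tuple2<Integer, Integer>(userId, count)); // accesses per user
    }
  });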
Real-time stream processing
13
StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<String> tweets = env.socketTextStream(host,port);
DataStream<Tuple2<String,Integer>> filteredTweets = tweets
.flatMap(new SelectLanguageAndTokenize())
.partition(0)
.map(s -> new Tuple2<String,Integer>(s, 1))
.groupBy(0).sum(1)
.flatMap(new SelectMaxOccurence());
tweets.print();
env.execute();
DataStream instead of DataSet
StreamExecutionEnvironment instead of ExecutionEnvironment
Streaming operators
 Most DataSet
operators can be
used
• map, filter, flatMap,
reduce, reduceGroup,
join, cross, coGroup,
iterate, project,
grouping, partitioning,
aggregations, union
(merge), …
 DataStream-specific
operators (snip)
• CoMap, CoReduce,
etc: share state
between streams
• Temporal binary ops:
join, cross, …
• Windows: policy-
based flexible
windowing
• Time, Count, Delta
14
Windowing
 Trigger policy
• When to trigger the computation on current window
 Eviction policy
• When data points should leave the window
• Defines window width/size
 E.g., count-based policy
• evict when #elements > n
• start a new window every n-th element
 Built-in: Count, Time, Delta policies
15
Windowing example
//Build new model every minute on the last 5 minutes
//worth of data
val model = trainingData
.window(Time.of(5,TimeUnit.MINUTES))
.every(Time.of(1,TimeUnit.MINUTES))
.reduceGroup(buildModel)
//Predict new data using the most up-to-date model
val prediction = newData
.connect(model)
.map(predict);
(Diagram: Training Data feeds the model builder M; New Data together with the current model feeds the predictor P, which emits the Prediction stream.)
16
Window Join example
case class Name(id: Long, name: String)
case class Age(id: Long, age: Int)
case class Person(name: String, age: Int)
val names = ...
val ages = ...
names.join(ages)
.onWindow(5, TimeUnit.SECONDS)
.where("id")
.equalTo("id") {(n, a) => Person(n.name, a.age)}
17
Iterative processing example
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.generateSequence(1, 10).iterate(incrementToTen, 1000)
.print
env.execute("Iterative example")
def incrementToTen(input: DataStream[Long]) = {
val incremented = input.map {_ + 1}
val split = incremented.split
{x => if (x >= 10) "out" else "feedback"}
(split.select("feedback"), split.select("out"))
}
18
(Diagram: the number stream flows through Map and Reduce; elements routed to "feedback" are fed back into the iteration, elements routed to "out" form the output stream.)
Spargel: Flink’s Graph API
19
DataSet<Tuple2<Long, Long>> result = vertices
.runOperation(VertexCentricIteration.withPlainEdges(
edges, new CCUpdater(), new CCMessenger(), 100));
class CCUpdater extends VertexUpdateFunction …
class CCMessenger extends MessagingFunction …
Spargel is a very thin layer (~500 LOC) on top of
Flink’s delta iterations
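To illustrate, a hedged sketch of what the connected-components update function referenced above might look like; the exact Spargel signatures (updateVertex, MessageIterator, setNewVertexValue) are assumptions based on the vertex-centric pattern, not taken from the slides:

class CCUpdater extends VertexUpdateFunction<Long, Long, Long> {
  @Override
  public void updateVertex(Long vertexKey, Long componentId, MessageIterator<Long> inMessages) {
    // adopt the smallest component id proposed by any neighbor
    long min = componentId;
    for (long candidate : inMessages) {
      min = Math.min(min, candidate);
    }
    if (min < componentId) {
      setNewVertexValue(min); // only changed vertices send messages in the next superstep
    }
  }
}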
Other API elements & tools
 Accumulators and counters
• Int, Long, Double counters
• Histogram accumulator
• Define your own
 Broadcast variables (see the sketch below)
 Plan visualization
 Local debugging/testing mode
20
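As a hedged illustration of the broadcast variables bullet, a small DataSet is shipped to every parallel instance of a map; the example data and the variable name "whitelist" are assumptions:

DataSet<Integer> whitelist = env.fromElements(1, 2, 3);
DataSet<String> lines = env.fromElements("a", "b", "c");
lines
  .map(new RichMapFunction<String, String>() {
    private List<Integer> bcast;
    @Override
    public void open(Configuration parameters) {
      // the broadcast set is available to every parallel task instance
      bcast = getRuntimeContext().getBroadcastVariable("whitelist");
    }
    @Override
    public String map(String value) {
      return value + "/" + bcast.size();
    }
  })
  .withBroadcastSet(whitelist, "whitelist");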
Hadoop Compatibility
 Flink runs on YARN, can read from HDFS, HBase
 Can use all Hadoop input and output formats
 Can use unmodified Hadoop Mappers and Reducers
and mix-and-match them with Flink operators
 Coming up: can run unmodified Hadoop jobs (faster)
21
Visualization tools
22
Visualization tools
23
Visualization tools
24
Flink runtime
features
25
Task
Manager
Job
Manager
Task
Manager
Flink Client &
Optimizer
DataSet<String> text = env.readTextFile(input);
DataSet<Tuple2<String, Integer>> result = text
.flatMap((str, out) -> {
for (String token : str.split("\\W")) {
out.collect(new Tuple2<>(token, 1));
}
})
.groupBy(0)
.aggregate(SUM, 1);
O Romeo,
Romeo,
wherefore
art thou
Romeo?
O, 1
Romeo, 3
wherefore, 1
art, 1
thou, 1
Apache Flink
26
Nor arm,
nor face,
nor any
other part
nor, 3
arm, 1
face, 1
any, 1
other, 1
part, 1
If you need to know one thing about Flink,
it is that you don’t need to know
the internals of Flink.
27
Philosophy
 Flink “hides” its internal workings from the
user
 This is good
• User does not worry about how jobs are executed
• Internals can be changed without breaking user
programs
 … and bad
• Execution model more complicated to explain
compared to MapReduce or Spark RDD
28
Recap: DataSet
(Diagram: Input → Operator X → First → Operator Y → Second)
29
ExecutionEnvironment env =
ExecutionEnvironment.getExecutionEnvironment();
DataSet<String> input = env.readTextFile(input);
DataSet<String> first = input
.filter(str -> str.contains("Apache Flink"));
DataSet<String> second = first
.filter(str -> str.length() > 40);
second.print();
env.execute();
Common misconception
 Programs are not executed eagerly
 Instead, system compiles program to an
execution plan and executes that plan
(Diagram: Input → X → First → Y → Second)
30
DataSet<String>
 Think of it as a PCollection<String>, or a
Spark RDD[String]
 With a major difference: it can be
produced/recovered in several ways
• … like a Java collection
• … like an RDD
• … perhaps it is never fully materialized (because
the program does not need it to be)
• … implicitly updated in an iteration
 And this is transparent to the user
31
Example: grep
Romeo,
Romeo,
where art
thou Romeo?
Load Log
Search
for str1
Search
for str2
Search
for str3
Grep 1
Grep 2
Grep 3
32
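As a hedged sketch, the grep job in this figure could look roughly as follows with the DataSet API; the paths and search terms are placeholders:

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<String> log = env.readTextFile("hdfs:///path/to/log");        // "Load Log"
DataSet<String> grep1 = log.filter(line -> line.contains("str1"));    // "Search for str1"
DataSet<String> grep2 = log.filter(line -> line.contains("str2"));
DataSet<String> grep3 = log.filter(line -> line.contains("str3"));
grep1.writeAsText("hdfs:///out/grep1");
grep2.writeAsText("hdfs:///out/grep2");
grep3.writeAsText("hdfs:///out/grep3");
env.execute("grep");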
Staged (batch) execution
Romeo,
Romeo,
where art
thou Romeo?
Load Log
Search
for str1
Search
for str2
Search
for str3
Grep 1
Grep 2
Grep 3
Stage 1:
Create/cache Log
Subsequent stages:
Grep log for matches
Caching in-memory
and disk if needed
33
Pipelined execution
Romeo,
Romeo,
where art
thou Romeo?
Load Log
Search
for str1
Search
for str2
Search
for str3
Grep 1
Grep 2
Grep 3
Stage 1:
Deploy and start operators
Data transfer in-memory and disk if needed
Note: the Log DataSet is never "created"!
34
Benefits of pipelining
 25 node cluster
 Grep log for 3
terms
 Scale data size
from 100GB to
1TB
(Chart: time to complete grep (sec) vs. data size (GB), from 100 GB to 1 TB; an annotation marks the point where cluster memory is exceeded.)
35
36
Drawbacks of pipelining
 Long pipelines may be active at the same time leading to
memory fragmentation
• FLINK-1101: Changes memory allocation from static to adaptive
 Fault-tolerance harder to get right
• FLINK-986: Adds intermediate data sets (similar to RDDs) as
first-class citizens to the Flink runtime. Will lead to fine-grained
fault tolerance among other features.
37
Example: Iterative processing
DataSet<Page> pages = ...
DataSet<Neighborhood> edges = ...
DataSet<Page> oldRanks = pages; DataSet<Page> newRanks;
for (int i = 0; i < maxIterations; i++) {
newRanks = update(oldRanks, edges);
oldRanks = newRanks;
}
DataSet<Page> result = newRanks;
DataSet<Page> update (DataSet<Page> ranks, DataSet<Neighborhood> adjacency) {
return ranks
.join(adjacency)
.where("id").equalTo("id")
.with ( (page, adj, out) -> {
for (long n : adj.neighbors)
out.collect(new Page(n, df * page.rank / adj.neighbors.length)); // df: damping factor
})
.groupBy("id")
.reduce ( (a, b) -> new Page(a.id, a.rank + b.rank) );
}
38
Iterate by unrolling
 for/while loop in client submits one job per iteration
step
 Data reuse by caching in memory and/or disk
Step Step Step Step Step
Client
39
Iterate natively
DataSet<Page> pages = ...
DataSet<Neighborhood> edges = ...
IterativeDataSet<Page> pagesIter = pages.iterate(maxIterations);
DataSet<Page> newRanks = update (pagesIter, edges);
DataSet<Page> result = pagesIter.closeWith(newRanks)
40
(Diagram: the step function X → Y consumes the partial solution and possibly other datasets; its output replaces the partial solution for the next iteration. The loop starts from the initial solution, and the final partial solution is the iteration result.)
Iterate natively with deltas
DeltaIteration<...> pagesIter = pages.iterateDelta(initialDeltas, maxIterations, 0);
DataSet<...> newRanks = update (pagesIter, edges);
DataSet<...> deltas = ...
DataSet<...> result = pagesIter.closeWith(newRanks, deltas);
See http://data-artisans.com/data-analysis-with-flink.html
41
(Diagram: the delta-iteration step function X → Y consumes the workset and the partial solution, possibly joining other datasets; it produces deltas that are merged into the partial solution plus a new workset for the next iteration. The loop starts from the initial solution and initial workset, and the merged partial solution is the iteration result.)
Native, unrolling, and delta
42
Optimization of iterative algorithms
43
Caching loop-invariant data
Pushing work "out of the loop"
Maintaining state as an index
Dissecting
Flink
44
45
The growing Flink stack
46
Flink Optimizer Flink Stream Builder
Common API
Scala API Java API
Python API
(upcoming)
Graph API
Apache
MRQL
Flink Local Runtime
Embedded
environment
(Java collections)
Local
Environment
(for debugging)
Remote environment
(Regular cluster execution)
Apache Tez
Data
storage
HDFS, Files, S3, JDBC, Redis, RabbitMQ, Kafka, Azure tables, …
Single node execution Standalone or YARN cluster
Program lifecycle
47
val source1 = …
val source2 = …
val maxed = source1
.map(v => (v._1, v._2,
math.max(v._1, v._2)))
val filtered = source2
.filter(v => (v._1 > 4))
val result = maxed
.join(filtered).where(0).equalTo(0)
.filter(_._1 > 3)
.groupBy(0)
.reduceGroup {……}
(Diagram: the numbered stages 1-5 of the program lifecycle.)
 The optimizer is the
component that selects
an execution plan for a
Common API program
 Think of an AI system
manipulating your
program for you
 But don’t be scared – it
works
• Relational databases have
been doing this for
decades – Flink ports the
technology to API-based
systems
Flink Optimizer
48
A simple program
49
DataSet<Tuple5<Integer, String, String, String, Integer>> orders = …
DataSet<Tuple2<Integer, Double>> lineitems = …
DataSet<Tuple2<Integer, Integer>> filteredOrders = orders
.filter(...)
.project(0,4).types(Integer.class, Integer.class);
DataSet<Tuple3<Integer, Integer, Double>> lineitemsOfOrders = filteredOrders
.join(lineitems)
.where(0).equalTo(0)
.projectFirst(0,1).projectSecond(1)
.types(Integer.class, Integer.class, Double.class);
DataSet<Tuple3<Integer, Integer, Double>> priceSums = lineitemsOfOrders
.groupBy(0,1).aggregate(Aggregations.SUM, 2);
priceSums.writeAsCsv(outputPath);
Two execution plans
50
DataSource
orders.tbl
Filter
Map DataSource
lineitem.tbl
Join
Hybrid Hash
buildHT probe
broadcast forward
Combine
GroupRed
sort
DataSource
orders.tbl
Filter
Map DataSource
lineitem.tbl
Join
Hybrid Hash
buildHT probe
hash-part [0] hash-part [0]
hash-part [0,1]
GroupRed
sort
forward
Best plan depends on relative sizes of input files
Flink Local Runtime
51
 Local runtime, not
the distributed
execution engine
 Aka: what happens
inside every parallel
task
Flink runtime operators
 Sorting and hashing data
• Necessary for grouping, aggregation, reduce,
join, cogroup, delta iterations
 Flink contains tailored implementations of
hybrid hashing and external sorting in
Java
• Scale well with both abundant and restricted
memory sizes
52
Internal data representation
53
How is intermediate data internally represented?
(Diagram: in the map task's JVM heap, records such as "O Romeo, Romeo, wherefore art thou Romeo?" are serialized into compact byte arrays; the bytes are shipped over the network ("Network transfer") and sorted as bytes in the reduce task's JVM heap ("Local sort"), yielding records like "art, 1", "O, 1", "Romeo, 1".)
Internal data representation
 Two options: Java objects or raw bytes
 Java objects
• Easier to program
• Can suffer from GC overhead
• Hard to de-stage data to disk, may suffer from “out of
memory exceptions”
 Raw bytes
• Harder to program (custom serialization stack, more
involved runtime operators)
• Solves most of memory and GC problems
• Overhead from object (de)serialization
 Flink follows the raw byte approach
54
Memory in Flink
public class WC {
public String word;
public int count;
}
empty
page
Pool of Memory Pages
JVM Heap
Sorting,
hashing,
caching
Shuffling,
broadcasts
User code
objects
Network
buffers
Managed
heap
Unmanaged
heap
55
Memory in Flink (2)
 Internal memory management
• Flink initially allocates 70% of the free heap as byte[]
segments
• Internal operators allocate() and release() these
segments
 Flink has its own serialization stack
• All accepted data types serialized to data segments
 Easy to reason about memory, (almost) no
OutOfMemory errors, reduced pressure on the
GC (smooth performance)
56
Operating on serialized data
Microbenchmark
 Sorting 1GB worth of (long, double) tuples
 67,108,864 elements
 Simple quicksort
57
Flink distributed execution
58
 Pipelined
• Same engine for Flink
and Flink streaming
 Pluggable
• Local runtime can be
executed on other
engines
• E.g., Java collections
and Apache Tez
Coordination built on Akka library
Streaming overview
 Streaming and batch use same code
paths in runtime
 Differences
• Streaming does not use Flink’s memory
management right now
• Streaming uses its own compiler/optimizer
59
Roadmap
60
Flink Roadmap
 Currently being discussed by the Flink
community
 Flink has a major release every 3 months,
and one or more bug-fixing releases between
major releases
 Caveat: rough roadmap, depends on
volunteer work, outcome of community
discussion, and Apache open source
processes
61
Roadmap for 2015 (highlights)
Area (items listed in order from Q1 to Q3):
APIs: Logical Query integration; Additional operators; Interactive programs; Interactive Scala shell; SQL-on-Flink
Optimizer: Semantic annotations; HCatalog integration; Optimizer hints
Runtime: Dual engine (blocking & pipelining); Fine-grained fault tolerance; Dynamic memory allocation
Streaming: Better memory management; More operators in API; At-least-once processing guarantees; Unify batch and streaming; Exactly-once processing guarantees
ML library: First version; Additional algorithms; Mahout integration
Graph library: First version
Integration: Tez, Samoa; Mahout
62
Dual streaming and batch engine
 Runtime will support
• Current pipelined
execution mode
• Blocking (RDD-like)
execution
 Batch programs use a
combination
 Stream programs use
pipelining
 Interactive programs
need blocking
 Expected Q1 2015
Batch API (DataSet): blocking runtime execution ✔, pipelined runtime execution ✔
Stream API (DataStream): pipelined runtime execution ✔
63
Fault tolerance improvements
 Fine-grained fault tolerance for batch
programs
• Currently lineage goes back to sources unless
user explicitly persists intermediate data set
• In the future replay will restart from
automatically checkpointed intermediate data
set
 Expected Q1 2015
64
Streaming fault tolerance
 At-least-once guarantees
• Reset of operator state
• Source in-memory replication
• Expected Q1 2015
 Exactly-once guarantees
• Checkpoint operator state
• Rest similar to batch fine-grained recovery
• Expected Q2 2015
65
Interactive programs & shells
Support interactive ad-hoc data analysis
 Programs that are executed partially in the cluster
and partially in the client
• Needs same runtime code as fine-grained batch fault
tolerance
• Expected Q1 2015
 Interactive Scala shell
• Needs to transfer class files between shell and
master
• Expected Q2 2015
• Want to contribute?
66
Machine Learning
Machine learning capabilities on Flink will stand
on two legs
 Dedicated Machine Learning library
• Optimized algorithms
• First version (with infrastructure code, ALS, k-
means, logistic regression) open for contributions
in Q1 2015
 Mahout linear algebra DSL integration
• Expected Q2 2015
67
Graph Library
 Codename “Gelly”
 Library with graph operations
• Common graph stats, PageRank, SSSP,
Connected Components, label propagation
• Vertex-centric API
• Gather-apply-scatter API
 Expected Q1 2015
68
Logical Query Integration
 Define queries in SQL style inside
Scala/Java similar to Microsoft LINQ
 Expected Q1 2015
69
DataSet<Row> dates = env.readCsv(...).as("order_id", "date");
dates.filter("date < 2010-7-14")
.join(other).where("order_id").equalTo("id");
SQL on Flink
 Allow HiveQL queries to run on Flink directly,
both standalone and embedded in batch
programs
 Approach: translate SQL to Logical Query
Interface
• Hey, Flink already has an optimizer
 Currently in design phase
 Expected Q3/Q4 2015
70
Integration with other projects
 Apache Tez
• Expected Q1 2015
 Apache Mahout
• Expected Q2 2015
 Tachyon
• Basic integration
available, waiting for
Tachyon API
 Apache Zeppelin (incubating)
• Interest from both
communities
 Apache Samoa (incubating)
• Expected Q1 2015
• Interest from both
communities
 H2O
• In consideration
 Apache Hive
• Expected Q3/Q4 2015
71
And many more…
 Runtime: even better performance and
robustness
• Using off-heap memory, dynamic memory
allocation
 Improvements to the Flink optimizer
• Integration with HCatalog, better statistics
• Runtime optimization
 Streaming graph and ML pipeline libraries
72
Closing
73
Stay informed
 flink.apache.org
• Subscribe to the mailing lists!
• http://flink.apache.org/community.html#mailing-lists
 Blogs
• flink.apache.org/blog
• data-artisans.com/blog
 Twitter
• follow @ApacheFlink
74
flink.apache.org

Más contenido relacionado

La actualidad más candente

The Apache Spark File Format Ecosystem
The Apache Spark File Format EcosystemThe Apache Spark File Format Ecosystem
The Apache Spark File Format EcosystemDatabricks
 
Building an open data platform with apache iceberg
Building an open data platform with apache icebergBuilding an open data platform with apache iceberg
Building an open data platform with apache icebergAlluxio, Inc.
 
Don’t optimize my queries, optimize my data!
Don’t optimize my queries, optimize my data!Don’t optimize my queries, optimize my data!
Don’t optimize my queries, optimize my data!Julian Hyde
 
Tame the small files problem and optimize data layout for streaming ingestion...
Tame the small files problem and optimize data layout for streaming ingestion...Tame the small files problem and optimize data layout for streaming ingestion...
Tame the small files problem and optimize data layout for streaming ingestion...Flink Forward
 
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...Databricks
 
Apache Iceberg - A Table Format for Hige Analytic Datasets
Apache Iceberg - A Table Format for Hige Analytic DatasetsApache Iceberg - A Table Format for Hige Analytic Datasets
Apache Iceberg - A Table Format for Hige Analytic DatasetsAlluxio, Inc.
 
Project Tungsten Phase II: Joining a Billion Rows per Second on a Laptop
Project Tungsten Phase II: Joining a Billion Rows per Second on a LaptopProject Tungsten Phase II: Joining a Billion Rows per Second on a Laptop
Project Tungsten Phase II: Joining a Billion Rows per Second on a LaptopDatabricks
 
Hudi: Large-Scale, Near Real-Time Pipelines at Uber with Nishith Agarwal and ...
Hudi: Large-Scale, Near Real-Time Pipelines at Uber with Nishith Agarwal and ...Hudi: Large-Scale, Near Real-Time Pipelines at Uber with Nishith Agarwal and ...
Hudi: Large-Scale, Near Real-Time Pipelines at Uber with Nishith Agarwal and ...Databricks
 
Arbitrary Stateful Aggregations using Structured Streaming in Apache Spark
Arbitrary Stateful Aggregations using Structured Streaming in Apache SparkArbitrary Stateful Aggregations using Structured Streaming in Apache Spark
Arbitrary Stateful Aggregations using Structured Streaming in Apache SparkDatabricks
 
Introduction to Stream Processing
Introduction to Stream ProcessingIntroduction to Stream Processing
Introduction to Stream ProcessingGuido Schmutz
 
Tuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital Kedia
Tuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital KediaTuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital Kedia
Tuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital KediaDatabricks
 
Apache Spark Data Source V2 with Wenchen Fan and Gengliang Wang
Apache Spark Data Source V2 with Wenchen Fan and Gengliang WangApache Spark Data Source V2 with Wenchen Fan and Gengliang Wang
Apache Spark Data Source V2 with Wenchen Fan and Gengliang WangDatabricks
 
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...Spark Summit
 
Real-time Analytics with Trino and Apache Pinot
Real-time Analytics with Trino and Apache PinotReal-time Analytics with Trino and Apache Pinot
Real-time Analytics with Trino and Apache PinotXiang Fu
 
Pinot: Enabling Real-time Analytics Applications @ LinkedIn's Scale
Pinot: Enabling Real-time Analytics Applications @ LinkedIn's ScalePinot: Enabling Real-time Analytics Applications @ LinkedIn's Scale
Pinot: Enabling Real-time Analytics Applications @ LinkedIn's ScaleSeunghyun Lee
 
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in SparkSpark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in SparkBo Yang
 
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021StreamNative
 
The Patterns of Distributed Logging and Containers
The Patterns of Distributed Logging and ContainersThe Patterns of Distributed Logging and Containers
The Patterns of Distributed Logging and ContainersSATOSHI TAGOMORI
 

La actualidad más candente (20)

The Apache Spark File Format Ecosystem
The Apache Spark File Format EcosystemThe Apache Spark File Format Ecosystem
The Apache Spark File Format Ecosystem
 
Building an open data platform with apache iceberg
Building an open data platform with apache icebergBuilding an open data platform with apache iceberg
Building an open data platform with apache iceberg
 
Don’t optimize my queries, optimize my data!
Don’t optimize my queries, optimize my data!Don’t optimize my queries, optimize my data!
Don’t optimize my queries, optimize my data!
 
Tame the small files problem and optimize data layout for streaming ingestion...
Tame the small files problem and optimize data layout for streaming ingestion...Tame the small files problem and optimize data layout for streaming ingestion...
Tame the small files problem and optimize data layout for streaming ingestion...
 
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...
 
Apache Iceberg - A Table Format for Hige Analytic Datasets
Apache Iceberg - A Table Format for Hige Analytic DatasetsApache Iceberg - A Table Format for Hige Analytic Datasets
Apache Iceberg - A Table Format for Hige Analytic Datasets
 
Project Tungsten Phase II: Joining a Billion Rows per Second on a Laptop
Project Tungsten Phase II: Joining a Billion Rows per Second on a LaptopProject Tungsten Phase II: Joining a Billion Rows per Second on a Laptop
Project Tungsten Phase II: Joining a Billion Rows per Second on a Laptop
 
Hudi: Large-Scale, Near Real-Time Pipelines at Uber with Nishith Agarwal and ...
Hudi: Large-Scale, Near Real-Time Pipelines at Uber with Nishith Agarwal and ...Hudi: Large-Scale, Near Real-Time Pipelines at Uber with Nishith Agarwal and ...
Hudi: Large-Scale, Near Real-Time Pipelines at Uber with Nishith Agarwal and ...
 
Arbitrary Stateful Aggregations using Structured Streaming in Apache Spark
Arbitrary Stateful Aggregations using Structured Streaming in Apache SparkArbitrary Stateful Aggregations using Structured Streaming in Apache Spark
Arbitrary Stateful Aggregations using Structured Streaming in Apache Spark
 
Introduction to Stream Processing
Introduction to Stream ProcessingIntroduction to Stream Processing
Introduction to Stream Processing
 
Apache flink
Apache flinkApache flink
Apache flink
 
Tuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital Kedia
Tuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital KediaTuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital Kedia
Tuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital Kedia
 
Apache Spark Data Source V2 with Wenchen Fan and Gengliang Wang
Apache Spark Data Source V2 with Wenchen Fan and Gengliang WangApache Spark Data Source V2 with Wenchen Fan and Gengliang Wang
Apache Spark Data Source V2 with Wenchen Fan and Gengliang Wang
 
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
 
Real-time Analytics with Trino and Apache Pinot
Real-time Analytics with Trino and Apache PinotReal-time Analytics with Trino and Apache Pinot
Real-time Analytics with Trino and Apache Pinot
 
Pinot: Enabling Real-time Analytics Applications @ LinkedIn's Scale
Pinot: Enabling Real-time Analytics Applications @ LinkedIn's ScalePinot: Enabling Real-time Analytics Applications @ LinkedIn's Scale
Pinot: Enabling Real-time Analytics Applications @ LinkedIn's Scale
 
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in SparkSpark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark
 
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021
 
The Patterns of Distributed Logging and Containers
The Patterns of Distributed Logging and ContainersThe Patterns of Distributed Logging and Containers
The Patterns of Distributed Logging and Containers
 
Flink vs. Spark
Flink vs. SparkFlink vs. Spark
Flink vs. Spark
 

Destacado

Alexander Kolb – Flink. Yet another Streaming Framework?
Alexander Kolb – Flink. Yet another Streaming Framework?Alexander Kolb – Flink. Yet another Streaming Framework?
Alexander Kolb – Flink. Yet another Streaming Framework?Flink Forward
 
Christian Kreuzfeld – Static vs Dynamic Stream Processing
Christian Kreuzfeld – Static vs Dynamic Stream ProcessingChristian Kreuzfeld – Static vs Dynamic Stream Processing
Christian Kreuzfeld – Static vs Dynamic Stream ProcessingFlink Forward
 
Aljoscha Krettek – Notions of Time
Aljoscha Krettek – Notions of TimeAljoscha Krettek – Notions of Time
Aljoscha Krettek – Notions of TimeFlink Forward
 
Apache Flink Training: System Overview
Apache Flink Training: System OverviewApache Flink Training: System Overview
Apache Flink Training: System OverviewFlink Forward
 
Vasia Kalavri – Training: Gelly School
Vasia Kalavri – Training: Gelly School Vasia Kalavri – Training: Gelly School
Vasia Kalavri – Training: Gelly School Flink Forward
 
Martin Junghans – Gradoop: Scalable Graph Analytics with Apache Flink
Martin Junghans – Gradoop: Scalable Graph Analytics with Apache FlinkMartin Junghans – Gradoop: Scalable Graph Analytics with Apache Flink
Martin Junghans – Gradoop: Scalable Graph Analytics with Apache FlinkFlink Forward
 
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache FlinkAlbert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache FlinkFlink Forward
 
Matthias J. Sax – A Tale of Squirrels and Storms
Matthias J. Sax – A Tale of Squirrels and StormsMatthias J. Sax – A Tale of Squirrels and Storms
Matthias J. Sax – A Tale of Squirrels and StormsFlink Forward
 
Flink 0.10 @ Bay Area Meetup (October 2015)
Flink 0.10 @ Bay Area Meetup (October 2015)Flink 0.10 @ Bay Area Meetup (October 2015)
Flink 0.10 @ Bay Area Meetup (October 2015)Stephan Ewen
 
Vyacheslav Zholudev – Flink, a Convenient Abstraction Layer for Yarn?
Vyacheslav Zholudev – Flink, a Convenient Abstraction Layer for Yarn?Vyacheslav Zholudev – Flink, a Convenient Abstraction Layer for Yarn?
Vyacheslav Zholudev – Flink, a Convenient Abstraction Layer for Yarn?Flink Forward
 
S. Bartoli & F. Pompermaier – A Semantic Big Data Companion
S. Bartoli & F. Pompermaier – A Semantic Big Data CompanionS. Bartoli & F. Pompermaier – A Semantic Big Data Companion
S. Bartoli & F. Pompermaier – A Semantic Big Data CompanionFlink Forward
 
Fabian Hueske – Cascading on Flink
Fabian Hueske – Cascading on FlinkFabian Hueske – Cascading on Flink
Fabian Hueske – Cascading on FlinkFlink Forward
 
Anwar Rizal – Streaming & Parallel Decision Tree in Flink
Anwar Rizal – Streaming & Parallel Decision Tree in FlinkAnwar Rizal – Streaming & Parallel Decision Tree in Flink
Anwar Rizal – Streaming & Parallel Decision Tree in FlinkFlink Forward
 
Apache Flink Training: DataStream API Part 2 Advanced
Apache Flink Training: DataStream API Part 2 Advanced Apache Flink Training: DataStream API Part 2 Advanced
Apache Flink Training: DataStream API Part 2 Advanced Flink Forward
 
Kamal Hakimzadeh – Reproducible Distributed Experiments
Kamal Hakimzadeh – Reproducible Distributed ExperimentsKamal Hakimzadeh – Reproducible Distributed Experiments
Kamal Hakimzadeh – Reproducible Distributed ExperimentsFlink Forward
 
Sebastian Schelter – Distributed Machine Learing with the Samsara DSL
Sebastian Schelter – Distributed Machine Learing with the Samsara DSLSebastian Schelter – Distributed Machine Learing with the Samsara DSL
Sebastian Schelter – Distributed Machine Learing with the Samsara DSLFlink Forward
 
Assaf Araki – Real Time Analytics at Scale
Assaf Araki – Real Time Analytics at ScaleAssaf Araki – Real Time Analytics at Scale
Assaf Araki – Real Time Analytics at ScaleFlink Forward
 
K. Tzoumas & S. Ewen – Flink Forward Keynote
K. Tzoumas & S. Ewen – Flink Forward KeynoteK. Tzoumas & S. Ewen – Flink Forward Keynote
K. Tzoumas & S. Ewen – Flink Forward KeynoteFlink Forward
 
Till Rohrmann – Fault Tolerance and Job Recovery in Apache Flink
Till Rohrmann – Fault Tolerance and Job Recovery in Apache FlinkTill Rohrmann – Fault Tolerance and Job Recovery in Apache Flink
Till Rohrmann – Fault Tolerance and Job Recovery in Apache FlinkFlink Forward
 
Fabian Hueske – Juggling with Bits and Bytes
Fabian Hueske – Juggling with Bits and BytesFabian Hueske – Juggling with Bits and Bytes
Fabian Hueske – Juggling with Bits and BytesFlink Forward
 

Destacado (20)

Alexander Kolb – Flink. Yet another Streaming Framework?
Alexander Kolb – Flink. Yet another Streaming Framework?Alexander Kolb – Flink. Yet another Streaming Framework?
Alexander Kolb – Flink. Yet another Streaming Framework?
 
Christian Kreuzfeld – Static vs Dynamic Stream Processing
Christian Kreuzfeld – Static vs Dynamic Stream ProcessingChristian Kreuzfeld – Static vs Dynamic Stream Processing
Christian Kreuzfeld – Static vs Dynamic Stream Processing
 
Aljoscha Krettek – Notions of Time
Aljoscha Krettek – Notions of TimeAljoscha Krettek – Notions of Time
Aljoscha Krettek – Notions of Time
 
Apache Flink Training: System Overview
Apache Flink Training: System OverviewApache Flink Training: System Overview
Apache Flink Training: System Overview
 
Vasia Kalavri – Training: Gelly School
Vasia Kalavri – Training: Gelly School Vasia Kalavri – Training: Gelly School
Vasia Kalavri – Training: Gelly School
 
Martin Junghans – Gradoop: Scalable Graph Analytics with Apache Flink
Martin Junghans – Gradoop: Scalable Graph Analytics with Apache FlinkMartin Junghans – Gradoop: Scalable Graph Analytics with Apache Flink
Martin Junghans – Gradoop: Scalable Graph Analytics with Apache Flink
 
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache FlinkAlbert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink
Albert Bifet – Apache Samoa: Mining Big Data Streams with Apache Flink
 
Matthias J. Sax – A Tale of Squirrels and Storms
Matthias J. Sax – A Tale of Squirrels and StormsMatthias J. Sax – A Tale of Squirrels and Storms
Matthias J. Sax – A Tale of Squirrels and Storms
 
Flink 0.10 @ Bay Area Meetup (October 2015)
Flink 0.10 @ Bay Area Meetup (October 2015)Flink 0.10 @ Bay Area Meetup (October 2015)
Flink 0.10 @ Bay Area Meetup (October 2015)
 
Vyacheslav Zholudev – Flink, a Convenient Abstraction Layer for Yarn?
Vyacheslav Zholudev – Flink, a Convenient Abstraction Layer for Yarn?Vyacheslav Zholudev – Flink, a Convenient Abstraction Layer for Yarn?
Vyacheslav Zholudev – Flink, a Convenient Abstraction Layer for Yarn?
 
S. Bartoli & F. Pompermaier – A Semantic Big Data Companion
S. Bartoli & F. Pompermaier – A Semantic Big Data CompanionS. Bartoli & F. Pompermaier – A Semantic Big Data Companion
S. Bartoli & F. Pompermaier – A Semantic Big Data Companion
 
Fabian Hueske – Cascading on Flink
Fabian Hueske – Cascading on FlinkFabian Hueske – Cascading on Flink
Fabian Hueske – Cascading on Flink
 
Anwar Rizal – Streaming & Parallel Decision Tree in Flink
Anwar Rizal – Streaming & Parallel Decision Tree in FlinkAnwar Rizal – Streaming & Parallel Decision Tree in Flink
Anwar Rizal – Streaming & Parallel Decision Tree in Flink
 
Apache Flink Training: DataStream API Part 2 Advanced
Apache Flink Training: DataStream API Part 2 Advanced Apache Flink Training: DataStream API Part 2 Advanced
Apache Flink Training: DataStream API Part 2 Advanced
 
Kamal Hakimzadeh – Reproducible Distributed Experiments
Kamal Hakimzadeh – Reproducible Distributed ExperimentsKamal Hakimzadeh – Reproducible Distributed Experiments
Kamal Hakimzadeh – Reproducible Distributed Experiments
 
Sebastian Schelter – Distributed Machine Learing with the Samsara DSL
Sebastian Schelter – Distributed Machine Learing with the Samsara DSLSebastian Schelter – Distributed Machine Learing with the Samsara DSL
Sebastian Schelter – Distributed Machine Learing with the Samsara DSL
 
Assaf Araki – Real Time Analytics at Scale
Assaf Araki – Real Time Analytics at ScaleAssaf Araki – Real Time Analytics at Scale
Assaf Araki – Real Time Analytics at Scale
 
K. Tzoumas & S. Ewen – Flink Forward Keynote
K. Tzoumas & S. Ewen – Flink Forward KeynoteK. Tzoumas & S. Ewen – Flink Forward Keynote
K. Tzoumas & S. Ewen – Flink Forward Keynote
 
Till Rohrmann – Fault Tolerance and Job Recovery in Apache Flink
Till Rohrmann – Fault Tolerance and Job Recovery in Apache FlinkTill Rohrmann – Fault Tolerance and Job Recovery in Apache Flink
Till Rohrmann – Fault Tolerance and Job Recovery in Apache Flink
 
Fabian Hueske – Juggling with Bits and Bytes
Fabian Hueske – Juggling with Bits and BytesFabian Hueske – Juggling with Bits and Bytes
Fabian Hueske – Juggling with Bits and Bytes
 

Similar a Apache Flink: API, runtime, and project roadmap

Apache Flink internals
Apache Flink internalsApache Flink internals
Apache Flink internalsKostas Tzoumas
 
Apache Flink Deep Dive
Apache Flink Deep DiveApache Flink Deep Dive
Apache Flink Deep DiveVasia Kalavri
 
Introduction to Apache Flink
Introduction to Apache FlinkIntroduction to Apache Flink
Introduction to Apache Flinkmxmxm
 
Apache Flink Stream Processing
Apache Flink Stream ProcessingApache Flink Stream Processing
Apache Flink Stream ProcessingSuneel Marthi
 
Hadoop and HBase experiences in perf log project
Hadoop and HBase experiences in perf log projectHadoop and HBase experiences in perf log project
Hadoop and HBase experiences in perf log projectMao Geng
 
Spark what's new what's coming
Spark what's new what's comingSpark what's new what's coming
Spark what's new what's comingDatabricks
 
Hack Like It's 2013 (The Workshop)
Hack Like It's 2013 (The Workshop)Hack Like It's 2013 (The Workshop)
Hack Like It's 2013 (The Workshop)Itzik Kotler
 
Apache pig presentation_siddharth_mathur
Apache pig presentation_siddharth_mathurApache pig presentation_siddharth_mathur
Apache pig presentation_siddharth_mathurSiddharth Mathur
 
Deep Dive with Spark Streaming - Tathagata Das - Spark Meetup 2013-06-17
Deep Dive with Spark Streaming - Tathagata  Das - Spark Meetup 2013-06-17Deep Dive with Spark Streaming - Tathagata  Das - Spark Meetup 2013-06-17
Deep Dive with Spark Streaming - Tathagata Das - Spark Meetup 2013-06-17spark-project
 
Strata Singapore: Gearpump Real time DAG-Processing with Akka at Scale
Strata Singapore: GearpumpReal time DAG-Processing with Akka at ScaleStrata Singapore: GearpumpReal time DAG-Processing with Akka at Scale
Strata Singapore: Gearpump Real time DAG-Processing with Akka at ScaleSean Zhong
 
AI與大數據數據處理 Spark實戰(20171216)
AI與大數據數據處理 Spark實戰(20171216)AI與大數據數據處理 Spark實戰(20171216)
AI與大數據數據處理 Spark實戰(20171216)Paul Chao
 
Introduction to Hadoop
Introduction to HadoopIntroduction to Hadoop
Introduction to HadoopApache Apex
 
System Calls.pptxnsjsnssbhsbbebdbdbshshsbshsbbs
System Calls.pptxnsjsnssbhsbbebdbdbshshsbshsbbsSystem Calls.pptxnsjsnssbhsbbebdbdbshshsbshsbbs
System Calls.pptxnsjsnssbhsbbebdbdbshshsbshsbbsashukiller7
 
Hierarchical free monads and software design in fp
Hierarchical free monads and software design in fpHierarchical free monads and software design in fp
Hierarchical free monads and software design in fpAlexander Granin
 
Apache pig presentation_siddharth_mathur
Apache pig presentation_siddharth_mathurApache pig presentation_siddharth_mathur
Apache pig presentation_siddharth_mathurSiddharth Mathur
 

Similar a Apache Flink: API, runtime, and project roadmap (20)

Flink internals web
Flink internals web Flink internals web
Flink internals web
 
Apache Flink internals
Apache Flink internalsApache Flink internals
Apache Flink internals
 
Apache Flink Deep Dive
Apache Flink Deep DiveApache Flink Deep Dive
Apache Flink Deep Dive
 
Introduction to Apache Flink
Introduction to Apache FlinkIntroduction to Apache Flink
Introduction to Apache Flink
 
Apache Flink Stream Processing
Apache Flink Stream ProcessingApache Flink Stream Processing
Apache Flink Stream Processing
 
Hadoop and HBase experiences in perf log project
Hadoop and HBase experiences in perf log projectHadoop and HBase experiences in perf log project
Hadoop and HBase experiences in perf log project
 
So you think you can stream.pptx
So you think you can stream.pptxSo you think you can stream.pptx
So you think you can stream.pptx
 
Spark what's new what's coming
Spark what's new what's comingSpark what's new what's coming
Spark what's new what's coming
 
Hack Like It's 2013 (The Workshop)
Hack Like It's 2013 (The Workshop)Hack Like It's 2013 (The Workshop)
Hack Like It's 2013 (The Workshop)
 
Data race
Data raceData race
Data race
 
Apache pig presentation_siddharth_mathur
Apache pig presentation_siddharth_mathurApache pig presentation_siddharth_mathur
Apache pig presentation_siddharth_mathur
 
Deep Dive with Spark Streaming - Tathagata Das - Spark Meetup 2013-06-17
Deep Dive with Spark Streaming - Tathagata  Das - Spark Meetup 2013-06-17Deep Dive with Spark Streaming - Tathagata  Das - Spark Meetup 2013-06-17
Deep Dive with Spark Streaming - Tathagata Das - Spark Meetup 2013-06-17
 
Strata Singapore: Gearpump Real time DAG-Processing with Akka at Scale
Strata Singapore: GearpumpReal time DAG-Processing with Akka at ScaleStrata Singapore: GearpumpReal time DAG-Processing with Akka at Scale
Strata Singapore: Gearpump Real time DAG-Processing with Akka at Scale
 
AI與大數據數據處理 Spark實戰(20171216)
AI與大數據數據處理 Spark實戰(20171216)AI與大數據數據處理 Spark實戰(20171216)
AI與大數據數據處理 Spark實戰(20171216)
 
Hadoop-Introduction
Hadoop-IntroductionHadoop-Introduction
Hadoop-Introduction
 
Introduction to Hadoop
Introduction to HadoopIntroduction to Hadoop
Introduction to Hadoop
 
System Calls.pptxnsjsnssbhsbbebdbdbshshsbshsbbs
System Calls.pptxnsjsnssbhsbbebdbdbshshsbshsbbsSystem Calls.pptxnsjsnssbhsbbebdbdbshshsbshsbbs
System Calls.pptxnsjsnssbhsbbebdbdbshshsbshsbbs
 
Hierarchical free monads and software design in fp
Hierarchical free monads and software design in fpHierarchical free monads and software design in fp
Hierarchical free monads and software design in fp
 
Apache pig presentation_siddharth_mathur
Apache pig presentation_siddharth_mathurApache pig presentation_siddharth_mathur
Apache pig presentation_siddharth_mathur
 
Influx data basic
Influx data basicInflux data basic
Influx data basic
 

Más de Kostas Tzoumas

Debunking Common Myths in Stream Processing
Debunking Common Myths in Stream ProcessingDebunking Common Myths in Stream Processing
Debunking Common Myths in Stream ProcessingKostas Tzoumas
 
Debunking Six Common Myths in Stream Processing
Debunking Six Common Myths in Stream ProcessingDebunking Six Common Myths in Stream Processing
Debunking Six Common Myths in Stream ProcessingKostas Tzoumas
 
Streaming in the Wild with Apache Flink
Streaming in the Wild with Apache FlinkStreaming in the Wild with Apache Flink
Streaming in the Wild with Apache FlinkKostas Tzoumas
 
Apache Flink at Strata San Jose 2016
Apache Flink at Strata San Jose 2016Apache Flink at Strata San Jose 2016
Apache Flink at Strata San Jose 2016Kostas Tzoumas
 
First Flink Bay Area meetup
First Flink Bay Area meetupFirst Flink Bay Area meetup
First Flink Bay Area meetupKostas Tzoumas
 
Flink Streaming Hadoop Summit San Jose
Flink Streaming Hadoop Summit San JoseFlink Streaming Hadoop Summit San Jose
Flink Streaming Hadoop Summit San JoseKostas Tzoumas
 

Más de Kostas Tzoumas (6)

Debunking Common Myths in Stream Processing
Debunking Common Myths in Stream ProcessingDebunking Common Myths in Stream Processing
Debunking Common Myths in Stream Processing
 
Debunking Six Common Myths in Stream Processing
Debunking Six Common Myths in Stream ProcessingDebunking Six Common Myths in Stream Processing
Debunking Six Common Myths in Stream Processing
 
Streaming in the Wild with Apache Flink
Streaming in the Wild with Apache FlinkStreaming in the Wild with Apache Flink
Streaming in the Wild with Apache Flink
 
Apache Flink at Strata San Jose 2016
Apache Flink at Strata San Jose 2016Apache Flink at Strata San Jose 2016
Apache Flink at Strata San Jose 2016
 
First Flink Bay Area meetup
First Flink Bay Area meetupFirst Flink Bay Area meetup
First Flink Bay Area meetup
 
Flink Streaming Hadoop Summit San Jose
Flink Streaming Hadoop Summit San JoseFlink Streaming Hadoop Summit San Jose
Flink Streaming Hadoop Summit San Jose
 

Último

Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxBkGupta21
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demoHarshalMandlekar2
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embeddingZilliz
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersNicole Novielli
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 

Último (20)

Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptx
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demo
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embedding
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software Developers
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 

Apache Flink: API, runtime, and project roadmap

  • 14. Streaming operators
     Most DataSet operators can be used: map, filter, flatMap, reduce, reduceGroup, join, cross, coGroup, iterate, project, grouping, partitioning, aggregations, union (merge), …
     DataStream-specific operators (excerpt):
    • CoMap, CoReduce, etc.: share state between two streams (a sketch follows below)
    • Temporal binary ops: join, cross, …
    • Windows: policy-based flexible windowing (Time, Count, Delta)
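To make the CoMap bullet concrete, here is a minimal Java sketch: two streams are connected and processed by one operator that keeps state shared across both inputs. This is illustrative only; the socket sources are placeholders, and the exact package of CoMapFunction has moved between Flink versions.

    // Sketch: tag records from a data stream with the latest control message
    // seen on a second stream (package paths may vary by Flink version).
    DataStream<String> control = env.socketTextStream("localhost", 9998);
    DataStream<String> data = env.socketTextStream("localhost", 9999);

    DataStream<String> tagged = data
        .connect(control)
        .map(new CoMapFunction<String, String, String>() {
            private String lastControl = "none";   // state shared by both inputs

            @Override
            public String map1(String value) {     // called for elements of 'data'
                return lastControl + ": " + value;
            }

            @Override
            public String map2(String value) {     // called for elements of 'control'
                lastControl = value;
                return "switched to " + value;
            }
        });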
  • 15. Windowing
     Trigger policy: when to trigger the computation on the current window
     Eviction policy: when data points should leave the window; defines the window width/size
     E.g., a count-based policy: evict when #elements > n, start a new window every n-th element
     Built-in: Count, Time, Delta policies (a count-based sketch follows below)
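As an illustration of the count-based policy described above, a rough sketch against the streaming API of this era might look as follows. Treat it as an assumption rather than a verified snippet: the Count.of(...) helper, its package, and the exact chaining after every() may be named differently in your Flink version, and wordCounts is assumed to be a DataStream<Tuple2<String, Integer>> like the one built in the earlier tweets example.

    // Count-based sliding window (sketch; helper names are assumptions):
    DataStream<Tuple2<String, Integer>> rolling = wordCounts
        .window(Count.of(100))   // eviction: keep the last 100 elements
        .every(Count.of(10))     // trigger: re-evaluate every 10th arriving element
        .sum(1);                 // aggregate field 1 inside each window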
  • 16. Windowing example
    // Build a new model every minute on the last 5 minutes' worth of data
    val model = trainingData
      .window(Time.of(5, TimeUnit.MINUTES))
      .every(Time.of(1, TimeUnit.MINUTES))
      .reduceGroup(buildModel)
    // Predict new data using the most up-to-date model
    val prediction = newData
      .connect(model)
      .map(predict)
  • 17. Window Join example
    case class Name(id: Long, name: String)
    case class Age(id: Long, age: Int)
    case class Person(name: String, age: Int)

    val names = ...
    val ages = ...

    names.join(ages)
      .onWindow(5, TimeUnit.SECONDS)
      .where("id")
      .equalTo("id") { (n, a) => Person(n.name, a.age) }
  • 18. Iterative processing example
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    env.generateSequence(1, 10).iterate(incrementToTen, 1000).print
    env.execute("Iterative example")

    def incrementToTen(input: DataStream[Long]) = {
      val incremented = input.map { _ + 1 }
      val split = incremented.split { x => if (x >= 10) "out" else "feedback" }
      (split.select("feedback"), split.select("out"))
    }
    (Data flow: number stream → map → split; the "feedback" output loops back into the map, the "out" output forms the result stream.)
  • 19. Spargel: Flink’s Graph API
    DataSet<Tuple2<Long, Long>> result = vertices
        .runOperation(VertexCentricIteration.withPlainEdges(
            edges, new CCUpdater(), new CCMessenger(), 100));

    class CCUpdater extends VertexUpdateFunction …
    class CCMessenger extends MessagingFunction …

    Spargel is a very thin layer (~500 LOC) on top of Flink’s delta iterations
  • 20. Other API elements & tools
     Accumulators and counters: Int, Long, Double counters, a histogram accumulator, or define your own
     Broadcast variables
     Plan visualization
     Local debugging/testing mode
    (A short accumulator/broadcast sketch follows below.)
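To make the accumulator and broadcast-variable bullets concrete, here is a minimal Java sketch. It only relies on the DataSet API facilities named on the slide (IntCounter, addAccumulator, withBroadcastSet, getBroadcastVariable); the data sets, the "stopwords" name, and the filtering logic are illustrative assumptions.

    // Sketch: count processed lines with an accumulator and drop lines that
    // contain a word from a broadcast stop-word list.
    DataSet<String> lines = env.readTextFile("hdfs:///logs");
    DataSet<String> stopWords = env.fromElements("error", "debug", "trace");

    DataSet<String> cleaned = lines
        .filter(new RichFilterFunction<String>() {
            private final IntCounter numLines = new IntCounter();
            private Collection<String> stop;

            @Override
            public void open(Configuration parameters) {
                getRuntimeContext().addAccumulator("num-lines", numLines);
                stop = getRuntimeContext().getBroadcastVariable("stopwords");
            }

            @Override
            public boolean filter(String line) {
                numLines.add(1);                  // counted on every parallel instance
                for (String word : stop) {
                    if (line.contains(word)) {
                        return false;             // drop lines mentioning a stop word
                    }
                }
                return true;
            }
        })
        .withBroadcastSet(stopWords, "stopwords"); // ships 'stopWords' to all instances

    cleaned.writeAsText("hdfs:///out");
    JobExecutionResult result = env.execute("accumulator example");
    System.out.println("lines seen: " + result.getAccumulatorResult("num-lines"));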
  • 21. Hadoop Compatibility
     Flink runs on YARN, can read from HDFS and HBase
     Can use all Hadoop input and output formats
     Can use unmodified Hadoop Mappers and Reducers and mix-and-match them with Flink operators
     Coming up: can run unmodified Hadoop jobs (faster)
    (A sketch of reading via a Hadoop input format follows below.)
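As one example of the "all Hadoop input formats" point, a Flink program can wrap a mapreduce input format roughly like this. The wrapper class and its package have moved between Flink versions, so treat the exact import location as an assumption; the paths are placeholders.

    // Sketch: reading HDFS text files through Hadoop's TextInputFormat.
    // Records arrive as Tuple2<key, value>, mirroring Hadoop's key/value pairs.
    Job job = Job.getInstance();
    HadoopInputFormat<LongWritable, Text> hadoopIF =
        new HadoopInputFormat<>(new TextInputFormat(), LongWritable.class, Text.class, job);
    TextInputFormat.addInputPath(job, new Path("hdfs:///logs"));

    DataSet<Tuple2<LongWritable, Text>> input = env.createInput(hadoopIF);

    DataSet<String> lines = input.map(kv -> kv.f1.toString());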
  • 26. (Architecture diagram: the Flink Client & Optimizer submits the program to the Job Manager, which coordinates the Task Managers that run it.)
    DataSet<String> text = env.readTextFile(input);
    DataSet<Tuple2<String, Integer>> result = text
        .flatMap((str, out) -> {
            for (String token : str.split("\\W")) {
                out.collect(new Tuple2<>(token, 1));
            }
        })
        .groupBy(0)
        .aggregate(SUM, 1);
    Example data flow: "O Romeo, Romeo, wherefore art thou Romeo?" → (O,1), (Romeo,3), (wherefore,1), (art,1), (thou,1);
    "Nor arm, nor face, nor any other part" → (nor,3), (arm,1), (face,1), (any,1), (other,1), (part,1)
  • 27. If you need to know one thing about Flink, it is that you don’t need to know the internals of Flink.
  • 28. Philosophy
     Flink “hides” its internal workings from the user
     This is good: the user does not worry about how jobs are executed, and internals can be changed without breaking changes
     … and bad: the execution model is more complicated to explain compared to MapReduce or Spark RDDs
  • 29. Recap: DataSet Input First SecondX Y Operator X Operator Y 29 ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment(); DataSet<String> input = env.readTextFile(input); DataSet<String> first = input .filter (str -> str.contains(“Apache Flink“)); DataSet<String> second = first .filter (str -> str.length() > 40); second.print() env.execute();
  • 30. Common misconception
     Programs are not executed eagerly
     Instead, the system compiles the program to an execution plan and executes that plan
    (A small sketch of this lazy behavior follows below.)
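A minimal Java sketch of what "not executed eagerly" means in practice. The file paths are placeholders; whether you can both print the plan and then execute the same program in one run depends on the Flink version, so treat the getExecutionPlan call as optional.

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    DataSet<String> text = env.readTextFile("hdfs:///input");            // nothing is read yet
    DataSet<String> hits = text.filter(s -> s.contains("Apache Flink")); // still only building the plan
    hits.writeAsText("hdfs:///output");                                  // declares a sink, still nothing runs

    // Optional: dump the plan the optimizer would execute (JSON) instead of running it.
    // System.out.println(env.getExecutionPlan());

    env.execute("lazy example");   // only this call triggers optimization and execution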
  • 31. DataSet<String>
     Think of it as a PCollection<String>, or a Spark RDD[String]
     With a major difference: it can be produced/recovered in several ways
    • … like a Java collection
    • … like an RDD
    • … perhaps it is never fully materialized (because the program does not need it to)
    • … implicitly updated in an iteration
     And this is transparent to the user
  • 32. Example: grep
    (Diagram: a log of lines such as “Romeo, Romeo, where art thou Romeo?” is loaded once, then searched for three terms — Grep 1, Grep 2, Grep 3 — one search per term.)
  • 33. Staged (batch) execution
    Stage 1: create/cache the log
    Subsequent stages: grep the log for matches
    Caching happens in memory, spilling to disk if needed
  • 34. Pipelined execution
    Stage 1: deploy and start all operators
    Data transfer happens in memory, spilling to disk if needed
    Note: the Log DataSet is never “created”!
  • 35. Benefits of pipelining
     25 node cluster
     Grep log for 3 terms
     Scale data size from 100 GB to 1 TB
    (Chart: time to complete grep (sec) vs. data size (GB); the point where cluster memory is exceeded is marked.)
  • 37. Drawbacks of pipelining
     Long pipelines may be active at the same time, leading to memory fragmentation
    • FLINK-1101: changes memory allocation from static to adaptive
     Fault tolerance is harder to get right
    • FLINK-986: adds intermediate data sets (similar to RDDs) as first-class citizens to the Flink runtime; will lead to fine-grained fault tolerance among other features
  • 38. Example: Iterative processing
    DataSet<Page> pages = ...
    DataSet<Neighborhood> edges = ...
    DataSet<Page> oldRanks = pages;
    DataSet<Page> newRanks;

    for (i = 0; i < maxIterations; i++) {
        newRanks = update(oldRanks, edges);
        oldRanks = newRanks;
    }
    DataSet<Page> result = newRanks;

    DataSet<Page> update(DataSet<Page> ranks, DataSet<Neighborhood> adjacency) {
        return ranks
            .join(adjacency)
            .where("id").equalTo("id")
            .with((page, adj, out) -> {
                for (long n : adj.neighbors)
                    out.collect(new Page(n, df * page.rank / adj.neighbors.length)); // df = damping factor
            })
            .groupBy("id")
            .reduce((a, b) -> new Page(a.id, a.rank + b.rank));
    }
  • 39. Iterate by unrolling
     The for/while loop in the client submits one job per iteration step
     Data reuse by caching in memory and/or on disk
  • 40. Iterate natively
    DataSet<Page> pages = ...
    DataSet<Neighborhood> edges = ...

    IterativeDataSet<Page> pagesIter = pages.iterate(maxIterations);
    DataSet<Page> newRanks = update(pagesIter, edges);
    DataSet<Page> result = pagesIter.closeWith(newRanks);
    (Diagram: the step function consumes the partial solution plus other data sets and produces the next partial solution; the initial solution feeds the first step, and closeWith yields the iteration result.)
  • 41. Iterate natively with deltas
    DeltaIteration<...> pagesIter = pages.iterateDelta(initialDeltas, maxIterations, 0);
    DataSet<...> newRanks = update(pagesIter, edges);
    DataSet<...> deltas = ...
    DataSet<...> result = pagesIter.closeWith(newRanks, deltas);
    (Diagram: each step merges the delta set into the solution set and produces a new workset that replaces the previous one.)
    See http://data-artisans.com/data-analysis-with-flink.html — a fuller sketch follows below.
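To show the delta-iteration API end to end, here is a minimal connected-components sketch in Java. The solution set holds (vertexId, componentId) pairs and the workset holds the vertices whose component changed in the last step; the input data sets and lambda bodies are illustrative assumptions, and depending on the Flink version lambdas over generic tuples may need explicit type hints.

    int maxIterations = 100;
    DataSet<Tuple2<Long, Long>> vertices = ...;  // (vertexId, initial componentId = vertexId)
    DataSet<Tuple2<Long, Long>> edges = ...;     // (source, target), assumed to contain both directions

    DeltaIteration<Tuple2<Long, Long>, Tuple2<Long, Long>> iteration =
        vertices.iterateDelta(vertices, maxIterations, 0);   // key: field 0 (vertexId)

    // Propagate each changed vertex's component id to its neighbors,
    // keeping the minimum candidate per neighbor.
    DataSet<Tuple2<Long, Long>> candidates = iteration.getWorkset()
        .join(edges).where(0).equalTo(0)
        .with((vertex, edge) -> new Tuple2<>(edge.f1, vertex.f1))
        .groupBy(0).aggregate(Aggregations.MIN, 1);

    // Emit a vertex only if its component id actually got smaller: this is both
    // the delta for the solution set and the workset for the next step.
    DataSet<Tuple2<Long, Long>> changes = candidates
        .join(iteration.getSolutionSet()).where(0).equalTo(0)
        .with((candidate, current, out) -> {
            if (candidate.f1 < current.f1) {
                out.collect(candidate);
            }
        });

    DataSet<Tuple2<Long, Long>> components = iteration.closeWith(changes, changes);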
  • 43. Optimization of iterative algorithms
     Caching loop-invariant data
     Pushing work “out of the loop”
     Maintaining state as an index
  • 46. The growing Flink stack
    APIs: a Common API with Scala API, Java API, Python API (upcoming), Graph API, and Apache MRQL; the Flink Optimizer and Flink Stream Builder sit between the APIs and the runtime
    Runtime: the Flink Local Runtime, runnable in an embedded environment (Java collections), a Local Environment (for debugging), a Remote environment (regular cluster execution), or on Apache Tez; single node execution or a standalone/YARN cluster
    Data storage: HDFS, files, S3, JDBC, Redis, RabbitMQ, Kafka, Azure tables, …
  • 47. Program lifecycle
    val source1 = …
    val source2 = …
    val maxed = source1
      .map(v => (v._1, v._2, math.max(v._1, v._2)))
    val filtered = source2
      .filter(v => (v._1 > 4))
    val result = maxed
      .join(filtered).where(0).equalTo(0)
      .filter(_._1 > 3)
      .groupBy(0)
      .reduceGroup { …… }
    (The slide annotates this snippet with numbered steps 1–5 of the program lifecycle.)
  • 48. Flink Optimizer
     The optimizer is the component that selects an execution plan for a Common API program
     Think of an AI system manipulating your program for you
     But don’t be scared – it works
    • Relational databases have been doing this for decades – Flink ports the technology to API-based systems
  • 49. A simple program
    DataSet<Tuple5<Integer, String, String, String, Integer>> orders = …
    DataSet<Tuple2<Integer, Double>> lineitems = …

    DataSet<Tuple2<Integer, Integer>> filteredOrders = orders
        .filter(. . .)
        .project(0,4).types(Integer.class, Integer.class);

    DataSet<Tuple3<Integer, Integer, Double>> lineitemsOfOrders = filteredOrders
        .join(lineitems)
        .where(0).equalTo(0)
        .projectFirst(0,1).projectSecond(1)
        .types(Integer.class, Integer.class, Double.class);

    DataSet<Tuple3<Integer, Integer, Double>> priceSums = lineitemsOfOrders
        .groupBy(0,1).aggregate(Aggregations.SUM, 2);

    priceSums.writeAsCsv(outputPath);
  • 50. Two execution plans
    Plan 1: broadcast the filtered orders and forward the lineitems into a hybrid hash join (orders build the hash table, lineitems probe), then combine and a sort-based group-reduce
    Plan 2: hash-partition both inputs on the join key [0] into the hybrid hash join, then a sort-based group-reduce on partitions of [0,1]
    The best plan depends on the relative sizes of the input files
  • 51. Flink Local Runtime
     The local runtime, not the distributed execution engine
     Aka: what happens inside every parallel task
  • 52. Flink runtime operators
     Sorting and hashing data: necessary for grouping, aggregation, reduce, join, coGroup, and delta iterations
     Flink contains tailored implementations of hybrid hashing and external sorting in Java that scale well with both abundant and restricted memory sizes
  • 53. Internal data representation
    How is intermediate data internally represented?
    (Diagram: records such as “O Romeo, Romeo, wherefore art thou Romeo?” leave the map tasks as serialized bytes, travel over the network in that form, and are sorted locally before the reduce tasks see them again as records, e.g. (art,1), (O,1), (Romeo,1), (Romeo,1).)
  • 54. Internal data representation
     Two options: Java objects or raw bytes
     Java objects
    • Easier to program
    • Can suffer from GC overhead
    • Hard to de-stage data to disk; may suffer from “out of memory” exceptions
     Raw bytes
    • Harder to program (custom serialization stack, more involved runtime operators)
    • Solves most of the memory and GC problems
    • Overhead from object (de)serialization
     Flink follows the raw byte approach
  • 55. Memory in Flink
    public class WC {
      public String word;
      public int count;
    }
    (Diagram of the JVM heap: a managed part holding a pool of memory pages used for sorting, hashing, caching, shuffling and broadcasts, plus network buffers; an unmanaged part holding user code objects.)
  • 56. Memory in Flink (2)
     Internal memory management
    • Flink initially allocates 70% of the free heap as byte[] segments
    • Internal operators allocate() and release() these segments
     Flink has its own serialization stack; all accepted data types are serialized into these segments
     Easy to reason about memory, (almost) no OutOfMemory errors, reduced pressure on the GC (smooth performance)
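The 70% figure above corresponds to a configuration knob. As a rough sketch (key names taken from Flink configurations of that era, so treat them as assumptions and check your version's documentation), the relevant flink-conf.yaml entries look like this:

    # flink-conf.yaml (sketch)
    taskmanager.memory.fraction: 0.7          # share of the free heap handed to Flink's managed memory pages
    taskmanager.network.numberOfBuffers: 2048 # buffers used for network transfer during shuffles/broadcasts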
  • 57. Operating on serialized data
    Microbenchmark:
     Sorting 1 GB worth of (long, double) tuples
     67,108,864 elements
     Simple quicksort
  • 58. Flink distributed execution
     Pipelined: the same engine runs Flink batch and Flink streaming
     Pluggable: the local runtime can be executed on other engines, e.g. Java collections or Apache Tez
     Coordination is built on the Akka library
  • 59. Streaming overview
     Streaming and batch use the same code paths in the runtime
     Differences: streaming does not use Flink’s memory management right now, and streaming uses its own compiler/optimizer
  • 61. Flink Roadmap
     Currently being discussed by the Flink community
     Flink has a major release every 3 months, and one or more bug-fixing releases between major releases
     Caveat: rough roadmap; depends on volunteer work, the outcome of community discussion, and Apache open source processes
  • 62. Roadmap for 2015 (highlights), by area (the original slide lays these out as a Q1/Q2/Q3 table)
    APIs: Logical Query integration, additional operators, interactive programs, interactive Scala shell, SQL-on-Flink
    Optimizer: semantic annotations, HCatalog integration, optimizer hints
    Runtime: dual engine (blocking & pipelining), fine-grained fault tolerance, dynamic memory allocation
    Streaming: better memory management, more operators in the API, at-least-once processing guarantees, unified batch and streaming, exactly-once processing guarantees
    ML library: first version, additional algorithms, Mahout integration
    Graph library: first version
    Integration: Tez, Samoa, Mahout
  • 63. Dual streaming and batch engine
     The runtime will support the current pipelined execution mode and a blocking (RDD-like) execution mode
     Batch programs use a combination of both, stream programs use pipelining, interactive programs need blocking
     (Table: the Batch API (DataSet) can run on both blocking and pipelined execution; the Stream API (DataStream) runs pipelined only)
     Expected Q1 2015
  • 64. Fault tolerance improvements
     Fine-grained fault tolerance for batch programs
    • Currently, lineage goes back to the sources unless the user explicitly persists an intermediate data set
    • In the future, replay will restart from an automatically checkpointed intermediate data set
     Expected Q1 2015
  • 65. Streaming fault tolerance
     At-least-once guarantees: reset of operator state, source in-memory replication; expected Q1 2015
     Exactly-once guarantees: checkpointing of operator state, the rest similar to batch fine-grained recovery; expected Q2 2015
  • 66. Interactive programs & shells
    Support interactive ad-hoc data analysis
     Programs that are executed partially in the cluster and partially in the client
    • Needs the same runtime code as fine-grained batch fault tolerance
    • Expected Q1 2015
     Interactive Scala shell
    • Needs to transfer class files between shell and master
    • Expected Q2 2015
    • Want to contribute?
  • 67. Machine Learning
    Machine learning capabilities on Flink will stand on two legs
     Dedicated machine learning library
    • Optimized algorithms
    • First version (with infrastructure code, ALS, k-means, logistic regression) open for contributions in Q1 2015
     Mahout linear algebra DSL integration
    • Expected Q2 2015
  • 68. Graph Library
     Codename “Gelly”
     Library with graph operations
    • Common graph stats, PageRank, SSSP, Connected Components, label propagation
    • Vertex-centric API
    • Gather-apply-scatter API
     Expected Q1 2015
  • 69. Logical Query Integration
     Define queries in SQL style inside Scala/Java, similar to Microsoft LINQ
     Expected Q1 2015
    DataSet<Row> dates = env.readCsv(...).as("order_id", "date");
    dates.filter("date < 2010-7-14")
         .join(other).where("order_id").equalTo("id");
  • 70. SQL on Flink
     Allow HiveQL queries to run on Flink directly, both standalone and embedded in batch programs
     Approach: translate SQL to the Logical Query Interface (hey, Flink already has an optimizer)
     Currently in the design phase
     Expected Q3/Q4 2015
  • 71. Integration with other projects
     Apache Tez: expected Q1 2015
     Apache Mahout: expected Q2 2015
     Tachyon: basic integration available, waiting for the Tachyon API
     Apache Zeppelin (incubating): interest from both communities
     Apache Samoa (incubating): expected Q1 2015, interest from both communities
     H2O: in consideration
     Apache Hive: expected Q3/Q4 2015
  • 72. And many more…
     Runtime: even better performance and robustness, using off-heap memory and dynamic memory allocation
     Improvements to the Flink optimizer: integration with HCatalog, better statistics, runtime optimization
     Streaming graph and ML pipeline libraries
  • 74. Stay informed
     flink.apache.org
    • Subscribe to the mailing lists! http://flink.apache.org/community.html#mailing-lists
     Blogs: flink.apache.org/blog, data-artisans.com/blog
     Twitter: follow @ApacheFlink