2. Agenda
● History / Context
○ Hadoop
○ Lambda
● Spark Basics
○ RDDs, DataFrames, SQL, Streaming
● Play along / Demo
3. We work at Monetate...
[Architecture diagram: a Client (e.g. a Retailer) and its consumers on one side, the marketer on the other; components include the Decision Engine, Data Analytics Engine, Dashboard, Warehouse, and Meta Observations.]
4. We call it a...
Personalization Platform
Not so hard until...
m’s → B’s (sessions / month)
100ms’s → 10ms’s (response times)
days → minutes (analytics lag)
17. Concept : RDDs
“Spark revolves around the concept of a resilient distributed
dataset (RDD), which is a fault-tolerant collection of
elements that can be operated on in parallel. There are two
ways to create RDDs: parallelizing an existing collection in
your driver program, or referencing a dataset in an external
storage system, such as a shared filesystem, HDFS,
HBase, or any data source offering a Hadoop InputFormat.”
http://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds
18. Concept : Transformations & Actions
Transformation:
RDD(s) → RDD
e.g. map, filter, groupBy, etc.
Action:
RDD → value
e.g. reduce, count, etc.
21. Concept : DataFrames
DataFrames = RDD + Schema
“A DataFrame is a distributed collection of data organized
into named columns. It is conceptually equivalent to a table
in a relational database or a data frame in R/Python, but
with richer optimizations under the hood. DataFrames can
be constructed from a wide array of sources such as:
structured data files, tables in Hive, external databases, or
existing RDDs.”
http://spark.apache.org/docs/latest/sql-programming-guide.html#dataframes
22. Concept : Spark SQL
SELECT
min(event_time) AS start_time,
max(event_time) AS end_time,
account_id
FROM events GROUP BY account_id