These slides were presented by Hossein Falaki of Databricks to the Atlanta Apache Spark User Group on Thursday, March 9, 2017: https://www.meetup.com/Atlanta-Apache-Spark-User-Group/events/238120227/
2. About me
• Former Data Scientist at Apple Siri
• Software Engineer at Databricks
• Have been using Apache Spark since version 0.6
• Developed the first version of the Apache Spark CSV data source
• Worked on SparkR & the Databricks R Notebook feature
• Currently focusing on the R experience at Databricks
3. What is SparkR?
An R package distributed with Apache Spark (soon on CRAN):
- Provides an R frontend to Spark
- Exposes Spark DataFrames (inspired by R and Pandas)
- Convenient interoperability between R data.frames and Spark DataFrames
Spark: distributed/robust processing, data sources, off-memory data structures
+ R: dynamic environment, interactivity, packages, visualization
7. Overview of SparkR API :: Session
The Spark session is your interface to Spark functionality in R
o SparkR DataFrames are implemented on top of SparkSQL tables
o All DataFrame operations go through the SQL optimizer (Catalyst)
o Since Spark 2.0, the sqlContext is wrapped in a new object, the SparkR session.
> spark <- sparkR.session()
All SparkR functions work if you pass them a session, or they will assume an existing session.
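For example, a minimal sketch of starting a session explicitly (the master, application name, and memory setting below are illustrative assumptions, not from the slides):
library(SparkR)
spark <- sparkR.session(
  master = "local[*]",                            # assumption: local mode for illustration
  appName = "SparkR-demo",
  sparkConfig = list(spark.driver.memory = "2g")
)
# Later calls such as read.df() or sql() pick up this existing session implicitly.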
9. Moving data between R and JVM
(Diagram: data moves between the R process and the JVM through the R backend;
SparkR::createDataFrame() sends data from R to the JVM, and SparkR::collect() brings it
back from the JVM to R.)
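A minimal sketch of the two directions (the toy data.frame is an assumption):
localDF <- data.frame(x = 1:3, y = c("a", "b", "c"))
sparkDF <- createDataFrame(localDF)    # R data.frame -> Spark DataFrame (R to JVM)
localAgain <- collect(sparkDF)         # Spark DataFrame -> R data.frame (JVM to R)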
10. Overview of SparkR API :: DataFrame API
SparkR DataFrames behave similarly to R data.frames
> sparkDF$newCol <- sparkDF$col + 1
> subsetDF <- sparkDF[, c("date", "type")]
> recentData <- subset(sparkDF, sparkDF$date == "2015-10-24")
> firstRow <- head(sparkDF, 1)
> names(subsetDF) <- c("Date", "Type")
> dim(recentData)
> head(count(group_by(subsetDF, "Date")))
11. Overview of SparkR API :: SQL
You can register a DataFrame as a table and query it in SQL
> logs <- read.df("data/logs", source = "json")
> registerTempTable(logs, "logsTable")
> errorsByCode <- sql("select code, count(*) as num from logsTable
    where type = 'error' group by code order by num desc")
> reviewsDF <- tableToDF("reviewsTable")
> registerTempTable(filter(reviewsDF, reviewsDF$rating == 5), "fiveStars")
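The registered temporary table can then be queried like any other; a small follow-on sketch (the aggregate is illustrative, not from the slides):
> head(sql("select count(*) as numFiveStars from fiveStars"))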
12. Moving between languages
R:
df <- read.df(...)
wiki <- filter(df, ...)
registerTempTable(wiki, "wiki")
Scala:
val wiki = table("wiki")
val parsed = wiki.map {
  case Row(_, _, text: String, _, _) => text.split(' ')
}
val model = KMeans.train(parsed)
Both sides talk to the same Spark session, so a table registered from R is visible from Scala.
14. SparkR UDF API
spark.lapply: runs a function over a list of elements (spark.lapply())
dapply: applies a function to each partition of a SparkDataFrame (dapply(), dapplyCollect())
gapply: applies a function to each group within a SparkDataFrame (gapply(), gapplyCollect())
15. spark.lapply
Simplest SparkR UDF pattern. For each element of a list, Spark:
1. Sends the function to an R worker
2. Executes the function
3. Returns the results from all workers as a list to the R driver
spark.lapply(1:100, function(x) {
  runBootstrap(x)
})
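runBootstrap above is a placeholder from the slide; a self-contained sketch of the same pattern, where each list element becomes one small simulation on an R worker:
results <- spark.lapply(1:100, function(seed) {
  set.seed(seed)
  mean(rnorm(1000))   # computed on an R worker; all values return as a list to the driver
})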
16. spark.lapply control flow
(Diagram: the R driver and the driver JVM share a machine, as do each R worker and its
worker JVM; the driver JVM reaches the worker JVMs over the cluster network.)
1. Serialize the R closure on the R driver
2. Transfer it over a local socket to the driver JVM
3. Transfer the serialized closure over the network to the worker JVMs
4. Transfer it over a local socket to each R worker
5. Deserialize the closure
6. Serialize the result
7. Transfer it over a local socket to the worker JVM
8. Transfer the serialized result over the network back to the driver JVM
9. Transfer it over a local socket to the R driver
10. Deserialize the result
17. dapply
For each partition of a Spark DataFrame, Spark:
1. Collects the partition as an R data.frame
2. Sends the R function to the R worker
3. Executes the function
dapply(sparkDF, func, schema) combines the results as a DataFrame with the given schema.
dapplyCollect(sparkDF, func) combines the results as an R data.frame.
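A minimal dapply() sketch using the built-in faithful data set; the derived column is an assumption for illustration:
df <- createDataFrame(faithful)                       # columns: eruptions, waiting
schema <- structType(structField("eruptions", "double"),
                     structField("waiting", "double"),
                     structField("waiting_secs", "double"))
withSecs <- dapply(df, function(part) {
  cbind(part, part$waiting * 60)                      # 'part' is one partition as an R data.frame
}, schema)
head(collect(withSecs))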
18. dapply control & data flow
(Diagram: control flows from the R driver through the driver JVM and the cluster network to
the worker JVMs; on each worker, the input data is serialized, transferred over a local socket
to the R worker, and deserialized, and the result data makes the same ser/de transfer back.)
19. dapplyCollect control & data flow
(Diagram: same flow as dapply(), except that the result is transferred back over the cluster
network to the driver JVM and deserialized in the R driver as a local data.frame.)
20. gapply
Groups a Spark DataFrame on one or more columns; then, for each group, Spark:
1. Collects the group as an R data.frame
2. Sends the R function to the R worker
3. Executes the function
gapply(sparkDF, cols, func, schema) combines the results as a DataFrame with the given schema.
gapplyCollect(sparkDF, cols, func) combines the results as an R data.frame.
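A minimal gapply() sketch grouping the built-in mtcars data by cylinder count; the aggregation is an assumption for illustration:
df <- createDataFrame(mtcars)
schema <- structType(structField("cyl", "double"),
                     structField("max_mpg", "double"))
maxMpg <- gapply(df, "cyl", function(key, part) {
  data.frame(key, max(part$mpg))                      # 'key' is the group value, 'part' its rows
}, schema)
head(collect(maxMpg))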
21. gapply control & data flow
(Diagram: same flow as dapply(), plus a data shuffle over the cluster network to bring each
group's rows together before they are handed to the R workers.)
22. dapply vs. gapply
gapply
- signature: gapply(df, cols, func, schema) or gapply(gdf, func, schema)
- user function signature: function(key, data)
- data partitioning: controlled by grouping
dapply
- signature: dapply(df, func, schema)
- user function signature: function(data)
- data partitioning: not controlled
23. Parallelizing data
• Do not use spark.lapply() to distribute large data sets
• Do not pack data in the closure
• Watch for skew in data
– Are partitions evenly sized?
• Auxiliary data
– Can be joined with the input DataFrame (see the sketch below)
– Can be distributed to all workers through the file system
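A hedged sketch of the join approach for auxiliary data; the logs DataFrame is the one from the SQL slide, and the lookup table's columns are assumptions:
aux <- createDataFrame(data.frame(type = c("error", "warning"),
                                  severity = c(2, 1)))        # assumed small lookup table
logsWithSeverity <- join(logs, aux, logs$type == aux$type, "left_outer")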
24. Packages on workers
• SparkR closure capture does not include packages
• You need to load packages on each worker inside your function
• If they are not installed, install packages on the workers out-of-band
• spark.lapply() can be used to install packages (see the sketch below)
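A hedged sketch of installing a package on the workers with spark.lapply(); the package name and element count are assumptions, and since Spark does not guarantee one task per worker you should run enough tasks to cover them all:
invisible(spark.lapply(1:16, function(i) {
  if (!requireNamespace("forecast", quietly = TRUE)) {         # assumed package
    install.packages("forecast", repos = "https://cloud.r-project.org")
  }
  as.character(packageVersion("forecast"))
}))
Your UDFs still need to call library() themselves, since the closure does not carry loaded packages.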
25. Debugging user code
1. Verify your code on the driver
2. Interactively execute the code on the cluster
– When an R worker fails, the Spark driver throws an exception with the R error text
3. Inspect the failure reason of the failed job in the Spark UI
4. Inspect the stdout/stderr of the workers