2. www.mammothdata.com | @mammothdataco
Lab Overview
● ‘Hello world’ RDD example
● Importing a dataset
● Dataframe operations and visualizations
● Using MLlib on the dataset
Lab — Hello World
● val text = sc.parallelize(Seq("your text here"))
● val words = text.flatMap(line => line.split(" "))
● words.collect
Lab — Hello World
● val taggedWords = words.map(word => (word,1))
● val counts = taggedWords.reduceByKey(_ + _)
● counts.collect()
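The two slides above can be sketched end-to-end in plain Scala (no Spark needed): the RDD operations deliberately mirror the Scala collections API, with a local `groupBy` playing the role of `reduceByKey`. A minimal sketch:

```scala
// Local word count mirroring the RDD pipeline on the slides above.
object WordCountSketch {
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(line => line.split(" "))   // same as text.flatMap on the RDD
      .map(word => (word, 1))             // tag each word with a 1
      .groupBy(_._1)                      // local stand-in for reduceByKey
      .map { case (w, pairs) => (w, pairs.map(_._2).sum) }
}
```

On an RDD the tagged pairs are combined per key across partitions by `reduceByKey(_ + _)`; locally, grouping and summing gives the same result.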
Lab — Dataset
● https://archive.ics.uci.edu/ml/datasets/Wine
● Information on 3 different types of wine from Genoa
● 178 entries (small!)
Lab — K-means Clustering
● K-means clustering is an unsupervised algorithm which splits a
dataset into a number of clusters (k) based on a notion of
similarity between points. It is often applied to real-world data
to obtain a picture of structure hidden in large datasets, for
example, identifying location clusters or breaking down sales
into distinct purchasing groups.
Lab — K-means Clustering
k initial "means" (in this case k=3)
are randomly generated within the
data domain (shown in colour).
Lab — K-means Clustering
The centroid of each of these clusters is found, and these are used as the new means. New clusters are then formed by assigning each data point to its closest new mean, as shown in Step 2. The process is repeated until the means converge (or until we hit our iteration limit).
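The iterative procedure described in these captions can be sketched in plain Scala. This is illustrative only (the lab uses MLlib's KMeans), and it skips the random initialisation step by taking the initial means as a parameter:

```scala
// Illustrative Lloyd's-algorithm loop: assign points to the nearest mean,
// recompute centroids, repeat until convergence or the iteration limit.
object KMeansSketch {
  type Point = Vector[Double]

  def dist2(a: Point, b: Point): Double =
    a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

  def centroid(ps: Seq[Point]): Point =
    ps.transpose.map(_.sum / ps.size).toVector

  def cluster(points: Seq[Point], means: Seq[Point], maxIter: Int): Seq[Point] =
    if (maxIter == 0) means
    else {
      // assign each point to its closest current mean
      val assigned = points.groupBy(p => means.minBy(m => dist2(p, m)))
      // recompute each mean as the centroid of its cluster
      val newMeans = means.map(m => assigned.get(m).map(centroid).getOrElse(m))
      if (newMeans == means) means else cluster(points, newMeans, maxIter - 1)
    }
}
```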
Lab — K-means Clustering: Features
● val featureCols = wines.select("Alcohol", "Hue", "Proline")
● val features = featureCols.rdd.map { case Row(a: Double, h:
Double, p: Double) => Vectors.dense(a,h,p) }
● features.cache
Lab — K-means Clustering: Training Model
● val numClusters = 2
● val numIterations = 20
● val model = KMeans.train(features, numClusters,
numIterations)
Lab — K-means Clustering: Finding k
● k can be any number you like!
● WSSSE - Within Set Sum of Squared Error
● Squared sum of distances between points and their respective
centroid
● val wssse = model.computeCost(features)
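The WSSSE defined above is straightforward to write down directly, as a local sketch of what `model.computeCost` computes in MLlib: for each point, take the squared distance to its nearest centroid, and sum over all points.

```scala
// Local sketch of WSSSE: sum of squared distances from each point
// to its nearest centroid.
object Wssse {
  def wssse(points: Seq[Vector[Double]], centroids: Seq[Vector[Double]]): Double =
    points.map { p =>
      centroids
        .map(c => p.zip(c).map { case (x, y) => (x - y) * (x - y) }.sum)
        .min
    }.sum
}
```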
Lab — K-means Clustering: Finding k
● Test on k = 1 to 5
● (1 to 5 by 1).map (k => KMeans.train(features, k,
numIterations).computeCost(features))
● WSSSE normally decreases as k increases
● Look for the ‘elbow’
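One rough way to read the elbow numerically (a heuristic sketch, not part of the lab code): look at the successive drops in WSSSE as k increases, and pick the k after which the drops flatten out.

```scala
// Heuristic elbow reading: successive WSSSE drops as k increases.
object Elbow {
  // costs(i) is the WSSSE for k = i + 1
  def drops(costs: Seq[Double]): Seq[Double] =
    costs.zip(costs.tail).map { case (a, b) => a - b }
}
```

For example, drops of (60.0, 20.0, 2.0, 1.0) suggest the elbow sits at k = 3: after that, each extra cluster barely reduces the error.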
Lab — K-means Clustering: Training Model
● val numClusters = 1
● val numIterations = 20
● val wssse = KMeans.train(features, numClusters,
numIterations).computeCost(features)
Lab — K-means Clustering: k = 3
● val numClusters = 3
● val numIterations = 10
● val model = KMeans.train(features, numClusters,
numIterations)
Lab — Next Steps
● Looks good, right? Let’s look at what the labels for each point
really are.
● val featureCols = wines.select("Type", "Alcohol", "Hue", "Proline") (re-select to include the type column, assuming it is named "Type")
● val features = featureCols.rdd.map { case Row(t: Double, a: Double, h: Double, p: Double) => (t, Vectors.dense(a,h,p)) }
● val predictions = features.map ( feature => (feature._1,
model.predict(feature._2)))
● val counts = predictions.map (p => (p,1)).reduceByKey(_+_)
● counts.collect
● A slightly different story!
Lab — Next Steps
● k-means clustering - useful! But not perfect!
● Try again with more features in the vector and see if it
improves the clustering.
● Bayes? Random Forests? All in MLlib and with similar interfaces!