This document presents methods for scaling up machine learning algorithms to massive datasets. It describes the Dual Cached Loops scheme, which runs disk input/output (I/O) and computation simultaneously so that neither becomes a bottleneck, and which outperforms existing methods on large, sparse datasets. The document also discusses ongoing work on distributed asynchronous optimization across multiple machines to scale further. The goal is methods that can optimize machine learning models on datasets too large to fit in main memory.
Scaling up Machine Learning Algorithms for Classification
1. Scaling up Machine Learning Algorithms for Classification
Department of Mathematical Informatics
The University of Tokyo
Shin Matsushima
2. How can we scale up Machine Learning to massive datasets?
• Exploit hardware traits
– Disk I/O is the bottleneck
– Dual Cached Loops
– Run disk I/O and computation simultaneously
• Distributed asynchronous optimization (ongoing)
– Current work using multiple machines
19. Attractive property
• Suitable for large-scale learning
– We need only one datapoint for each update.
• Theoretical guarantees
– Linear convergence (cf. SGD)
• Shrinking [Joachims 1999]
– We can eliminate "uninformative" datapoints.
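The dual objective itself is not written out on this slide, but the "one datapoint per update" property is easiest to see in code. Below is a minimal sketch of a single dual coordinate descent update for the L1-loss linear SVM, the style of solver these properties refer to; the variable names and the dense NumPy representation are illustrative only.

```python
import numpy as np

def dcd_update(w, x_i, y_i, alpha_i, C):
    """One dual coordinate descent update: only datapoint (x_i, y_i) is touched."""
    grad = y_i * np.dot(w, x_i) - 1.0                    # gradient of the dual w.r.t. alpha_i
    alpha_new = min(max(alpha_i - grad / np.dot(x_i, x_i), 0.0), C)  # project onto [0, C]
    w = w + (alpha_new - alpha_i) * y_i * x_i            # maintain w = sum_j alpha_j y_j x_j
    return w, alpha_new
```

Shrinking then simply means skipping, for a while, datapoints whose dual variable is stuck at the bound and whose gradient shows they will stay there.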
22. Problem in scaling up to massive data
• In dealing with small-scale data, we first copy the entire dataset into main memory
• In dealing with large-scale data, we cannot copy the entire dataset into memory at once
[Diagram: data is read from disk into main memory]
23. Schemes when data cannot fit in memory
1. Block Minimization [Yu et al. 2010]
– Split the entire dataset into blocks so that each block can fit in memory
– Alternate between reading one block from disk and training on it in RAM
[Animation: Read a block (disk) → Train on it (RAM), repeated block by block]
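As a rough sketch of the scheme above (not the authors' code): the dataset is pre-split into blocks, and the solver alternates between loading one block from disk and training on it. The helpers `load_block` and `train_on_block` are hypothetical placeholders.

```python
def block_minimization(block_files, num_passes, load_block, train_on_block):
    # Block Minimization [Yu et al. 2010], schematically:
    # each block fits in RAM, but only one block is resident at a time.
    state = {path: None for path in block_files}   # per-block dual variables
    w = None                                        # shared primal parameter
    for _ in range(num_passes):
        for path in block_files:
            block = load_block(path)                              # disk I/O: CPU idle
            w, state[path] = train_on_block(block, w, state[path])  # CPU: disk idle
    return w
```

The inner loop makes the weakness explicit: while a block is being read the CPU does nothing, and while a block is being trained on the disk does nothing.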
35. • Previous schemes alternate between CPU and disk I/O
– Training (CPU) is idle while reading
– Reading (disk I/O) is idle while training
36. • We want to exploit modern hardware
1. Multicore processors are commonplace
2. CPU (memory I/O) is often 10-100 times faster than hard disk I/O
37. Dual Cached Loops
1. Make the reader and the trainer run simultaneously and almost asynchronously.
2. The trainer updates the parameters many times faster than the reader loads new datapoints.
3. Keep informative data in main memory (i.e., primarily evict uninformative data from main memory).
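A minimal two-thread sketch of points 1-3 (illustrative only; the cache object, its eviction policy, and `trainer_step` are assumptions, not the actual implementation):

```python
import threading

def dual_cached_loops(disk_stream, cache, trainer_step, stop_event):
    def reader():
        # Disk loop: stream new datapoints into the in-memory cache;
        # when the cache is full, uninformative points are evicted first.
        for point in disk_stream:
            if stop_event.is_set():
                return
            cache.insert(point)

    def trainer():
        # CPU loop: update the parameters on cached points, running many
        # updates in the time the reader needs to load one new point.
        while not stop_event.is_set():
            trainer_step(cache.sample())

    threads = [threading.Thread(target=reader), threading.Thread(target=trainer)]
    for t in threads:
        t.start()
    threads[0].join()        # reader has finished streaming from disk
    stop_event.set()         # then stop the trainer as well
    threads[1].join()
```

Because the two loops share only the cache, neither the disk nor the CPU ever waits for the other.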
42. Which data is “uninformative”?
• A datapoint far from the current decision
boundary is unlikely to become a support vector
• Ignore the datapoint for a while.
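In code, one simple version of this criterion could look like the following; the margin slack `delta` is an assumption for illustration, not a value from the slides.

```python
def is_uninformative(w, x, y, delta=1.0):
    # A point whose functional margin y * <w, x> is well beyond 1 lies far
    # from the current decision boundary and is unlikely to become a support
    # vector, so it can be evicted from the cache and ignored for a while.
    return y * w.dot(x) > 1.0 + delta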
54. • Expanding features on the fly
– Expand features explicitly when the reader thread loads an example into memory.
• Read (y, x) from the disk
• Compute f(x) and load (y, f(x)) into RAM
[Diagram: x = GTCCCACCT… is read from disk; f(x) ∈ R^12495340 is computed and loaded into RAM]
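A sketch of this reader-side expansion, assuming a hypothetical k-mer indicator map for sequence data (the actual feature map f is not specified in these slides, and `cache.insert` is a placeholder):

```python
def kmer_features(x, k=8):
    # Hypothetical explicit expansion: represent a sequence by the sparse
    # set of k-mers it contains (very high-dimensional, mostly zeros).
    return {x[i:i + k]: 1.0 for i in range(len(x) - k + 1)}

def reader_with_expansion(lines, cache, feature_map=kmer_features):
    for line in lines:
        y, x = line.split(maxsplit=1)                     # read (y, x) from disk
        cache.insert((int(y), feature_map(x.strip())))    # load (y, f(x)) into RAM
```

Expanding in the reader keeps only the (much larger) expanded representation in RAM, while the compact raw representation stays on disk.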
56. • Summary
– Linear SVM optimization when data cannot fit in memory
– Use the Dual Cached Loops scheme
– Outperforms the state of the art by orders of magnitude
– Can be extended to
• Logistic regression
• Support vector regression
• Multiclass classification
58. Future/Current Work
• Utilize the same principle as Dual Cached Loops in a multi-machine algorithm
– Data can be transported efficiently without harming optimization performance
– The key is to run communication and computation simultaneously and asynchronously
– Can we handle the more sophisticated communication patterns emerging in multi-machine optimization?
59. • (Selective) Block Minimization scheme for large-scale SVM
[Diagram: move data from the HDD/file system to one machine; process the optimization on that one machine]
60. • Map-Reduce scheme for multi-machine algorithms
[Diagram: move parameters between a master node and worker nodes; process the optimization on the worker nodes]
75. Asynchronous multi-machine scheme
• Each machine holds a subset of the data
• The machines keep communicating portions of the parameters to each other
• Simultaneously, each machine keeps updating the parameters it possesses
76. • Distributed stochastic gradient descent for saddle point problems
– Another formulation of the SVM (and of regularized risk minimization in general)
– Suitable for parallelization
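The slides do not write out the saddle point problem; one standard saddle-point reformulation of the hinge-loss SVM (using the fact that the hinge loss equals a maximum over a dual variable in [0, 1]) is:

```latex
\min_{w}\; \max_{\alpha \in [0,1]^n}\;
\frac{\lambda}{2}\,\lVert w \rVert^{2}
+ \frac{1}{n} \sum_{i=1}^{n} \alpha_i \left( 1 - y_i\, w^{\top} x_i \right)
```

Each term couples w with only one α_i, so machines holding disjoint subsets of the data can run stochastic updates on their own α_i while exchanging only portions of w, which is one reason this form is suitable for parallelization.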
77. How can we scale up Machine Learning to massive datasets?
• Exploit hardware traits
– Disk I/O is the bottleneck
– Run disk I/O and computation simultaneously
• Distributed asynchronous optimization (ongoing)
– Current work using multiple machines