
Challenges on Distributed Machine Learning

Presentation on Petuum (CMU, Eric Xing)


  1. Challenges on Data Parallelism and Model Parallelism. Presenter: Jie Cao. Based on: Petuum: A New Platform for Distributed Machine Learning on Big Data (CMU, Eric Xing).
  2. Outline: Background; Data Parallelism & Model Parallelism (3 properties); Error Tolerance: Stale Synchronous Parallel; Dynamic Structure Dependence: Dynamic Schedule; Non-uniform Convergence: Priority-based and Block-based scheduling.
  3. Current Solutions. 1. Implementations of specific ML algorithms: YahooLDA; Vowpal Wabbit (fast learning algorithm collection, Yahoo -> Microsoft); Caffe (deep learning framework). 2. Platforms for general-purpose ML: Hadoop; Spark (Spark + parameter server, Spark + Caffe); GraphLab. 3. Other systems: Parameter Server; Petuum. 4. Specialized hardware acceleration: GPUs, FPGA-based accelerators, "DianNao" for large NNs. Petuum's approach: 1. systematically analyze ML models and tools; 2. find their common properties; 3. build a "workhorse" engine that solves entire families of models.
  4. The Key Statement!
  5. Many probabilistic models and algorithms are likewise based on iterative convergence.
  6. Data Parallelism & Model Parallelism
  7. Data Partitioning: i.i.d. assumption
  8. Parallel Stochastic Gradient Descent [Zinkevich et al., 2010]: partition the data across workers; each worker runs SGD on a local copy of ALL parameters; the local copies are then aggregated into a single update of ALL parameters. (Diagram: input data is split across workers; each worker updates its local copy of all params; the copies are aggregated.)
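A minimal single-process sketch of this scheme on a toy least-squares problem (`local_sgd` and `parallel_sgd` are illustrative names, not Petuum APIs):

```python
import numpy as np

def local_sgd(X, y, w0, lr=0.01, epochs=50):
    """Plain SGD on one worker's data shard, starting from w0."""
    w = w0.copy()
    for _ in range(epochs):
        for i in range(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]   # squared-loss gradient
            w -= lr * grad
    return w

def parallel_sgd(X, y, n_workers=4, lr=0.01, epochs=50, seed=0):
    """Zinkevich-style PSGD: shard the data, run SGD independently on a
    local copy of ALL parameters in each worker, then aggregate by
    averaging the workers' parameter vectors."""
    rng = np.random.default_rng(seed)
    w0 = np.zeros(X.shape[1])
    shards = np.array_split(rng.permutation(len(y)), n_workers)
    locals_ = [local_sgd(X[idx], y[idx], w0, lr, epochs) for idx in shards]
    return np.mean(locals_, axis=0)   # aggregate: average ALL params

# toy problem with true weights [2, -1]
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = X @ np.array([2.0, -1.0])
w = parallel_sgd(X, y)
```

Because each shard is drawn i.i.d. from the same noise-free problem, every local copy converges toward the same optimum, so the average is close to [2, -1].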
  9. Challenges in Data Parallelism. Existing approaches are either safe but slow (BSP) or fast but risky (Async). Need "partial" synchronicity, i.e. Bounded Async Parallelism (BAP): spread network communication evenly (don't sync unless needed); threads usually shouldn't wait, but mustn't drift too far apart. Need straggler tolerance: slow threads must somehow catch up. -> Error Tolerance
  10. Challenge 1 in Model Parallelism: model dependence. Parallel updates are only effective if the variables are independent (or weakly correlated); the parameters to update must be carefully chosen; scheduling must be dependency-aware. References: On Model Parallelism and Scheduling Strategies for Distributed Machine Learning (NIPS 2014); Parallel Coordinate Descent for L1-Regularized Loss Minimization (ICML 2011); Feature Clustering for Accelerating Parallel Coordinate Descent (NIPS 2012); A Framework for Machine Learning and Data Mining in the Cloud (PVLDB 2012).
  11. Challenge 2 in Model Parallelism: Non-uniform Convergence
  12. Petuum: Bösen (parameter server) and Strads (scheduler)
  13. Parameter Servers: memcached-style; YahooLDA (best-effort). Petuum: Bösen. Parameter Server (OSDI 2014): flexible, customizable consistency models; the developer decides. A desirable consistency model: 1) the correctness of the distributed algorithm can be theoretically proven; 2) the computing power of the system is fully utilized.
  14. Consistency Models (cf. classic consistency models in databases). BSP (Bulk Synchronous Parallel; Valiant, 1990): correct but slow; used by Hadoop (MapReduce), Spark (RDD), and GraphLab (careful graph coloring or locking). Best-effort: fast but with no theoretical guarantee; YahooLDA. Async: Hogwild! (NIPS 2011), a lock-free approach to parallel SGD; it requires the optimization problem to be sparse (most gradient updates modify only a small part of the decision variable) and cannot guarantee correctness otherwise.
  15. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server (NIPS 2013)
  16. Bounded Updates
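The bounded-staleness rule behind SSP can be sketched as a tiny clock structure (illustrative only; Bösen's real interface and cache semantics differ):

```python
class SSPClock:
    """Sketch of the Stale Synchronous Parallel condition: a worker may
    keep computing only while the slowest worker's clock is no more than
    `staleness` clocks behind it; otherwise it must wait for stragglers."""

    def __init__(self, n_workers, staleness):
        self.clocks = [0] * n_workers
        self.staleness = staleness

    def tick(self, worker):
        # worker finished one iteration and committed its updates
        self.clocks[worker] += 1

    def may_proceed(self, worker):
        # bounded staleness: don't run ahead of the slowest worker
        # by more than `staleness` clocks
        return self.clocks[worker] - min(self.clocks) <= self.staleness

clk = SSPClock(n_workers=3, staleness=2)
clk.tick(0); clk.tick(0)              # worker 0 is 2 clocks ahead
fast_ok = clk.may_proceed(0)          # still within the staleness bound
clk.tick(0)                           # now 3 clocks ahead of the slowest
fast_blocked = not clk.may_proceed(0) # must wait for stragglers
```

With staleness 0 this degenerates to BSP; with staleness unbounded it becomes fully asynchronous, which is exactly the BSP/Async trade-off the slides describe.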
  17. Analysis of High-Performance Distributed ML at Scale through Parameter Server Consistency Models (AAAI 2015); Distributed Delayed Stochastic Optimization (NIPS 2011); Slow Learners are Fast (NIPS 2009)
  18. Three Conceptual Actions. Schedule specifies the next subset of model variables to be updated in parallel (e.g. a fixed sequence, or random); improved schedules 1. favor the fastest-converging variables while avoiding already-converged ones, and 2. avoid inter-dependent variables. Push specifies how individual workers compute partial results on those variables. Pull specifies how those partial results are aggregated to perform the full variable update. Sync: built-in primitives (BSP, SSP, AP). -> Dynamic structural dependency
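The three actions can be sketched as a driver loop over a toy separable objective (hypothetical function names; STRADS's real API is richer and distributed):

```python
import random

def schedule(param_ids, rng, k=2):
    """Pick the next subset of variables to update in parallel.
    Here: uniform random; an improved scheduler would favor
    fast-converging, weakly-dependent variables."""
    return rng.sample(param_ids, k)

def push(shard, chosen, params):
    """One worker's partial result: partial gradients of
    sum_x sum_j (params[j] - x[j])^2 over its own data shard."""
    return {j: sum(2.0 * (params[j] - x[j]) for x in shard) for j in chosen}

def pull(partials, params, lr=0.1):
    """Aggregate all workers' partial results into the full update."""
    for partial in partials:
        for j, g in partial.items():
            params[j] -= lr * g

rng = random.Random(0)
shards = [[{0: 1.0, 1: 2.0}], [{0: 3.0, 1: 4.0}]]  # two workers' data
params = {0: 0.0, 1: 0.0}
for _ in range(100):
    chosen = schedule(sorted(params), rng)
    partials = [push(shard, chosen, params) for shard in shards]
    pull(partials, params)
# each params[j] converges to the mean of feature j over all data points
```

The point of the decomposition is that the same push/pull worker code runs unchanged whether schedule is a fixed sequence, random, or dependency- and convergence-aware.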
  19. Not automatically structure-aware: the following three example algorithms on the STRADS framework implement schedule, push, and pull via manual analysis and a customized implementation.
  20. Word rotation scheduling. 1. Partition the V dictionary words into U disjoint subsets V_1, ..., V_U (where U is the number of workers). 2. Subsequent invocations of schedule "rotate" the subsets amongst workers, so that every worker touches all U subsets every U invocations. 3. For data partitioning, divide the document tokens W evenly across workers, denoting worker p's set of tokens by W_p. All z_ij are sampled exactly once after every U invocations of schedule.
  21. (Diagram) Workers 1..N each keep their documents in place (data is not moved; the words in each worker's documents span A, B, C, E, F, G, H, Z), while the dictionary subsets V_1, ..., V_U rotate across the workers.
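The rotation itself can be sketched in a few lines (a hypothetical helper; real STRADS LDA rotates topic-count tables for the word subsets, not raw strings):

```python
from itertools import islice

def word_rotation_schedule(vocab, n_workers):
    """Split the vocabulary into U = n_workers disjoint subsets; on
    invocation t, worker p receives subset V_{(p + t) mod U}. Every
    invocation partitions the vocabulary across workers (no word is
    touched by two workers at once), and every worker touches all U
    subsets every U invocations."""
    U = n_workers
    subsets = [vocab[i::U] for i in range(U)]  # disjoint V_1..V_U
    t = 0
    while True:
        yield [subsets[(p + t) % U] for p in range(U)]
        t += 1

vocab = list("ABCDEF")
rounds = list(islice(word_rotation_schedule(vocab, 3), 3))
```

After U = 3 invocations, worker 0 has held every subset exactly once, which is why every z_ij gets sampled exactly once per full rotation.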
  22. STRADS Performance
  23. Non-uniform convergence: prioritization has not received as much attention in ML. Two main proposed methods: 1. priority-based scheduling; 2. block-based scheduling with load balancing (Fugue).

  24. Shotgun-Lasso with random / round-robin selection: rapidly changing parameters are updated more frequently than others.
  25. Priority-Based
  26. Priority-based Schedule. 1. Rewrite the objective function by duplicating the original features with opposite sign: X then contains 2J features (each original feature and its negation), so all coefficients can be kept non-negative.
  27. This works for latent space models, but does not apply to all possible ML models: 1. graphical models and deep networks can have arbitrary structure between parameters and variables; 2. problems on time-series data have sequential or autoregressive dependencies between data points.
  28. Fugue: Slow-Worker-Agnostic Distributed Learning for Big Models on Big Data
  29. Thanks