
Sequence-to-Sequence Modeling for Time Series

Sequence-to-sequence (seq2seq) modeling is now being used for applications based on time series data. This talk gives an overview of seq2seq modeling and its early use cases, and then walks the audience through how to leverage seq2seq modeling for two concrete use cases: real-time anomaly detection and forecasting.



  1. Sequence-2-Sequence Learning for Time Series Forecasting (Arun Kejariwal)
  2. ABOUT US
  3. TIME SERIES FORECASTING: meteorology, machine translation, operations, transportation, econometrics, marketing and sales, finance, speech synthesis
  4. AN EXAMPLE (figure borrowed from Brockwell and Davis)
  5. STRUCTURAL CHARACTERISTICS: trend + seasonality, changepoints, anomalies and extreme values, heteroscedasticity (figure borrowed from Hyndman et al. 2015)
  6. FLAVORS OF TIME SERIES FORECASTING (figure borrowed from Tao et al. 2018)
  7. LONG HISTORY OF RESEARCH [BOOKS]: [Faullkner, Comstock, Fossum] [Craw] [Brockwell, Davis] [Chatfield] [Bowerman, O'Connell, Koehler] [Granger, Newbold]
  8. [Gilchrist] [Hyndman, Athanasopoulos] [Box et al.] [Wilson, Keating] [Makridakis et al.] [Mallios] [Montgomery et al.] [Pankratz]
  9. WHY DEEP LEARNING?
  10. PROPERTIES: Seasonality (multiple levels: weekly, monthly, yearly; or non-seasonal, i.e. aperiodic); Stationarity (time-varying mean and variance, i.e. heteroskedasticity; exogenous shocks); Structural (unevenly spaced, missing data, anomalies, changepoints, small sample size, skewness, kurtosis, chaos, noise); Trend (growth, virality via network effects, non-linearity)
  11. TEMPORAL CREDIT ASSIGNMENT (TCA)
  12. DEEP LEARNING UBIQUITOUS
  13. S2S [2014] (see http://karpathy.github.io/2015/05/21/rnn-effectiveness/). (A minimal seq2seq forecasting sketch appears after the slide list.)
  14. BACKPROPAGATION THROUGH TIME
  15. BACKPROPAGATION THROUGH TIME (figure borrowed from Lillicrap and Santoro, 2019)
  16. BACKPROPAGATION THROUGH TIME [EARLY WORK] [1986] [1986] [1990] [1990]. (A truncated-BPTT sketch appears after the slide list.)
  17. REAL-TIME RECURRENT LEARNING (RTRL): A Learning Algorithm for Continually Running Fully Recurrent Neural Networks [Williams and Zipser, 1989]; A Method for Improving the Real-Time Recurrent Learning Algorithm [Catfolis, 1993]. (The RTRL recursion is written out after the slide list.)
  18. APPROXIMATE RTRL. UORO (Unbiased Online Recurrent Optimization): works in a streaming fashion; online, memoryless; avoids backtracking through past activations and inputs; low-rank approximation to forward-mode automatic differentiation; reduced computation and storage. KF-RTRL (Kronecker Factored RTRL): Kronecker product decomposition to approximate the gradients; reduces noise in the approximation; asymptotically, smaller by a factor of n; memory requirement equivalent to UORO; higher computation than UORO; not applicable to arbitrary architectures. (References: Unbiased Online Recurrent Optimization [Tallec and Ollivier, 2017]; Approximating Real-Time Recurrent Learning with Random Kronecker Factors [Mujika et al. 2018].)
  19. ARCHITECTURE TYPES OF RNNs: memory based, attention based
  20. MEMORY-BASED RNN ARCHITECTURES: BRNN: Bi-directional RNN [Schuster and Paliwal, 1997]; GLU: Gated Linear Unit [Dauphin et al. 2016]; LSTM: Long Short-Term Memory [Hochreiter and Schmidhuber, 1996]; GRU: Gated Recurrent Unit [Cho et al. 2014]; GHN: Gated Highway Network [Zilly et al. 2017]
  21. LSTM [Neural Computation, 1997] (figure borrowed from http://colah.github.io/posts/2015-08-Understanding-LSTMs/): (a) forget gate, (b) input gate, (c) output gate; St: hidden state. "The LSTM's main idea is that, instead of computing St from St-1 directly with a matrix-vector product followed by a nonlinearity, the LSTM directly computes ΔSt, which is then added to St-1 to obtain St." [Jozefowicz et al. 2015] Resistant to the vanishing gradient problem; achieves better results when dropout is used; adding a bias of 1 to the LSTM's forget gate helps. (A NumPy LSTM-step sketch appears after the slide list.)
  22. LONG CREDIT ASSIGNMENT PATHS (Recurrent Highway Networks): stacking d RNNs vs. recurrence depth d; incorporates highway layers inside the recurrent transition; highway layers in RHNs perform adaptive computation; H, T, C: non-linear transforms (transform and carry gates); regularization: variational-inference-based dropout (figure borrowed from Zilly et al. 2017). (The highway transformation is written out after the slide list.)
  23. NEW FLAVORS OF RNNs (figure borrowed from https://distill.pub/2016/augmented-rnns/)
  24. What caught your eye at first glance?
  25. And this one? (figure borrowed from Golub et al. 2012)
  26. Psychology, Neuroscience, Cognitive Sciences: span of absolute judgement [1956] [1959] [1974]
  27. ATTENTION [2014] [2017]
  28. ATTENTION MECHANISM (figure borrowed from https://distill.pub/2016/augmented-rnns/)
  29. ATTENTION MECHANISM (figure borrowed from Lillicrap and Santoro, 2019)
  30. ATTENTION: content based, location based
  31. ATTENTION FAMILY. Self: relates different positions of a single sequence in order to compute a representation of that same sequence; also referred to as intra-attention. Global vs. local: global alignment weights a_t are inferred from the current target state and all the source states; local alignment weights a_t are inferred from the current target state and only the source states in a window. Soft vs. hard: soft alignment weights are learned and placed "softly" over all patches in the source image; hard attention selects only one patch of the image to attend to at a time. (A soft-attention sketch appears after the slide list.)
  32. ATTENTION-BASED MODELS [A SNAPSHOT]: Sparse Attentive Backtracking [Ke et al. 2018]; Hierarchical Attention-Based RHN [Tao et al. 2018]; Long Short-Term Memory-Networks [Cheng et al. 2016]; Self-Attention GAN [Zhang et al. 2018]
  33. HIERARCHICAL ATTENTION-BASED RECURRENT HIGHWAY NETWORK (figure borrowed from Tao et al. 2018)
  34. SPARSE ATTENTIVE BACKTRACKING: TCA THROUGH REMINDING. Inspired by the cognitive analogy of reminding; designed to retrieve one or very few past states; incorporates a differentiable, sparse (hard) attention mechanism to select from past states (figure borrowed from Ke et al. 2018)
  35. HEALTH CARE (figure borrowed from Song et al. 2018): multi-head attention with additional masking to enable causality; inference: diagnoses, length of stay, future illness, mortality; temporal ordering: positional encoding and dense interpolation embedding; multi-variate: sensor measurements, test results; irregular sampling, missing values and measurement errors; heterogeneous, presence of long-range dependencies
  36. Thank you
  37. READINGS [BOOKS]: [Rosenblatt] Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms; [Eds. Anderson and Rosenfeld] Neurocomputing: Foundations of Research; [Eds. Rumelhart and McClelland] Parallel Distributed Processing; [Werbos] The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting; [Eds. Chauvin and Rumelhart] Backpropagation: Theory, Architectures and Applications; [Rojas] Neural Networks: A Systematic Introduction
  38. READINGS [EARLY WORKS]: Perceptrons [Minsky and Papert, 1969]; Une procedure d'apprentissage pour reseau a seuil assymetrique [Le Cun, 1985]; The problem of serial order in behavior [Lashley, 1951]; Beyond regression: New tools for prediction and analysis in the behavioral sciences [Werbos, 1974]; Connectionist models and their properties [Feldman and Ballard, 1982]; Learning-logic [Parker, 1985]
  39. READINGS [BACKPROPAGATION]: Learning internal representations by error propagation [Rumelhart, Hinton, and Williams, Chapter 8 in D. Rumelhart and J. McClelland, Eds., Parallel Distributed Processing, Vol. 1, 1986] (Generalized Delta Rule); Generalization of backpropagation with application to a recurrent gas market model [Werbos, 1988]; Generalization of backpropagation to recurrent and higher order networks [Pineda, 1987]; Backpropagation in perceptrons with feedback [Almeida, 1987]; Second-order backpropagation: Implementing an optimal O(n) approximation to Newton's method in an artificial neural network [Parker, 1987]; Learning phonetic features using connectionist networks: an experiment in speech recognition [Watrous and Shastri, 1987] (Time-delay NN)
  40. READINGS [BACKPROPAGATION]: Backpropagation: Past and future [Werbos, 1988]; Adaptive state representation and estimation using recurrent connectionist networks [Williams, 1990]; Generalization of back propagation to recurrent and higher order neural networks [Pineda, 1988]; Learning state space trajectories in recurrent neural networks [Pearlmutter, 1989]; Parallelism, hierarchy, scaling in time-delay neural networks for spotting Japanese phonemes/CV-syllables [Sawai et al. 1989]; The role of time in natural intelligence: implications for neural network and artificial intelligence research [Klopf and Morgan, 1990]
  41. READINGS [REGULARIZATION OF RNNs]: Recurrent Neural Network Regularization [Zaremba et al. 2014]; Regularizing RNNs by Stabilizing Activations [Krueger and Memisevic, 2016]; Sampling-based Gradient Regularization for Capturing Long-Term Dependencies in Recurrent Neural Networks [Chernodub and Nowicki, 2016]; A Theoretically Grounded Application of Dropout in Recurrent Neural Networks [Gal and Ghahramani, 2016]; Noisin: Unbiased Regularization for Recurrent Neural Networks [Dieng et al. 2018]; State-Regularized Recurrent Neural Networks [Wang and Niepert, 2019]
  42. READINGS [ATTENTION & TRANSFORMERS]: A Decomposable Attention Model for Natural Language Inference [Parikh et al. 2016]; Hybrid Computing Using A Neural Network With Dynamic External Memory [Graves et al. 2017]; Image Transformer [Parmar et al. 2018]; Universal Transformers [Dehghani et al. 2019]; The Evolved Transformer [So et al. 2019]; Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context [Dai et al. 2019]
  43. READINGS [TIME SERIES PREDICTION]: Financial Time Series Prediction using Hybrids of Chaos Theory, Multi-layer Perceptron and Multi-objective Evolutionary Algorithms [Ravi et al. 2017]; Model-free Prediction of Noisy Chaotic Time Series by Deep Learning [Yeo, 2017]; DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks [Salinas et al. 2017]; Real-Valued (Medical) Time Series Generation With Recurrent Conditional GANs [Hyland et al. 2017]; R2N2: Residual Recurrent Neural Networks for Multivariate Time Series Forecasting [Goel et al. 2017]; Temporal Pattern Attention for Multivariate Time Series Forecasting [Shih et al. 2018]
  44. READINGS [POTPOURRI]: Unbiased Online Recurrent Optimization [Tallec and Ollivier, 2017]; Approximating Real-Time Recurrent Learning with Random Kronecker Factors [Mujika et al. 2018]; Theory and Algorithms for Forecasting Time Series [Kuznetsov and Mohri, 2018]; Foundations of Sequence-to-Sequence Modeling for Time Series [Kuznetsov and Mariet, 2018]; On the Variance of Unbiased Online Recurrent Optimization [Cooijmans and Martens, 2019]; Backpropagation through time and the brain [Lillicrap and Santoro, 2019]
  45. RESOURCES: http://colah.github.io/posts/2015-08-Understanding-LSTMs/; http://karpathy.github.io/2015/05/21/rnn-effectiveness/; A review of Dropout as applied to RNNs: https://medium.com/@bingobee01/a-review-of-dropout-as-applied-to-rnns-72e79ecd5b7b; https://distill.pub/2016/augmented-rnns/; https://distill.pub/2019/memorization-in-rnns/; https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html; Using the latest advancements in deep learning to predict stock price movements: https://towardsdatascience.com/aifortrading-2edd6fac689d; How to Use Weight Regularization with LSTM Networks for Time Series Forecasting: https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
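
The sketches below are not part of the original deck; they are minimal, hedged illustrations of techniques named on the slides, with all code names, shapes, and hyperparameters chosen for illustration. First, for slide 13 (S2S): a toy encoder-decoder forecaster in PyTorch that encodes a window of past observations and decodes a multi-step forecast autoregressively.

```python
# Minimal seq2seq (encoder-decoder) forecaster sketch in PyTorch.
# Illustrative only: model and variable names are assumptions, not from the talk.
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    def __init__(self, hidden_size=32, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.decoder = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.proj = nn.Linear(hidden_size, 1)

    def forward(self, history):
        # history: (batch, T, 1) past observations
        _, state = self.encoder(history)      # summarize the past into a state
        dec_in = history[:, -1:, :]           # seed the decoder with the last observation
        outputs = []
        for _ in range(self.horizon):         # autoregressive decoding
            out, state = self.decoder(dec_in, state)
            y = self.proj(out)                # one-step-ahead prediction
            outputs.append(y)
            dec_in = y                        # feed the prediction back in
        return torch.cat(outputs, dim=1)      # (batch, horizon, 1)

model = Seq2SeqForecaster()
past = torch.randn(8, 48, 1)                  # 8 toy series, 48 past steps
forecast = model(past)                        # (8, 12, 1)
loss = nn.MSELoss()(forecast, torch.randn(8, 12, 1))
loss.backward()
```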
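For slides 14-16 (backpropagation through time): a truncated-BPTT training loop, in which the RNN is unrolled over a fixed window, gradients flow back only through that window, and the hidden state is detached before the next window. The window length, model, and toy data are assumptions for illustration.

```python
# Truncated backpropagation through time (BPTT) sketch; details are illustrative.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=1, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)

series = torch.sin(torch.linspace(0, 50, 1000)).reshape(1, -1, 1)  # toy series
window = 40                                                        # truncation length
h = torch.zeros(1, 1, 16)

for start in range(0, series.size(1) - window - 1, window):
    x = series[:, start:start + window, :]            # inputs for this window
    y = series[:, start + 1:start + window + 1, :]    # next-step targets
    out, h = rnn(x, h)                                # unroll over the window
    loss = nn.MSELoss()(head(out), y)
    opt.zero_grad()
    loss.backward()                                   # gradients flow back `window` steps
    opt.step()
    h = h.detach()                                    # truncate: stop gradients at the window edge
```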
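For slides 17-18 (RTRL, UORO, KF-RTRL): the forward-mode recursion that RTRL maintains online, written as a sketch with notation that is not taken from the deck. UORO replaces the full sensitivity matrix with an unbiased rank-one estimate, which is what makes it online and memoryless in the sense described on slide 18.

```latex
% RTRL sketch: with state update h_t = F(x_t, h_{t-1}, \theta),
% maintain the sensitivity matrix G_t forward in time instead of backtracking:
\[
G_t \;=\; \frac{\partial h_t}{\partial \theta}
     \;=\; \frac{\partial F}{\partial h_{t-1}}(x_t, h_{t-1}, \theta)\, G_{t-1}
     \;+\; \frac{\partial F}{\partial \theta}(x_t, h_{t-1}, \theta),
\qquad
\frac{\partial \ell_t}{\partial \theta} \;=\; \frac{\partial \ell_t}{\partial h_t}\, G_t .
\]
% UORO keeps an unbiased rank-one estimate \tilde{h}_t \tilde{\theta}_t^\top \approx G_t,
% reducing memory from O(|h|\cdot|\theta|) to O(|h|+|\theta|) at the cost of gradient noise.
```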
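For slide 21 (LSTM): a single LSTM step in NumPy that makes the gate structure and the additive cell-state update behind the Jozefowicz et al. quote explicit. Weight shapes, initialization, and the forget-gate bias of 1 follow the slide's bullet points but are otherwise illustrative.

```python
# One LSTM step in NumPy: forget/input/output gates plus an additive cell-state update.
# Weight shapes and initialization are illustrative assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """x: (n_in,), h_prev/c_prev: (n_hid,), W: (4*n_hid, n_in+n_hid), b: (4*n_hid,)."""
    z = W @ np.concatenate([x, h_prev]) + b
    n = h_prev.shape[0]
    f = sigmoid(z[0*n:1*n])          # forget gate
    i = sigmoid(z[1*n:2*n])          # input gate
    o = sigmoid(z[2*n:3*n])          # output gate
    g = np.tanh(z[3*n:4*n])          # candidate cell state
    c = f * c_prev + i * g           # additive update: new information is added to the old state
    h = o * np.tanh(c)               # hidden state exposed to the rest of the network
    return h, c

n_in, n_hid = 3, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)
b[:n_hid] = 1.0                      # bias of 1 on the forget gate, as noted on the slide
h = c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)): # run a few steps
    h, c = lstm_step(x, h, c, W, b)
```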
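For slide 22 (recurrent highway networks): the highway transformation placed inside the recurrent transition, with H the non-linear transform and T, C the transform and carry gates. The notation loosely follows Zilly et al. 2017 and is only a sketch.

```latex
% Highway layer used inside the recurrent transition of an RHN (sketch):
\[
y \;=\; H(x, W_H)\cdot T(x, W_T) \;+\; x \cdot C(x, W_C),
\qquad \text{often with } C = 1 - T,
\]
% so each layer adaptively mixes a transformed input H(x) with the untransformed carry x,
% which is the "adaptive computation" referred to on the slide.
```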
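For slides 28-31 (attention): a soft, content-based attention sketch in NumPy using scaled dot products; the softmax weights play the role of the alignment weights a_t on slide 31. This is a generic illustration, not the exact mechanism of any one cited paper.

```python
# Soft, content-based attention sketch: scaled dot-product weights over source states.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    """query: (d,), keys/values: (T, d); returns context (d,) and weights (T,)."""
    scores = keys @ query / np.sqrt(query.shape[0])  # content-based similarity
    weights = softmax(scores)                        # soft alignment weights a_t
    context = weights @ values                       # weighted sum of source states
    return context, weights

rng = np.random.default_rng(0)
source_states = rng.normal(size=(10, 16))            # e.g. encoder states over 10 time steps
target_state = rng.normal(size=16)                   # current decoder state
context, weights = attend(target_state, source_states, source_states)
```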
