Background Reservoir Computing Liquid State Machines Current and Future Research Summary
Reservoir Computing with Emphasis on Liquid
State Machines
Alex Klibisz
University of Tennessee
aklibisz@gmail.com
November 28, 2016
Context and Motivation
• Traditional ANNs are useful for non-linear problems, but
struggle with temporal problems.
• Recurrent neural networks show promise for temporal
problems, but the models are complex, difficult, and
expensive to train.
• Reservoir computing provides a model of neural
network/microcircuit for solving temporal problems with much
simpler training.
Artificial Neural Networks
• Useful for learning non-linear mappings f(x_i) = y_i.
• The input layer takes vectorized input.
• Hidden layers transform the input.
• The output layer indicates something meaningful (e.g. a binary
class or a distribution over classes).
• Trained by feeding in many examples to minimize some
objective function.
(Figure source: Wikimedia)
Feed-forward Neural Networks
• Information passed in one direction from input to output.
• Each neuron has a weight w for each of its inputs and a single
bias b.
• Weights, bias, and input are used to compute the output:
output = 1 / (1 + exp(−Σ_j w_j x_j − b)).
• Outputs evaluated by objective function (e.g. classification
accuracy).
• Backpropagation algorithm adjusts w and b to minimize the
objective function.
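As an illustrative sketch (not from the slides), the output formula above can be computed directly:

```python
import math

def neuron_output(inputs, weights, bias):
    # output = 1 / (1 + exp(-(sum_j w_j * x_j + b))), i.e. the sigmoid
    # of the weighted input sum plus bias.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# With zero weights and zero bias the sigmoid sits at its midpoint.
print(neuron_output([1.0, 0.5], [0.0, 0.0], 0.0))  # → 0.5
```

Backpropagation then nudges each `w` and `b` in the direction that reduces the objective function.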
Recurrent Neural Networks
Figure: The network state and resulting output change with time.
• Some data have temporal dependencies across inputs (e.g.
time series, video, text, speech, movement).
• FFNNs assume inputs are independent and so fail to capture these dependencies.
• Recurrent neural nets capture temporal dependencies by:
1. Allowing cyclic connections in the hidden layer.
2. Preserving internal state between inputs.
• Training is expensive; backpropagation-through-time is used
to unroll all cycles and adjust neuron parameters.
(Figure source: http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
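A minimal sketch of the recurrent state update (illustrative only; tanh hidden units and the weight scales are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W_in = rng.normal(size=(n_hidden, n_in))       # input-to-hidden weights
W_rec = rng.normal(size=(n_hidden, n_hidden))  # cyclic hidden-to-hidden weights

def step(state, x):
    # The new state depends on both the current input and the previous
    # state -- this is how temporal dependencies are carried forward.
    return np.tanh(W_in @ x + W_rec @ state)

state = np.zeros(n_hidden)
for x in rng.normal(size=(4, n_in)):  # a short input sequence
    state = step(state, x)
```

Backpropagation-through-time trains such a model by unrolling this loop over the sequence and differentiating through every `step`.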
Continuous Activation vs. Spiking Neurons
How does a neuron produce its output?
• Continuous activation neurons
1. Compute an activation function using inputs, weights, bias.
2. Pass the result to all connected neurons.
• Spiking neurons
1. Accumulate and store inputs.
2. Pass the result onward only if a threshold is exceeded.
• Advantage: spiking neurons can provably compute any function
computed by sigmoidal neurons, using fewer neurons (Maass,
1997).
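The accumulate-then-threshold behavior can be illustrated with a toy leaky integrate-and-fire step (the leak and threshold constants here are arbitrary assumptions):

```python
def lif_step(v, current, leak=0.9, threshold=1.0):
    # Accumulate input onto a leaking membrane potential; emit a spike
    # (and reset the potential) only when the threshold is exceeded.
    v = leak * v + current
    if v >= threshold:
        return 0.0, True
    return v, False

v, spikes = 0.0, []
for current in [0.4, 0.4, 0.4, 0.0, 0.4]:
    v, spiked = lif_step(v, current)
    spikes.append(spiked)
print(spikes)  # → [False, False, True, False, False]
```

Note that no output is passed on the first two steps even though input arrived: charge is stored until the threshold is crossed.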
Conceptual Introduction
Figure: Reservoir Computing: construct a reservoir of random recurrent
neurons and train a single readout layer.
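The figure's idea can be sketched as an echo-state-style reservoir in a few lines of NumPy (an illustrative toy; the sizes, scaling, and the sine task are assumptions): the input and recurrent weights stay random and fixed, and only a linear readout is fit.

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_res = 1, 100

# Fixed random reservoir: these weights are never trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def run_reservoir(inputs):
    states, x = [], np.zeros(n_res)
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)  # recurrent state update
        states.append(x.copy())
    return np.array(states)

# Train only the readout, here by least squares: predict the next
# sample of a sine wave from the current reservoir state.
t = np.arange(300)
u = np.sin(0.1 * t).reshape(-1, 1)
target = np.sin(0.1 * (t + 1))
X = run_reservoir(u)
W_out, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ W_out
```

All of the training cost is in one linear regression, which is the "much simpler training" the motivation slide refers to.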
History
Random networks with a trained readout layer
• Frank Rosenblatt, 1962; Geoffrey Hinton, 1981; Buonomano &
Merzenich, 1995
Echo-State Networks
• Herbert Jaeger, 2001
Liquid State Machines
• Wolfgang Maass, 2002
Backpropagation Decorrelation (applying concepts from RC to train RNNs)
• Jochen Steil, 2004
Unifying as Reservoir Computing
• Verstraeten, 2007
Successful Applications
Broad Topics
• Robotics controls, object tracking, motion prediction, event
detection, pattern classification, signal processing, noise
modeling, time series prediction
Specific Examples
• Venayagamoorthy, 2007 used an ESN as a wide-area power
system controller with on-line learning.
• Jaeger, 2004 improved noisy time series prediction accuracy
2400x over previous techniques.
• Salehi, 2016 simulated a nanophotonic reservoir computing
system achieving 100% speech recognition accuracy on the TIDIGITS dataset.
Liquid State Machines vs. Echo State Networks
Primary difference: neuron implementation
• ESN: neurons do not hold charge, state is maintained using
recurrent loops.
• LSM: neurons can hold charge and maintain internal state.
• LSM formulation is general enough to encompass ESNs.
LSM Formal Definition
• A filter maps between two functions of time: u(·) → y(·).
• A Liquid State Machine M is defined M = (L^M, f^M): a filter L^M
and a readout function f^M.
• The state at time t is defined x^M(t) = (L^M u)(t).
• Read: the filter L^M applied to the input function u(·), evaluated at time t.
• The output at time t is defined y(t) = (Mu)(t) = f^M(x^M(t)).
• Read: the readout function f^M applied to the current state x^M(t).
(Maass, 2002; Joshi & Maass, 2004)
LSM Requirements (Th. 3.1, Maass 2004)
Filters in L^M satisfy the point-wise separation property
A class CB of filters has the PWSP with regard to input functions from U^n
if for any two functions u(·), v(·) ∈ U^n with u(s) ≠ v(s) for some s ≤ 0,
there exists some filter B ∈ CB such that (Bu)(0) ≠ (Bv)(0).
Intuition: there exists a filter that can distinguish two input functions from one
another at the same time step.
Readout f^M satisfies the universal approximation property
A class CF of functions has the UAP if for any m ∈ N, any compact set X ⊆ R^m,
any continuous function h : X → R, and any given ρ > 0, there exists
some f in CF such that |h(x) − f(x)| ≤ ρ for all x ∈ X.
Intuition: any continuous function on a compact domain can be uniformly
approximated by functions from CF.
Examples of Filters and Readout Functions
Filters Satisfying the Pointwise Separation Property
• Linear filters with impulse response h(t) = e^(−at), a > 0
• All delay filters u(·) → u^(t₀)(·)
• Leaky integrate-and-fire neurons
• Threshold logic gates
Readout functions satisfying Universal Approximation Property
• Simple linear regression
• Simple perceptrons
• Support vector machines
Building, Training LSMs
In General
• Take inspiration from known characteristics of the brain.
• Perform search/optimization to find a configuration.
Example: Simulated Robotic Arm Movement (Joshi, Maass 2004)
• Inputs: x, y target position, 2 joint angles, 2 prior torque magnitudes.
• Output: 2 torque magnitudes to move the arm.
• 600 neurons in a 20 x 5 x 6 grid.
• Connections chosen from a distribution favoring local
connections.
• Neuron and connection parameters (e.g. firing threshold)
chosen based on knowledge of rat brains.
• Readout trained to deliver torque values using linear
regression.
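The distance-dependent wiring can be sketched as follows (an illustrative toy: the C·exp(−(d/λ)²) decay follows Maass-style reservoir models, but the constants `lam` and `c` here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
# 600 neurons arranged on a 20 x 5 x 6 grid, as in the slide.
coords = np.array([(x, y, z) for x in range(20)
                             for y in range(5)
                             for z in range(6)], dtype=float)

def random_local_connections(lam=2.0, c=0.3):
    # Connection probability decays with squared distance, so nearby
    # neurons are far more likely to be wired together.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    prob = c * np.exp(-(d / lam) ** 2)
    return rng.random(prob.shape) < prob

adj = random_local_connections()
```

The resulting adjacency matrix is sparse and locally clustered; only the readout on top of such a reservoir is then trained.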
Figure: Reservoir architecture and control loop from Joshi, Maass 2004
Research Trends
1. Hardware implementations: laser optics and other novel
hardware to implement the reservoir.
2. Optimizing reservoirs: analytical insight and techniques to
optimize a reservoir for a specific task. (The current standard is
intuition and manual search.)
3. Interconnecting modular reservoirs for more complex tasks.
Summary
• Randomly initialized reservoir and a simple trained readout
mapping.
• Neurons in the reservoir and the readout map should satisfy
two properties for universal computation.
• Particularly useful for tasks with temporal data/signals.
• More work to be done for optimizing and hardware
implementation.

GenCyber Cyber Security Day Presentation
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 

Reservoir Computing Overview (with emphasis on Liquid State Machines)

• Recurrent neural nets capture temporal dependencies by:
1. Allowing cyclic connections in the hidden layer.
2. Preserving internal state between inputs.
• Training is expensive; backpropagation-through-time is used to unroll all cycles and adjust neuron parameters.
2
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
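The recurrent state update can be sketched in a few lines of NumPy. The sizes and random weights below are purely illustrative; a real RNN would learn W_x and W_h with backpropagation-through-time:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5

# Illustrative random weights; in practice these are trained.
W_x = rng.normal(scale=0.5, size=(n_hidden, n_in))      # input -> hidden
W_h = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # hidden -> hidden (the cyclic connection)

h = np.zeros(n_hidden)  # internal state preserved between inputs
for x in rng.normal(size=(10, n_in)):  # a toy input sequence of 10 steps
    # The new state depends on the current input AND the previous state,
    # which is what lets the network capture temporal dependencies.
    h = np.tanh(W_x @ x + W_h @ h)

print(h.shape)  # (5,)
```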
Continuous Activation vs. Spiking Neurons
How does a neuron produce its output?
• Continuous activation neurons
1. Compute an activation function using inputs, weights, bias.
2. Pass the result to all connected neurons.
• Spiking neurons
1. Accumulate and store inputs.
2. Only pass the result if a threshold is exceeded.
• Advantages
• Proven that spiking neurons can compute any function computed by sigmoidal neurons with fewer neurons (Maass, 1997).
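The accumulate-and-threshold behavior can be sketched as a toy leaky integrate-and-fire neuron; the leak and threshold values here are arbitrary choices for illustration:

```python
# Toy leaky integrate-and-fire neuron: it accumulates input current and
# emits a spike only when the stored potential crosses a threshold.
def lif_run(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i      # accumulate input (with leak) instead of firing every step
        if v >= threshold:    # only pass output when the threshold is exceeded
            spikes.append(1)
            v = 0.0           # reset the potential after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.4, 0.4, 0.4, 0.0, 0.9, 0.9]))  # [0, 0, 1, 0, 0, 1]
```

Note that sub-threshold inputs produce no output at all, unlike a continuous-activation neuron, which passes some value on every step.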
Conceptual Introduction
Figure: Reservoir Computing: construct a reservoir of random recurrent neurons and train a single readout layer.
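The recipe above (a fixed random recurrent reservoir, with only a linear readout trained) can be sketched as an echo-state-style toy in NumPy. The reservoir size, spectral-radius scaling, and next-step sine-prediction task are all illustrative assumptions, not details from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, T = 50, 200

# Fixed random reservoir: these recurrent weights are never trained.
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # scale spectral radius below 1
W_in = rng.normal(size=(n_res,))

u = np.sin(np.arange(T) * 0.2)  # toy input signal
target = np.roll(u, -1)         # toy task: predict the next value

x = np.zeros(n_res)
states = []
for t in range(T):
    x = np.tanh(W_in * u[t] + W_res @ x)  # random recurrent reservoir dynamics
    states.append(x.copy())
X = np.array(states)

# Only the readout layer is trained, here by ordinary least squares.
w_out, *_ = np.linalg.lstsq(X[:-1], target[:-1], rcond=None)
pred = X[:-1] @ w_out
print("readout MSE:", np.mean((pred - target[:-1]) ** 2))
```

Training reduces to a single linear regression over recorded reservoir states, which is the "much simpler training" the introduction contrasts with full backpropagation-through-time.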
History
Random networks with a trained readout layer
• Frank Rosenblatt, 1962; Geoffrey Hinton, 1981; Buonomano, Merzenich, 1995
Echo-State Networks
• Herbert Jaeger, 2001
Liquid State Machines
• Wolfgang Maass, 2002
Backpropagation Decorrelation3
• Jochen Steil, 2004
Unifying as Reservoir Computing
• Verstraeten, 2007
3
Applying concepts from RC to train RNNs
Successful Applications
Broad Topics
• Robotics controls, object tracking, motion prediction, event detection, pattern classification, signal processing, noise modeling, time series prediction
Specific Examples
• Venayagamoorthy, 2007 used an ESN as a wide-area power system controller with on-line learning.
• Jaeger, 2004 improved noisy time series prediction accuracy 2400x over previous techniques.
• Salehi, 2016 simulated a nanophotonic reservoir computing system with 100% speech recognition on the TIDIGITS dataset.
Liquid State Machines vs. Echo State Networks
Primary difference: neuron implementation
• ESN: neurons do not hold charge; state is maintained using recurrent loops.
• LSM: neurons can hold charge and maintain internal state.
• The LSM formulation is general enough to encompass ESNs.
LSM Formal Definition 4 5
• A filter maps between two functions of time u(·) → y(·).
• A Liquid State Machine M is defined M = ⟨L^M, f^M⟩.
• Filter L^M and readout function f^M.
• State at time t defined x^M(t) = (L^M u)(t).
• Read: filter L^M applied to input function u(·) at time t.
• Output at time t defined y(t) = (M u)(t) = f^M(x^M(t)).
• Read: the readout function f^M applied to the current state x^M(t).
4
Maass, 2002
5
Joshi, Maass 2004
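The composition y(t) = f^M((L^M u)(t)) can be made concrete with a toy sketch; the delay filter and affine readout chosen here are illustrative stand-ins, not the filters used in the papers:

```python
# Toy rendering of the LSM definition: output = readout(filter(input, t)).
def L(u, t, delay=1):
    # A delay filter as L^M: the state x(t) = (L u)(t) = u(t - delay).
    return u(t - delay)

def f(x):
    # A memoryless readout f^M applied to the current liquid state.
    return 2 * x + 1

def M(u, t):
    # The machine's output y(t) = (M u)(t) = f(x(t)).
    return f(L(u, t))

u = lambda t: t ** 2   # an input function of time
print(M(u, 3))         # f(u(2)) = 2*4 + 1 = 9
```

All temporal memory lives in the filter L^M; the readout f^M only ever sees the state at the current instant.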
LSM Requirements (Th. 3.1, Maass 2004)
Filters in L^M satisfy the point-wise separation property
A class CB of filters has the PWSP with regard to input functions from U^n if for any two functions u(·), v(·) ∈ U^n with u(s) ≠ v(s) for some s ≤ 0, there exists some filter B ∈ CB such that (Bu)(0) ≠ (Bv)(0).
Intuition: there exists a filter that can distinguish two input functions from one another at the same time step.
Readout f^M satisfies the universal approximation property
A class CF of functions has the UAP if for any m ∈ N, any set X ⊆ R^m, any continuous function h : X → R, and any given ρ > 0, there exists some f ∈ CF such that |h(x) − f(x)| ≤ ρ for all x ∈ X.
Intuition: any continuous function on a compact domain can be uniformly approximated by functions from CF.
Examples of Filters and Readout Functions
Filters satisfying the pointwise separation property
• Linear filters with impulse responses h(t) = e^(−at), a > 0
• All delay filters u(·) → u^(t0)(·)
• Leaky integrate-and-fire neurons
• Threshold logic gates
Readout functions satisfying the universal approximation property
• Simple linear regression
• Simple perceptrons
• Support vector machines
Building, Training LSMs
In General
• Take inspiration from known characteristics of the brain.
• Perform search/optimization to find a configuration.
Example: Simulated Robotic Arm Movement (Joshi, Maass 2004)
• Inputs: x, y target, 2 angles, 2 prior torque magnitudes.
• Output: 2 torque magnitudes to move the arm.
• 600 neurons in a 20 x 5 x 6 grid.
• Connections chosen from a distribution favoring local connections.
• Neuron and connection parameters (e.g. firing threshold) chosen based on knowledge of rat brains.
• Readout trained to deliver torque values using linear regression.
Figure: Reservoir architecture and control loop from Joshi, Maass 2004
Research Trends
1. Hardware implementations: laser optics and other novel hardware to implement the reservoir.
2. Optimizing reservoirs: analytical insight and techniques to optimize a reservoir for a specific task. (The current standard is intuition and manual search.)
3. Interconnecting modular reservoirs for more complex tasks.
Summary
• Randomly initialized reservoir and a simple trained readout mapping.
• Neurons in the reservoir and the readout map should satisfy two properties for universal computation.
• Particularly useful for tasks with temporal data/signals.
• More work to be done on optimization and hardware implementations.