This was a presentation given on February 8, 2018 at the European Institute for Theoretical Neuroscience (EITN)'s Dendritic Integration and Computation with Active Dendrites Workshop.
The workshop brings together experiments, models, and recent neuromorphic systems aimed at understanding the computational properties that dendrites confer on neural systems, focusing particularly on the excitable properties of dendrites and the types of computation they can implement.
The Predictive Neuron: How Active Dendrites Enable Spatiotemporal Computation in the Neocortex by Subutai Ahmad (02/08/2018)
1. Workshop: Dendritic integration and
computation with active dendrites
February 8, 2018
Subutai Ahmad
sahmad@numenta.com
The Predictive Neuron: How Active Dendrites Enable
Spatiotemporal Computation In The Neocortex
3. Observation:
The neocortex is constantly predicting its inputs.
“the most important and also the most neglected problem of cerebral physiology” (Lashley, 1951)
Research question:
How can networks of pyramidal neurons learn predictive models of the world?
1) How can neurons learn predictive models of extrinsic temporal sequences?
- Pyramidal neuron uses active dendrites for prediction
- A single layer network model for complex predictions
- Works on real world applications
“Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex”
Hawkins and Ahmad, Frontiers in Neural Circuits, 2016/03/30
2) How can neurons learn predictive models of sensorimotor sequences?
- Extension of sequence memory model
- Learns models of objects using motion-based context signal
- Can predict sensory features using movement
“A Theory of How Columns in the Neocortex Learn the Structure of the World”
Hawkins, Ahmad, and Cui, Frontiers in Neural Circuits, 2017/10/25
3) Experimentally testable predictions
- Impact of NMDA spikes
- Branch-specific plasticity
- Sparse correlation structure
5. Prediction Starts in the Neuron
Pyramidal neuron:
- 5K to 30K excitatory synapses: ~10% proximal, ~90% distal
Distal dendrites are pattern detectors:
- 8-15 co-active, co-located synapses generate dendritic NMDA spikes
- NMDA spikes cause sustained depolarization of the soma but do not typically generate an action potential
(Mel, 1992; Branco & Häusser, 2011; Schiller et al., 2000; Losonczy, 2006; Antic et al., 2010; Major et al., 2013; Spruston, 2008; Milojkovic et al., 2005, etc.)
6. Prediction Starts in the Neuron: HTM Neuron Model
Proximal synapses: cause somatic spikes and define the classic receptive field of the neuron.
Distal synapses: cause dendritic spikes and put the cell into a depolarized, or “predictive,” state.
- Depolarized neurons fire sooner, inhibiting nearby neurons.
- A neuron can predict its activity in hundreds of unique contexts.
(The HTM neuron mirrors the pyramidal neuron properties from the previous slide: 5K to 30K excitatory synapses, ~10% proximal and ~90% distal, with distal dendrites acting as pattern detectors.)
(Poirazi et al., 2003; Hawkins & Ahmad, 2016)
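To make the distal mechanism concrete, here is a minimal sketch of an HTM-style neuron. The threshold constant, class, and method names are illustrative assumptions, not taken from any published implementation:

```python
# Illustrative threshold: roughly 8-15 co-active synapses trigger an NMDA spike.
NMDA_THRESHOLD = 13

class HTMNeuron:
    def __init__(self, proximal_synapses, distal_segments):
        self.proximal_synapses = set(proximal_synapses)            # feedforward inputs
        self.distal_segments = [set(s) for s in distal_segments]   # contextual inputs

    def feedforward_overlap(self, active_inputs):
        """Proximal synapses define the classic receptive field."""
        return len(self.proximal_synapses & set(active_inputs))

    def is_predicted(self, active_cells):
        """Any single segment with enough co-active synapses depolarizes the
        cell, putting it in the 'predictive' state without firing it."""
        active = set(active_cells)
        return any(len(segment & active) >= NMDA_THRESHOLD
                   for segment in self.distal_segments)
```

Because each segment detects an independent pattern, a neuron with dozens of segments can be depolarized in many distinct contexts.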
7. A Single Layer Network Model for Sequence Memory
- Neurons in a mini-column learn the same feedforward receptive field.
- Active dendritic segments form connections to nearby cells.
- Depolarized cells fire first, and inhibit other cells within the mini-column.
[Figure: network activity at t=0, t=1, t=2, contrasting “No prediction” with “Predicted input”; predicted cells inhibit their neighbors, yielding the next prediction at t=2.]
(Hawkins & Ahmad, 2016; Cui et al., 2016)
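The inhibitory rule on this slide fits in a few lines. A minimal sketch, assuming binary cell states and per-mini-column inhibition (names are mine, not from the papers):

```python
def activate(minicolumn_cells, predictive_cells, column_is_active):
    """One mini-column's update: predicted cells fire first and inhibit the
    rest; if nothing was predicted, every cell fires (the column 'bursts')."""
    if not column_is_active:
        return set()
    predicted = set(minicolumn_cells) & set(predictive_cells)
    return predicted if predicted else set(minicolumn_cells)
```

The burst on unpredicted input is what lets the layer represent ambiguity: all possible contexts stay active until the next input disambiguates them.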
8. Continuous Branch Specific Learning
Synaptic changes are localized to dendritic segments (Stuart and Häusser, 2001; Losonczy et al., 2008):
1. If a cell was correctly predicted, positively reinforce the dendritic segment that caused the prediction.
2. If a cell was incorrectly predicted, slightly negatively reinforce the corresponding dendritic segment.
3. If no cell was predicted in a mini-column, reinforce the dendritic segment that best matched the previous input.
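A minimal sketch of these three rules, assuming scalar synaptic permanences in [0, 1]; the constants and data structure are illustrative, not the published parameter values:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    permanences: dict  # presynaptic cell id -> permanence in [0.0, 1.0]

def reinforce(seg, prev_active, inc=0.10, dec=0.05):
    """Rules 1 and 3: strengthen synapses onto previously active cells and
    weaken the rest. Changes touch only this segment (branch specific)."""
    for cell, p in seg.permanences.items():
        delta = inc if cell in prev_active else -dec
        seg.permanences[cell] = min(1.0, max(0.0, p + delta))

def punish(seg, dec=0.02):
    """Rule 2: slightly weaken a segment that caused a false prediction."""
    for cell, p in seg.permanences.items():
        seg.permanences[cell] = max(0.0, p - dec)

def best_matching(segments, prev_active):
    """Rule 3: pick the segment whose synapses best matched the previous
    input (prev_active is the set of previously active cell ids)."""
    return max(segments, key=lambda s: len(set(s.permanences) & prev_active))
```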
9. High Order (Non-Markovian) Sequences
Two sequences: A-B-C-D and X-B-C-Y.
[Figure: before learning, the shared inputs B and C activate the same whole mini-columns in both sequences; after learning, the sequences are represented as A-B'-C'-D' and X-B''-C''-Y''. Same columns, but only one cell active per column, so sequence context is preserved.]
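The point of the figure can be seen with a toy lookup: a first-order model is ambiguous after C, while the learned cell-level codes (B'/B'', C'/C'') carry the needed context. The dictionaries below are a hypothetical illustration, not model output:

```python
# First-order (Markov) transitions: after C, the next element is ambiguous.
first_order = {"B": {"C"}, "C": {"D", "Y"}}
assert first_order["C"] == {"D", "Y"}          # cannot tell D from Y

# High-order codes: same columns, different cells, so context is preserved.
high_order = {"B'": {"C'"}, "C'": {"D'"},      # ...A-B'-C' predicts D
              "B''": {"C''"}, "C''": {"Y''"}}  # ...X-B''-C'' predicts Y
assert high_order["C'"] == {"D'"} and high_order["C''"] == {"Y''"}
```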
10. Application To Real World Streaming Data Sources
- Accuracy is comparable to state-of-the-art ML techniques (LSTM, ARIMA, etc.)
- Continuous unsupervised learning: adapts to changes far better than other techniques
- Top benchmark score in detecting anomalies and unusual behavior
- Extremely fault tolerant (tolerant to 40% noise and faults)
- Multiple open source implementations (some commercial)
“Continuous online sequence learning with an unsupervised neural network model”
Cui, Ahmad and Hawkins, Neural Computation, 2016
“Unsupervised real-time anomaly detection for streaming data”
Ahmad, Lavin, Purdy and Agha, Neurocomputing, 2017
[Figure: NYC taxi demand (passenger count in 30-minute windows, Monday 2015-04-20 through Sunday 2015-04-26) and machine temperature sensor data; bar charts compare Shift, ARIMA, LSTM (1000/3000/6000), and TM on NRMSE, MAPE, and negative log-likelihood.]
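For the anomaly results above, the raw signal is simply the mismatch between predicted and actual input. A sketch in the spirit of Ahmad et al. (2017), which additionally models the distribution of recent raw scores to produce an anomaly likelihood (omitted here):

```python
def raw_anomaly_score(predicted_columns, active_columns):
    """Fraction of currently active columns that were not predicted at the
    previous time step: 0.0 = fully predicted, 1.0 = completely novel."""
    active = set(active_columns)
    if not active:
        return 0.0
    return 1.0 - len(set(predicted_columns) & active) / len(active)
```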
11. 1) How can neurons learn predictive models of temporal sequences?
- Pyramidal neuron uses active dendrites for prediction
- A single layer network model for complex predictions
- Works on real world applications
“Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex”
Hawkins and Ahmad, Frontiers in Neural Circuits, 2016/03/30
2) How can neurons learn predictive models of sensorimotor sequences?
- Extension of sequence memory model
- Learns models of objects using motion-based context signal
- Can predict sensory features using movement
“A Theory of How Columns in the Neocortex Learn the Structure of the World”
Hawkins, Ahmad, and Cui, Frontiers in Neural Circuits, 2017/10/25
3) Experimentally testable predictions
- Impact of NMDA spikes
- Branch-specific plasticity
- Sparse correlation structure
12. How Could Neurons Learn Predictive Models of Sensorimotor Sequences?
[Diagram: sequence memory, given a motor-related context signal alongside the sensor input, supports sensorimotor sequences.]
Hypothesis:
- We need to provide an “allocentric location,” updated through movement
- Grid cell modules, applied to objects, can compute allocentric location
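A toy sketch of the second hypothesis: represent allocentric location as phases in several grid-cell-like modules, each path-integrating the same movement at its own scale. The module scales and the 2-D phase encoding here are illustrative assumptions, not the model's actual encoding:

```python
import numpy as np

class GridModule:
    def __init__(self, scale):
        self.scale = scale        # spatial period of this module's tiling
        self.phase = np.zeros(2)  # position within one tile, in [0, 1)

    def move(self, displacement):
        """Path integration: the movement vector shifts the phase,
        wrapping around at the tile boundary."""
        self.phase = (self.phase + np.asarray(displacement) / self.scale) % 1.0

# The same movement updates every module; the combined phases form a
# high-capacity location code that is unique over large distances.
modules = [GridModule(s) for s in (1.0, 1.4, 2.0)]
for m in modules:
    m.move([0.3, -0.1])
location_code = np.concatenate([m.phase for m in modules])
```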
13. Two Layer Model of Sensorimotor Inference
- Given an allocentric location, a cellular layer can accurately predict its input as the sensor moves.
[Figure: an output layer representing the Object (pooling), stable over movement of the sensor, sits above a layer representing Feature @ location (sequence memory), which changes with each movement; its inputs are the sensed Feature and the Allocentric Location of the sensor.]
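Abstractly, the circuit eliminates candidate objects with each sensed feature-at-location pair. A minimal, set-based sketch of that computation; the data layout (objects as sets of feature/location pairs) is my simplification, not the network's actual representation:

```python
def infer_object(sensations, object_models):
    """sensations: iterable of (feature, allocentric_location) pairs gathered
    as the sensor moves. object_models: dict mapping object name to the set
    of (feature, location) pairs learned for that object."""
    candidates = set(object_models)
    for feature, location in sensations:
        # Keep only objects consistent with this feature at this location;
        # the candidate set is stable across movements and only narrows.
        candidates = {name for name in candidates
                      if (feature, location) in object_models[name]}
        if len(candidates) <= 1:
            break  # converged on one object (or no known object matches)
    return candidates
```

Convergence speed depends on how diagnostic each sensation is, which matches the benchmark observation on the next slide.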
14. Simulation Using the YCB Object Benchmark
Yale-CMU-Berkeley (YCB) Object Benchmark (Calli et al., 2017):
- 80 objects designed for robotics grasping tasks
- Includes high-resolution 3D CAD files
We created a virtual hand using the Unity game engine, with a curvature-based sensor on each fingertip.
- 4096 neurons per layer per column
- 98.7% recall accuracy (77/78 uniquely classified)
- Convergence time depends on the object, the sequence of sensations, and the number of fingers.
19. 1) How can neurons learn predictive models of temporal sequences?
- Pyramidal neuron uses active dendrites for prediction
- A single layer network model for complex predictions
- Works on real world applications
“Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex”
Hawkins and Ahmad, Frontiers in Neural Circuits, 2016/03/30
2) How can neurons learn predictive models of sensorimotor sequences?
- Extension of sequence memory model
- Learns models of objects using motion-based context signal
- Can predict sensory features using movement
“A Theory of How Columns in the Neocortex Learn the Structure of the World”
Hawkins, Ahmad, and Cui, Frontiers in Neural Circuits, 2017/10/25
3) Experimentally testable predictions
- Impact of NMDA spikes
- Branch-specific plasticity
- Sparse correlation structure
23. Emergence of High Order Cell Assemblies
[Figure: number of unique cell assemblies (×10^4) by assembly order (3-6) and across trials (0-20); counts of repeated assemblies for data vs. shuffled vs. Poisson controls in areas AL and V1; probability of observing a repeated cell assembly as a function of time jitter (-1 to 1 sec), for a 3rd-order cell assembly vs. a single cell.]
- Cell assemblies are significantly more likely to occur in sequences than predicted by a Poisson model (p < 0.001).
- A sparse code predicts a specific point in a sequence (single cells don't).
Similar to (Miller et al., 2014)
24. Summary
- A model of sequence learning in cortex
  - Relies on a “predictive neuron” with active dendrites and fast inhibitory networks
  - Can learn complex temporal sequences
  - Applied to real world streaming applications
- A model of sensorimotor sequence learning in cortex
  - An identical network of pyramidal cells can perform a very different computation
  - The difference: motion-related context sent to active dendrites
- Detailed list of experimentally testable properties
25. Open Issues / Discussion
- Are active dendrites necessary? (Yes!)
- Is a two layer network of uniform point neurons sufficient? (No!)
- How do we integrate calcium spikes, BAC firing, and apical dendrites?
- A continuous time model of HTM, including inhibitory networks
Collaborations: we are always interested in hosting visiting scholars and interns.
Co-authors: Jeff Hawkins, Scott Purdy, Marcus Lewis (Numenta), Yuwei Cui
Contact info: sahmad@numenta.com, @SubutaiAhmad