SS2016 Modern Neural Computation
Lecture 3: Network Dynamics
Hirokazu Tanaka
School of Information Science
Japan Advanced Institute of Science and Technology
Neural network as a dynamical system.
In this lecture we will learn:
• Attractor dynamics
- Hopfield model
- Winner-take-all and Winner-less competition
• Random connectivity
- Girko’s circular law
- Phase transition by synaptic variability
• Collective dynamics
- Hebb’s cell assemblies
- Synfire chain, Neuronal avalanche, small-world network
• Recurrent network dynamics
- Echo-state network, liquid-state machine
- Self-organizing recurrent network (SORN)
• Synchronization
- Kuramoto model
Hopfield model inspired by physics of ferromagnetism.
"Spin" variable for neuron i:
$$S_i = \begin{cases} +1 & \text{excited} \\ -1 & \text{rest} \end{cases}$$
(Figure: two seven-neuron networks, S1-S7.) General connectivity: $w_{ij}$ unconstrained ($w_{ij} \neq w_{ji}$ in general). Symmetric connectivity: $w_{ij} = w_{ji}$.
Here we will see that a neural network with symmetric connectivity exhibits attractor dynamics.
Hopfield model inspired by physics of ferromagnetism.
Each neuron receives the local field
$$h_i(t) = \sum_j w_{ij} S_j(t),$$
and is updated stochastically (Glauber dynamics):
$$\Pr\left[S_i(t+\Delta t) = \pm 1 \,\middle|\, h_i(t)\right] = \frac{e^{\pm\beta h_i(t)}}{e^{\beta h_i(t)} + e^{-\beta h_i(t)}},$$
or equivalently
$$\Pr\left[S_i(t+\Delta t) = +1 \,\middle|\, h_i(t)\right] = \frac{1 + \tanh \beta h_i(t)}{2}.$$
The associated energy function is
$$\mathcal{H}(S) = -\sum_{i,j} w_{ij}\, S_i S_j.$$
In the deterministic limit $\beta \to \infty$, the update rule reduces to
$$S_i(t+\Delta t) = \operatorname{sgn} h_i(t).$$
Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics
Associative memory is stored in connection strengths.
M stored patterns: $p_i^\mu \in \{+1, -1\}$, $i = 1, \dots, N$, $\mu = 1, \dots, M$.
Hebbian learning:
$$w_{ij} = \frac{1}{N}\sum_{\mu=1}^{M} p_i^\mu p_j^\mu$$
Overlap with pattern μ:
$$m^\mu = \frac{1}{N}\sum_j p_j^\mu S_j$$
The local field is then a sum of overlaps:
$$h_i(t) = \sum_j w_{ij} S_j = \frac{1}{N}\sum_{\mu=1}^{M}\sum_{j=1}^{N} p_i^\mu p_j^\mu S_j = \sum_{\mu=1}^{M} p_i^\mu m^\mu$$
Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics
Memory recall is a relaxation process to fixed points.
Tank & Hopfield (1987) Scientific American
$$w_{ij} = \frac{1}{N}\sum_{\mu=1}^{M} p_i^\mu p_j^\mu$$
Memory recall is a relaxation process to fixed points.
M=3 case
Deterministic case
Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics
Suppose that an initial pattern of population activity has a significant similarity with pattern μ=3,
$$m^3(t_0) = 0.4,$$
while there is no overlap with the other patterns,
$$m^1(t_0) = m^2(t_0) = 0.$$
One deterministic update then gives
$$S_i(t_0+\Delta t) = \operatorname{sgn}\left(p_i^1 m^1 + p_i^2 m^2 + p_i^3 m^3\right) = \operatorname{sgn}\left(p_i^3 m^3\right) = p_i^3,$$
so the overlap with pattern 3 becomes
$$m^3(t_0+\Delta t) = \frac{1}{N}\sum_i p_i^3\, S_i(t_0+\Delta t) = \frac{1}{N}\sum_i \left(p_i^3\right)^2 = 1.$$
Therefore, the population activity converges to the pattern μ=3.
Memory recall is a relaxation process to fixed points.
Stochastic case
Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics
With noise, each neuron aligns with pattern 3 only probabilistically:
$$\Pr\left[S_i(t+\Delta t) = +1 \,\middle|\, h_i(t)\right] = g\!\left(p_i^3\, m^3(t_0)\right),$$
so for $p_i^3 = +1$ the probability is $g\!\left(m^3(t_0)\right)$, and for $p_i^3 = -1$ it is $g\!\left(-m^3(t_0)\right)$.
Splitting the overlap into the two sub-populations ($N_+^3$ neurons with $p_i^3 = +1$ and $N_-^3$ neurons with $p_i^3 = -1$):
$$m^3(t_0+\Delta t) = \frac{1}{N}\sum_i p_i^3\, S_i(t_0+\Delta t) = \frac{N_+^3}{N}\cdot\frac{1}{N_+^3}\sum_{i:\,p_i^3=+1} S_i(t_0+\Delta t) \;-\; \frac{N_-^3}{N}\cdot\frac{1}{N_-^3}\sum_{i:\,p_i^3=-1} S_i(t_0+\Delta t)$$
Memory recall is a relaxation process to fixed points.
Stochastic case
Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics
The two sub-population averages follow from the update probabilities:
$$\frac{1}{N_+^3}\sum_{i:\,p_i^3=+1} S_i(t+\Delta t) \approx \Pr\left[S_i = +1\right] - \Pr\left[S_i = -1\right] = 2g\!\left(m^3(t_0)\right) - 1 \quad (*)$$
$$\frac{1}{N_-^3}\sum_{i:\,p_i^3=-1} S_i(t+\Delta t) \approx 2g\!\left(-m^3(t_0)\right) - 1 \quad (**)$$
Substituting (*) and (**), with $N_+^3 \approx N_-^3 \approx N/2$, gives the update rule
$$m^3(t_0+\Delta t) = g\!\left(m^3(t_0)\right) - g\!\left(-m^3(t_0)\right).$$
Memory recall is a relaxation process to fixed points.
Figure 17.8 in Gerstner (2014) Neuronal Dynamics
$$m^3(t_0+\Delta t) = g\!\left(m^3(t_0)\right) - g\!\left(-m^3(t_0)\right)$$
If we assume a sigmoid activation function,
$$g(m) = \frac{1}{2}\left(1 + \tanh \beta m\right),$$
the update rule becomes
$$m^3(t_0+\Delta t) = \tanh \beta\, m^3(t_0).$$
When β>1, the network is attracted to the pattern.
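Why β = 1 is the critical value: a short fixed-point argument (standard mean-field reasoning, making explicit a step the slide leaves implicit). Fixed points of the update satisfy
$$m^* = \tanh \beta m^*, \qquad \left.\frac{d}{dm}\tanh \beta m\right|_{m=0} = \beta.$$
For β ≤ 1 the only solution is $m^* = 0$ and the overlap decays (no retrieval); for β > 1 the origin is unstable and two stable solutions $\pm m^* \neq 0$ appear, so repeated updates drive the overlap toward $m^*$, which approaches 1 as β grows (retrieval).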
Memory recall is a relaxation process to fixed points.
Tank & Hopfield (1987) Scientific American
Demo: Matlab example.
N=25, β=3
Demo: Matlab example.
N=25, β=0.8
Exercise: fill in the Matlab code.
%% parameters
N = 5^2; % # neurons
beta = 3; % inverse temperature
T = 9; % # simulation steps
%% M=3 patterns
P = [ [1,1,1,1,-1, -1,1,-1,-1,1, -1,1,-1,-1,1, -1,1,-1,-1,1, -1,1,1,1,-1]; ...
[1,1,1,1,1, -1,-1,-1,1,-1, -1,-1,-1,1,-1, 1,-1,-1,1,-1, 1,1,1,-1,-1]; ...
[-1,1,1,1,1, 1,-1,-1,-1,-1, 1,-1,-1,-1,-1, 1,-1,-1,-1,-1, -1,1,1,1,1] ];
figure(1);
subplot(131); imagesc(reshape(P(1,:),5,5)); title('pattern 1');
subplot(132); imagesc(reshape(P(2,:),5,5)); title('pattern 2');
subplot(133); imagesc(reshape(P(3,:),5,5)); title('pattern 3');
% connectivity matrix
W = 1/N*(P'*P);
%% simulation
S = 2*(rand(N,1)>0.5)-1; % initial pattern
figure(2); subplot(1,9,1); imagesc(reshape(S,5,5)); title(['t=1']);
for t=2:T
h = W*S; % inputs
p = 1/2*(1+tanh(beta*h)); % prob(S=+1)
S = 2*(rand(N,1)<p)-1; % stochastic Glauber dynamics
figure(2); subplot(1,9,t);
imagesc(reshape(S,5,5)); title(['t=' num2str(t)]);
end
Write your own code here.
How many patterns can an N-neuron network remember?
Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics
Stability condition
Assume that the network represents the pattern ν at time t₀:
$$S_i(t_0) = p_i^\nu.$$
Then, in the deterministic case, the state at time $t_0+\Delta t$ is determined by
$$S_i(t_0+\Delta t) = \operatorname{sgn}\!\left(\frac{1}{N}\sum_{j=1}^{N}\sum_{\mu=1}^{M} p_i^\mu p_j^\mu p_j^\nu\right) = \operatorname{sgn}\!\left(p_i^\nu + \frac{1}{N}\sum_{j}\sum_{\mu\neq\nu} p_i^\mu p_j^\mu p_j^\nu\right) = \operatorname{sgn}\!\left(p_i^\nu\left(1 + a_i^\nu\right)\right),$$
where the crosstalk term is
$$a_i^\nu = \frac{1}{N}\sum_{j}\sum_{\mu\neq\nu} p_i^\nu p_i^\mu p_j^\mu p_j^\nu.$$
The pattern is stable at neuron i as long as $a_i^\nu > -1$.
How many patterns can an N-neuron network remember?
Hopfield (1982) PNAS; Gerstner (2014) Neuronal Dynamics
Stability condition: $S_i(t_0+\Delta t) = \operatorname{sgn}\!\left(p_i^\nu\left(1 + a_i^\nu\right)\right)$, with the crosstalk term
$$a_i^\nu = \frac{1}{N}\sum_{j}\sum_{\mu\neq\nu} p_i^\nu p_i^\mu p_j^\mu p_j^\nu, \qquad \mathrm{E}\!\left[a_i^\nu\right] = 0, \quad \mathrm{Var}\!\left[a_i^\nu\right] = \frac{M}{N}.$$
For large N the crosstalk is approximately Gaussian, $a_i^\nu \sim \mathcal{N}(0, M/N)$, so the probability that bit i flips is
$$P_{\text{error}} = \Pr\left[1 + a_i^\nu < 0\right] = \Pr\left[a_i^\nu < -1\right] = \frac{1}{2}\left[1 - \operatorname{erf}\!\left(\sqrt{\frac{N}{2M}}\right)\right].$$
Therefore, the number of patterns M must be small enough
compared to the number of neurons N.
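A quick numerical check of this estimate (a minimal sketch of my own; the diagonal of W is kept here, which makes the empirical error rate sit slightly below the formula at larger M/N):

%% Sketch: empirical one-step error probability vs. the erf formula
N = 500;
for M = [10 25 50 100]
    P = 2*(rand(M,N) > 0.5) - 1;       % M random binary patterns
    W = (P'*P)/N;                      % Hebbian weights (diagonal kept)
    S = P(1,:)';                       % start exactly at pattern nu = 1
    Snew = 2*((W*S) >= 0) - 1;         % one deterministic update
    pErr = mean(Snew ~= S);            % fraction of flipped bits
    pTheory = 0.5*(1 - erf(sqrt(N/(2*M))));
    fprintf('M = %3d: empirical %.4f, theory %.4f\n', M, pErr, pTheory);
end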
Physics of spin systems.
Ising model (short-range interaction between nearest-neighbor pairs $\langle i,j \rangle$):
$$\mathcal{H}(S) = -J\sum_{\langle i,j\rangle} S_i S_j$$
Spin glass (random couplings $J_{ij}$):
$$\mathcal{H}(S) = -\sum_{i,j} J_{ij}\, S_i S_j$$
- Short-range random interaction: Edwards-Anderson model.
- Long-range random interaction with $J_{ij} \sim \mathcal{N}\!\left(\frac{J_0}{N}, \frac{J^2}{N}\right)$: Sherrington-Kirkpatrick model.
Ising (1925); Edwards & Anderson (1975); Sherrington & Kirkpatrick (1975)
Simple dynamics with random connections.
Dynamics of N interconnected neurons with random connections:
$$\frac{d\mathbf{x}}{dt} = -\mathbf{x} + \mathbf{W}\tanh\mathbf{x}, \qquad W_{ij} \sim \mathcal{N}\!\left(0, \frac{\sigma^2}{N}\right)$$
Linearized dynamics around the origin:
$$\frac{d\mathbf{x}}{dt} = \left(-\mathbf{I} + \mathbf{W}\right)\mathbf{x}$$
The origin x = 0 is a fixed point. Whether it is stable or unstable is determined by the eigenvalues of the connectivity matrix W.
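A minimal simulation sketch of this stability transition (my own illustration; the σ values and network size are arbitrary choices):

%% Sketch: stability of the origin below and above sigma = 1
N = 200; dt = 0.01; nSteps = 5000;
for sigma = [0.8 1.5]                  % subcritical / supercritical
    W = sigma*randn(N)/sqrt(N);        % W_ij ~ N(0, sigma^2/N)
    x = 0.1*randn(N,1);                % small kick away from the origin
    for t = 1:nSteps
        x = x + dt*(-x + W*tanh(x));   % Euler step of the rate dynamics
    end
    fprintf('sigma = %.1f: final |x| = %.4f\n', sigma, norm(x));
end

For σ < 1 all eigenvalues of −I + W have negative real part and the activity decays to zero; for σ > 1 part of the spectrum crosses into the right half-plane (see the circular law below) and the network settles into ongoing spontaneous activity.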
Semi-circle law of eigenvalue density function.
All components are normally distributed, $W_{ij} \sim \mathcal{N}\!\left(0, \frac{1}{N}\right)$, and symmetric: $W_{ij} = W_{ji}$.
Semi-circle law: Wigner (1951)
In the limit of infinite n, the eigenvalues of an n×n random symmetric matrix W follow a semi-circle distribution:
$$p(\lambda) = \frac{1}{2\pi}\sqrt{4 - \lambda^2}$$
%% Parameters
n = 10000; t = 1; v = []; dx = 0.1;
%% Experiment
for i = 1:t
    a = randn(n);        % random n x n matrix
    s = (a + a')/2;      % symmetrized matrix
    v = [v; eig(s)];     % eigenvalues
end
v = v/sqrt(n/2);
%% Plot empirical density and the semi-circle law
[count, x] = hist(v, -2:dx:2);
cla reset; hold on;
bar(x, count/(t*n*dx), 'facecolor', [0.7 0.7 0.7]);
plot(x, sqrt(4 - x.^2)/(2*pi), 'k-', 'LineWidth', 2);
Wigner (1951)
Circular law of eigenvalue density function.
All components are normally distributed: $W_{ij} \sim \mathcal{N}\!\left(0, \frac{1}{N}\right)$.
Circular law: Girko (1985)
In the limit of infinite n, the eigenvalues of an n×n random (not necessarily symmetric) matrix W follow a uniform distribution on the unit disk in the complex plane.
N = 20000;   % reduce N (e.g. 2000) for a quick run
sigma = 1.01;
W = randn(N,N)/sqrt(N)*sigma;            % W_ij ~ N(0, sigma^2/N)
figure(1); clf; hold on;
plot(eig(W) - 1, 'k.');                  % eigenvalues of -I + W in the complex plane
theta = linspace(0, 2*pi, 100);
plot(cos(theta) - 1, sin(theta), 'k');   % circle of radius 1 centered at -1
plot([0 0], [-1 1], 'r')                 % stability boundary Re = 0
set(gca, 'color', [0.9400 0.9400 0.9400]); axis equal;
Girko (1984) Teor. Veroyatnost. i Primenen.
Circular law of eigenvalue density function. (Figure: eigenvalue clouds of −I + W for σ = 1.00, 0.99, and 1.01.)
Dale’s law: neurons are either excitatory or inhibitory.
Rajan & Abbott (2006) Phys Rev Lett
Excitatory neuron j: $W_{ij} > 0$ for all i. Inhibitory neuron j: $W_{ij} < 0$ for all i.
$$\mathbf{W} = \begin{pmatrix} W_{11} & \cdots & W_{1j} & \cdots & W_{1n} \\ \vdots & & \vdots & & \vdots \\ W_{n1} & \cdots & W_{nj} & \cdots & W_{nn} \end{pmatrix}$$
Every column of W has a single sign: positive for excitatory neurons, negative for inhibitory neurons.
Exercise: Examine the dynamics of the neural network when Dale's law is imposed on the random connection matrix, i.e., all components in a column are either positive (excitatory) or negative (inhibitory). This problem has already been analyzed by Rajan and Abbott (2006); a starting-point sketch follows.
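One way to set up this exercise (a minimal sketch of my own; the normalization and the mean-subtracted "balanced" variant follow the spirit, not the exact construction, of Rajan & Abbott):

%% Sketch: spectrum of a random matrix obeying Dale's law
N = 1000; f = 0.8; sigma = 1;              % size, excitatory fraction, gain
W = abs(randn(N))*sigma/sqrt(N);           % positive magnitudes
signs = [ones(1,round(f*N)) -ones(1,N-round(f*N))];
W = W .* repmat(signs, N, 1);              % column j is all + or all -
Wbal = W - repmat(mean(W,1), N, 1);        % balanced variant: zero column sums
figure; hold on;
plot(eig(W), 'r.');                        % Dale's law alone: large outliers
plot(eig(Wbal), 'k.');                     % balancing restores a circular bulk
axis equal;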
Hierarchical modular connectivity structure.
Exercise: Examine the dynamics of the neural network with a hierarchical
modular connectivity matrix. This problem has NOT been analyzed so far, to my
knowledge.
Anatomical studies suggest that cortical neurons are not randomly connected but rather are connected in a modular and hierarchical manner.
(Figure panels: regular network, random network, small world; HM, stochastic HM, cat visual cortex.)
n×n hierarchical network:
(1) m on-diagonal blocks of size s are connected with probability p_m, and n = ms.
(2) 1st-level off-diagonal blocks of size s are connected with probability p_c.
(3) Subsequent levels of off-diagonal blocks are of size 2s, 4s, 8s, … and are connected with probabilities p_c q, p_c q², p_c q³, … (a constructor sketch follows the citation below).
Robinson et al. (2009) Phys Rev Lett; Aljadeff, Stern, Sharpee (2015) Phys Rev Lett
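One reading of this recipe as code (a sketch of my own, assuming the module tree is binary so that m is a power of two; all parameter values are placeholders):

%% Sketch: hierarchical modular (HM) connectivity matrix
s = 32; m = 8; n = m*s;          % block size, number of modules, n = ms
pm = 0.3; pc = 0.1; q = 0.5;     % within-module, 1st-level, decay factor
bi = floor((0:n-1)/s);           % module index of each unit
P = zeros(n);
for i = 1:n
    for j = 1:n
        if bi(i) == bi(j)
            P(i,j) = pm;         % on-diagonal block
        else                     % level at which the two modules merge
            level = floor(log2(bitxor(bi(i), bi(j)))) + 1;
            P(i,j) = pc*q^(level-1);   % pc, pc*q, pc*q^2, ...
        end
    end
end
A = rand(n) < P;                 % sample the adjacency matrix
figure; spy(A);                  % nested block structure is visible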
Firing-rate equation.
Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience
Firing-rate dynamics:
$$\tau_r \frac{dv}{dt} = -v + F(I_s)$$
Synaptic input dynamics:
$$\tau_s \frac{dI_s}{dt} = -I_s + \sum_{b=1}^{N_u} w_b u_b = -I_s + \mathbf{w}\cdot\mathbf{u}$$
If $\tau_s \gg \tau_r$, the firing rate tracks the slow synaptic input:
$$\tau_s \frac{dI_s}{dt} = -I_s + \mathbf{w}\cdot\mathbf{u}, \qquad v = F(I_s).$$
If $\tau_s \ll \tau_r$, the synaptic input equilibrates instantly:
$$\tau_r \frac{dv}{dt} = -v + F(\mathbf{w}\cdot\mathbf{u}).$$
Feedforward and recurrent networks.
Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience
Feedforward network:
$$\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + \mathbf{F}(\mathbf{W}\mathbf{u})$$
Recurrent network:
$$\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + \mathbf{F}(\mathbf{W}\mathbf{u} + \mathbf{M}\mathbf{v})$$
Excitatory-inhibitory network.
Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience
$$\tau_E \frac{dv_E}{dt} = -v_E + F_E\!\left(h_E + M_{EE}\, v_E + M_{EI}\, v_I\right)$$
$$\tau_I \frac{dv_I}{dt} = -v_I + F_I\!\left(h_I + M_{IE}\, v_E + M_{II}\, v_I\right)$$
Sign constraints: $M_{EE} \geq 0$, $M_{IE} \geq 0$, $M_{EI} \leq 0$, $M_{II} \leq 0$.
$v_E$: excitatory population activity; $v_I$: inhibitory population activity.
Continuously labeled network.
Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience
Discretely labeled network: $\mathbf{v} = (v_1, \dots, v_N)^T$ with connectivity matrix $\left(M_{ab}\right)$.
Continuously labeled network: $v_a \to v(\theta)$, $M_{ab} \to M(\theta, \theta')$, and sums over neurons become integrals over the label θ:
$$\tau_r \frac{dv(\theta)}{dt} = -v(\theta) + F\!\left(\int \frac{d\theta'}{2\pi}\, W(\theta,\theta')\, u(\theta') + \int \frac{d\theta'}{2\pi}\, M(\theta,\theta')\, v(\theta')\right)$$
Linear network: Selective amplification.
Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience
$$\tau_r \frac{dv(\theta)}{dt} = -v(\theta) + h(\theta) + \int_{-\pi}^{\pi} \frac{d\theta'}{2\pi}\, M(\theta,\theta')\, v(\theta'), \qquad M(\theta,\theta') = \lambda_1 \cos(\theta - \theta')$$
Nonlinear network: Gain modulation.
Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience
$$\tau_r \frac{dv(\theta)}{dt} = -v(\theta) + \left[h(\theta) + \int_{-\pi}^{\pi} \frac{d\theta'}{2\pi}\, M(\theta,\theta')\, v(\theta')\right]_+, \qquad M(\theta,\theta') = \lambda_1 \cos(\theta - \theta')$$
Nonlinear network: Winner-takes-all selection.
Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience
$$\tau_r \frac{dv(\theta)}{dt} = -v(\theta) + \left[h(\theta) + \int_{-\pi}^{\pi} \frac{d\theta'}{2\pi}\, M(\theta,\theta')\, v(\theta')\right]_+, \qquad M(\theta,\theta') = \lambda_1 \cos(\theta - \theta')$$
Nonlinear network: Working memory.
Chapter 7 in Dayan & Abbott (2000) Theoretical Neuroscience
$$\tau_r \frac{dv(\theta)}{dt} = -v(\theta) + \left[h(\theta) + \int_{-\pi}^{\pi} \frac{d\theta'}{2\pi}\, M(\theta,\theta')\, v(\theta')\right]_+, \qquad M(\theta,\theta') = \lambda_1 \cos(\theta - \theta')$$
Non-symmetric connectivity: Winner-less competition.
- Dynamics in phase space connecting saddle points (heteroclinic connections).
- Memories are represented in terms of heteroclinic trajectories.
Generalized Lotka-Volterra dynamics:
$$\frac{da_i}{dt} = a_i(t)\left[\sigma_i(\mathbf{S}) - \sum_j \rho_{ij}\, a_j(t)\right] + S_i(t)$$
Rabinovich et al. (2001) Phys Rev Lett
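A minimal three-unit sketch of winnerless competition (my own toy example with cyclic, non-symmetric inhibition of May-Leonard type; the parameters are illustrative, not those of Rabinovich et al.):

%% Sketch: winnerless competition in a 3-unit Lotka-Volterra network
rho = [1.0 2.0 0.5;            % cyclic, non-symmetric inhibition:
       0.5 1.0 2.0;            % rho_ij ~= rho_ji produces a heteroclinic
       2.0 0.5 1.0];           % cycle instead of a single winner
sigma = ones(3,1); S = 1e-6*ones(3,1);   % growth rates, weak stimulus
f = @(t,a) a.*(sigma - rho*a) + S;
[T, A] = ode45(f, [0 300], [0.6; 0.2; 0.1]);
plot(T, A); xlabel('time'); ylabel('a_i(t)');
legend('a_1','a_2','a_3');     % activity visits unit 1 -> 2 -> 3 -> 1 ...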
Winnerless competition: Coupled FN neurons.
$$\tau_1 \frac{dx_i}{dt} = f(x_i) - y_i - z_i\left(x_i - \nu\right) + 0.35 + S_i$$
$$\frac{dy_i}{dt} = x_i - b\, y_i + a$$
$$\tau_2 \frac{dz_i}{dt} = -z_i + \sum_j g_{ij}\, G(x_j)$$
(Figure: the 9×9 binary inhibitory connection matrix $g_{ij}$; its nonzero entries are listed on the next slide and in the Matlab code below.)
Rabinovich et al. (2001) Phys Rev Lett
Winnerless competition: Coupled FN neurons.
$$\tau_1 \frac{dx_i}{dt} = f(x_i) - y_i - z_i\left(x_i - \nu\right) + 0.35 + S_i, \qquad \frac{dy_i}{dt} = x_i - b\, y_i + a, \qquad \tau_2 \frac{dz_i}{dt} = -z_i + \sum_j g_{ji}\, G(x_j)$$
Rabinovich et al. (2001) Phys Rev Lett
Nonzero connections (all of equal strength):
$$g_{15} = g_{52} = g_{21} = g_{24} = g_{45} = g_{65} = g_{26} = g_{36} = g_{53} = g_{74} = g_{57} = g_{84} = g_{58} = g_{86} = g_{89} = g_{95} = 2$$
(Figure: the corresponding directed graph over neurons 1-9.)
Winnerless competition: Matlab simulation.
Rabinovich et al. (2001) Phys Rev Lett
function Y = odeWLC(t, X)
% parameters
tau1 = 0.08; tau2 = 3.1;
a = 0.7; b = 0.8;
nu = -1.5;
% stimulus
S = [0.1; 0.15; 0.0; 0.0; 0.15; 0.1; 0.0; 0.0; 0.0];
% connectivity
g = zeros(9, 9); g0=2;
g(1,5)=g0; g(5,2)=g0; g(2,1)=g0; g(2,4)=g0;
g(4,5)=g0;
g(6,5)=g0; g(2,6)=g0; g(3,6)=g0; g(5,3)=g0;
g(7,4)=g0;
g(5,7)=g0; g(8,4)=g0; g(5,8)=g0; g(8,6)=g0;
g(8,9)=g0;
g(9,5)=g0;
% differential equations
x = X(1:9); y = X(10:18); z = X(19:27);
dxdt = ((x-x.^3/3)-y-z.*(x-nu)+0.35+S)/tau1;
dydt = x-b*y+a;
dzdt = (g'*(x>=0)-z)/tau2;
Y = [dxdt; dydt; dzdt];
x0 = [-1.2*ones(9,1); -0.62*ones(9,1); 0*ones(9,1)];
[T,X] = ode45(@odeWLC,[0 500], x0);
% all FHN neurons
figure(1);
for n=1:9
subplot(9,1,n); plot(T,X(:,n),'k');
end
% PCA
[y, s, l] = pca(X(:,1:9));
figure(3); plot3(s(:,2), s(:,3), s(:,4)); grid on;
Winnerless competition: Matlab simulation.
Rabinovich et al. (2001) Phys Rev Lett
Example: Olfactory processing in insects.
Mazor & Laurent (2005) Neuron
Cell assemblies: functional units of brain computation.
Definition
Cell assembly: a group of neurons that perform a given action or
represent a given percept.
Hebb (1949) The Organization of Behavior; Harris (2005) Nature Rev Neurosci
Synfire chain in a feedforward network.
Diesmann et al. (1999) Nature
Synfire chain in a feedforward network.
Brian Spiking Neural Network Simulator, http://briansimulator.org/
Feedforward synfire chain requires activity tuning.
Diesmann et al. (1999) Nature
Pulse-packet parameters (a, σ) across panels: (30 spikes, 2 ms), (40 spikes, 2 ms), (50 spikes, 2 ms), (60 spikes, 2 ms), (70 spikes, 2 ms), (80 spikes, 2 ms).
Feedforward synfire chain requires activity tuning.
Diesmann et al. (1999) Nature
Pulse-packet parameters (a, σ) across panels: (80 spikes, 1 ms), (80 spikes, 2 ms), (80 spikes, 3 ms), (80 spikes, 4 ms), (80 spikes, 5 ms), (80 spikes, 10 ms).
Synfire chain can be made robust by feedback connections.
Moldakarimov et al. (2015) PNAS
Synfire chain can be made robust by feedback connections.
Moldakarimov et al. (2015) PNAS
Pulse-packet parameters (a, σ) = (80 spikes, 5 ms) in all three panels; excitatory feedback strength 0.1, 0.2, 0.3.
Neural avalanche with scale-free dynamics.
Beggs & Plenz (2003) J Neurosci
$$P(\text{size}) \propto \text{size}^{-\tau}, \qquad \tau = \frac{3}{2}$$
Neural avalanche as a branching process.
Zapperi et al. (1995) Phys Rev Lett
Generating function of avalanche size, where $P_n(s,p)$ is the probability of an avalanche of size s:
$$f_n(x, p) = \sum_s P_n(s, p)\, x^s$$
Generating function of avalanche boundary size, where $Q_n(s,p)$ is the probability of an avalanche boundary of size s:
$$g_n(x, p) = \sum_s Q_n(s, p)\, x^s$$
Recursive relations:
$$f_{n+1}(x, p) = x\left(1 - p + p\, f_n(x, p)^2\right), \qquad g_{n+1}(x, p) = 1 - p + p\, g_n(x, p)^2$$
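These recursions are easy to check numerically; a small sketch of my own (with arbitrary x and p) iterating $f_n$ and comparing it against the closed-form fixed point derived on the next slide:

%% Sketch: iterate the avalanche-size generating-function recursion
p = 0.4; q = 1 - p; x = 0.9;
f = 1;                          % any starting value f_0 in [0,1] converges
for n = 1:50
    f = x*(q + p*f^2);          % f_{n+1}(x,p) = x(1 - p + p f_n(x,p)^2)
end
fClosed = (1 - sqrt(1 - 4*p*q*x^2))/(2*p*x);   % fixed point (next slide)
fprintf('f_50 = %.6f, closed form = %.6f\n', f, fClosed);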
Neural avalanche as a branching process.
Zapperi et al. (1995) Phys Rev Lett
For large n the recursion reaches the fixed point
$$f(x, p) = x\left(1 - p + p\, f(x, p)^2\right) \;\Longrightarrow\; f(x, p) = \frac{1 - \sqrt{1 - 4pq\,x^2}}{2px}, \qquad q = 1 - p.$$
Using the Taylor expansion
$$\sqrt{1 - u} = 1 - \frac{1}{2}u - \frac{1}{8}u^2 - \frac{1}{16}u^3 - \frac{5}{128}u^4 - \frac{7}{256}u^5 - \cdots - \frac{(2s-3)!!}{2^s\, s!}\,u^s - \cdots,$$
the generating function can be expanded as
$$f(x, p) = \frac{1 - \sqrt{1 - 4pq\,x^2}}{2px} = qx + \sum_{s \ge 2} \frac{(2s-3)!!}{2^s\, s!}\,\frac{(4pq)^s}{2p}\, x^{2s-1}.$$
Therefore the probability of an avalanche of size s is given asymptotically as
$$P(s) \propto s^{-3/2}\exp\!\left(-\frac{s}{s_c}\right), \qquad s_c = \frac{2}{\ln\left[1/(4pq)\right]},$$
which reduces to the pure power law $P(s) \propto s^{-3/2}$ at the critical point $p = 1/2$, where $4pq = 1$ and the cutoff $s_c$ diverges.
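This prediction can also be sampled directly; a quick simulation sketch of my own (each active unit activates both of its two descendants with probability p; the cap sMax truncates the largest critical avalanches, so the empirical tail bends slightly):

%% Sketch: sampling the avalanche-size distribution at criticality
p = 0.5; nTrials = 1e5; sMax = 1e4;   % p = 1/2 is the critical point
sizes = zeros(nTrials, 1);
for k = 1:nTrials
    active = 1; s = 1;
    while active > 0 && s < sMax
        active = 2*nnz(rand(active,1) < p);  % both children or none
        s = s + active;
    end
    sizes(k) = s;
end
edges = unique(round(logspace(0, log10(sMax), 25)));
counts = histcounts(sizes, edges);
centers = sqrt(edges(1:end-1).*edges(2:end));
dens = counts ./ diff(edges) / nTrials;
loglog(centers, dens, 'ko'); hold on;
loglog(centers, dens(1)*(centers/centers(1)).^(-1.5), 'k--'); % slope -3/2
xlabel('avalanche size s'); ylabel('P(s)');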
Echo-state network: harnessing chaotic units.
Jaeger & Haas (2004) Science
Echo-state network.
% load the data
trainLen = 2000; testLen = 2000; initLen = 100;
data = load('MackeyGlass_t17.txt');
% generate the ESN reservoir
inSize = 1; outSize = 1;
resSize = 1000;
a = 0.3; % leaking rate
rand('seed', 42);
Win = (rand(resSize,1+inSize)-0.5) .* 1;
W = rand(resSize,resSize)-0.5;
% rescale W to spectral radius 1.25
opt.disp = 0;
rhoW = abs(eigs(W,1,'LM',opt));
disp 'done.'
W = W .* (1.25/rhoW);
% allocate memory for the design (collected states) matrix
X = zeros(1+inSize+resSize, trainLen-initLen);
% set the corresponding target matrix directly
Yt = data(initLen+2:trainLen+1)';
% run the reservoir with the data and collect X
x = zeros(resSize,1);
for t = 1:trainLen
    u = data(t);
    x = (1-a)*x + a*tanh(Win*[1;u] + W*x);
    if t > initLen
        X(:,t-initLen) = [1;u;x];
    end
end
% train the output by ridge regression
reg = 1e-8; % regularization coefficient
X_T = X';
Wout = Yt*X_T * inv(X*X_T + reg*eye(1+inSize+resSize));
% generative mode: the network's output is fed back as its next input
Y = zeros(outSize, testLen);
u = data(trainLen+1);
for t = 1:testLen
    x = (1-a)*x + a*tanh(Win*[1;u] + W*x);
    y = Wout*[1;u;x];
    Y(:,t) = y;
    u = y;
end
http://minds.jacobs-university.de/mantas/code
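To quantify the prediction, one can compare Y against the held-out continuation of the Mackey-Glass series (a short addition in the spirit of the demo; variable names follow the code above, and errorLen is an arbitrary choice):

% compute MSE over the first errorLen free-running prediction steps
errorLen = 500;
mse = sum((data(trainLen+2:trainLen+errorLen+1)' - Y(1,1:errorLen)).^2)/errorLen;
disp(['test MSE = ' num2str(mse)]);
figure; plot(data(trainLen+2:trainLen+testLen+1), 'k'); hold on;
plot(Y', 'r--'); legend('target signal', 'free-running ESN prediction');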
Echo-state network.
Summary
• Population neural dynamics can be formulated by using the techniques developed in physics and dynamical systems, including spin models, phase transitions, scale-free dynamics, and so on.
• Population activity exhibits a variety of emergent phenomena such as attractor memory dynamics, winner-takes-all processes, short-term memory, winner-less competition, and so on.
• Population neural dynamics is a very active field, and many novel studies continue to appear today.
