Lecture 7
Artificial neural networks: Supervised learning
(Negnevitsky, Pearson Education, 2002)
• Introduction, or how the brain works
• The neuron as a simple computing element
• The perceptron
• Multilayer neural networks
• Accelerated learning in multilayer neural networks
• The Hopfield network
• Bidirectional associative memories (BAM)
• Summary
Introduction, or how the brain works
Machine learning involves adaptive mechanisms that enable computers to learn
from experience, learn by example and learn by analogy. Learning capabilities
can improve the performance of an intelligent system over time. The most
popular approaches to machine learning are artificial neural networks and
genetic algorithms. This lecture is dedicated to neural networks.
• A neural network can be defined as a model of reasoning based on the human
  brain. The brain consists of a densely interconnected set of nerve cells, or
  basic information-processing units, called neurons.
• The human brain incorporates nearly 10 billion neurons and 60 trillion
  connections, synapses, between them. By using multiple neurons
  simultaneously, the brain can perform its functions much faster than the
  fastest computers in existence today.
• Each neuron has a very simple structure, but an army of such elements
  constitutes a tremendous processing power.
• A neuron consists of a cell body, soma, a number of fibers called dendrites,
  and a single long fiber called the axon.
Biological neural network
[Figure: a biological neuron, showing the soma, the dendrites, the axon and
the synapses connecting it to neighbouring neurons.]
• Our brain can be considered as a highly complex, non-linear and parallel
  information-processing system.
• Information is stored and processed in a neural network simultaneously
  throughout the whole network, rather than at specific locations. In other
  words, in neural networks, both data and its processing are global rather
  than local.
• Learning is a fundamental and essential characteristic of biological neural
  networks. The ease with which they can learn led to attempts to emulate a
  biological neural network in a computer.
• An artificial neural network consists of a number of very simple processors,
  also called neurons, which are analogous to the biological neurons in the
  brain.
• The neurons are connected by weighted links passing signals from one neuron
  to another.
• The output signal is transmitted through the neuron's outgoing connection.
  The outgoing connection splits into a number of branches that transmit the
  same signal. The outgoing branches terminate at the incoming connections of
  other neurons in the network.
Architecture of a typical artificial neural network
[Figure: input signals enter an input layer, pass through a middle layer, and
leave as output signals from an output layer.]
Analogy between biological and artificial neural networks

Biological Neural Network    Artificial Neural Network
Soma                         Neuron
Dendrite                     Input
Axon                         Output
Synapse                      Weight
The neuron as a simple computing element
Diagram of a neuron
[Figure: input signals x1, x2, . . ., xn reach neuron Y through links with
weights w1, w2, . . ., wn; the output signal Y is transmitted along every
outgoing branch.]
• The neuron computes the weighted sum of the input signals and compares the
  result with a threshold value, θ. If the net input is less than the
  threshold, the neuron output is –1. But if the net input is greater than or
  equal to the threshold, the neuron becomes activated and its output attains
  a value +1.
• The neuron uses the following transfer or activation function:

  $$X = \sum_{i=1}^{n} x_i w_i \qquad
  Y = \begin{cases} +1 & \text{if } X \geq \theta \\ -1 & \text{if } X < \theta \end{cases}$$

• This type of activation function is called a sign function.
Activation functions of a neuron
[Figure: graphs of the step, sign, sigmoid and linear activation functions,
each plotted as output Y against net input X.]

$$Y^{step} = \begin{cases} 1 & \text{if } X \geq 0 \\ 0 & \text{if } X < 0 \end{cases} \qquad
Y^{sign} = \begin{cases} +1 & \text{if } X \geq 0 \\ -1 & \text{if } X < 0 \end{cases}$$

$$Y^{sigmoid} = \frac{1}{1 + e^{-X}} \qquad Y^{linear} = X$$
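To make these four functions concrete, here is a minimal Python sketch (the
numpy dependency and the function names are my own choices, not part of the
lecture):

```python
import numpy as np

def step(x):
    """Step function: 1 if X >= 0, otherwise 0."""
    return np.where(x >= 0, 1, 0)

def sign(x):
    """Sign function: +1 if X >= 0, otherwise -1."""
    return np.where(x >= 0, 1, -1)

def sigmoid(x):
    """Sigmoid function: squashes X into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def linear(x):
    """Linear function: the output equals the net input."""
    return x

# Evaluate each function on a few net-input values.
X = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(step(X), sign(X), sigmoid(X).round(3), linear(X))
```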
Can a single neuron learn a task?
• In 1958, Frank Rosenblatt introduced a training algorithm that provided the
  first procedure for training a simple ANN: a perceptron.
• The perceptron is the simplest form of a neural network. It consists of a
  single neuron with adjustable synaptic weights and a hard limiter.
Single-layer two-input perceptron
[Figure: inputs x1 and x2, weighted by w1 and w2, feed a linear combiner with
threshold θ; a hard limiter then produces the output Y.]
The Perceptron
• The operation of Rosenblatt's perceptron is based on the McCulloch and Pitts
  neuron model. The model consists of a linear combiner followed by a hard
  limiter.
• The weighted sum of the inputs is applied to the hard limiter, which
  produces an output equal to +1 if its input is positive and −1 if it is
  negative.
• The aim of the perceptron is to classify inputs, x1, x2, . . ., xn, into one
  of two classes, say A1 and A2.
• In the case of an elementary perceptron, the n-dimensional space is divided
  by a hyperplane into two decision regions. The hyperplane is defined by the
  linearly separable function:

  $$\sum_{i=1}^{n} x_i w_i - \theta = 0$$
Linear separability in the perceptrons
[Figure: (a) a two-input perceptron separates classes A1 and A2 with the line
x1w1 + x2w2 − θ = 0; (b) a three-input perceptron separates them with the
plane x1w1 + x2w2 + x3w3 − θ = 0.]
 Negnevitsky, Pearson Education, 2002Negnevitsky, Pearson Education, 2002 18
ThisThis is done by making small adjustments in theis done by making small adjustments in the
weights to reduce the difference between the actualweights to reduce the difference between the actual
and desired outputs of theand desired outputs of the perceptronperceptron. The initial. The initial
weights are randomly assigned, usually in the rangeweights are randomly assigned, usually in the range
[[−−0.5, 0.5], and then updated to obtain the output0.5, 0.5], and then updated to obtain the output
consistent with the training examples.consistent with the training examples.
How does the perceptronHow does the perceptron learn its classificationlearn its classification
tasks?tasks?
• If at iteration p, the actual output is Y(p) and the desired output is
  Yd(p), then the error is given by:

  $$e(p) = Y_d(p) - Y(p) \qquad \text{where } p = 1, 2, 3, \ldots$$

  Iteration p here refers to the pth training example presented to the
  perceptron.
• If the error, e(p), is positive, we need to increase perceptron output Y(p),
  but if it is negative, we need to decrease Y(p).
The perceptron learning rule

$$w_i(p+1) = w_i(p) + \alpha \cdot x_i(p) \cdot e(p)$$

where p = 1, 2, 3, . . . and α is the learning rate, a positive constant less
than unity.
The perceptron learning rule was first proposed by Rosenblatt in 1960. Using
this rule we can derive the perceptron training algorithm for classification
tasks.
Perceptron's training algorithm
Step 1: Initialisation
Set initial weights w1, w2, . . ., wn and threshold θ to random numbers in the
range [−0.5, 0.5].
If the error, e(p), is positive, we need to increase perceptron output Y(p),
but if it is negative, we need to decrease Y(p).
Perceptron's training algorithm (continued)
Step 2: Activation
Activate the perceptron by applying inputs x1(p), x2(p), . . ., xn(p) and
desired output Yd(p). Calculate the actual output at iteration p = 1

$$Y(p) = step\!\left[\sum_{i=1}^{n} x_i(p)\, w_i(p) - \theta\right]$$

where n is the number of the perceptron inputs, and step is a step activation
function.
Perceptron's training algorithm (continued)
Step 3: Weight training
Update the weights of the perceptron

$$w_i(p+1) = w_i(p) + \Delta w_i(p)$$

where Δwi(p) is the weight correction at iteration p.
The weight correction is computed by the delta rule:

$$\Delta w_i(p) = \alpha \cdot x_i(p) \cdot e(p)$$

Step 4: Iteration
Increase iteration p by one, go back to Step 2 and repeat the process until
convergence.
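For readers who prefer code, here is a minimal Python sketch of Steps 1 to 4.
The fixed threshold θ = 0.2 and learning rate α = 0.1 follow the AND example
on the next slide; the function and variable names are my own choices, not
part of the lecture:

```python
import random

def step(x):
    # Step activation: 1 if the net input is >= 0, otherwise 0.
    return 1 if x >= 0 else 0

def train_perceptron(examples, alpha=0.1, theta=0.2, max_epochs=50):
    """Train a perceptron with the delta rule.

    examples: list of ((x1, x2, ...), desired_output) pairs.
    """
    # Step 1: Initialisation - random weights in [-0.5, 0.5].
    n = len(examples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    for _ in range(max_epochs):
        converged = True
        for x, yd in examples:
            # Step 2: Activation - compute the actual output.
            y = step(sum(xi * wi for xi, wi in zip(x, w)) - theta)
            # Step 3: Weight training - apply the delta rule.
            e = yd - y
            if e != 0:
                converged = False
            w = [wi + alpha * xi * e for wi, xi in zip(w, x)]
        # Step 4: Iteration - stop once a whole epoch produces no errors.
        if converged:
            break
    return w

# Logical AND: the output is 1 only when both inputs are 1.
and_set = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(and_set))
```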
Example of perceptron learning: the logical operation AND
Threshold: θ = 0.2; learning rate: α = 0.1

Epoch   Inputs    Desired   Initial      Actual   Error   Final
        x1   x2   output    weights      output           weights
                  Yd        w1     w2    Y        e       w1     w2
  1     0    0    0         0.3   −0.1   0         0      0.3   −0.1
        0    1    0         0.3   −0.1   0         0      0.3   −0.1
        1    0    0         0.3   −0.1   1        −1      0.2   −0.1
        1    1    1         0.2   −0.1   0         1      0.3    0.0
  2     0    0    0         0.3    0.0   0         0      0.3    0.0
        0    1    0         0.3    0.0   0         0      0.3    0.0
        1    0    0         0.3    0.0   1        −1      0.2    0.0
        1    1    1         0.2    0.0   1         0      0.2    0.0
  3     0    0    0         0.2    0.0   0         0      0.2    0.0
        0    1    0         0.2    0.0   0         0      0.2    0.0
        1    0    0         0.2    0.0   1        −1      0.1    0.0
        1    1    1         0.1    0.0   0         1      0.2    0.1
  4     0    0    0         0.2    0.1   0         0      0.2    0.1
        0    1    0         0.2    0.1   0         0      0.2    0.1
        1    0    0         0.2    0.1   1        −1      0.1    0.1
        1    1    1         0.1    0.1   1         0      0.1    0.1
  5     0    0    0         0.1    0.1   0         0      0.1    0.1
        0    1    0         0.1    0.1   0         0      0.1    0.1
        1    0    0         0.1    0.1   0         0      0.1    0.1
        1    1    1         0.1    0.1   1         0      0.1    0.1
Two-dimensional plots of basic logical operations
[Figure: input-space plots of (a) AND (x1 ∩ x2), (b) OR (x1 ∪ x2) and
(c) Exclusive-OR (x1 ⊕ x2).]
A perceptron can learn the operations AND and OR, but not Exclusive-OR.
Multilayer neural networks
• A multilayer perceptron is a feedforward neural network with one or more
  hidden layers.
• The network consists of an input layer of source neurons, at least one
  middle or hidden layer of computational neurons, and an output layer of
  computational neurons.
• The input signals are propagated in a forward direction on a layer-by-layer
  basis.
Multilayer perceptron with two hidden layers
[Figure: input signals flow from the input layer through a first and a second
hidden layer to the output layer.]
What does the middle layer hide?
• A hidden layer "hides" its desired output. Neurons in the hidden layer
  cannot be observed through the input/output behaviour of the network. There
  is no obvious way to know what the desired output of the hidden layer should
  be.
• Commercial ANNs incorporate three and sometimes four layers, including one
  or two hidden layers. Each layer can contain from 10 to 1000 neurons.
  Experimental neural networks may have five or even six layers, including
  three or four hidden layers, and utilise millions of neurons.
Back-propagation neural network
• Learning in a multilayer network proceeds the same way as for a perceptron.
• A training set of input patterns is presented to the network.
• The network computes its output pattern, and if there is an error − or in
  other words a difference between actual and desired output patterns − the
  weights are adjusted to reduce this error.
• In a back-propagation neural network, the learning algorithm has two phases.
• First, a training input pattern is presented to the network input layer. The
  network propagates the input pattern from layer to layer until the output
  pattern is generated by the output layer.
• If this pattern is different from the desired output, an error is calculated
  and then propagated backwards through the network from the output layer to
  the input layer. The weights are modified as the error is propagated.
Three-layer back-propagation neural network
[Figure: input signals x1, x2, . . ., xn enter the input layer, pass through a
hidden layer (weights wij) to the output layer (weights wjk), producing
outputs y1, y2, . . ., yl; error signals propagate in the opposite direction.]
The back-propagation training algorithm
Step 1: Initialisation
Set all the weights and threshold levels of the network to random numbers
uniformly distributed inside a small range:

$$\left( -\frac{2.4}{F_i},\; +\frac{2.4}{F_i} \right)$$

where Fi is the total number of inputs of neuron i in the network. The weight
initialisation is done on a neuron-by-neuron basis.
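A small Python sketch of this initialisation rule (the helper name and the
numpy usage are my own, not part of the lecture):

```python
import numpy as np

def init_layer(n_inputs, n_neurons, rng=None):
    """Initialise the weights and thresholds of one layer of n_neurons units.

    Each neuron receives n_inputs connections, so its weights (and its
    threshold) are drawn uniformly from (-2.4 / Fi, +2.4 / Fi), with
    Fi = n_inputs, as the algorithm prescribes.
    """
    rng = np.random.default_rng() if rng is None else rng
    limit = 2.4 / n_inputs
    weights = rng.uniform(-limit, limit, size=(n_inputs, n_neurons))
    thresholds = rng.uniform(-limit, limit, size=n_neurons)
    return weights, thresholds

w_hidden, theta_hidden = init_layer(n_inputs=2, n_neurons=2)
print(w_hidden, theta_hidden)
```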
Step 2: Activation
Activate the back-propagation neural network by applying inputs x1(p),
x2(p), . . ., xn(p) and desired outputs yd,1(p), yd,2(p), . . ., yd,n(p).
(a) Calculate the actual outputs of the neurons in the hidden layer:

$$y_j(p) = sigmoid\!\left[\sum_{i=1}^{n} x_i(p) \cdot w_{ij}(p) - \theta_j\right]$$

where n is the number of inputs of neuron j in the hidden layer, and sigmoid
is the sigmoid activation function.
Step 2: Activation (continued)
(b) Calculate the actual outputs of the neurons in the output layer:

$$y_k(p) = sigmoid\!\left[\sum_{j=1}^{m} x_{jk}(p) \cdot w_{jk}(p) - \theta_k\right]$$

where m is the number of inputs of neuron k in the output layer.
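A minimal Python sketch of Step 2 for one hidden and one output layer. The
array layout and names are my own assumptions; the numbers in the usage
example are the initial weights of the Exclusive-OR network used later in this
lecture:

```python
import numpy as np

def sigmoid(x):
    # Sigmoid activation: 1 / (1 + e^(-x)).
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, theta_hidden, w_output, theta_output):
    """Step 2: propagate an input pattern x through hidden and output layers."""
    y_hidden = sigmoid(x @ w_hidden - theta_hidden)        # step 2(a)
    y_output = sigmoid(y_hidden @ w_output - theta_output)  # step 2(b)
    return y_hidden, y_output

x = np.array([1.0, 1.0])
w_h = np.array([[0.5, 0.9],    # w13, w14
                [0.4, 1.0]])   # w23, w24
th_h = np.array([0.8, -0.1])   # theta3, theta4
w_o = np.array([[-1.2],        # w35
                [1.1]])        # w45
th_o = np.array([0.3])         # theta5
print(forward(x, w_h, th_h, w_o, th_o))   # hidden ~ (0.5250, 0.8808), output ~ 0.5097
```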
Step 3: Weight training
Update the weights in the back-propagation network propagating backward the
errors associated with output neurons.
(a) Calculate the error gradient for the neurons in the output layer:

$$\delta_k(p) = y_k(p) \cdot \left[1 - y_k(p)\right] \cdot e_k(p)
\qquad \text{where} \qquad e_k(p) = y_{d,k}(p) - y_k(p)$$

Calculate the weight corrections:

$$\Delta w_{jk}(p) = \alpha \cdot y_j(p) \cdot \delta_k(p)$$

Update the weights at the output neurons:

$$w_{jk}(p+1) = w_{jk}(p) + \Delta w_{jk}(p)$$
Step 3: Weight training (continued)
(b) Calculate the error gradient for the neurons in the hidden layer:

$$\delta_j(p) = y_j(p) \cdot \left[1 - y_j(p)\right] \cdot \sum_{k=1}^{l} \delta_k(p)\, w_{jk}(p)$$

Calculate the weight corrections:

$$\Delta w_{ij}(p) = \alpha \cdot x_i(p) \cdot \delta_j(p)$$

Update the weights at the hidden neurons:

$$w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)$$
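A matching Python sketch of Step 3, with my own naming and array shapes;
thresholds are treated as weights attached to a fixed input of −1, as the
lecture does in the worked example that follows:

```python
import numpy as np

def backprop_step3(x, y_hidden, y_output, y_desired,
                   w_hidden, theta_hidden, w_output, theta_output, alpha=0.1):
    """Step 3: error gradients and weight/threshold updates for one pattern."""
    # (a) Output-layer gradient: delta_k = y_k (1 - y_k) e_k.
    e = y_desired - y_output
    delta_out = y_output * (1.0 - y_output) * e
    # (b) Hidden-layer gradient, using the current output-layer weights.
    delta_hid = y_hidden * (1.0 - y_hidden) * (w_output @ delta_out)
    # Weight and threshold corrections (delta rule), then the updates.
    w_output = w_output + alpha * np.outer(y_hidden, delta_out)
    theta_output = theta_output + alpha * (-1.0) * delta_out
    w_hidden = w_hidden + alpha * np.outer(x, delta_hid)
    theta_hidden = theta_hidden + alpha * (-1.0) * delta_hid
    return w_hidden, theta_hidden, w_output, theta_output
```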
Step 4: Iteration
Increase iteration p by one, go back to Step 2 and repeat the process until
the selected error criterion is satisfied.
As an example, we may consider the three-layer back-propagation network.
Suppose that the network is required to perform logical operation
Exclusive-OR. Recall that a single-layer perceptron could not do this
operation. Now we will apply the three-layer net.
Three-layer network for solving the Exclusive-OR operation
[Figure: inputs x1 and x2 feed hidden neurons 3 and 4 through weights w13,
w14, w23 and w24; the hidden neurons feed output neuron 5 through weights w35
and w45; the thresholds θ3, θ4 and θ5 are connected to fixed inputs of −1.]
• The effect of the threshold applied to a neuron in the hidden or output
  layer is represented by its weight, θ, connected to a fixed input equal
  to −1.
• The initial weights and threshold levels are set randomly as follows:
  w13 = 0.5, w14 = 0.9, w23 = 0.4, w24 = 1.0, w35 = −1.2, w45 = 1.1,
  θ3 = 0.8, θ4 = −0.1 and θ5 = 0.3.
• We consider a training set where inputs x1 and x2 are equal to 1 and desired
  output yd,5 is 0. The actual outputs of neurons 3 and 4 in the hidden layer
  are calculated as

  $$y_3 = sigmoid(x_1 w_{13} + x_2 w_{23} - \theta_3) = 1/\left[1 + e^{-(1 \cdot 0.5 + 1 \cdot 0.4 - 1 \cdot 0.8)}\right] = 0.5250$$
  $$y_4 = sigmoid(x_1 w_{14} + x_2 w_{24} - \theta_4) = 1/\left[1 + e^{-(1 \cdot 0.9 + 1 \cdot 1.0 + 1 \cdot 0.1)}\right] = 0.8808$$

• Now the actual output of neuron 5 in the output layer is determined as:

  $$y_5 = sigmoid(y_3 w_{35} + y_4 w_{45} - \theta_5) = 1/\left[1 + e^{-(-0.5250 \cdot 1.2 + 0.8808 \cdot 1.1 - 1 \cdot 0.3)}\right] = 0.5097$$

• Thus, the following error is obtained:

  $$e = y_{d,5} - y_5 = 0 - 0.5097 = -0.5097$$
• The next step is weight training. To update the weights and threshold levels
  in our network, we propagate the error, e, from the output layer backward to
  the input layer.
• First, we calculate the error gradient for neuron 5 in the output layer:

  $$\delta_5 = y_5 (1 - y_5)\, e = 0.5097 \cdot (1 - 0.5097) \cdot (-0.5097) = -0.1274$$

• Then we determine the weight corrections assuming that the learning rate
  parameter, α, is equal to 0.1:

  $$\Delta w_{35} = \alpha \cdot y_3 \cdot \delta_5 = 0.1 \cdot 0.5250 \cdot (-0.1274) = -0.0067$$
  $$\Delta w_{45} = \alpha \cdot y_4 \cdot \delta_5 = 0.1 \cdot 0.8808 \cdot (-0.1274) = -0.0112$$
  $$\Delta \theta_5 = \alpha \cdot (-1) \cdot \delta_5 = 0.1 \cdot (-1) \cdot (-0.1274) = 0.0127$$
• Next we calculate the error gradients for neurons 3 and 4 in the hidden
  layer:

  $$\delta_3 = y_3 (1 - y_3) \cdot \delta_5 \cdot w_{35} = 0.5250 \cdot (1 - 0.5250) \cdot (-0.1274) \cdot (-1.2) = 0.0381$$
  $$\delta_4 = y_4 (1 - y_4) \cdot \delta_5 \cdot w_{45} = 0.8808 \cdot (1 - 0.8808) \cdot (-0.1274) \cdot 1.1 = -0.0147$$

• We then determine the weight corrections:

  $$\Delta w_{13} = \alpha \cdot x_1 \cdot \delta_3 = 0.1 \cdot 1 \cdot 0.0381 = 0.0038$$
  $$\Delta w_{23} = \alpha \cdot x_2 \cdot \delta_3 = 0.1 \cdot 1 \cdot 0.0381 = 0.0038$$
  $$\Delta \theta_3 = \alpha \cdot (-1) \cdot \delta_3 = 0.1 \cdot (-1) \cdot 0.0381 = -0.0038$$
  $$\Delta w_{14} = \alpha \cdot x_1 \cdot \delta_4 = 0.1 \cdot 1 \cdot (-0.0147) = -0.0015$$
  $$\Delta w_{24} = \alpha \cdot x_2 \cdot \delta_4 = 0.1 \cdot 1 \cdot (-0.0147) = -0.0015$$
  $$\Delta \theta_4 = \alpha \cdot (-1) \cdot \delta_4 = 0.1 \cdot (-1) \cdot (-0.0147) = 0.0015$$
• At last, we update all weights and threshold:

  $$w_{13} = w_{13} + \Delta w_{13} = 0.5 + 0.0038 = 0.5038$$
  $$w_{14} = w_{14} + \Delta w_{14} = 0.9 - 0.0015 = 0.8985$$
  $$w_{23} = w_{23} + \Delta w_{23} = 0.4 + 0.0038 = 0.4038$$
  $$w_{24} = w_{24} + \Delta w_{24} = 1.0 - 0.0015 = 0.9985$$
  $$w_{35} = w_{35} + \Delta w_{35} = -1.2 - 0.0067 = -1.2067$$
  $$w_{45} = w_{45} + \Delta w_{45} = 1.1 - 0.0112 = 1.0888$$
  $$\theta_3 = \theta_3 + \Delta \theta_3 = 0.8 - 0.0038 = 0.7962$$
  $$\theta_4 = \theta_4 + \Delta \theta_4 = -0.1 + 0.0015 = -0.0985$$
  $$\theta_5 = \theta_5 + \Delta \theta_5 = 0.3 + 0.0127 = 0.3127$$

• The training process is repeated until the sum of squared errors is less
  than 0.001.
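As a sanity check, the following short Python script (my own, not part of the
lecture) reproduces this single training iteration from the stated initial
weights; the rounded values it prints match the ones above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

alpha = 0.1
# Initial weights and thresholds from the lecture's XOR example.
w13, w14, w23, w24 = 0.5, 0.9, 0.4, 1.0
w35, w45 = -1.2, 1.1
th3, th4, th5 = 0.8, -0.1, 0.3
x1, x2, yd5 = 1.0, 1.0, 0.0

# Forward pass (Step 2).
y3 = sigmoid(x1 * w13 + x2 * w23 - th3)   # ~0.5250
y4 = sigmoid(x1 * w14 + x2 * w24 - th4)   # ~0.8808
y5 = sigmoid(y3 * w35 + y4 * w45 - th5)   # ~0.5097
e = yd5 - y5                              # ~-0.5097

# Backward pass (Step 3): gradients, then corrections.
d5 = y5 * (1 - y5) * e                    # ~-0.1274
d3 = y3 * (1 - y3) * d5 * w35             # ~ 0.0381
d4 = y4 * (1 - y4) * d5 * w45             # ~-0.0147

w35 += alpha * y3 * d5; w45 += alpha * y4 * d5; th5 += alpha * (-1) * d5
w13 += alpha * x1 * d3; w23 += alpha * x2 * d3; th3 += alpha * (-1) * d3
w14 += alpha * x1 * d4; w24 += alpha * x2 * d4; th4 += alpha * (-1) * d4

print(round(w13, 4), round(w14, 4), round(w23, 4), round(w24, 4))
print(round(w35, 4), round(w45, 4), round(th3, 4), round(th4, 4), round(th5, 4))
```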
Learning curve for operation Exclusive-OR
[Figure: the sum-squared network error, plotted on a logarithmic scale,
falls below the error criterion after 224 training epochs.]
Final results of three-layer network learning

Inputs      Desired   Actual    Error      Sum of
x1    x2    output    output    e          squared
            yd        y5                   errors
1     1     0         0.0155    −0.0155    0.0010
0     1     1         0.9849     0.0151
1     0     1         0.9849     0.0151
0     0     0         0.0175    −0.0175
Network represented by McCulloch-Pitts model for solving the Exclusive-OR
operation
[Figure: a two-input network in which hidden neurons 3 and 4 (thresholds +1.5
and +0.5) feed output neuron 5 (threshold +0.5); each threshold is applied
through a fixed input of −1.]
Decision boundaries
[Figure: (a) decision boundary constructed by hidden neuron 3,
x1 + x2 − 1.5 = 0; (b) decision boundary constructed by hidden neuron 4,
x1 + x2 − 0.5 = 0; (c) decision boundaries constructed by the complete
three-layer network.]
Accelerated learning in multilayer neural networks
• A multilayer network learns much faster when the sigmoidal activation
  function is represented by a hyperbolic tangent:

  $$Y^{tanh} = \frac{2a}{1 + e^{-bX}} - a$$

  where a and b are constants. Suitable values for a and b are:
  a = 1.716 and b = 0.667.
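A one-function Python sketch of this activation (the function name is my own):

```python
import math

def tanh_activation(x, a=1.716, b=0.667):
    """Hyperbolic-tangent style activation: 2a / (1 + e^(-b*x)) - a.

    With a = 1.716 and b = 0.667 the output lies in (-a, +a) rather than
    the (0, 1) range of the ordinary sigmoid.
    """
    return 2.0 * a / (1.0 + math.exp(-b * x)) - a

print(tanh_activation(0.0), tanh_activation(2.0), tanh_activation(-2.0))
```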
• We also can accelerate training by including a momentum term in the delta
  rule:

  $$\Delta w_{jk}(p) = \beta \cdot \Delta w_{jk}(p-1) + \alpha \cdot y_j(p) \cdot \delta_k(p)$$

  where β is a positive number (0 ≤ β < 1) called the momentum constant.
  Typically, the momentum constant is set to 0.95.
  This equation is called the generalised delta rule.
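A small Python sketch of the generalised delta rule, assuming the caller keeps
the previous correction between iterations (the names and the example numbers
are my own, not from the lecture):

```python
def momentum_update(w_jk, delta_w_prev, y_j, delta_k, alpha=0.1, beta=0.95):
    """Generalised delta rule: add a fraction of the previous correction.

    Returns the updated weight and the new correction, so the caller can
    carry the correction forward to the next iteration.
    """
    delta_w = beta * delta_w_prev + alpha * y_j * delta_k
    return w_jk + delta_w, delta_w

# Two successive corrections for one weight (arbitrary illustrative values).
w, dw = 0.5, 0.0
w, dw = momentum_update(w, dw, y_j=0.525, delta_k=-0.127)
w, dw = momentum_update(w, dw, y_j=0.530, delta_k=-0.110)
print(w, dw)
```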
Learning with momentum for operation Exclusive-OR
[Figure: the sum-squared error (logarithmic scale) and the learning rate
plotted against the epoch number; training for the Exclusive-OR operation
takes 126 epochs.]
Learning with adaptive learning rate
To accelerate the convergence and yet avoid the danger of instability, we can
apply two heuristics:
Heuristic 1
If the change of the sum of squared errors has the same algebraic sign for
several consecutive epochs, then the learning rate parameter, α, should be
increased.
Heuristic 2
If the algebraic sign of the change of the sum of squared errors alternates
for several consecutive epochs, then the learning rate parameter, α, should be
decreased.
• Adapting the learning rate requires some changes in the back-propagation
  algorithm.
• If the sum of squared errors at the current epoch exceeds the previous value
  by more than a predefined ratio (typically 1.04), the learning rate
  parameter is decreased (typically by multiplying by 0.7) and new weights and
  thresholds are calculated.
• If the error is less than the previous one, the learning rate is increased
  (typically by multiplying by 1.05).
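A minimal Python sketch of this adaptation rule, using the typical constants
quoted above; the function name and return convention are my own:

```python
def adapt_learning_rate(alpha, sse_current, sse_previous,
                        ratio=1.04, decrease=0.7, increase=1.05):
    """Adjust the learning rate after one epoch of back-propagation training.

    Returns the new learning rate and a flag telling the caller whether the
    epoch's weight changes should be discarded and recomputed.
    """
    if sse_current > ratio * sse_previous:
        # Error grew too much: shrink the learning rate and redo the epoch.
        return alpha * decrease, True
    if sse_current < sse_previous:
        # Error fell: cautiously speed up.
        return alpha * increase, False
    return alpha, False

alpha, redo = adapt_learning_rate(0.1, sse_current=0.8, sse_previous=0.5)
print(alpha, redo)   # about 0.07, and the epoch is repeated
```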
Learning with adaptive learning rate
[Figure: the sum-squared error (logarithmic scale) and the learning rate
plotted against the epoch number; training takes 103 epochs.]
Learning with momentum and adaptive learning rate
[Figure: the sum-squared error (logarithmic scale) and the learning rate
plotted against the epoch number; training takes 85 epochs.]
The Hopfield Network
• Neural networks were designed on analogy with the brain. The brain's memory,
  however, works by association. For example, we can recognise a familiar face
  even in an unfamiliar environment within 100-200 ms. We can also recall a
  complete sensory experience, including sounds and scenes, when we hear only
  a few bars of music. The brain routinely associates one thing with another.
• Multilayer neural networks trained with the back-propagation algorithm are
  used for pattern recognition problems. However, to emulate the human
  memory's associative characteristics we need a different type of network: a
  recurrent neural network.
• A recurrent neural network has feedback loops from its outputs to its
  inputs. The presence of such loops has a profound impact on the learning
  capability of the network.
• The stability of recurrent networks intrigued several researchers in the
  1960s and 1970s. However, none was able to predict which network would be
  stable, and some researchers were pessimistic about finding a solution at
  all. The problem was solved only in 1982, when John Hopfield formulated the
  physical principle of storing information in a dynamically stable network.
Single-layer n-neuron Hopfield network
[Figure: input signals x1, x2, . . ., xn enter neurons 1 to n, which produce
output signals y1, y2, . . ., yn; the outputs are fed back as inputs.]
• The Hopfield network uses McCulloch and Pitts neurons with the sign
  activation function as its computing element:

  $$Y^{sign} = \begin{cases} +1 & \text{if } X > 0 \\ -1 & \text{if } X < 0 \\ Y & \text{if } X = 0 \end{cases}$$
• The current state of the Hopfield network is determined by the current
  outputs of all neurons, y1, y2, . . ., yn.
  Thus, for a single-layer n-neuron network, the state can be defined by the
  state vector as:

  $$\mathbf{Y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}$$
• In the Hopfield network, synaptic weights between neurons are usually
  represented in matrix form as follows:

  $$\mathbf{W} = \sum_{m=1}^{M} \mathbf{Y}_m \mathbf{Y}_m^T - M\,\mathbf{I}$$

  where M is the number of states to be memorised by the network, Ym is the
  n-dimensional binary vector, I is the n × n identity matrix, and superscript
  T denotes a matrix transposition.
Possible states for the three-neuron Hopfield network
[Figure: the eight possible states (±1, ±1, ±1) shown as the vertices of a
cube in (y1, y2, y3) space.]
• The stable state-vertex is determined by the weight matrix W, the current
  input vector X, and the threshold matrix θ. If the input vector is partially
  incorrect or incomplete, the initial state will converge into the stable
  state-vertex after a few iterations.
• Suppose, for instance, that our network is required to memorise two opposite
  states, (1, 1, 1) and (−1, −1, −1). Thus,

  $$\mathbf{Y}_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \qquad
  \mathbf{Y}_2 = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}
  \qquad \text{or} \qquad
  \mathbf{Y}_1^T = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} \qquad
  \mathbf{Y}_2^T = \begin{bmatrix} -1 & -1 & -1 \end{bmatrix}$$

  where Y1 and Y2 are the three-dimensional vectors.
• The 3 × 3 identity matrix I is

  $$\mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

• Thus, we can now determine the weight matrix as follows:

  $$\mathbf{W} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
  \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} +
  \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}
  \begin{bmatrix} -1 & -1 & -1 \end{bmatrix} -
  2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} =
  \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix}$$

• Next, the network is tested by the sequence of input vectors, X1 and X2,
  which are equal to the output (or target) vectors Y1 and Y2, respectively.
ss First, we activate the Hopfield network by applyingFirst, we activate the Hopfield network by applying
the input vectorthe input vector XX. Then, we calculate the actual. Then, we calculate the actual
output vectoroutput vector YY, and finally, we compare the result, and finally, we compare the result
with the initial input vectorwith the initial input vector XX..










=




















−




















=
1
1
1
0
0
0
1
1
1
022
202
220
1 signY










−
−
−
=




















−










−
−
−










=
1
1
1
0
0
0
1
1
1
022
202
220
2 signY
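A compact numpy sketch (my own, not part of the lecture) of this construction
and recall test:

```python
import numpy as np

def hopfield_weights(patterns):
    """Build the Hopfield weight matrix W = sum(Ym Ym^T) - M*I."""
    n = len(patterns[0])
    return sum(np.outer(p, p) for p in patterns) - len(patterns) * np.eye(n)

def recall(W, x, theta=None):
    """One synchronous update Y = sign(W x - theta); zero net input keeps the old value."""
    theta = np.zeros(len(x)) if theta is None else theta
    net = W @ x - theta
    return np.where(net > 0, 1, np.where(net < 0, -1, x))

Y1 = np.array([1, 1, 1])
Y2 = np.array([-1, -1, -1])
W = hopfield_weights([Y1, Y2])
print(W)                                  # zeros on the diagonal, 2s elsewhere
print(recall(W, Y1), recall(W, Y2))       # the fundamental memories are stable
print(recall(W, np.array([-1, 1, 1])))    # a single-error state converges to (1, 1, 1)
```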
• The remaining six states are all unstable. However, stable states (also
  called fundamental memories) are capable of attracting states that are close
  to them.
• The fundamental memory (1, 1, 1) attracts unstable states (−1, 1, 1),
  (1, −1, 1) and (1, 1, −1). Each of these unstable states represents a single
  error, compared to the fundamental memory (1, 1, 1).
• The fundamental memory (−1, −1, −1) attracts unstable states (−1, −1, 1),
  (−1, 1, −1) and (1, −1, −1).
• Thus, the Hopfield network can act as an error correction network.
Storage capacity of the Hopfield network
• Storage capacity is the largest number of fundamental memories that can be
  stored and retrieved correctly.
• The maximum number of fundamental memories Mmax that can be stored in the
  n-neuron recurrent network is limited by

  $$M_{max} = 0.15\, n$$
Bidirectional associative memory (BAM)
• The Hopfield network represents an autoassociative type of memory − it can
  retrieve a corrupted or incomplete memory but cannot associate this memory
  with another different memory.
• Human memory is essentially associative. One thing may remind us of another,
  and that of another, and so on. We use a chain of mental associations to
  recover a lost memory. If we forget where we left an umbrella, we try to
  recall where we last had it, what we were doing, and who we were talking to.
  We attempt to establish a chain of associations, and thereby to restore a
  lost memory.
• To associate one memory with another, we need a recurrent neural network
  capable of accepting an input pattern on one set of neurons and producing a
  related, but different, output pattern on another set of neurons.
• Bidirectional associative memory (BAM), first proposed by Bart Kosko, is a
  heteroassociative network. It associates patterns from one set, set A, to
  patterns from another set, set B, and vice versa. Like a Hopfield network,
  the BAM can generalise and also produce correct outputs despite corrupted or
  incomplete inputs.
BAM operation
[Figure: (a) forward direction − an input pattern x1(p), . . ., xn(p) on the
input layer produces an output pattern y1(p), . . ., ym(p) on the output
layer; (b) backward direction − the pattern y1(p), . . ., ym(p) is fed back to
produce x1(p+1), . . ., xn(p+1) on the input layer.]
The basic idea behind the BAM is to store pattern pairs so that when
n-dimensional vector X from set A is presented as input, the BAM recalls
m-dimensional vector Y from set B, but when Y is presented as input, the BAM
recalls X.
• To develop the BAM, we need to create a correlation matrix for each pattern
  pair we want to store. The correlation matrix is the matrix product of the
  input vector X, and the transpose of the output vector Y^T. The BAM weight
  matrix is the sum of all correlation matrices, that is,

  $$\mathbf{W} = \sum_{m=1}^{M} \mathbf{X}_m \mathbf{Y}_m^T$$

  where M is the number of pattern pairs to be stored in the BAM.
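A short numpy sketch (my own) of building a BAM weight matrix and recalling in
both directions; the stored pattern pair is arbitrary and the sign rule here
assumes the net inputs are non-zero:

```python
import numpy as np

def bam_weights(pairs):
    """BAM weight matrix: the sum of the correlation matrices X_m Y_m^T."""
    return sum(np.outer(x, y) for x, y in pairs)

# One stored association between a 4-dimensional and a 3-dimensional pattern.
X1 = np.array([1, -1, 1, -1])
Y1 = np.array([1, 1, -1])
W = bam_weights([(X1, Y1)])

y = np.sign(W.T @ X1)   # forward recall: present X, retrieve Y
x = np.sign(W @ Y1)     # backward recall: present Y, retrieve X
print(y, x)             # recovers Y1 and X1
```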
Stability and storage capacity of the BAM
• The BAM is unconditionally stable. This means that any set of associations
  can be learned without risk of instability.
• The maximum number of associations to be stored in the BAM should not exceed
  the number of neurons in the smaller layer.
• The more serious problem with the BAM is incorrect convergence. The BAM may
  not always produce the closest association. In fact, a stable association
  may be only slightly related to the initial input vector.

A04401001013A04401001013
A04401001013
 
ANN.ppt
ANN.pptANN.ppt
ANN.ppt
 
ANN.pptx
ANN.pptxANN.pptx
ANN.pptx
 
10-Perceptron.pdf
10-Perceptron.pdf10-Perceptron.pdf
10-Perceptron.pdf
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
Artificial Neural Network Paper Presentation
Artificial Neural Network Paper PresentationArtificial Neural Network Paper Presentation
Artificial Neural Network Paper Presentation
 
Artificial neural-network-paper-presentation-100115092527-phpapp02
Artificial neural-network-paper-presentation-100115092527-phpapp02Artificial neural-network-paper-presentation-100115092527-phpapp02
Artificial neural-network-paper-presentation-100115092527-phpapp02
 
Introduction to Artificial Neural Network
Introduction to Artificial Neural NetworkIntroduction to Artificial Neural Network
Introduction to Artificial Neural Network
 
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
 
Artificial Neural Network seminar presentation using ppt.
Artificial Neural Network seminar presentation using ppt.Artificial Neural Network seminar presentation using ppt.
Artificial Neural Network seminar presentation using ppt.
 
Unit+i
Unit+iUnit+i
Unit+i
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Neural networks are parallel computing devices.docx.pdf
Neural networks are parallel computing devices.docx.pdfNeural networks are parallel computing devices.docx.pdf
Neural networks are parallel computing devices.docx.pdf
 
Introduction to Neural networks (under graduate course) Lecture 1 of 9
Introduction to Neural networks (under graduate course) Lecture 1 of 9Introduction to Neural networks (under graduate course) Lecture 1 of 9
Introduction to Neural networks (under graduate course) Lecture 1 of 9
 
Fundamentals of Neural Network (Soft Computing)
Fundamentals of Neural Network (Soft Computing)Fundamentals of Neural Network (Soft Computing)
Fundamentals of Neural Network (Soft Computing)
 
Lect1_Threshold_Logic_Unit lecture 1 - ANN
Lect1_Threshold_Logic_Unit  lecture 1 - ANNLect1_Threshold_Logic_Unit  lecture 1 - ANN
Lect1_Threshold_Logic_Unit lecture 1 - ANN
 
Artifical neural networks
Artifical neural networksArtifical neural networks
Artifical neural networks
 

Último

AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024The Digital Insurer
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherRemote DBA Services
 
A Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source MilvusA Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source MilvusZilliz
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...apidays
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native ApplicationsWSO2
 
Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024The Digital Insurer
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...DianaGray10
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWERMadyBayot
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxRustici Software
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...apidays
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Jeffrey Haguewood
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...apidays
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...apidays
 

Último (20)

AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
A Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source MilvusA Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source Milvus
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
 
Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptx
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 

Lec 5

  • 7. An artificial neural network consists of a number of very simple processors, also called neurons, which are analogous to the biological neurons in the brain. The neurons are connected by weighted links passing signals from one neuron to another. The output signal is transmitted through the neuron's outgoing connection. The outgoing connection splits into a number of branches that transmit the same signal. The outgoing branches terminate at the incoming connections of other neurons in the network.
  • 8. Architecture of a typical artificial neural network. [Figure: input signals enter an input layer, pass through a middle layer and leave an output layer as output signals.]
  • 9. Analogy between biological and artificial neural networks: soma ↔ neuron, dendrite ↔ input, axon ↔ output, synapse ↔ weight.
  • 10. The neuron as a simple computing element. [Figure: diagram of a neuron; input signals $x_1, \ldots, x_n$ arrive through weights $w_1, \ldots, w_n$ and produce the output signal $Y$.]
  • 11. The neuron computes the weighted sum of the input signals and compares the result with a threshold value, $\theta$. If the net input is less than the threshold, the neuron output is −1. But if the net input is greater than or equal to the threshold, the neuron becomes activated and its output attains a value +1. The neuron uses the following transfer or activation function: $X = \sum_{i=1}^{n} x_i w_i$, with $Y = +1$ if $X \ge \theta$ and $Y = -1$ if $X < \theta$. This type of activation function is called a sign function.
  • 12. Activation functions of a neuron: step function, $Y = 1$ if $X \ge 0$ and $Y = 0$ if $X < 0$; sign function, $Y = +1$ if $X \ge 0$ and $Y = -1$ if $X < 0$; sigmoid function, $Y = 1/(1 + e^{-X})$; linear function, $Y = X$.
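As a concrete illustration of the activation functions above, the following Python sketch (added here for clarity; the function names and the two-input example are illustrative, not part of the original slides) evaluates a single neuron with a chosen activation:

```python
import math

def step(x):
    # Step function: 1 if x >= 0, otherwise 0
    return 1 if x >= 0 else 0

def sign_fn(x):
    # Sign function: +1 if x >= 0, otherwise -1
    return 1 if x >= 0 else -1

def sigmoid(x):
    # Sigmoid function: squashes the net input into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def linear(x):
    # Linear function: output equals the net input
    return x

def neuron_output(inputs, weights, theta, activation=sign_fn):
    # Weighted sum of the inputs compared against the threshold theta
    net = sum(x * w for x, w in zip(inputs, weights)) - theta
    return activation(net)

# Example: a two-input neuron with the sign activation function
print(neuron_output([1, 1], [0.5, 0.5], theta=0.8))   # net = 0.2 >= 0, so the output is +1
```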
  • 13. Can a single neuron learn a task? In 1958, Frank Rosenblatt introduced a training algorithm that provided the first procedure for training a simple ANN: a perceptron. The perceptron is the simplest form of a neural network. It consists of a single neuron with adjustable synaptic weights and a hard limiter.
  • 14. Single-layer two-input perceptron. [Figure: inputs $x_1$ and $x_2$ with weights $w_1$ and $w_2$ feed a linear combiner; its output, compared with the threshold $\theta$, passes through a hard limiter to produce the output $Y$.]
  • 15. The Perceptron. The operation of Rosenblatt's perceptron is based on the McCulloch and Pitts neuron model. The model consists of a linear combiner followed by a hard limiter. The weighted sum of the inputs is applied to the hard limiter, which produces an output equal to +1 if its input is positive and −1 if it is negative.
  • 16. The aim of the perceptron is to classify inputs, $x_1, x_2, \ldots, x_n$, into one of two classes, say $A_1$ and $A_2$. In the case of an elementary perceptron, the n-dimensional space is divided by a hyperplane into two decision regions. The hyperplane is defined by the linearly separable function: $\sum_{i=1}^{n} x_i w_i - \theta = 0$.
  • 17. Linear separability in the perceptrons. [Figure: (a) two-input perceptron with decision boundary $x_1 w_1 + x_2 w_2 - \theta = 0$ separating class $A_1$ from class $A_2$; (b) three-input perceptron with decision boundary $x_1 w_1 + x_2 w_2 + x_3 w_3 - \theta = 0$.]
  • 18. How does the perceptron learn its classification tasks? This is done by making small adjustments in the weights to reduce the difference between the actual and desired outputs of the perceptron. The initial weights are randomly assigned, usually in the range [−0.5, 0.5], and then updated to obtain the output consistent with the training examples.
  • 19. If at iteration $p$, the actual output is $Y(p)$ and the desired output is $Y_d(p)$, then the error is given by $e(p) = Y_d(p) - Y(p)$, where $p$ = 1, 2, 3, ... Iteration $p$ here refers to the $p$th training example presented to the perceptron. If the error, $e(p)$, is positive, we need to increase perceptron output $Y(p)$, but if it is negative, we need to decrease $Y(p)$.
  • 20. The perceptron learning rule: $w_i(p+1) = w_i(p) + \alpha \cdot x_i(p) \cdot e(p)$, where $p$ = 1, 2, 3, ... and $\alpha$ is the learning rate, a positive constant less than unity. The perceptron learning rule was first proposed by Rosenblatt in 1960. Using this rule we can derive the perceptron training algorithm for classification tasks.
  • 21. Perceptron's training algorithm. Step 1: Initialisation. Set initial weights $w_1, w_2, \ldots, w_n$ and threshold $\theta$ to random numbers in the range [−0.5, 0.5]. If the error, $e(p)$, is positive, we need to increase perceptron output $Y(p)$, but if it is negative, we need to decrease $Y(p)$.
  • 22. Perceptron's training algorithm (continued). Step 2: Activation. Activate the perceptron by applying inputs $x_1(p), x_2(p), \ldots, x_n(p)$ and desired output $Y_d(p)$. Calculate the actual output at iteration $p$ = 1: $Y(p) = \mathrm{step}\!\left[\sum_{i=1}^{n} x_i(p)\, w_i(p) - \theta\right]$, where $n$ is the number of the perceptron inputs, and step is a step activation function.
  • 23. Perceptron's training algorithm (continued). Step 3: Weight training. Update the weights of the perceptron: $w_i(p+1) = w_i(p) + \Delta w_i(p)$, where $\Delta w_i(p)$ is the weight correction at iteration $p$. The weight correction is computed by the delta rule: $\Delta w_i(p) = \alpha \cdot x_i(p) \cdot e(p)$. Step 4: Iteration. Increase iteration $p$ by one, go back to Step 2 and repeat the process until convergence.
  • 24. Example of perceptron learning: the logical operation AND (threshold $\theta$ = 0.2, learning rate $\alpha$ = 0.1). Each epoch presents the four input pairs in turn; the weights converge after five epochs:

    Epoch | x1 x2 | Yd | initial w1, w2 | Y | e  | final w1, w2
      1   | 0  0  | 0  | 0.3, -0.1      | 0 |  0 | 0.3, -0.1
      1   | 0  1  | 0  | 0.3, -0.1      | 0 |  0 | 0.3, -0.1
      1   | 1  0  | 0  | 0.3, -0.1      | 1 | -1 | 0.2, -0.1
      1   | 1  1  | 1  | 0.2, -0.1      | 0 |  1 | 0.3,  0.0
      2   | 0  0  | 0  | 0.3,  0.0      | 0 |  0 | 0.3,  0.0
      2   | 0  1  | 0  | 0.3,  0.0      | 0 |  0 | 0.3,  0.0
      2   | 1  0  | 0  | 0.3,  0.0      | 1 | -1 | 0.2,  0.0
      2   | 1  1  | 1  | 0.2,  0.0      | 1 |  0 | 0.2,  0.0
      3   | 0  0  | 0  | 0.2,  0.0      | 0 |  0 | 0.2,  0.0
      3   | 0  1  | 0  | 0.2,  0.0      | 0 |  0 | 0.2,  0.0
      3   | 1  0  | 0  | 0.2,  0.0      | 1 | -1 | 0.1,  0.0
      3   | 1  1  | 1  | 0.1,  0.0      | 0 |  1 | 0.2,  0.1
      4   | 0  0  | 0  | 0.2,  0.1      | 0 |  0 | 0.2,  0.1
      4   | 0  1  | 0  | 0.2,  0.1      | 0 |  0 | 0.2,  0.1
      4   | 1  0  | 0  | 0.2,  0.1      | 1 | -1 | 0.1,  0.1
      4   | 1  1  | 1  | 0.1,  0.1      | 1 |  0 | 0.1,  0.1
      5   | 0  0  | 0  | 0.1,  0.1      | 0 |  0 | 0.1,  0.1
      5   | 0  1  | 0  | 0.1,  0.1      | 0 |  0 | 0.1,  0.1
      5   | 1  0  | 0  | 0.1,  0.1      | 0 |  0 | 0.1,  0.1
      5   | 1  1  | 1  | 0.1,  0.1      | 1 |  0 | 0.1,  0.1
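The training run in the table can be reproduced with a short script. This is a minimal sketch, assuming the step activation and the fixed starting weights (0.3, −0.1) used in the example rather than random initialisation; the helper name train_perceptron is illustrative.

```python
def train_perceptron(examples, weights, theta=0.2, alpha=0.1, max_epochs=100):
    """Rosenblatt perceptron training with a step activation function."""
    for epoch in range(1, max_epochs + 1):
        converged = True
        for inputs, desired in examples:
            net = sum(x * w for x, w in zip(inputs, weights)) - theta
            actual = 1 if net >= 0 else 0          # step activation
            error = desired - actual
            if error != 0:
                converged = False
                # Delta rule: w_i(p+1) = w_i(p) + alpha * x_i(p) * e(p)
                weights = [w + alpha * x * error for w, x in zip(weights, inputs)]
        if converged:
            return weights, epoch
    return weights, max_epochs

# Logical AND, starting from the weights used in the worked example
and_examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, epochs = train_perceptron(and_examples, [0.3, -0.1])
print(weights, epochs)   # converges to w1 = w2 = 0.1 after five epochs
```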
  • 25. Two-dimensional plots of basic logical operations. [Figure: (a) AND ($x_1 \cap x_2$); (b) OR ($x_1 \cup x_2$); (c) Exclusive-OR ($x_1 \oplus x_2$).] A perceptron can learn the operations AND and OR, but not Exclusive-OR.
  • 26. Multilayer neural networks. A multilayer perceptron is a feedforward neural network with one or more hidden layers. The network consists of an input layer of source neurons, at least one middle or hidden layer of computational neurons, and an output layer of computational neurons. The input signals are propagated in a forward direction on a layer-by-layer basis.
  • 27. Multilayer perceptron with two hidden layers. [Figure: input signals pass through the input layer, a first and a second hidden layer, and the output layer to become output signals.]
  • 28. What does the middle layer hide? A hidden layer "hides" its desired output. Neurons in the hidden layer cannot be observed through the input/output behaviour of the network. There is no obvious way to know what the desired output of the hidden layer should be. Commercial ANNs incorporate three and sometimes four layers, including one or two hidden layers. Each layer can contain from 10 to 1000 neurons. Experimental neural networks may have five or even six layers, including three or four hidden layers, and utilise millions of neurons.
  • 29. Back-propagation neural network. Learning in a multilayer network proceeds the same way as for a perceptron. A training set of input patterns is presented to the network. The network computes its output pattern, and if there is an error − or in other words a difference between actual and desired output patterns − the weights are adjusted to reduce this error.
  • 30. In a back-propagation neural network, the learning algorithm has two phases. First, a training input pattern is presented to the network input layer. The network propagates the input pattern from layer to layer until the output pattern is generated by the output layer. If this pattern is different from the desired output, an error is calculated and then propagated backwards through the network from the output layer to the input layer. The weights are modified as the error is propagated.
  • 31. Three-layer back-propagation neural network. [Figure: input signals flow from input neurons $1 \ldots n$ through hidden neurons $1 \ldots m$ (weights $w_{ij}$) to output neurons $1 \ldots l$ (weights $w_{jk}$); error signals flow in the opposite direction.]
  • 32. The back-propagation training algorithm. Step 1: Initialisation. Set all the weights and threshold levels of the network to random numbers uniformly distributed inside a small range: $\left(-2.4/F_i,\; +2.4/F_i\right)$, where $F_i$ is the total number of inputs of neuron $i$ in the network. The weight initialisation is done on a neuron-by-neuron basis.
  • 33. Step 2: Activation. Activate the back-propagation neural network by applying inputs $x_1(p), x_2(p), \ldots, x_n(p)$ and desired outputs $y_{d,1}(p), y_{d,2}(p), \ldots, y_{d,n}(p)$. (a) Calculate the actual outputs of the neurons in the hidden layer: $y_j(p) = \mathrm{sigmoid}\!\left[\sum_{i=1}^{n} x_i(p) \cdot w_{ij}(p) - \theta_j\right]$, where $n$ is the number of inputs of neuron $j$ in the hidden layer, and sigmoid is the sigmoid activation function.
  • 34. Step 2: Activation (continued). (b) Calculate the actual outputs of the neurons in the output layer: $y_k(p) = \mathrm{sigmoid}\!\left[\sum_{j=1}^{m} x_{jk}(p) \cdot w_{jk}(p) - \theta_k\right]$, where $m$ is the number of inputs of neuron $k$ in the output layer.
  • 35. Step 3: Weight training. Update the weights in the back-propagation network propagating backward the errors associated with output neurons. (a) Calculate the error gradient for the neurons in the output layer: $\delta_k(p) = y_k(p) \cdot \left[1 - y_k(p)\right] \cdot e_k(p)$, where $e_k(p) = y_{d,k}(p) - y_k(p)$. Calculate the weight corrections: $\Delta w_{jk}(p) = \alpha \cdot y_j(p) \cdot \delta_k(p)$. Update the weights at the output neurons: $w_{jk}(p+1) = w_{jk}(p) + \Delta w_{jk}(p)$.
  • 36. Step 3: Weight training (continued). (b) Calculate the error gradient for the neurons in the hidden layer: $\delta_j(p) = y_j(p) \cdot \left[1 - y_j(p)\right] \cdot \sum_{k=1}^{l} \delta_k(p)\, w_{jk}(p)$. Calculate the weight corrections: $\Delta w_{ij}(p) = \alpha \cdot x_i(p) \cdot \delta_j(p)$. Update the weights at the hidden neurons: $w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)$.
  • 37. Step 4: Iteration. Increase iteration $p$ by one, go back to Step 2 and repeat the process until the selected error criterion is satisfied. As an example, we may consider the three-layer back-propagation network. Suppose that the network is required to perform the logical operation Exclusive-OR. Recall that a single-layer perceptron could not do this operation. Now we will apply the three-layer net.
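The four steps above can be put together in a compact sketch for the Exclusive-OR example that follows. This is an illustrative implementation rather than the book's own code: the 2-2-1 topology, the random seed, the learning rate of 0.1 and the stopping criterion of 0.001 are assumptions, and depending on the seed plain gradient descent may need many thousands of epochs or stall in a local minimum.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(alpha=0.1, target_sse=0.001, max_epochs=20000, seed=1):
    random.seed(seed)
    # Step 1: initialise weights and thresholds in a small range (here +/- 2.4/Fi with Fi = 2)
    r = 2.4 / 2
    w_ih = [[random.uniform(-r, r) for _ in range(2)] for _ in range(2)]  # input -> hidden
    w_ho = [random.uniform(-r, r) for _ in range(2)]                      # hidden -> output
    th_h = [random.uniform(-r, r) for _ in range(2)]
    th_o = random.uniform(-r, r)

    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    sse = float("inf")
    for epoch in range(max_epochs):
        sse = 0.0
        for x, yd in data:
            # Step 2: forward pass through the hidden and output layers
            yh = [sigmoid(sum(x[i] * w_ih[i][j] for i in range(2)) - th_h[j]) for j in range(2)]
            yo = sigmoid(sum(yh[j] * w_ho[j] for j in range(2)) - th_o)
            e = yd - yo
            sse += e * e
            # Step 3: error gradients and weight corrections (thresholds act as weights on input -1)
            d_o = yo * (1 - yo) * e
            d_h = [yh[j] * (1 - yh[j]) * d_o * w_ho[j] for j in range(2)]
            for j in range(2):
                w_ho[j] += alpha * yh[j] * d_o
                th_h[j] += alpha * (-1) * d_h[j]
                for i in range(2):
                    w_ih[i][j] += alpha * x[i] * d_h[j]
            th_o += alpha * (-1) * d_o
        # Step 4: iterate until the selected error criterion is satisfied
        if sse < target_sse:
            return epoch + 1, sse
    return max_epochs, sse

print(train_xor())   # (number of epochs used, final sum of squared errors)
```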
  • 38. Three-layer network for solving the Exclusive-OR operation. [Figure: inputs $x_1$ and $x_2$ (neurons 1 and 2) feed hidden neurons 3 and 4 through weights $w_{13}$, $w_{14}$, $w_{23}$, $w_{24}$; hidden neurons 3 and 4 feed output neuron 5 through weights $w_{35}$ and $w_{45}$; thresholds $\theta_3$, $\theta_4$ and $\theta_5$ are connected to a fixed input of −1.]
  • 39. The effect of the threshold applied to a neuron in the hidden or output layer is represented by its weight, $\theta$, connected to a fixed input equal to −1. The initial weights and threshold levels are set randomly as follows: $w_{13}$ = 0.5, $w_{14}$ = 0.9, $w_{23}$ = 0.4, $w_{24}$ = 1.0, $w_{35}$ = −1.2, $w_{45}$ = 1.1, $\theta_3$ = 0.8, $\theta_4$ = −0.1 and $\theta_5$ = 0.3.
  • 40. We consider a training set where inputs $x_1$ and $x_2$ are equal to 1 and desired output $y_{d,5}$ is 0. The actual outputs of neurons 3 and 4 in the hidden layer are calculated as $y_3 = \mathrm{sigmoid}(x_1 w_{13} + x_2 w_{23} - \theta_3) = 1/\!\left[1 + e^{-(1 \cdot 0.5 + 1 \cdot 0.4 - 0.8)}\right] = 0.5250$ and $y_4 = \mathrm{sigmoid}(x_1 w_{14} + x_2 w_{24} - \theta_4) = 1/\!\left[1 + e^{-(1 \cdot 0.9 + 1 \cdot 1.0 + 0.1)}\right] = 0.8808$. Now the actual output of neuron 5 in the output layer is determined as $y_5 = \mathrm{sigmoid}(y_3 w_{35} + y_4 w_{45} - \theta_5) = 1/\!\left[1 + e^{-(-0.5250 \cdot 1.2 + 0.8808 \cdot 1.1 - 0.3)}\right] = 0.5097$. Thus, the following error is obtained: $e = y_{d,5} - y_5 = 0 - 0.5097 = -0.5097$.
  • 41. The next step is weight training. To update the weights and threshold levels in our network, we propagate the error, $e$, from the output layer backward to the input layer. First, we calculate the error gradient for neuron 5 in the output layer: $\delta_5 = y_5 (1 - y_5)\, e = 0.5097 \cdot (1 - 0.5097) \cdot (-0.5097) = -0.1274$. Then we determine the weight corrections assuming that the learning rate parameter, $\alpha$, is equal to 0.1: $\Delta w_{35} = \alpha \cdot y_3 \cdot \delta_5 = 0.1 \cdot 0.5250 \cdot (-0.1274) = -0.0067$; $\Delta w_{45} = \alpha \cdot y_4 \cdot \delta_5 = 0.1 \cdot 0.8808 \cdot (-0.1274) = -0.0112$; $\Delta\theta_5 = \alpha \cdot (-1) \cdot \delta_5 = 0.1 \cdot (-1) \cdot (-0.1274) = 0.0127$.
  • 42. Next we calculate the error gradients for neurons 3 and 4 in the hidden layer: $\delta_3 = y_3 (1 - y_3) \cdot \delta_5 \cdot w_{35} = 0.5250 \cdot (1 - 0.5250) \cdot (-0.1274) \cdot (-1.2) = 0.0381$ and $\delta_4 = y_4 (1 - y_4) \cdot \delta_5 \cdot w_{45} = 0.8808 \cdot (1 - 0.8808) \cdot (-0.1274) \cdot 1.1 = -0.0147$. We then determine the weight corrections: $\Delta w_{13} = \alpha \cdot x_1 \cdot \delta_3 = 0.1 \cdot 1 \cdot 0.0381 = 0.0038$; $\Delta w_{23} = \alpha \cdot x_2 \cdot \delta_3 = 0.1 \cdot 1 \cdot 0.0381 = 0.0038$; $\Delta\theta_3 = \alpha \cdot (-1) \cdot \delta_3 = 0.1 \cdot (-1) \cdot 0.0381 = -0.0038$; $\Delta w_{14} = \alpha \cdot x_1 \cdot \delta_4 = 0.1 \cdot 1 \cdot (-0.0147) = -0.0015$; $\Delta w_{24} = \alpha \cdot x_2 \cdot \delta_4 = 0.1 \cdot 1 \cdot (-0.0147) = -0.0015$; $\Delta\theta_4 = \alpha \cdot (-1) \cdot \delta_4 = 0.1 \cdot (-1) \cdot (-0.0147) = 0.0015$.
  • 43. At last, we update all weights and thresholds: $w_{13} = 0.5 + 0.0038 = 0.5038$; $w_{14} = 0.9 - 0.0015 = 0.8985$; $w_{23} = 0.4 + 0.0038 = 0.4038$; $w_{24} = 1.0 - 0.0015 = 0.9985$; $w_{35} = -1.2 - 0.0067 = -1.2067$; $w_{45} = 1.1 - 0.0112 = 1.0888$; $\theta_3 = 0.8 - 0.0038 = 0.7962$; $\theta_4 = -0.1 + 0.0015 = -0.0985$; $\theta_5 = 0.3 + 0.0127 = 0.3127$. The training process is repeated until the sum of squared errors is less than 0.001.
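The hand calculation above is easy to check numerically. The snippet below is an added verification aid, not part of the original lecture; it recomputes the forward pass and the output-layer gradient, and the printed values should match 0.5250, 0.8808, 0.5097 and −0.1274 up to rounding.

```python
import math

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

# Initial weights and thresholds from the worked example
w13, w14, w23, w24, w35, w45 = 0.5, 0.9, 0.4, 1.0, -1.2, 1.1
th3, th4, th5 = 0.8, -0.1, 0.3
x1, x2, yd5, alpha = 1, 1, 0, 0.1

# Forward pass through the hidden and output layers
y3 = sig(x1 * w13 + x2 * w23 - th3)    # ~0.5250
y4 = sig(x1 * w14 + x2 * w24 - th4)    # ~0.8808
y5 = sig(y3 * w35 + y4 * w45 - th5)    # ~0.5097
e = yd5 - y5                           # ~-0.5097

# Error gradient and corrections for the output neuron
d5 = y5 * (1 - y5) * e                 # ~-0.1274
dw35, dw45, dth5 = alpha * y3 * d5, alpha * y4 * d5, alpha * (-1) * d5

print(round(y3, 4), round(y4, 4), round(y5, 4), round(d5, 4))
```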
  • 44. Learning curve for operation Exclusive-OR. [Figure: sum-squared network error falling below $10^{-3}$ over 224 training epochs, plotted on a logarithmic scale.]
  • 45. Final results of three-layer network learning:

    x1 x2 | desired output yd | actual output y5 | error e
    1  1  | 0                 | 0.0155           | -0.0155
    0  1  | 1                 | 0.9849           |  0.0151
    1  0  | 1                 | 0.9849           |  0.0151
    0  0  | 0                 | 0.0175           | -0.0175
    Sum of squared errors: 0.0010

  • 46. Network represented by the McCulloch-Pitts model for solving the Exclusive-OR operation. [Figure: the same three-layer topology implemented with fixed weights and thresholds.]
  • 47. Decision boundaries. [Figure: (a) decision boundary constructed by hidden neuron 3: $x_1 + x_2 - 1.5 = 0$; (b) decision boundary constructed by hidden neuron 4: $x_1 + x_2 - 0.5 = 0$; (c) decision boundaries constructed by the complete three-layer network.]
  • 48. Accelerated learning in multilayer neural networks. A multilayer network learns much faster when the sigmoidal activation function is represented by a hyperbolic tangent: $Y^{\tanh} = \dfrac{2a}{1 + e^{-bX}} - a$, where $a$ and $b$ are constants. Suitable values for $a$ and $b$ are: $a$ = 1.716 and $b$ = 0.667.
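For reference, a one-function sketch of this activation (the function name and the sample input are illustrative):

```python
import math

def tanh_activation(x, a=1.716, b=0.667):
    # Y = 2a / (1 + exp(-b*x)) - a, a bipolar sigmoid bounded in (-a, +a)
    return 2.0 * a / (1.0 + math.exp(-b * x)) - a

print(tanh_activation(0.0))   # 0.0 at the origin; saturates towards +/- 1.716
```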
  • 49. We also can accelerate training by including a momentum term in the delta rule: $\Delta w_{jk}(p) = \beta \cdot \Delta w_{jk}(p-1) + \alpha \cdot y_j(p) \cdot \delta_k(p)$, where $\beta$ is a positive number ($0 \le \beta < 1$) called the momentum constant. Typically, the momentum constant is set to 0.95. This equation is called the generalised delta rule.
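A minimal sketch of the generalised delta rule, using illustrative values for $y_j$ and $\delta_k$; it shows how the previous correction keeps contributing through the momentum constant $\beta$ = 0.95 while the gradient keeps the same sign:

```python
def momentum_update(prev_delta_w, y_j, delta_k, alpha=0.1, beta=0.95):
    # Generalised delta rule: delta_w(p) = beta * delta_w(p-1) + alpha * y_j(p) * delta_k(p)
    return beta * prev_delta_w + alpha * y_j * delta_k

dw = 0.0
for _ in range(3):
    dw = momentum_update(dw, y_j=0.5, delta_k=-0.1)
    print(round(dw, 4))   # the correction grows in magnitude from step to step
```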
  • 50. Learning with momentum for operation Exclusive-OR. [Figure: sum-squared error and learning rate over 126 training epochs.]
  • 51. Learning with adaptive learning rate. To accelerate the convergence and yet avoid the danger of instability, we can apply two heuristics. Heuristic 1: if the change of the sum of squared errors has the same algebraic sign for several consecutive epochs, then the learning rate parameter, $\alpha$, should be increased. Heuristic 2: if the algebraic sign of the change of the sum of squared errors alternates for several consecutive epochs, then the learning rate parameter, $\alpha$, should be decreased.
  • 52. Adapting the learning rate requires some changes in the back-propagation algorithm. If the sum of squared errors at the current epoch exceeds the previous value by more than a predefined ratio (typically 1.04), the learning rate parameter is decreased (typically by multiplying by 0.7) and new weights and thresholds are calculated. If the error is less than the previous one, the learning rate is increased (typically by multiplying by 1.05).
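These adjustments can be expressed as a small helper applied once per epoch. The function below is an illustrative sketch using the typical constants quoted above (1.04, 0.7, 1.05); recomputing the discarded weight update is left out for brevity.

```python
def adapt_learning_rate(alpha, sse, prev_sse, ratio=1.04, down=0.7, up=1.05):
    # If the error grew by more than the predefined ratio, decrease alpha;
    # if the error decreased, increase alpha; otherwise leave it unchanged.
    if prev_sse is None:
        return alpha                      # first epoch: nothing to compare against
    if sse > prev_sse * ratio:
        return alpha * down
    if sse < prev_sse:
        return alpha * up
    return alpha

alpha, prev = 0.1, None
for sse in [1.2, 0.9, 1.5, 0.8]:          # an illustrative sequence of epoch errors
    alpha = adapt_learning_rate(alpha, sse, prev)
    prev = sse
    print(round(alpha, 4))
```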
  • 53. Learning with adaptive learning rate. [Figure: sum-squared error and learning rate over 103 training epochs.]
  • 54. Learning with momentum and adaptive learning rate. [Figure: sum-squared error and learning rate over 85 training epochs.]
  • 55. The Hopfield Network. Neural networks were designed on analogy with the brain. The brain's memory, however, works by association. For example, we can recognise a familiar face even in an unfamiliar environment within 100-200 ms. We can also recall a complete sensory experience, including sounds and scenes, when we hear only a few bars of music. The brain routinely associates one thing with another.
  • 56. Multilayer neural networks trained with the back-propagation algorithm are used for pattern recognition problems. However, to emulate the human memory's associative characteristics we need a different type of network: a recurrent neural network. A recurrent neural network has feedback loops from its outputs to its inputs. The presence of such loops has a profound impact on the learning capability of the network.
  • 57. The stability of recurrent networks intrigued several researchers in the 1960s and 1970s. However, none was able to predict which network would be stable, and some researchers were pessimistic about finding a solution at all. The problem was solved only in 1982, when John Hopfield formulated the physical principle of storing information in a dynamically stable network.
  • 58. Single-layer n-neuron Hopfield network. [Figure: input signals $x_1, \ldots, x_n$ feed neurons $1 \ldots n$, whose outputs $y_1, \ldots, y_n$ are fed back as inputs.]
  • 59. The Hopfield network uses McCulloch and Pitts neurons with the sign activation function as its computing element: $Y^{\mathrm{sign}} = +1$ if $X > 0$, $-1$ if $X < 0$, and $Y$ (the previous output) if $X = 0$.
  • 60. The current state of the Hopfield network is determined by the current outputs of all neurons, $y_1, y_2, \ldots, y_n$. Thus, for a single-layer n-neuron network, the state can be defined by the state vector as: $\mathbf{Y} = [y_1\;\; y_2\;\; \cdots\;\; y_n]^T$.
  • 61.  Negnevitsky, Pearson Education, 2002Negnevitsky, Pearson Education, 2002 61 ss In the Hopfield network, synaptic weights betweenIn the Hopfield network, synaptic weights between neurons are usually represented in matrix form asneurons are usually represented in matrix form as follows:follows: wherewhere MM is the number of states to be memorisedis the number of states to be memorised by the network,by the network, YYmm is theis the nn-dimensional binary-dimensional binary vector,vector, II isis nn ×× nn identity matrix, and superscriptidentity matrix, and superscript TT denotes a matrix transposition.denotes a matrix transposition. IYYW M M m T mm −= ∑ =1
  • 62.  Negnevitsky, Pearson Education, 2002Negnevitsky, Pearson Education, 2002 62 Possible states for the three-neuronPossible states for the three-neuron Hopfield networkHopfield network y1 y2 y3 (1, −1, 1)(−1, −1, 1) (−1, −1, −1) (1, −1, −1) (1, 1, 1)(−1, 1, 1) (1, 1, −1)(−1, 1, −1) 0
  • 63.  Negnevitsky, Pearson Education, 2002Negnevitsky, Pearson Education, 2002 63 ss The stable state-vertex is determined by the weightThe stable state-vertex is determined by the weight matrixmatrix WW, the current input vector, the current input vector XX, and the, and the threshold matrixthreshold matrix θθθθθθθθ. If the input vector is partially. If the input vector is partially incorrect or incomplete, the initial state will convergeincorrect or incomplete, the initial state will converge into the stable state-vertex after a few iterations.into the stable state-vertex after a few iterations. ss Suppose, for instance, that our network is required toSuppose, for instance, that our network is required to memorise two opposite states, (1, 1, 1) and (memorise two opposite states, (1, 1, 1) and (−−1,1, −−1,1, −−1).1). Thus,Thus, oror wherewhere YY11 andand YY22 are the three-dimensional vectors.are the three-dimensional vectors.           = 1 1 1 1Y           − − − = 1 1 1 2Y [ ]1111 =T Y [ ]1112 −−−=T Y
  • 64.  Negnevitsky, Pearson Education, 2002Negnevitsky, Pearson Education, 2002 64 ss The 3The 3 ×× 3 identity matrix3 identity matrix II isis ss ThusThus, we can now determine the weight matrix as, we can now determine the weight matrix as follows:follows: ss Next, the network is tested by the sequence of inputNext, the network is tested by the sequence of input vectors,vectors, XX11 andand XX22, which are equal to the output (or, which are equal to the output (or target) vectorstarget) vectors YY11 andand YY22, respectively., respectively.           = 100 010 001 I [ ] [ ]           −−−−           − − − +           = 100 010 001 2111 1 1 1 111 1 1 1 W           = 022 202 220
  • 65. - First, we activate the Hopfield network by applying the input vector X. Then, we calculate the actual output vector Y, and finally, we compare the result with the initial input vector X.
      \mathbf{Y}_1 = \operatorname{sign}\!\left( \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
      \mathbf{Y}_2 = \operatorname{sign}\!\left( \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix}\begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}
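A minimal sketch of this test, assuming zero thresholds as in the example and a synchronous update Y = sign(WX − θ), with sign(0) keeping the previous value:

```python
import numpy as np

Y1 = np.array([1, 1, 1])
Y2 = np.array([-1, -1, -1])
W = np.outer(Y1, Y1) + np.outer(Y2, Y2) - 2 * np.eye(3, dtype=int)
theta = np.zeros(3, dtype=int)   # thresholds are zero in this example

def update(W, x, theta):
    # One synchronous update: Y = sign(W x - theta); sign(0) keeps the old value.
    net = W @ x - theta
    return np.where(net > 0, 1, np.where(net < 0, -1, x))

print(update(W, Y1, theta))   # [ 1  1  1] -> matches X1, so (1, 1, 1) is stable
print(update(W, Y2, theta))   # [-1 -1 -1] -> matches X2, so (-1, -1, -1) is stable
```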
  • 66. - The remaining six states are all unstable. However, stable states (also called fundamental memories) are capable of attracting states that are close to them.
    - The fundamental memory (1, 1, 1) attracts unstable states (−1, 1, 1), (1, −1, 1) and (1, 1, −1). Each of these unstable states represents a single error, compared to the fundamental memory (1, 1, 1).
    - The fundamental memory (−1, −1, −1) attracts unstable states (−1, −1, 1), (−1, 1, −1) and (1, −1, −1).
    - Thus, the Hopfield network can act as an error correction network.
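A short sketch of this error-correcting behaviour, again assuming synchronous updates and zero thresholds (the lecture does not prescribe an update schedule):

```python
import numpy as np

Y1 = np.array([1, 1, 1])
Y2 = np.array([-1, -1, -1])
W = np.outer(Y1, Y1) + np.outer(Y2, Y2) - 2 * np.eye(3, dtype=int)

def recall(W, x, max_steps=10):
    # Iterate Y = sign(W Y) until the state stops changing (a stable state-vertex).
    y = x.copy()
    for _ in range(max_steps):
        net = W @ y
        y_new = np.where(net > 0, 1, np.where(net < 0, -1, y))
        if np.array_equal(y_new, y):
            break
        y = y_new
    return y

print(recall(W, np.array([-1, 1, 1])))    # -> [ 1  1  1]: the single error is corrected
print(recall(W, np.array([1, -1, -1])))   # -> [-1 -1 -1]
```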
  • 67. Storage capacity of the Hopfield network
    - Storage capacity is the largest number of fundamental memories that can be stored and retrieved correctly.
    - The maximum number of fundamental memories M_max that can be stored in the n-neuron recurrent network is limited by
      M_{max} = 0.15\,n
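    For example, a recurrent network of n = 100 neurons can reliably store and retrieve at most M_max = 0.15 × 100 = 15 fundamental memories.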
  • 68. Bidirectional associative memory (BAM)
    - The Hopfield network represents an autoassociative type of memory − it can retrieve a corrupted or incomplete memory but cannot associate this memory with another different memory.
    - Human memory is essentially associative. One thing may remind us of another, and that of another, and so on. We use a chain of mental associations to recover a lost memory. If we forget where we left an umbrella, we try to recall where we last had it, what we were doing, and who we were talking to. We attempt to establish a chain of associations, and thereby to restore a lost memory.
  • 69. - To associate one memory with another, we need a recurrent neural network capable of accepting an input pattern on one set of neurons and producing a related, but different, output pattern on another set of neurons.
    - Bidirectional associative memory (BAM), first proposed by Bart Kosko, is a heteroassociative network. It associates patterns from one set, set A, to patterns from another set, set B, and vice versa. Like a Hopfield network, the BAM can generalise and also produce correct outputs despite corrupted or incomplete inputs.
  • 70. BAM operation
    [Figure: (a) Forward direction − the input layer x1(p) … xn(p) drives the output layer y1(p) … ym(p); (b) Backward direction − the output layer feeds back to produce the updated inputs x1(p+1) … xn(p+1).]
  • 71. The basic idea behind the BAM is to store pattern pairs so that when n-dimensional vector X from set A is presented as input, the BAM recalls m-dimensional vector Y from set B, but when Y is presented as input, the BAM recalls X.
  • 72. - To develop the BAM, we need to create a correlation matrix for each pattern pair we want to store. The correlation matrix is the matrix product of the input vector X and the transpose of the output vector Y^T. The BAM weight matrix is the sum of all correlation matrices, that is,
      \mathbf{W} = \sum_{m=1}^{M} \mathbf{X}_m \mathbf{Y}_m^{\,T}
    where M is the number of pattern pairs to be stored in the BAM.
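A sketch of BAM storage and recall in Python with NumPy, assuming bipolar (±1) pattern pairs and, for simplicity, treating sign(0) as +1; the helper names and the example pair are illustrative, not from the lecture:

```python
import numpy as np

def bam_weights(pairs):
    """BAM weight matrix: W = sum_m X_m Y_m^T over the stored pattern pairs."""
    return sum(np.outer(x, y) for x, y in pairs)

def recall_forward(W, x):
    # Layer A -> layer B: present X, recall Y = sign(W^T X)
    return np.where(W.T @ x >= 0, 1, -1)

def recall_backward(W, y):
    # Layer B -> layer A: present Y, recall X = sign(W Y)
    return np.where(W @ y >= 0, 1, -1)

# Hypothetical pattern pair for illustration
X1 = np.array([1, -1, 1, -1])    # 4-dimensional pattern from set A
Y1 = np.array([1, 1, -1])        # 3-dimensional pattern from set B
W = bam_weights([(X1, Y1)])

print(recall_forward(W, X1))     # -> [ 1  1 -1]      (recalls Y1)
print(recall_backward(W, Y1))    # -> [ 1 -1  1 -1]   (recalls X1)
```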
  • 73. Stability and storage capacity of the BAM
    - The BAM is unconditionally stable. This means that any set of associations can be learned without risk of instability.
    - The maximum number of associations to be stored in the BAM should not exceed the number of neurons in the smaller layer.
    - The more serious problem with the BAM is incorrect convergence. The BAM may not always produce the closest association. In fact, a stable association may be only slightly related to the initial input vector.