Backpropagation
Learning Algorithm
The backpropagation algorithm is used to train the
multi-layer perceptron (MLP).
MLP is used to describe any general feedforward (no
recurrent connections) neural network (FNN).
However, we will concentrate on nets with units
arranged in layers.
[Figure: a layered feedforward network with inputs x1 … xn]
Architecture of BP Nets
• Multi-layer, feed-forward networks have the following
characteristics:
– They must have at least one hidden layer
– Hidden units must be non-linear units (usually with sigmoid
activation functions)
– Fully connected between units in two consecutive layers, but no
connection between units within one layer.
– For a net with only one hidden layer, each hidden unit receives
input from all input units and sends output to all output units
– Number of output units need not equal number of input units
– Number of hidden units per layer can be more or less than
input or output units
Other Feedforward Networks
• Madaline
– Multiple adalines (of a sort) as hidden nodes
• Adaptive multi-layer networks
– Dynamically change the network size (# of hidden
nodes)
• Networks of radial basis functions (RBF)
– e.g., Gaussian functions
– Perform better than sigmoid functions in some tasks (e.g.,
interpolation in function approximation)
Introduction to Backpropagation
• In 1969 a method for learning in multi-layer networks,
backpropagation (or the generalized delta rule), was invented by
Bryson and Ho.
• It is the best-known example of a training algorithm. It uses training
data to adjust the weights and thresholds of neurons so as to
minimize the network's prediction errors.
• Slower than gradient descent.
• Easiest algorithm to understand
• Backpropagation works by applying the gradient descent
rule to a feedforward network.
• How many hidden layers and hidden units per layer?
– Theoretically, one hidden layer (possibly with many
hidden units) is sufficient for any L2 function
– There are no theoretical results on the minimum necessary #
of hidden units (either problem dependent or
independent)
– Practical rule:
• n = # of input units; p = # of hidden units
• For binary/bipolar data: p = 2n
• For real data: p >> 2n
– Multiple hidden layers with fewer units may be trained
faster for similar quality in some applications
Training a BackPropagation Net
• Feedforward training of input patterns
– each input node receives a signal, which is broadcast to all of the
hidden units
– each hidden unit computes its activation which is broadcast to all
output nodes
• Back propagation of errors
– each output node compares its activation with the desired output
– based on these differences, the error is propagated back to all
previous nodes (Delta Rule)
• Adjustment of weights
– weights of all links computed simultaneously based on the errors
that were propagated back
Three-layer back-propagation neural network
[Figure: input layer (units 1…n, inputs x1, x2, …, xn), hidden layer (units 1…m, first-layer weights wij), output layer (units 1…l, outputs y1, y2, …, yl, second-layer weights wjk); input signals flow forward through the layers while error signals propagate backward]
Generalized delta rule
• Delta rule only works for the output layer.
• Backpropagation, or the generalized delta
rule, is a way of creating desired values for
hidden layers
Description of Training BP Net:
Feedforward Stage
1. Initialize weights with small, random values
2. While stopping condition is not true
– for each training pair (input/output):
• each input unit broadcasts its value to all hidden
units
• each hidden unit sums its input signals & applies
activation function to compute its output signal
• each hidden unit sends its signal to the output units
• each output unit sums its input signals & applies its
activation function to compute its output signal
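As an illustration (not the slides' own code), the feedforward stage above can be sketched in a few lines of NumPy; the names (sigmoid, V, W, b_v, b_w) and shapes are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedforward(x, V, b_v, W, b_w):
    """One feedforward pass through a single-hidden-layer net.
    x: input vector; V, b_v: hidden-layer weights and biases;
    W, b_w: output-layer weights and biases (illustrative names)."""
    z_in = V @ x + b_v    # each hidden unit sums its input signals
    z = sigmoid(z_in)     # ... and applies its activation function
    y_in = W @ z + b_w    # each output unit sums its input signals
    y = sigmoid(y_in)     # ... and applies its activation function
    return z_in, z, y_in, y
```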
Training BP Net:
Backpropagation stage
3. Each output unit computes its error term, its own
weight correction term and its bias (threshold)
correction term & sends it to the layer below
4. Each hidden unit sums its delta inputs from above
& multiplies by the derivative of its activation
function; it also computes its own weight
correction term and its bias correction term
Training a Back Prop Net:
Adjusting the Weights
5. Each output unit updates its weights and bias
6. Each hidden unit updates its weights and bias
– Each training cycle is called an epoch. The
weights are updated in each cycle
– It is not analytically possible to determine where
the global minimum is. Eventually the algorithm
stops in a low point, which may just be a local
minimum.
How long should you train?
• Goal: balance between correct responses for
training patterns & correct responses for new
patterns (memorization v. generalization)
• In general, the network is trained until it reaches an
acceptable level of correct responses (e.g. 95%)
• If train too long, you run the risk of overfitting
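As a hedged sketch of the stopping logic described above (train until the error is acceptable, but not so long that the net memorizes the training patterns), an early-stopping check might look like this; the threshold and patience values are arbitrary assumptions.

```python
def should_stop(val_errors, patience=10, target_error=0.05):
    """Stop when validation error is acceptable, or when it has not
    improved for `patience` epochs (guards against overfitting).
    val_errors: list of per-epoch errors on held-out patterns."""
    if val_errors and val_errors[-1] <= target_error:
        return True
    if len(val_errors) > patience:
        best_recent = min(val_errors[-patience:])
        # no improvement over the best error seen before the window
        return best_recent >= min(val_errors[:-patience])
    return False
```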
Graphical description of training a multi-layer
neural network using the BP algorithm
To apply the BP algorithm to the following FNN:
• To teach the neural network we need a training data set. The
training data set consists of input signals (x1 and x2)
assigned with the corresponding target (desired output) z.
• Network training is an iterative process. In each
iteration the weight coefficients of the nodes are modified using
new data from the training data set.
• After this stage we can determine the output signal values for
each neuron in each network layer.
• The pictures below illustrate how the signal propagates through
the network. Symbols w(xm)n represent the weights of
connections between network input xm and neuron n in the
input layer. Symbol yn represents the output signal of neuron n.
• Propagation of signals through the hidden layer.
Symbols wmn represent the weights of connections
between the output of neuron m and the input of neuron
n in the next layer.
• Propagation of signals through the output layer.
• In the next algorithm step the output signal of the
network y is compared with the desired output value
(the target), which is found in the training data set. The
difference is called the error signal d of the output layer
neuron.
• It is impossible to compute the error signal for internal neurons directly,
because the output values of these neurons are unknown. For many years
an effective method for training multilayer networks was unknown.
• Only in the mid-eighties was the backpropagation algorithm
worked out. The idea is to propagate the error signal d (computed in a single
teaching step) back to all neurons whose output signals were inputs to the
neuron in question.
• The weight coefficients wmn used to propagate errors back are equal to those used
when computing the output value; only the direction of data flow is changed
(signals are propagated from outputs to inputs, one layer after the other). This technique
is used for all network layers. If the propagated errors come from several neurons, they
are added. The illustration is below:
• When the error signal for each neuron is computed, the weight
coefficients of each neuron's input node may be modified. In the
formulas below df(e)/de represents the derivative of the neuron's
activation function (whose weights are modified).
The coefficient η (the learning rate) affects the network teaching speed. There are a few
techniques for selecting this parameter. The first method is to start the teaching process
with a large value of the parameter; while the weight coefficients are being
established, the parameter is gradually decreased.
The second, more complicated, method starts teaching with a small
parameter value. During the teaching process the parameter is
increased as the teaching advances and then decreased again in the
final stage.
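The two schedules for η described above might be sketched as follows; the particular constants (η range, decay rate, epoch counts) are illustrative assumptions, not values from the slides.

```python
def decreasing_eta(epoch, eta0=0.5, decay=0.01):
    """Method 1: start with a large learning rate and decrease it
    gradually as the weight coefficients become established."""
    return eta0 / (1.0 + decay * epoch)

def warmup_then_decay(epoch, total_epochs, eta_min=0.01, eta_max=0.5):
    """Method 2: start with a small value, increase it while the
    teaching advances, then decrease it again in the final stage."""
    half = total_epochs / 2.0
    if epoch <= half:
        return eta_min + (eta_max - eta_min) * epoch / half
    return eta_max - (eta_max - eta_min) * (epoch - half) / half
```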
Training Algorithm 1
• Step 0: Initialize the weights to small random values
• Step 1: Feed the training sample through the network
and determine the final output
• Step 2: Compute the error for each output unit, for unit
k it is:
δk = (tk – yk) f′(y_ink)
(tk: required output, yk: actual output, f′: derivative of f)
Training Algorithm 2
• Step 3: Calculate the weight correction
term for each output unit, for unit k it is:
Δwjk = η δk zj
(η: a small constant, the learning rate; zj: the hidden layer signal)
Training Algorithm 3
• Step 4: Propagate the delta terms (errors) back
through the weights of the hidden units where
the delta input for the jth hidden unit is:
δ_inj = Σk=1..m δk wjk
The delta term for the jth hidden unit is:
δj = δ_inj f′(z_inj)
where
f′(z_inj) = f(z_inj)[1 – f(z_inj)]
Training Algorithm 4
• Step 5: Calculate the weight correction term for the
hidden units: Δwij = η δj xi
• Step 6: Update the weights: wjk(new) = wjk(old) + Δwjk
• Step 7: Test for stopping (maximum cycles, small
changes, etc.)
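Putting Steps 0–7 together, a minimal NumPy sketch of one training epoch that follows the formulas above (δk = (tk – yk)f′(y_ink), Δwjk = η δk zj, δj = f′(z_inj) Σk δk wjk) could look like this; the array shapes, variable names and bias handling are assumptions rather than the slides' own code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_epoch(X, T, W1, b1, W2, b2, eta=0.1):
    """One epoch of backpropagation for a one-hidden-layer net.
    X: (n_samples, n_inputs), T: (n_samples, n_outputs),
    W1: (n_hidden, n_inputs), W2: (n_outputs, n_hidden)."""
    for x, t in zip(X, T):
        # Step 1: feed the training sample through the network
        z_in = W1 @ x + b1
        z = sigmoid(z_in)
        y_in = W2 @ z + b2
        y = sigmoid(y_in)
        # Step 2: error term for each output unit, delta_k = (t_k - y_k) f'(y_in_k)
        delta_out = (t - y) * y * (1.0 - y)
        # Steps 3-4: propagate the delta terms back to the hidden units
        delta_hidden = (W2.T @ delta_out) * z * (1.0 - z)
        # Steps 5-6: weight correction terms and updates
        W2 += eta * np.outer(delta_out, z)
        b2 += eta * delta_out
        W1 += eta * np.outer(delta_hidden, x)
        b1 += eta * delta_hidden
    return W1, b1, W2, b2
```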
Options
• There are a number of options in the design
of a backprop system
– Initial weights – best to set the initial weights
(and all other free parameters) to random
numbers inside a small range of values
(say: –0.5 to 0.5)
– Number of cycles – tend to be quite large for
backprop systems
– Number of neurons in the hidden layer – as few
as possible
Example
• The XOR function could not be solved by a
single layer perceptron network
• The function is:
X Y | F
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0
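For reference, the same truth table written as arrays, in a shape that could be fed to the illustrative train_epoch sketch given earlier:

```python
import numpy as np

# XOR training set: inputs (x, y) and target F
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
```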
XOR Architecture
[Figure: inputs x and y and a constant bias input 1 feed two hidden units through weights v11, v21, v31 and v12, v22, v32; the hidden units and a bias input 1 feed the output unit through weights w11, w21, w31; each unit applies f to its weighted sum Σ]
Initial Weights
• Randomly assign small weight values:
[Figure: the same network with small random initial weights; the values shown are 0.21, –0.3, 0.15 and –0.4, 0.25, 0.1 for the hidden units, and –0.2, –0.4, 0.3 for the output unit]
Feedforward – 1st Pass
Training case: (x, y, target) = (0, 0, 0)
Activation function (sigmoid): f(y_in) = 1 / (1 + e^(–y_in))
y_in1 = –.3(1) + .21(0) + .25(0) = –.3  →  f(y_in1) = .43
y_in2 = .25(1) – .4(0) + .1(0) = .25  →  f(y_in2) = .56
y_in3 = –.4(1) – .2(.43) + .3(.56) = –.318  →  f(y_in3) = .42  (not 0)
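A small sketch that reproduces the arithmetic of this first pass (the weight-to-unit assignment follows the calculation above; the variable names are illustrative):

```python
import math

def f(x):                       # sigmoid activation from the slide
    return 1.0 / (1.0 + math.exp(-x))

x, y = 0, 0                               # training case (0 0 0)
y_in1 = -0.3 + 0.21 * x + 0.25 * y        # hidden unit 1 net input
y_in2 = 0.25 + -0.4 * x + 0.1 * y         # hidden unit 2 net input
z1, z2 = f(y_in1), f(y_in2)               # about 0.43 and 0.56
y_in3 = -0.4 + -0.2 * z1 + 0.3 * z2       # output unit net input
y3 = f(y_in3)                             # about 0.42 (not the target 0)
print(round(z1, 2), round(z2, 2), round(y3, 2))
```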
Backpropagate
δ3 = (t3 – y3) f′(y_in3) = (t3 – y3) f(y_in3)[1 – f(y_in3)]
δ3 = (0 – .42)(.42)[1 – .42] = –.102
δ_in1 = δ3 w13 = –.102(–.2) = .02
δ1 = δ_in1 f′(z_in1) = .02(.43)(1 – .43) = .005
δ_in2 = δ3 w23 = –.102(.3) = –.03
δ2 = δ_in2 f′(z_in2) = –.03(.56)(1 – .56) = –.007
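A continuation of the previous sketch, checking the error terms δ3, δ1 and δ2 computed above with f′(x) = f(x)[1 – f(x)] and the slide's rounded intermediate values:

```python
d3 = (0 - 0.42) * 0.42 * (1 - 0.42)   # output error term          -> about -0.102
d_in1 = -0.102 * (-0.2)               # delta input to hidden unit 1 -> about 0.02
d1 = 0.02 * 0.43 * (1 - 0.43)         # hidden unit 1 error term    -> about 0.005
d_in2 = -0.102 * 0.3                  # delta input to hidden unit 2 -> about -0.03
d2 = -0.03 * 0.56 * (1 - 0.56)        # hidden unit 2 error term    -> about -0.007
print(round(d3, 3), round(d1, 3), round(d2, 3))
```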
Update the Weights – First Pass
Using the error terms δ3, δ1 and δ2 computed above, each unit updates its weights and bias:
wjk(new) = wjk(old) + Δwjk, with Δwjk = η δk zj for the output unit
vij(new) = vij(old) + Δvij, with Δvij = η δj xi for the hidden units
Final Result
• After about 500 iterations:
[Figure: the trained network, with weight values of approximately 1, –1.5, 1 (first hidden unit), 1, –.5, 1 (second hidden unit) and –2, –.5, 1 (output unit)]
More details on the
gradient descent method + MLP,
for those who may be interested
In the perceptron/single-layer nets, we used gradient descent on the
error function to find the correct weights:
Δwji = η (tj – yj) xi
We see that errors/updates are local to the node, i.e. the change in the
weight from node i to output j (wji) is controlled by the input that
travels along the connection and the error signal (tj – yj) from output j.
• But with more layers, how are the weights for the first 2 layers
found when the error is computed for layer 3 only?
• There is no direct error signal for the first layers!
[Figure: a multi-layer net with inputs x1, x2; the error signal (tj – yj) is available only at the output layer]
Credit assignment problem
• Problem of assigning ‘credit’ or ‘blame’ to individual elements
involved in forming overall response of a learning system
(hidden units)
• In neural networks, problem relates to deciding which weights
should be altered, by how much and in which direction.
Analogous to deciding how much a weight in the early layer
contributes to the output and thus the error
We therefore want to find out how weight wij affects the error, i.e. we
want:
∂E(t) / ∂wij(t)
Backpropagation learning algorithm ‘BP’
Solution to credit assignment problem in MLP. Rumelhart, Hinton and
Williams (1986) (though actually invented earlier in a PhD thesis
relating to economics)
BP has two phases:
Forward pass phase: computes ‘functional signal’, feedforward
propagation of input pattern signals through network
Backward pass phase: computes ‘error signal’, propagates
the error backwards through network starting at output units (where
the error is the difference between actual and desired output values)
Two-layer networks
[Figure: inputs xi (x1, x2, …, xn) feed the 1st layer through weights vij (from j to i); the outputs of the 1st layer zi feed the 2nd layer through weights wij (from j to i), producing outputs yj (y1, …, ym)]
We will concentrate on three-layer, but could easily
generalize to more layers
zi(t) = g( Σj vij(t) xj(t) ) at time t
      = g( ui(t) )
yi(t) = g( Σj wij(t) zj(t) ) at time t
      = g( ai(t) )
a/u known as activation, g the activation function
biases set as extra weights
Forward pass
Weights are fixed during forward and backward
pass at time t
1. Compute values for hidden units:
uj(t) = Σi vji(t) xi(t),   zj(t) = g( uj(t) )
2. Compute values for output units:
ak(t) = Σj wkj(t) zj(t),   yk(t) = g( ak(t) )
Backward Pass
Will use a sum-of-squares error measure. For each training pattern
we have:
E(t) = ½ Σk ( dk(t) – yk(t) )²
where dk is the target value for dimension k. We want to know how to
modify the weights in order to decrease E. Use gradient descent, i.e.
wij(t+1) – wij(t) = –η ∂E(t)/∂wij(t)
both for hidden units and output units.
The partial derivative can be rewritten as the product of two terms using the
chain rule for partial differentiation:
∂E(t)/∂wij(t) = ( ∂E(t)/∂ai(t) ) ( ∂ai(t)/∂wij(t) )
Term A: how the error for the pattern changes as a function of the change
in the net input to unit i
Term B: how the net input to unit i changes as a function of the
change in weight w
both for hidden units and output units
Term A: Let
δi(t) = – ∂E(t)/∂ui(t),   Δi(t) = – ∂E(t)/∂ai(t)
(the error terms). These can be evaluated by the chain rule.
Term B first:
∂ui(t)/∂vij(t) = xj(t),   ∂ai(t)/∂wij(t) = zj(t)
For output units we therefore have:
Δi(t) = – ∂E(t)/∂ai(t) = – g′(ai(t)) ∂E(t)/∂yi(t) = g′(ai(t)) ( di(t) – yi(t) )
For hidden units we must use the chain rule:
δi(t) = – ∂E(t)/∂ui(t) = – Σj ( ∂E(t)/∂aj(t) ) ( ∂aj(t)/∂ui(t) ) = g′(ui(t)) Σj Δj(t) wji
Backward Pass
Weights here can be viewed as providing a degree of ‘credit’ or
‘blame’ to the hidden units:
δi = g′(ai) Σj wji Δj
[Figure: output error terms Δj, Δk propagate back through weights wji, wki to give the hidden-unit error term δi]
Combining A and B gives:
∂E(t)/∂vij(t) = – δi(t) xj(t),   ∂E(t)/∂wij(t) = – Δi(t) zj(t)
So to achieve gradient descent in E we should change the weights by:
vij(t+1) – vij(t) = η δi(t) xj(t)
wij(t+1) – wij(t) = η Δi(t) zj(t)
where η is the learning rate parameter (0 < η ≤ 1)
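A minimal NumPy sketch of these update rules, using Delta for the output error terms and delta for the hidden error terms as defined above; g is taken to be the sigmoid (so g′ = g(1 – g)), biases are assumed folded into the weights as extra inputs, and the matrix layout is an assumption.

```python
import numpy as np

def g(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_step(x, d, V, W, eta=0.1):
    """One gradient-descent step for the two-layer net of the derivation.
    V: 1st-layer weights (hidden x inputs), W: 2nd-layer weights (outputs x hidden),
    x: input pattern, d: target vector (biases assumed folded into V and W)."""
    u = V @ x                              # u_i(t) = sum_j v_ij(t) x_j(t)
    z = g(u)                               # z_i(t) = g(u_i(t))
    a = W @ z                              # a_i(t) = sum_j w_ij(t) z_j(t)
    y = g(a)                               # y_i(t) = g(a_i(t))
    Delta = (d - y) * y * (1 - y)          # output error terms: g'(a)(d - y)
    delta = (W.T @ Delta) * z * (1 - z)    # hidden error terms: g'(u) sum_j Delta_j w_ji
    W += eta * np.outer(Delta, z)          # w_ij(t+1) - w_ij(t) = eta Delta_i(t) z_j(t)
    V += eta * np.outer(delta, x)          # v_ij(t+1) - v_ij(t) = eta delta_i(t) x_j(t)
    return V, W
```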
Summary
Weight updates are local:
vij(t+1) = vij(t) + η δi(t) xj(t)      (hidden unit)
wij(t+1) = wij(t) + η Δi(t) zj(t)      (output unit)
Written out in full:
output unit:  wij(t+1) = wij(t) + η ( di(t) – yi(t) ) g′(ai(t)) zj(t)
hidden unit:  vij(t+1) = vij(t) + η g′(ui(t)) xj(t) Σk Δk(t) wki
End of slides
  • 51. Summary Weight updates are local output unit hidden unit )()()()1( )()()()1( tzttwtw txttvtv jiijij jiijij D  h hd D  k kikji jiijij wttxtug txttvtv )()())((' )()()()1( h hd )())(('))()(( )()()()1( tztagtytd tzttwtw jiii jiijij  D h h