2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neural networks
Slides © Negnevitsky, Pearson Education, 2005
1. Lecture 7: Artificial neural networks: Supervised learning
- Introduction, or how the brain works
- The neuron as a simple computing element
- The perceptron
- Multilayer neural networks
- Accelerated learning in multilayer neural networks
- The Hopfield network
- Bidirectional associative memories (BAM)
- Summary

2. Introduction, or how the brain works. Machine learning involves adaptive mechanisms that enable computers to learn from experience, learn by example and learn by analogy. Learning capabilities can improve the performance of an intelligent system over time. The most popular approaches to machine learning are artificial neural networks and genetic algorithms. This lecture is dedicated to neural networks.

3. A neural network can be defined as a model of reasoning based on the human brain. The brain consists of a densely interconnected set of nerve cells, or basic information-processing units, called neurons. The human brain incorporates nearly 10 billion neurons and 60 trillion connections, synapses, between them. By using multiple neurons simultaneously, the brain can perform its functions much faster than the fastest computers in existence today.

4. Each neuron has a very simple structure, but an army of such elements constitutes a tremendous processing power. A neuron consists of a cell body, soma, a number of fibers called dendrites, and a single long fiber called the axon.
5. Figure: Biological neural network, showing two neurons with soma, dendrites, axon, and the synapses connecting them.

6. Our brain can be considered as a highly complex, non-linear and parallel information-processing system. Information is stored and processed in a neural network simultaneously throughout the whole network, rather than at specific locations. In other words, in neural networks, both data and its processing are global rather than local. Learning is a fundamental and essential characteristic of biological neural networks. The ease with which they can learn led to attempts to emulate a biological neural network in a computer.

7. An artificial neural network consists of a number of very simple processors, also called neurons, which are analogous to the biological neurons in the brain. The neurons are connected by weighted links passing signals from one neuron to another. The output signal is transmitted through the neuron's outgoing connection. The outgoing connection splits into a number of branches that transmit the same signal. The outgoing branches terminate at the incoming connections of other neurons in the network.

8. Figure: Architecture of a typical artificial neural network; input signals enter an input layer, pass through a middle layer, and leave as output signals from an output layer.

9. Analogy between biological and artificial neural networks:
  Soma     -> Neuron
  Dendrite -> Input
  Axon     -> Output
  Synapse  -> Weight

10. The neuron as a simple computing element. Figure: Diagram of a neuron with input signals x1, x2, ..., xn, weights w1, w2, ..., wn, and output signal Y.
11. The neuron computes the weighted sum of the input signals and compares the result with a threshold value, θ. If the net input is less than the threshold, the neuron output is -1. But if the net input is greater than or equal to the threshold, the neuron becomes activated and its output attains a value +1. The neuron uses the following transfer or activation function:

$$X = \sum_{i=1}^{n} x_i w_i, \qquad Y = \begin{cases} +1 & \text{if } X \ge \theta \\ -1 & \text{if } X < \theta \end{cases}$$

This type of activation function is called a sign function.

12. Activation functions of a neuron (step, sign, sigmoid and linear):

$$Y^{step} = \begin{cases} 1 & \text{if } X \ge 0 \\ 0 & \text{if } X < 0 \end{cases} \qquad
Y^{sign} = \begin{cases} +1 & \text{if } X \ge 0 \\ -1 & \text{if } X < 0 \end{cases} \qquad
Y^{sigmoid} = \frac{1}{1 + e^{-X}} \qquad
Y^{linear} = X$$
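These four activation functions are one-liners in code. A minimal Python/NumPy sketch (function names are mine, not from the slides):

```python
import numpy as np

def step(x):       # hard limiter: 1 if x >= 0, else 0
    return np.where(x >= 0, 1, 0)

def sign(x):       # sign function: +1 if x >= 0, else -1
    return np.where(x >= 0, 1, -1)

def sigmoid(x):    # smooth, differentiable; used later by back-propagation
    return 1.0 / (1.0 + np.exp(-x))

def linear(x):     # identity activation
    return x
```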
13. Can a single neuron learn a task? In 1958, Frank Rosenblatt introduced a training algorithm that provided the first procedure for training a simple ANN: a perceptron. The perceptron is the simplest form of a neural network. It consists of a single neuron with adjustable synaptic weights and a hard limiter.

14. Figure: Single-layer two-input perceptron; inputs x1 and x2 with weights w1 and w2 feed a linear combiner, followed by a hard limiter with threshold θ, producing output Y.

15. The Perceptron. The operation of Rosenblatt's perceptron is based on the McCulloch and Pitts neuron model. The model consists of a linear combiner followed by a hard limiter. The weighted sum of the inputs is applied to the hard limiter, which produces an output equal to +1 if its input is positive and -1 if it is negative.
16. The aim of the perceptron is to classify inputs, x1, x2, ..., xn, into one of two classes, say A1 and A2. In the case of an elementary perceptron, the n-dimensional space is divided by a hyperplane into two decision regions. The hyperplane is defined by the linearly separable function:

$$\sum_{i=1}^{n} x_i w_i - \theta = 0$$

17. Linear separability in the perceptrons. Figure: (a) two-input perceptron, decision boundary x1 w1 + x2 w2 - θ = 0 separating class A1 from class A2; (b) three-input perceptron, decision plane x1 w1 + x2 w2 + x3 w3 - θ = 0.
18. How does the perceptron learn its classification tasks? This is done by making small adjustments in the weights to reduce the difference between the actual and desired outputs of the perceptron. The initial weights are randomly assigned, usually in the range [-0.5, 0.5], and then updated to obtain the output consistent with the training examples.

19. If at iteration p, the actual output is Y(p) and the desired output is Yd(p), then the error is given by:

$$e(p) = Y_d(p) - Y(p), \qquad p = 1, 2, 3, \ldots$$

Iteration p here refers to the pth training example presented to the perceptron. If the error, e(p), is positive, we need to increase perceptron output Y(p), but if it is negative, we need to decrease Y(p).

20. The perceptron learning rule:

$$w_i(p+1) = w_i(p) + \alpha \cdot x_i(p) \cdot e(p)$$

where p = 1, 2, 3, ... and α is the learning rate, a positive constant less than unity. The perceptron learning rule was first proposed by Rosenblatt in 1960. Using this rule we can derive the perceptron training algorithm for classification tasks.
21. Perceptron's training algorithm. Step 1: Initialisation. Set initial weights w1, w2, ..., wn and threshold θ to random numbers in the range [-0.5, 0.5]. If the error, e(p), is positive, we need to increase perceptron output Y(p), but if it is negative, we need to decrease Y(p).

22. Perceptron's training algorithm (continued). Step 2: Activation. Activate the perceptron by applying inputs x1(p), x2(p), ..., xn(p) and desired output Yd(p). Calculate the actual output at iteration p = 1:

$$Y(p) = step\!\left[\sum_{i=1}^{n} x_i(p)\, w_i(p) - \theta\right]$$

where n is the number of the perceptron inputs, and step is a step activation function.

23. Perceptron's training algorithm (continued). Step 3: Weight training. Update the weights of the perceptron:

$$w_i(p+1) = w_i(p) + \Delta w_i(p)$$

where Δwi(p) is the weight correction at iteration p. The weight correction is computed by the delta rule:

$$\Delta w_i(p) = \alpha \cdot x_i(p) \cdot e(p)$$

Step 4: Iteration. Increase iteration p by one, go back to Step 2 and repeat the process until convergence.
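Steps 1-4 fit in a short loop. A minimal Python/NumPy sketch, using the threshold θ = 0.2 and learning rate α = 0.1 from the AND example on the next slide (all names are mine; the threshold is kept fixed, as in that example):

```python
import numpy as np

def train_perceptron(X, Yd, alpha=0.1, theta=0.2, max_epochs=100, seed=0):
    """Rosenblatt perceptron training, following Steps 1-4 of the slides."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-0.5, 0.5, size=X.shape[1])      # Step 1: weights in [-0.5, 0.5]
    for epoch in range(1, max_epochs + 1):
        errors = 0
        for x, yd in zip(X, Yd):
            y = 1 if np.dot(x, w) - theta >= 0 else 0   # Step 2: step activation
            e = yd - y                                   # error e(p)
            w += alpha * x * e                           # Step 3: delta rule
            errors += abs(e)
        if errors == 0:                                  # Step 4: stop at convergence
            return w, epoch
    return w, max_epochs

# Logical AND, as in the worked example that follows
X  = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Yd = np.array([0, 0, 0, 1])
w, epochs = train_perceptron(X, Yd)
print(w, epochs)
```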
24. Example of perceptron learning: the logical operation AND (threshold θ = 0.2, learning rate α = 0.1). The table traces five epochs over the four input pairs (x1, x2) = (0, 0), (0, 1), (1, 0), (1, 1) with desired outputs 0, 0, 0, 1, listing for each iteration the initial weights, actual output, error and final weights. By epoch 5 the weights have converged to w1 = 0.1, w2 = 0.1, and every actual output matches the desired output.
25. Two-dimensional plots of basic logical operations. A perceptron can learn the operations AND and OR, but not Exclusive-OR. Figure: (a) AND (x1 ∧ x2); (b) OR (x1 ∨ x2); (c) Exclusive-OR (x1 ⊕ x2).
26. Multilayer neural networks. A multilayer perceptron is a feedforward neural network with one or more hidden layers. The network consists of an input layer of source neurons, at least one middle or hidden layer of computational neurons, and an output layer of computational neurons. The input signals are propagated in a forward direction on a layer-by-layer basis.

27. Figure: Multilayer perceptron with two hidden layers; input signals pass from the input layer through the first and second hidden layers to the output layer.

28. What does the middle layer hide? A hidden layer "hides" its desired output. Neurons in the hidden layer cannot be observed through the input/output behaviour of the network. There is no obvious way to know what the desired output of the hidden layer should be. Commercial ANNs incorporate three and sometimes four layers, including one or two hidden layers. Each layer can contain from 10 to 1000 neurons. Experimental neural networks may have five or even six layers, including three or four hidden layers, and utilise millions of neurons.
29. Back-propagation neural network. Learning in a multilayer network proceeds the same way as for a perceptron. A training set of input patterns is presented to the network. The network computes its output pattern, and if there is an error, that is, a difference between the actual and desired output patterns, the weights are adjusted to reduce this error.
30. In a back-propagation neural network, the learning algorithm has two phases. First, a training input pattern is presented to the network input layer. The network propagates the input pattern from layer to layer until the output pattern is generated by the output layer. If this pattern is different from the desired output, an error is calculated and then propagated backwards through the network from the output layer to the input layer. The weights are modified as the error is propagated.

31. Figure: Three-layer back-propagation neural network; inputs x1, ..., xn enter the input layer (neurons 1, ..., n), weights wij connect it to the hidden layer (neurons 1, ..., m), and weights wjk connect the hidden layer to the output layer (neurons 1, ..., l) producing outputs y1, ..., yl. Input signals flow forward while error signals propagate backward.
32. The back-propagation training algorithm. Step 1: Initialisation. Set all the weights and threshold levels of the network to random numbers uniformly distributed inside a small range:

$$\left(-\frac{2.4}{F_i},\; +\frac{2.4}{F_i}\right)$$

where Fi is the total number of inputs of neuron i in the network. The weight initialisation is done on a neuron-by-neuron basis.

33. Step 2: Activation. Activate the back-propagation neural network by applying inputs x1(p), x2(p), ..., xn(p) and desired outputs yd,1(p), yd,2(p), ..., yd,n(p). (a) Calculate the actual outputs of the neurons in the hidden layer:

$$y_j(p) = sigmoid\!\left[\sum_{i=1}^{n} x_i(p)\, w_{ij}(p) - \theta_j\right]$$

where n is the number of inputs of neuron j in the hidden layer, and sigmoid is the sigmoid activation function.

34. Step 2: Activation (continued). (b) Calculate the actual outputs of the neurons in the output layer:

$$y_k(p) = sigmoid\!\left[\sum_{j=1}^{m} x_{jk}(p)\, w_{jk}(p) - \theta_k\right]$$

where m is the number of inputs of neuron k in the output layer.
35. Step 3: Weight training. Update the weights in the back-propagation network propagating backward the errors associated with output neurons. (a) Calculate the error gradient for the neurons in the output layer:

$$\delta_k(p) = y_k(p)\,\big[1 - y_k(p)\big]\, e_k(p), \qquad \text{where } e_k(p) = y_{d,k}(p) - y_k(p)$$

Calculate the weight corrections:

$$\Delta w_{jk}(p) = \alpha \cdot y_j(p) \cdot \delta_k(p)$$

Update the weights at the output neurons:

$$w_{jk}(p+1) = w_{jk}(p) + \Delta w_{jk}(p)$$

36. Step 3: Weight training (continued). (b) Calculate the error gradient for the neurons in the hidden layer:

$$\delta_j(p) = y_j(p)\,\big[1 - y_j(p)\big] \sum_{k=1}^{l} \delta_k(p)\, w_{jk}(p)$$

Calculate the weight corrections:

$$\Delta w_{ij}(p) = \alpha \cdot x_i(p) \cdot \delta_j(p)$$

Update the weights at the hidden neurons:

$$w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)$$

37. Step 4: Iteration. Increase iteration p by one, go back to Step 2 and repeat the process until the selected error criterion is satisfied. As an example, we may consider the three-layer back-propagation network. Suppose that the network is required to perform the logical operation Exclusive-OR. Recall that a single-layer perceptron could not do this operation. Now we will apply the three-layer net.
38. Figure: Three-layer network for solving the Exclusive-OR operation; inputs x1 and x2 (neurons 1 and 2) connect to hidden neurons 3 and 4 through weights w13, w14, w23, w24, and the hidden neurons connect to output neuron 5 through weights w35 and w45, producing y5. Neurons 3, 4 and 5 have thresholds θ3, θ4 and θ5.

39. The effect of the threshold applied to a neuron in the hidden or output layer is represented by its weight, θ, connected to a fixed input equal to -1. The initial weights and threshold levels are set randomly as follows: w13 = 0.5, w14 = 0.9, w23 = 0.4, w24 = 1.0, w35 = -1.2, w45 = 1.1, θ3 = 0.8, θ4 = -0.1 and θ5 = 0.3.
40. We consider a training set where inputs x1 and x2 are equal to 1 and desired output yd,5 is 0. The actual outputs of neurons 3 and 4 in the hidden layer are calculated as

$$y_3 = sigmoid(x_1 w_{13} + x_2 w_{23} - \theta_3) = 1\big/\big[1 + e^{-(1 \cdot 0.5 + 1 \cdot 0.4 - 1 \cdot 0.8)}\big] = 0.5250$$

$$y_4 = sigmoid(x_1 w_{14} + x_2 w_{24} - \theta_4) = 1\big/\big[1 + e^{-(1 \cdot 0.9 + 1 \cdot 1.0 + 1 \cdot 0.1)}\big] = 0.8808$$

Now the actual output of neuron 5 in the output layer is determined as:

$$y_5 = sigmoid(y_3 w_{35} + y_4 w_{45} - \theta_5) = 1\big/\big[1 + e^{-(-0.5250 \cdot 1.2 + 0.8808 \cdot 1.1 - 1 \cdot 0.3)}\big] = 0.5097$$

Thus, the following error is obtained:

$$e = y_{d,5} - y_5 = 0 - 0.5097 = -0.5097$$

41. The next step is weight training. To update the weights and threshold levels in our network, we propagate the error, e, from the output layer backward to the input layer. First, we calculate the error gradient for neuron 5 in the output layer:

$$\delta_5 = y_5 (1 - y_5)\, e = 0.5097 \cdot (1 - 0.5097) \cdot (-0.5097) = -0.1274$$

Then we determine the weight corrections assuming that the learning rate parameter, α, is equal to 0.1:

$$\Delta w_{35} = \alpha \cdot y_3 \cdot \delta_5 = 0.1 \cdot 0.5250 \cdot (-0.1274) = -0.0067$$
$$\Delta w_{45} = \alpha \cdot y_4 \cdot \delta_5 = 0.1 \cdot 0.8808 \cdot (-0.1274) = -0.0112$$
$$\Delta \theta_5 = \alpha \cdot (-1) \cdot \delta_5 = 0.1 \cdot (-1) \cdot (-0.1274) = 0.0127$$

42. Next we calculate the error gradients for neurons 3 and 4 in the hidden layer:

$$\delta_3 = y_3 (1 - y_3)\, \delta_5\, w_{35} = 0.5250 \cdot (1 - 0.5250) \cdot (-0.1274) \cdot (-1.2) = 0.0381$$
$$\delta_4 = y_4 (1 - y_4)\, \delta_5\, w_{45} = 0.8808 \cdot (1 - 0.8808) \cdot (-0.1274) \cdot 1.1 = -0.0147$$

We then determine the weight corrections:

$$\Delta w_{13} = \alpha \cdot x_1 \cdot \delta_3 = 0.1 \cdot 1 \cdot 0.0381 = 0.0038$$
$$\Delta w_{23} = \alpha \cdot x_2 \cdot \delta_3 = 0.1 \cdot 1 \cdot 0.0381 = 0.0038$$
$$\Delta \theta_3 = \alpha \cdot (-1) \cdot \delta_3 = 0.1 \cdot (-1) \cdot 0.0381 = -0.0038$$
$$\Delta w_{14} = \alpha \cdot x_1 \cdot \delta_4 = 0.1 \cdot 1 \cdot (-0.0147) = -0.0015$$
$$\Delta w_{24} = \alpha \cdot x_2 \cdot \delta_4 = 0.1 \cdot 1 \cdot (-0.0147) = -0.0015$$
$$\Delta \theta_4 = \alpha \cdot (-1) \cdot \delta_4 = 0.1 \cdot (-1) \cdot (-0.0147) = 0.0015$$

43. At last, we update all weights and thresholds:

$$w_{13} = 0.5 + 0.0038 = 0.5038 \qquad w_{14} = 0.9 - 0.0015 = 0.8985$$
$$w_{23} = 0.4 + 0.0038 = 0.4038 \qquad w_{24} = 1.0 - 0.0015 = 0.9985$$
$$w_{35} = -1.2 - 0.0067 = -1.2067 \qquad w_{45} = 1.1 - 0.0112 = 1.0888$$
$$\theta_3 = 0.8 - 0.0038 = 0.7962 \qquad \theta_4 = -0.1 + 0.0015 = -0.0985 \qquad \theta_5 = 0.3 + 0.0127 = 0.3127$$

The training process is repeated until the sum of squared errors is less than 0.001.
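The hand calculation above is easy to check in code. A minimal sketch in plain Python (all names are mine) that starts from the same initial weights, performs the same on-line updates, and keeps training until the sum of squared errors falls below 0.001; the very first update of the first epoch reproduces the corrections computed by hand above.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Initial weights and thresholds from slide 39
w13, w14, w23, w24 = 0.5, 0.9, 0.4, 1.0
w35, w45 = -1.2, 1.1
t3, t4, t5 = 0.8, -0.1, 0.3
alpha = 0.1

patterns = [((1, 1), 0), ((0, 1), 1), ((1, 0), 1), ((0, 0), 0)]   # Exclusive-OR

for epoch in range(1, 20001):
    sse = 0.0
    for (x1, x2), yd in patterns:
        # Step 2: forward pass
        y3 = sigmoid(x1 * w13 + x2 * w23 - t3)
        y4 = sigmoid(x1 * w14 + x2 * w24 - t4)
        y5 = sigmoid(y3 * w35 + y4 * w45 - t5)
        e = yd - y5
        sse += e * e
        # Step 3: error gradients (output layer, then hidden layer)
        d5 = y5 * (1 - y5) * e
        d3 = y3 * (1 - y3) * d5 * w35
        d4 = y4 * (1 - y4) * d5 * w45
        # Weight and threshold corrections (threshold input is fixed at -1)
        w35 += alpha * y3 * d5; w45 += alpha * y4 * d5; t5 += alpha * -1 * d5
        w13 += alpha * x1 * d3; w23 += alpha * x2 * d3; t3 += alpha * -1 * d3
        w14 += alpha * x1 * d4; w24 += alpha * x2 * d4; t4 += alpha * -1 * d4
    if sse < 0.001:            # stopping criterion from slide 43
        print("converged after", epoch, "epochs, SSE =", round(sse, 6))
        break
```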
44. Figure: Learning curve for operation Exclusive-OR; the sum-squared network error falls from about 10^1 to below 10^-3 over 224 epochs (log scale).

45. Final results of three-layer network learning:

  Inputs (x1, x2)   Desired output yd   Actual output y5   Error e
  (1, 1)            0                   0.0155             -0.0155
  (0, 1)            1                   0.9849              0.0151
  (1, 0)            1                   0.9849              0.0151
  (0, 0)            0                   0.0175             -0.0175

  Sum of squared errors: 0.0010
46. Figure: Network represented by the McCulloch-Pitts model for solving the Exclusive-OR operation; the same 2-2-1 topology with weights of +1.0 from the inputs to both hidden neurons, weights of -2.0 (from neuron 3) and +1.0 (from neuron 4) to the output neuron, and thresholds +1.5, +0.5 and +0.5 represented as weights from fixed inputs of -1.

47. Decision boundaries. Figure: (a) decision boundary constructed by hidden neuron 3: x1 + x2 - 1.5 = 0; (b) decision boundary constructed by hidden neuron 4: x1 + x2 - 0.5 = 0; (c) decision boundaries constructed by the complete three-layer network.
48. Accelerated learning in multilayer neural networks. A multilayer network learns much faster when the sigmoidal activation function is represented by a hyperbolic tangent:

$$Y^{tanh} = \frac{2a}{1 + e^{-bX}} - a$$

where a and b are constants. Suitable values for a and b are a = 1.716 and b = 0.667.

49. We can also accelerate training by including a momentum term in the delta rule:

$$\Delta w_{jk}(p) = \beta \cdot \Delta w_{jk}(p-1) + \alpha \cdot y_j(p) \cdot \delta_k(p)$$

where β is a positive number (0 ≤ β < 1) called the momentum constant. Typically, the momentum constant is set to 0.95. This equation is called the generalised delta rule.
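As a minimal sketch of the generalised delta rule (the function name and argument layout are mine; β = 0.95 as suggested above):

```python
def momentum_update(w, dw_prev, y_j, delta_k, alpha=0.1, beta=0.95):
    """Generalised delta rule: the new weight change keeps a fraction beta of the
    previous change in addition to the usual alpha * y_j * delta_k term."""
    dw = beta * dw_prev + alpha * y_j * delta_k
    return w + dw, dw   # return the updated weight and the change, reused at the next step
```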
50. Figure: Learning with momentum for operation Exclusive-OR; training converged in 126 epochs (sum-squared error shown on a log scale).
51. Learning with adaptive learning rate. To accelerate the convergence and yet avoid the danger of instability, we can apply two heuristics. Heuristic 1: if the change of the sum of squared errors has the same algebraic sign for several consecutive epochs, then the learning rate parameter, α, should be increased. Heuristic 2: if the algebraic sign of the change of the sum of squared errors alternates for several consecutive epochs, then the learning rate parameter, α, should be decreased.

52. Adapting the learning rate requires some changes in the back-propagation algorithm. If the sum of squared errors at the current epoch exceeds the previous value by more than a predefined ratio (typically 1.04), the learning rate parameter is decreased (typically by multiplying by 0.7) and new weights and thresholds are calculated. If the error is less than the previous one, the learning rate is increased (typically by multiplying by 1.05).
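A minimal sketch of that epoch-level rule (the function name is mine; the 1.04, 0.7 and 1.05 constants come from the slide):

```python
def adapt_learning_rate(alpha, sse, sse_prev, ratio=1.04, down=0.7, up=1.05):
    """Return (new_alpha, keep_update). If the error grew by more than `ratio`,
    shrink the learning rate and signal that the epoch's weight update should be
    recomputed; if the error shrank, grow the learning rate."""
    if sse > sse_prev * ratio:
        return alpha * down, False
    if sse < sse_prev:
        return alpha * up, True
    return alpha, True
```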
53. Figure: Learning with adaptive learning rate; sum-squared error and learning rate plotted against the epoch, with training completed in 103 epochs.

54. Figure: Learning with momentum and adaptive learning rate; sum-squared error and learning rate plotted against the epoch, with training completed in 85 epochs.

55. The Hopfield Network. Neural networks were designed on analogy with the brain. The brain's memory, however, works by association. For example, we can recognise a familiar face even in an unfamiliar environment within 100-200 ms. We can also recall a complete sensory experience, including sounds and scenes, when we hear only a few bars of music. The brain routinely associates one thing with another.

56. Multilayer neural networks trained with the back-propagation algorithm are used for pattern recognition problems. However, to emulate the human memory's associative characteristics we need a different type of network: a recurrent neural network. A recurrent neural network has feedback loops from its outputs to its inputs. The presence of such loops has a profound impact on the learning capability of the network.

57. The stability of recurrent networks intrigued several researchers in the 1960s and 1970s. However, none was able to predict which network would be stable, and some researchers were pessimistic about finding a solution at all. The problem was solved only in 1982, when John Hopfield formulated the physical principle of storing information in a dynamically stable network.
58. Figure: Single-layer n-neuron Hopfield network; each neuron 1, ..., n receives inputs x1, ..., xn and produces an output y1, ..., yn that is fed back to the inputs.

59. The Hopfield network uses McCulloch and Pitts neurons with the sign activation function as its computing element:

$$Y^{sign} = \begin{cases} +1 & \text{if } X > 0 \\ -1 & \text{if } X < 0 \\ Y & \text{if } X = 0 \end{cases}$$
60. The current state of the Hopfield network is determined by the current outputs of all neurons, y1, y2, ..., yn. Thus, for a single-layer n-neuron network, the state can be defined by the state vector as:

$$\mathbf{Y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}$$

61. In the Hopfield network, synaptic weights between neurons are usually represented in matrix form as follows:

$$\mathbf{W} = \sum_{m=1}^{M} \mathbf{Y}_m \mathbf{Y}_m^{T} - M\,\mathbf{I}$$

where M is the number of states to be memorised by the network, Ym is the n-dimensional binary vector, I is the n × n identity matrix, and superscript T denotes matrix transposition.

62. Figure: Possible states for the three-neuron Hopfield network; the eight corners (±1, ±1, ±1) of a cube in (y1, y2, y3) space.
63. The stable state-vertex is determined by the weight matrix W, the current input vector X, and the threshold matrix θ. If the input vector is partially incorrect or incomplete, the initial state will converge into the stable state-vertex after a few iterations. Suppose, for instance, that our network is required to memorise two opposite states, (1, 1, 1) and (-1, -1, -1). Thus,

$$\mathbf{Y}_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \qquad \mathbf{Y}_2 = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}, \qquad \mathbf{Y}_1^{T} = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}, \qquad \mathbf{Y}_2^{T} = \begin{bmatrix} -1 & -1 & -1 \end{bmatrix}$$

where Y1 and Y2 are the three-dimensional vectors.

64. The 3 × 3 identity matrix I is

$$\mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Thus, we can now determine the weight matrix as follows:

$$\mathbf{W} = \mathbf{Y}_1 \mathbf{Y}_1^{T} + \mathbf{Y}_2 \mathbf{Y}_2^{T} - 2\,\mathbf{I} = \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix}$$

Next, the network is tested by the sequence of input vectors, X1 and X2, which are equal to the output (or target) vectors Y1 and Y2, respectively.

65. First, we activate the Hopfield network by applying the input vector X. Then, we calculate the actual output vector Y, and finally, we compare the result with the initial input vector X.

$$\mathbf{Y}_1 = sign\!\left( \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \qquad
\mathbf{Y}_2 = sign\!\left( \begin{bmatrix} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \end{bmatrix} \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}$$
66. The remaining six states are all unstable. However, stable states (also called fundamental memories) are capable of attracting states that are close to them. The fundamental memory (1, 1, 1) attracts unstable states (-1, 1, 1), (1, -1, 1) and (1, 1, -1). Each of these unstable states represents a single error, compared to the fundamental memory (1, 1, 1). The fundamental memory (-1, -1, -1) attracts unstable states (1, -1, -1), (-1, 1, -1) and (-1, -1, 1). Thus, the Hopfield network can act as an error correction network.
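A minimal NumPy sketch of this example (function names are mine): it builds the weight matrix from slide 64, confirms that the two fundamental memories are stable, and shows a single-error state being corrected.

```python
import numpy as np

def hopfield_weights(patterns):
    """W = sum_m Y_m Y_m^T - M * I, as on slide 61 (zero diagonal)."""
    Y = np.array(patterns)
    M, n = Y.shape
    return Y.T @ Y - M * np.eye(n, dtype=int)

def recall(W, x, theta=0):
    """One synchronous update with the sign activation function;
    a zero net input leaves the neuron's previous output unchanged."""
    net = W @ x - theta
    y = x.copy()
    y[net > 0] = 1
    y[net < 0] = -1
    return y

Y1 = np.array([ 1,  1,  1])
Y2 = np.array([-1, -1, -1])
W = hopfield_weights([Y1, Y2])
print(W)                                  # [[0 2 2] [2 0 2] [2 2 0]]
print(recall(W, Y1), recall(W, Y2))       # both fundamental memories are stable
print(recall(W, np.array([-1, 1, 1])))    # single-error state converges to (1, 1, 1)
```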
67. Storage capacity of the Hopfield network. Storage capacity is the largest number of fundamental memories that can be stored and retrieved correctly. The maximum number of fundamental memories Mmax that can be stored in the n-neuron recurrent network is limited by

$$M_{max} = 0.15\, n$$
68. Bidirectional associative memory (BAM). The Hopfield network represents an autoassociative type of memory: it can retrieve a corrupted or incomplete memory but cannot associate this memory with another different memory. Human memory is essentially associative. One thing may remind us of another, and that of another, and so on. We use a chain of mental associations to recover a lost memory. If we forget where we left an umbrella, we try to recall where we last had it, what we were doing, and who we were talking to. We attempt to establish a chain of associations, and thereby to restore a lost memory.
69. To associate one memory with another, we need a recurrent neural network capable of accepting an input pattern on one set of neurons and producing a related, but different, output pattern on another set of neurons. Bidirectional associative memory (BAM), first proposed by Bart Kosko, is a heteroassociative network. It associates patterns from one set, set A, to patterns from another set, set B, and vice versa. Like a Hopfield network, the BAM can generalise and also produce correct outputs despite corrupted or incomplete inputs.

70. Figure: BAM operation. (a) Forward direction: the input layer (x1(p), ..., xn(p)) drives the output layer (y1(p), ..., ym(p)). (b) Backward direction: the output layer drives the input layer, producing x1(p+1), ..., xn(p+1).

71. The basic idea behind the BAM is to store pattern pairs so that when n-dimensional vector X from set A is presented as input, the BAM recalls m-dimensional vector Y from set B, but when Y is presented as input, the BAM recalls X.
72. To develop the BAM, we need to create a correlation matrix for each pattern pair we want to store. The correlation matrix is the matrix product of the input vector X and the transpose of the output vector YT. The BAM weight matrix is the sum of all correlation matrices, that is,

$$\mathbf{W} = \sum_{m=1}^{M} \mathbf{X}_m \mathbf{Y}_m^{T}$$

where M is the number of pattern pairs to be stored in the BAM.
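A minimal NumPy sketch of building that weight matrix and recalling in both directions (the function names and the bipolar example pair are mine, chosen only to illustrate the call pattern):

```python
import numpy as np

def bam_weights(pairs):
    """W = sum_m X_m Y_m^T: one correlation matrix per stored pattern pair."""
    return sum(np.outer(x, y) for x, y in pairs)

def recall_forward(W, x):     # layer A -> layer B
    return np.where(W.T @ x >= 0, 1, -1)

def recall_backward(W, y):    # layer B -> layer A
    return np.where(W @ y >= 0, 1, -1)

# Hypothetical bipolar pattern pair, not taken from the slides
x1 = np.array([ 1, -1,  1, -1])
y1 = np.array([ 1,  1, -1])
W = bam_weights([(x1, y1)])
print(recall_forward(W, x1))    # recovers y1
print(recall_backward(W, y1))   # recovers x1
```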
73. Stability and storage capacity of the BAM. The BAM is unconditionally stable. This means that any set of associations can be learned without risk of instability. The maximum number of associations to be stored in the BAM should not exceed the number of neurons in the smaller layer. The more serious problem with the BAM is incorrect convergence. The BAM may not always produce the closest association. In fact, a stable association may be only slightly related to the initial input vector.