Some more Artificial Intelligence
• Neural Networks (please read Chapter 19)
• Genetic Algorithms
• Genetic Programming
• Behavior-Based Systems
Background
- Neural Networks can be:
- Biological models
- Artificial models
- Desire to produce artificial systems capable of
sophisticated computations similar to the human
brain.
Biological analogy and some main
ideas
• The brain is composed of a mass of interconnected
neurons
– each neuron is connected to many other neurons
• Neurons transmit signals to each other
• Whether a signal is transmitted is an all-or-nothing
event (the electrical potential in the cell body of the
neuron is thresholded)
• Whether a signal is sent, depends on the strength of
the bond (synapse) between two neurons
How Does the Brain Work? (1)
NEURON
- The cell that performs information processing in the brain.
- Fundamental functional unit of all nervous system tissue.
Each consists of:
SOMA, DENDRITES, AXON, and SYNAPSE.
How Does the Brain Work? (2)
Brain vs. Digital Computers (1)
- Computers require hundreds of cycles to simulate
a firing of a neuron.
- The brain can fire all the neurons in a single step.
Parallelism
- Serial computers require billions of cycles to
perform some tasks but the brain takes less than
a second.
e.g. Face Recognition
Comparison of Brain and Computer

                      Human                 Computer
Processing elements   100 billion neurons   10 million gates
Interconnects         1,000 per neuron      A few
Cycles per second     1,000                 500 million
2X improvement        200,000 years         2 years
Brain vs. Digital Computers (2)
Future : combine parallelism of the brain with the
switching speed of the computer.
History
• 1943: McCulloch & Pitts show that neurons can be
combined to construct a Turing machine (using
ANDs, Ors, & NOTs)
• 1958: Rosenblatt shows that perceptrons will
converge if what they are trying to learn can be
represented
• 1969: Minsky & Papert showed the limitations of
perceptrons, killing research for a decade
• 1985: backpropagation algorithm revitalizes the field
Definition of Neural Network
A Neural Network is a system composed of
many simple processing elements operating in
parallel which can acquire, store, and utilize
experiential knowledge.
What is an Artificial Neural Network?
Neurons vs. Units (1)
- Each element of NN is a node called unit.
- Units are connected by links.
- Each link has a numeric weight.
Neurons vs units (2)
Real neuron is far
away from our
simplified model - unit
Chemistry,
biochemistry,
quantumness.
Computing Elements
A typical unit:
Planning in building a Neural Network
Decisions must be taken on the following:
- The number of units to use.
- The type of units required.
- Connection between the units.
How an NN learns a task: issues to be discussed
- Initializing the weights.
- Use of a learning algorithm.
- Set of training examples.
- Encode the examples as inputs.
- Convert output into meaningful results.
Neural Network Example
Figure 19.7. A very simple, two-layer, feed-forward network with
two inputs, two hidden nodes, and one output node.
Simple Computations in this network
- There are 2 types of components: Linear and
Non-linear.
- Linear: Input function
- calculate weighted sum of all inputs.
- Non-linear: Activation function
- transform sum into activation level.
Calculations
Input function: in_i = Σ_j W_j,i a_j (the weighted sum of the unit's inputs)
Activation function g: a_i = g(in_i)
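As a concrete illustration, here is that two-step computation in Python (a minimal sketch: the sigmoid choice of g and the example weights are illustrative assumptions, not values from the figure):

```python
import math

def unit_output(weights, inputs):
    """One unit: linear input function, then non-linear activation g."""
    total = sum(w * a for w, a in zip(weights, inputs))  # in = sum_j W_j * a_j
    return 1.0 / (1.0 + math.exp(-total))                # a = g(in), sigmoid here

print(unit_output([0.5, -0.3], [1.0, 0.8]))  # illustrative weights and inputs
```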
A Computing Unit.
Now in more detail but for a
particular model only
Figure 19.4. A unit
Activation Functions
- Use different functions to obtain different models.
- 3 most common choices:
1) Step function
2) Sign function
3) Sigmoid function
- An output of 1 represents firing of a neuron down
the axon.
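A minimal Python sketch of the three choices (thresholds and test points are illustrative):

```python
import math

def step(x, t=0.0):
    """Step function: output 1 (the neuron fires) once x reaches threshold t."""
    return 1 if x >= t else 0

def sign(x):
    """Sign function: +1 or -1."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Sigmoid function: a smooth, differentiable relative of the step."""
    return 1.0 / (1.0 + math.exp(-x))

for x in (-2.0, 0.0, 2.0):
    print(x, step(x), sign(x), round(sigmoid(x), 3))
```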
3 Activation Functions (the step function is the one used in perceptrons)
Are current computers a wrong
model of thinking?
• Humans can’t be doing the sequential
analysis we are studying
– Neurons are a million times slower than gates
– Humans don’t need to be rebooted or
debugged when one bit dies.
100-step program constraint
• Neurons operate on the order of 10^-3 seconds
• Humans can process information in a fraction
of a second (face recognition)
• Hence, at most a couple of hundred serial
operations are possible
• That is, even in parallel, no “chain of
reasoning” can involve more than 100-1000
steps
Standard structure of an artificial neural
network
• Input units
– represents the input as a fixed-length vector of numbers
(user defined)
• Hidden units
– calculate thresholded weighted sums of the inputs
– represent intermediate calculations that the network
learns
• Output units
– represent the output as a fixed length vector of numbers
Representations
• Logic rules
– If color = red ^ shape = square then +
• Decision trees
– tree
• Nearest neighbor
– training examples
• Probabilities
– table of probabilities
• Neural networks
– inputs in [0, 1]
Can be used for all of
them
Many variants exist
Notation
Notation (cont.)
Operation of individual units
• Outputi = f(Wi,j * Inputj + Wi,k * Inputk + Wi,l
* Inputl)
– where f(x) is a threshold (activation) function
– f(x) = 1 / (1 + e^-x)
• “sigmoid”
– f(x) = step function
Artificial Neural
Networks
Units in Action
- Individual units representing Boolean functions
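For example, step-threshold units can implement Boolean gates. A small sketch with one common choice of weights and thresholds (an assumption, not necessarily the figure's values):

```python
def ltu(weights, inputs, t):
    """Linear threshold unit: fires iff the weighted sum reaches threshold t."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= t else 0

def AND(a, b): return ltu([1, 1], [a, b], t=1.5)
def OR(a, b):  return ltu([1, 1], [a, b], t=0.5)
def NOT(a):    return ltu([-1], [a], t=-0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```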
Network Structures
Feed-forward neural nets:
Links can only go in one direction.
Recurrent neural nets:
Links can go anywhere and form arbitrary
topologies.
Feed-forward Networks
- Arranged in layers.
- Each unit is linked only to units in the next layer.
- No units are linked within the same layer, back to a
previous layer, or skipping a layer.
- Computations can proceed uniformly from input to
output units.
- No internal state exists.
Feed-Forward Example
I1
I2
t = -0.5W24= -1
H4
W46 = 1
t = 1.5
H6
W67 = 1
t = 0.5
I1
t = -0.5W13 = -1
H3
W35 = 1
t = 1.5
H5
O7
W57 = 1
W25 = 1
W16 = 1
Inputs skip the layer in this case
Multi-layer Networks and Perceptrons
- Have one or more
layers of hidden units.
- With two possibly
very large hidden
layers, it is possible
to implement any
function.
- Networks without hidden
layer are called
perceptrons.
- Perceptrons are very
limited in what they can
represent, but this makes
their learning problem
much simpler.
Recurrent Network (1)
- The brain is not and cannot be a feed-forward network.
- Allows activation to be fed back to the previous unit.
- Internal state is stored in its activation level.
- Can become unstable
- Can oscillate.
Recurrent Network (2)
- May take long time to compute a stable output.
- Learning process is much more difficult.
- Can implement more complex designs.
- Can model certain systems with internal states.
Perceptrons
- First studied in the late 1950s.
- Also known as Layered Feed-Forward Networks.
- The only efficient learning element at that time was
for single-layered networks.
- Today, used as a synonym for a single-layer,
feed-forward network.
Fig. 19.8. Perceptrons
Perceptrons
Sigmoid Perceptron
Perceptron learning rule
• Teacher specifies the desired output for a given input
• Network calculates what it thinks the output should be
• Network changes its weights in proportion to the error
between the desired & calculated results
• Δwi,j = α * [teacheri - outputi] * inputj
– where:
– α is the learning rate;
– teacheri - outputi is the error term;
– and inputj is the input activation
• wi,j = wi,j + Δwi,j
Delta rule
Adjusting perceptron weights
• Δwi,j = α * [teacheri - outputi] * inputj
• missi is (teacheri - outputi)
• Adjust each wi,j based on inputj and missi
• The table below shows the adaptation.
• Incremental learning.

            miss < 0   miss = 0   miss > 0
input < 0   +alpha     0          -alpha
input = 0   0          0          0
input > 0   -alpha     0          +alpha
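A one-step sketch of this adjustment in Python (function and variable names are hypothetical):

```python
def adjust_weights(w, inputs, teacher, output, alpha=0.1):
    """One incremental delta-rule step: w_j += alpha * miss * input_j,
    where miss = teacher - output (signs as in the table above)."""
    miss = teacher - output
    return [wj + alpha * miss * xj for wj, xj in zip(w, inputs)]

print(adjust_weights([0.2, -0.1], [1, 0], teacher=1, output=0))
```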
Node biases
• A node’s output is a weighted function of
its inputs
• What is a bias?
• How can we learn the bias value?
• Answer: treat them like just another weight
Training biases (θ)
• A node’s output:
– 1 if w1x1 + w2x2 + … + wnxn >= θ
– 0 otherwise
• Rewrite
– w1x1 + w2x2 + … + wnxn - θ >= 0
– w1x1 + w2x2 + … + wnxn + θ(-1) >= 0
• Hence, the bias is just another weight whose
activation is always -1
• Just add one more input unit to the network
topology
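Putting the delta rule and the bias-as-weight trick together, a minimal perceptron trainer (a sketch; learning OR is an illustrative choice, and any linearly separable target would do):

```python
def perceptron_output(w, inputs):
    """The threshold is folded into the weights via the fixed -1 bias input."""
    return 1 if sum(wi * xi for wi, xi in zip(w, inputs)) >= 0 else 0

def train(examples, alpha=0.1, epochs=100):
    """Delta-rule training; each example's inputs end with the -1 bias unit."""
    w = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for inputs, teacher in examples:
            miss = teacher - perceptron_output(w, inputs)
            w = [wi + alpha * miss * xi for wi, xi in zip(w, inputs)]
    return w

# Learning OR; the third input of each example is the fixed -1 bias activation.
data = [([0, 0, -1], 0), ([0, 1, -1], 1), ([1, 0, -1], 1), ([1, 1, -1], 1)]
w = train(data)
print(w, [perceptron_output(w, x) for x, _ in data])
```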
Perceptron convergence
theorem
• If a set of <input, output> pairs are
learnable (representable), the delta rule
will find the necessary weights
– in a finite number of steps
– independent of initial weights
• However, a single layer perceptron can
only learn linearly separable concepts
– it works iff gradient descent works
Linear separability
• Consider a perceptron
• Its output is
– 1, if W1X1 + W2X2 > θ
– 0, otherwise
• In terms of feature space
– hence, it can only classify examples if a line (hyperplane
more generally) can separate the positive examples
from the negative examples
What can Perceptrons Represent?
- Some complex Boolean functions can be
represented.
For example:
Majority function - will be covered in this lecture.
- Perceptrons are limited in the Boolean functions
they can represent.
Figure 19.9. Linear Separability in
Perceptrons
The Separability Problem and EXOR
trouble
AND and OR linear Separators
Separation in n-1 dimensions: example of the majority function in 3-dimensional space
Perceptrons & XOR
• XOR function
– no way to draw a line to separate the positive
from negative examples
Input1 Input2 Output
0 0 0
0 1 1
1 0 1
1 1 0
How do we compute XOR?
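One possible answer, sketched with hand-set weights (an assumption, not necessarily the assignment used in the figures): a hidden layer computes OR and AND, and the output unit fires for OR-but-not-AND.

```python
def step(x, t):
    return 1 if x >= t else 0

def xor(x1, x2):
    """XOR = OR(x1, x2) AND NOT AND(x1, x2), wired as a two-layer net."""
    h_or = step(x1 + x2, t=0.5)    # hidden unit computing OR
    h_and = step(x1 + x2, t=1.5)   # hidden unit computing AND
    return step(h_or - h_and, t=0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))
```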
Learning Linearly Separable Functions (1)
What can these functions learn?
Bad news:
- There are not many linearly separable functions.
Good news:
- There is a perceptron algorithm that will learn
any linearly separable function, given enough
training examples.
Most neural network learning algorithms, including the
perceptron learning method, follow the current-best-
hypothesis (CBH) scheme.
Learning Linearly Separable Functions (2)
- The initial network has randomly assigned weights.
- Learning is done by making small adjustments in the
weights to reduce the difference between the observed and
predicted values.
- Main difference from the logical algorithms is the need to
repeat the update phase several times in order to achieve
convergence.
- The updating process is divided into epochs.
- Each epoch updates all the weights of the network.
Learning Linearly Separable Functions (3)
Figure 19.11. The Generic Neural Network
Learning Method: adjust the weights until predicted
output values O and true values T agree
(e ranges over the set of training examples)
Examples of Feed-Forward Learning
Two types of networks were
compared for the restaurant
problem
Multi-Layer
Neural Nets
Feed Forward
Networks
2-layer Feed Forward example
Need for hidden units
• If there is one layer of enough hidden
units, the input can be recoded (perhaps
just memorized; example)
• This recoding allows any mapping to be
represented
• Problem: how can the weights of the
hidden units be trained?
XOR Solution
Majority of 11 Inputs
(any 6 or more)
Perceptron is
better than DT
on majority
Constructive
induction is even
better than NN
How often in the battlefield does a robot need to recognize a majority?
Other Examples
• Need more than a 1-layer network for:
– Parity
– Error Correction
– Connected Paths
• Neural nets do well with
– continuous inputs and outputs
• But poorly with
– logical combinations of boolean inputs
Give a DT brain to a mathematician robot and an NN brain
to a soldier robot
WillWait Restaurant example
Let us not dramatize “universal”
benchmarks too much
Here the decision tree is better than the perceptron
N-layer FeedForward
Network
• Layer 0 is input nodes
• Layers 1 to N-1 are hidden nodes
• Layer N is output nodes
• All nodes at any layer k are connected to all nodes at
layer k+1
• There are no cycles
2 Layer FF net with LTUs
• 1 output layer + 1 hidden layer
– Therefore, 2 stages to “assign reward”
• Can compute functions with convex regions
• Each hidden node acts like a perceptron,
learning a separating line
• Output units can compute intersections of half-planes
given by hidden units
Linear Threshold Units
Feed-forward NN with hidden
layer
• Braitenberg Vehicles
• Quantum Neural BV
Reactive architecture based
on NN for a simple robot
Evaluation of a feedforward NN in software is easy:
- Set the bias input neuron.
- Calculate the activations of the hidden neurons.
- Take the values from the hidden neurons and multiply them by the weights.
- Calculate the output neurons.
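A minimal Python sketch of this forward evaluation (layer sizes, weights, and the sigmoid activation are illustrative assumptions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def evaluate(inputs, w_hidden, w_output):
    """Set the -1 bias input, compute hidden activations, then take the
    hidden values times the output weights to compute the output neurons."""
    acts = list(inputs) + [-1.0]                      # bias input neuron
    hidden = [sigmoid(sum(w * a for w, a in zip(ws, acts))) for ws in w_hidden]
    hidden.append(-1.0)                               # bias for the output layer
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden))) for ws in w_output]

# 2 inputs -> 2 hidden -> 1 output; the weights are chosen arbitrarily.
print(evaluate([0.0, 1.0],
               [[0.5, -0.4, 0.1], [0.3, 0.8, -0.2]],
               [[1.0, -1.0, 0.2]]))
```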
Backpropagation
Networks
Introduction to
Backpropagation
- In 1969, a method for learning in multi-layer networks,
Backpropagation, was invented by Bryson and Ho.
- The Backpropagation algorithm is a sensible approach
for dividing the contribution of each weight.
- It works basically the same way as perceptron learning.
Backpropagation Learning Principles:
Hidden Layers and Gradients
There are two differences for the updating rule:
1) The activation of the hidden unit is used instead of the
input value.
2) The rule contains a term for the gradient of the activation
function.
Backpropagation Network
training
• 1. Initialize network with random weights
• 2. For all training cases (called examples):
– a. Present training inputs to network and
calculate output
– b. For all layers (starting with output layer,
back to input layer):
• i. Compare network output with correct output
(error function)
• ii. Adapt weights in current layer
This is what you want.
Backpropagation Learning
Details
• Method for learning weights in feed-forward (FF)
nets
• Can’t use Perceptron Learning Rule
– no teacher values are possible for hidden units
• Use gradient descent to minimize the error
– propagate deltas to adjust for errors
backward from outputs
to hidden layers
to inputs
forward
backward
Backpropagation Algorithm – Main
Idea – error in hidden layers
The ideas of the algorithm can be summarized as follows:
1. Compute the error term for the output units using the
observed error.
2. From output layer, repeat
- propagating the error term back to the previous layer
and
- updating the weights between the two layers
until the earliest hidden layer is reached.
Backpropagation Algorithm
• Initialize weights (typically random!)
• Keep doing epochs
– For each example e in training set do
• forward pass to compute
– O = neural-net-output(network,e)
– miss = (T-O) at each output unit
• backward pass to calculate deltas to weights
• update all weights
– end
• until tuning set error stops improving
Forward pass explained earlier; backward pass explained on the next slide.
Backward Pass
• Compute deltas to weights
– from hidden layer
– to output layer
• Without changing any weights (yet),
compute the actual contributions
– within the hidden layer(s)
– and compute deltas
Gradient Descent
• Think of the N weights as a point in an N-
dimensional space
• Add a dimension for the observed error
• Try to minimize your position on the “error
surface”
Error Surface
Error as function of
weights in
multidimensional space
error
weights
Gradient
• Trying to make error decrease the fastest
• Compute:
• GradE = [dE/dw1, dE/dw2, ..., dE/dwn]
• Change i-th weight by
• deltawi = -alpha * dE/dwi
• We need a derivative!
• Activation function must be continuous,
differentiable, non-decreasing, and easy to
compute
Derivatives of error for weights
Compute deltas
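A toy sketch of this update on a one-weight error surface (the quadratic E is an illustrative assumption):

```python
def descend(w, grad, alpha=0.1):
    """deltaw_i = -alpha * dE/dw_i: step against the gradient."""
    return [wi - alpha * g for wi, g in zip(w, grad)]

# Toy error surface E(w) = (w0 - 3)^2, so dE/dw0 = 2 * (w0 - 3).
w = [0.0]
for _ in range(50):
    w = descend(w, [2 * (w[0] - 3)])
print(w)  # converges toward the minimum at w0 = 3
```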
Can’t use LTU
• To effectively assign credit / blame to units
in hidden layers, we want to look at the
first derivative of the activation function
• Sigmoid function is easy to differentiate
and easy to compute forward
Linear Threshold Units Sigmoid function
Updating hidden-to-output
• We have teacher supplied desired values
• deltawji = α * aj * (Ti - Oi) * g’(ini)
= α * aj * (Ti - Oi) * Oi * (1 - Oi)
– for the sigmoid, the derivative is g’(x) = g(x) * (1 - g(x))
Here we have the general formula with the derivative (α is the
learning rate, Ti - Oi is the miss); next we use it for the sigmoid.
Updating interior weights
• Layer k units provide values to all layer
k+1 units
• “miss” is sum of misses from all units on k+1
• missj = Σ [ ai(1 - ai) (Ti - ai) wji ]
• weights coming into this unit are adjusted
based on their contribution
deltakj = α * Ik * aj * (1 - aj) * missj (for layer k+1)
Compute deltas
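A sketch of both delta computations for sigmoid units (standard textbook form; the names are hypothetical and the notation differs slightly from the miss_j formula above):

```python
def output_deltas(targets, outputs):
    """Hidden-to-output: miss_i = (T_i - O_i), times g'(in_i) = O_i * (1 - O_i)."""
    return [(t - o) * o * (1 - o) for t, o in zip(targets, outputs)]

def hidden_deltas(hidden_acts, out_deltas, w_out):
    """Interior: each hidden unit j collects the deltas it contributed to,
    weighted by w_ji, times its own derivative a_j * (1 - a_j)."""
    return [a * (1 - a) * sum(d * w[j] for d, w in zip(out_deltas, w_out))
            for j, a in enumerate(hidden_acts)]

d_out = output_deltas([1.0], [0.7])
print(d_out, hidden_deltas([0.6, 0.4], d_out, [[0.5, -0.3]]))
```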
How do we pick α?
1. Tuning set, or
2. Cross validation, or
3. Small for slow, conservative learning
How many hidden layers?
• Usually just one (i.e., a 2-layer net)
• How many hidden units in the layer?
– Too few ==> can’t learn
– Too many ==> poor generalization
How big a training set?
• Determine your target error rate, e
• Success rate is 1- e
• Typical training set approx. n/e, where n is the
number of weights in the net
• Example:
– e = 0.1, n = 80 weights
– training set size 800
Trained until 95% correct classification on the training set,
the net should typically produce 90% correct classification
on the testing set.
Examples of Backpropagation
Learning
In the restaurant
problem NN was
worse than the
decision tree
Error
decreases
with number
of epochs
Decision tree still
better for
restaurant
example
Examples of Backpropagation Learning
Majority example,
perceptron better
Restaurant
example, DT
better
Backpropagation
Learning Math
See next slide for explanation
Visualization of
Backpropagation
learning
Backprop output layer
bias neuron in input layer
Bias Neurons in Backpropagation
Learning
Training pairs
Software for Backpropagation Learning
Calculate difference to
desired output
Calculate total error
Run the network forward (explained earlier).
This routine calculates the error for backpropagation.
Update output weights
Software for Backpropagation Learning
continuation
Calculate hidden difference values
Update input weights
Return total error
Here we do not use
alpha, the learning rate
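Tying the annotated routines together, here is a self-contained sketch of such a trainer on XOR (the 2-2-1 architecture, learning rate, and seed are illustrative assumptions; this is not the original course software):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(alpha=0.5, epochs=5000, seed=1):
    rnd = random.Random(seed)
    w_h = [[rnd.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # incl. bias weight
    w_o = [rnd.uniform(-1, 1) for _ in range(3)]                      # incl. bias weight
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    for _ in range(epochs):
        total_error = 0.0
        for inputs, target in data:
            x = list(inputs) + [-1.0]  # run network forward, with the bias input
            h = [sigmoid(sum(w * a for w, a in zip(ws, x))) for ws in w_h]
            hb = h + [-1.0]
            o = sigmoid(sum(w * a for w, a in zip(w_o, hb)))
            miss = target - o          # difference to the desired output
            total_error += miss ** 2   # accumulate total error
            d_o = miss * o * (1 - o)   # output difference value
            d_h = [h[j] * (1 - h[j]) * d_o * w_o[j] for j in range(2)]  # hidden difference values
            w_o = [w + alpha * d_o * a for w, a in zip(w_o, hb)]        # update output weights
            w_h = [[w + alpha * d_h[j] * a for w, a in zip(w_h[j], x)]
                   for j in range(2)]                                   # update input weights
    return total_error

print(train_xor())  # usually small; XOR can stall in a local minimum for some seeds
```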
The general Backpropagation Algorithm for updating weights in a multilayer network
Run network to
calculate its
output for this
example
Go through all
examples
Compute the
error in output
Update weights
to output layer
Compute error in
each hidden layer
Update weights in
each hidden layer
Repeat until
convergent
Return learned network
Here we use alpha, the
learning rate
Examples and
Applications
of ANN
Neural Network in Practice
NNs are used for classification and function approximation
or mapping problems that are:
- Tolerant of some imprecision.
- Rich in available training data.
- Not easily captured by hard-and-fast rules.
NETalk (1987)
• Mapping character strings into phonemes so they
can be pronounced by a computer
• A neural network was trained to pronounce each
letter in a word in a sentence, given the three
letters before and the three letters after it in a window
• The output was the correct phoneme
• Results
– 95% accuracy on the training data
– 78% accuracy on the test set
Other Examples
• Neurogammon (Tesauro & Sejnowski, 1989)
– Backgammon learning program
• Speech Recognition (Waibel, 1989)
• Character Recognition (LeCun et al., 1989)
• Face Recognition (Mitchell)
ALVINN
• Steer a van down the road
– 2-layer feedforward
• using backpropagation for learning
– Raw input is a 480 x 512 pixel image, 15 times per sec
– Color image preprocessed into 960 input units
– 4 hidden units
– 30 output units, each is a steering direction
Neural Network Approaches
ALVINN - Autonomous Land Vehicle In a Neural Network
Learning on-the-fly
• ALVINN learned as the vehicle
traveled
– initially by observing a human
driving
– learns from its own driving by
watching for future corrections
– never saw bad driving
• didn’t know what was
dangerous, NOT correct
• computes alternate views of
the road (rotations, shifts, and
fill-ins) to use as “bad”
examples
– keeps a buffer pool of 200 pretty
old examples to avoid overfitting
to only the most recent images
Feed-forward vs. Interactive
Nets
• Feed-forward
– activation propagates in one direction
– We usually focus on this
• Interactive
– activation propagates forward & backwards
– propagation continues until equilibrium is reached in
the network
– We do not discuss these networks here; training is
complex and they may be unstable.
Ways of learning with an ANN
• Add nodes & connections
• Subtract nodes & connections
• Modify connection weights
– current focus
– can simulate first two
• I/O pairs:
– given the inputs, what should the output be?
[“typical” learning problem]
More Neural Network
Applications
- May provide a model for massively parallel computation.
- A more successful approach to “parallelizing” traditional
serial algorithms.
- Can compute any computable function.
- Can do everything a normal digital computer can do.
- Can do even more under some impractical assumptions.
Neural Network Approaches to driving
- Developed in 1993.
- Performs driving with
Neural Networks.
- An intelligent VLSI image
sensor for road following.
- Learns to filter out image
details not relevant to
driving.
Hidden layer
Output units
Input units
• Use special hardware: ASIC, FPGA, analog
Neural Network Approaches
Input Array, Hidden Units, Output Units
Actual Products Available
ex1. Enterprise Miner:
- A single multi-layered feed-forward neural network.
- Provides business solutions for data mining.
ex2. Nestor:
- Uses Nestor Learning System (NLS).
- Several multi-layered feed-forward neural networks.
- Intel has made such a chip, the Ni1000, in VLSI technology.
Ex1. Software tool - Enterprise Miner
- Based on the SEMMA (Sample, Explore, Modify, Model,
Assess) methodology.
- Statistical tools include :
Clustering, decision trees, linear and logistic
regression and neural networks.
- Data preparation tools include :
Outlier detection, variable transformation, random
sampling, and partition of data sets (into training,
testing and validation data sets).
Ex 2. Hardware Tool - Nestor
- With low connectivity within each layer.
- Minimized connectivity within each layer results in rapid
training and efficient memory utilization, ideal for VLSI.
- Composed of multiple neural networks, each specializing
in a subset of information about the input patterns.
- Real-time operation without the need for special computers
or custom DSP hardware platforms.
• Software exists.
Summary
- A neural network is a computational model that simulates
some properties of the human brain.
- The connections and nature of units determine the
behavior of a neural network.
- Perceptrons are feed-forward networks that can only
represent linearly separable functions.
Summary
- Given enough units, any function can be represented
by multi-layer feed-forward networks.
- Backpropagation learning works on multi-layer
feed-forward networks.
- Neural Networks are widely used in developing
artificial learning systems.
References
- Russell, S. and P. Norvig (1995). Artificial Intelligence: A
Modern Approach. Upper Saddle River, NJ: Prentice
Hall.
- Sarle, W.S., ed. (1997), Neural Network FAQ, part 1 of 7:
Introduction, periodic posting to the Usenet newsgroup
comp.ai.neural-nets,
URL: ftp://ftp.sas.com/pub/neural/FAQ.html
Sources
Eddy Li, Eric Wong, Martin Ho, Kitty Wong

Más contenido relacionado

La actualidad más candente

Dr. kiani artificial neural network lecture 1
Dr. kiani artificial neural network lecture 1Dr. kiani artificial neural network lecture 1
Dr. kiani artificial neural network lecture 1Parinaz Faraji
 
Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Randa Elanwar
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkDessy Amirudin
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSMohammed Bennamoun
 
Neural network
Neural networkNeural network
Neural networkSilicon
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkPratik Aggarwal
 
Introduction to Neural networks (under graduate course) Lecture 3 of 9
Introduction to Neural networks (under graduate course) Lecture 3 of 9Introduction to Neural networks (under graduate course) Lecture 3 of 9
Introduction to Neural networks (under graduate course) Lecture 3 of 9Randa Elanwar
 
Artificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationArtificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationMohammed Bennamoun
 
Artificial neural network model & hidden layers in multilayer artificial neur...
Artificial neural network model & hidden layers in multilayer artificial neur...Artificial neural network model & hidden layers in multilayer artificial neur...
Artificial neural network model & hidden layers in multilayer artificial neur...Muhammad Ishaq
 
Ann by rutul mehta
Ann by rutul mehtaAnn by rutul mehta
Ann by rutul mehtaRutul Mehta
 
Neural networks
Neural networksNeural networks
Neural networksSlideshare
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkKnoldus Inc.
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkPrakash K
 
Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Randa Elanwar
 
Artificial neural networks (2)
Artificial neural networks (2)Artificial neural networks (2)
Artificial neural networks (2)sai anjaneya
 

La actualidad más candente (20)

Dr. kiani artificial neural network lecture 1
Dr. kiani artificial neural network lecture 1Dr. kiani artificial neural network lecture 1
Dr. kiani artificial neural network lecture 1
 
Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
 
Neural network
Neural networkNeural network
Neural network
 
Neural network
Neural networkNeural network
Neural network
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Introduction to Neural networks (under graduate course) Lecture 3 of 9
Introduction to Neural networks (under graduate course) Lecture 3 of 9Introduction to Neural networks (under graduate course) Lecture 3 of 9
Introduction to Neural networks (under graduate course) Lecture 3 of 9
 
Artificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationArtificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computation
 
Neural networks
Neural networksNeural networks
Neural networks
 
Artificial neural network model & hidden layers in multilayer artificial neur...
Artificial neural network model & hidden layers in multilayer artificial neur...Artificial neural network model & hidden layers in multilayer artificial neur...
Artificial neural network model & hidden layers in multilayer artificial neur...
 
Ann by rutul mehta
Ann by rutul mehtaAnn by rutul mehta
Ann by rutul mehta
 
Neural networks
Neural networksNeural networks
Neural networks
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Unit+i
Unit+iUnit+i
Unit+i
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
 
Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9
 
Artificial neural networks (2)
Artificial neural networks (2)Artificial neural networks (2)
Artificial neural networks (2)
 

Similar a 2011 0480.neural-networks

Artificial neural network by arpit_sharma
Artificial neural network by arpit_sharmaArtificial neural network by arpit_sharma
Artificial neural network by arpit_sharmaEr. Arpit Sharma
 
Feed forward back propogation algorithm .pptx
Feed forward back propogation algorithm .pptxFeed forward back propogation algorithm .pptx
Feed forward back propogation algorithm .pptxneelamsanjeevkumar
 
Basics of Artificial Neural Network
Basics of Artificial Neural Network Basics of Artificial Neural Network
Basics of Artificial Neural Network Subham Preetam
 
neuralnetwork.pptx
neuralnetwork.pptxneuralnetwork.pptx
neuralnetwork.pptxSherinRappai
 
artificialneuralnetwork-130409001108-phpapp02 (2).pptx
artificialneuralnetwork-130409001108-phpapp02 (2).pptxartificialneuralnetwork-130409001108-phpapp02 (2).pptx
artificialneuralnetwork-130409001108-phpapp02 (2).pptxREG83NITHYANANTHANN
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cseNaveenBhajantri1
 
Artificial Neural Networks for NIU session 2016 17
Artificial Neural Networks for NIU session 2016 17 Artificial Neural Networks for NIU session 2016 17
Artificial Neural Networks for NIU session 2016 17 Prof. Neeta Awasthy
 
Machine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural NetworksMachine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural NetworksFrancesco Collova'
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networksmadhu sudhakar
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101AMIT KUMAR
 
lecture11_Artificial neural networks.ppt
lecture11_Artificial neural networks.pptlecture11_Artificial neural networks.ppt
lecture11_Artificial neural networks.pptj7757652020
 
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...DurgadeviParamasivam
 
Multilayer Perceptron Neural Network MLP
Multilayer Perceptron Neural Network MLPMultilayer Perceptron Neural Network MLP
Multilayer Perceptron Neural Network MLPAbdullah al Mamun
 

Similar a 2011 0480.neural-networks (20)

Neural
NeuralNeural
Neural
 
Artificial neural network by arpit_sharma
Artificial neural network by arpit_sharmaArtificial neural network by arpit_sharma
Artificial neural network by arpit_sharma
 
Feed forward back propogation algorithm .pptx
Feed forward back propogation algorithm .pptxFeed forward back propogation algorithm .pptx
Feed forward back propogation algorithm .pptx
 
7 nn1-intro.ppt
7 nn1-intro.ppt7 nn1-intro.ppt
7 nn1-intro.ppt
 
UNIT 5-ANN.ppt
UNIT 5-ANN.pptUNIT 5-ANN.ppt
UNIT 5-ANN.ppt
 
19_Learning.ppt
19_Learning.ppt19_Learning.ppt
19_Learning.ppt
 
Basics of Artificial Neural Network
Basics of Artificial Neural Network Basics of Artificial Neural Network
Basics of Artificial Neural Network
 
neuralnetwork.pptx
neuralnetwork.pptxneuralnetwork.pptx
neuralnetwork.pptx
 
neuralnetwork.pptx
neuralnetwork.pptxneuralnetwork.pptx
neuralnetwork.pptx
 
Deep learning
Deep learningDeep learning
Deep learning
 
artificialneuralnetwork-130409001108-phpapp02 (2).pptx
artificialneuralnetwork-130409001108-phpapp02 (2).pptxartificialneuralnetwork-130409001108-phpapp02 (2).pptx
artificialneuralnetwork-130409001108-phpapp02 (2).pptx
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cse
 
Artificial Neural Networks for NIU session 2016 17
Artificial Neural Networks for NIU session 2016 17 Artificial Neural Networks for NIU session 2016 17
Artificial Neural Networks for NIU session 2016 17
 
Machine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural NetworksMachine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural Networks
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networks
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101
 
lecture11_Artificial neural networks.ppt
lecture11_Artificial neural networks.pptlecture11_Artificial neural networks.ppt
lecture11_Artificial neural networks.ppt
 
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
 
Multilayer Perceptron Neural Network MLP
Multilayer Perceptron Neural Network MLPMultilayer Perceptron Neural Network MLP
Multilayer Perceptron Neural Network MLP
 
02 Fundamental Concepts of ANN
02 Fundamental Concepts of ANN02 Fundamental Concepts of ANN
02 Fundamental Concepts of ANN
 

Último

Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VDineshKumar4165
 
School management system project Report.pdf
School management system project Report.pdfSchool management system project Report.pdf
School management system project Report.pdfKamal Acharya
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptxJIT KUMAR GUPTA
 
Minimum and Maximum Modes of microprocessor 8086
Minimum and Maximum Modes of microprocessor 8086Minimum and Maximum Modes of microprocessor 8086
Minimum and Maximum Modes of microprocessor 8086anil_gaur
 
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best ServiceTamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Servicemeghakumariji156
 
Online food ordering system project report.pdf
Online food ordering system project report.pdfOnline food ordering system project report.pdf
Online food ordering system project report.pdfKamal Acharya
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayEpec Engineered Technologies
 
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKARHAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKARKOUSTAV SARKAR
 
Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaOmar Fathy
 
notes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptnotes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptMsecMca
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXssuser89054b
 
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Arindam Chakraborty, Ph.D., P.E. (CA, TX)
 
Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.Kamal Acharya
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTbhaskargani46
 
2016EF22_0 solar project report rooftop projects
2016EF22_0 solar project report rooftop projects2016EF22_0 solar project report rooftop projects
2016EF22_0 solar project report rooftop projectssmsksolar
 
Bridge Jacking Design Sample Calculation.pptx
Bridge Jacking Design Sample Calculation.pptxBridge Jacking Design Sample Calculation.pptx
Bridge Jacking Design Sample Calculation.pptxnuruddin69
 
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptxA CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptxmaisarahman1
 

Último (20)

Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - V
 
School management system project Report.pdf
School management system project Report.pdfSchool management system project Report.pdf
School management system project Report.pdf
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
 
Minimum and Maximum Modes of microprocessor 8086
Minimum and Maximum Modes of microprocessor 8086Minimum and Maximum Modes of microprocessor 8086
Minimum and Maximum Modes of microprocessor 8086
 
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best ServiceTamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
 
Online food ordering system project report.pdf
Online food ordering system project report.pdfOnline food ordering system project report.pdf
Online food ordering system project report.pdf
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power Play
 
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKARHAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
HAND TOOLS USED AT ELECTRONICS WORK PRESENTED BY KOUSTAV SARKAR
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
 
Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS Lambda
 
notes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptnotes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.ppt
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
 
Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPT
 
Integrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - NeometrixIntegrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - Neometrix
 
2016EF22_0 solar project report rooftop projects
2016EF22_0 solar project report rooftop projects2016EF22_0 solar project report rooftop projects
2016EF22_0 solar project report rooftop projects
 
Bridge Jacking Design Sample Calculation.pptx
Bridge Jacking Design Sample Calculation.pptxBridge Jacking Design Sample Calculation.pptx
Bridge Jacking Design Sample Calculation.pptx
 
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptxA CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
 

2011 0480.neural-networks

  • 1. Some more Artificial Intelligence • Neural Networks please read chapter 19. • Genetic Algorithms • Genetic Programming • Behavior-Based Systems
  • 2. Background - Neural Networks can be : - Biological models - Artificial models - Desire to produce artificial systems capable of sophisticated computations similar to the human brain.
  • 3. Biological analogy and some main ideas • The brain is composed of a mass of interconnected neurons – each neuron is connected to many other neurons • Neurons transmit signals to each other • Whether a signal is transmitted is an all-or-nothing event (the electrical potential in the cell body of the neuron is thresholded) • Whether a signal is sent, depends on the strength of the bond (synapse) between two neurons
  • 4. How Does the Brain Work ? (1) NEURON - The cell that performs information processing in the brain. - Fundamental functional unit of all nervous system tissue.
  • 5. Each consists of : SOMA, DENDRITES, AXON, and SYNAPSE. How Does the Brain Work ? (2)
  • 6. Brain vs. Digital Computers (1) - Computers require hundreds of cycles to simulate a firing of a neuron. - The brain can fire all the neurons in a single step. Parallelism - Serial computers require billions of cycles to perform some tasks but the brain takes less than a second. e.g. Face Recognition
  • 7. Comparison of Brain and computer Human Computer Processing Elements 100 Billion neurons 10 Million gates Interconnects 1000 per neuron A few Cycles per sec 1000 500 Million 2X improvement 200,000 Years 2 Years
  • 8. Brain vs. Digital Computers (2) Future : combine parallelism of the brain with the switching speed of the computer.
  • 9. History • 1943: McCulloch & Pitts show that neurons can be combined to construct a Turing machine (using ANDs, Ors, & NOTs) • 1958: Rosenblatt shows that perceptrons will converge if what they are trying to learn can be represented • 1969: Minsky & Papert showed the limitations of perceptrons, killing research for a decade • 1985: backpropagation algorithm revitalizes the field
  • 10. Definition of Neural Network A Neural Network is a system composed of many simple processing elements operating in parallel which can acquire, store, and utilize experiential knowledge.
  • 12. Neurons vs. Units (1) - Each element of NN is a node called unit. - Units are connected by links. - Each link has a numeric weight.
  • 13. Neurons vs units (2) Real neuron is far away from our simplified model - unit Chemistry, biochemistry, quantumness.
  • 15. Planning in building a Neural Network Decisions must be taken on the following: - The number of units to use. - The type of units required. - Connection between the units.
  • 16. How NN learns a task. Issues to be discussed - Initializing the weights. - Use of a learning algorithm. - Set of training examples. - Encode the examples as inputs. - Convert output into meaningful results.
  • 17. Neural Network Example Figure 19.7. A very simple, two-layer, feed-forward network with two inputs, two hidden nodes, and one output node.
  • 18. Simple Computations in this network - There are 2 types of components: Linear and Non-linear. - Linear: Input function - calculate weighted sum of all inputs. - Non-linear: Activation function - transform sum into activation level.
  • 20. A Computing Unit. Now in more detail but for a particular model only Figure 19.4. A unit
  • 21. Activation Functions - Use different functions to obtain different models. - 3 most common choices : 1) Step function 2) Sign function 3) Sigmoid function - An output of 1 represents firing of a neuron down the axon.
  • 24. Are current computer a wrong model of thinking? • Humans can’t be doing the sequential analysis we are studying – Neurons are a million times slower than gates – Humans don’t need to be rebooted or debugged when one bit dies.
  • 25. 100-step program constraint • Neurons operate on the order of 10-3 seconds • Humans can process information in a fraction of a second (face recognition) • Hence, at most a couple of hundred serial operations are possible • That is, even in parallel, no “chain of reasoning” can involve more than 100 -1000 steps
  • 26. Standard structure of an artificial neural network • Input units – represents the input as a fixed-length vector of numbers (user defined) • Hidden units – calculate thresholded weighted sums of the inputs – represent intermediate calculations that the network learns • Output units – represent the output as a fixed length vector of numbers
  • 27. Representations • Logic rules – If color = red ^ shape = square then + • Decision trees – tree • Nearest neighbor – training examples • Probabilities – table of probabilities • Neural networks – inputs in [0, 1] Can be used for all of them Many variants exist
  • 30. Operation of individual units • Outputi = f(Wi,j * Inputj + Wi,k * Inputk + Wi,l * Inputl) – where f(x) is a threshold (activation) function – f(x) = 1 / (1 + e-Output) • “sigmoid” – f(x) = step function
  • 32. Units in Action - Individual units representing Boolean functions
  • 33. Network Structures Feed-forward neural nets: Links can only go in one direction. Recurrent neural nets: Links can go anywhere and form arbitrary topologies.
  • 34. Feed-forward Networks - Arranged in layers. - Each unit is linked only in the unit in next layer. - No units are linked between the same layer, back to the previous layer or skipping a layer. - Computations can proceed uniformly from input to output units. - No internal state exists.
  • 35. Feed-Forward Example I1 I2 t = -0.5W24= -1 H4 W46 = 1 t = 1.5 H6 W67 = 1 t = 0.5 I1 t = -0.5W13 = -1 H3 W35 = 1 t = 1.5 H5 O7 W57 = 1 W25 = 1 W16 = 1 Inputs skip the layer in this case
  • 36. Multi-layer Networks and Perceptrons - Have one or more layers of hidden units. - With two possibly very large hidden layers, it is possible to implement any function. - Networks without hidden layer are called perceptrons. - Perceptrons are very limited in what they can represent, but this makes their learning problem much simpler.
  • 37. Recurrent Network (1) - The brain is not and cannot be a feed-forward network. - Allows activation to be fed back to the previous unit. - Internal state is stored in its activation level. - Can become unstable -Can oscillate.
  • 38. Recurrent Network (2) - May take long time to compute a stable output. - Learning process is much more difficult. - Can implement more complex designs. - Can model certain systems with internal states.
  • 39. Perceptrons - First studied in the late 1950s. - Also known as Layered Feed-Forward Networks. - The only efficient learning element at that time was for single-layered networks. - Today, used as a synonym for a single-layer, feed-forward network.
  • 43. Perceptron learning rule • Teacher specifies the desired output for a given input • Network calculates what it thinks the output should be • Network changes its weights in proportion to the error between the desired & calculated results • wi,j =  * [teacheri - outputi] * inputj – where: –  is the learning rate; – teacheri - outputi is the error term; – and inputj is the input activation • wi,j = wi,j + wi,j Delta rule
  • 44. Adjusting perceptron weights • wi,j =  * [teacheri - outputi] * inputj • missi is (teacheri - outputi) • Adjust each wi,j based on inputj and missi • The above table shows adaptation. • Incremental learning. miss<0 miss=0 miss>0 input < 0 alpha 0 -alpha input = 0 0 0 0 input > 0 -alpha 0 alpha
  • 45. Node biases • A node’s output is a weighted function of its inputs • What is a bias? • How can we learn the bias value? • Answer: treat them like just another weight
  • 46. Training biases () • A node’s output: – 1 if w1x1 + w2x2 + … + wnxn >=  – 0 otherwise • Rewrite – w1x1 + w2x2 + … + wnxn -  >= 0 – w1x1 + w2x2 + … + wnxn + (-1) >= 0 • Hence, the bias is just another weight whose activation is always -1 • Just add one more input unit to the network topology bias
  • 47. Perceptron convergence theorem • If a set of <input, output> pairs are learnable (representable), the delta rule will find the necessary weights – in a finite number of steps – independent of initial weights • However, a single layer perceptron can only learn linearly separable concepts – it works iff gradient descent works
  • 48. Linear separability • Consider a perceptron • Its output is – 1, if W1X1 + W2X2 >  – 0, otherwise • In terms of feature space – hence, it can only classify examples if a line (hyperplane more generally) can separate the positive examples from the negative examples
  • 49. What can Perceptrons Represent ? - Some complex Boolean function can be represented. For example: Majority function - will be covered in this lecture. - Perceptrons are limited in the Boolean functions they can represent.
  • 50. Figure 19.9. Linear Separability in Perceptrons The Separability Problem and EXOR trouble
  • 51. AND and OR linear Separators
  • 52. Separation in n-1 dimensions Example of 3Dimensional space majority
  • 53. Perceptrons & XOR • XOR function – no way to draw a line to separate the positive from negative examples Input1 Input2 Output 0 0 0 0 1 1 1 0 1 1 1 0
  • 54. How do we compute XOR?
  • 55. Learning Linearly Separable Functions (1) What can these functions learn ? Bad news: - There are not many linearly separable functions. Good news: - There is a perceptron algorithm that will learn any linearly separable function, given enough training examples.
  • 56. Most neural network learning algorithms, including the perceptrons learning method, follow the current-best- hypothesis (CBH) scheme. Learning Linearly Separable Functions (2)
  • 57. - Initial network has a randomly assigned weights. - Learning is done by making small adjustments in the weights to reduce the difference between the observed and predicted values. - Main difference from the logical algorithms is the need to repeat the update phase several times in order to achieve convergence. -Updating process is divided into epochs. -Each epoch updates all the weights of the process. Learning Linearly Separable Functions (3)
  • 58. Figure 19.11. The Generic Neural Network Learning Method: adjust the weights until predicted output values O and true values T agree e are examples from set examples
  • 59. Examples of Feed-Forward Learning Two types of networks were compared for the restaurant problem
  • 63. Need for hidden units • If there is one layer of enough hidden units, the input can be recoded (perhaps just memorized; example) • This recoding allows any mapping to be represented • Problem: how can the weights of the hidden units be trained?
  • 65. Majority of 11 Inputs (any 6 or more) Perceptron is better than DT on majority Constructive induction is even better than NN How many times in battlefield the robot recognizes majority?
  • 66. Other Examples • Need more than a 1-layer network for: – Parity – Error Correction – Connected Paths • Neural nets do well with – continuous inputs and outputs • But poorly with – logical combinations of boolean inputs Give DT brain to a mathematician robot and a NN brain to a soldier robot
  • 67. WillWait Restaurant example Let us not dramatize “universal” benchmarks too much Here decision tree is better than perceptron
  • 68. N-layer FeedForward Network • Layer 0 is input nodes • Layers 1 to N-1 are hidden nodes • Layer N is output nodes • All nodes at any layer k are connected to all nodes at layer k+1 • There are no cycles
  • 69. 2 Layer FF net with LTUs • 1 output layer + 1 hidden layer – Therefore, 2 stages to “assign reward” • Can compute functions with convex regions • Each hidden node acts like a perceptron, learning a separating line • Output units can compute intersections of half- planes given by hidden units Linear Threshold Units
  • 70. Feed-forward NN with hidden layer
  • 71. • Braitenberg Vehicles • Quantum Neural BV Reactive architecture based on NN for a simple robot
  • 72. Evaluation of a Feedforward NN using software is easy Calculate output neurons Calculate activation of hidden neurons Set bias input neuron Take from hidden neurons and multiply by weights
  • 74. Introduction to Backpropagation - In 1969 a method for learning in multi-layer network, Backpropagation, was invented by Bryson and Ho. - The Backpropagation algorithm is a sensible approach for dividing the contribution of each weight. - Works basically the same as perceptrons
  • 75. Backpropagation Learning Principles: Hidden Layers and Gradients There are two differences for the updating rule : 1) The activation of the hidden unit is used instead of the input value. 2) The rule contains a term for the gradient of the activation function.
  • 76. Backpropagation Network training • 1. Initialize network with random weights • 2. For all training cases (called examples): – a. Present training inputs to network and calculate output – b. For all layers (starting with output layer, back to input layer): • i. Compare network output with correct output (error function) • ii. Adapt weights in current layer This is what you want
  • 77. Backpropagation Learning Details • Method for learning weights in feed-forward (FF) nets • Can’t use Perceptron Learning Rule – no teacher values are possible for hidden units • Use gradient descent to minimize the error – propagate deltas to adjust for errors backward from outputs to hidden layers to inputs forward backward
  • 78. Backpropagation Algorithm – Main Idea – error in hidden layers The ideas of the algorithm can be summarized as follows : 1. Computes the error term for the output units using the observed error. 2. From output layer, repeat - propagating the error term back to the previous layer and - updating the weights between the two layers until the earliest hidden layer is reached.
  • 79. Backpropagation Algorithm • Initialize weights (typically random!) • Keep doing epochs – For each example e in training set do • forward pass to compute – O = neural-net-output(network,e) – miss = (T-O) at each output unit • backward pass to calculate deltas to weights • update all weights – end • until tuning set error stops improving Backward pass explained in next slideForward pass explained earlier
  • 80. Backward Pass • Compute deltas to weights – from hidden layer – to output layer • Without changing any weights (yet), compute the actual contributions – within the hidden layer(s) – and compute deltas
  • 81. Gradient Descent • Think of the N weights as a point in an N- dimensional space • Add a dimension for the observed error • Try to minimize your position on the “error surface”
  • 82. Error Surface Error as function of weights in multidimensional space error weights
  • 83. Gradient • Trying to make error decrease the fastest • Compute: • GradE = [dE/dw1, dE/dw2, . . ., dE/dwn] • Change i-th weight by • deltawi = -alpha * dE/dwi • We need a derivative! • Activation function must be continuous, differentiable, non-decreasing, and easy to compute Derivatives of error for weights Compute deltas
  • 84. Can't use LTUs • To effectively assign credit / blame to units in the hidden layers, we want to look at the first derivative of the activation function – and the step of a linear threshold unit has no useful derivative • The sigmoid function is easy to differentiate and easy to compute forward (figure: linear threshold unit vs. sigmoid function)
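  A small sketch of the sigmoid and the derivative identity used on the next slide; the printed values are only there to show that the slope peaks at the threshold and vanishes when the unit saturates.
```python
import math

def g(x):                        # sigmoid
    return 1.0 / (1.0 + math.exp(-x))

def g_prime(x):                  # g'(x) = g(x) * (1 - g(x))
    gx = g(x)
    return gx * (1.0 - gx)

print(g(0.0), g_prime(0.0))      # 0.5 0.25 -- steepest at the threshold
print(g(5.0), g_prime(5.0))      # ~0.993, ~0.0066 -- saturated, slope near 0
```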
  • 85. Updating hidden-to-output weights • We have teacher-supplied desired values • deltawji = alpha * aj * (Ti - Oi) * g'(ini) = alpha * aj * (Ti - Oi) * Oi * (1 - Oi) – for the sigmoid the derivative is g'(x) = g(x) * (1 - g(x)) • Here we have the general formula with the derivative; next we use it for the sigmoid (alpha is the learning rate; (Ti - Oi) is the “miss”)
  • 86. Updating interior weights • Layer k units provide values to all layer k+1 units • “miss” is the sum of the misses from all units on layer k+1: missj = sumi [ ai * (1 - ai) * (Ti - ai) * wji ] (summing over the units i of layer k+1) • The weights coming into this unit are adjusted based on their contribution: deltawkj = alpha * Ik * aj * (1 - aj) * missj (a combined sketch follows below)
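  Putting slides 85 and 86 together, a sketch of one full weight update for the two-layer net sketched earlier (sigmoid units; the variable names are mine, not from the slides). Note that all misses are computed before any weight changes, as slide 80 requires.
```python
import math

def g(x):                                    # sigmoid, as on slide 84
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(x, target, w_hidden, w_output, alpha=0.5):
    # Forward pass (bias inputs appended, as in the earlier sketch).
    xb = list(x) + [1.0]
    hidden = [g(sum(w * xi for w, xi in zip(ws, xb))) for ws in w_hidden]
    hb = hidden + [1.0]
    out = [g(sum(w * h for w, h in zip(ws, hb))) for ws in w_output]

    # Compute all misses before changing any weights (slide 80).
    out_miss = [(t - o) * o * (1.0 - o) for t, o in zip(target, out)]
    hid_miss = [sum(out_miss[i] * w_output[i][j] for i in range(len(out)))
                for j in range(len(hidden))]

    # Slide 85: deltawji = alpha * aj * (Ti - Oi) * Oi * (1 - Oi)
    for i, ws in enumerate(w_output):
        for j in range(len(ws)):
            ws[j] += alpha * hb[j] * out_miss[i]

    # Slide 86: deltawkj = alpha * Ik * aj * (1 - aj) * missj
    for j, ws in enumerate(w_hidden):
        for k in range(len(ws)):
            ws[k] += alpha * xb[k] * hidden[j] * (1.0 - hidden[j]) * hid_miss[j]
```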
  • 87. How do we pick alpha, the learning rate? 1. Use a tuning set, or 2. use cross-validation, or 3. keep it small for slow, conservative learning
  • 88. How many hidden layers? • Usually just one (i.e., a 2-layer net) • How many hidden units in the layer? – Too few ==> can’t learn – Too many ==> poor generalization
  • 89. How big a training set? • Determine your target error rate, e • The success rate is 1 - e • A typical training set has approx. n/e examples, where n is the number of weights in the net • Example: – e = 0.1, n = 80 weights – training-set size 800 – a net trained until it classifies 95% of the training set correctly should typically produce about 90% correct classification on the test set
  • 90. Examples of backpropagation learning (figure) • In the restaurant problem the NN was worse than the decision tree • The error decreases with the number of epochs • The decision tree is still better on the restaurant example
  • 91. Examples of backpropagation learning (figures) • Majority example: the perceptron is better • Restaurant example: the decision tree is better
  • 97. Bias neurons in backpropagation learning (figure: a bias neuron in the input layer)
  • 98. Software for backpropagation learning (figure: annotated code) • Training pairs • Run the network forward (explained earlier) • Calculate the difference to the desired output • Calculate the total error • This routine calculates the error for backpropagation
  • 99. Software for backpropagation learning, continuation (figure: annotated code) • Calculate the hidden difference values • Update the output weights • Update the input weights • Return the total error • Here we do not use alpha, the learning rate
  • 100. The general backpropagation algorithm for updating the weights in a multilayer network (figure: annotated code) • Go through all examples • Run the network to calculate its output for this example • Compute the error in the output • Update the weights to the output layer • Compute the error in each hidden layer • Update the weights in each hidden layer • Repeat until convergent • Return the learned network • Here we use alpha, the learning rate (a consolidated sketch follows below)
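  A consolidated, runnable version of this algorithm, combining the forward() and backprop_step() sketches from earlier; the stopping threshold, epoch cap, and the XOR usage example are illustrative choices of mine, not from the slides.
```python
import random

def backprop_train(examples, w_hidden, w_output, alpha=0.5,
                   epsilon=0.05, max_epochs=20000):
    for epoch in range(max_epochs):               # repeat until convergent
        total_error = 0.0
        for x, target in examples:                # go through all examples
            out = forward(x, w_hidden, w_output)  # run the network forward
            total_error += sum((t - o) ** 2 for t, o in zip(target, out))
            backprop_step(x, target, w_hidden, w_output, alpha)  # update weights
        if total_error < epsilon:
            break
    return w_hidden, w_output                     # return the learned network

# Usage: learn XOR with 2 inputs, 2 hidden units, 1 output.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [[random.uniform(-1, 1) for _ in range(3)]]
xor = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
backprop_train(xor, w_h, w_o)
```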
  • 102. Neural networks in practice NNs are used for classification and function-approximation (mapping) problems which: - are tolerant of some imprecision, - have lots of training data available, - but to which hard and fast rules cannot easily be applied.
  • 103. NETtalk (1987) • Maps character strings into phonemes so they can be pronounced by a computer • The neural network was trained to pronounce each letter in a word in a sentence, given the three letters before and the three letters after it in a window • The output was the correct phoneme • Results: – 95% accuracy on the training data – 78% accuracy on the test set
  • 104. Other Examples • Neurogammon (Tesauro & Sejnowski, 1989) – Backgammon learning program • Speech Recognition (Waibel, 1989) • Character Recognition (LeCun et al., 1989) • Face Recognition (Mitchell)
  • 105. ALVINN • Steers a van down the road – a 2-layer feedforward network – using backpropagation for learning – raw input is a 480 x 512 pixel image, 15 times per second – the color image is preprocessed into 960 input units – 4 hidden units – 30 output units, each a steering direction
  • 106. Neural network approaches: ALVINN – Autonomous Land Vehicle In a Neural Network
  • 107. Learning on-the-fly • ALVINN learned as the vehicle traveled – initially by observing a human driving – then from its own driving, by watching for future corrections – it never saw bad driving • so it didn't know what was dangerous or NOT correct • it therefore computes alternate views of the road (rotations, shifts, and fill-ins) to use as “bad” examples – it keeps a buffer pool of 200 fairly old examples to avoid overfitting to only the most recent images
  • 108. Feed-forward vs. interactive nets • Feed-forward – activation propagates in one direction – we usually focus on these • Interactive – activation propagates forward and backwards – propagation continues until equilibrium is reached in the network – we do not discuss these networks here: training is complex and they may be unstable
  • 109. Ways of learning with an ANN • Add nodes and connections • Subtract nodes and connections • Modify connection weights – the current focus – can simulate the first two • I/O pairs: given the inputs, what should the output be? [the “typical” learning problem]
  • 110. More neural network applications - May provide a model for massive parallel computation. - A more successful approach to “parallelizing” traditional serial algorithms. - Can compute any computable function. - Can do everything a normal digital computer can do. - Can do even more, under some impractical assumptions.
  • 111. Neural network approaches to driving - Developed in 1993. - Performs driving with neural networks. - An intelligent VLSI image sensor for road following. - Learns to filter out image details not relevant to driving. (figure: input units, hidden layer, output units) • Uses special hardware: ASIC, FPGA, or analog.
  • 112. Neural network approaches (figure: input array, hidden units, output units)
  • 113. Actual products available Ex. 1. Enterprise Miner: - A single multi-layered feed-forward neural network. - Provides business solutions for data mining. Ex. 2. Nestor: - Uses the Nestor Learning System (NLS). - Several multi-layered feed-forward neural networks. - Intel has made such a chip, the NE1000, in VLSI technology.
  • 114. Ex. 1. Software tool – Enterprise Miner - Based on the SEMMA (Sample, Explore, Modify, Model, Assess) methodology. - Statistical tools include: clustering, decision trees, linear and logistic regression, and neural networks. - Data-preparation tools include: outlier detection, variable transformation, random sampling, and partitioning of data sets (into training, test, and validation sets).
  • 115. Ex. 2. Hardware tool – Nestor - With low connectivity within each layer. - Minimized connectivity within each layer results in rapid training and efficient memory utilization – ideal for VLSI. - Composed of multiple neural networks, each specializing in a subset of information about the input patterns. - Real-time operation without the need for special computers or custom hardware DSP platforms. • Software versions also exist.
  • 116. Summary - A neural network is a computational model that simulates some properties of the human brain. - The connections and the nature of the units determine the behavior of a neural network. - Perceptrons are feed-forward networks that can only represent linearly separable functions.
  • 117. Summary - Given enough units, any function can be represented by a multi-layer feed-forward network. - Backpropagation learning works on multi-layer feed-forward networks. - Neural networks are widely used in developing artificial learning systems.
  • 118. References - Russell, S. and Norvig, P. (1995). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall. - Sarle, W.S., ed. (1997). Neural Network FAQ, part 1 of 7: Introduction. Periodic posting to the Usenet newsgroup comp.ai.neural-nets. URL: ftp://ftp.sas.com/pub/neural/FAQ.html
  • 119. Sources • Eddy Li • Eric Wong • Martin Ho • Kitty Wong