2. Neural Networks
Introduction
Neural networks are input-driven, output-generating collections of neurons connected by synapses, and they exist in both biological and artificial forms.
These networks can perform both simple and highly complex tasks, resulting respectively in small and enormous interconnected networks.
Tasks for these sorts of networks range from image recognition to intelligent brains.
3. Neural Networks
Introduction
Neural networks process information through neurons to
discover relations between the input they receive and the
output they are expected to give.
4. Neural Networks
Introduction
Neural networks are thus able to learn which relations (inputs) relate to which outputs. These relations are then stored either in a biological way, through axons and dendrites, or, in the case of artificial neurons, in a weighted function. Through iteration these synapses become more and more precise in the certainty of their output.
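The weighted function and iterative refinement described above can be sketched as a single artificial neuron. The perceptron-style update rule, the AND task and all values below are illustrative assumptions, not the method from these slides:

```python
def neuron_output(weights, bias, inputs):
    """The neuron's weighted function: fire (1) if the weighted sum is positive."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, n_inputs, rate=1, epochs=10):
    """Iteratively refine the synapse weights from (input, target) samples."""
    weights, bias = [0] * n_inputs, 0
    for _ in range(epochs):
        for inputs, target in samples:
            # The teacher's target drives the correction of each synapse.
            error = target - neuron_output(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Learn logical AND: after iteration, the weights encode the relation.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(samples, n_inputs=2)
print([neuron_output(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]
```

With each pass over the samples the weights drift toward values that reproduce the teacher's answers, which is the sense in which the synapses "become more precise".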
5. Neural Networks
ANNs
ANNs (Artificial Neural Networks) are usually divided into three layers, where the first two influence the last.
[Figure: a three-layer network with an INPUT layer, a HIDDEN layer and an OUTPUT layer.]
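The three-layer structure can be sketched as a forward pass from input through hidden to output. The layer sizes, the sigmoid activation and the example weights here are assumptions for illustration:

```python
import math

def forward(x, w_hidden, w_output):
    """Forward pass: INPUT -> HIDDEN -> OUTPUT."""
    # Hidden layer: weighted sum of the inputs, squashed by a sigmoid.
    hidden = [1 / (1 + math.exp(-sum(w * xi for w, xi in zip(ws, x))))
              for ws in w_hidden]
    # Output layer: weighted sum of the hidden activations.
    return [sum(w * h for w, h in zip(ws, hidden)) for ws in w_output]

# Two inputs, three hidden neurons, one output (arbitrary example weights).
w_hidden = [[0.5, -0.2], [0.3, 0.8], [-0.6, 0.1]]
w_output = [[1.0, -1.0, 0.5]]
print(forward([1.0, 2.0], w_hidden, w_output))
```

The input layer feeds the hidden layer, and only the hidden layer feeds the output, which is how "the first two influence the last".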
6. Neural Networks
ANNs
ANNs are, in a way, intelligent because they are able to interpolate outside their learning region. The more complex these networks become, the more input they receive and the more neurons are involved, the better they interpolate. However, ANNs can also generate strange synapses if they are trained incorrectly, have too many neurons in the hidden layer, or if certain input parameters influence the output directly (called "jumpers").
7. Neural Networks
Processing input
Therefore, the way the input is processed through and by the hidden layer, which contains the adaptive neurons, needs to be carefully examined, as it directly influences the validation of the generated output.
8. Neural Networks
Input typology
The learning process of neural networks is empirically based; some sort of validation of the output it generates is required. This validation ranges from instinctive reactions (biological reactions on the atomic level) to simple yes-or-no answers.
To learn, a neural network uses samples. A certain amount is used to train the network, another to test and validate.
But what is there to validate if no one, no "thing" and no biological reaction tells the network whether something is right or wrong? In other words, what is there to learn when no valid teacher is at hand?

[Illustration:
ANN: These amounts of green produce this amount of purple?
Teacher: Correct!
ANN: Answer validated and stored.]
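The sample usage described above can be sketched as a simple split of the labelled data; the 70/30 ratio and the toy data are assumptions, not values from these slides:

```python
import random

# Hypothetical labelled samples: (input, teacher's answer) pairs.
samples = [((i, i + 1), i % 2) for i in range(100)]
random.seed(0)
random.shuffle(samples)

split = int(0.7 * len(samples))     # assumed 70/30 train/validation split
train_set = samples[:split]         # used to tune the weights
validation_set = samples[split:]    # used only to check the learned relations

print(len(train_set), len(validation_set))  # 70 30
```

The teacher here is simply the stored correct answer attached to each sample; without such labels there is nothing to validate against.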
9. Neural Networks
Input typology
ANNs are thus unaware of what they are taught; they simply receive input and generate an output which something or someone has validated.
In order to achieve this, the neurons, their weighted functions and synapses are freely fine-tuned just to produce the correct output. So what actually happens inside the network remains unclear and unreadable.
10. Neural Networks
Input typology
An example: an image recognition neural network.
Goal: to recognize the whereabouts of an image.
Input processing: the image is converted to a matrix in which relations are discovered by the network. These relations range from pixel-to-pixel relations to clustered relations.
[Figure: small cluster relations vs. large cluster relations.]
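The input processing described above can be sketched on a toy image; the 4x4 matrix and the 2x2 cluster size are illustrative assumptions:

```python
image = [  # tiny grayscale image as a matrix (0 = black, 255 = white)
    [0,   0,   255, 255],
    [0,   0,   255, 255],
    [255, 255, 0,   0  ],
    [255, 255, 0,   0  ],
]

# Pixel-to-pixel relation: compare one pixel with its right neighbour.
pixel_relation = image[0][0] == image[0][1]   # True: both are 0

# Clustered relation: summarize a 2x2 block into a single value.
def cluster(matrix, r, c, size=2):
    block = [matrix[i][j] for i in range(r, r + size) for j in range(c, c + size)]
    return sum(block) / len(block)

clusters = [[cluster(image, r, c) for c in (0, 2)] for r in (0, 2)]
print(clusters)  # [[0.0, 255.0], [255.0, 0.0]]
```

Larger cluster sizes give coarser relations; the network can draw on any of these scales at once.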
11. Neural Networks
Input typology
The image shows a small matrix, but the actual network receives its input from high-resolution images, meaning that the neurons in such a network create intensely complex relations. Therefore, the validation process and the teacher require a massive amount of samples. In the article " ", the sample size was 6.5 million.
12. Neural Networks
Reducing complexity
For the project a simpler version of the ANN was chosen: no complex input matrices and millions of samples, but a simple elimination picker.
This reduces the complexity of the network and its algorithm. On the other hand, it relies even more on the teacher for validation. Therefore, wrong output, or wrongly influenced output, needs to be tested to create an acceptable confidence interval.
13. Neural Networks
Reducing complexity
An enormous downside of this method is that it relies more on the teacher than is actually necessary; the network in this way remains relatively dumb. On the other hand, making it function correctly without thousands and thousands of samples is challenging enough for this project.
14. Neural Networks
Initial Input processing
To cope with the strange or incorrect output of the ANN, some statistics will be used. The image below schematically represents how the ANN processes input.
[Figure: the input is processed in parallel by ANN 1 through ANN n (each with 20 neurons, 1 hidden layer and 1 output); a statistic over the individual outputs yields a reliable output with a 99% confidence interval (C.I.).]
Identical input data is processed simultaneously through 40 separately trained ANNs. Each output is used for a final statistical confidence interval.
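The statistical step can be sketched as follows. The slides do not specify how the 99% C.I. is computed, so a normal-approximation interval is assumed here, and the stand-in network outputs are hypothetical values:

```python
import math
import random

random.seed(1)
# Stand-in for 40 separately trained ANNs: each returns a slightly
# different output for the identical input (hypothetical values).
outputs = [10.0 + random.gauss(0, 0.5) for _ in range(40)]

n = len(outputs)
mean = sum(outputs) / n
std = math.sqrt(sum((o - mean) ** 2 for o in outputs) / (n - 1))

z = 2.576                      # z-value for a 99% confidence level
half_width = z * std / math.sqrt(n)
print(f"reliable output: {mean:.2f} +/- {half_width:.2f} (99% C.I.)")
```

Because each network is trained separately, a single badly trained network moves the combined estimate only slightly, and the interval width makes that spread visible.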