Lecture 12
Hybrid intelligent systems: Evolutionary neural networks and fuzzy evolutionary systems
(Negnevitsky, Pearson Education, 2011)

- Introduction
- Evolutionary neural networks
- Fuzzy evolutionary systems
- Summary
Evolutionary neural networks

- Although neural networks are used for solving a variety of problems, they still have some limitations.
- One of the most common is associated with neural network training. The back-propagation learning algorithm cannot guarantee an optimal solution. In real-world applications, the back-propagation algorithm might converge to a set of sub-optimal weights from which it cannot escape. As a result, the neural network is often unable to find a desirable solution to the problem at hand.
- Another difficulty is related to selecting an optimal topology for the neural network. The "right" network architecture for a particular problem is often chosen by means of heuristics, and designing a neural network topology is still more art than engineering.
- Genetic algorithms are an effective optimisation technique that can guide both weight optimisation and topology selection.
Encoding a set of weights in a chromosome

[Figure: a feedforward network with inputs x1, x2, x3 feeding neurons 1-3, hidden neurons 4-7 and output neuron 8, together with its weight matrix.]

Weight matrix (row = to neuron, column = from neuron):

To \ From    1     2     3     4     5     6     7     8
1            0     0     0     0     0     0     0     0
2            0     0     0     0     0     0     0     0
3            0     0     0     0     0     0     0     0
4            0.9  -0.3  -0.7   0     0     0     0     0
5           -0.8   0.6   0.3   0     0     0     0     0
6            0.1  -0.2   0.2   0     0     0     0     0
7            0.4   0.5   0.8   0     0     0     0     0
8            0     0     0    -0.6   0.1  -0.2   0.9   0

Chromosome: 0.9 -0.3 -0.7 -0.8 0.6 0.3 0.1 -0.2 0.2 0.4 0.5 0.8 -0.6 0.1 -0.2 0.9
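As a sketch of this encoding step, the matrix rows can be strung together in a few lines of Python. The weight values are taken from the slide above, and the zero entries are assumed to mark absent connections rather than trainable weights that happen to be zero:

def encode_weights(W):
    # String the rows of the weight matrix together, keeping only the
    # entries that correspond to actual connections (non-zero here).
    return [w for row in W for w in row if w != 0]

W = [
    [0,    0,    0,    0,    0,    0,    0,   0],
    [0,    0,    0,    0,    0,    0,    0,   0],
    [0,    0,    0,    0,    0,    0,    0,   0],
    [0.9, -0.3, -0.7,  0,    0,    0,    0,   0],
    [-0.8, 0.6,  0.3,  0,    0,    0,    0,   0],
    [0.1, -0.2,  0.2,  0,    0,    0,    0,   0],
    [0.4,  0.5,  0.8,  0,    0,    0,    0,   0],
    [0,    0,    0,   -0.6,  0.1, -0.2,  0.9, 0],
]

print(encode_weights(W))
# [0.9, -0.3, -0.7, -0.8, 0.6, 0.3, 0.1, -0.2, 0.2, 0.4, 0.5, 0.8, -0.6, 0.1, -0.2, 0.9]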
- The second step is to define a fitness function for evaluating the chromosome's performance. This function must estimate the performance of a given neural network. We can apply here a simple function defined by the sum of squared errors.
- The training set of examples is presented to the network, and the sum of squared errors is calculated. The smaller the sum, the fitter the chromosome. The genetic algorithm attempts to find a set of weights that minimises the sum of squared errors.
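A minimal sketch of such a fitness function, assuming a hypothetical forward(chromosome, x) helper that decodes the chromosome back into the network's weights and returns the network output for input x:

def fitness(chromosome, training_set, forward):
    # Present every training example to the network and accumulate
    # the sum of squared errors.
    sse = sum((forward(chromosome, x) - y_desired) ** 2
              for x, y_desired in training_set)
    # GAs maximise fitness, so a smaller error must map to a larger
    # fitness; any monotonically decreasing transformation would do.
    return 1.0 / (1.0 + sse)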
- The third step is to choose the genetic operators: crossover and mutation. A crossover operator takes two parent chromosomes and creates a single child with genetic material from both parents. Each gene in the child's chromosome is represented by the corresponding gene of the randomly selected parent.
- A mutation operator selects a gene in a chromosome and adds a small random value between −1 and 1 to each weight in this gene.
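Both operators are easy to sketch in Python. For simplicity, this version treats every weight as a gene of its own, which is a slight simplification of the slide's grouping of weights into per-neuron genes:

import random

def crossover(parent1, parent2):
    # Each gene in the child comes from the corresponding gene of a
    # randomly selected parent.
    return [random.choice(pair) for pair in zip(parent1, parent2)]

def mutate(chromosome, p_mutation=0.02):
    # With a small probability, add a random value between -1 and 1
    # to a weight.
    return [w + random.uniform(-1, 1) if random.random() < p_mutation else w
            for w in chromosome]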
Crossover in weight optimisation

[Figure: two parent networks (inputs x1, x2; hidden neurons 3-5; output neuron 6) shown with their weights and chromosomes, and the child network whose genes are copied from a randomly selected parent.]
Mutation in weight optimisation

[Figure: an original network and the mutated network; the selected gene's weights (0.4, -0.3) are replaced by (-0.1, 0.2) while all other weights stay unchanged.]
Can genetic algorithms help us in selecting the network architecture?

The architecture of the network (i.e. the number of neurons and their interconnections) often determines the success or failure of the application. Usually the network architecture is decided by trial and error; there is a great need for a method of automatically designing the architecture for a particular application. Genetic algorithms may well be suited for this task.
- The basic idea behind evolving a suitable network architecture is to conduct a genetic search in a population of possible architectures.
- We must first choose a method of encoding a network's architecture into a chromosome.
Encoding the network architecture

- The connection topology of a neural network can be represented by a square connectivity matrix.
- Each entry in the matrix defines the type of connection from one neuron (column) to another (row), where 0 means no connection and 1 denotes a connection whose weight can be changed through learning.
- To transform the connectivity matrix into a chromosome, we need only string the rows of the matrix together.
Encoding of the network topology

[Figure: a six-neuron network (inputs x1, x2 feed neurons 1-2; hidden neurons 3-5; output neuron 6) and its connectivity matrix.]

Connectivity matrix (row = to neuron, column = from neuron):

To \ From   1  2  3  4  5  6
1           0  0  0  0  0  0
2           0  0  0  0  0  0
3           1  1  0  0  0  0
4           1  0  0  0  0  0
5           0  1  0  0  0  0
6           0  1  1  1  1  0

Chromosome: 0 0 0 0 0 0  0 0 0 0 0 0  1 1 0 0 0 0  1 0 0 0 0 0  0 1 0 0 0 0  0 1 1 1 1 0
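Stringing the rows together is the same flattening trick as for the weights; a minimal sketch, using the connectivity matrix from the slide:

def encode_topology(connectivity):
    # Concatenate the rows of the connectivity matrix into one bit string.
    return [bit for row in connectivity for bit in row]

connectivity = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
]

# Prints the 36-bit chromosome: the six rows strung together.
print(encode_topology(connectivity))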
The cycle of evolving a neural network topology

[Figure: generation i holds candidate networks (e.g. "Neural Network j, Fitness = 117"), each trained and evaluated on a training data set; crossover of selected parents and mutation of the offspring produce generation (i + 1).]
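The whole cycle can be summarised as a short loop. This is only a skeleton: train_and_evaluate (train the decoded network and return its fitness), select (fitness-proportionate selection), crossover and mutate are assumed helpers, not part of the slides:

def evolve_topologies(population, generations,
                      train_and_evaluate, select, crossover, mutate):
    for _ in range(generations):
        # Evaluate every encoded topology by training the corresponding
        # network on the training data set.
        fitnesses = [train_and_evaluate(chrom) for chrom in population]
        # Breed generation (i + 1) from generation i.
        next_population = []
        while len(next_population) < len(population):
            p1 = select(population, fitnesses)
            p2 = select(population, fitnesses)
            next_population.append(mutate(crossover(p1, p2)))
        population = next_population
    return population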
Fuzzy evolutionary systems

- Evolutionary computation is also used in the design of fuzzy systems, particularly for generating fuzzy rules and adjusting membership functions of fuzzy sets.
- In this section, we introduce an application of genetic algorithms to select an appropriate set of fuzzy IF-THEN rules for a classification problem.
- For a classification problem, a set of fuzzy IF-THEN rules is generated from numerical data.
- First, we use a grid-type fuzzy partition of an input space.
Fuzzy partition by a 3×3 fuzzy grid

[Figure: the input space X1 × X2 partitioned by triangular fuzzy sets A1, A2, A3 on x1 and B1, B2, B3 on x2 (membership functions µ(x1) and µ(x2) shown along the axes), with sixteen numbered training patterns of Class 1 and Class 2 scattered over the grid.]
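A triangular membership function and an evenly spaced K-partition of [0, 1] can be sketched as follows; the even spacing with peaks at the interval ends is one common convention, assumed here since the slide only shows the picture:

def triangular(x, a, b, c):
    # Triangular membership function rising from a, peaking at b,
    # and falling back to zero at c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grid_memberships(x, K):
    # K evenly spaced triangular fuzzy sets on [0, 1]; for K = 3 these
    # play the role of A1, A2, A3 (or B1, B2, B3).
    step = 1.0 / (K - 1)
    return [triangular(x, (i - 1) * step, i * step, (i + 1) * step)
            for i in range(K)]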
Fuzzy partition

- Black and white dots denote the training patterns of Class 1 and Class 2, respectively.
- The grid-type fuzzy partition can be seen as a rule table.
- The linguistic values of input x1 (A1, A2 and A3) form the horizontal axis, and the linguistic values of input x2 (B1, B2 and B3) form the vertical axis.
- At the intersection of a row and a column lies the rule consequent.
In the rule table, each fuzzy subspace can have only one fuzzy IF-THEN rule, and thus the total number of rules that can be generated in a K×K grid is equal to K×K.
Fuzzy rules that correspond to the K×K fuzzy partition can be represented in a general form as:

Rule R_ij:
    IF x_1p is A_i (i = 1, 2, ..., K)
    AND x_2p is B_j (j = 1, 2, ..., K)
    THEN x_p ∈ C_n, with certainty factor CF_{A_i B_j}^{C_n}

where x_p = (x_1p, x_2p), p = 1, 2, ..., P, is a training pattern on input space X1 × X2, P is the total number of training patterns, C_n is the rule consequent (either Class 1 or Class 2), and CF_{A_i B_j}^{C_n} is the certainty factor that a pattern in fuzzy subspace A_i B_j belongs to class C_n.
To determine the rule consequent and the certainty factor, we use the following procedure:

Step 1: Partition an input space into K×K fuzzy subspaces, and calculate the strength of each class of training patterns in every fuzzy subspace. Each class in a given fuzzy subspace is represented by its training patterns: the more training patterns, the stronger the class. In a given fuzzy subspace, the rule consequent becomes more certain when patterns of one particular class appear more often than patterns of any other class.

Step 2: Determine the rule consequent and the certainty factor in each fuzzy subspace.
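The slides do not spell out the strength and certainty-factor formulas, so the sketch below follows one common convention (due to Ishibuchi et al.): the strength of a class in subspace A_i B_j is the sum of the membership products of its patterns, and the certainty factor is the margin of the winning class over the average of the others:

def rule_consequent_and_cf(patterns, mu_i, mu_j):
    # patterns: list of ((x1, x2), class_label) training pairs.
    # mu_i, mu_j: membership functions of the fuzzy sets Ai and Bj.
    strength = {}
    for (x1, x2), label in patterns:
        strength[label] = strength.get(label, 0.0) + mu_i(x1) * mu_j(x2)
    total = sum(strength.values())
    if total == 0.0:
        return None, 0.0   # empty subspace: consequent undetermined
    consequent = max(strength, key=strength.get)
    others = [s for c, s in strength.items() if c != consequent]
    mean_other = sum(others) / len(others) if others else 0.0
    # CF = 1 when all patterns belong to one class, and approaches 0
    # when the competing classes have similar strengths.
    cf = (strength[consequent] - mean_other) / total
    return consequent, cf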
The certainty factor can be interpreted as follows:

- If all the training patterns in fuzzy subspace A_i B_j belong to the same class, then the certainty factor is maximum and it is certain that any new pattern in this subspace will belong to this class.
- If, however, training patterns belong to different classes and these classes have similar strengths, then the certainty factor is minimum and it is uncertain that a new pattern will belong to any particular class.
- This means that patterns in a fuzzy subspace can be misclassified. Moreover, if a fuzzy subspace does not have any training patterns, we cannot determine the rule consequent at all.
- If a fuzzy partition is too coarse, many patterns may be misclassified. On the other hand, if a fuzzy partition is too fine, many fuzzy rules cannot be obtained, because of the lack of training patterns in the corresponding fuzzy subspaces.
Training patterns are not necessarily distributed evenly in the input space. As a result, it is often difficult to choose an appropriate density for the fuzzy grid. To overcome this difficulty, we use multiple fuzzy rule tables.
Multiple fuzzy rule tables

[Figure: five fuzzy rule tables of increasing density, for K = 2, 3, 4, 5 and 6.]

Fuzzy IF-THEN rules are generated for each fuzzy subspace of the multiple fuzzy rule tables, and thus a complete set of rules for our case can be specified as:

2² + 3² + 4² + 5² + 6² = 90 rules.
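A one-line check of this count, as stated on the slide:

print(sum(K * K for K in range(2, 7)))   # 4 + 9 + 16 + 25 + 36 = 90 rules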
Once the set of rules S_ALL is generated, a new pattern, x = (x1, x2), can be classified by the following procedure:

Step 1: In every fuzzy subspace of the multiple fuzzy rule tables, calculate the degree of compatibility of the new pattern with each class.

Step 2: Determine the maximum degree of compatibility of the new pattern with each class.

Step 3: Determine the class with which the new pattern has the highest degree of compatibility, and assign the pattern to this class.
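A sketch of this three-step procedure, where each rule is stored as a (mu_i, mu_j, consequent, cf) tuple and the degree of compatibility is taken as the membership product weighted by the certainty factor — a common choice; the slides leave the exact definition open:

def classify(x, rules):
    x1, x2 = x
    best = {}
    for mu_i, mu_j, consequent, cf in rules:
        # Step 1: degree of compatibility of x with this rule's class.
        degree = mu_i(x1) * mu_j(x2) * cf
        # Step 2: keep the maximum degree per class.
        best[consequent] = max(best.get(consequent, 0.0), degree)
    # Step 3: assign x to the class with the highest degree.
    return max(best, key=best.get)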
The number of multiple fuzzy rule tables required for an accurate pattern classification may be large. Consequently, a complete set of rules can be enormous. However, these rules differ in their classification ability, and thus by selecting only the rules with high potential for accurate classification we can reduce the number of rules.
Can we use genetic algorithms for selecting fuzzy IF-THEN rules?

- The problem of selecting fuzzy IF-THEN rules can be seen as a combinatorial optimisation problem with two objectives.
- The first, more important, objective is to maximise the number of correctly classified patterns.
- The second objective is to minimise the number of rules.
- Genetic algorithms can be applied to this problem.
A basic genetic algorithm for selecting fuzzy IF-THEN rules includes the following steps:

Step 1: Randomly generate an initial population of chromosomes. The population size may be relatively small, say 10 or 20 chromosomes. Each gene in a chromosome corresponds to a particular fuzzy IF-THEN rule in the rule set defined by S_ALL.

Step 2: Calculate the performance, or fitness, of each individual chromosome in the current population.
The problem of selecting fuzzy rules has two objectives: to maximise the accuracy of the pattern classification and to minimise the size of a rule set. The fitness function has to accommodate both these objectives. This can be achieved by introducing two respective weights, w_P and w_N, in the fitness function:

f(S) = w_P × P_s / P_ALL − w_N × N_S / N_ALL

where P_s is the number of patterns classified successfully, P_ALL is the total number of patterns presented to the classification system, and N_S and N_ALL are the numbers of fuzzy IF-THEN rules in set S and set S_ALL, respectively.
The classification accuracy is more important than the size of a rule set, so w_P is chosen much larger than w_N; for example, with w_P = 10 and w_N = 1:

f(S) = 10 × P_s / P_ALL − N_S / N_ALL
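In code, the fitness of a candidate rule set S is a direct transcription of this formula; classify(x, S) is assumed to be the classification procedure sketched earlier, restricted to the rules in S:

def rule_set_fitness(S, patterns, n_all, classify, w_p=10.0, w_n=1.0):
    # Ps: patterns classified successfully by the rules in S.
    p_s = sum(1 for x, label in patterns if classify(x, S) == label)
    p_all = len(patterns)
    n_s = len(S)          # number of rules in S; n_all = rules in S_ALL
    return w_p * p_s / p_all - w_n * n_s / n_all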
Step 3: Select a pair of chromosomes for mating. Parent chromosomes are selected with a probability associated with their fitness; a fitter chromosome has a higher probability of being selected.

Step 4: Create a pair of offspring chromosomes by applying a standard crossover operator. Parent chromosomes are crossed at a randomly selected crossover point.

Step 5: Perform mutation on each gene of the created offspring. The mutation probability is normally kept quite low, say 0.01. The mutation is done by multiplying the gene value by −1.
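These three steps can be sketched as follows. The genes are assumed to take the values +1 (rule included in S) and −1 (rule excluded), which is consistent with mutation by multiplication by −1, and fitness values are assumed positive so that roulette-wheel selection applies directly:

import random

def select(population, fitnesses):
    # Fitness-proportionate (roulette-wheel) selection.
    return random.choices(population, weights=fitnesses, k=1)[0]

def one_point_crossover(parent1, parent2):
    # Cross the parents at a randomly selected point.
    cut = random.randrange(1, len(parent1))
    return parent1[:cut] + parent2[cut:], parent2[:cut] + parent1[cut:]

def mutate_rule_chromosome(chromosome, p_mutation=0.01):
    # Toggle a rule in or out of S by multiplying its gene by -1.
    return [-g if random.random() < p_mutation else g for g in chromosome]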
Step 6: Place the created offspring chromosomes in the new population.

Step 7: Repeat Step 3 until the size of the new population becomes equal to the size of the initial population, and then replace the initial (parent) population with the new (offspring) population.

Step 8: Go to Step 2, and repeat the process until a specified number of generations (typically several hundred) has been considered.

The number of rules can be cut down to less than 2% of the initially generated set of rules.
More Related Content

What's hot

INHIBITION AND SET-SHIFTING TASKS IN CENTRAL EXECUTIVE FUNCTION OF WORKING ME...
INHIBITION AND SET-SHIFTING TASKS IN CENTRAL EXECUTIVE FUNCTION OF WORKING ME...INHIBITION AND SET-SHIFTING TASKS IN CENTRAL EXECUTIVE FUNCTION OF WORKING ME...
INHIBITION AND SET-SHIFTING TASKS IN CENTRAL EXECUTIVE FUNCTION OF WORKING ME...sipij
 
Evolving Neural Networks Through Augmenting Topologies
Evolving Neural Networks Through Augmenting TopologiesEvolving Neural Networks Through Augmenting Topologies
Evolving Neural Networks Through Augmenting TopologiesDaniele Loiacono
 
Evolving Neural Networks through Augmenting Topologies NEAT
Evolving Neural Networks through Augmenting Topologies NEATEvolving Neural Networks through Augmenting Topologies NEAT
Evolving Neural Networks through Augmenting Topologies NEATAavaas Gajurel
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
Lecture artificial neural networks and pattern recognition
Lecture   artificial neural networks and pattern recognitionLecture   artificial neural networks and pattern recognition
Lecture artificial neural networks and pattern recognitionHưng Đặng
 
Neuromorphic computing for neural networks
Neuromorphic computing for neural networksNeuromorphic computing for neural networks
Neuromorphic computing for neural networksClaudio Gallicchio
 
Computational neuroscience
Computational neuroscienceComputational neuroscience
Computational neuroscienceNicolas Rougier
 
What is (computational) neuroscience?
What is (computational) neuroscience?What is (computational) neuroscience?
What is (computational) neuroscience?SSA KPI
 
Theories of error back propagation in the brain review
Theories of error back propagation in the brain reviewTheories of error back propagation in the brain review
Theories of error back propagation in the brain reviewSeonghyun Kim
 
Deep randomized neural networks
Deep randomized neural networksDeep randomized neural networks
Deep randomized neural networksClaudio Gallicchio
 
Reservoir computing fast deep learning for sequences
Reservoir computing   fast deep learning for sequencesReservoir computing   fast deep learning for sequences
Reservoir computing fast deep learning for sequencesClaudio Gallicchio
 
Backpropagation and the brain review
Backpropagation and the brain reviewBackpropagation and the brain review
Backpropagation and the brain reviewSeonghyun Kim
 
Tracking times in temporal patterns embodied in intra-cortical data for cont...
Tracking times in temporal patterns embodied in  intra-cortical data for cont...Tracking times in temporal patterns embodied in  intra-cortical data for cont...
Tracking times in temporal patterns embodied in intra-cortical data for cont...IJECEIAES
 
Neural fields, a cognitive approach
Neural fields, a cognitive approachNeural fields, a cognitive approach
Neural fields, a cognitive approachNicolas Rougier
 
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...Jason Tsai
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkBurhan Muzafar
 
Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing Sivagowry Shathesh
 

What's hot (19)

INHIBITION AND SET-SHIFTING TASKS IN CENTRAL EXECUTIVE FUNCTION OF WORKING ME...
INHIBITION AND SET-SHIFTING TASKS IN CENTRAL EXECUTIVE FUNCTION OF WORKING ME...INHIBITION AND SET-SHIFTING TASKS IN CENTRAL EXECUTIVE FUNCTION OF WORKING ME...
INHIBITION AND SET-SHIFTING TASKS IN CENTRAL EXECUTIVE FUNCTION OF WORKING ME...
 
Evolving Neural Networks Through Augmenting Topologies
Evolving Neural Networks Through Augmenting TopologiesEvolving Neural Networks Through Augmenting Topologies
Evolving Neural Networks Through Augmenting Topologies
 
Evolving Neural Networks through Augmenting Topologies NEAT
Evolving Neural Networks through Augmenting Topologies NEATEvolving Neural Networks through Augmenting Topologies NEAT
Evolving Neural Networks through Augmenting Topologies NEAT
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
Lecture artificial neural networks and pattern recognition
Lecture   artificial neural networks and pattern recognitionLecture   artificial neural networks and pattern recognition
Lecture artificial neural networks and pattern recognition
 
Neuromorphic computing for neural networks
Neuromorphic computing for neural networksNeuromorphic computing for neural networks
Neuromorphic computing for neural networks
 
Computational neuroscience
Computational neuroscienceComputational neuroscience
Computational neuroscience
 
What is (computational) neuroscience?
What is (computational) neuroscience?What is (computational) neuroscience?
What is (computational) neuroscience?
 
Theories of error back propagation in the brain review
Theories of error back propagation in the brain reviewTheories of error back propagation in the brain review
Theories of error back propagation in the brain review
 
Deep randomized neural networks
Deep randomized neural networksDeep randomized neural networks
Deep randomized neural networks
 
Reservoir computing fast deep learning for sequences
Reservoir computing   fast deep learning for sequencesReservoir computing   fast deep learning for sequences
Reservoir computing fast deep learning for sequences
 
Backpropagation and the brain review
Backpropagation and the brain reviewBackpropagation and the brain review
Backpropagation and the brain review
 
Tracking times in temporal patterns embodied in intra-cortical data for cont...
Tracking times in temporal patterns embodied in  intra-cortical data for cont...Tracking times in temporal patterns embodied in  intra-cortical data for cont...
Tracking times in temporal patterns embodied in intra-cortical data for cont...
 
Neural fields, a cognitive approach
Neural fields, a cognitive approachNeural fields, a cognitive approach
Neural fields, a cognitive approach
 
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
 
intelligent system
intelligent systemintelligent system
intelligent system
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing
 
Jaeggi2010
Jaeggi2010Jaeggi2010
Jaeggi2010
 

Viewers also liked

양성봉 - 알기쉬운 알고리즘 - 1장알고리즘의첫걸음
양성봉 - 알기쉬운 알고리즘 - 1장알고리즘의첫걸음양성봉 - 알기쉬운 알고리즘 - 1장알고리즘의첫걸음
양성봉 - 알기쉬운 알고리즘 - 1장알고리즘의첫걸음Dongseo University
 
2013-1 Machine Learning Lecture 03 - Andrew Moore - a gentle introduction …
2013-1 Machine Learning Lecture 03 - Andrew Moore - a gentle introduction …2013-1 Machine Learning Lecture 03 - Andrew Moore - a gentle introduction …
2013-1 Machine Learning Lecture 03 - Andrew Moore - a gentle introduction …Dongseo University
 
2013-1 Machine Learning Lecture 06 - Lucila Ohno-Machado - Ensemble Methods
2013-1 Machine Learning Lecture 06 - Lucila Ohno-Machado - Ensemble Methods2013-1 Machine Learning Lecture 06 - Lucila Ohno-Machado - Ensemble Methods
2013-1 Machine Learning Lecture 06 - Lucila Ohno-Machado - Ensemble MethodsDongseo University
 
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…Dongseo University
 
2013-1 Machine Learning Lecture 05 - Andrew Moore - Support Vector Machines
2013-1 Machine Learning Lecture 05 - Andrew Moore - Support Vector Machines2013-1 Machine Learning Lecture 05 - Andrew Moore - Support Vector Machines
2013-1 Machine Learning Lecture 05 - Andrew Moore - Support Vector MachinesDongseo University
 
6장 그래프 알고리즘
6장 그래프 알고리즘6장 그래프 알고리즘
6장 그래프 알고리즘Vong Sik Kong
 
1. introduction to algorithm
1. introduction to algorithm1. introduction to algorithm
1. introduction to algorithmGeunhyung Kim
 
Artificial Intelligence Chapter 9 Negnevitsky
Artificial Intelligence Chapter 9 NegnevitskyArtificial Intelligence Chapter 9 Negnevitsky
Artificial Intelligence Chapter 9 Negnevitskylopanath
 
2013-1 Machine Learning Lecture 06 - Artur Ferreira - A Survey on Boosting…
2013-1 Machine Learning Lecture 06 - Artur Ferreira - A Survey on Boosting…2013-1 Machine Learning Lecture 06 - Artur Ferreira - A Survey on Boosting…
2013-1 Machine Learning Lecture 06 - Artur Ferreira - A Survey on Boosting…Dongseo University
 
懷念臺灣 光陰隧道L呷飯
懷念臺灣 光陰隧道L呷飯懷念臺灣 光陰隧道L呷飯
懷念臺灣 光陰隧道L呷飯psjlew
 
职业规划2010
职业规划2010职业规划2010
职业规划2010worldhema
 
明石跨海大橋完成於1998年
明石跨海大橋完成於1998年明石跨海大橋完成於1998年
明石跨海大橋完成於1998年psjlew
 
留住胡同 鉛筆畫
留住胡同 鉛筆畫留住胡同 鉛筆畫
留住胡同 鉛筆畫psjlew
 
Pencil art
Pencil artPencil art
Pencil artpsjlew
 
13 2-4世界名牌
13 2-4世界名牌13 2-4世界名牌
13 2-4世界名牌psjlew
 
Topic6 pptldshp4leadinglearningL70
 Topic6 pptldshp4leadinglearningL70 Topic6 pptldshp4leadinglearningL70
Topic6 pptldshp4leadinglearningL70omarraarmstrong
 
肉類千萬不能吃的8個部位
肉類千萬不能吃的8個部位肉類千萬不能吃的8個部位
肉類千萬不能吃的8個部位psjlew
 
Fun joke 解鬱集_一年份的笑話_52
Fun joke 解鬱集_一年份的笑話_52Fun joke 解鬱集_一年份的笑話_52
Fun joke 解鬱集_一年份的笑話_52psjlew
 
Guangxi
GuangxiGuangxi
Guangxipsjlew
 

Viewers also liked (20)

2015-2 W16 kernel and smo
2015-2 W16   kernel and smo2015-2 W16   kernel and smo
2015-2 W16 kernel and smo
 
양성봉 - 알기쉬운 알고리즘 - 1장알고리즘의첫걸음
양성봉 - 알기쉬운 알고리즘 - 1장알고리즘의첫걸음양성봉 - 알기쉬운 알고리즘 - 1장알고리즘의첫걸음
양성봉 - 알기쉬운 알고리즘 - 1장알고리즘의첫걸음
 
2013-1 Machine Learning Lecture 03 - Andrew Moore - a gentle introduction …
2013-1 Machine Learning Lecture 03 - Andrew Moore - a gentle introduction …2013-1 Machine Learning Lecture 03 - Andrew Moore - a gentle introduction …
2013-1 Machine Learning Lecture 03 - Andrew Moore - a gentle introduction …
 
2013-1 Machine Learning Lecture 06 - Lucila Ohno-Machado - Ensemble Methods
2013-1 Machine Learning Lecture 06 - Lucila Ohno-Machado - Ensemble Methods2013-1 Machine Learning Lecture 06 - Lucila Ohno-Machado - Ensemble Methods
2013-1 Machine Learning Lecture 06 - Lucila Ohno-Machado - Ensemble Methods
 
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…
 
2013-1 Machine Learning Lecture 05 - Andrew Moore - Support Vector Machines
2013-1 Machine Learning Lecture 05 - Andrew Moore - Support Vector Machines2013-1 Machine Learning Lecture 05 - Andrew Moore - Support Vector Machines
2013-1 Machine Learning Lecture 05 - Andrew Moore - Support Vector Machines
 
6장 그래프 알고리즘
6장 그래프 알고리즘6장 그래프 알고리즘
6장 그래프 알고리즘
 
1. introduction to algorithm
1. introduction to algorithm1. introduction to algorithm
1. introduction to algorithm
 
Artificial Intelligence Chapter 9 Negnevitsky
Artificial Intelligence Chapter 9 NegnevitskyArtificial Intelligence Chapter 9 Negnevitsky
Artificial Intelligence Chapter 9 Negnevitsky
 
2013-1 Machine Learning Lecture 06 - Artur Ferreira - A Survey on Boosting…
2013-1 Machine Learning Lecture 06 - Artur Ferreira - A Survey on Boosting…2013-1 Machine Learning Lecture 06 - Artur Ferreira - A Survey on Boosting…
2013-1 Machine Learning Lecture 06 - Artur Ferreira - A Survey on Boosting…
 
懷念臺灣 光陰隧道L呷飯
懷念臺灣 光陰隧道L呷飯懷念臺灣 光陰隧道L呷飯
懷念臺灣 光陰隧道L呷飯
 
职业规划2010
职业规划2010职业规划2010
职业规划2010
 
明石跨海大橋完成於1998年
明石跨海大橋完成於1998年明石跨海大橋完成於1998年
明石跨海大橋完成於1998年
 
留住胡同 鉛筆畫
留住胡同 鉛筆畫留住胡同 鉛筆畫
留住胡同 鉛筆畫
 
Pencil art
Pencil artPencil art
Pencil art
 
13 2-4世界名牌
13 2-4世界名牌13 2-4世界名牌
13 2-4世界名牌
 
Topic6 pptldshp4leadinglearningL70
 Topic6 pptldshp4leadinglearningL70 Topic6 pptldshp4leadinglearningL70
Topic6 pptldshp4leadinglearningL70
 
肉類千萬不能吃的8個部位
肉類千萬不能吃的8個部位肉類千萬不能吃的8個部位
肉類千萬不能吃的8個部位
 
Fun joke 解鬱集_一年份的笑話_52
Fun joke 解鬱集_一年份的笑話_52Fun joke 解鬱集_一年份的笑話_52
Fun joke 解鬱集_一年份的笑話_52
 
Guangxi
GuangxiGuangxi
Guangxi
 

Similar to 2013-1 Machine Learning Lecture 07 - Michael Negnevitsky - Hybrid Intellig…

Diagnosis Chest Diseases Using Neural Network and Genetic Hybrid Algorithm
Diagnosis Chest Diseases Using Neural Network and Genetic Hybrid AlgorithmDiagnosis Chest Diseases Using Neural Network and Genetic Hybrid Algorithm
Diagnosis Chest Diseases Using Neural Network and Genetic Hybrid AlgorithmIJERA Editor
 
Neural Networks-introduction_with_prodecure.pptx
Neural Networks-introduction_with_prodecure.pptxNeural Networks-introduction_with_prodecure.pptx
Neural Networks-introduction_with_prodecure.pptxRatuRumana3
 
ISMB2014読み会 イントロ + Deep learning of the tissue-regulated splicing code
ISMB2014読み会 イントロ + Deep learning of the tissue-regulated splicing codeISMB2014読み会 イントロ + Deep learning of the tissue-regulated splicing code
ISMB2014読み会 イントロ + Deep learning of the tissue-regulated splicing codeKengo Sato
 
Data driven model optimization [autosaved]
Data driven model optimization [autosaved]Data driven model optimization [autosaved]
Data driven model optimization [autosaved]Russell Jarvis
 
NIPS2007: deep belief nets
NIPS2007: deep belief netsNIPS2007: deep belief nets
NIPS2007: deep belief netszukun
 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologytheijes
 
Artificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsArtificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsDrBaljitSinghKhehra
 
Artificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsArtificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsDrBaljitSinghKhehra
 
Artificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsArtificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsDrBaljitSinghKhehra
 
Artificial Intelligence Applications in Petroleum Engineering - Part I
Artificial Intelligence Applications in Petroleum Engineering - Part IArtificial Intelligence Applications in Petroleum Engineering - Part I
Artificial Intelligence Applications in Petroleum Engineering - Part IRamez Abdalla, M.Sc
 
2. NEURAL NETWORKS USING GENETIC ALGORITHMS.pptx
2. NEURAL NETWORKS USING GENETIC ALGORITHMS.pptx2. NEURAL NETWORKS USING GENETIC ALGORITHMS.pptx
2. NEURAL NETWORKS USING GENETIC ALGORITHMS.pptxssuser67281d
 
Practical Ai Class 3
Practical Ai Class 3Practical Ai Class 3
Practical Ai Class 3Oliver Zhang
 
[PR12] Inception and Xception - Jaejun Yoo
[PR12] Inception and Xception - Jaejun Yoo[PR12] Inception and Xception - Jaejun Yoo
[PR12] Inception and Xception - Jaejun YooJaeJun Yoo
 

Similar to 2013-1 Machine Learning Lecture 07 - Michael Negnevitsky - Hybrid Intellig… (20)

Diagnosis Chest Diseases Using Neural Network and Genetic Hybrid Algorithm
Diagnosis Chest Diseases Using Neural Network and Genetic Hybrid AlgorithmDiagnosis Chest Diseases Using Neural Network and Genetic Hybrid Algorithm
Diagnosis Chest Diseases Using Neural Network and Genetic Hybrid Algorithm
 
bbbPaper
bbbPaperbbbPaper
bbbPaper
 
Neural Networks-introduction_with_prodecure.pptx
Neural Networks-introduction_with_prodecure.pptxNeural Networks-introduction_with_prodecure.pptx
Neural Networks-introduction_with_prodecure.pptx
 
ISMB2014読み会 イントロ + Deep learning of the tissue-regulated splicing code
ISMB2014読み会 イントロ + Deep learning of the tissue-regulated splicing codeISMB2014読み会 イントロ + Deep learning of the tissue-regulated splicing code
ISMB2014読み会 イントロ + Deep learning of the tissue-regulated splicing code
 
Data driven model optimization [autosaved]
Data driven model optimization [autosaved]Data driven model optimization [autosaved]
Data driven model optimization [autosaved]
 
NIPS2007: deep belief nets
NIPS2007: deep belief netsNIPS2007: deep belief nets
NIPS2007: deep belief nets
 
Perceptron
PerceptronPerceptron
Perceptron
 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technology
 
6
66
6
 
Artificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsArtificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning Models
 
Artificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsArtificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning Models
 
Artificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsArtificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning Models
 
Lec 5
Lec 5Lec 5
Lec 5
 
1.pptx
1.pptx1.pptx
1.pptx
 
Artificial Intelligence Applications in Petroleum Engineering - Part I
Artificial Intelligence Applications in Petroleum Engineering - Part IArtificial Intelligence Applications in Petroleum Engineering - Part I
Artificial Intelligence Applications in Petroleum Engineering - Part I
 
2. NEURAL NETWORKS USING GENETIC ALGORITHMS.pptx
2. NEURAL NETWORKS USING GENETIC ALGORITHMS.pptx2. NEURAL NETWORKS USING GENETIC ALGORITHMS.pptx
2. NEURAL NETWORKS USING GENETIC ALGORITHMS.pptx
 
StockMarketPrediction
StockMarketPredictionStockMarketPrediction
StockMarketPrediction
 
Practical Ai Class 3
Practical Ai Class 3Practical Ai Class 3
Practical Ai Class 3
 
[PR12] Inception and Xception - Jaejun Yoo
[PR12] Inception and Xception - Jaejun Yoo[PR12] Inception and Xception - Jaejun Yoo
[PR12] Inception and Xception - Jaejun Yoo
 
A04401001013
A04401001013A04401001013
A04401001013
 

More from Dongseo University

Lecture_NaturalPolicyGradientsTRPOPPO.pdf
Lecture_NaturalPolicyGradientsTRPOPPO.pdfLecture_NaturalPolicyGradientsTRPOPPO.pdf
Lecture_NaturalPolicyGradientsTRPOPPO.pdfDongseo University
 
Evolutionary Computation Lecture notes03
Evolutionary Computation Lecture notes03Evolutionary Computation Lecture notes03
Evolutionary Computation Lecture notes03Dongseo University
 
Evolutionary Computation Lecture notes02
Evolutionary Computation Lecture notes02Evolutionary Computation Lecture notes02
Evolutionary Computation Lecture notes02Dongseo University
 
Evolutionary Computation Lecture notes01
Evolutionary Computation Lecture notes01Evolutionary Computation Lecture notes01
Evolutionary Computation Lecture notes01Dongseo University
 
Average Linear Selection Algorithm
Average Linear Selection AlgorithmAverage Linear Selection Algorithm
Average Linear Selection AlgorithmDongseo University
 
Lower Bound of Comparison Sort
Lower Bound of Comparison SortLower Bound of Comparison Sort
Lower Bound of Comparison SortDongseo University
 
Running Time of Building Binary Heap using Array
Running Time of Building Binary Heap using ArrayRunning Time of Building Binary Heap using Array
Running Time of Building Binary Heap using ArrayDongseo University
 
Proof By Math Induction Example
Proof By Math Induction ExampleProof By Math Induction Example
Proof By Math Induction ExampleDongseo University
 
Estimating probability distributions
Estimating probability distributionsEstimating probability distributions
Estimating probability distributionsDongseo University
 
2018-2 Machine Learning (Wasserstein GAN and BEGAN)
2018-2 Machine Learning (Wasserstein GAN and BEGAN)2018-2 Machine Learning (Wasserstein GAN and BEGAN)
2018-2 Machine Learning (Wasserstein GAN and BEGAN)Dongseo University
 
2018-2 Machine Learning (Linear regression, Logistic regression)
2018-2 Machine Learning (Linear regression, Logistic regression)2018-2 Machine Learning (Linear regression, Logistic regression)
2018-2 Machine Learning (Linear regression, Logistic regression)Dongseo University
 
2017-2 ML W9 Reinforcement Learning #5
2017-2 ML W9 Reinforcement Learning #52017-2 ML W9 Reinforcement Learning #5
2017-2 ML W9 Reinforcement Learning #5Dongseo University
 

More from Dongseo University (20)

Lecture_NaturalPolicyGradientsTRPOPPO.pdf
Lecture_NaturalPolicyGradientsTRPOPPO.pdfLecture_NaturalPolicyGradientsTRPOPPO.pdf
Lecture_NaturalPolicyGradientsTRPOPPO.pdf
 
Evolutionary Computation Lecture notes03
Evolutionary Computation Lecture notes03Evolutionary Computation Lecture notes03
Evolutionary Computation Lecture notes03
 
Evolutionary Computation Lecture notes02
Evolutionary Computation Lecture notes02Evolutionary Computation Lecture notes02
Evolutionary Computation Lecture notes02
 
Evolutionary Computation Lecture notes01
Evolutionary Computation Lecture notes01Evolutionary Computation Lecture notes01
Evolutionary Computation Lecture notes01
 
Markov Chain Monte Carlo
Markov Chain Monte CarloMarkov Chain Monte Carlo
Markov Chain Monte Carlo
 
Simplex Lecture Notes
Simplex Lecture NotesSimplex Lecture Notes
Simplex Lecture Notes
 
Reinforcement Learning
Reinforcement LearningReinforcement Learning
Reinforcement Learning
 
Median of Medians
Median of MediansMedian of Medians
Median of Medians
 
Average Linear Selection Algorithm
Average Linear Selection AlgorithmAverage Linear Selection Algorithm
Average Linear Selection Algorithm
 
Lower Bound of Comparison Sort
Lower Bound of Comparison SortLower Bound of Comparison Sort
Lower Bound of Comparison Sort
 
Running Time of Building Binary Heap using Array
Running Time of Building Binary Heap using ArrayRunning Time of Building Binary Heap using Array
Running Time of Building Binary Heap using Array
 
Running Time of MergeSort
Running Time of MergeSortRunning Time of MergeSort
Running Time of MergeSort
 
Binary Trees
Binary TreesBinary Trees
Binary Trees
 
Proof By Math Induction Example
Proof By Math Induction ExampleProof By Math Induction Example
Proof By Math Induction Example
 
TRPO and PPO notes
TRPO and PPO notesTRPO and PPO notes
TRPO and PPO notes
 
Estimating probability distributions
Estimating probability distributionsEstimating probability distributions
Estimating probability distributions
 
2018-2 Machine Learning (Wasserstein GAN and BEGAN)
2018-2 Machine Learning (Wasserstein GAN and BEGAN)2018-2 Machine Learning (Wasserstein GAN and BEGAN)
2018-2 Machine Learning (Wasserstein GAN and BEGAN)
 
2018-2 Machine Learning (Linear regression, Logistic regression)
2018-2 Machine Learning (Linear regression, Logistic regression)2018-2 Machine Learning (Linear regression, Logistic regression)
2018-2 Machine Learning (Linear regression, Logistic regression)
 
2017-2 ML W11 GAN #1
2017-2 ML W11 GAN #12017-2 ML W11 GAN #1
2017-2 ML W11 GAN #1
 
2017-2 ML W9 Reinforcement Learning #5
2017-2 ML W9 Reinforcement Learning #52017-2 ML W9 Reinforcement Learning #5
2017-2 ML W9 Reinforcement Learning #5
 

2013-1 Machine Learning Lecture 07 - Michael Negnevitsky - Hybrid Intellig…

  • 1.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 1 Lecture 12Lecture 12 Hybrid intelligent systems:Hybrid intelligent systems: Evolutionary neural networks and fuzzyEvolutionary neural networks and fuzzy evolutionary systemsevolutionary systems II IntroductionIntroduction II Evolutionary neural networksEvolutionary neural networks II Fuzzy evolutionary systemsFuzzy evolutionary systems II SummarySummary
  • 2.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 2 Evolutionary neural networksEvolutionary neural networks II Although neural networks are used for solving aAlthough neural networks are used for solving a variety of problems, they still have somevariety of problems, they still have some limitations.limitations. II One of the most common is associated with neuralOne of the most common is associated with neural network training. The backnetwork training. The back--propagation learningpropagation learning algorithm cannot guarantee an optimal solution.algorithm cannot guarantee an optimal solution. In realIn real--world applications, the backworld applications, the back--propagationpropagation algorithm might converge to a set of subalgorithm might converge to a set of sub--optimaloptimal weights from which it cannot escape. As a result,weights from which it cannot escape. As a result, the neural network is often unable to find athe neural network is often unable to find a desirable solution to a problem at hand.desirable solution to a problem at hand.
  • 3.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 3 II Another difficulty is related to selecting anAnother difficulty is related to selecting an optimal topology for the neural network. Theoptimal topology for the neural network. The ““rightright”” network architecture for a particularnetwork architecture for a particular problem is often chosen by means of heuristics,problem is often chosen by means of heuristics, and designing a neural network topology is stilland designing a neural network topology is still more art than engineering.more art than engineering. II Genetic algorithms are an effective optimisationGenetic algorithms are an effective optimisation technique that can guide both weight optimisationtechnique that can guide both weight optimisation and topology selection.and topology selection.
  • 4.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 4 y 0.9 1 3 4 5 6 7 8 x1 x3 x2 2 -0.8 0.4 0.8 -0.7 0.2 -0.2 0.6 -0.3 0.1 -0.2 0.9 -0.60.1 0.3 0.5 From neuron: To neuron: 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.9 -0.3 -0.7 0 0 0 0 0 -0.8 0.6 0.3 0 0 0 0 0 0.1 -0.2 0.2 0 0 0 0 0 0.4 0.5 0.8 0 0 0 0 0 0 0 0 -0.6 0.1 -0.2 0.9 0 Chromosome: 0.9 -0.3 -0.7 -0.8 0.6 0.3 0.1 -0.2 0.2 0.4 0.5 0.8 -0.6 0.1 -0.2 0.9 Encoding a set of weights in a chromosomeEncoding a set of weights in a chromosome
  • 5.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 5 II The second step is to define a fitness function forThe second step is to define a fitness function for evaluating the chromosomeevaluating the chromosome’’s performance. Thiss performance. This function must estimate the performance of afunction must estimate the performance of a given neural network. We can apply here agiven neural network. We can apply here a simple function defined by the sum of squaredsimple function defined by the sum of squared errors.errors. II The training set of examples is presented to theThe training set of examples is presented to the network, and the sum of squared errors isnetwork, and the sum of squared errors is calculated. The smaller the sum, the fitter thecalculated. The smaller the sum, the fitter the chromosome.chromosome. The genetic algorithm attemptsThe genetic algorithm attempts to find a set of weights that minimises the sumto find a set of weights that minimises the sum of squared errors.of squared errors.
  • 6.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 6 II The third step is to choose the genetic operatorsThe third step is to choose the genetic operators –– crossover and mutation. A crossover operatorcrossover and mutation. A crossover operator takes two parent chromosomes and creates atakes two parent chromosomes and creates a single child with genetic material from bothsingle child with genetic material from both parents. Each gene in the childparents. Each gene in the child’’s chromosome iss chromosome is represented by the corresponding gene of therepresented by the corresponding gene of the randomly selected parent.randomly selected parent. II A mutation operator selects a gene in aA mutation operator selects a gene in a chromosome and adds a small random valuechromosome and adds a small random value betweenbetween −−1 and 1 to each weight in this gene.1 and 1 to each weight in this gene.
  • 7.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 7 Crossover in weight optimisationCrossover in weight optimisation 3 4 5 y 6 x2 2 -0.3 0.9 -0.7 0.5 -0.8 -0.6 Parent 1 x1 1 -0.2 0.1 0.4 3 4 5 y 6 x2 2 -0.1 -0.5 0.2 -0.9 0.6 0.3 Parent 2 x1 1 0.9 0.3 -0.8 0.1 -0.7 -0.6 0.5 -0.8-0.2 0.9 0.4 -0.3 0.3 0.2 0.3 -0.9 0.60.9 -0.5 -0.8 -0.1 0.1 -0.7 -0.6 0.5 -0.80.9 -0.5 -0.8 0.1 3 4 5 y 6 x2 2 -0.1 -0.5 -0.7 0.5 -0.8 -0.6 Child x1 1 0.9 0.1 -0.8
  • 8.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 8 Mutation in weight optimisationMutation in weight optimisation Original network 3 4 5 y 6 x2 2 -0.3 0.9 -0.7 0.5 -0.8 -0.6x1 1 -0.2 0.1 0.4 0.1 -0.7 -0.6 0.5 -0.8-0.2 0.9 3 4 5 y 6 x2 2 0.2 0.9 -0.7 0.5 -0.8 -0.6x1 1 -0.2 0.1 -0.1 0.1 -0.7 -0.6 0.5 -0.8-0.2 0.9 Mutated network 0.4 -0.3 -0.1 0.2
  • 9.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 9 Can genetic algorithms help us in selectingCan genetic algorithms help us in selecting the network architecture?the network architecture? The architecture of the network (i.e. the number ofThe architecture of the network (i.e. the number of neurons and their interconnections) oftenneurons and their interconnections) often determines the success or failure of the application.determines the success or failure of the application. Usually the network architecture is decided by trialUsually the network architecture is decided by trial and error; there is a great need for a method ofand error; there is a great need for a method of automatically designing the architecture for aautomatically designing the architecture for a particular application. Genetic algorithms mayparticular application. Genetic algorithms may well be suited for this task.well be suited for this task.
  • 10.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 10 II The basic idea behind evolving a suitable networkThe basic idea behind evolving a suitable network architecture is to conduct a genetic search in aarchitecture is to conduct a genetic search in a population of possible architectures.population of possible architectures. II We must first choose a method of encoding aWe must first choose a method of encoding a networknetwork’’s architecture into a chromosome.s architecture into a chromosome.
  • 11.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 11 Encoding the network architectureEncoding the network architecture II The connection topology of a neural network canThe connection topology of a neural network can be represented by a square connectivity matrix.be represented by a square connectivity matrix. II Each entry in the matrix defines the type ofEach entry in the matrix defines the type of connection from one neuron (column) to anotherconnection from one neuron (column) to another (row), where 0 means no connection and 1(row), where 0 means no connection and 1 denotes connection for which the weight can bedenotes connection for which the weight can be changed through learning.changed through learning. II To transform the connectivity matrix into aTo transform the connectivity matrix into a chromosome, we need only to string the rows ofchromosome, we need only to string the rows of the matrix together.the matrix together.
  • 12.  Negnevitsky, Pearson Education, 2011Negnevitsky, Pearson Education, 2011 12 Encoding of the network topologyEncoding of the network topology From neuron: To neuron: 1 2 3 4 5 6 1 2 3 4 5 6 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 1 1 1 1 0 3 4 5 y 6 x2 2 x1 1 Chromosome: 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 1 1 1 1 0
• 13. The cycle of evolving a neural network topology
[Figure: in generation i, each chromosome is decoded into a network (e.g. Neural Network j), the network is trained on the training data set, and its fitness is evaluated (Fitness = 117 in the figure). Selected parents, Parent 1 and Parent 2, undergo crossover and mutation to produce Child 1 and Child 2 of generation i + 1.]
• 14. Fuzzy evolutionary systems
- Evolutionary computation is also used in the design of fuzzy systems, particularly for generating fuzzy rules and adjusting membership functions of fuzzy sets.
- In this section, we introduce an application of genetic algorithms to select an appropriate set of fuzzy IF-THEN rules for a classification problem.
- For a classification problem, a set of fuzzy IF-THEN rules is generated from numerical data.
- First, we use a grid-type fuzzy partition of an input space.
• 15. Fuzzy partition by a 3×3 fuzzy grid
[Figure: the input space X1 × X2 is partitioned by linguistic values A1, A2, A3 on x1 and B1, B2, B3 on x2, each with a triangular membership function µ(x1) or µ(x2) on [0, 1]. Sixteen numbered training patterns are plotted in the grid; black dots denote Class 1 and white dots denote Class 2.]
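A sketch of how such a grid partition might be generated (an illustrative assumption, not code from the lecture): K symmetric triangular membership functions spread evenly over [0, 1], so that K = 3 yields A1, A2, A3 on x1 and B1, B2, B3 on x2.

```python
def triangular_grid(K):
    """Return K triangular membership functions evenly spaced on [0, 1]."""
    def make(centre, half_width):
        def mu(x):
            return max(0.0, 1.0 - abs(x - centre) / half_width)
        return mu
    half_width = 1.0 / (K - 1)
    return [make(i * half_width, half_width) for i in range(K)]

A = triangular_grid(3)  # A1, A2, A3 on input x1
B = triangular_grid(3)  # B1, B2, B3 on input x2
print([round(mu(0.25), 2) for mu in A])  # memberships of x1 = 0.25
```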
• 16. Fuzzy partition
- Black and white dots denote the training patterns of Class 1 and Class 2, respectively.
- The grid-type fuzzy partition can be seen as a rule table.
- The linguistic values of input x1 (A1, A2 and A3) form the horizontal axis, and the linguistic values of input x2 (B1, B2 and B3) form the vertical axis.
- At the intersection of a row and a column lies the rule consequent.
• 17. In the rule table, each fuzzy subspace can have only one fuzzy IF-THEN rule, and thus the total number of rules that can be generated in a K×K grid is equal to K×K.
• 18. Fuzzy rules that correspond to the K×K fuzzy partition can be represented in a general form as:

$$\text{Rule } R_{ij}\!: \text{ IF } x_{1p} \text{ is } A_i \text{ AND } x_{2p} \text{ is } B_j \text{ THEN } \mathbf{x}_p \in C_n \ \Bigl(CF = CF_{A_i B_j}^{C_n}\Bigr), \qquad i = 1, 2, \ldots, K; \ \ j = 1, 2, \ldots, K$$

where $\mathbf{x}_p = (x_{1p}, x_{2p})$, $p = 1, 2, \ldots, P$, is a training pattern on input space $X_1 \times X_2$, $P$ is the total number of training patterns, $C_n$ is the rule consequent (either Class 1 or Class 2), and $CF_{A_i B_j}^{C_n}$ is the certainty factor that a pattern in fuzzy subspace $A_i B_j$ belongs to class $C_n$.
• 19. To determine the rule consequent and the certainty factor, we use the following procedure:
Step 1: Partition an input space into K×K fuzzy subspaces, and calculate the strength of each class of training patterns in every fuzzy subspace. Each class in a given fuzzy subspace is represented by its training patterns. The more training patterns, the stronger the class. In a given fuzzy subspace, the rule consequent becomes more certain when patterns of one particular class appear more often than patterns of any other class.
Step 2: Determine the rule consequent and the certainty factor in each fuzzy subspace.
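The slide leaves the strength and certainty-factor formulas implicit; the sketch below uses one common formulation from grid-partition rule generation (an assumption here, not a quotation from the lecture): the strength of each class in subspace AiBj is the sum of its patterns' membership products, the consequent is the strongest class, and CF is the margin of the strongest class over the average of the others, normalised by the total strength.

```python
def rule_consequent_and_cf(patterns, mu_a, mu_b, n_classes=2):
    """patterns: list of (x1, x2, label) with labels 0 .. n_classes-1.
    mu_a, mu_b: membership functions of A_i and B_j for this subspace.
    Returns (consequent class, certainty factor); (None, 0.0) means the
    subspace holds no training patterns, so no consequent can be found."""
    beta = [0.0] * n_classes
    for x1, x2, label in patterns:
        beta[label] += mu_a(x1) * mu_b(x2)      # strength of each class
    total = sum(beta)
    if total == 0.0:
        return None, 0.0
    winner = max(range(n_classes), key=lambda n: beta[n])
    mean_others = (total - beta[winner]) / (n_classes - 1)
    return winner, (beta[winner] - mean_others) / total
```

With this definition, CF is close to 1 when all patterns in the subspace belong to one class and close to 0 when two classes are equally strong, matching the interpretation on the next slides.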
• 20. The certainty factor can be interpreted as follows:
- If all the training patterns in fuzzy subspace AiBj belong to the same class, then the certainty factor is maximum and it is certain that any new pattern in this subspace will belong to this class.
- If, however, training patterns belong to different classes and these classes have similar strengths, then the certainty factor is minimum and it is uncertain that a new pattern will belong to any particular class.
• 21.
- This means that patterns in a fuzzy subspace can be misclassified. Moreover, if a fuzzy subspace does not have any training patterns, we cannot determine the rule consequent at all.
- If a fuzzy partition is too coarse, many patterns may be misclassified. On the other hand, if a fuzzy partition is too fine, many fuzzy rules cannot be obtained, because of the lack of training patterns in the corresponding fuzzy subspaces.
• 22. Training patterns are not necessarily distributed evenly in the input space. As a result, it is often difficult to choose an appropriate density for the fuzzy grid. To overcome this difficulty, we use multiple fuzzy rule tables.
• 23. Multiple fuzzy rule tables
[Figure: five fuzzy rule tables, for K = 2, 3, 4, 5 and 6.]
Fuzzy IF-THEN rules are generated for each fuzzy subspace of the multiple fuzzy rule tables, and thus a complete set of rules for our case can be specified as:
2² + 3² + 4² + 5² + 6² = 4 + 9 + 16 + 25 + 36 = 90 rules.
• 24. Once the set of rules S_ALL is generated, a new pattern, x = (x1, x2), can be classified by the following procedure:
Step 1: In every fuzzy subspace of the multiple fuzzy rule tables, calculate the degree of compatibility of the new pattern with each class.
Step 2: Determine the maximum degree of compatibility of the new pattern with each class.
Step 3: Determine the class with which the new pattern has the highest degree of compatibility, and assign the pattern to this class.
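A compact sketch of this procedure (the rule representation and the compatibility measure, taken here as the rule's firing strength weighted by its certainty factor, are illustrative assumptions):

```python
def classify(x, rules, n_classes=2):
    """x = (x1, x2); rules is a list of (mu_a, mu_b, consequent, cf)
    tuples collected from all the multiple fuzzy rule tables."""
    best = [0.0] * n_classes                  # Step 2: maxima per class
    for mu_a, mu_b, consequent, cf in rules:
        if consequent is None:
            continue                          # subspace had no training data
        # Step 1: degree of compatibility of x with this rule's class
        compatibility = mu_a(x[0]) * mu_b(x[1]) * cf
        best[consequent] = max(best[consequent], compatibility)
    # Step 3: assign x to the class with the highest compatibility
    return max(range(n_classes), key=lambda n: best[n])
```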
• 25. The number of multiple fuzzy rule tables required for an accurate pattern classification may be large. Consequently, a complete set of rules can be enormous. Meanwhile, these rules have different classification abilities, and thus by selecting only rules with high potential for accurate classification, we reduce the number of rules.
• 26. Can we use genetic algorithms for selecting fuzzy IF-THEN rules?
- The problem of selecting fuzzy IF-THEN rules can be seen as a combinatorial optimisation problem with two objectives.
- The first, more important, objective is to maximise the number of correctly classified patterns.
- The second objective is to minimise the number of rules.
- Genetic algorithms can be applied to this problem.
• 27. A basic genetic algorithm for selecting fuzzy IF-THEN rules includes the following steps:
Step 1: Randomly generate an initial population of chromosomes. The population size may be relatively small, say 10 or 20 chromosomes. Each gene in a chromosome corresponds to a particular fuzzy IF-THEN rule in the rule set defined by S_ALL.
Step 2: Calculate the performance, or fitness, of each individual chromosome in the current population.
• 28. The problem of selecting fuzzy rules has two objectives: to maximise the accuracy of the pattern classification and to minimise the size of a rule set. The fitness function has to accommodate both of these objectives. This can be achieved by introducing two respective weights, $w_P$ and $w_N$, in the fitness function:

$$f(S) = w_P \frac{P_s}{P_{ALL}} - w_N \frac{N_S}{N_{ALL}}$$

where $P_s$ is the number of patterns classified successfully, $P_{ALL}$ is the total number of patterns presented to the classification system, and $N_S$ and $N_{ALL}$ are the numbers of fuzzy IF-THEN rules in set $S$ and set $S_{ALL}$, respectively.
• 29. The classification accuracy is more important than the size of a rule set. That is, the weights can be chosen as, say, $w_P = 10$ and $w_N = 1$:

$$f(S) = 10\,\frac{P_s}{P_{ALL}} - \frac{N_S}{N_{ALL}}$$
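A sketch of this fitness function with w_P = 10 and w_N = 1 (the {+1, −1} gene encoding follows the mutation operator on the next slide, which flips a gene's sign; `classify_fn` is an assumed helper that classifies a pattern using only the selected rules):

```python
def fitness(chromosome, classify_fn, patterns, n_all):
    """f(S) = 10 * Ps / P_ALL - Ns / N_ALL, where a gene of +1 keeps the
    corresponding rule of S_ALL in S and a gene of -1 drops it."""
    selected = [i for i, gene in enumerate(chromosome) if gene == 1]
    n_s = len(selected)                        # rules in S
    p_s = sum(1 for x, label in patterns       # correctly classified patterns
              if classify_fn(x, selected) == label)
    return 10.0 * p_s / len(patterns) - n_s / n_all
```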
• 30.
Step 3: Select a pair of chromosomes for mating. Parent chromosomes are selected with a probability associated with their fitness; a better fit chromosome has a higher probability of being selected.
Step 4: Create a pair of offspring chromosomes by applying a standard crossover operator. Parent chromosomes are crossed at the randomly selected crossover point.
Step 5: Perform mutation on each gene of the created offspring. The mutation probability is normally kept quite low, say 0.01. The mutation is done by multiplying the gene value by −1.
• 31.
Step 6: Place the created offspring chromosomes in the new population.
Step 7: Repeat Step 3 until the size of the new population becomes equal to the size of the initial population, and then replace the initial (parent) population with the new (offspring) population.
Step 8: Go to Step 2, and repeat the process until a specified number of generations (typically several hundred) has been considered.
The number of rules can be cut down to less than 2% of the initially generated set of rules.
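Putting Steps 3 to 8 together, here is a minimal self-contained sketch of the selection loop (roulette-wheel selection, one-point crossover, and sign-flip mutation; the population size, mutation rate, and `fitness_fn` are placeholders you would supply):

```python
import random

def evolve(fitness_fn, n_genes, pop_size=20, generations=500, p_mut=0.01):
    """Genetic selection of fuzzy rules: a gene of +1 keeps a rule,
    -1 drops it, and mutation multiplies a gene by -1."""
    population = [[random.choice([1, -1]) for _ in range(n_genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness_fn(c) for c in population]
        low = min(scores)
        weights = [s - low + 1e-9 for s in scores]  # shift for roulette wheel
        offspring = []
        while len(offspring) < pop_size:
            # Step 3: fitness-proportionate selection of two parents
            p1, p2 = random.choices(population, weights=weights, k=2)
            # Step 4: one-point crossover at a random point
            cut = random.randint(1, n_genes - 1)
            for child in (p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]):
                # Step 5: mutate each gene with low probability (sign flip)
                child = [-g if random.random() < p_mut else g for g in child]
                # Step 6: place the offspring in the new population
                offspring.append(child)
        # Step 7: replace the parent population with the offspring
        population = offspring[:pop_size]
    # Step 8 ends after the specified number of generations
    return max(population, key=fitness_fn)
```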