Radial Basis Function (RBF) Networks
RBF network
• This is becoming an increasingly popular neural
  network with diverse applications and is probably
  the main rival to the multi-layered perceptron
• Much of the inspiration for RBF networks has
  come from traditional statistical pattern
  classification techniques
RBF network
• The basic architecture for an RBF network is a 3-layer
  network, as shown in Fig.
• The input layer is simply a fan-out layer and does
  no processing.
• The second or hidden layer performs a non-linear
  mapping from the input space into a (usually)
  higher dimensional space in which the patterns
  become linearly separable.
[Figure: three-layer RBF architecture. Inputs x1, x2, x3 feed the
input layer (fan-out); the hidden layer has weights corresponding to
the cluster centres, with an output function that is usually Gaussian;
the output layer (y1, y2) forms a linear weighted sum.]
Output layer
• The final layer performs a simple weighted sum
  with a linear output.
• If the RBF network is used for function
  approximation (matching a real number) then this
  output is fine.
• However, if pattern classification is required, then
  a hard-limiter or sigmoid function could be placed
  on the output neurons to give 0/1 output values.
Clustering
• The unique feature of the RBF network is the
  process performed in the hidden layer.
• The idea is that the patterns in the input space
  form clusters.
• If the centres of these clusters are known, then the
  distance from the cluster centre can be measured.
• Furthermore, this distance measure is made non-
  linear, so that if a pattern is in an area that is close
  to a cluster centre it gives a value close to 1.
Clustering
• Beyond this area, the value drops dramatically.
• The notion is that this area is radially symmetrical
  around the cluster centre, so that the non-linear
  function becomes known as the radial-basis
  function.
Gaussian function
• The most commonly used radial-basis function is
  a Gaussian function:

    ϕ(r) = exp( −r² / (2σ²) )

• In an RBF network, r is the distance from the
  cluster centre.
• The equation represents a Gaussian bell-shaped
  curve, as shown in Fig.
[Figure: Gaussian bell-shaped curve — ϕ plotted against x, peaking at
1 over the cluster centre and falling towards 0 away from it.]
Distance measure
• The distance measured from the cluster centre is
  usually the Euclidean distance.
• For each neuron in the hidden layer, the weights
  represent the co-ordinates of the centre of the
  cluster.
• Therefore, when that neuron receives an input
  pattern, X, the distance is found using the
  following equation:
Distance measure

    rⱼ = √( Σᵢ₌₁ⁿ (xᵢ − wᵢⱼ)² )
Width of hidden unit basis function

    ϕⱼ = exp( − Σᵢ₌₁ⁿ (xᵢ − wᵢⱼ)² / (2σ²) )

The variable sigma, σ, defines the width or radius of the
bell shape and is something that has to be determined
empirically. When the distance from the centre of the
Gaussian reaches σ, the output drops from 1 to about 0.6
(exp(−1/2) ≈ 0.61).
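As a minimal sketch (not part of the original slides; the function name `rbf_output` is illustrative), the hidden-unit computation combines the Euclidean distance with the Gaussian:

```python
import math

def rbf_output(x, centre, sigma):
    """Gaussian radial basis function: exp(-r^2 / (2 * sigma^2)),
    where r is the Euclidean distance from input x to the cluster centre."""
    r_sq = sum((xi - wi) ** 2 for xi, wi in zip(x, centre))
    return math.exp(-r_sq / (2 * sigma ** 2))

print(rbf_output([0.0, 0.0], [0.0, 0.0], 1.0))  # at the centre: 1.0
print(rbf_output([1.0, 0.0], [0.0, 0.0], 1.0))  # at distance sigma: ~0.61
```

Note that squaring the distance and the square root in the Euclidean distance cancel, so only the squared distance is needed.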
Example
• An often quoted example which shows how the
  RBF network can handle a non-linearly separable
  function is the exclusive-or problem.
• One solution has 2 inputs, 2 hidden units and 1
  output.
• The centres for the two hidden units are set at c1 =
  (0,0) and c2 = (1,1), and the value of the radius σ is
  chosen such that 2σ² = 1.
Example
• In the table below, x denotes the inputs, r the squared
  distance from each centre, and ϕ the output of each
  hidden unit.
• When all four input patterns are shown to the network,
  the outputs of the two hidden units are as follows:
x1   x2   r1   r2   ϕ1    ϕ2
0    0    0    2    1     0.1
0    1    1    1    0.4   0.4
1    0    1    1    0.4   0.4
1    1    2    0    0.1   1
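The table values can be reproduced with a short sketch (assuming, as above, centres (0,0) and (1,1) and 2σ² = 1, so that ϕ = exp(−r) with r the squared distance):

```python
import math

centres = [(0.0, 0.0), (1.0, 1.0)]  # c1 and c2

def hidden_outputs(x):
    # r: squared distance to each centre; phi = exp(-r) since 2*sigma^2 = 1
    rs = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centres]
    return rs, [math.exp(-r) for r in rs]

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    rs, phis = hidden_outputs(x)
    print(x, rs, [round(p, 1) for p in phis])
```

Here exp(−1) ≈ 0.37 and exp(−2) ≈ 0.14, which round to the 0.4 and 0.1 shown in the table.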
Example
• Next Fig. shows the position of the four input
  patterns using the output of the two hidden units
  as the axes on the graph - it can be seen that the
  patterns are now linearly separable.
• This is an ideal solution - the centres were chosen
  carefully to show this result.
• The learning methods generally adopted for an RBF
  network would find it practically impossible to arrive at
  those exact centre values - the learning methods that are
  usually adopted are described later.
[Figure: the four input patterns plotted with ϕ1 and ϕ2 as axes. The
patterns for inputs 01 and 10 coincide at (0.4, 0.4), and the two
classes are now linearly separable.]
Training hidden layer
• The hidden layer in a RBF network has units
  which have weights that correspond to the vector
  representation of the centre of a cluster.
• These weights are found either using a traditional
  clustering algorithm such as the k-means
  algorithm, or adaptively using essentially the
  Kohonen algorithm.
Training hidden layer
• In either case, the training is unsupervised but the
  number of clusters that you expect, k, is set in
  advance. The algorithms then find the best fit to
  these clusters.
• The k-means algorithm will be briefly outlined.
• Initially k points in the pattern space are randomly
  set.
Training hidden layer
• Then for each item of data in the training set, the
  distances are found from all of the k centres.
• The closest centre is chosen for each item of data -
  this is the initial classification, so all items of data
  will be assigned a class from 1 to k.
• Then, for all data which has been found to be class
  1, the average or mean values are found for each of
  the co-ordinates.
Training hidden layer
• These become the new values for the centre
  corresponding to class 1.
• This is repeated for all data found to be in class 2, then
  class 3 and so on until class k has been dealt with - we
  now have k new centres.
• The process of measuring the distance between the
  centres and each item of data and re-classifying
  the data is repeated until there is no further change
  – i.e. the sum of the distances is monitored, and
  training halts when the total distance no longer
  falls.
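The steps above can be sketched in Python; the function name and the optional explicit initial centres are illustrative, not from the slides.

```python
import random

def k_means(data, k, centres=None, iters=100):
    """Batch k-means over a list of points (tuples) in pattern space."""
    if centres is None:
        # Initially k points in the pattern space are randomly set
        centres = [list(p) for p in random.sample(data, k)]
    else:
        centres = [list(c) for c in centres]
    for _ in range(iters):
        # Classify each item of data by its closest centre
        clusters = [[] for _ in range(k)]
        for p in data:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centres[j])))
            clusters[j].append(p)
        # New centre for each class = mean of the co-ordinates of its data
        new = [[sum(col) / len(c) for col in zip(*c)] if c else centres[j]
               for j, c in enumerate(clusters)]
        if new == centres:   # no further change: training halts
            break
        centres = new
    return centres
```

For simplicity this halts when the centres stop moving, which is equivalent to the total distance no longer falling.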
Adaptive k-means
• The alternative is to use an adaptive k-means
  algorithm, which is similar to Kohonen learning.
• Input patterns are presented to all of the cluster
  centres one at a time, and the cluster centres
  adjusted after each one. The cluster centre that is
  nearest to the input data wins, and is shifted
  slightly towards the new data.
• This has the advantage that you don’t have to store
  all of the training data so can be done on-line.
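The adaptive version reduces to a single update step, sketched below (the step-size parameter `lr` is an illustrative name, not from the slides):

```python
def adaptive_step(centres, x, lr=0.1):
    """One on-line update: the centre nearest to x wins and is shifted
    slightly towards the new data point; lr controls the shift size."""
    j = min(range(len(centres)),
            key=lambda j: sum((a - c) ** 2 for a, c in zip(x, centres[j])))
    centres[j] = [c + lr * (a - c) for c, a in zip(centres[j], x)]
    return centres

print(adaptive_step([[0.0, 0.0], [5.0, 5.0]], (1.0, 1.0)))
# the nearest centre [0, 0] moves to [0.1, 0.1]; the other is unchanged
```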
Finding radius of Gaussians
• Having found the cluster centres using one or
  other of these methods, the next step is
  determining the radius of the Gaussian curves.
• This is usually done using the P-nearest neighbour
  algorithm.
• A number P is chosen, and for each centre, the P
  nearest centres are found.
Finding radius of Gaussians
• The root-mean-squared distance between the
  current cluster centre and its P nearest neighbours
  is calculated, and this is the value chosen for σ.
• So, if the current cluster centre is cⱼ, the value is:

    σⱼ = √( (1/P) Σᵢ₌₁ᴾ (cⱼ − cᵢ)² )
Finding radius of Gaussians
• A typical value for P is 2, in which case σ is set to
  be the average distance from the two nearest
  neighbouring cluster centres.
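This P-nearest-neighbour computation can be sketched as follows (the function name `radii` is illustrative):

```python
import math

def radii(centres, p=2):
    """sigma_j = root-mean-squared distance from centre j to its
    P nearest neighbouring centres (assumes at least p other centres)."""
    sigmas = []
    for j, cj in enumerate(centres):
        # squared distances to every other centre, smallest first
        d_sq = sorted(sum((a - b) ** 2 for a, b in zip(cj, ck))
                      for k, ck in enumerate(centres) if k != j)
        sigmas.append(math.sqrt(sum(d_sq[:p]) / p))
    return sigmas

print(radii([(0, 0), (0, 1), (1, 0), (1, 1)]))  # each sigma is 1.0
```

With the four XOR centres, each centre's two nearest neighbours are at distance 1, giving σ = 1 for every unit, as used in the example below.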
XOR example
• Using this method, the XOR function can be
  implemented using a minimum of 4 hidden units.
• If more than four units are used, the additional
  units duplicate the centres and therefore do not
  contribute any further discrimination to the
  network.
• So, assuming four neurons in the hidden layer,
  each unit is centred on one of the four input
  patterns, namely 00, 01, 10 and 11.
• The P-nearest neighbour algorithm with P set to 2
  is used to find the size of the radii.
• For each neuron, the distances to the other
  three neurons are 1, 1 and 1.414, so the two nearest
  cluster centres are at a distance of 1.
• Using the root-mean-squared distance as the radius
  gives each neuron a radius of 1.
• Using these values for the centres and radius, if
  each of the four input patterns is presented to the
  network, the output of the hidden layer would be:
input   neuron 1   neuron 2   neuron 3   neuron 4
00      0.6        0.4        1.0        0.6
01      0.4        0.6        0.6        1.0
10      1.0        0.6        0.6        0.4
11      0.6        1.0        0.4        0.6
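These hidden-layer values can be checked with a sketch (the assignment of centres to neurons is inferred from the table; σ = 1 as derived above):

```python
import math

# inferred from the table: neuron 1 at 10, neuron 2 at 11,
# neuron 3 at 00, neuron 4 at 01
centres = [(1, 0), (1, 1), (0, 0), (0, 1)]

def hidden(x, sigma=1.0):
    # Gaussian of the squared Euclidean distance to each centre
    return [math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2 * sigma ** 2))
            for c in centres]

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, [round(v, 1) for v in hidden(x)])
```

Here exp(−1/2) ≈ 0.61 and exp(−1) ≈ 0.37 round to the 0.6 and 0.4 in the table.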
Training output layer
• Having trained the hidden layer with some
  unsupervised learning, the final step is to train the
  output layer using a standard gradient descent
  technique such as the Least Mean Squares
  algorithm.
• In the example of the exclusive-or function given
  above, a suitable set of weights would be +1, –1,
  –1 and +1. With these weights, the values of net
  and the output are:
input   neuron 1   neuron 2   neuron 3   neuron 4   net    output
00      0.6        0.4        1.0        0.6        –0.2   0
01      0.4        0.6        0.6        1.0        0.2    1
10      1.0        0.6        0.6        0.4        0.2    1
11      0.6        1.0        0.4        0.6        –0.2   0
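Putting the pieces together for the XOR example (the weights +1, −1, −1, +1 are from the slide; placing the hard-limiter threshold at 0 is an assumption consistent with the table):

```python
import math

centres = [(1, 0), (1, 1), (0, 0), (0, 1)]  # ordering inferred from the table
weights = [1, -1, -1, 1]

def xor_net(x, sigma=1.0):
    # hidden layer: Gaussian of squared distance to each centre
    phis = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2 * sigma ** 2))
            for c in centres]
    # output layer: linear weighted sum followed by a hard-limiter
    net = sum(w * p for w, p in zip(weights, phis))
    return 1 if net > 0 else 0

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(x))  # reproduces the 0, 1, 1, 0 XOR truth table
```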
Advantages/Disadvantages
• An RBF network trains faster than an MLP.
• Another claimed advantage is that the hidden
  layer is easier to interpret than the hidden layer
  in an MLP.
• Although the RBF is quick to train, once training
  is finished and it is in use it is slower than an
  MLP, so where execution speed is a factor an MLP
  may be more appropriate.
Summary
• Statistical feed-forward networks such as the RBF
  network have become very popular, and are
  serious rivals to the MLP.
• They are essentially well-tried statistical techniques
  presented as neural networks.
• Learning mechanisms in statistical neural
  networks are not biologically plausible, and so have
  not been taken up by those researchers who insist
  on biological analogies.

WordPress Websites for Engineers: Elevate Your Brand
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information Developers
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 

Circuitanlys2

  • 1. EXPERT SYSTEMS AND SOLUTIONS Email: expertsyssol@gmail.com expertsyssol@yahoo.com Cell: 9952749533 www.researchprojects.info PAIYANOOR, OMR, CHENNAI Call For Research Projects Final year students of B.E in EEE, ECE, EI, M.E (Power Systems), M.E (Applied Electronics), M.E (Power Electronics) Ph.D Electrical and Electronics. Students can assemble their hardware in our Research labs. Experts will be guiding the projects.
  • 2. Radial Basis Function (RBF) Networks
  • 3. RBF network • This is becoming an increasingly popular neural network with diverse applications and is probably the main rival to the multi-layered perceptron • Much of the inspiration for RBF networks has come from traditional statistical pattern classification techniques
  • 4. RBF network • The basic architecture for an RBF is a 3-layer network, as shown in Fig. • The input layer is simply a fan-out layer and does no processing. • The second or hidden layer performs a non-linear mapping from the input space into a (usually) higher-dimensional space in which the patterns become linearly separable.
  • 5. [Figure: the three-layer RBF architecture. Inputs x1, x2, x3 enter a fan-out input layer; the hidden layer's weights correspond to cluster centres, with an output function that is usually Gaussian; the output layer produces y1, y2 as a linear weighted sum.]
  • 6. Output layer • The final layer performs a simple weighted sum with a linear output. • If the RBF network is used for function approximation (matching a real number) then this output is fine. • However, if pattern classification is required, then a hard-limiter or sigmoid function could be placed on the output neurons to give 0/1 output values.
  • 7. Clustering • The unique feature of the RBF network is the process performed in the hidden layer. • The idea is that the patterns in the input space form clusters. • If the centres of these clusters are known, then the distance from the cluster centre can be measured. • Furthermore, this distance measure is made non-linear, so that if a pattern is in an area that is close to a cluster centre it gives a value close to 1.
  • 8. Clustering • Beyond this area, the value drops dramatically. • The notion is that this area is radially symmetrical around the cluster centre, so that the non-linear function becomes known as the radial-basis function.
  • 9. Gaussian function • The most commonly used radial-basis function is a Gaussian function, φ(r) = exp( −r² / (2σ²) ). • In an RBF network, r is the distance from the cluster centre. • The equation represents a Gaussian bell-shaped curve, as shown in Fig.
  • 10. [Figure: the Gaussian bell-shaped curve, φ plotted against x from −1 to 1, peaking at φ = 1 when x = 0.]
  • 11. Distance measure • The distance measured from the cluster centre is usually the Euclidean distance. • For each neuron in the hidden layer, the weights represent the co-ordinates of the centre of the cluster. • Therefore, when that neuron receives an input pattern, X, the distance is found using the following equation:
  • 12. Distance measure: r_j = √( Σ_{i=1..n} (x_i − w_ij)² )
  • 13. Width of hidden unit basis function: φ_j = exp( −Σ_{i=1..n} (x_i − w_ij)² / (2σ²) ). The variable sigma, σ, defines the width or radius of the bell-shape and is something that has to be determined empirically. When the distance from the centre of the Gaussian reaches σ, the output drops from 1 to about 0.6.
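The distance measure and the Gaussian width formula combine into a single hidden-unit computation; a minimal Python sketch (the function name is illustrative):

```python
import math

def rbf_activation(x, centre, sigma):
    """Gaussian hidden-unit output: phi = exp(-r^2 / (2*sigma^2)),
    where r is the Euclidean distance from input x to the cluster centre."""
    r_squared = sum((xi - wi) ** 2 for xi, wi in zip(x, centre))
    return math.exp(-r_squared / (2.0 * sigma ** 2))

# at the centre phi = 1; at distance sigma it falls to exp(-0.5), about 0.6
print(rbf_activation((0.0, 0.0), (0.0, 0.0), 1.0))   # 1.0
print(rbf_activation((1.0, 0.0), (0.0, 0.0), 1.0))   # ~0.607
```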
  • 14. Example • An often quoted example which shows how the RBF network can handle a non-linearly separable function is the exclusive-or problem. • One solution has 2 inputs, 2 hidden units and 1 output. • The centres for the two hidden units are set at c1 = 0,0 and c2 = 1,1, and the value of radius σ is chosen such that 2σ2 = 1.
  • 15. Example • The inputs are x, the distances from the centres squared are r, and the outputs from the hidden units are ϕ. • When all four examples of input patterns are shown to the network, the outputs of the two hidden units are shown in the following table.
  • 16.
     x1  x2 |  r1  r2 |  φ1   φ2
      0   0 |   0   2 | 1.0  0.1
      0   1 |   1   1 | 0.4  0.4
      1   0 |   1   1 | 0.4  0.4
      1   1 |   2   0 | 0.1  1.0
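The values in this table can be reproduced directly from the definitions above, taking the centres c1 = (0,0), c2 = (1,1) and the choice 2σ² = 1 from the example:

```python
import math

centres = [(0.0, 0.0), (1.0, 1.0)]   # c1 and c2 from the example

def hidden(x):
    """Return the squared distances r to each centre and phi = exp(-r),
    since the example chooses sigma so that 2*sigma^2 = 1."""
    r = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centres]
    phi = [math.exp(-rj) for rj in r]
    return r, phi

for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    r, phi = hidden(pattern)
    print(pattern, r, [round(p, 1) for p in phi])
```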
  • 17. Example • Next Fig. shows the position of the four input patterns using the output of the two hidden units as the axes on the graph - it can be seen that the patterns are now linearly separable. • This is an ideal solution - the centres were chosen carefully to show this result. • Methods generally adopted for learning in an RBF network would find it impossible to arrive at those centre values - the learning methods that are usually adopted will be described later.
  • 18. [Figure: the four input patterns plotted with φ1 and φ2 as axes; the two classes are now linearly separable.]
  • 19. Training hidden layer • The hidden layer in an RBF network has units whose weights correspond to the vector representation of the centre of a cluster. • These weights are found either using a traditional clustering algorithm such as the k-means algorithm, or adaptively, using essentially the Kohonen algorithm.
  • 20. Training hidden layer • In either case, the training is unsupervised, but the number of clusters that you expect, k, is set in advance. The algorithms then find the best fit to these clusters. • The k-means algorithm will be briefly outlined. • Initially, k points in the pattern space are randomly set.
  • 21. Training hidden layer • Then for each item of data in the training set, the distances are found from all of the k centres. • The closest centre is chosen for each item of data - this is the initial classification, so all items of data will be assigned a class from 1 to k. • Then, for all data which has been found to be class 1, the average or mean values are found for each of the co-ordinates.
  • 22. Training hidden layer • These become the new values for the centre corresponding to class 1. • This is repeated for all data found to be in class 2, then class 3 and so on until class k is dealt with - we now have k new centres. • The process of measuring the distance between the centres and each item of data and re-classifying the data is repeated until there is no further change - i.e. the sum of the distances is monitored, and training halts when the total distance no longer falls.
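The batch k-means procedure just described can be sketched in Python (an illustrative implementation, not an optimised one):

```python
import random

def squared_dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def k_means(data, k, max_iters=100):
    """Batch k-means as described above: classify every item by its nearest
    centre, move each centre to the mean of its class, and stop once the
    centres no longer change."""
    centres = random.sample(data, k)          # k initial points, set randomly
    for _ in range(max_iters):
        classes = [min(range(k), key=lambda j: squared_dist(x, centres[j]))
                   for x in data]
        new_centres = []
        for j in range(k):
            members = [x for x, c in zip(data, classes) if c == j]
            if members:
                new_centres.append(tuple(sum(col) / len(members)
                                         for col in zip(*members)))
            else:
                new_centres.append(centres[j])  # keep a centre with no members
        if new_centres == centres:              # no further change: halt
            break
        centres = new_centres
    return centres
```

With two well-separated groups of points, the returned centres settle on the group means.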
  • 23. Adaptive k-means • The alternative is to use an adaptive k-means algorithm, which is similar to Kohonen learning. • Input patterns are presented to all of the cluster centres one at a time, and the cluster centres are adjusted after each one. The cluster centre that is nearest to the input data wins, and is shifted slightly towards the new data. • This has the advantage that you don’t have to store all of the training data, so it can be done on-line.
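A sketch of this adaptive (online) variant, moving only the winning centre a small step towards each pattern; the learning rate and epoch count here are arbitrary choices, not from the slides:

```python
def adaptive_k_means(data, centres, rate=0.5, epochs=20):
    """Online k-means: for each pattern, the nearest centre wins and is
    shifted a fraction `rate` of the way towards that pattern."""
    centres = [list(c) for c in centres]
    for _ in range(epochs):
        for x in data:
            winner = min(range(len(centres)),
                         key=lambda j: sum((xi - ci) ** 2
                                           for xi, ci in zip(x, centres[j])))
            centres[winner] = [ci + rate * (xi - ci)
                               for xi, ci in zip(x, centres[winner])]
    return [tuple(c) for c in centres]
```

Because each pattern is processed as it arrives, nothing needs to be stored between updates.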
  • 24. Finding radius of Gaussians • Having found the cluster centres using one or other of these methods, the next step is determining the radius of the Gaussian curves. • This is usually done using the P-nearest neighbour algorithm. • A number P is chosen, and for each centre, the P nearest centres are found.
  • 25. Finding radius of Gaussians • The root-mean-squared distance between the current cluster centre and its P nearest neighbours is calculated, and this is the value chosen for σ. • So, if the current cluster centre is c_j and its P nearest neighbours are c_i, the value is: σ_j = √( (1/P) Σ_{i=1..P} (c_j − c_i)² )
  • 26. Finding radius of Gaussians • A typical value for P is 2, in which case σ is set to be the average distance from the two nearest neighbouring cluster centres.
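A sketch of this P-nearest-neighbour heuristic in Python; applied to the four XOR centres used in the next example, every σ comes out as 1:

```python
import math

def p_nearest_sigma(centres, p=2):
    """For each centre, sigma is the root-mean-squared distance to its
    P nearest neighbouring centres."""
    sigmas = []
    for j, cj in enumerate(centres):
        d_squared = sorted(sum((a - b) ** 2 for a, b in zip(cj, ck))
                           for k, ck in enumerate(centres) if k != j)
        sigmas.append(math.sqrt(sum(d_squared[:p]) / p))
    return sigmas

print(p_nearest_sigma([(0, 0), (0, 1), (1, 0), (1, 1)]))   # [1.0, 1.0, 1.0, 1.0]
```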
  • 27. XOR example • Using this method, the XOR function can be implemented using a minimum of 4 hidden units. • If more than four units are used, the additional units duplicate the centres and therefore do not contribute any further discrimination to the network. • So, assuming four neurons in the hidden layer, each unit is centred on one of the four input patterns, namely 00, 01, 10 and 11.
  • 28. • The P-nearest neighbour algorithm with P set to 2 is used to find the size of the radii. • In each of the neurons, the distances to the other three neurons are 1, 1 and 1.414, so the two nearest cluster centres are at a distance of 1. • Using the root-mean-squared distance to these two as the radius gives each neuron a radius of 1. • Using these values for the centres and radius, if each of the four input patterns is presented to the network, the output of the hidden layer would be:
  • 29.
     input | neuron 1  neuron 2  neuron 3  neuron 4
       00  |    0.6       0.4       1.0       0.6
       01  |    0.4       0.6       0.6       1.0
       10  |    1.0       0.6       0.6       0.4
       11  |    0.6       1.0       0.4       0.6
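These values can be checked in Python; the column order of the centres is an assumption recovered from which input drives each neuron to 1.0 (neuron 1 centred on 10, neuron 2 on 11, neuron 3 on 00, neuron 4 on 01):

```python
import math

# assumed centre order, matching the table columns
centres = [(1, 0), (1, 1), (0, 0), (0, 1)]
sigma = 1.0

def hidden_layer(x):
    """Gaussian output of each hidden unit for input pattern x."""
    return [math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2 * sigma ** 2))
            for c in centres]

for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, [round(v, 1) for v in hidden_layer(pattern)])
```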
  • 30. Training output layer • Having trained the hidden layer with some unsupervised learning, the final step is to train the output layer using a standard gradient descent technique such as the Least Mean Squares algorithm. • In the example of the exclusive-or function given above, a suitable set of weights would be +1, −1, −1 and +1. With these weights the value of net and the output is:
  • 31.
     input | neuron 1  neuron 2  neuron 3  neuron 4 |  net  | output
       00  |    0.6       0.4       1.0       0.6   | −0.2  |   0
       01  |    0.4       0.6       0.6       1.0   |  0.2  |   1
       10  |    1.0       0.6       0.6       0.4   |  0.2  |   1
       11  |    0.6       1.0       0.4       0.6   | −0.2  |   0
```
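Applying the weights +1, −1, −1, +1 to the rounded hidden-layer values reproduces the net and output columns; the hard limiter is assumed to fire on net > 0, consistent with the 0/1 outputs shown:

```python
# rounded hidden-layer outputs from the table, one row per input pattern
phi = {
    (0, 0): (0.6, 0.4, 1.0, 0.6),
    (0, 1): (0.4, 0.6, 0.6, 1.0),
    (1, 0): (1.0, 0.6, 0.6, 0.4),
    (1, 1): (0.6, 1.0, 0.4, 0.6),
}
weights = (1, -1, -1, 1)

def classify(x):
    """Linear weighted sum followed by a hard limiter."""
    net = sum(w * p for w, p in zip(weights, phi[x]))
    return net, (1 if net > 0 else 0)

for pattern in sorted(phi):
    net, out = classify(pattern)
    print(pattern, round(net, 1), out)   # XOR: 00->0, 01->1, 10->1, 11->0
```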
  • 32. Advantages/Disadvantages • An RBF network trains faster than an MLP • Another claimed advantage is that the hidden layer is easier to interpret than the hidden layer in an MLP. • Although the RBF is quick to train, when training is finished and it is being used it is slower than an MLP, so where speed is a factor an MLP may be more appropriate.
  • 33. Summary • Statistical feed-forward networks such as the RBF network have become very popular, and are serious rivals to the MLP. • They are essentially well-tried statistical techniques presented as neural networks. • Learning mechanisms in statistical neural networks are not biologically plausible - so they have not been taken up by those researchers who insist on biological analogies.