INTRODUCTION TO MATLAB NEURAL NETWORK TOOLBOX
                                         Budi Santosa, Fall 2002

FEEDFORWARD NEURAL NETWORK
To create a feedforward backpropagation network we can use NEWFF.

Syntax

net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description

            NEWFF(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
             PR - Rx2 matrix of min and max values for R input elements.
             Si - Size of ith layer, for Nl layers.
             TFi - Transfer function of ith layer, default = 'tansig'.
             BTF - Backprop network training function, default = 'trainlm'.
             BLF - Backprop weight/bias learning function, default = 'learngdm'.
             PF - Performance function, default = 'mse'.
            and returns an N layer feed-forward backprop network.

Consider this set of data:
p=[-1 -1 2 2;0 5 0 5]
t =[-1 -1 1 1]
where p holds the inputs (one column per sample) and t holds the targets.

Suppose we want to create a feedforward neural net with one hidden layer of 3 nodes,
tangent sigmoid as the transfer function in the hidden layer, a linear function for the
output layer, and gradient descent with momentum as the backpropagation training
function. Simply use the following command:

» net=newff([-1 2;0 5],[3 1],{'tansig' 'purelin'},'traingdm');

Note that the first input [-1 2;0 5] gives the minimum and maximum values of each row
of p. We might use minmax(p) instead, especially for a large data set; the command then becomes:

» net=newff(minmax(p),[3 1],{'tansig' 'purelin'},'traingdm');

We then train the network net with the following commands and parameter
values1 (or we can collect this set of commands in an M-file, as sketched below):

»   net.trainParam.epochs=30;   % number of epochs
»   net.trainParam.lr=0.3;      % learning rate
»   net.trainParam.mc=0.6;      % momentum
»   net=train(net,p,t);
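
Collected as an M-file, the whole experiment might read as follows (a sketch of the
commands above; the file name is hypothetical):

% ffnet_demo.m -- feedforward backprop on the toy data (sketch)
p = [-1 -1 2 2; 0 5 0 5];                         % inputs, one column per sample
t = [-1 -1 1 1];                                  % targets
net = newff(minmax(p),[3 1],{'tansig' 'purelin'},'traingdm');
net.trainParam.epochs = 30;                       % number of epochs
net.trainParam.lr = 0.3;                          % learning rate
net.trainParam.mc = 0.6;                          % momentum
net = train(net,p,t);                             % train the network
y = sim(net,p);                                   % simulate on the inputs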


1
    The % sign marks the rest of a line as a comment; MATLAB ignores everything after it.


TRAINLM, Epoch 0/30, MSE 1.13535/0, Gradient 5.64818/1e-010
TRAINLM, Epoch 6/30, MSE 2.21698e-025/0, Gradient 4.27745e-012/1e-010
TRAINLM, Minimum gradient reached, performance goal was not met.

The following plot of training error versus epochs results from running the above
commands (note that the log reports TRAINLM, the toolbox default training function):


[Figure: training record. Title: "Performance is 2.21698e-025, Goal is 0". y-axis:
Training-Blue on a log scale from 10^5 down to 10^-25; x-axis: 6 Epochs.]

To simulate the network use the following command:
» y=sim(net,p);

Then see the result:
» [t;y]

ans =

   -1.0000   -1.0000    1.0000    1.0000
   -1.0000   -1.0000    1.0000    1.0000

Notice that the first row is the actual target t and the second row is the predicted value y.
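
As a quick numerical check of the fit, the toolbox performance function can be applied
to the error (a one-line sketch using the variables above):

» err=mse(t-y)   % mean squared error of the fit; essentially zero here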

Pre-processing data

Some transfer functions require the inputs and targets to be scaled so that they fall
within a specified range. To meet this requirement we need to pre-process the data.


PRESTD: preprocesses the data so that the mean is 0 and the standard deviation is 1.
Syntax: [pn,meanp,stdp,tn,meant,stdt] = prestd(p,t)

Description

PRESTD preprocesses the network training set by normalizing the inputs (p) and targets
(t) so that they have means of zero and standard deviations of 1.

PREMNMX: preprocesses data so that the minimum is -1 and the maximum is 1.

Syntax: [pn,minp,maxp,tn,mint,maxt] = premnmx(p,t)

Description
PREMNMX preprocesses the network training set by normalizing the inputs and targets
so that they fall in the interval [-1,1].

Post-processing data

If we pre-process the data before the experiments, then after simulation we need to
convert the output back to the original scale. The corresponding post-processing
subroutine for prestd is poststd, and for premnmx it is postmnmx.
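
A minimal round-trip sketch, assuming the row-vector data p and t from the first example:

» [pn,meanp,stdp,tn,meant,stdt]=prestd(p,t);    % zero mean, unit standard deviation
» t1=poststd(tn,meant,stdt);                    % recovers t
» [pn2,minp,maxp,tn2,mint,maxt]=premnmx(p,t);   % scaled to the interval [-1,1]
» t2=postmnmx(tn2,mint,maxt);                   % recovers t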

Now, consider the following real time-series data, which we can save as a text file
(e.g., data.txt):

  107.1       113.5   112.7
  113.5       112.7   114.7
  112.7       114.7   123.4
  114.7       123.4   123.6
  123.4       123.6   116.3
  123.6       116.3   118.5
  116.3       118.5   119.8
  118.5       119.8   120.3
  119.8       120.3   127.4
  120.3       127.4   125.1
  127.4       125.1   127.6
  125.1       127.6     129
  127.6         129   124.6
    129       124.6   134.1
  124.6       134.1   146.5
  134.1       146.5   171.2
  146.5       171.2   178.6
  171.2       178.6   172.2
  178.6       172.2   171.5
  172.2       171.5   163.6




Now we would like to work with this real data set, a 20 by 3 matrix. Split the matrix
into two new data sets: the first 12 rows as the training set and the rest as the
testing set. The following commands load the data and specify the training and
testing sets:

»   load data.txt;
»   P=data(1:12,1:2);
»   T=data(1:12,3);
»   a=data(13:20,1:2);
»   s=data(13:20,3);

The subroutine premnmx preprocesses the data so that the converted values fall in the
range [-1,1]. Note that we must transpose our data into row vectors before applying
premnmx.

» [pn,minp,maxp,tn,mint,maxt]=premnmx(P',T');
» [an,mina,maxa,sn,mins,maxs]=premnmx(a',s');

We would like to create a two-layer neural net with 5 nodes in the hidden layer.

» net=newff(minmax(pn),[5 1],{'tansig','tansig'},'traingdm')

Specify these parameters:

»   net.trainParam.epochs=3000;
»   net.trainParam.lr=0.3;
»   net.trainParam.mc=0.6;
»   net=train(net,pn,tn);

After training the network we can simulate our testing data:

[Figure: training record. Title: "Performance is 0.225051, Goal is 0". y-axis:
Training-Blue on a log scale; x-axis: 3000 Epochs.]
» y=sim(net,an)

y =

 Columns 1 through 7

 -0.4697 -0.4748 -0.4500 -0.3973                                                                    0.5257   0.6015   0.5142
 Column 8

  0.5280

To convert y back to the original scale use the following command:

» t=postmnmx(y',mins,maxs);
See the results:
» [t s]
ans =
  138.9175 124.6000
  138.7803 134.1000
  139.4506 146.5000
  140.8737 171.2000
  165.7935 178.6000
  167.8394 172.2000
  165.4832 171.5000
  165.8551 163.6000

» plot(t,'r')
» hold
Current plot held
» plot(s)
» title('Comparison between actual targets and predictions')

[Figure: "Comparison between actual targets and predictions". The predictions t (red)
and the actual targets s (blue) are plotted over the 8 test points; the vertical axis
runs from 120 to 180.]


To calculate the MSE between the actual and predicted values use the following commands:

» d=(t-s).^2;
» mse=mean(d)

mse =

   177.5730
To judge the network performance we can also use regression analysis. After post-
processing the predicted values, we can apply the following command; postreg returns
the slope m and intercept b of the best linear fit of the predictions on the targets,
and the correlation coefficient r:

» [m,b,r]=postreg(t',s')
m =
     0.5368
b =
    68.1723
r =
     0.7539
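
For intuition, postreg's outputs can be reproduced with base MATLAB (a sketch using
the column vectors t and s from above):

» coef=polyfit(s,t,1);        % linear fit of predictions t on targets s
» m=coef(1), b=coef(2)        % slope and intercept, as reported by postreg
» R=corrcoef(s,t); r=R(1,2)   % correlation coefficient r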


[Figure: postreg output. Best Linear Fit: X = (0.537) T + (68.2), with R = 0.754. The
plot shows the data points, the X = T line, and the best linear fit of the predictions
X against the targets T.]




RADIAL BASIS NETWORK

NEWRB Design a radial basis network.

 Synopsis

   net = newrb
   [net,tr] = newrb(P,T,GOAL,SPREAD,MN,DF)

 Description

Radial basis networks can be used to approximate functions. NEWRB adds neurons to
the hidden layer of a radial basis network until it meets the specified mean squared error
goal.
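
Since the error goal doubles as the stopping criterion, a nonzero goal usually stops
training with fewer neurons. A sketch (the goal value here is an illustrative
assumption):

>> [net,tr] = newrb(P',T',25,3,12,1);   % stop once MSE <= 25, or after 12 neurons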
Consider the same data set we used before. Now we use a radial basis network for the
experiment.

>> net = newrb(P',T',0,3,12,1);   % goal 0, spread 3, at most 12 neurons
>> Y=sim(net,a')

Y =

  150.4579  142.0905  166.3241  167.1528  167.1528  167.1528  167.1528  167.1528

>> d=[Y' s]

d =

   150.4579       124.6000
   142.0905       134.1000
   166.3241       146.5000
   167.1528       171.2000
   167.1528       178.6000
   167.1528       172.2000
   167.1528       171.5000
   167.1528       163.6000

>> diff=d(:,1)-d(:,2)

diff =

    25.8579
     7.9905
    19.8241
    -4.0472
   -11.4472
    -5.0472
    -4.3472
     3.5528

>> mse=mean(diff.^2)

mse =

    166.2357
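
Note that the assignments above shadow the built-in functions diff and mse; once mse
is a variable, the toolbox performance function of the same name is hidden. A sketch
that avoids the clash:

>> clear mse diff   % restore the built-in functions
>> err=mse(Y'-s)    % same quantity via the toolbox performance function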

RECURRENT NETWORK

Here we will use the Elman network. The Elman network differs from a conventional two-
layer network in that its first layer has a recurrent connection. Elman networks are
two-layer backpropagation networks with the addition of a feedback connection from the
output of the hidden layer to its input. This feedback path allows Elman networks to
recognize and generate temporal patterns as well as spatial patterns.

The following is an example of applying Elman networks.

»   [an,meana,stda,tn,meant,stdt]=prestd(a',s');
»   [pn,meanp,stdp,Tn,meanT,stdT]=prestd(P',T');
»   net = newelm(minmax(pn),[6 1],{'tansig','logsig'},'traingdm');
»   y=sim(net,an);
»   x=poststd(y',meant,stdt);
»   [x s]

ans =

    158.2551     124.6000
    158.2079     134.1000
    158.3120     146.5000
    159.2024     171.2000
    165.3899     178.6000
    171.6465     172.2000
    170.5381     171.5000
    169.3573     163.6000
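
The listing above does not show a training step; in practice we would train the
network on the standardized training set before simulating it (a sketch; the epoch
count is an illustrative assumption):

» net.trainParam.epochs=1000;   % illustrative value
» net=train(net,pn,Tn);         % train on the standardized training set
» y=sim(net,an);
» x=poststd(y',meanT,stdT);     % convert back using the training statistics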

» plot(x)
» hold
Current plot held
» plot(t,'r')   % t still holds the feedforward predictions from earlier




[Figure: the Elman predictions x (blue) and the earlier feedforward predictions t
(red) plotted over the 8 test points; the vertical axis runs from 120 to 180.]




Learning Vector Quantization (LVQ)
LVQ networks are used to solve classification problems. Here we use the iris data to
implement the technique. The iris data set is very popular in data mining and
statistics; its dimension is 150 by 5, where each row is an observation and the last
column indicates the class. There are 3 classes in the data.
The following commands specify our training and testing data:
»   load iris.txt
»   x=sortrows(iris,2);
»   P=x(21:150,1:4);
»   T=x(21:150,5);
»   a=x(1:20,1:4);
»   s=x(1:20,5);

An LVQ network can be created with the function newlvq.

Syntax
net = newlvq(PR,S1,PC,LR,LF)

       NET = NEWLVQ(PR,S1,PC,LR,LF) takes these inputs,
         PR - Rx2 matrix of min and max values for R input elements.
         S1 - Number of hidden neurons.
         PC - S2 element vector of typical class percentages.
              The dimension depends on the number of classes in our data
              set. Each element indicates the percentage of each class.
              The percentages must sum to 1.
         LR - Learning rate, default = 0.01.
         LF - Learning function, default = 'learnlv2'.
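
The percentage vector can be computed from the training labels rather than typed by
hand; a minimal sketch, assuming T holds integer class labels as in the iris split
above:

» counts=full(sum(ind2vec(T'),2));   % number of observations in each class
» PC=(counts/length(T))';            % class fractions; these sum to 1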




The following code implements an LVQ network.

P=input('input matrix training: ');
T=input('input matrix target training: ');
a=input('input matrix testing: ');
s=input('input matrix target testing: ');
Tc = ind2vec(T');
net=newlvq(minmax(P'),6,[49/130 36/130 45/130]);
net.trainParam.epochs=700;
net=train(net,P',Tc);
y=sim(net,a');
lb=vec2ind(y);
[s lb']
ans =

     2     2
     2     2
     2     2
     3     1
     1     1
     2     2
     2     2
     2     2
     2     2
     2     2
     2     2
     2     2
     2     1
     2     2
     2     2
     3     2
     3     1
     3     1
     3     1
     2     2
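
To quantify the test performance, we can compute the fraction of correctly classified
observations (a sketch using the variables above):

acc=mean(s==lb')   % classification accuracy on the testing set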



