Smriti Sehgal / International Journal of Engineering Research and Applications
                     (IJERA)            ISSN: 2248-9622     www.ijera.com
                      Vol. 2, Issue 5, September- October 2012, pp.043-046

  Remotely Sensed LANDSAT Image Classification Using Neural
                    Network Approaches
                                               Smriti Sehgal
                Department of Computer Science & Engineering, Amity University, Noida, India


ABSTRACT
          In this paper, a LANDSAT multispectral image is classified using several unsupervised and supervised techniques. Pixel-by-pixel classification approaches proved infeasible and time consuming for multispectral images. To overcome this, a feature-based classification approach is used instead of classifying each pixel. Three supervised techniques, namely k-NN, BPNN, and PCNN, are investigated for classification using textural, spatial, and spectral features. Experiments show that supervised approaches perform better than unsupervised ones. k-NN, BPNN, and PCNN are compared using these features, and the classification accuracy of BPNN is found to be higher than that of k-NN and PCNN.

Keywords - Classification, multispectral image, neural network, spatial feature, textural feature.

I. INTRODUCTION
          A remotely sensed satellite image with multispectral bands gives a large amount of data about the ground area of interest, and the size of such images is growing with advances in the sensors capturing them. Processing these images poses challenges such as choosing appropriate bands, extracting only relevant features, and classification, due to spectral and spatial homogeneity. Using only the spectral information of a multispectral image is not sufficient for classification; combining either textural or spatial information with the spectral information increases the classification accuracy of the system. Various spatial features, such as length, width, PSI, and ratio, are extracted from the image using the concept of direction lines [5]. The gray level co-occurrence matrix (GLCM) [7] is widely used and successful to some extent; it is used here to extract textural features from the LANDSAT image. In [6], a new feature similarity measure is introduced, known as the maximum information compression index (MICI). This measure is based on linear dependency among variables; its benefit is that if one of a pair of linearly dependent features is removed, the data remain linearly separable. After extracting and selecting appropriate features from the image, these features are given as input to classifiers for further processing. Several classification approaches are used, such as k-means [8], SVM [4], BPNN [2][8], and PCNN. In [9], neural network techniques are presented and compared for remotely sensed multispectral images.
          In this paper, a LANDSAT ETM+ multispectral image of Brazil is used for the experiments and result analysis. First, band reduction of the multispectral image is achieved using the PCA technique, which reduces the number of bands from seven to three. Second, textural and spatial features are extracted using GLCM and PSI, and the structural feature set [4] is constructed from these features. Third, relevant features are selected using the S-Index approach [6]. Fourth, both feature sets, along with the spectral features, are given as input to different classifiers. Classification accuracy is calculated and then compared across the classifiers.
          This paper is organized as follows: section II gives the detailed specifications of the image and its reference data. Section III describes the features extracted, the feature selection approach, and the classification techniques. Section IV gives the design flow. Section V presents the experimental results, and section VI concludes the paper.

II. DATA ACQUISITION
          The LANDSAT ETM+ image used consists of 7 spectral bands, each with a spatial resolution of 30 meters. The image, acquired on 20 January 2000, is of good quality with little sensor distortion and has a spectral resolution of 0.1 micrometer. Its dimension is 169 × 169 pixels, stored in TIFF format. Ground truth data was available with the image as a reference for processing.

Figure 1: Ground Truth Data

          Using the available ground data, seven classes are known in the image. The search program begins by examining blocks of pixels, looking for spectrally homogeneous blocks. The data samples created this way total 122 samples. Of these, approximately


74 were used as training samples and the rest were kept as testing samples.

III. METHODOLOGY
3.1 Textural Features
          GLCM [7] is used to extract textural features from the image. GLCM, also known as the gray-level spatial dependence matrix, is a statistical method that considers the spatial relationship of pixels. By default, the spatial relationship is defined between the pixel of interest and the pixel to its immediate right (horizontally adjacent), but other spatial relationships between the two pixels can also be used. Each element (I, J) of the resultant GLCM is simply the number of times that a pixel with value I occurred in the specified spatial relationship to a pixel with value J in the input image. The following GLCM features were extracted: contrast, dissimilarity, entropy, homogeneity, GLCM mean, GLCM variance, and GLCM standard deviation.

3.2 Spatial Features
          Shape and structural features are extracted using PSI [5] and SFS [4]. The Pixel Shape Index method uses the concept of direction lines for the shape feature. In [5], the direction lines of each pixel are determined using the following rules: 1) the pixel homogeneity of the i-th pixel is less than a predefined threshold T1; 2) the total number of pixels in the direction line is less than another predefined threshold T2. Theoretically, T1 is set to between 2.5 and 4.0 times the average standard deviation of the Euclidean distance of the training pixel data from the class means [4], and T2 is set to 0.5 to 0.6 times the number of rows or columns of the image [4]. The length of each direction line is found, and PSI is calculated as in [5].
          Statistical measures are employed to reduce the dimensionality and extract features from the direction-lines histogram. The following measures are extracted from the histogram: length, width, ratio, and standard deviation. All spatial features together form the structure feature set, SFS.

3.3 Feature Selection
          Feature selection is the process of removing features from the data set that are irrelevant with respect to the task to be performed. It involves two steps: partitioning the original feature set into a number of homogeneous subsets (clusters), and selecting a representative feature from each such cluster. Partitioning of the features is done based on the k-NN principle using MICI [6]. The k nearest features of each feature are computed first. Among them, the feature having the most compact subset is selected, and its k neighboring features are discarded. The process is repeated for the remaining features until all of them are either selected or discarded. After all features are clustered, one representative feature is selected from each cluster; here, the mean of all features in a cluster represents the cluster.

3.4 Classification
          The classification step assigns each block or pixel to one of the known classes. Back propagation is probably the most common method for training feed-forward neural networks. A forward pass propagates an input pattern through the network and produces an actual output; the backward pass uses the desired outputs corresponding to the input pattern and updates the weights according to the error signal. A PCNN is a two-dimensional neural network: each neuron in the network corresponds to one pixel of the input image, receiving its pixel's color information (e.g. intensity) as an external stimulus. Each neuron also connects with its neighboring neurons, receiving local stimuli from them.

IV. DESIGN FLOW
          Fig. 2 shows the steps used to process the multispectral LANDSAT image.

Figure 2: Classification flow of MSI

          Noise removal and dimension reduction are performed in the preprocessing step. K-means, an unsupervised technique requiring no prior knowledge about the image, is applied first. Textural, spatial, and spectral features are then computed, and feature selection is performed. Several classification approaches, unsupervised and supervised, are evaluated; k-nearest neighbor, BPNN, and PCNN are applied as the supervised techniques, with training data fed into each supervised classifier as an external input to support the classification step. Classification accuracy is calculated and compared for all the classifiers.

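The MICI similarity measure used in section 3.3 is, per [6], the smaller eigenvalue of the 2×2 covariance matrix of a feature pair; it is zero exactly when the two features are linearly dependent. A hedged sketch (the function name and test data are illustrative, not from the paper):

```python
import numpy as np

def mici(x, y):
    """Maximum information compression index of a feature pair [6]:
    the smaller eigenvalue of their 2x2 covariance matrix."""
    cov = np.cov(x, y)                 # 2x2 covariance matrix
    eigvals = np.linalg.eigvalsh(cov)  # eigenvalues in ascending order
    return eigvals[0]

rng = np.random.default_rng(0)
f1 = rng.normal(size=200)
f2 = 3.0 * f1 + 1.0        # linearly dependent on f1 -> MICI ~ 0
f3 = rng.normal(size=200)  # independent of f1       -> MICI > 0
```

In the clustering step, each feature's k nearest neighbors under this measure are found, and redundant (low-MICI) neighbors of the selected representative are discarded.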

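The forward and backward passes of back propagation described in section 3.4 can be sketched as follows. This is a minimal single-hidden-layer illustration with made-up layer sizes and learning rate, not the paper's implementation (the experiments use 7-5-7 and 4-3-7 layouts):

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hid, n_out = 4, 3, 2               # illustrative layer sizes
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, t, lr=0.5):
    """One forward pass producing an actual output, then one backward
    pass updating the weights from the error signal."""
    global W1, W2
    h = sigmoid(x @ W1)             # forward: hidden activations
    y = sigmoid(h @ W2)             # forward: actual output
    err = y - t                     # error w.r.t. the desired output
    d_out = err * y * (1 - y)       # sigmoid derivative at output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * np.outer(h, d_out)   # backward: weight updates
    W1 -= lr * np.outer(x, d_hid)
    return float(np.sum(err ** 2))

x = np.array([0.2, 0.7, 0.1, 0.9])  # a hypothetical feature vector
t = np.array([1.0, 0.0])            # desired class encoding
losses = [train_step(x, t) for _ in range(200)]
```

Training repeats such steps over all samples until the error tolerance or the epoch limit is reached.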
V. EXPERIMENTAL RESULTS
          The LANDSAT image of Brazil is used for the experiments. The original number of bands in the image was seven, which was reduced to three using PCA. The image after band selection is shown below:

Figure 3: Selected band image

          The k-means method was then applied to the above image to perform pixel-by-pixel classification. The resulting classified image is shown below:

Figure 4: K-means classified image

          In the above image, each color corresponds to a distinct class from Fig. 1. The classification accuracy of the k-means approach was found to be 67.75%.
          Textural features such as mean, variance, contrast, dissimilarity, entropy, homogeneity, and standard deviation are extracted from the band-selected image using GLCM. The spatial features described in section III are also extracted from the band-selected image. The selected features, along with the training data, are fed into all supervised classifiers. The training data and the pixel distribution formed using the available ground data are given in Fig. 5 and Table 1.

Table 1: Pixel distribution of image
Class No.   Total Pixels   Training Pixels   Testing Pixels
0           4373           2624              1749
1           8375           5025              3350
2           7679           4608              3071
3           251            151               100
4           1756           1054              702
5           42             26                16
8           6085           3651              2636

          In k-NN (k-nearest neighbor), k is the number of nearest neighbors considered, and the metric used is Euclidean distance. Tables 2 and 3 give the classification accuracies for different values of k, for textural with spectral features and for spatial with spectral features respectively.

Table 2: Classification accuracies for different k values in k-NN using textural with spectral features
k    Classification Accuracy
1    66.923%
3    71.794%
5    74.358%

Table 3: Classification accuracies for different k values in k-NN using spatial with spectral features
k    Classification Accuracy
1    76.67%
3    74.35%
5    78.12%

          The neural network model used in these experiments consists of three layers: one input layer, one hidden layer, and one output layer. For textural with spectral features, the numbers of neurons in the input, hidden, and output layers are seven, five, and seven respectively; for spatial with spectral features they are four, three, and seven respectively. Initially, the weights of the neurons are chosen randomly. The error tolerance is set to 0.001 and the maximum number of epochs to 1000. After setting all parameters of the model, the features are given as input to both classifiers. The images classified by BPNN and PCNN are shown in Fig. 6 and Fig. 7, and the resulting classification accuracies are given in Table 4.

Figure 5: Training Data
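The PCA band reduction used above (seven ETM+ bands to three) can be sketched with plain numpy; this is an illustrative implementation via SVD of the centered pixel-by-band matrix, not the paper's code:

```python
import numpy as np

def pca_reduce(cube, n_components=3):
    """Reduce a (rows, cols, bands) image cube to n_components bands
    by projecting each pixel's spectrum onto the top principal axes."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)                        # center each band
    # right singular vectors = principal axes, largest variance first
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:n_components].T).reshape(rows, cols, n_components)

rng = np.random.default_rng(2)
cube = rng.random((169, 169, 7))   # stand-in for the 7-band ETM+ image
reduced = pca_reduce(cube)         # three principal-component bands
```

The first component carries the most variance, so the three retained bands preserve most of the spectral information of the original seven.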

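The k-NN classification with Euclidean distance described above amounts to a majority vote among the k closest training samples. A self-contained sketch on a toy two-class feature space (the data here is illustrative, not the paper's):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k):
    """Classify feature vector x by majority vote among the k
    training samples nearest in Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return Counter(nearest).most_common(1)[0][0]

def accuracy(train_X, train_y, test_X, test_y, k):
    preds = [knn_predict(train_X, train_y, x, k) for x in test_X]
    return float(np.mean(np.array(preds) == test_y))

# toy feature vectors standing in for the selected image features
train_X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
train_y = np.array([0, 0, 1, 1])
test_X = np.array([[0.2, 0.3], [5.1, 5.2]])
test_y = np.array([0, 1])
acc = accuracy(train_X, train_y, test_X, test_y, k=3)
```

Varying k, as in Tables 2 and 3, simply changes how many neighbors vote.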

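The PCNN behavior described in section 3.4, where each neuron receives its pixel's intensity as an external stimulus plus local stimuli from neighbors, can be sketched with a standard simplified PCNN iteration. This is a generic textbook-style model with illustrative parameters, not the paper's exact network:

```python
import numpy as np

def neighbor_sum(Y):
    """Sum of each neuron's 8-neighborhood firing map (zero padding)."""
    P = np.pad(Y, 1)
    return sum(np.roll(np.roll(P, di, 0), dj, 1)
               for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0))[1:-1, 1:-1]

def pcnn(S, steps=10, beta=0.2, a_theta=0.3, V_theta=5.0):
    """Simplified PCNN: pixel intensity S is the external stimulus;
    linking input comes from neighbors' pulses in the previous step."""
    theta = np.ones_like(S)             # dynamic firing threshold
    Y = np.zeros_like(S)                # pulse (firing) map
    fired = np.zeros_like(S)
    for _ in range(steps):
        L = neighbor_sum(Y)             # local stimuli from neighbors
        U = S * (1.0 + beta * L)        # internal activity
        Y = (U > theta).astype(float)   # pulse where activity beats threshold
        theta = np.exp(-a_theta) * theta + V_theta * Y
        fired = np.maximum(fired, Y)
    return fired

# toy intensity image: a bright block next to a dim background
S = np.array([[0.9, 0.9, 0.1],
              [0.9, 0.9, 0.1],
              [0.1, 0.1, 0.1]])
fired = pcnn(S, steps=3)   # after 3 steps only the bright block has pulsed
```

Neurons over similar intensities tend to pulse in the same iterations, which is what makes the firing maps usable as class evidence.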
Table 4: Classification accuracies for all methods with both feature sets
Feature Set           Method     Accuracy
Textural + Spectral   k-means    67.75%
                      k-NN       69.14%
                      BPNN       79.47%
                      PCNN       70.09%
Spatial + Spectral    k-means    67.75%
                      k-NN       72.45%
                      BPNN       87.12%
                      PCNN       82.34%

VI. CONCLUSION
          Previous work showed that feature-based classification overcomes the drawbacks of the pixel-based classification approach. A detailed study of the image shows that information is spread across all spectral bands. Experiments were conducted first with textural and spectral features together, and then with spatial and spectral features. Giving these feature sets as input to the different classifiers, we compared the results. According to Table 4, feature-based supervised classification gave better results than unsupervised classification. Among the supervised techniques, BPNN gives the highest accuracies for both textural and spatial features combined with spectral features.

Figure 6: Classified image by BPNN giving 87.12%

Figure 7: Classified image by PCNN giving 82.34%

REFERENCES
[1]   M. C. Alonso and J. A. Malpica, "Satellite imagery classification with LiDAR data", International Archives of the Photogrammetry, Remote Sensing and Spatial Information Science, Vol. XXXVIII, Part 8, Kyoto, Japan, 2010.
[2]   Jiefeng Jiang, Jing Zhang, Gege Yang, Dapeng Zhang, and Lianjun Zhang, "Application of Back Propagation Neural Network in the Classification of High Resolution Remote Sensing Image", supported by the geographic information technology research, 2009.
[3]   Haihui Wang, Junhua Zhang, Kai Xiang, and Yang Liu, "Classification of Remote Sensing Agricultural Image by Using Artificial Neural Network", in Proceedings of IEEE, 2009.
[4]   Xin Huang, Liangpei Zhang, and Pingxiang Li, "Classification and Extraction of Spatial Features in Urban Areas Using High-Resolution Multispectral Imagery", IEEE Geoscience and Remote Sensing Letters, Vol. 4, No. 2, April 2007.
[5]   Liangpei Zhang, Xin Huang, Bo Huang, and Pingxiang Li, "A Pixel Shape Index Coupled With Spectral Information for Classification of High Spatial Resolution Remotely Sensed Imagery", IEEE Transactions on Geoscience and Remote Sensing, Vol. 44, No. 10, October 2006.
[6]   Pabitra Mitra, "Unsupervised Feature Selection Using Feature Similarity", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 3, March 2002.
[7]   Mryka Hall-Beyer, "The GLCM Tutorial".
[8]   Ashwini T. Sapkal, Chandraprakash Bokhare, and N. Z. Tarapore, "Satellite Image Classification Using the Back Propagation Algorithm of Artificial Neural Network", in Proceedings of IEEE on Geoscience and Remote Sensing, Vol. 46, September 2001.
[9]   H. Bischof, W. Schneider, and A. J. Pinz, "Multispectral Classification of Landsat Images Using Neural Networks", IEEE Transactions on Geoscience and Remote Sensing, Vol. 30, No. 3, May 1992.
[10]  Robert M. Haralick and K. Shanmugam, "Textural Features for Image Classification", IEEE Transactions on Systems, Man, and Cybernetics, November 1973.
[11]  Christos Stergiou and Dimitrios Siganos, "Neural Networks".

SAR Image Classification by Multilayer Back Propagation Neural NetworkSAR Image Classification by Multilayer Back Propagation Neural Network
SAR Image Classification by Multilayer Back Propagation Neural Network
 
C017121219
C017121219C017121219
C017121219
 
Segmentation and Classification of MRI Brain Tumor
Segmentation and Classification of MRI Brain TumorSegmentation and Classification of MRI Brain Tumor
Segmentation and Classification of MRI Brain Tumor
 
Feature extraction based retrieval of
Feature extraction based retrieval ofFeature extraction based retrieval of
Feature extraction based retrieval of
 
Q UANTUM C LUSTERING -B ASED F EATURE SUBSET S ELECTION FOR MAMMOGRAPHIC I...
Q UANTUM  C LUSTERING -B ASED  F EATURE SUBSET  S ELECTION FOR MAMMOGRAPHIC I...Q UANTUM  C LUSTERING -B ASED  F EATURE SUBSET  S ELECTION FOR MAMMOGRAPHIC I...
Q UANTUM C LUSTERING -B ASED F EATURE SUBSET S ELECTION FOR MAMMOGRAPHIC I...
 
An Analysis and Comparison of Quality Index Using Clustering Techniques for S...
An Analysis and Comparison of Quality Index Using Clustering Techniques for S...An Analysis and Comparison of Quality Index Using Clustering Techniques for S...
An Analysis and Comparison of Quality Index Using Clustering Techniques for S...
 
phase 2 ppt.pptx
phase 2 ppt.pptxphase 2 ppt.pptx
phase 2 ppt.pptx
 
Detection of leaf diseases and classification using digital image processing
Detection of leaf diseases and classification using digital image processingDetection of leaf diseases and classification using digital image processing
Detection of leaf diseases and classification using digital image processing
 
Ay33292297
Ay33292297Ay33292297
Ay33292297
 
Ay33292297
Ay33292297Ay33292297
Ay33292297
 
SEGMENTATION USING ‘NEW’ TEXTURE FEATURE
SEGMENTATION USING ‘NEW’ TEXTURE FEATURESEGMENTATION USING ‘NEW’ TEXTURE FEATURE
SEGMENTATION USING ‘NEW’ TEXTURE FEATURE
 
HYPERSPECTRAL IMAGERY CLASSIFICATION USING TECHNOLOGIES OF COMPUTATIONAL INTE...
HYPERSPECTRAL IMAGERY CLASSIFICATION USING TECHNOLOGIES OF COMPUTATIONAL INTE...HYPERSPECTRAL IMAGERY CLASSIFICATION USING TECHNOLOGIES OF COMPUTATIONAL INTE...
HYPERSPECTRAL IMAGERY CLASSIFICATION USING TECHNOLOGIES OF COMPUTATIONAL INTE...
 
Application of Image Retrieval Techniques to Understand Evolving Weather
Application of Image Retrieval Techniques to Understand Evolving WeatherApplication of Image Retrieval Techniques to Understand Evolving Weather
Application of Image Retrieval Techniques to Understand Evolving Weather
 
Fd36957962
Fd36957962Fd36957962
Fd36957962
 
IRJET- Remote Sensing Image Retrieval using Convolutional Neural Network with...
IRJET- Remote Sensing Image Retrieval using Convolutional Neural Network with...IRJET- Remote Sensing Image Retrieval using Convolutional Neural Network with...
IRJET- Remote Sensing Image Retrieval using Convolutional Neural Network with...
 
Mn3621372142
Mn3621372142Mn3621372142
Mn3621372142
 
IRJET-Multimodal Image Classification through Band and K-Means Clustering
IRJET-Multimodal Image Classification through Band and K-Means ClusteringIRJET-Multimodal Image Classification through Band and K-Means Clustering
IRJET-Multimodal Image Classification through Band and K-Means Clustering
 
A Review on Color Recognition using Deep Learning and Different Image Segment...
A Review on Color Recognition using Deep Learning and Different Image Segment...A Review on Color Recognition using Deep Learning and Different Image Segment...
A Review on Color Recognition using Deep Learning and Different Image Segment...
 

J25043046

Remotely Sensed LANDSAT Image Classification Using Neural Network Approaches

Smriti Sehgal
Department of Computer Science & Engineering, Amity University, Noida, India

ABSTRACT
In this paper, a LANDSAT multispectral image is classified using several unsupervised and supervised techniques. Pixel-by-pixel classification proved infeasible and time consuming for multispectral images, so instead of classifying each pixel, a feature-based classification approach is used. Three supervised techniques, k-NN, BPNN, and PCNN, are investigated for classification using textural, spatial, and spectral features. Experiments show that supervised approaches perform better than unsupervised ones. k-NN, BPNN, and PCNN are compared using these features, and the classification accuracy of BPNN is found to be higher than that of k-NN and PCNN.

Keywords - Classification, multispectral image, neural network, spatial feature, textural feature.

I. INTRODUCTION
A remotely sensed satellite image with multispectral bands provides a large amount of data about the ground area of interest, and image sizes keep growing with advances in the sensors that capture them. Processing such images poses challenges such as choosing appropriate bands, extracting only relevant features, and classification, due to spectral and spatial homogeneity. Using only the spectral information of a multispectral image is not sufficient for classification; combining either textural or spatial information with the spectral information increases the classification accuracy of the system. Various spatial features such as length, width, PSI, and ratio are extracted from the image using the concept of direction lines [5]. The gray-level co-occurrence matrix (GLCM) [7] is widely used and successful to some extent, and it is used here to extract textural features from the LANDSAT image. In [6], a new feature similarity measure is introduced, the maximum information compression index (MICI). This measure is based on linear dependency among variables; its benefit is that if one of two linearly dependent features is removed, the data remain linearly separable. After extracting and selecting appropriate features from the image, these features are given as input to classifiers for further processing. Several classification approaches such as k-means [8], SVM [4], BPNN [2][8], and PCNN are used. In [9], neural network techniques are presented and compared for remotely sensed multispectral images.

In this paper, a LANDSAT ETM+ multispectral image of Brazil is used for the experiments and result analysis. First, the multispectral image is loaded and reduced using the PCA technique, which decreased the number of bands from seven to three. Second, textural and spatial features are extracted using GLCM and PSI, and a structural feature set [4] is constructed from them. Third, relevant features are selected using the S-Index [6] approach. Fourth, both feature sets, together with the spectral features, are given as input to different classifiers. Classification accuracy is calculated and then compared across the classifiers. This paper is organized as follows: Section II gives the detailed specifications of the image and its reference data, Section III describes the features extracted, the feature selection approach, and the classification techniques, Section IV gives the design flow, Section V shows the experimental results, and Section VI concludes the paper.

II. DATA ACQUISITION
The LANDSAT ETM+ image used consists of 7 spectral bands with a spatial resolution of 30 meters for all bands. The scene, acquired on 20 January 2000, has low sensor distortion, a spectral resolution of 0.1 micrometer, and dimensions of 169 × 169 pixels in TIFF format. Ground truth data was available with the image as a reference for processing.

Figure 1: Ground Truth Data

Using the available ground data, seven classes are known in the image. The search program begins by examining blocks of pixels, looking for spectrally homogeneous blocks. The data samples created this way total 122; of these, approximately 74 were used as training samples and the rest were kept as testing samples.
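The PCA band-reduction step described above (seven ETM+ bands down to three) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the small synthetic cube stands in for the 169 × 169 LANDSAT scene.

```python
import numpy as np

def pca_band_reduction(cube, n_components=3):
    """Reduce the spectral dimension of an (H, W, B) image cube via PCA."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(float)   # one row per pixel
    pixels -= pixels.mean(axis=0)                # center each band
    cov = np.cov(pixels, rowvar=False)           # B x B band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]     # top principal axes
    return (pixels @ top).reshape(h, w, n_components)

# Synthetic 7-band cube standing in for the LANDSAT ETM+ scene
rng = np.random.default_rng(0)
cube = rng.normal(size=(16, 16, 7))
reduced = pca_band_reduction(cube)
```

The first three principal components then serve as the "band-selected" image that the textural and spatial feature extractors operate on.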
III. METHODOLOGY

3.1 Textural Features
GLCM [7] is used to extract textural features from the image. The GLCM, also known as the gray-level spatial dependence matrix, is a statistical method that considers the spatial relationship of pixels. By default, the spatial relationship is defined as the pixel of interest and the pixel to its immediate right (horizontally adjacent), but other spatial relationships between the two pixels can also be used. Each element (I, J) of the resulting GLCM is simply the number of times that a pixel with value I occurred in the specified spatial relationship to a pixel with value J in the input image. The following GLCM features were extracted: contrast, dissimilarity, entropy, homogeneity, GLCM mean, GLCM variance, and GLCM standard deviation.

3.2 Spatial Features
Shape and structural features are extracted using PSI [5] and SFS [4]. The Pixel Shape Index method uses the concept of direction lines for the shape feature. In [5], the direction lines of each pixel are determined using the following rules: 1) the pixel homogeneity of the ith pixel is less than a predefined threshold T1; 2) the total number of pixels in the direction line is less than another predefined threshold T2. Theoretically, T1 is set to between 2.5 and 4.0 times the average standard deviation of the Euclidean distance of the training pixel data from the class means [4], and T2 is set to 0.5 to 0.6 times the number of rows or columns of the image [4]. The length of each direction line is found and the PSI is calculated as in [5]. Statistical measures are then employed to reduce the dimensionality and extract features from the direction-line histogram: length, width, ratio, and standard deviation. All spatial features together form the Structure Feature Set (SFS).

3.3 Feature Selection
Feature selection is the process of removing features from the data set that are irrelevant to the task to be performed. It involves two steps: partitioning the original feature set into a number of homogeneous subsets (clusters), and selecting a representative feature from each such cluster. Partitioning of the features is done on the k-NN principle using MICI [6]: the k nearest features of each feature are computed first; among them, the feature having the most compact subset is selected and its k neighboring features are discarded. The process is repeated for the remaining features until all of them are either selected or discarded. After all features are clustered, one representative feature is chosen from each cluster; the mean of all features in a cluster represents that cluster.

3.4 Classification
The classification step assigns each block or pixel to one of the known classes. Back propagation is probably the most common method for training feed-forward neural networks. A forward pass propagates an input pattern through the network and produces an actual output; the backward pass uses the desired output corresponding to that input pattern and updates the weights according to the error signal. A PCNN is a two-dimensional neural network in which each neuron corresponds to one pixel of the input image, receiving its pixel's color information (e.g. intensity) as an external stimulus. Each neuron also connects with its neighboring neurons, receiving local stimuli from them.

IV. DESIGN FLOW
Fig. 2 shows the steps used to process the multispectral LANDSAT image.

Figure 2: Classification flow of MSI

Noise removal and dimension reduction are performed in the preprocessing step. K-means, an unsupervised technique that needs no prior knowledge about the image, is applied first. Textural, spatial, and spectral features are then computed and feature selection is performed. Several classification approaches, unsupervised and supervised, are carried out; the supervised techniques k-nearest neighbor, BPNN, and PCNN use training data fed in as external input to support the classification step. Classification accuracy is calculated and compared for all the classifiers.
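The GLCM construction of Section 3.1 can be sketched as below: a minimal numpy version assuming an already quantized gray-level image, using the default "pixel to its immediate right" relationship and a few of the measures listed above. It is an illustration of the technique, not the paper's exact code.

```python
import numpy as np

def glcm_features(img, levels):
    """GLCM for the horizontally adjacent pixel pair, plus texture measures."""
    glcm = np.zeros((levels, levels))
    # count co-occurrences of (left pixel value I, right pixel value J)
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()                   # normalize counts to probabilities
    ii, jj = np.indices((levels, levels))   # gray-level index grids
    mean = (ii * p).sum()
    return {
        "contrast":      ((ii - jj) ** 2 * p).sum(),
        "dissimilarity": (np.abs(ii - jj) * p).sum(),
        "homogeneity":   (p / (1.0 + (ii - jj) ** 2)).sum(),
        "entropy":       -(p[p > 0] * np.log(p[p > 0])).sum(),
        "glcm_mean":     mean,
        "glcm_variance": ((ii - mean) ** 2 * p).sum(),
    }

# Tiny 4-level test image (values already quantized to 0..3)
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
feats = glcm_features(img, levels=4)
```

In practice each pixel (or block) would get such a feature vector computed over a local window, and the GLCM standard deviation is simply the square root of the variance above.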
V. EXPERIMENTAL RESULTS
A LANDSAT image of Brazil is used for the experiments. The original number of bands was seven, reduced to three using PCA. The image after band selection is shown below:

Figure 3: Selected band image

The k-means method was then applied to this image to perform pixel-by-pixel classification. The resulting classified image is:

Figure 4: K-means classified image

In the image above, each color corresponds to a distinct class from Fig. 1. The classification accuracy of the k-means approach was found to be 67.75%.

Textural features such as mean, variance, contrast, dissimilarity, entropy, homogeneity, and standard deviation were extracted from the band-selected image using GLCM. The spatial features described in Section III were also extracted from the band-selected image. The selected features, along with the training data, were fed into all supervised classifiers. The training data and pixel distribution formed from the available ground data are given in Fig. 5 and Table 1.

Figure 5: Training Data

Table 1: Pixel distribution of image

  Class No.   Total Pixels   Training Pixels   Testing Pixels
  0           4373           2624              1749
  1           8375           5025              3350
  2           7679           4608              3071
  3           251            151               100
  4           1756           1054              702
  5           42             26                16
  8           6085           3651              2636

In k-NN (k-nearest neighbor), k is the number of nearest neighbors considered, and the Euclidean distance is the metric used. Tables 2 and 3 give the classification accuracies for different values of k, using textural with spectral features and spatial with spectral features respectively.

Table 2: Classification accuracies for different k values in k-NN using textural with spectral features

  k   Classification Accuracy
  1   66.923%
  3   71.794%
  5   74.358%

Table 3: Classification accuracies for different k values in k-NN using spatial with spectral features

  k   Classification Accuracy
  1   76.67%
  3   74.35%
  5   78.12%

The neural network model used in these experiments consists of three layers: one input, one hidden, and one output layer. For textural with spectral features, the numbers of neurons in the input, hidden, and output layers are seven, five, and seven respectively; for spatial with spectral features they are four, three, and seven. The weights are initialized randomly, the error tolerance is set to 0.001, and the maximum number of epochs is 1000. After setting all parameters of the model, the features are given as input to both classifiers. The images classified by BPNN and PCNN are shown in Fig. 6 and Fig. 7, and the classification accuracies are as follows:
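The three-layer back-propagation network described above can be sketched as follows, using the 7-5-7 layout of the textural-with-spectral case. The learning rate and weight scale are assumptions for illustration; the paper reports only the error tolerance and the epoch limit.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPNN:
    """Minimal three-layer feed-forward net trained by back propagation.
    The learning rate (lr) is an assumption, not a value from the paper."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.w1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.w2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.w1)          # hidden activations
        return sigmoid(self.h @ self.w2)       # actual output

    def train_step(self, x, target):
        out = self.forward(x)
        err = target - out                     # error signal
        d_out = err * out * (1 - out)          # output-layer delta
        d_hid = (d_out @ self.w2.T) * self.h * (1 - self.h)
        self.w2 += self.lr * np.outer(self.h, d_out)   # backward-pass updates
        self.w1 += self.lr * np.outer(x, d_hid)
        return 0.5 * (err ** 2).sum()          # squared error for this pattern

# 7 input features, 5 hidden units, 7 output classes, as in the experiments
net = BPNN(7, 5, 7)
x = rng.random(7)                              # one hypothetical feature vector
t = np.zeros(7); t[2] = 1.0                    # one-hot label for class 2
errors = [net.train_step(x, t) for _ in range(1000)]
```

Training would stop once the error falls below the 0.001 tolerance or the 1000-epoch limit is reached; here a single sample is fitted purely to show the forward/backward passes.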
Table 4: Classification accuracies for all methods with both feature sets

  Feature Set           Method    Accuracy
  Textural + Spectral   k-means   67.75%
                        k-NN      69.14%
                        BPNN      79.47%
                        PCNN      70.09%
  Spatial + Spectral    k-means   67.75%
                        k-NN      72.45%
                        BPNN      87.12%
                        PCNN      82.34%

Figure 6: Classified image by BPNN giving 87.12%

Figure 7: Classified image by PCNN giving 82.34%

VI. CONCLUSION
Previous work showed that feature-based classification overcomes the drawbacks of the pixel-based approach. A detailed study of the image shows that information is spread across all spectral bands. Experiments were conducted first with textural and spectral features together, and then with spatial and spectral features, giving each feature set as input to the different classifiers and comparing the results. According to Table 4, feature-based supervised classification gave better results than the unsupervised approach. Among the supervised techniques, BPNN gives the highest accuracies for both textural and spatial features combined with spectral features.

REFERENCES
[1] M. C. Alonso, J. A. Malpica, "Satellite Imagery Classification with LIDAR Data", International Archives of the Photogrammetry, Remote Sensing and Spatial Information Science, Volume XXXVIII, Part 8, Kyoto, Japan, 2010.
[2] Jiefeng Jiang, Jing Zhang, Gege Yang, Dapeng Zhang, Lianjun Zhang, "Application of Back Propagation Neural Network in the Classification of High Resolution Remote Sensing Image", supported by the geographic information technology research, 2009.
[3] Haihui Wang, Junhua Zhang, Kai Xiang, Yang Liu, "Classification of Remote Sensing Agricultural Image by Using Artificial Neural Network", in Proceedings of IEEE, 2009.
[4] Xin Huang, Liangpei Zhang, Pingxiang Li, "Classification and Extraction of Spatial Features in Urban Areas Using High-Resolution Multispectral Imagery", IEEE Geoscience and Remote Sensing Letters, Vol. 4, No. 2, April 2007.
[5] Liangpei Zhang, Xin Huang, Bo Huang, Pingxiang Li, "A Pixel Shape Index Coupled With Spectral Information for Classification of High Spatial Resolution Remotely Sensed Imagery", IEEE Transactions on Geoscience and Remote Sensing, Vol. 44, No. 10, October 2006.
[6] Pabitra Mitra, "Unsupervised Feature Selection Using Feature Similarity", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 3, March 2002.
[7] Mryka Hall-Beyer, "The GLCM Tutorial".
[8] Ashwini T. Sapkal, Chandraprakash Bokhare, N. Z. Tarapore, "Satellite Image Classification Using the Back Propagation Algorithm of Artificial Neural Network", in Proceedings of IEEE on Geoscience and Remote Sensing, Vol. 46, September 2001.
[9] H. Bischof, W. Schneider, A. J. Pinz, "Multispectral Classification of Landsat-Images Using Neural Networks", IEEE Transactions on Geoscience and Remote Sensing, Vol. 30, No. 3, May 1992.
[10] Robert M. Haralick, K. Shanmugam, "Textural Features for Image Classification", IEEE Transactions on Systems, Man, and Cybernetics, November 1973.
[11] Christos Stergiou and Dimitrios Siganos, "Neural Networks".