Gitanjali S. Korgaonkar, Dr. R. R. Sedamkar / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 5, September-October 2012, pp. 2177-2184
                         Hyperspectral Image Classification

                      Gitanjali S. Korgaonkar, Dr. R. R. Sedamkar
Gitanjali S. Korgaonkar is currently pursuing a master's degree in electronics & telecommunication engineering at TCET, Kandivali, Mumbai University, India.
Dr. R. R. Sedamkar received his Ph.D. degree in 201 and is currently Dean-Academics, Professor and Head of the Computer Department at Thakur College of Engineering and Technology, Mumbai.


Abstract
This paper proposes weighted decision fusion for classification of a sample hyperspectral image, which has two characteristics: 1) spatial and 2) spectral. Spatial refers to the information within a single band, while spectral refers to the narrow spectral bands covering a continuous spectral range. A weighted majority voting rule is introduced, which is likely to enhance performance. In this scheme, pixels close to the spectral centroid of a segment are assigned higher weights for voting because they are the best representatives of the segment, while pixels that deviate from the centroid are poor representatives and receive lower weights. Experimental results for SAMSON (Spectroscopic Aerial Mapping System with Onboard Navigation) data show that the proposed scheme achieves better Overall Accuracy, Kappa Coefficient and Average Accuracy than the majority voting rule.

Index Terms— Hyperspectral image classification, Supervised classification, Unsupervised classification, Weighted majority voting rule.

Introduction
Hyperspectral images contain a wealth of data, but interpreting them requires an understanding of exactly what properties of ground materials we are trying to measure, and how they relate to the measurements actually made by the hyperspectral sensor [1].
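To make the spatial/spectral structure of such data concrete, the short sketch below (Python with NumPy, used here purely as an illustration; the paper's own experiments are run in MATLAB) shows how a hyperspectral cube can be held as a rows x columns x bands array, how one band gives a spatial image, and how one pixel gives a spectrum. The array shape and the synthetic values are assumptions for illustration only.

    import numpy as np

    # Assumed layout: (rows, cols, bands). The SAMSON scene described later in the
    # paper is 952 x 952 pixels with 156 bands; a smaller cube is used here.
    rows, cols, bands = 100, 100, 156
    cube = np.random.rand(rows, cols, bands).astype(np.float32)  # placeholder data

    # Spatial view: a single narrow band is an ordinary grayscale image.
    band_75 = cube[:, :, 74]          # shape (100, 100)

    # Spectral view: a single pixel is a vector of values sampled over a
    # continuous spectral range.
    spectrum = cube[10, 20, :]        # shape (156,)

    print(band_75.shape, spectrum.shape)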
Figure 1. The hyperspectral imagery.

1.1 The Imaging Spectrometer
Hyperspectral images are produced by instruments called imaging spectrometers. The development of these complex sensors has involved the convergence of two related but distinct technologies: spectroscopy and the remote imaging of Earth and planetary surfaces. Spectroscopy is the study of light that is emitted by or reflected from materials and its variation in energy with wavelength. As applied to the field of optical remote sensing, spectroscopy deals with the spectrum of sunlight that is diffusely reflected (scattered) by materials at the Earth's surface.

Instruments called spectrometers (or spectroradiometers) are used to make ground-based or laboratory measurements of the light reflected from a test material. An optical dispersing element such as a grating or prism in the spectrometer splits this light into many narrow, adjacent wavelength bands, and the energy in each band is measured by a separate detector. By using hundreds or even thousands of detectors, spectrometers can make spectral measurements of bands as narrow as 0.01 micrometers over a wide wavelength range, typically at least 0.4 to 2.4 micrometers (visible through middle infrared wavelength ranges).

1.2 Characteristics of Electromagnetic Radiation
Hyperspectral imagery involves the sensing of electromagnetic radiation. The electromagnetic spectrum that is used in remote sensing includes the ultraviolet (UV), visible, infrared, and microwave portions of the spectrum.

Figure 2 illustrates the wavelength regions of the electromagnetic spectrum. The spectral portions of near IR and short wave infrared (0.7-3.0 μm) are called the reflective infrared because measured radiation in this spectral region is mostly composed of reflected sunlight. In contrast, the IR spectrum from 5.0 to 13.0 μm is termed thermal infrared because measurements in this spectral region generally record energy radiated from scene elements [2].




Figure 2. Wavelength regions of the electromagnetic spectrum.

Based on these wavelength regions, remote sensing can be classified into three main categories: visible and reflective infrared remote sensing, thermal remote sensing, and microwave remote sensing. In visible and reflective remote sensing, the radiation that is measured has a solar origin, radiated with a peak wavelength of 0.5 μm. In thermal remote sensing, the measured radiation comes from the observed objects. Materials with normal temperatures (~300 K) emit radiation with a peak wavelength of 10.0 μm. Finally, in microwave remote sensing, observations are generally due to reflection of energy radiated by the observing platform (i.e. radar). In this paper, hyperspectral images from the visible and reflective infrared spectrum are used. The majority of currently available sensors with material identification (ID) capability utilize this portion of the spectrum.

Typically the source of energy in hyperspectral imagery is the sun. The incident energy from the sun that is not absorbed or scattered in the atmosphere interacts with the earth's surface, where it is absorbed, transmitted, or reflected. Additionally, the electromagnetic radiation has specific properties that are predictable in its interaction with materials and transmission through the atmosphere. Therefore, by measuring the electromagnetic radiation in narrow wavelength bands, the resulting spectral signatures can be used, in principle, to uniquely characterize and identify any given material.

All materials have unique spectral characteristics because they absorb, reflect, and emit radiation in a unique way. For example, in the visible portion of the spectrum, a leaf appears green because it absorbs in the blue and red regions of the spectrum and reflects in the green region. These variations in absorption, reflection, and emission are due to the material composition. Differences in spectral response due to absorption, transmission, and reflection cause materials to have a unique spectral signature. Figure 3 illustrates the spectral signatures of three pixels, respectively dominated by seawater, vegetation, and concrete. Comparing the spectra between seawater and vegetation, it is observed that they reflect similarly in the visible wavelengths but differently in the infrared portion. Also, concrete has a different spectral signature compared to the other two in specific wavelength regions. Therefore, by knowing a material's spectral signature, it is possible for this material to be detected in pixels within an image. Libraries with the characteristic spectra of various materials of interest exist, and these spectral signatures can be compared with measured spectra in order to identify the features in an image. In the early work on spectral imagery, computational limits prevented full exploitation of such data. Computational power in the latter portion of the 1990s made routine use of spectral imagery much more practical.

Figure 3. Hyperspectral pixel spectra (radiance versus wavelength in nm).

Identifying groups of pixels that have similar spectral characteristics and determining the various features or land cover classes represented by these groups is an important part of image analysis. This form of analysis is known as classification. Visual classification relies on the analyst's ability to use visual elements (tone, contrast, shape, etc.) to classify an image. Digital image classification is based on the spectral information used to create the image and classifies each individual pixel based on its spectral characteristics. The result of a classification is that all pixels in an image are assigned to particular classes or themes (e.g. water, coniferous forest, deciduous forest, corn, wheat, etc.), resulting in a classified image that is essentially a thematic map of the original image. The theme of the classification is selectable, thus a classification can be performed to observe land use patterns, geology, vegetation types, or rainfall.

In classifying an image we must distinguish between spectral classes and information classes. Spectral classes are groups of pixels that have nearly uniform spectral characteristics. Information classes are the various themes or groups we are attempting to identify in an image. Information classes may include such classes as deciduous and coniferous forests, various crop types, or inland bodies of water. The objective of image classification is to match the spectral classes in the data to the information classes of interest.
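As a hedged illustration of matching a measured spectrum against a library of characteristic spectra, the sketch below uses the spectral angle (the angle between two spectra viewed as vectors), one common similarity measure; the library entries and the measured spectrum here are placeholders, not data from the paper.

    import numpy as np

    def spectral_angle(a, b):
        """Angle (radians) between two spectra; smaller means more similar."""
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    # Hypothetical library of characteristic spectra (one entry per material).
    library = {
        "seawater":   np.array([0.02, 0.03, 0.04, 0.01, 0.01]),
        "vegetation": np.array([0.03, 0.08, 0.05, 0.40, 0.45]),
        "concrete":   np.array([0.20, 0.25, 0.28, 0.30, 0.32]),
    }

    measured = np.array([0.03, 0.07, 0.06, 0.38, 0.44])  # spectrum of one image pixel

    # Identify the pixel as the library material with the smallest spectral angle.
    best = min(library, key=lambda name: spectral_angle(measured, library[name]))
    print(best)  # -> "vegetation" for this synthetic example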



Different conditions such as sun illumination, snow cover, other atmospheric conditions, crop growth in a particular season, etc. affect the classification process.

2. TYPES OF CLASSIFICATION
Image classification is perhaps the most important part of digital image analysis. It is very nice to have a "pretty picture" or an image showing a multitude of colors illustrating various features of the underlying terrain, but it is quite useless unless we know what the colors mean. Image classification, or the 'partition of the feature space', can be done in two ways: supervised and unsupervised classification. In supervised classification, the operator defines his classes of interest by selecting the training samples in an image on the basis of his/her knowledge of the area. In unsupervised classification, the image is partitioned into homogeneous spectral clusters using some clustering algorithm. These spectral clusters are obtained on the basis of some spectral similarities. In the classification application, we need to preserve the features that are useful for discrimination among classes.

2.1 Supervised Classification
With supervised classification, we identify examples of the information classes (i.e., land cover types) of interest in the image. These are called "training sites". The image processing software system is then used to develop a statistical characterization of the reflectance for each information class. This stage is often called "signature analysis" and may involve developing a characterization as simple as the mean or the range of reflectance in each band, or as complex as detailed analyses of the means, variances and covariances over all bands. Once a statistical characterization has been achieved for each information class, the image is then classified by examining the reflectance for each pixel and making a decision about which of the signatures it resembles most [3].

Figure 4. Supervised classification.

2.1.1 Support Vector Machine
One of the best algorithms used for supervised classification is the Support Vector Machine (SVM). A support vector machine is a concept in statistics and computer science for a set of related supervised learning methods that analyze data and recognize patterns, used for classification and regression analysis. The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes the input belongs to, making the SVM a non-probabilistic binary linear classifier. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on. SVMs characterize classes using a geometrical criterion rather than statistical criteria. They seek a separating hyperplane maximizing the distance to the closest training samples of the two classes. This approach allows SVMs to have a very high capability of generalization and, as a consequence, to require only a few training samples. For non-linearly separable data, SVMs use the kernel trick to map the data onto a higher dimensional space where they are linearly separable [4].

Here, we consider multiclass SVMs without any feature reduction of the original hyperspectral data. The standard Gaussian radial basis kernel with L2-norm distance is used. This algorithm was proved to provide interesting classification accuracy, even in the case of a limited training set [5]. Note that other kernels could be considered, such as the spectral angle mapper, which basically computes the angle between two vectors in the vector space [5][6][7]. In the following, we present the classifier used, and start by briefly recalling the general mathematical formulation of SVMs. Starting from the linearly separable case, optimal hyperplanes are introduced, then the classification problem is modified to handle non-linearly separable data and a brief description of multiclass strategies is given. Finally, kernel methods are presented.
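A minimal sketch of this setup, a multiclass SVM with a Gaussian RBF kernel applied directly to pixel spectra, is given below using scikit-learn (the paper's experiments are run in MATLAB; the training arrays, C and gamma values are placeholders, and scikit-learn's built-in one-vs-one multiclass scheme is an assumption, not necessarily the paper's strategy).

    import numpy as np
    from sklearn.svm import SVC

    # X_train: one row per labeled training pixel, one column per spectral band.
    # y_train: integer class labels. These arrays are placeholders; in practice
    # they come from the training sites selected by the analyst.
    X_train = np.random.rand(300, 156)
    y_train = np.random.randint(0, 3, size=300)

    # Gaussian RBF kernel on the full spectra (no feature reduction).
    svm = SVC(kernel="rbf", gamma="scale", C=10.0)
    svm.fit(X_train, y_train)

    # Classify every pixel of a (rows, cols, bands) cube by flattening it to a
    # (rows*cols, bands) matrix of spectra and reshaping the labels back.
    cube = np.random.rand(50, 50, 156)
    labels = svm.predict(cube.reshape(-1, 156)).reshape(50, 50)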




Linear SVM
For a two-class problem in an n-dimensional space ℝ^n, we assume that l training samples x_i ∈ ℝ^n are available with their corresponding labels y_i = ±1, S = {(x_i, y_i) | i ∈ [1, l]}. The SVM method consists of finding the hyperplane that maximizes the margin, i.e., the distance to the closest training data points in both classes. Noting w ∈ ℝ^n as the vector normal to the hyperplane and b ∈ ℝ as the bias, the hyperplane Hp is defined as

    <w, x> + b = 0                                              (1)

where <w, x> is the inner product between w and x. If x ∉ Hp, then f(x) = <w, x> + b is the distance of x to Hp. The sign of f corresponds to the decision function y = sgn(f(x)). The optimal parameters (w, b) are found by solving

    min_{w,b,ξ}  (1/2) ||w||² + C Σ_{i=1}^{l} ξ_i               (2)

subject to

    y_i (<w, x_i> + b) ≥ 1 − ξ_i,   ξ_i ≥ 0,   i = 1, ..., l     (3)

The solution vector is a linear combination of some samples of the training set, those whose α_i is non-zero, called support vectors. The hyperplane decision function can thus be written as

    y_u = sgn( Σ_{i=1}^{l} y_i α_i <x_u, x_i> + b )             (4)

The maximum-margin linear classifier is the linear classifier with the maximum margin. This is the simplest kind of SVM (called a linear SVM, or LSVM).

Figure 5. Maximum margin of an SVM. There is one non-separable vector in each class.

Non-linear support vector machine
If the training data are not linearly separable, there is no straight hyperplane that can separate the classes. In order to learn a nonlinear function in that case, linear SVMs must be extended to nonlinear SVMs for the classification of nonlinearly separable data. The process of finding classification functions using nonlinear SVMs consists of two steps. First, the input vectors are transformed into high-dimensional feature vectors where the training data can be linearly separated. Then, SVMs are used to find the hyperplane of maximal margin in the new feature space. The separating hyperplane becomes a linear function in the transformed feature space but a nonlinear function in the original input space. Let x be a vector in the n-dimensional input space and φ(·) be a nonlinear mapping function from the input space to the high-dimensional feature space. The hyperplane representing the decision boundary in the feature space is defined as follows:

    w · φ(x) − b = 0                                            (5)

where w denotes a weight vector that can map the training data in the high-dimensional feature space to the output space, and b is the bias. Using the φ(·) function, the weight becomes

    w = Σ_i α_i y_i φ(x_i)                                      (6)

The decision function of

    f(x) = sgn( w · φ(x) − b )                                  (7)

becomes

    f(x) = sgn( Σ_i α_i y_i φ(x_i) · φ(x) − b )                 (8)

Note that the feature mapping functions in the optimization problem and also in the classifying function always appear as dot products, e.g., φ(x_i) · φ(x_j). φ(x_i) · φ(x_j) is the inner product between pairs of vectors in the transformed feature space. Computing the inner product in the transformed feature space seems to be quite complex and suffers from the curse of dimensionality. To avoid this problem, the kernel trick is used. The kernel trick replaces the inner product in the feature space with a kernel function K in the original input space as follows:

    K(u, v) = φ(u) · φ(v)                                       (9)

Mercer's theorem proves that a kernel function K is valid if and only if the following condition is satisfied for any function ψ(x):

    ∫∫ K(u, v) ψ(u) ψ(v) du dv ≥ 0,   where ∫ ψ(x)² dx < ∞       (10)

Mercer's theorem ensures that the kernel function can always be expressed as the inner product between pairs of input vectors in some high-dimensional space; thus the inner product can be calculated using the kernel function with only the input vectors in the original space, without transforming the input vectors into high-dimensional feature vectors.
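The following sketch illustrates the kernel trick numerically for the degree-2 polynomial kernel: the kernel value computed in the input space equals the inner product of explicit feature vectors φ(·) in the higher-dimensional space, so φ never has to be formed in practice (for the RBF kernel it could not be, since its feature space is infinite-dimensional). The input vectors and gamma value are arbitrary examples.

    import numpy as np

    def poly2_kernel(a, b):
        """Degree-2 polynomial kernel K(a, b) = (a.b + 1)^2, computed in input space."""
        return (np.dot(a, b) + 1.0) ** 2

    def poly2_features(x):
        """Explicit feature map phi(x) for 2-D input such that phi(a).phi(b) = K(a, b)."""
        x1, x2 = x
        return np.array([x1**2, x2**2,
                         np.sqrt(2) * x1 * x2,
                         np.sqrt(2) * x1,
                         np.sqrt(2) * x2,
                         1.0])

    a = np.array([0.4, 1.3])
    b = np.array([2.0, -0.7])

    print(poly2_kernel(a, b))                            # kernel value in input space
    print(np.dot(poly2_features(a), poly2_features(b)))  # same value via explicit phi

    # The RBF kernel corresponds to an infinite-dimensional phi, so only the
    # kernel-space form is usable:
    gamma = 0.5
    rbf = np.exp(-gamma * np.linalg.norm(a - b) ** 2)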


The classification function becomes

    f(x) = sgn( Σ_i α_i y_i K(x_i, x) − b )                     (11)

Since K(·) is computed in the input space, no feature transformation is actually performed and no φ(·) is computed; thus the weight vector w = Σ_i α_i y_i φ(x_i) is not computed either in nonlinear SVMs.
The following are popularly used kernel functions:
• Polynomial: K(a, b) = (a · b + 1)^d
• Radial Basis Function (RBF): K(a, b) = exp(−γ ||a − b||²)
• Sigmoid: K(a, b) = tanh(κ a · b + c) [8].

Figure 6. Non-linear SVM separation.

The kernel function transforms the data into a higher dimensional space to make it possible to perform the separation.

2.2 Unsupervised Classification
Unsupervised classification is a method which examines a large number of unknown pixels and divides them into a number of classes based on natural groupings present in the image values. Unlike supervised classification, unsupervised classification does not require analyst-specified training data. The basic premise is that values within a given cover type should be close together in the measurement space (i.e. have similar gray levels), whereas data in different classes should be comparatively well separated (i.e. have very different gray levels).

The classes that result from unsupervised classification are spectral classes based on natural groupings of the image values. The identity of each spectral class will not be initially known, so the classified data must be compared to some form of reference data (such as larger scale imagery, maps, or site visits) to determine the identity and informational values of the spectral classes. Thus, in the supervised approach we define useful information categories and then examine their spectral separability; in the unsupervised approach the computer determines spectrally separable classes, and we then define their information value.

Unsupervised classification is becoming increasingly popular in agencies involved in long term GIS database maintenance. The reason is that there are now systems that use clustering procedures that are extremely fast and require little in the nature of operational parameters. Thus it is becoming possible to train GIS analysts with only a general familiarity with remote sensing to undertake classifications that meet typical map accuracy standards. With suitable ground truth accuracy assessment procedures, this tool can provide a remarkably rapid means of producing quality land cover data on a continuing basis [3].

Figure 7. Unsupervised classification.

2.2.1 K-Means Algorithm
K-means is one of the simplest unsupervised learning algorithms that solve the well known clustering problem. The procedure follows a simple and easy way to classify a given data set through a certain number of clusters (assume k clusters) fixed a priori. The main idea is to define k centroids, one for each cluster. These centroids should be placed in a cunning way, because different locations cause different results. So, the better choice is to place them as far away from each other as possible. The next step is to take each point belonging to a given data set and associate it to the nearest centroid. When no point is pending, the first step is completed and an early grouping is done. At this point we need to re-calculate k new centroids as barycenters of the clusters resulting from the previous step. After we have these k new centroids, a new binding has to be done between the same data set points and the nearest new centroid. A loop has been generated. As a result of this loop we may notice that the k centroids change their location step by step until no more changes are done; in other words, the centroids do not move any more [9].
Finally, this algorithm aims at minimizing an objective function, in this case a squared error function. The objective function is

    J = Σ_{j=1}^{k} Σ_{i=1}^{n} || x_i^(j) − c_j ||²             (12)

where || x_i^(j) − c_j ||² is a chosen distance measure between a data point x_i^(j) and the cluster centre c_j, and J is an indicator of the distance of the n data points from their respective cluster centres.
The algorithm is composed of the following steps:
1) Place K points into the space represented by the objects that are being clustered. These points represent initial group centroids.
2) Assign each object to the group that has the closest centroid.
3) When all objects have been assigned, recalculate the positions of the K centroids.
4) Repeat Steps 2 and 3 until the centroids no longer move. This produces a separation of the objects into groups from which the metric to be minimized can be calculated.
Although it can be proved that the procedure will always terminate, the k-means algorithm does not necessarily find the optimal configuration corresponding to the global minimum of the objective function. The algorithm is also significantly sensitive to the initially selected (random) cluster centres, and can be run multiple times to reduce this effect.
K-means is a simple algorithm that has been adapted to many problem domains. As we are going to see, it is a good candidate for extension to work with fuzzy feature vectors.
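A minimal NumPy sketch of the four steps above is given below; it clusters pixel spectra into k groups by iterating the assign/recompute loop until the centroids stop moving. The synthetic cube, the value of k and the stopping tolerance are assumptions for illustration only.

    import numpy as np

    def kmeans(X, k, max_iter=100, tol=1e-6, seed=0):
        """X: (n_pixels, n_bands) array of spectra. Returns (labels, centroids)."""
        rng = np.random.default_rng(seed)
        # Step 1: pick k initial centroids from the data points.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(max_iter):
            # Step 2: assign each spectrum to the nearest centroid.
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Step 3: recompute each centroid as the barycenter of its cluster.
            new_centroids = np.array([
                X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(k)
            ])
            # Step 4: stop when the centroids no longer move.
            if np.linalg.norm(new_centroids - centroids) < tol:
                break
            centroids = new_centroids
        return labels, centroids

    # Example: cluster the spectra of a small synthetic cube into k = 5 segments.
    cube = np.random.rand(50, 50, 156)
    labels, centroids = kmeans(cube.reshape(-1, 156), k=5)
    segment_map = labels.reshape(50, 50)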


3. DECISION FUSION
A decision fusion approach is developed to combine the results from supervised and unsupervised classifiers. The final output takes advantage of the power of a support vector machine based supervised classification in class separation and the capability of an unsupervised classifier, such as K-means clustering, in reducing the impact of trivial spectral variation in homogeneous regions [10][11][12].

Figure 8. Decision fusion approach.

Decision fusion has been widely used to improve the overall classification accuracy. Most decision fusion approaches mainly focus on supervised classifiers as base learners, i.e., all classifiers need training, so the classification results can only be as good as the training data. To avoid the possible negative influence from the limited quality of training data, we are motivated to propose a method that can combine supervised and unsupervised classifiers.

A supervised classifier may provide better classification than an unsupervised classifier. However, in addition to the training data limitation, a supervised classifier may result in over-classification for some homogeneous areas. Although it may be less powerful, an unsupervised classifier can generally classify those spectrally homogeneous areas well. Thus, fusing supervised and unsupervised classification may yield better performance, since the impact from trivial spectral variations may be alleviated and the subtle difference between spectrally similar pixels may not be exaggerated. Although the individual classifiers are pixel-based, the final fused classification gives a result similar to that of an object-based classifier.

4. VOTING RULE
In this paper, we used a weighted majority voting (WMV) rule to further improve classification accuracy. The final fusion output depends not only on the classification accuracy of the supervised classifier and the chosen unsupervised classifier but also on the fusion rule. The MV rule is the most frequently used. It is simple and requires no training. The MV rule may have some limitations. For instance, it only counts the number of pixels in each class in a segment, but does not consider pixel "quality" since it treats all the pixels equally; as a result, the spectral variations from pixel to pixel are completely ignored in a relatively homogeneous segment. Intuitively, pixels close to the spectral centroid of a segment should be assigned higher weights for voting because they are the best representatives of the segment, and pixels that deviate from the centroid are not representative and should have lower weights. The resulting WMV rule can alleviate the impact from outliers, whose spectral signatures are quite different from the rest of the pixels although they are grouped into the same segment. To gauge such deviation, the Mahalanobis distance can be used to measure the distance between a pixel and the centroid.
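A minimal sketch of one way this fusion could be realized is given below: the K-means segments group the pixels, the SVM supplies per-pixel labels, and within each segment every pixel votes for its SVM label with a weight that decreases with its Mahalanobis distance to the segment's spectral centroid. The weighting function w = 1/(1+d) and the covariance regularization are assumptions of this sketch, not details taken from the paper.

    import numpy as np

    def weighted_majority_fusion(spectra, svm_labels, segment_ids, n_classes):
        """
        spectra:     (n_pixels, n_bands) pixel spectra
        svm_labels:  (n_pixels,) integer labels from the supervised classifier
        segment_ids: (n_pixels,) segment indices from the unsupervised classifier
        Returns fused (n_pixels,) labels: one class per segment by weighted voting.
        """
        fused = np.empty_like(svm_labels)
        for seg in np.unique(segment_ids):
            idx = np.where(segment_ids == seg)[0]
            X = spectra[idx]
            centroid = X.mean(axis=0)
            # Regularized covariance so the inverse exists for small segments
            # (the regularization constant is an assumption of this sketch).
            cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            cov_inv = np.linalg.inv(cov)
            diff = X - centroid
            d = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))  # Mahalanobis
            w = 1.0 / (1.0 + d)          # closer to the centroid -> larger weight
            votes = np.zeros(n_classes)
            for label, weight in zip(svm_labels[idx], w):
                votes[label] += weight   # weighted vote for the pixel's SVM class
            fused[idx] = votes.argmax()  # the whole segment takes the winning class
        return fused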



4.1 Accuracy Assessment
Accuracy assessment is an important step in the classification process. The goal is to quantitatively determine how effectively pixels were grouped into the correct feature classes in the area under investigation. The land cover types derived from digital image interpretation and analysis require validation with data obtained from ground verification. For the accuracy assessment, a coefficient of agreement between the classified image data and ground reference data was calculated using Kappa. The accuracy was determined by the kappa statistic in order to test whether any difference exists in the interpretation work.

Briefly, the Kappa statistic provides a measure of the overall accuracy of an image classification and of individual category accuracy as a means of the actual agreement between classification and observation. The value of Kappa lies between 0 and 1, where 0 represents agreement due to chance only, while 1 represents complete agreement between the two data sets. Negative values can occur but they are spurious. It is usually expressed as a percentage (%). [13] claimed that the Kappa statistic has been shown to be a statistically more sophisticated measure of classifier agreement and thus gives better interclass discrimination than overall accuracy.



5. RESULT AND ANALYSIS
The software implementation of the algorithm is written in the Matlab environment using Matlab 7.10. The hyperspectral dataset was generated by the SAMSON sensor. The SAMSON components are integrated with the flight management system to enable real-time management of data collection and flight orientation.

The HSI sensor incorporates improved spectral alignment, radiometric calibration and atmospheric correction procedures. The charge coupled device (CCD) used in the SAMSON sensor is a thinned, backside-illuminated CCD (produced by Sarnoff Imaging). The CCD manufacturing process greatly increases the quantum efficiency from about 5% to 60% at 400 nm, and from about 40% to 85% at 700 nm, with a band width of 3.2 nm. The data was collected by the Florida Environmental Research Institute as part of the GOES-R sponsored experiment. The instrument flown during the collect is the SAMSON, a push-broom, visible to near IR, hyperspectral sensor. This sensor was designed and developed by FERI. The following paper describes the basic design of the sensor: Kohler et al. (2006) [14]. This dataset utilized the new radiometric calibration technique specifically designed to characterize and correct the stray light found within the sensor. The following paper describes the approach: Kohler et al. (2004). The dataset has samples = 952, lines = 952, bands = 156.

Figure 9. Hyperspectral image.

5.1 Classification output
Figure 10. Classified output for band 75 (panels: K-means, SVM, weighted majority voting, majority voting).

Majority voting: OA = 90.5886, AA = 90.585. Weighted majority voting: OA = 99.0039, AA = 99.

  Band    Majority Voting        Weighted Majority Voting
          Kappa Coefficient      Kappa Coefficient
  1       91.4158                99.9080
  10      90.8216                99.2586
  25      91.0987                99.5614
  45      90.7436                99.1733
  75      88.8225                97.0738
  50      90.3567                98.7505
  100     89.0256                97.2958
  125     91.4878                99.9867
  145     89.0309                97.3015
  150     89.0639                97.3376
  156     89.1105                97.3886

Table 1. Comparison of different bands.

Figure 11. Kappa coefficient of the hyperspectral image at different wavelengths.

6. CONCLUSION
Decision fusion is one of the most widely used methods for classification of hyperspectral images. In this paper, we used a weighted decision fusion approach. The final output takes advantage of the power of supervised classification in class separation and the capability of an unsupervised classifier in reducing the impact of spectral variations in homogeneous regions.
The weighted decision fusion method was examined using SAMSON hyperspectral sample data, which shows that the classification accuracy and the kappa coefficient are better than those obtained using the MV method.

REFERENCES
[1]  Documentation tutorials on hyprspec by Randall B. Smith.
[2]  Peg Shippert, "Introduction to Hyperspectral Image Analysis," Earth Science Applications Specialist, Research Systems, Inc.
[3]  http://www.sc.chula.ac.th/courseware/2309507/Lecture/remote18.htm
[4]  B. Scholkopf and A. J. Smola, Learning with Kernels. MIT Press, 2002.
[5]  M. Fauvel, J. Chanussot, and J. A. Benediktsson, "Kernels evaluation for multiclass classification of hyperspectral remote sensing data," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Toulouse, France, 2006.
[6]  M. L. G. Mercier, "Support vector machines for hyperspectral image classification with spectral-based kernels," in Proc. Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, July 2003.
[7]  N. Keshava, "Distance metrics and band selection in hyperspectral processing with application to material identification and spectral libraries," IEEE Trans. on Geoscience and Remote Sensing, vol. 42, pp. 1552-1565, July 2004.
[8]  Hwanjo Yu and Sungchul Kim, http://people.sabanciuniv.edu/berrin/cs512/lectures/11-svmtutorial2.pdf
[9]  http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/kmeans.html
[10] H. Yang, B. Ma, and Q. Du, "Decision fusion for supervised and unsupervised hyperspectral image classification," in Proc. IEEE Geosci. Remote Sens. Symp., Jul. 2009, pp. 948-951.
[11] M. Petrakos, J. A. Benediktsson, and I. Kanellopoulos, "The effect of classifier agreement on the accuracy of the combined classifier in decision level fusion," IEEE Trans. Geosci. Remote Sens., vol. 39, no. 11, pp. 2539-2546, Nov. 2001.
[12] A. Cheriyadat, L. M. Bruce, and A. Mathur, "Decision level fusion with best-bases for hyperspectral classification," in Proc. IEEE Workshop Adv. Tech. Anal. Remotely Sensed Data, 2003, pp. 399-406.
[13] R. W. Fitzgerald and B. G. Lees, "Assessing the classification accuracy of multisource remote sensing data," Remote Sensing of Environment, vol. 47, pp. 362-368, 1994.
[14] www.opticks.org/confluence/display/opticks/sample+data

Más contenido relacionado

La actualidad más candente

B.tech sem i engineering physics u i chapter 1-optical fiber
B.tech sem i engineering physics u i chapter 1-optical fiber B.tech sem i engineering physics u i chapter 1-optical fiber
B.tech sem i engineering physics u i chapter 1-optical fiber
Rai University
 
上海必和 Advancements in hyperspectral and multi-spectral ima超光谱高光谱多光谱
上海必和 Advancements in hyperspectral and multi-spectral ima超光谱高光谱多光谱上海必和 Advancements in hyperspectral and multi-spectral ima超光谱高光谱多光谱
上海必和 Advancements in hyperspectral and multi-spectral ima超光谱高光谱多光谱
algous
 
Structural Health Monitoring Aluminum Honeycomb Sandwich Composite Panel
Structural Health Monitoring Aluminum Honeycomb Sandwich Composite PanelStructural Health Monitoring Aluminum Honeycomb Sandwich Composite Panel
Structural Health Monitoring Aluminum Honeycomb Sandwich Composite Panel
Dawid Yhisreal
 
Study of magnetic and structural and optical properties of Zn doped Fe3O4 nan...
Study of magnetic and structural and optical properties of Zn doped Fe3O4 nan...Study of magnetic and structural and optical properties of Zn doped Fe3O4 nan...
Study of magnetic and structural and optical properties of Zn doped Fe3O4 nan...
Nanomedicine Journal (NMJ)
 
B.Tech sem I Engineering Physics U-I Chapter 1-Optical fiber
B.Tech sem I Engineering Physics U-I Chapter 1-Optical fiber B.Tech sem I Engineering Physics U-I Chapter 1-Optical fiber
B.Tech sem I Engineering Physics U-I Chapter 1-Optical fiber
Abhi Hirpara
 
Fabry–pérot interferometer picoseconds dispersive properties
Fabry–pérot interferometer picoseconds dispersive propertiesFabry–pérot interferometer picoseconds dispersive properties
Fabry–pérot interferometer picoseconds dispersive properties
IAEME Publication
 

La actualidad más candente (20)

Photonics devices
Photonics devicesPhotonics devices
Photonics devices
 
Thermal response of skin diseased tissue treated by plasmonic nanoantenna
Thermal response of skin diseased tissue treated  by plasmonic nanoantenna Thermal response of skin diseased tissue treated  by plasmonic nanoantenna
Thermal response of skin diseased tissue treated by plasmonic nanoantenna
 
The analysis of spectral parameters of led grow li
The analysis of spectral parameters of led grow liThe analysis of spectral parameters of led grow li
The analysis of spectral parameters of led grow li
 
Photonics By JYRJ
Photonics By JYRJPhotonics By JYRJ
Photonics By JYRJ
 
B.tech sem i engineering physics u i chapter 1-optical fiber
B.tech sem i engineering physics u i chapter 1-optical fiber B.tech sem i engineering physics u i chapter 1-optical fiber
B.tech sem i engineering physics u i chapter 1-optical fiber
 
上海必和 Advancements in hyperspectral and multi-spectral ima超光谱高光谱多光谱
上海必和 Advancements in hyperspectral and multi-spectral ima超光谱高光谱多光谱上海必和 Advancements in hyperspectral and multi-spectral ima超光谱高光谱多光谱
上海必和 Advancements in hyperspectral and multi-spectral ima超光谱高光谱多光谱
 
Reflectivity and Braggs Wavelength in FBG
Reflectivity and Braggs Wavelength in FBGReflectivity and Braggs Wavelength in FBG
Reflectivity and Braggs Wavelength in FBG
 
FTIR
FTIRFTIR
FTIR
 
Structural Health Monitoring Aluminum Honeycomb Sandwich Composite Panel
Structural Health Monitoring Aluminum Honeycomb Sandwich Composite PanelStructural Health Monitoring Aluminum Honeycomb Sandwich Composite Panel
Structural Health Monitoring Aluminum Honeycomb Sandwich Composite Panel
 
Tearhertz Sub-Nanometer Sub-Surface Imaging of 2D Materials
Tearhertz Sub-Nanometer Sub-Surface Imaging of 2D MaterialsTearhertz Sub-Nanometer Sub-Surface Imaging of 2D Materials
Tearhertz Sub-Nanometer Sub-Surface Imaging of 2D Materials
 
OPTICAL FIBER COMMUNICATION PPT
OPTICAL FIBER COMMUNICATION PPTOPTICAL FIBER COMMUNICATION PPT
OPTICAL FIBER COMMUNICATION PPT
 
Study of magnetic and structural and optical properties of Zn doped Fe3O4 nan...
Study of magnetic and structural and optical properties of Zn doped Fe3O4 nan...Study of magnetic and structural and optical properties of Zn doped Fe3O4 nan...
Study of magnetic and structural and optical properties of Zn doped Fe3O4 nan...
 
B.Tech sem I Engineering Physics U-I Chapter 1-Optical fiber
B.Tech sem I Engineering Physics U-I Chapter 1-Optical fiber B.Tech sem I Engineering Physics U-I Chapter 1-Optical fiber
B.Tech sem I Engineering Physics U-I Chapter 1-Optical fiber
 
ICACC Presentation
ICACC PresentationICACC Presentation
ICACC Presentation
 
Fabry–pérot interferometer picoseconds dispersive properties
Fabry–pérot interferometer picoseconds dispersive propertiesFabry–pérot interferometer picoseconds dispersive properties
Fabry–pérot interferometer picoseconds dispersive properties
 
Nanotechnology by manish myst ssgbcoet
Nanotechnology by manish myst ssgbcoetNanotechnology by manish myst ssgbcoet
Nanotechnology by manish myst ssgbcoet
 
A low-cost fiber based displacement sensor for industrial applications
A low-cost fiber based displacement sensor for industrial applicationsA low-cost fiber based displacement sensor for industrial applications
A low-cost fiber based displacement sensor for industrial applications
 
Introduction to optical fiber cable
Introduction to optical fiber cableIntroduction to optical fiber cable
Introduction to optical fiber cable
 
Using Metamaterial as Optical Perfect Absorber
Using Metamaterial as Optical Perfect AbsorberUsing Metamaterial as Optical Perfect Absorber
Using Metamaterial as Optical Perfect Absorber
 
Optical communication unit 1
Optical communication unit 1Optical communication unit 1
Optical communication unit 1
 

Destacado

Exam review preeschool
Exam review preeschoolExam review preeschool
Exam review preeschool
tamaracod
 

Destacado (20)

Mv2522052211
Mv2522052211Mv2522052211
Mv2522052211
 
Rs 28.05.2013
Rs 28.05.2013Rs 28.05.2013
Rs 28.05.2013
 
Live Project Training in asp.net Vadodara
Live Project Training in asp.net VadodaraLive Project Training in asp.net Vadodara
Live Project Training in asp.net Vadodara
 
Us5996266
Us5996266Us5996266
Us5996266
 
Untitled
UntitledUntitled
Untitled
 
Ligflat - FlatBlue - Media Empresa - Conta Controlada
Ligflat - FlatBlue - Media Empresa - Conta ControladaLigflat - FlatBlue - Media Empresa - Conta Controlada
Ligflat - FlatBlue - Media Empresa - Conta Controlada
 
Como anda o sentimento das pessoas pela sua marca?
Como anda o sentimento das pessoas pela sua marca?Como anda o sentimento das pessoas pela sua marca?
Como anda o sentimento das pessoas pela sua marca?
 
Ronak_CV
Ronak_CVRonak_CV
Ronak_CV
 
Edital - PPP de Eficiência Energética de Caruaru
Edital  - PPP de Eficiência Energética de CaruaruEdital  - PPP de Eficiência Energética de Caruaru
Edital - PPP de Eficiência Energética de Caruaru
 
Rutas del Pequeño Minero: Retos y Experiencia Nacional e Internacional
Rutas del Pequeño Minero: Retos y Experiencia Nacional e InternacionalRutas del Pequeño Minero: Retos y Experiencia Nacional e Internacional
Rutas del Pequeño Minero: Retos y Experiencia Nacional e Internacional
 
Cn2treballfinal
Cn2treballfinalCn2treballfinal
Cn2treballfinal
 
Cronograma - PPP de Eficiência Energética de Caruaru
Cronograma - PPP de Eficiência Energética de CaruaruCronograma - PPP de Eficiência Energética de Caruaru
Cronograma - PPP de Eficiência Energética de Caruaru
 
Utkarsh_Resume
Utkarsh_ResumeUtkarsh_Resume
Utkarsh_Resume
 
TwitRadar - SP 460
TwitRadar - SP 460TwitRadar - SP 460
TwitRadar - SP 460
 
Vida
Vida  Vida
Vida
 
Hyperspectral images contain a wealth of data, but interpreting them requires an understanding of exactly what properties of ground materials we are trying to measure, and how they relate to the measurements actually made by the hyperspectral sensor [1].

Figure 1. The hyperspectral imagery.

1.2 Characteristics of Electromagnetic Radiation
Hyperspectral imagery involves the sensing of electromagnetic radiation. The electromagnetic spectrum used in remote sensing includes the ultraviolet (UV), visible, infrared, and microwave portions of the spectrum. Figure 2 illustrates the wavelength regions of the electromagnetic spectrum. The spectral portions of the near IR and shortwave infrared (0.7-3.0 μm) are called the reflective infrared, because measured radiation in this spectral region is mostly composed of reflected sunlight. In contrast, the IR spectrum from 5.0 to 13.0 μm is termed thermal infrared, because measurements in this spectral region generally record energy radiated from scene elements [2].
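To make the data structure concrete, the short MATLAB sketch below is illustrative only: a synthetic lines-by-samples-by-bands array stands in for a real SAMSON cube, and the wavelength axis is a hypothetical visible-to-near-IR range. It shows how a single pixel's spectral signature is read out of such a cube.

% Illustrative sketch only: synthetic cube and assumed dimensions,
% not real SAMSON data.
lines = 100; samples = 100; bands = 156;   % assumed dimensions
cube  = rand(lines, samples, bands);       % placeholder reflectance values

row = 40; col = 25;                        % arbitrary pixel location
spectrum = squeeze(cube(row, col, :));     % bands-by-1 spectral signature

wavelengths = linspace(400, 900, bands);   % nm, assumed contiguous narrow bands
plot(wavelengths, spectrum);
xlabel('Wavelength (nm)');
ylabel('Reflectance');
title('Spectral signature of one pixel');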
Figure 2. Wavelength regions of the electromagnetic spectrum.

Based on these wavelength regions, remote sensing can be classified into three main categories: visible and reflective infrared remote sensing, thermal remote sensing, and microwave remote sensing. In visible and reflective remote sensing, the radiation that is measured has a solar origin, radiated with a peak wavelength of 0.5 μm. In thermal remote sensing, the measured radiation comes from the observed objects; materials with normal temperatures (~300 K) emit radiation with a peak wavelength of 10.0 μm. Finally, in microwave remote sensing, observations are generally due to reflection of energy radiated by the observing platform (i.e., radar). In this paper, hyperspectral images from the visible and reflective infrared spectrum are used, since the majority of currently available sensors with material identification (ID) capability utilize this portion of the spectrum.

Typically the source of energy in hyperspectral imagery is the sun. The incident energy from the sun that is not absorbed or scattered in the atmosphere interacts with the earth's surface, where it is absorbed, transmitted, or reflected. Additionally, electromagnetic radiation has specific properties that are predictable in its interaction with materials and its transmission through the atmosphere. Therefore, by measuring the electromagnetic radiation in narrow wavelength bands, the resulting spectral signatures can be used, in principle, to uniquely characterize and identify any given material.

All materials have unique spectral characteristics because they absorb, reflect, and emit radiation in a unique way. For example, in the visible portion of the spectrum, a leaf appears green because it absorbs in the blue and red regions of the spectrum and reflects in the green region. These variations in absorption, reflection, and emission are due to the material composition. Differences in spectral response due to absorption, transmission, and reflection cause materials to have a unique spectral signature. Figure 3 illustrates the spectral signatures of three pixels, respectively dominated by seawater, vegetation, and concrete. Comparing the spectra of seawater and vegetation, it is observed that they reflect similarly in the visible wavelengths but differently in the infrared portion. Also, concrete has a different spectral signature compared to the other two in specific wavelength regions. Therefore, by knowing a material's spectral signature, it is possible for this material to be detected in pixels within an image. Libraries with the characteristic spectra of various materials of interest exist, and these spectral signatures can be compared with measured spectra in order to identify the features in an image. In the early work on spectral imagery, computational limits prevented full exploitation of such data; the computational power available in the latter portion of the 1990s made routine use of spectral imagery much more practical.

Figure 3. Hyperspectral pixel spectra (radiance versus wavelength in nm).

Identifying groups of pixels that have similar spectral characteristics and determining the various features or land cover classes represented by these groups is an important part of image analysis. This form of analysis is known as classification. Visual classification relies on the analyst's ability to use visual elements (tone, contrast, shape, etc.) to classify an image. Digital image classification is based on the spectral information used to create the image and classifies each individual pixel based on its spectral characteristics. The result of a classification is that all pixels in an image are assigned to particular classes or themes (e.g., water, coniferous forest, deciduous forest, corn, wheat), resulting in a classified image that is essentially a thematic map of the original image. The theme of the classification is selectable; thus a classification can be performed to observe land use patterns, geology, vegetation types, or rainfall.

In classifying an image we must distinguish between spectral classes and information classes. Spectral classes are groups of pixels that have nearly uniform spectral characteristics. Information classes are the various themes or groups we are attempting to identify in an image, and may include such classes as deciduous and coniferous forests, various crop types, or inland bodies of water. The objective of image classification is to match the spectral classes in the data to the information classes of interest.
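As a minimal illustration of how library spectra can be compared with measured spectra, the sketch below (synthetic data, not the paper's code) assigns each pixel to the closest of three hypothetical reference signatures by minimum Euclidean distance in spectral space; this is one simple way of relating measured spectra to information classes such as seawater, vegetation, or concrete.

% Illustrative sketch with synthetic data: nearest-library-spectrum matching.
bands   = 156;
library = rand(3, bands);                  % hypothetical reference spectra
names   = {'seawater', 'vegetation', 'concrete'};
pixels  = rand(500, bands);                % hypothetical measured pixel spectra

labels = zeros(size(pixels, 1), 1);
for p = 1:size(pixels, 1)
    % squared Euclidean distance from this pixel to every library spectrum
    d = sum((library - repmat(pixels(p, :), size(library, 1), 1)).^2, 2);
    [~, labels(p)] = min(d);               % index of the closest signature
end
disp(['first pixel assigned to: ' names{labels(1)}]);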
Different conditions, such as sun illumination, snow cover, other atmospheric conditions, and crop growth in a particular season, affect the classification process.

2. TYPES OF CLASSIFICATION
Image classification is perhaps the most important part of digital image analysis. It is very nice to have a "pretty picture", an image showing a multitude of colors illustrating various features of the underlying terrain, but it is quite useless unless we know what the colors mean. Image classification, or the partition of the feature space, can be done in two ways: supervised and unsupervised classification. In supervised classification, the operator defines the classes of interest by selecting training samples in an image on the basis of his/her knowledge of the area. In unsupervised classification, the image is partitioned into homogeneous spectral clusters using some clustering algorithm; these clusters are formed on the basis of spectral similarities. In the classification application, we need to preserve the features that are useful for discrimination among classes.

2.1 Supervised Classification
With supervised classification, we identify examples of the information classes (i.e., land cover types) of interest in the image. These are called "training sites". The image processing software system is then used to develop a statistical characterization of the reflectance for each information class. This stage is often called "signature analysis" and may involve developing a characterization as simple as the mean or the range of reflectance in each band, or as complex as detailed analyses of the mean, variances, and covariance over all bands. Once a statistical characterization has been achieved for each information class, the image is then classified by examining the reflectance of each pixel and making a decision about which of the signatures it resembles most [3].

Figure 4. Supervised classification.

2.1.1 Support Vector Machine
One of the best algorithms for supervised classification is the Support Vector Machine (SVM). A support vector machine is a concept in statistics and computer science for a set of related supervised learning methods that analyze data and recognize patterns, used for classification and regression analysis. The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes the input belongs to, making the SVM a non-probabilistic binary linear classifier. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on. SVMs characterize classes using a geometrical criterion rather than statistical criteria: they seek a separating hyperplane maximizing the distance to the closest training samples of the two classes. This approach gives SVMs a very high capability of generalization and, as a consequence, they require only a few training samples. For non-linearly separable data, SVMs use the kernel trick to map the data onto a higher dimensional space where they are linearly separable [4].

Here, we consider multiclass SVMs without any feature reduction of the original hyperspectral data. The standard Gaussian radial basis kernel with L2-norm distance is used. This algorithm was proved to provide interesting classification accuracy, even in the case of a limited training set [5]. Note that other kernels could be considered, such as the spectral angle mapper, which basically computes the angle between two vectors in the vector space [5][6][7]. In the following, we present the classifier used, starting with a brief recall of the general mathematical formulation of SVMs. Starting from the linearly separable case, optimal hyperplanes are introduced, then the classification problem is modified to handle non-linearly separable data, and a brief description of multiclass strategies is given. Finally, kernel methods are presented.
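The two similarity measures mentioned above, the Gaussian RBF kernel with L2-norm distance and the spectral angle, can be evaluated for a pair of pixel spectra as in the following sketch (random spectra and an assumed kernel parameter gamma, for illustration only).

% Illustrative sketch: RBF kernel value and spectral angle for two random spectra.
bands = 156;
x1 = rand(1, bands);
x2 = rand(1, bands);

gamma = 0.5;                                    % assumed kernel parameter
k_rbf = exp(-gamma * sum((x1 - x2).^2));        % K(x1,x2) = exp(-gamma*||x1-x2||^2)

cos_th = dot(x1, x2) / (norm(x1) * norm(x2));   % cosine of the angle between spectra
theta  = acos(min(max(cos_th, -1), 1));         % spectral angle in radians

fprintf('RBF kernel value: %.4f, spectral angle: %.4f rad\n', k_rbf, theta);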
Linear SVM
For a two-class problem in an n-dimensional space R^n, we assume that l training samples x_i in R^n are available with their corresponding labels y_i = ±1, S = {(x_i, y_i) | i in [1, l]}. The SVM method consists of finding the hyperplane that maximizes the margin, i.e., the distance to the closest training data points in both classes. Noting w in R^n as the vector normal to the hyperplane and b as the bias, the hyperplane H_p is defined as

\langle w, x \rangle + b = 0    (1)

where \langle w, x \rangle is the inner product between w and x. If x does not lie on H_p, then f(x) = \langle w, x \rangle + b measures the (signed) distance of x to H_p, and the sign of f corresponds to the decision function y = \mathrm{sgn}(f(x)). The optimal parameters (w, b) are found by solving

\min_{w, b, \xi} \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{l} \xi_i    (2)

subject to

y_i \left( \langle w, x_i \rangle + b \right) \ge 1 - \xi_i, \qquad \xi_i \ge 0 \;\; \forall i.    (3)

The solution vector is a linear combination of the training samples whose coefficient \alpha_i is non-zero, called the support vectors. The hyperplane decision function can thus be written as

y_u = \mathrm{sgn}\left( \sum_{i=1}^{l} y_i \alpha_i \langle x_u, x_i \rangle + b \right).    (4)

Figure 5. Maximum margin of an SVM: the maximum-margin linear classifier (the simplest kind of SVM, called an LSVM), with one non-separable vector in each class.

Non-linear support vector machine
If the training data are not linearly separable, there is no straight hyperplane that can separate the classes. In order to learn a non-linear function in that case, linear SVMs must be extended to non-linear SVMs for the classification of non-linearly separable data. The process of finding classification functions using non-linear SVMs consists of two steps. First, the input vectors are transformed into high-dimensional feature vectors where the training data can be linearly separated. Then, SVMs are used to find the hyperplane of maximal margin in the new feature space. The separating hyperplane becomes a linear function in the transformed feature space but a non-linear function in the original input space. Let x be a vector in the n-dimensional input space and \varphi(\cdot) be a non-linear mapping function from the input space to the high-dimensional feature space. The hyperplane representing the decision boundary in the feature space is defined as

w \cdot \varphi(x) - b = 0    (5)

where w denotes a weight vector that maps the training data in the high-dimensional feature space to the output space, and b is the bias. Using the \varphi(\cdot) function, the weight becomes

w = \sum_i \alpha_i y_i \varphi(x_i)    (6)

so that the decision function

f(x) = w \cdot \varphi(x) - b    (7)

becomes

f(x) = \sum_i \alpha_i y_i \, \varphi(x_i) \cdot \varphi(x) - b.    (8)

Note that the feature mapping functions in the optimization problem and also in the classifying function always appear as dot products, e.g., \varphi(x_i) \cdot \varphi(x_j), the inner product between pairs of vectors in the transformed feature space. Computing this inner product in the transformed feature space is quite complex and suffers from the curse of dimensionality. To avoid this problem, the kernel trick is used: it replaces the inner product in the feature space with a kernel function K evaluated in the original input space,

K(u, v) = \varphi(u) \cdot \varphi(v).    (9)

Mercer's theorem proves that a kernel function K is valid if and only if the following condition is satisfied for any function \psi(x) with \int \psi(x)^2 \, dx < \infty:

\iint K(x, y)\, \psi(x)\, \psi(y)\, dx\, dy \ge 0.    (10)

Mercer's theorem ensures that the kernel function can always be expressed as the inner product between pairs of input vectors in some high-dimensional space; thus the inner product can be calculated using the kernel function with input vectors in the original space only, without transforming them into high-dimensional feature vectors.
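The following sketch evaluates the kernelised form of the hyperplane decision function of Eq. (4), i.e., sgn(sum_i alpha_i y_i K(x_i, x) + b), for one test pixel. The support vectors, multipliers, labels, bias, and kernel parameter used here are hypothetical placeholders, not a model trained on the SAMSON data.

% Illustrative sketch with hypothetical values (not a trained model).
bands = 156;
nsv   = 20;
sv    = rand(nsv, bands);            % hypothetical support vectors (one per row)
ysv   = sign(randn(nsv, 1));         % their class labels (+1 / -1)
alpha = rand(nsv, 1);                % hypothetical non-negative multipliers
b     = 0.1;                         % hypothetical bias
gamma = 0.5;                         % assumed RBF kernel parameter

xtest = rand(1, bands);              % pixel spectrum to classify

d2 = sum((sv - repmat(xtest, nsv, 1)).^2, 2);   % ||x_i - x||^2 for each support vector
K  = exp(-gamma * d2);                          % kernel values K(x_i, x)

f = sum(alpha .* ysv .* K) + b;      % decision value
fprintf('decision value %.3f -> class %d\n', f, sign(f));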
The classification function becomes

y_u = \mathrm{sgn}\left( \sum_{i=1}^{l} \alpha_i y_i K(x_u, x_i) + b \right).    (11)

Since K(\cdot) is computed in the input space, no feature transformation is actually carried out and no \varphi(\cdot) is computed; thus the weight vector w = \sum_i \alpha_i y_i \varphi(x_i) is not computed either in non-linear SVMs. The following kernel functions are popularly used:
• Polynomial: K(a, b) = (a \cdot b + 1)^d
• Radial basis function (RBF): K(a, b) = \exp(-\gamma \|a - b\|^2)
• Sigmoid: K(a, b) = \tanh(\kappa \, a \cdot b + c) [8]

Figure 6. Non-linear SVM separation.

The kernel function transforms the data into a higher dimensional space to make it possible to perform the separation.

2.2 Unsupervised Classification
Unsupervised classification is a method which examines a large number of unknown pixels and divides them into a number of classes based on natural groupings present in the image values. Unlike supervised classification, unsupervised classification does not require analyst-specified training data. The basic premise is that values within a given cover type should be close together in the measurement space (i.e., have similar gray levels), whereas data in different classes should be comparatively well separated (i.e., have very different gray levels).

The classes that result from unsupervised classification are spectral classes based on natural groupings of the image values. Because the identity of a spectral class is not initially known, the classified data must be compared with some form of reference data (such as larger scale imagery, maps, or site visits) to determine the identity and informational value of the spectral classes. Thus, in the supervised approach we define useful information categories and then examine their spectral separability, whereas in the unsupervised approach the computer determines the spectrally separable classes and we then define their information value.

Unsupervised classification is becoming increasingly popular in agencies involved in long-term GIS database maintenance. The reason is that there are now systems that use clustering procedures that are extremely fast and require few operational parameters. Thus it is becoming possible for GIS analysts with only a general familiarity with remote sensing to undertake classifications that meet typical map accuracy standards. With suitable ground truth accuracy assessment procedures, this tool can provide a remarkably rapid means of producing quality land cover data on a continuing basis [3].

Figure 7. Unsupervised classification.

2.2.1 K-Means Algorithm
K-means is one of the simplest unsupervised learning algorithms that solve the well-known clustering problem. The procedure follows a simple and easy way to classify a given data set through a certain number of clusters (assume k clusters) fixed a priori. The main idea is to define k centroids, one for each cluster. These centroids should be placed carefully, because different locations cause different results; the better choice is to place them as far away from each other as possible. The next step is to take each point belonging to the data set and associate it to the nearest centroid. When no point is pending, the first step is completed and an early grouping is done. At this point we need to re-calculate k new centroids as barycenters of the clusters resulting from the previous step. After we have these k new centroids, a new binding has to be done between the same data set points and the nearest new centroid. A loop has been generated; as a result of this loop we may notice that the k centroids change their location step by step until no more changes occur, in other words, until the centroids do not move any more [9].

Finally, this algorithm aims at minimizing an objective function, in this case a squared error function:

J = \sum_{j=1}^{k} \sum_{i=1}^{n} \left\| x_i^{(j)} - c_j \right\|^2    (12)

where \|x_i^{(j)} - c_j\|^2 is a chosen distance measure between a data point x_i^{(j)} and the cluster centre c_j, so that J is an indicator of the distance of the n data points from their respective cluster centres.
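A minimal base-MATLAB sketch of the iteration that minimizes the objective of Eq. (12) is given below (synthetic spectra, an assumed k, and random initial centroids); the enumerated steps that follow in the next paragraph describe the same loop.

% Illustrative sketch only: synthetic data, assumed sizes, random initialization.
bands = 156; n = 1000; k = 5;
X = rand(n, bands);                  % pixel spectra, one per row

idx = randperm(n);
C = X(idx(1:k), :);                  % step 1: pick k pixels as initial centroids
labels = zeros(n, 1);

for it = 1:100
    % step 2: assign every pixel to its nearest centroid (squared L2 distance)
    for p = 1:n
        d = sum((C - repmat(X(p, :), k, 1)).^2, 2);
        [~, labels(p)] = min(d);
    end
    % step 3: recompute each centroid as the mean of its assigned pixels
    Cnew = C;
    for j = 1:k
        members = X(labels == j, :);
        if ~isempty(members), Cnew(j, :) = mean(members, 1); end
    end
    % step 4: stop when the centroids no longer move
    if max(abs(Cnew(:) - C(:))) < 1e-6, C = Cnew; break; end
    C = Cnew;
end

% objective J of Eq. (12) at the final assignment
J = 0;
for p = 1:n
    J = J + sum((X(p, :) - C(labels(p), :)).^2);
end
fprintf('k-means finished after %d iterations, J = %.2f\n', it, J);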
The algorithm is composed of the following steps:
1) Place k points into the space represented by the objects that are being clustered. These points represent the initial group centroids.
2) Assign each object to the group that has the closest centroid.
3) When all objects have been assigned, recalculate the positions of the k centroids.
4) Repeat steps 2 and 3 until the centroids no longer move. This produces a separation of the objects into groups from which the metric to be minimized can be calculated.

Although it can be proved that the procedure will always terminate, the k-means algorithm does not necessarily find the most optimal configuration, corresponding to the global minimum of the objective function. The algorithm is also significantly sensitive to the initial randomly selected cluster centres; the k-means algorithm can be run multiple times to reduce this effect. K-means is a simple algorithm that has been adapted to many problem domains, and it is a good candidate for extension to work with fuzzy feature vectors.

3. DECISION FUSION
A decision fusion approach is developed to combine the results from supervised and unsupervised classifiers. The final output takes advantage of the power of a support vector machine based supervised classification in class separation and the capability of an unsupervised classifier, such as K-means clustering, in reducing the impact of trivial spectral variation in homogeneous regions [10][11][12].

Figure 8. Decision fusion approach.

Decision fusion has been widely used to improve the overall classification accuracy. Most decision fusion approaches mainly focus on supervised classifiers as base learners, i.e., all classifiers need training, so the classification results can only be as good as the training data. To avoid the possible negative influence of limited-quality training data, we are motivated to propose a method that can combine supervised and unsupervised classifiers.

A supervised classifier may provide better classification than an unsupervised classifier. However, in addition to the training data limitation, a supervised classifier may result in over-classification of some homogeneous areas. Although it may be less powerful, an unsupervised classifier can generally classify those spectrally homogeneous areas well. Thus, fusing supervised and unsupervised classification may yield better performance, since the impact of trivial spectral variations may be alleviated and the subtle differences between spectrally similar pixels may not be exaggerated. Although the individual classifiers are pixel-based, the final fused classification gives a result similar to that of an object-based classifier.

4. VOTING RULE
In this paper, we used a weighted majority voting (WMV) rule to further improve classification accuracy. The final fusion output depends not only on the classification accuracy of the supervised classifier and the chosen unsupervised classifier but also on the fusion rule. The majority voting (MV) rule is the most frequently used; it is simple and requires no training. The MV rule may, however, have some limitations. For instance, it only counts the number of pixels of each class in a segment, but does not consider pixel "quality", since it treats all pixels equally; as a result, the spectral variation from pixel to pixel is completely ignored in a relatively homogeneous segment. Intuitively, pixels close to the spectral centroid of a segment should be assigned higher weights for voting, because they are the best representatives of the segment, while pixels that deviate from the centroid are not representative and should have lower weights. The resulting WMV rule can alleviate the impact of outliers, whose spectral signatures are quite different from those of the remaining pixels even though they are grouped into the same segment. To gauge such deviation, the Mahalanobis distance can be used to measure the distance between a pixel and the centroid.
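The paper does not give an explicit weight formula, so the sketch below illustrates the weighted-voting idea under a stated assumption: each pixel in a segment is weighted by 1/(1 + d), where d is its Mahalanobis distance to the segment's spectral centroid, and the segment takes the class with the largest total weight. The segment spectra and per-pixel supervised labels are synthetic placeholders.

% Illustrative sketch of weighted majority voting for ONE segment,
% under the assumed weight w = 1/(1 + Mahalanobis distance).
bands = 20; n = 200; nclass = 4;
X        = rand(n, bands);           % spectra of the pixels in the segment
svmlabel = randi(nclass, n, 1);      % hypothetical per-pixel labels from the SVM

mu = mean(X, 1);                     % spectral centroid of the segment
S  = cov(X) + 1e-6 * eye(bands);     % covariance, slightly regularized for stability

D = X - repmat(mu, n, 1);
d = sqrt(sum((D / S) .* D, 2));      % Mahalanobis distance of every pixel to mu

w = 1 ./ (1 + d);                    % assumed weighting: closer to centroid, larger weight

score = zeros(nclass, 1);            % accumulate the weights voting for each class
for i = 1:n
    score(svmlabel(i)) = score(svmlabel(i)) + w(i);
end
[~, fusedlabel] = max(score);        % class assigned to the whole segment
fprintf('segment label by weighted majority voting: %d\n', fusedlabel);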
For vector machine- based supervised classification in instance, it only counts the number of pixels in each class separation and the capability of an unsuper- class in a segment, but does not consider pixel vised classifier, such as K-means clustering, in re- ―quality‖ since it treats all the pixels equally; as a ducing trivial spectral variation impact in homoge- result, the spectral variations from pixel to pixel is neous regions [10][11][12]. completely ignored in a relatively homogeneous segment. Intuitively, pixels close to the spectral cen- troid of a segment should be assigned higher weights for voting because they are the best repre- sentatives of the segment, and pixels deviated from the centroid are not representatives and should have lower weights. The resulting WMV rule can allevi- ate the impact from outliers, whose spectral signa- tures are quite different from the rest of pixels al- though they are grouped into the same segment. To gauge such deviation, the Mahalanobis distance can be used to measure the distance between a pixel and the centroid. 4.1 Accuracy Assessment: Accuracy assessment is an important step in the classification process. The goal is to quantita- tively determine how effectively pixels were grouped into the correct feature classes in the area Figure 8 Decision Fusion Approach under investigation. The land cover types derived from digital image interpretation and analysis re- quires validation with data obtained from ground 2182 | P a g e
For the accuracy assessment, a coefficient of agreement between the classified image data and the ground reference data was calculated using Kappa. The accuracy was determined by kappa statistics in order to test whether any difference exists in the interpretation work. Briefly, the Kappa statistic provides a measure of the overall accuracy of the image classification and of individual category accuracy as a means of actual agreement between classification and observation. The value of Kappa lies between 0 and 1, where 0 represents agreement due to chance only and 1 represents complete agreement between the two data sets. Negative values can occur, but they are spurious. It is usually expressed as a percentage (%). [13] claimed that the Kappa statistic has been shown to be a statistically more sophisticated measure of classifier agreement and thus gives better interclass discrimination than overall accuracy.

5. RESULT AND ANALYSIS
The software implementation of the algorithm is written in a Matlab environment using Matlab 7.10. The hyperspectral dataset was generated by the SAMSON sensor. The SAMSON components are integrated with the flight management system to enable real-time management of data collection and flight orientation. The HSI sensor incorporates improved spectral alignment, radiometric calibration, and atmospheric correction procedures. The charge-coupled device (CCD) used in the SAMSON sensor is a thinned, backside-illuminated CCD (produced by Sarnoff Imaging). The CCD manufacturing process greatly increases the quantum efficiency from about 5% to 60% at 400 nm, and from about 40% to 85% at 700 nm, with a bandwidth of 3.2 nm. The data were collected by the Florida Environmental Research Institute (FERI) as part of the GOES-R sponsored experiment. The instrument flown during the collect is the SAMSON, a push-broom, visible to near-IR, hyperspectral sensor designed and developed by FERI; the basic design of the sensor is described by Kohler et al. (2006) [14]. This dataset utilized the new radiometric calibration technique specifically designed to characterize and correct the stray light found within the sensor; the approach is described by Kohler et al. (2004). The dataset has samples = 952, lines = 952, and bands = 156.

Figure 9. Hyperspectral image.

5.1 Classification Output

Figure 10. Classified output for band 75: K-means, SVM, weighted majority voting, and majority voting.
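The overall accuracy (OA), average accuracy (AA), and kappa coefficient reported in Table 1 below can all be obtained from a confusion matrix, as in the following sketch (synthetic reference and predicted labels, not the SAMSON ground truth).

% Illustrative sketch with synthetic labels: OA, AA and kappa from a confusion matrix.
nclass    = 3;
reference = randi(nclass, 1000, 1);            % hypothetical ground truth labels
predicted = reference;
flip      = rand(1000, 1) < 0.1;               % corrupt about 10% of the labels
predicted(flip) = randi(nclass, sum(flip), 1);

CM = zeros(nclass);                            % rows: reference class, columns: predicted class
for i = 1:numel(reference)
    CM(reference(i), predicted(i)) = CM(reference(i), predicted(i)) + 1;
end

N  = sum(CM(:));
OA = sum(diag(CM)) / N;                        % overall accuracy
AA = mean(diag(CM) ./ sum(CM, 2));             % mean of the per-class accuracies

pe    = sum(sum(CM, 2) .* sum(CM, 1)') / N^2;  % agreement expected by chance
kappa = (OA - pe) / (1 - pe);                  % kappa coefficient

fprintf('OA = %.4f, AA = %.4f, kappa = %.4f\n', OA, AA, kappa);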
Overall accuracy (OA) and average accuracy (AA):
            Majority voting    Weighted majority voting
OA          90.5886            99.0039
AA          90.585             99

Kappa coefficient at different bands:
Band    Majority voting    Weighted majority voting
1       91.4158            99.9080
10      90.8216            99.2586
25      91.0987            99.5614
45      90.7436            99.1733
50      90.3567            98.7505
75      88.8225            97.0738
100     89.0256            97.2958
125     91.4878            99.9867
145     89.0309            97.3015
150     89.0639            97.3376
156     89.1105            97.3886

Table 1. Comparison of majority voting and weighted majority voting for different bands.

Figure 11. Kappa coefficient of the hyperspectral image at different wavelengths.

6. CONCLUSION
Decision fusion is one of the most widely used methods for classification of hyperspectral images. In this paper, we used a weighted decision fusion approach. The final output takes advantage of the power of supervised classification in class separation and the capability of an unsupervised classifier in reducing the impact of spectral variations in homogeneous regions. The weighted decision fusion method was examined using SAMSON hyperspectral sample data, and the results show that the classification accuracy and the kappa coefficient are better than those obtained with the MV method.

REFERENCES
[1] R. B. Smith, tutorial documentation on hyperspectral imaging.
[2] P. Shippert, "Introduction to Hyperspectral Image Analysis," Earth Science Applications Specialist, Research Systems, Inc.
[3] http://www.sc.chula.ac.th/courseware/2309507/Lecture/remote18.htm
[4] B. Scholkopf and A. J. Smola, Learning with Kernels. MIT Press, 2002.
[5] M. Fauvel, J. Chanussot, and J. A. Benediktsson, "Kernels evaluation for multiclass classification of hyperspectral remote sensing data," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Toulouse, France, 2006.
[6] G. Mercier and M. Lennon, "Support vector machines for hyperspectral image classification with spectral-based kernels," in Proc. IEEE Int. Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, July 2003.
[7] N. Keshava, "Distance metrics and band selection in hyperspectral processing with application to material identification and spectral libraries," IEEE Trans. Geoscience and Remote Sensing, vol. 42, pp. 1552-1565, July 2004.
[8] H. Yu and S. Kim, SVM tutorial, http://people.sabanciuniv.edu/berrin/cs512/lectures/11-svmtutorial2.pdf
[9] http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/kmeans.html
[10] H. Yang, B. Ma, and Q. Du, "Decision fusion for supervised and unsupervised hyperspectral image classification," in Proc. IEEE Geoscience and Remote Sensing Symposium, Jul. 2009, pp. 948-951.
[11] M. Petrakos, J. A. Benediktsson, and I. Kanellopoulos, "The effect of classifier agreement on the accuracy of the combined classifier in decision level fusion," IEEE Trans. Geoscience and Remote Sensing, vol. 39, no. 11, pp. 2539-2546, Nov. 2001.
[12] A. Cheriyadat, L. M. Bruce, and A. Mathur, "Decision level fusion with best-bases for hyperspectral classification," in Proc. IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003, pp. 399-406.
[13] R. W. Fitzgerald and B. G. Lees, "Assessing the classification accuracy of multisource remote sensing data," Remote Sensing of Environment, vol. 47, pp. 362-368, 1994.
[14] www.opticks.org/confluence/display/opticks/sample+data