An evaluation of Radarsat-2 individual and combined image dates for land use/ cover mapping
Full Terms & Conditions of access and use can be found at http://www.tandfonline.com/action/journalInformation?journalCode=tgei20
Downloaded by: [George Mason University], 19 November 2015, at 16:19
Geocarto International, ISSN: 1010-6049 (Print), 1752-0762 (Online)
Journal homepage: http://www.tandfonline.com/loi/tgei20
To cite this article: Terry Idol, Barry Haack & Ron Mahabir (2015): An evaluation of Radarsat-2 individual and combined image dates for land use/cover mapping, Geocarto International, DOI: 10.1080/10106049.2015.1120351
To link to this article: http://dx.doi.org/10.1080/10106049.2015.1120351
Accepted author version posted online: 18 Nov 2015.
Publisher: Taylor & Francis. Journal: Geocarto International. DOI: http://dx.doi.org/10.1080/10106049.2015.1120351

Terry Idol (1), Barry Haack (2), Ron Mahabir (3)*
(1)(2)(3) George Mason University, Department of Geography and Geoinformation Science, Fairfax, Virginia, USA
E-Mail: tidol@gmu.edu; bhaack@gmu.edu; rmahabir@gmu.edu
* Corresponding author; E-Mail: rmahabir@gmu.edu
Various land use/cover types exhibit seasonal characteristics which can be captured in remotely sensed imagery. This study examined how different seasons of Radarsat-2 data influence land use/cover classification accuracies for two study sites. Two dates of Radarsat-2 C-band quad-polarized images were obtained for Washington, D.C., USA and Wad Madani, Sudan. Spectral signatures were extracted and used with a maximum likelihood decision rule for classification, and thematic accuracies were then determined. Both despeckled radar and derived texture measures were examined. Thematic accuracies for the two despeckled image dates were similar, with a difference of 3% for Washington and 6% for Sudan. Merging the despeckled images for both seasons increased overall accuracy by 2% for Washington and 9% for Sudan. Further combining the original radar for both seasons with derived texture measures increased overall accuracies by 9% for Washington and 16% for Sudan, for final overall accuracy values of 73% and 82%.

Keywords: Radarsat-2; quad-polarization; multidate; multitemporal; texture; Sudan; Washington DC
1. Introduction

The ability to obtain accurate and timely information on multiple surface conditions has frequently been demonstrated by both airborne and spaceborne remote sensing methods. That information can be for single features, such as mapping deforestation, as well as for complex multiple land use/cover class mapping. Most of these activities have been accomplished, especially via spaceborne platforms, with multispectral sensors (Idol et al. 2015a). Currently, there are over 60 of these spaceborne civil sensors at fine to moderate spatial resolutions (Haack et al. 2014). The longest continuously functioning spaceborne mission has been Landsat, with eight successful launches since 1972. Optical systems, such as the Landsat Thematic Mapper (TM), passively record the surface reflectance of the Sun's energy in the visible and infrared spectral range.

Of increasing interest to the remote sensing community are active sensors, such as radar, that emit and receive wavelengths significantly longer than those detected by optical systems. Radar can pass through atmospheric conditions, such as clouds, that would obstruct the wavelengths of traditional spaceborne optical and multispectral systems (Stefanski et al. 2014; Al-Tahir et al. 2014; Henderson et al. 2002). Radar can also acquire data at night, as it is not dependent upon the Sun for illumination. These benefits give radar important data-collecting potential for many regions, especially those often obscured by persistent cloudy conditions, such as tropical and high-latitude regions (Sheoran & Haack 2013; Sawaya et al. 2010). In those locations, radar may be an important independent source of information. Furthermore, as many studies have shown, radar can be fused with optical data to improve land use/cover mapping accuracies (Hong et al. 2014; Hu & Ban 2012; Pacifici et al. 2008; Waske & Van der Linden 2008; Amarsaikhan et al. 2007; Shupe et al. 2004; Chust et al. 2004; Kuplich et al. 2000; Haack et al. 2000; Haack & Bechdol 2000; Solberg et al. 1994).

For many applications with optical data, in particular applications involving green vegetation such as agriculture, forestry, rangeland or wetlands, image date is often critical to the thematic accuracy of classifications. Scientists often collect detailed information on the phenology of natural vegetation and on crop calendars to assist in identifying the best image date for acquisition (Van Niel & McVicar 2004). Having multiple dates of imagery has also proven very effective in improving crop discrimination (Le Hegarat-Mascle et al. 2000; Tso & Mather 1999; Turner & Congalton 1998). Multidate analysis continues to be used by the United States Department of Agriculture, both domestically and in the Foreign Agricultural Office, to improve the accuracy of its crop inventories. The availability of free imagery, such as from Landsat and other more recent spaceborne missions, has made the compilation and acquisition of multitemporal image datasets easier. Consequently, more research on multidate analysis can be found in the literature across many disciplines. Other remote sensing technologies, such as hyperspectral imaging, are also becoming increasingly employed (Liu & Bo 2015; Gomez-Chova et al. 2003; Camps-Valls et al. 2003). Historically, the remote sensing community has placed greater emphasis on specific-date and multidate analysis using optical imagery.
That emphasis was primarily based upon having only single-band, single-polarization radar data, where the information derived was more about form or structure than composition. Given both the complexity and variety of land use/cover types that exist, this presents challenges when attempting to extract unique signatures for individual classes using data from a single band (Dell'Acqua et al. 2003; Toyra et al. 2001). Furthermore, compared to radar, optical data have been much more widely accessible and, for that reason, it is not surprising that they continue to be the dominant source of imagery for most applications. More recently, radar systems such as the Japanese Phased Array L-band Synthetic Aperture Radar (Palsar), the Canadian Radarsat-2, the German Aerospace Center/European Aeronautic Defence and Space Company TerraSAR-X and the European Sentinel sensors collect information from multiple polarizations. Importantly for the remote sensing community, there is open access to the Sentinel radar data. These data may improve the ability to extract more detailed surface information from different or multiple radar image dates.

Multidate or multitemporal remote sensing datasets have frequently provided better mapping accuracies than single-date images, especially for agriculture. Numerous studies have demonstrated improved crop accuracies with multidate radar, or with multidate radar and optical integration (Bargiel & Herrmann 2011; Blaes, Vanhalle & Defourny 2005; De Wit & Clevers 2004). Yekkihkhay et al. (2014), for example, demonstrated while studying agriculture in Canada that overall thematic accuracies increased by 14% using a second date of radar and an additional 9% with a third date. The present study, in contrast, evaluates the use of multidate radar for more general land uses/covers than agriculture. Similarly, Niu and Ban (2013) examined various land use/covers in an urban-to-rural fringe in the Greater Toronto Area. Six image dates from Radarsat-2 were used, consisting of data collected from both ascending and descending orbits. Kappa values of 0.91 were reported using all six image dates, compared to kappa values in the range of 0.51 to 0.67 using individual image dates. Such studies continue to demonstrate the importance of investigating multiple image dates for improving land use/cover mapping accuracies.
Although various studies have used multiple radar image dates to study land use/cover, most have focused on comparing differences within the same season and/or the same study area, or on only one class (Bargiel & Herrmann 2014; Feng et al. 2012; Hu & Ban 2012; Waske & Braun 2009; Skriver 2008; Ban & Wu 2005; Engdahl et al. 2003; Shao et al. 2001; Pierce et al. 1998). Applying multiple image dates to different sites provides the opportunity to test the transferability and robustness of varied methods and data for extracting land use/cover in distinct geographic areas. This is important since properties of land use/cover often change with location because of differences in many human-related variables, such as demographic, technological, cultural and institutional factors. Another issue with radar is that little attention has been given to the necessity or appropriateness of removing, or at least reducing, speckle (Maghsoudi et al. 2012; Bouchemakh et al. 2008; Lu et al. 1996). The amount of speckle varies between radar datasets as a function of both the level of vendor preprocessing and the mode of acquisition, single- or multi-look, with the first typically having more speckle (Saevarsson et al. 2004). Decisions as to whether or not to despeckle the radar imagery before more detailed analysis are therefore important. In addition, it has been shown that combining original radar and derived texture can improve mapping accuracies, at least for some classes (Idol et al. 2015b; Haack & Bechdol 2000), making derived texture measures a useful source of information in surface classification.

The purpose of this study was to compare two seasons of Radarsat-2 imagery and the changes in classification accuracies when those seasons are combined, for two sites. In this analysis, both despeckled original data and derived texture measures were evaluated. The locations for these evaluations included an urban location, Washington, D.C., USA, and a site characterized by much more agrarian land use/cover, Wad Madani, Sudan. In Section 2 the study areas and data used are discussed. Section 3 contains the methodology, while in Section 4 results are presented. Finally, Section 5 provides conclusions.

2. Study areas and data
Two sites were selected for this analysis, representing very different landscapes and climates. The first is Washington, D.C., USA and the second Wad Madani, Sudan. Radarsat-2 quad-polarized images at a nominal spatial resolution of 8 m were acquired. In addition to the Radarsat imagery, ASTER optical images at 15 m spatial resolution were obtained for each site to assist in the identification of calibration and validation areas of interest (AOIs).

Radarsat-2 C-band quad-polarization images were acquired over Washington, D.C. on 18 December 2008 and 17 July 2009. The December image is from the winter season, when many trees are in leaf-off condition, and July is the summer leaf-on season. The Washington, D.C. subset includes the major metropolitan complex and several surrounding suburban areas (Figure 1). The imagery also includes a significant portion of forested areas (green and brown tones). In addition, the Potomac River (black tones) provides the opportunity to classify water bodies. The land use/cover classification features for Washington, D.C. consisted of urban, forest, suburban and water (Figure 2), as described by Anderson et al. (1976). The vast majority of high-backscatter areas in Figure 1 are urban features located in and around Washington, D.C. Most of these are suburban residential areas (pink tones) consisting of a complex landscape of buildings, lawns, trees, roads, etc., which creates a mix of high and low radar returns. Governmental and commercial features (white tones) were mainly located in downtown Washington, D.C.

INSERT FIGURE 1
INSERT FIGURE 2

Radarsat-2 data for Wad Madani were captured on 13 January 2009, during the winter fallow season, and 6 June 2009, during the summer growing season (Figure 3). Sudan's major geographic feature is the Nile River and its tributaries, which include the Blue Nile and the White Nile. The city of Wad Madani is located on a bend on the west bank of the Blue Nile River (white tones in Figure 3) and is approximately 160 km southeast of Sudan's capital city of Khartoum (Sawaya et al. 2010). The land to the west of the Blue Nile is primarily agriculture (green tones). The land to the east of the Blue Nile and north of Wad Madani is desert (dark tones). The Blue Nile crosses the middle of the image from south-central to northwest (bright green tones).

INSERT FIGURE 3

Consistent with Anderson et al. (1976), the following land use/cover features were classified for Wad Madani: urban, fallow agriculture and/or bare ground, sparse natural forest, water and agriculture (Figure 4). The width of the Blue Nile River fluctuates between 280 m and 460 m around the city of Wad Madani. Because of its narrow width, this water body could introduce issues when using larger pixels for classification or with some window-derived values. Also, similar to Washington, D.C., the land use/cover classes extracted were generalized and limited in number. However, for a comparison of the methods and data used, these classes were considered acceptable for determining relative accuracies.

INSERT FIGURE 4

3. Methodology

Radar images for both dates were registered to a common geographic coordinate system, UTM Zone 18 in the case of Washington, D.C. and UTM Zone 36 in the case of Wad Madani. In both cases, an earth model of WGS 1984 was used. Next, land use/cover classification was done following a three-stage approach. These stages, as suggested by Foody (1999), are training, classification and testing or accuracy assessment. During the training stage, calibration or training areas of interest (AOIs) were determined from knowledge of the area, from visual analysis of the ASTER imagery and from various other remote sensing data, including image scenes from Google Earth.
For the classification, the maximum likelihood classifier (MLC) was used. MLC is the most widely adopted parametric classification algorithm for land use/cover mapping (Bailly et al. 2007; Jensen 2005; Currit 2005; Weng 2002). In addition, its simplicity and wide implementation in most remote sensing software packages, such as the ERDAS Imagine software used in this study, were understandable factors in employing this algorithm for classification. The combined quad-polarized bands (HH, HV, VH and VV) of the radar images for each site (Section 4.1), for the two seasons together (Section 4.2), and for extracted texture both alone and in combination with the original radar (Section 4.3), were layer stacked to create composite images for classification. Finally, during the testing stage, thematic accuracies for each land use/cover class and overall accuracies were assessed by comparing a sample of the classified data to validation or truth sites, which were separate from the calibration locations. A classification accuracy of 85% was used to indicate good class and overall thematic accuracy, as recommended by Congalton & Green (1999) and Anderson et al. (1976).

For both calibration and validation, two to four AOIs were selected for each class, containing about 1600 pixels on average. It could be argued that point samples would be better than polygon validation samples, since polygons represent discrete generalizations of land use/cover classes (Verbyla & Hammond 1995; Moisen et al. 1994; Janssen & Vanderwel 1994). This leads to issues affecting multivariate normality or purity of land use/cover classes (Richards & Jia 2005), resulting in accuracy estimates that are systematically different from the actual values (Wulder et al. 2006). There are both advantages and limitations to each sampling approach, point or polygon, some of which are addressed in Stehman & Czaplewski (1998), and either method contains some level of inherent error (Congalton 1991). Nevertheless, a polygon sampling approach was selected for this study, with special attention taken to select pure polygons to help overcome issues with class impurities. In addition, because this research investigates relative rather than absolute thematic accuracy, the use of polygons for validation was considered sufficient.
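The classification and accuracy-assessment stages described above can be sketched in a few lines of numpy. This is a minimal illustration only, not the ERDAS Imagine workflow used in the study; the class names, sample arrays and helper-function names are hypothetical.

```python
import numpy as np

def train_mlc(samples):
    """Training stage: estimate a mean vector and covariance matrix per
    class from the calibration AOI pixels (dict of class -> (N, bands))."""
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in samples.items()}

def classify_mlc(pixels, stats):
    """Gaussian maximum likelihood: assign each pixel (row of a
    (N, bands) array) to the class with the highest log-likelihood
    under that class's multivariate normal model."""
    classes = list(stats)
    scores = []
    for c in classes:
        mu, cov = stats[c]
        inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
        d = pixels - mu
        # log-likelihood up to a constant: -0.5 * (log|C| + d' C^-1 d)
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv, d)))
    return np.array(classes)[np.argmax(scores, axis=0)]

def error_matrix(truth, pred, classes):
    """Testing stage: error matrix (rows = reference, cols = classified)
    plus producer's, user's and overall accuracies."""
    m = np.zeros((len(classes), len(classes)), int)
    idx = {c: i for i, c in enumerate(classes)}
    for t, p in zip(truth, pred):
        m[idx[t], idx[p]] += 1
    producers = np.diag(m) / m.sum(axis=1)   # omission side
    users = np.diag(m) / m.sum(axis=0)       # commission side
    overall = np.diag(m).sum() / m.sum()
    return m, producers, users, overall
```

With well-separated synthetic "quad-pol" training samples, `classify_mlc` recovers the class labels exactly, and `error_matrix` reproduces the producer's/user's/overall figures reported in the tables of this paper for real data.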
In evaluating the differences in mapping accuracies between image dates and combinations of dates, it is critical that coinciding land use/covers in both sets of images remain unchanged. For example, if an area consists of forest at time T1 and by time T2 that land use/cover has been converted to urban land, this will lead to misleading or erroneous classification results. Any new image generated from the fusion of two or more image dates must therefore account for this issue before that image is used for any further analysis. To address this problem, both sets of images were carefully examined, reviewing both calibration and validation sites, along with areas where change may be more likely, such as the periphery of city areas, for both image dates and both study areas. This was done by visually examining both sets of images directly and by using ancillary data from sources such as ASTER, Landsat and Google Earth. The results of this qualitative assessment confirmed that both image dates for Wad Madani and Washington, D.C. were acceptable for fusion. The next section presents the results of the various classifications, beginning with an independent classification of the original radar compared to despeckled radar for independent seasons, followed by combined-season images, and then progressing to texture evaluations and data combinations.

4. Results

4.1. Independent seasons

In this study, an initial comparison was made between the spectral signatures and thematic classifications of the original and despeckled radar. Two window sizes, 3x3 and 5x5, were investigated using a Lee-Sigma algorithm. Comparison of the original and despeckled radar for Washington, D.C. and Wad Madani showed a similar small increase, in the range of 3% to 6% in overall thematic accuracy, moving from the original to the despeckled radar with increased window size.
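The sigma-filter despeckling step above can be sketched as follows. This is a simplified, assumed implementation in the spirit of the Lee-Sigma filter (averaging only neighbours whose intensity lies within a two-sigma range of the centre pixel), not the vendor algorithm used in the study; the speckle coefficient of variation `cv` and the minimum-neighbour fallback are illustrative choices.

```python
import numpy as np

def sigma_filter(img, size=5, cv=0.25):
    """Simplified Lee-Sigma despeckle: replace each pixel with the mean
    of window neighbours inside a two-sigma intensity range around it."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + size, j:j + size]
            c = img[i, j]
            # two-sigma range under multiplicative speckle with variation cv
            lo, hi = c * (1 - 2 * cv), c * (1 + 2 * cv)
            sel = win[(win >= lo) & (win <= hi)]
            # fall back to the centre pixel if too few neighbours qualify
            out[i, j] = sel.mean() if sel.size >= 3 else c
    return out
```

Because out-of-range neighbours are excluded from the average, homogeneous regions are smoothed while strong backscatter edges (e.g. urban/water boundaries) are largely preserved, which is the property that distinguishes sigma filtering from a plain moving-average filter.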
This was expected, particularly given the use of polygons for accuracy assessment, since despeckling is basically a smoothing filter. For Washington, D.C., the overall thematic accuracies for the original radar, despeckled 3x3 and despeckled 5x5 data were 57% and 56%, 60% and 58%, and 62% and 59% for the December and July images respectively. The overall thematic accuracies for Wad Madani were 46% and 51%, 49% and 54%, and 51% and 57% for the January and June image scenes. The results for the 5x5 despeckled radar by location and season are in Tables 1 and 2. The overall thematic accuracy is the lower-right number in bold in each table.

INSERT TABLE 1
INSERT TABLE 2

For Washington, D.C. the highest accuracies, user's and producer's, were observed for water. Compared to other classes, water is much more easily detected by its relatively flat and low return in radar imagery (Idol et al. 2015b; Jaroszewski & Lefevre 1998). This is evident from Figure 1 and helps to explain the wide and continued use of radar in flooding applications and in monitoring inland water reservoirs. The highest sources of confusion were between the forest and suburban classes and between the suburban and urban classes. Green and scattered vegetation in many suburban districts may give a similar appearance in radar backscatter to fragmented or thinning forests. At an urban-rural interface, for example around a large city, land use/cover classification using remote sensing continues to be difficult. The built environments at these locations often share similar characteristics, which complicates the selection of unique signatures. Furthermore, the results for Washington, D.C. by class, both user's and producer's, and in overall thematic accuracy showed minimal differences between the two seasons, about 3%, with the winter, leaf-off season having slightly better results. This may be in part due to less classification ambiguity between the forest and suburban classes during this leaf-off period.

For Wad Madani, there was considerable confusion between the water and bare soil classes.
The highest producer's accuracy for water was 93%, with a much lower user's accuracy of 59% in June, slightly improved in January (66%). Given the small width of the Blue Nile, the larger window size in speckle reduction may have influenced these results, along with an increase in the volume of water in the Blue Nile during January, making water much more discernible in the radar imagery at that time. Confusion is also evident in the very low producer's accuracy for bare soil, likely because water and bare soil both act as specular reflectors with similarly low backscatter. The sparse trees producer's accuracy was low for both image seasons for Wad Madani, 19% and 39% for June and January respectively. Sparse trees were confused with bare soil, agriculture and urban. The agriculture producer's accuracy was low in January (29%), with a great deal of confusion among agriculture, bare soil and sparse trees. This is understandable in that it was the fallow, winter season. The June improvement in overall thematic accuracy of 6%, from 51% in January to 57%, is also reasonable, as there would be more separability between features because of the active vegetative growth, especially for agriculture, during this season.

4.2. Combining seasonal radar images

As few as two images taken in different seasons can improve classifications when using Landsat TM images (Guerschman et al. 2003). It has also been shown that combining wet and dry season radar images can improve classification results (Villiger 2008). This has largely been attributed to seasonal fluctuations influencing the separability between land use/cover types (Vogelmann 2001). The combination of multiple image dates could then potentially lead to improved classification accuracies, both overall and for individual classes. Each of the two study sites had imagery acquired in both the winter and summer seasons. The two seasonal images were layer stacked for each site, the combined images classified and their respective error matrices generated (Table 3).
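The layer stacking used to combine the seasonal images can be sketched as below. The (bands, rows, cols) array layout and the helper names are assumptions for illustration, not the study's actual ERDAS data format.

```python
import numpy as np

def layer_stack(*images):
    """Concatenate co-registered images, each shaped (bands, rows, cols),
    along the band axis: e.g. two quad-pol dates -> one 8-band composite."""
    return np.concatenate(images, axis=0)

def to_features(stack):
    """Flatten a (bands, rows, cols) composite into a (pixels, bands)
    feature matrix suitable for a per-pixel classifier such as MLC."""
    return stack.reshape(stack.shape[0], -1).T
```

For example, stacking a 4-band winter scene with a 4-band summer scene yields an 8-band composite, so each pixel carries both seasons' backscatter as one feature vector; this presumes the two dates are precisely co-registered, as described in the methodology.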
As reported in Section 4.1, there was an overall increase in classification accuracy using the 5x5 despeckled radar imagery compared to the original radar and the 3x3 despeckled data. The decision was therefore made to use the 5x5 despeckled data when combining image dates for both the Washington, D.C. and Wad Madani sites.

INSERT TABLE 3

The Washington, D.C. overall accuracy improved very little with the merging of the winter and summer season images. The overall accuracies of the individual December and July images were 62% and 59%, with the combination of both seasonal images increasing the accuracy to 64%. The combined image did, however, improve the producer's accuracy for both the suburban and urban classes when compared to either single season alone, with increases of 5%-10%. In contrast, the forest and water classes showed a small decrease in producer's accuracy of 2%-8% in the combined-image classification. The combination of the Wad Madani winter and summer season imagery helped to improve most of the individual class results when compared to the results from either of the two images independently. The combined image produced improved producer's accuracies for the bare soil (26%), sparse trees (7%), agriculture (37%) and urban (19%) classes when compared to the best results obtained for the individual seasons, whereas for the water class a small decrease of 3% in producer's accuracy was observed. Combining the two different Wad Madani seasons provided an improvement of 8% over the best single-season results. The larger increase in overall accuracy for Wad Madani compared to the relatively small increase for the Washington site was expected, because in Sudan there were more classes sensitive to seasonality. The built-up areas, however, provided better classification results with the combined-season images than did the naturally vegetated areas. This was unexpected, as impervious areas should not change much by season.
Rather, more diverse information should be available on plant life, such as trees and agricultural areas, between the seasons in the combined seasonal images, leading to improved classification results.

4.3. Texture analysis and combination with original radar

For this study, variance texture measures were extracted at four different window sizes for each band of the original, not despeckled, Radarsat-2 data. The window sizes were 5x5, 9x9, 13x13 and 17x17. The use of variance texture was guided by the results of previous work suggesting it to be a suitable measure for extracting land use/cover from radar imagery (Herold et al. 2004; Haack & Bechdol 2000). Also, it has been shown by Ulaby et al. (1990) that most texture measures extracted from the grey-level co-occurrence matrix are correlated, further supporting the use of the variance measure in this study. The error matrices for the best texture window size by season, created using the original Radarsat-2 images, are contained in Tables 4 and 5. For all but one derived texture dataset, the 17x17 window provided the best results. The one exception was the July Washington image, where the 13x13 window was slightly higher in overall accuracy. Figure 5 is an example of a classified texture map for Wad Madani.

INSERT TABLE 4
INSERT TABLE 5
INSERT FIGURE 5

For both sites, the derived texture values significantly increased overall thematic accuracies over the original radar. For Washington, D.C., those increases were about 10% and for Wad Madani about 20%. However, the relative seasonal results were similar to the original radar in both locations. For some individual classes, texture improved producer's accuracies while decreasing others. For Washington, D.C., texture improved producer's accuracies for forest and urban by 26% and 42% respectively, while decreasing
  16. 15 suburban by 24%. In Wad Madani, producer’s accuracy increased for sparse trees and urban by 28% and 37%, with agriculture decreasing by 3%. This analysis further took both of the 5x5 despeckled original radar images and combined them with the best of the texture measures that was generated for that specific image for both seasons. For Washington, D.C., this was the 17x17 and 13x13 texture measures for the December and July images, and the 17x17 texture measures for both image dates in the case of Wad Madani. Table 6 contains the results of these combinations of despeckled and texture for the two seasons. Neither of these multiple season original and texture combinations improved over earlier classifications for Washington, D.C., with the best single date texture of 72% compared to the 73% when combined. Wad Madani, on the other hand, improved from its best texture measure of 78% to 82%. INSERT TABLE 6 5. Conclusions There is an increasing availability of spaceborne radar to the basic science and application remote sensing communities. The European Space Agency, for example, under its Corpernicus program has a series of missions planned collectively known as the Sentinel program. A series of six satellites will be launched into orbit collecting radar data at various spatial resolutions, swaths and polarizations, providing global coverage of the earth surface. Already, the first two missions Sentinel-1A and Sentinel-2A have been deployed in April 2014 and June 2015 respectively, capturing information free for civilian use (ESA, 2014). Similarly, the Alaska Satellite Facility continues to share much of its radar data with the science community. More recently, the Japanese Aerospace Exploration Agency in November 2014 has made available four annual global Palsar datasets for the years 2007 to Downloadedby[GeorgeMasonUniversity]at16:1919November2015
2010, also free for use (JAXA, 2014). Many more similar contributions are expected in the future with the expansion and deployment of more operational spaceborne radar programs.

Given the increasing availability of radar data, it is important to understand both the strengths and weaknesses of using radar for land use/cover classification. Optical imagery generally provides better classifications than radar, but in many parts of the world, such as the tropics and high latitudes, it is difficult to collect optical imagery without extensive cloud cover. Locations where optical imagery is unavailable will increasingly rely on spaceborne radar. As this study has demonstrated, multidate imagery, quad-polarization and derived radar values such as texture can all contribute to improved mapping accuracies based on radar alone.

This study compared classifications of different seasons of radar, and the combination of seasons, for two study locations. Despeckled Radarsat-2 data and derived texture measures were both included in this analysis. There were minimal differences in thematic accuracy between seasons: 3% in Washington, D.C. and 6% in Sudan. Interestingly, but not surprisingly given that Washington is an urban/suburban landscape and Sudan is agricultural, the leaf-off image was better for Washington, D.C., while the growing season gave better classification results for Sudan. Combining the two dates increased results by 2% for Washington and 9% for Sudan. This suggests that some surface features respond better to multidate analysis than others. There is little reason to believe that urban classification will vary seasonally, but agriculture and natural vegetation would be expected to be very responsive to image date and multidate analysis. It is also very likely that different image dates, and more than two dates, could further improve user's, producer's and overall thematic accuracies.
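At the pixel level, the multidate analysis discussed above reduces to stacking the co-registered bands of the two dates into a single feature vector per pixel before classification. The following sketch is illustrative only; the `stack_dates` function and the list-based band layout are assumptions for demonstration, not the study's actual processing chain:

```python
# Illustrative sketch of the multidate combination step: bands from two
# co-registered acquisition dates are stacked into one feature vector per
# pixel, which a per-pixel classifier can then use.

def stack_dates(date1_bands, date2_bands):
    """Stack co-registered bands from two dates into per-pixel features.

    Each argument is a list of 2-D grids (one grid per polarization,
    e.g. HH, HV, VH, VV). Returns a 2-D grid in which each cell holds
    the concatenated feature vector for that pixel.
    """
    bands = date1_bands + date2_bands
    rows, cols = len(bands[0]), len(bands[0][0])
    return [[[band[r][c] for band in bands] for c in range(cols)]
            for r in range(rows)]

# Example: two dates with two bands each on a tiny 2x2 scene.
date1 = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
date2 = [[[9, 10], [11, 12]], [[13, 14], [15, 16]]]
features = stack_dates(date1, date2)
# features[0][0] -> [1, 5, 9, 13], the four-band vector of the top-left pixel
```

In practice the stacked layers would be the despeckled Radarsat-2 polarizations of both dates, supplied to the same supervised classifier used for the single-date runs.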
As with many studies using radar to map land use/cover, derived texture measures greatly improved results (Idol et al. 2015a; Sawaya et al. 2010), but they did not change the relative results for the individual and combined seasons. The combination of both dates of despeckled radar and texture provided very little improvement over the best texture measure. The results for this
combination for Washington, D.C. and Wad Madani were 73% and 82% respectively, representing good classifications for a radar-only derived land use/cover map. In the future, further investigation will evaluate other classifiers (e.g. support vector machines and neural networks) and image dates; the combination of radar backscattering coefficients, polarimetric parameters and textural features for classification; and different texture measures for improving both class-specific and overall thematic accuracies. Additionally, moving from relative to absolute accuracies, the sampling process will be evaluated, including the choice of point versus polygon validation samples, the sample design (e.g. random versus stratified) and the impact of these issues on multidate radar imagery.
Acknowledgements

The authors would like to thank the following organizations for providing and/or funding the imagery used and for supporting this research. Radarsat-2 images were provided by the Canadian Space Agency under project 3126 of the Science and Operational Application Research for RADARSAT-2 program. The NASA Land Processes Distributed Active Archive Center at the USGS Earth Resources Observation and Science (EROS) Center provided the ASTER imagery. Finally, additional support was provided through grants received by the Department of Geography and Geoinformation Science at George Mason University.
Conflicts of Interest

The authors declare no conflict of interest.
References

Al-Tahir R, Saeed I, Mahabir R. 2014. Application of remote sensing and GIS technologies in flood risk management. In Flooding and Climate Change: Sectorial Impacts and Adaptation Strategies for the Caribbean Region, edited by D.D. Chadee, J.M. Sutherland and J.B. Agard. Nova Publishers, Hauppauge, New York, pp. 137-150.

Amarsaikhan D, Ganzorig M, Ache P, Blotevogel H. 2007. The integrated use of optical and InSAR data for urban land cover mapping. International Journal of Remote Sensing 28(6): 1161-1171. doi:10.1080/01431160600784267

Anderson JR, Hardy EE, Roach JT, Witmer RE. 1976. A land use and land cover classification system for use with remote sensor data. US Geological Survey Professional Paper No. 964, 28 pp. Washington, D.C.
Bailly JS, Arnauda M, Puech C. 2007. Boosting: a classification method for remote sensing. International Journal of Remote Sensing 28(7): 1687-1710. doi:10.1080/01431160500469985

Ban Y, Wu Q. 2005. RADARSAT SAR data for land use/land cover classification in the rural-urban fringe of the greater Toronto area. In Proceedings of the 8th AGILE Conference on Geographic Information Science, Estoril, Portugal, 26-28 May, 8 pp.

Bargiel D, Herrmann S. 2011. Multi-temporal land-cover classification of agricultural areas in two European regions with high resolution spotlight TerraSAR-X data. Remote Sensing 3(5): 859-877. doi:10.3390/rs3050859

Blaes X, Vanhalle L, Defourny P. 2005. Efficiency of crop identification based on optical and SAR image time series. Remote Sensing of Environment 96(3-4): 352-365. doi:10.1016/j.rse.2005.03.010

Bouchemakh L, Smara Y, Boutarfa S, Hamadache Z. 2008. A comparative study of speckle filtering in polarimetric RADAR images. In Information and Communication Technologies: From Theory to Applications, 3rd International Conference, Damascus, Syria, 7-11 April, 6 pp. doi:10.1109/ICTTA.2008.4530040

Camps-Valls G, Gómez-Chova L, Calpe-Maravilla J, Soria-Olivas E, Martín-Guerrero JD, Moreno J. 2003. Support vector machines for crop classification using hyperspectral data. In Pattern Recognition and Image Analysis. Springer, Berlin Heidelberg, pp. 134-141.

Chust G, Ducrot D, Pretus JL. 2004. Land cover discrimination potential of radar multitemporal series and optical multispectral images in a Mediterranean cultural landscape. International Journal of Remote Sensing 25(17): 3513-3528. doi:10.1080/0143116032000160480

Congalton RG. 1991. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sensing of Environment 37(1): 35-46. doi:10.1016/0034-4257(91)90048-B
Congalton R, Green K. 1999. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices. 1st edition, CRC Press, 137 pp.

Currit N. 2005. Development of a remotely sensed, historical land cover change database for rural Chihuahua, Mexico. International Journal of Applied Earth Observation and Geoinformation 7(3): 232-247. doi:10.1016/j.jag.2005.05.001

Dell'Acqua F, Gamba P, Lisini G. 2003. Improvements to urban area characterization using multitemporal and multiangle RADAR images. IEEE Transactions on Geoscience and Remote Sensing 41(9): 1996-2004. doi:10.1109/TGRS.2003.814631

De Wit J, Clevers J. 2004. Efficiency and accuracy of per-field classification for operational crop mapping. International Journal of Remote Sensing 25(20): 4091-4112. doi:10.1080/01431160310001619580

Engdahl ME, Hyyppa JM. 2003. Land-cover classification using multitemporal ERS-1/2 InSAR data. IEEE Transactions on Geoscience and Remote Sensing 41(7): 1620-1628. doi:10.1109/TGRS.2003.813271

ESA 2014. European Space Agency. https://sentinel.esa.int/web/sentinel/sentinel-data-access (accessed on September 14, 2014)

Feng Q, Chen E, Li Z, Guo Y, Zhou W, Li W, Xu G. 2012. Land cover classification by Support Vector Machines using multi-temporal polarimetric SAR data. In IEEE International Geoscience and Remote Sensing Symposium, 22-27 July, Munich, Germany, pp. 6244-6246. doi:10.1109/IGARSS.2012.6352685

Foody GM. 1999. The continuum of classification fuzziness in thematic mapping. Photogrammetric Engineering and Remote Sensing 65(4): 443-452.

Gomez-Chova L, Calpe J, Camps-Valls G, Martin JD, Soria E, Vila J, Alonso-Chorda L, Moreno J. 2003. Feature selection of hyperspectral data through local correlation and SFFS for crop classification. In Geoscience and Remote Sensing Symposium Proceedings, 21-25 July, France, Vol. 1, pp. 555-557. doi:10.1109/IGARSS.2003.1293840
Guerschman J, Paruelo J, Bella C, Giallorenzi M, Pacin F. 2003. Land cover classification in the Argentine Pampas using multi-temporal Landsat TM data. International Journal of Remote Sensing 24(17): 3381-3402. doi:10.1080/0143116021000021288

Haack BN, Bechdol M. 2000. Integrating multisensor data and RADAR texture measures for land cover mapping. Computers and Geosciences 26(4): 411-421. doi:10.1016/S0098-3004(99)00121-1

Haack B, Mahabir R, Kerkering J. 2014. Remote sensing-derived national land cover land use maps: a comparison for Malawi. Geocarto International 30(3): 270-292. doi:10.1080/10106049.2014.952355

Henderson F, Chasan R, Portolese R, Hart T. 2002. Evaluation of RADAR-optical imagery synthesis techniques in a complex coastal ecosystem. Photogrammetric Engineering and Remote Sensing 68(8): 839-846.

Hong G, Zhang A, Zhou F, Brisco B. 2014. Integration of optical and synthetic aperture radar (SAR) images to differentiate grassland and alfalfa in Prairie area. International Journal of Applied Earth Observation and Geoinformation 28: 12-19. doi:10.1016/j.jag.2013.10.003

Hu H, Ban Y. 2012. Multitemporal RADARSAT-2 ultra-fine beam SAR data for urban land cover classification. Canadian Journal of Remote Sensing 38(1): 1-11. doi:10.5589/m12-008

Idol T, Haack B, Mahabir R. 2015a. Comparison and integration of spaceborne optical and radar data for mapping in Sudan. International Journal of Remote Sensing 36(6): 1551-1569. doi:10.1080/01431161.2015.1015659

Idol T, Haack B, Mahabir R. 2015b. Radar and optical remote sensing data evaluation and fusion: A case study for Washington DC, USA. International Journal of Image and Data Fusion, 17 pp. doi:10.1080/19479832.2015.1017541

Janssen LL, Vanderwel FJ. 1994. Accuracy assessment of satellite derived land-cover data: A review. Photogrammetric Engineering and Remote Sensing 60(4): 419-426.
Jaroszewski S, Lefevre R. 1998. Radar remote sensing: land cover classification. In IEEE Aerospace Conference Proceedings, 21-28 July, Colorado, USA, Vol. 3, pp. 373-378. doi:10.1109/AERO.1998.685842

JAXA 2014. Japanese Aerospace Exploration Agency. http://www.eorc.jaxa.jp/ALOS/en/index.htm (accessed on November 30, 2014)

Jensen JR. 2005. Thematic information extraction: pattern recognition. In Introductory Digital Image Processing: A Remote Sensing Perspective, 3rd ed., edited by C.C. Keith. Prentice Hall Series in Geographic Information Science, Saddle River, NJ, USA, pp. 337-406.

Kuplich T, Freitas CDC, Soares J. 2000. The study of ERS-1 SAR and Landsat TM synergism for land use classification. International Journal of Remote Sensing 21(10): 2101-2111. doi:10.1080/01431160050021321

Le Hegarat-Mascle S, Quesney A, Vidal-Madjar D, Taconet O, Normand M, Loumagne C. 2000. Land cover discrimination from multitemporal ERS images and multispectral Landsat images: a study case in an agricultural area in France. International Journal of Remote Sensing 21(3): 435-456. doi:10.1080/014311600210678

Liu X, Bo Y. 2015. Object-based crop species classification based on the combination of airborne hyperspectral images and LiDAR data. Remote Sensing 7(1): 922-950. doi:10.3390/rs70100922

Lu YH, Tan SY, Yeo TS, Ng WE, Lim I, Zhang CB. 1996. Adaptive filtering algorithms for RADAR speckle reduction. In IEEE Geoscience and Remote Sensing Symposium Proceedings, Lincoln, Nebraska, 27-31 May, Vol. 1, pp. 67-69. doi:10.1109/IGARSS.1996.516246

Maghsoudi Y, Collins M, Leckie D. 2012. Speckle reduction for the forest mapping analysis of multi-temporal Radarsat-1 images. International Journal of Remote Sensing 33(5): 1349-1359. doi:10.1080/01431161.2011.568530
McNairn H, Champagne C, Shang J, Holmstrom D, Reichert G. 2009. Integration of optical and Synthetic Aperture Radar (SAR) imagery for delivering operational annual crop inventories. ISPRS Journal of Photogrammetry and Remote Sensing 64(5): 434-449. doi:10.1016/j.isprsjprs.2008.07.006

Moisen GG, Edwards TC, Cutler DR. 1994. Spatial sampling to assess classification accuracy of remotely sensed data. pp. 161-178. Taylor and Francis, Philadelphia, Penn.

Niu X, Ban Y. 2013. Multi-temporal RADARSAT-2 polarimetric SAR data for urban land-cover classification using an object-based support vector machine and a rule-based approach. International Journal of Remote Sensing 34(1): 1-26. doi:10.1080/01431161.2012.700133

Nyoungui A, Tonye E, Akono A. 2002. Evaluation of speckle filtering and texture analysis methods for land cover classification from RADAR images. International Journal of Remote Sensing 23(9): 1895-1925. doi:10.1080/01431160110036157

Pacifici F, Del Frate F, Emery WJ, Gamba P, Chanussot J. 2008. Urban mapping using coarse SAR and optical data: Outcome of the 2007 GRSS data fusion contest. IEEE Geoscience and Remote Sensing Letters 5(3): 331-335. doi:10.1109/LGRS.2008.915939

Pierce LE, Bergen KM, Dobson MC, Ulaby FT. 1998. Multitemporal land-cover classification using SIR-C/X-SAR imagery. Remote Sensing of Environment 64(1): 20-33. doi:10.1016/S0034-4257(97)00165-X

Richards JA, Jia X. 2005. Remote Sensing and Digital Image Analysis. 1st ed., 194-199. Berlin: Springer.

Saevarsson BB, Sveinsson JR, Benediktsson JA. 2004. Combined wavelet and curvelet denoising of SAR images. In Proceedings of IEEE Geoscience and Remote Sensing Symposium, Anchorage, Alaska, 20-24 September, Vol. 6, pp. 4235-4238. doi:10.1109/IGARSS.2004.1370070
Sawaya S, Haack B, Idol T, Sheoran A. 2010. Land use/cover mapping with quad-polarization RADAR and derived texture measures near Wad Madani, Sudan. GIScience and Remote Sensing 47(3): 398-411. doi:10.2747/1548-1603.47.3.398

Shao Y, Fan X, Liu H, Xiao J, Ross S, Brisco B, Brown R, Staples G. 2001. Rice monitoring and production estimation using multitemporal RADARSAT. Remote Sensing of Environment 76(3): 310-325. doi:10.1016/S0034-4257(00)00212-1

Sheoran A, Haack B. 2013. Classification of California agriculture using quad polarization radar data and Landsat Thematic Mapper data. GIScience and Remote Sensing 50(1): 50-63. doi:10.1080/15481603.2013.778555

Shupe SM, Marsh SE. 2004. Cover and density-based vegetation classifications of the Sonoran Desert using Landsat TM and ERS-1 SAR imagery. Remote Sensing of Environment 93(1-2): 131-149. doi:10.1016/j.rse.2004.07.002

Skriver H. 2008. Comparison between multitemporal and polarimetric SAR data for land cover classification. In IEEE Geoscience and Remote Sensing Symposium, Vol. 3, Boston, Massachusetts, 7-11 July, pp. 558-561. doi:10.1109/IGARSS.2008.4779408

Solberg AHS, Jain AK, Taxt T. 1994. Multisource classification of remotely sensed data: fusion of Landsat TM and SAR images. IEEE Transactions on Geoscience and Remote Sensing 32(4): 768-778. doi:10.1109/36.298006

Solberg AHS, Jain AK. 1997. Texture fusion and feature selection applied to RADAR imagery. IEEE Transactions on Geoscience and Remote Sensing 35(2): 475-479.

Stehman SV, Czaplewski RL. 1998. Design and analysis for thematic map accuracy assessment: fundamental principles. Remote Sensing of Environment 64(3): 331-344.

Stefanski J, Kuemmerle T, Chaskovskyy O, Griffiths P, Havryluk V, Knorn J, Waske B. 2014. Mapping land management regimes in Western Ukraine using optical and SAR data. Remote Sensing 6(6): 5279-5305. doi:10.3390/rs6065279
Toyra J, Pietroniro A, Martz L. 2001. Multisensor hydrologic assessment of a freshwater wetland. Remote Sensing of Environment 75(2): 162-173. doi:10.1016/S0034-4257(00)00164-4

Tso B, Mather PM. 1999. Crop discrimination using multi-temporal SAR imagery. International Journal of Remote Sensing 20(12): 2443-2460. doi:10.1080/014311699212119

Turner MD, Congalton RG. 1998. Classification of multi-temporal SPOT-XS satellite data for mapping rice fields on a West African floodplain. International Journal of Remote Sensing 19(1): 21-41. doi:10.1080/014311698216404

Van Niel TG, McVicar TR. 2004. Determining temporal windows for crop discrimination with remote sensing: a case study in south-eastern Australia. Computers and Electronics in Agriculture 45(1): 91-108. doi:10.1016/j.compag.2004.06.003

Ulaby FT, Kouyate F, Brisco B, Williams THL. 1986. Textural information in SAR images. IEEE Transactions on Geoscience and Remote Sensing 24(2): 235-245. doi:10.1109/TGRS.1986.289643

Verbyla DL, Hammond TO. 1995. Conservative bias in classification accuracy assessment due to pixel-by-pixel comparison of classified images with reference grids. International Journal of Remote Sensing 16(3): 581-587. doi:10.1080/01431169508954424

Villiger E. 2008. Radar and Multispectral Image Fusion Options for Improved Land Cover Classification. PhD Thesis, George Mason University.

Vogelmann JE, Howard SM, Yang L, Larson CR, Wylie BK, Van Driel N. 2001. Completion of the 1990s National Land Cover Data Set for the conterminous United States from Landsat Thematic Mapper data and ancillary data sources. Photogrammetric Engineering and Remote Sensing 67(6): 650-662.

Waske B, Braun M. 2009. Classifier ensembles for land cover mapping using multitemporal SAR imagery. ISPRS Journal of Photogrammetry and Remote Sensing 64(5): 450-457. doi:10.1016/j.isprsjprs.2009.01.003
Waske B, Van der Linden S. 2008. Classifying multilevel imagery from SAR and optical sensors by decision fusion. IEEE Transactions on Geoscience and Remote Sensing 46(5): 1457-1466. doi:10.1109/TGRS.2008.916089

Weng Q. 2002. Land use change analysis in the Zhujiang Delta of China using satellite remote sensing, GIS and stochastic modeling. Journal of Environmental Management 64(3): 273-284. doi:10.1006/jema.2001.0509

Wulder MA, White JC, Luther JE, Strickland G, Remmel TK, Mitchell SW. 2006. Use of vector polygons for the accuracy assessment of pixel-based land cover maps. Canadian Journal of Remote Sensing 32(3): 268-279. doi:10.5589/m06-023

Yekkehkhany B, Homayouni S, McNairn H, Safari A. 2014. Multi-temporal full polarimetry L-band SAR data classification for agriculture land cover mapping. In Geoscience and Remote Sensing Symposium, pp. 2770-2773. doi:10.1109/IGARSS.2014.6947050
Figure 1. Radarsat-2 composite (HH, VV and VH in RGB) image over Washington, D.C. The image footprint is approximately 27 x 32 km; the image was collected on 17 July 2009. (top left) Forested, (top right) Suburban, (bottom left) Urban, (bottom right) Water
Figure 2. Optical scenes of the Washington, D.C. classes from ASTER imagery
Figure 3. Palsar composite (HH, VV and HV in BGR) image for Wad Madani.
Figure 4. Optical scenes of the Wad Madani classes from ASTER imagery. (top left) Agriculture, (top right) Sparse trees, (middle left) Bare soil, (middle right) Urban, (bottom) Water
Figure 5. Classification of Wad Madani, completed using the Radarsat-2 January 17x17 texture measure (water – blue, agriculture – light green, bare soil – gray, sparse trees – dark green, urban – red). Approximate scene size 22 x 21 km
Tables

Table 1. Error matrices for the Washington classification using the despeckled 5x5 window (in each matrix, rows are classified data, columns are reference data, and the bottom-right value is the overall accuracy)

Washington, D.C. - Radarsat-2 - December image (accuracies in %)
                    Water  Forest  Suburban  Urban  User accuracy
Water                4782       0         2     29             99
Forest                  0    2597      1658    744             52
Suburban               86    1777      2398   1963             38
Urban                 101     519       528   2403             67
Producer accuracy      96      53        52     46             62

Washington, D.C. - Radarsat-2 - July image (accuracies in %)
                    Water  Forest  Suburban  Urban  User accuracy
Water                4101       1         2      3             99
Forest                  0    2365      1661    777             49
Suburban              598    2145      2507   1785             35
Urban                 270     382       416   2574             70
Producer accuracy      82      48        54     50             59
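The user's, producer's, and overall accuracies reported in these tables follow directly from the matrix counts: user's accuracy divides the diagonal cell by its row total, producer's accuracy by its column total, and overall accuracy divides the diagonal sum by the grand total. A minimal sketch in Python, using the December matrix from Table 1 to reproduce three of its reported values:

```python
# Recompute accuracies from the December error matrix in Table 1
# (rows = classified pixels, columns = reference pixels).
dec = {
    "Water":    [4782, 0, 2, 29],
    "Forest":   [0, 2597, 1658, 744],
    "Suburban": [86, 1777, 2398, 1963],
    "Urban":    [101, 519, 528, 2403],
}
classes = list(dec)

def user_accuracy(name):
    # Diagonal cell divided by the row (classified) total.
    row = dec[name]
    return round(100 * row[classes.index(name)] / sum(row))

def producer_accuracy(name):
    # Diagonal cell divided by the column (reference) total.
    j = classes.index(name)
    col = [dec[c][j] for c in classes]
    return round(100 * col[j] / sum(col))

def overall_accuracy():
    # Diagonal sum divided by the grand total.
    total = sum(sum(row) for row in dec.values())
    correct = sum(dec[c][classes.index(c)] for c in classes)
    return round(100 * correct / total)

print(user_accuracy("Water"), producer_accuracy("Water"), overall_accuracy())
# 99 96 62, matching the water user's, water producer's, and overall values
```

Some tabulated values differ by a percent from simple rounding, suggesting the reported figures may have been truncated rather than rounded.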
Table 2. Error matrices for the Wad Madani classification using the despeckled 5x5 window

Wad Madani - Radarsat-2 - January image (accuracies in %)
                    Water  Bare Soil  Sparse Trees  Agriculture  Urban  User accuracy
Water               14822       7081           230           68    107             66
Bare Soil            1692       4774          2481         1285   1109             42
Sparse Trees          108        255         10073         8592   4767             42
Agriculture            10          8          3351         5566   5229             39
Urban                   2         12          1879         3286   8838             63
Producer accuracy      89         39            55           29     44             51

Wad Madani - Radarsat-2 - June image (accuracies in %)
                    Water  Bare Soil  Sparse Trees  Agriculture  Urban  User accuracy
Water               15486       9558           203          572    125             59
Bare Soil            1145       2409          1266         2621    632             29
Sparse Trees            1         18          9434         3100   4179             56
Agriculture             2        145          5390        11777   4636             53
Urban                   0          0          1721          727  10478             81
Producer accuracy      93         19            52           62     52             57

Table 3. Error matrices of the two combined seasons

Washington, D.C. - Radarsat-2 - despeckled 5x5 merged seasons (accuracies in %)
                    Water  Forest  Suburban  Urban  User accuracy
Water                4350       0         0      0            100
Forest                  0    2483      1404    431             58
Suburban              414    2069      2725   1825             39
Urban                 205     341       457   2883             74
Producer accuracy      88      51        59     56             64

Wad Madani - Radarsat-2 - despeckled 5x5 merged seasons (accuracies in %)
                    Water  Bare Soil  Sparse Trees  Agriculture  Urban  User accuracy
Water               14993       6618             7            2      4             69
Bare Soil            1624       5447           570          650    169             64
Sparse Trees           11         39         10608         3913   3318             59
Agriculture             6         26          5149        12697   3939             58
Urban                   0          0          1680         1535  12620             80
Producer accuracy      90         45            59           68     63             66
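The variance texture measure behind Tables 4 and 5 is simply the local variance of backscatter within a moving window centered on each pixel. A minimal pure-Python sketch, with simplified edge handling (pixels falling outside the image are ignored, which the actual software used in the study may treat differently):

```python
def variance_texture(image, window=5):
    """Local variance of pixel values in an odd-sized moving window.

    `image` is a 2-D list of backscatter values. At the image edges the
    window shrinks to the pixels that fall inside the image, which is a
    simplification of typical border handling.
    """
    half = window // 2
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [image[rr][cc]
                    for rr in range(max(0, r - half), min(rows, r + half + 1))
                    for cc in range(max(0, c - half), min(cols, c + half + 1))]
            mean = sum(vals) / len(vals)
            out[r][c] = sum((v - mean) ** 2 for v in vals) / len(vals)
    return out

# A homogeneous area has zero variance texture; a heterogeneous one does not.
flat = [[3] * 4 for _ in range(4)]
mixed = [[0, 2], [0, 2]]
# variance_texture(flat)[1][1] == 0.0
# variance_texture(mixed, 3)[0][0] == 1.0 (window mean 1, all deviations 1)
```

The texture band produced this way is then classified, or stacked with the despeckled bands, in the same manner as the original radar data.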
Table 4. Washington, D.C. error matrices of Radarsat-2 variance texture measures

December, texture window 17x17 (accuracies in %)
                    Water  Forest  Suburban  Urban  User accuracy
Water                4897       0         0      0            100
Forest                  0    3430      2242      7             60
Suburban               70    1233      1160    546             39
Urban                   2     230      1184   4586             76
Producer accuracy      99      70        25     89             72

July, texture window 13x13 (accuracies in %)
                    Water  Forest  Suburban  Urban  User accuracy
Water                4215       0         0      0            100
Forest                  0    3882      2300     94             62
Suburban              754     876      1396    673             38
Urban                   0     135       890   4372             81
Producer accuracy      85      79        30     85             71

Table 5. Wad Madani error matrices of Radarsat-2 variance texture measures

January, texture window 17x17 (accuracies in %)
                    Water  Bare Soil  Sparse Trees  Agriculture  Urban  User accuracy
Water               14697       6011             0            0      0             71
Bare Soil            1619       6044           316           87      0             75
Sparse Trees          226         58         14468         9745    577             58
Agriculture            92         17          2691         7164   2067             60
Urban                   0          0           539         1801  17406             88
Producer accuracy      88         50            80           38     87             70

June, texture window 17x17 (accuracies in %)
                    Water  Bare Soil  Sparse Trees  Agriculture  Urban  User accuracy
Water               15481       7595             0            0      0             67
Bare Soil            1072       4493            29         2029      0             59
Sparse Trees           31          0         17075         5749   1497             70
Agriculture            50         42           523        10989     32             94
Urban                   0          0           387           30  18521             98
Producer accuracy      93         37            95           59     92             78

Table 6. Dual-season combination of original despeckled and texture data

Washington, D.C. (accuracies in %)
                    Water  Forest  Suburban  Urban  User accuracy
Water                4418       0         0      0            100
Forest                  0    3501      1700      0             67
Suburban              416    1167      1615    382             45
Urban                 135     225      1271   4757             75
Producer accuracy      89      72        35     93             73

Wad Madani (accuracies in %)
                    Water  Bare Soil  Sparse Trees  Agriculture  Urban  User accuracy
Water               15155       5849             0            0      0             72
Bare Soil            1398       6278            31          544      0             76
Sparse Trees           66          3         16881         5300   1203             72
Agriculture            15          0           556        12891     56             95
Urban                   0          0           546           62  18791             97
Producer accuracy      91         52            94           69     94             82