Analyzing the Differences between the Classification of a 4 Band and an 8 Band Image Segmentation
of a WorldView-2 Image of Acme, Washington Using SPRING and ENVI

Tori Niewohner

Abstract

In this lab, I ran two image segmentations in SPRING, one using 4 bands and one using 8, on a

WorldView-2 image of Acme, Washington taken on September 29, 2010. I ran an ISODATA unsupervised

classification on the images and then assigned each segment to one of twelve information classes in

ENVI and ran an accuracy assessment on the classifications. I ended up using only eight of the information classes, and when I overlaid the ground truth data, I could only get ENVI to report results for those eight classes, which means that my accuracy assessment is not entirely correct.

The overall accuracy of both images was identical at 65.79%, but each classification was more successful at certain classes. Comparing the appearance of the two images, I lean towards the 4 band image segmentation as the better way of classifying the image because its result is less dominated by the Pasture class, but since the accuracy assessment is not correct and the two results were very similar, I would need to repeat the process with a different similarity value and all twelve information classes to gain more insight.

Methods

All of the methods that follow and the images used were provided by Dr. Wallin on his website

(Wallin 2016). The image used in this lab was acquired by the WorldView-2 satellite on September 29,

2010. It has a spatial resolution of 2 meters for 8 bands as well as a panchromatic band at 0.5 meter

resolution. The image has a pixel size of 2 meters by 2 meters with 2048 rows and 2048 columns and

displays Acme, Washington and some of the South Fork of the Nooksack River.

Using the program IMPIMA, I converted the image from a TIFF file to a format that the program SPRING can read. Before bringing the image into SPRING, I created a SPRING database and a SPRING Project to define the projection, location, and spatial extent of the image. This step requires the coordinates of the lower left and upper right corners of the image. I obtained the coordinates of the upper left corner from ENVI and did some arithmetic to derive the coordinates of the required corners.

Finally, I created a Data Model in SPRING and was able to import and view the image.
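As a rough sketch of that corner arithmetic (the coordinate values below are placeholders, not the actual coordinates of the Acme image, and Python is used purely for illustration):

    # Derive the lower-left and upper-right corners that SPRING asks for from
    # the upper-left corner reported by ENVI, the pixel size, and the image
    # dimensions. The upper-left coordinate here is a hypothetical example.
    pixel_size = 2.0                  # meters
    n_rows, n_cols = 2048, 2048

    ul_x, ul_y = 500000.0, 5400000.0  # hypothetical UTM easting/northing of the upper-left corner

    ll_x, ll_y = ul_x, ul_y - n_rows * pixel_size   # lower left: move down by the image height
    ur_x, ur_y = ul_x + n_cols * pixel_size, ul_y   # upper right: move right by the image width

    print("lower left:", ll_x, ll_y)    # 500000.0 5395904.0
    print("upper right:", ur_x, ur_y)   # 504096.0 5400000.0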

Next I performed two image segmentations in SPRING on the image, one using 4 bands and one

using 8 bands. For the 4 band image, I used the bands comparable to TM Bands 1 through 4, which are

WorldView-2 bands 2, 3, 5, and 7. For both image segmentations, I set a similarity of 10 and the area in

pixels to 25. The area in pixels sets the smallest polygon that will be created, which at 25 pixels would be

100 square meters. The similarity value determines which pixels will be grouped together based on their brightness values, with a lower similarity value giving smaller image segments that are more homogeneous.
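The sketch below illustrates both settings. The 25 pixel minimum area works out to 100 square meters, and the similarity test is shown only as a simple Euclidean distance between mean brightness vectors, which captures the general idea but is not necessarily SPRING's exact merging rule:

    import math

    # Minimum segment area: 25 pixels at 2 m x 2 m per pixel is 100 square meters.
    pixel_size = 2.0
    min_area_m2 = 25 * pixel_size ** 2            # 100.0

    # Rough similarity test: two adjacent segments are merge candidates when the
    # distance between their mean brightness vectors is below the threshold.
    def similar(mean_a, mean_b, threshold=10.0):
        return math.dist(mean_a, mean_b) < threshold

    print(min_area_m2)
    print(similar([60, 82, 45, 110], [64, 85, 48, 112]))   # True  -> likely merged
    print(similar([60, 82, 45, 110], [95, 70, 30, 150]))   # False -> kept separate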

The next step was to classify the two image segmentations. In order to do this I performed an

ISODATA unsupervised classification in SPRING and exported the resulting images as TIFF files. ENVI is

unable to read these files, so I imported them into ArcMap and exported them into an ENVI format.

Once the images were in ENVI, I changed the projection back to the Northern Hemisphere, because it had been changed to the Southern Hemisphere. Next, I assigned the spectral classes to the information classes of Residential, Farm Building, Road, Shrubs, Crops, Pasture, Recent Clear-cut, Deciduous Forest, Conifer Forest, Water, Nonforested Wetland, and Rock/Gravel Bar/Soil. After I completed the classifications, I combined the classes and then performed an accuracy assessment on both image segmentations using ground truth data of about 150 points labeled with LULC Level 3 codes.
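As a minimal sketch of how the accuracy measures reported in the Results are derived from a confusion matrix (the 3 x 3 matrix here is hypothetical and much smaller than the matrices ENVI produced for this lab; NumPy is assumed purely for illustration):

    import numpy as np

    # Rows = classified class, columns = ground truth class (hypothetical counts).
    cm = np.array([
        [50,  5,  2],
        [ 4, 30,  6],
        [ 1,  5, 47],
    ])

    overall   = np.trace(cm) / cm.sum()        # correct points / all assessed points
    producers = np.diag(cm) / cm.sum(axis=0)   # per ground-truth class (column totals)
    users     = np.diag(cm) / cm.sum(axis=1)   # per classified class (row totals)

    print(overall)     # ~0.85 for these made-up counts
    print(producers)
    print(users)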

Results

I ran into a problem when performing the accuracy assessment for both the 4 band and 8 band image segmentations and classifications because I ended up using only eight of the twelve information classes (Farm Buildings, Shrubs, Crops, Pasture, Deciduous Forest, Conifer Forest, Water, and Rock/Gravel Bar/Soil) (Figure 1, Table 1). When I used ENVI to compare the ground truth data to my classification with a confusion matrix, I could only get it to bring in the ground truth data for the eight classes that I used. This means that my calculated user's accuracies and overall accuracy are most likely inflated: they are only correct with respect to the ground truth data for the eight classes I used, not with respect to all the ground truth data collected. The producer's accuracies remain correct because I did not assign any segments to the four classes for which I was unable to overlay ground truth data.

For the 4 band image segmentation and classification, the class which took up the most area in the image is Conifer Forest at 1,289.55 hectares, or 73.70% of the image (Table 1). The class taking up the least area is Farm Buildings at 1.20 hectares, or 0.07% of the image (Table 1). The highest producer's accuracy was in the Pasture class at 100% and the lowest were in the Shrubs and Deciduous Forest classes at 0% (Table 1). The highest user's accuracies were in the Farm Buildings, Crops, and Water classes at 100% and the lowest was in the Deciduous Forest class at 0% (Table 1). The Shrubs class does not have a user's accuracy because of the issue with using only eight of the twelve classes, which I discussed earlier. The overall accuracy for this image was 65.79%.

[Classified map not reproduced here. Legend: Farm Building, Shrubs, Crops, Pasture, Deciduous Forest, Conifer Forest, Water, Rock/Gravel Bar/Soil]

Figure 1. A classification of Acme, Washington from a WorldView-2 image taken on September 29, 2010.
I performed image segmentation using 4 of the 8 bands (Bands 2, 3, 5, and 7) and then ran an ISODATA
unsupervised classification on the result. I put each image segment into one of twelve different
information classes, but only ended up using eight.

Table 1. Summary of the 4 band image segmentation and classification of the WorldView-2 image of Acme, Washington taken on September 29, 2010 and displayed in Figure 1. For each information class, I calculated the area in hectares, the percent of the image, the producer's accuracy, and the user's accuracy, as well as the overall classification accuracy. I only used eight of the twelve possible information classes and was unable to pull in the ground truth data for the four I did not use, so the user's accuracies and overall accuracy are not correct. The Unclassified class comes from an edge bordering the image.

Class                    Area (hectares)    Percent of Image    Producer's Accuracy (%)    User's Accuracy (%)
Unclassified             116.56             6.66                n/a                        n/a
Farm Buildings           1.20               0.07                62.5                       100
Shrubs                   6.85               0.39                0                          n/a
Crops                    51.87              2.96                68.75                      100
Pasture                  167.33             9.56                100.00                     94.12
Deciduous Forest         55.39              3.17                0                          0
Conifer Forest           1,289.55           73.70               96.77                      49.18
Water                    22.70              1.30                77.78                      100
Rock/Gravel Bar/Soil     38.29              2.19                85.71                      66.67
Overall Accuracy: 65.79%

For the 8 band image segmentation and classification, the class which took up the most area in the image is Conifer Forest at 838.68 hectares, or 47.93% of the image (Table 2). The class taking up the least area is Farm Buildings at 1.55 hectares, or 0.09% of the image (Table 2). The highest producer's accuracies were in the Pasture and Farm Buildings classes at 100% and the lowest was in the Shrubs class at 0% (Table 2). The highest user's accuracies were in the Farm Buildings, Crops, Deciduous Forest, and Water classes at 100% and the lowest was in the Conifer Forest class at 54.72% (Table 2). The Shrubs class does not have a user's accuracy for the same reason as in the 4 band classification. The overall accuracy for this image was 65.79%.

[Classified map not reproduced here. Legend: Farm Building, Shrubs, Crops, Pasture, Deciduous Forest, Conifer Forest, Water, Rock/Gravel Bar/Soil]

Figure 2. A classification of Acme, Washington from a WorldView-2 image taken on September 29, 2010.
I performed image segmentation using all 8 bands and then ran an ISODATA unsupervised classification
on the result. I put each image segment into one of twelve different information classes, but only ended
up using eight.

Table 2. Summary of the 8 band image segmentation and classification of the WorldView-2 image of Acme, Washington taken on September 29, 2010 and displayed in Figure 2. For each information class, I calculated the area in hectares, the percent of the image, the producer's accuracy, and the user's accuracy, as well as the overall classification accuracy. I only used eight of the twelve possible information classes and was unable to pull in the ground truth data for the four I did not use, so the user's accuracies and overall accuracy are not correct. The Unclassified class comes from an edge bordering the image.

Class                    Area (hectares)    Percent of Image    Producer's Accuracy (%)    User's Accuracy (%)
Unclassified             116.56             6.66                n/a                        n/a
Farm Buildings           1.55               0.09                100.00                     100.00
Shrubs                   16.10              0.92                0                          n/a
Crops                    25.95              1.48                43.75                      100.00
Pasture                  504.12             28.81               100.00                     55.17
Deciduous Forest         144.30             8.25                10.53                      100.00
Conifer Forest           838.68             47.93               93.55                      54.72
Water                    22.05              1.26                77.78                      100.00
Rock/Gravel Bar/Soil     80.43              4.60                85.71                      85.71
Overall Accuracy: 65.79%

Comparing the two classifications, the producer's accuracy for Farm Buildings and Deciduous Forest increased from the 4 band classification to the 8 band classification, Shrubs, Pasture, Water, and Rock/Gravel Bar/Soil remained the same, and Crops and Conifer Forest decreased (Tables 1 and 2). In terms of user's accuracy, Deciduous Forest, Conifer Forest, and Rock/Gravel Bar/Soil increased from the 4 band classification to the 8 band classification, Farm Buildings, Crops, and Water remained the same, and Pasture decreased (Tables 1 and 2). The percent of the image taken up by Farm Buildings, Shrubs, Pasture, Deciduous Forest, and Rock/Gravel Bar/Soil increased from the 4 band classification to the 8 band classification, and Crops, Conifer Forest, and Water decreased (Tables 1 and 2).

Discussion

In the first part of this lab, the program SPRING was a bit tricky at times because certain aspects of the program are unclear and I was unfamiliar with it. When I was setting the projection for the image, SPRING asked for the coordinates of the lower left and upper right pixels, but ENVI gives the coordinates of the upper left pixel. While this should just involve some simple math to get the correct coordinates, it was unclear whether SPRING wanted the coordinates of the centers of those pixels (Wallin 2016). To make things a bit more complicated, Dr. Wallin also wasn't sure whether the coordinate that ENVI provided was for the center of the pixel (Wallin 2016). I did the coordinate calculations under the assumption that everything referred to the centers of the pixels, but if this assumption was incorrect, then the projection could be wrong.
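A minimal sketch of the half-pixel shift this assumption introduces, again with a hypothetical upper-left coordinate rather than the image's actual value:

    # If ENVI's reported upper-left coordinate refers to the pixel center rather
    # than the outer corner, every corner I derived is off by half a pixel
    # (1 m for this image) in both x and y.
    pixel_size = 2.0
    ul_reported = (500000.0, 5400000.0)     # hypothetical value from ENVI

    ul_outer_corner_if_center = (ul_reported[0] - pixel_size / 2,
                                 ul_reported[1] + pixel_size / 2)

    print(ul_outer_corner_if_center)        # (499999.0, 5400001.0)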

There were quite a lot of detailed steps in SPRING before even arriving at the image

segmentation. I was able to work through the steps with Dr. Wallin's instructions, but I feel that this

program is not very user friendly and maybe would not be the best program to use for someone

unfamiliar with it or without detailed instructions. However, SPRING is free to use and thus a good

option for someone wanting to do image segmentation without a high price.

Once I reached the image segmentation step, I had to set the area in pixels and the similarity.

For this lab, the area in pixels was to be set at 25 pixels, which is 100 square meters for this image. This

was chosen because 100 square meters is about the zone of influence of a canopy dominant tree. This

means that image segments typically shouldn't be any smaller than a tree of that description. If I had set a smaller minimum segment size, I could potentially have run into issues where a single tree was split into two different segments, perhaps because one side was shaded. Thus, setting the smallest segment size as equivalent to a tree seems appropriate. I set the similarity value at 10, which I am actually not sure is appropriate for this image. A higher similarity value gives larger image segments with more variability within each segment, while a lower similarity value gives smaller segments with less variability. If I were to do this lab again, I would test out higher and lower similarity

values to see which ones worked better. As I will talk about later on, I ran into issues where there was a

lot of overlap of information classes within image segments. Therefore, I would think that a lower

similarity value would reduce this issue because it would have less variability within the segments.

The classification process in this lab proved to be quite challenging. The only two information

classes that were very easy to assign for both classifications were Farm Buildings and Water. The

Nooksack River, the several ponds scattered throughout the image, and the farm buildings were nearly

always isolated within their own image segments without other information classes getting mixed in.

There were a couple locations in both of the images where a few areas that were not water were

classified as water, but these were very minor compared to other issues.

Unfortunately, both the 4 band and the 8 band image segmentations had a lot of overlap of

information classes within image segments. Both of the images had some overlap between buildings

and rock/soil areas, which prevented me from assigning any image segments to the Residential class, because doing so would have put many areas that were certainly rock or soil into the Residential class.

There also was overlap between rock/soil, clear-cuts, road, and some crops. This makes sense because

clear-cuts and roads are essentially rock/soil and many cropland areas would be rock/soil in this image

because it was taken in September and many crops would have already been harvested. I ended up not

assigning any segments to the class of Recent Clear-cut and simply keeping those areas as the class

Rock/Gravel Bar/Soil, because when I attempted to assign the Recent Clear-cut class, many areas which

were not clear-cuts were labeled as such. I also did not assign any segments to the Road class because it caused many areas which were not road to be labeled as such. In terms of crop and soil/rock overlap, this

issue occurred in both images, but was more extreme in the 8 band image. Another large issue in both

classifications was the overlap between the classes of Conifer Forest and Deciduous Forest. In the true

color image, it was very easy to distinguish between conifer and deciduous trees because the deciduous

trees were beginning to turn orange, yellow, and red due to the time of year. Unfortunately, neither of

the images seemed to pick up the difference between deciduous and conifer trees and thus, I was not

confident about which classes I assigned to the two forest categories. The two other information classes

that caused me problems were Nonforested Wetlands and Shrubs. There was one area in the study site

which had a nonforested wetland, but classifying it as such caused many areas that were pasture

or forest to be classified as nonforested wetland. Thus, I left this class out in both classifications since it

covered such a small area. Shrubs was a tricky class to assign because it was not clear to me which areas

were shrubs by looking at the true color and color-IR images and using Google Earth. However, in both

classifications, I had a class or two which seemed to represent the shadows at the edges of the forested

areas. Although I could not be certain that there were shrubs there, since the shadowed areas were

usually at the edge, I assumed that there was likely to be shrub-like vegetation there.

I am not very confident in my results of this lab because of the issue with the overlay of the

ground truth data that I described earlier. Since I was only able to get ENVI to give me the ground truth

data for the classes that I assigned, this meant that my user's accuracies and my overall accuracy were

not quite right. Even though my accuracy assessment is not correct, I can still compare the accuracies of

the 4 band and the 8 band classification to see where it appears that improvements were made.

Surprisingly, the overall accuracy for both classifications was exactly the same at 65.79% (Tables 1 and 2). This seemed like an error at first, but upon looking over the assessment, I realized that the overall accuracies were indeed the same and that increases in the producer's accuracy for some classes from one classification to the other were met with decreases in the producer's accuracy for other classes. These increases and decreases in producer's accuracy balanced out to an identical overall accuracy.
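A toy illustration of how that balancing works, using made-up point counts rather than the lab's actual confusion matrices: both classifications get the same total number of ground truth points right, so the overall accuracy comes out identical even though individual classes shift.

    # Hypothetical ground truth counts for three classes.
    truth_points  = {"Crops": 16, "Conifer Forest": 31, "Deciduous Forest": 19}

    # Hypothetical numbers of correctly classified points per class.
    correct_4band = {"Crops": 11, "Conifer Forest": 30, "Deciduous Forest": 0}
    correct_8band = {"Crops":  7, "Conifer Forest": 29, "Deciduous Forest": 5}

    total = sum(truth_points.values())
    print(sum(correct_4band.values()) / total)   # 0.6212...
    print(sum(correct_8band.values()) / total)   # 0.6212... identical overall accuracy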

Going by the producer's accuracy (which, despite my issues, is still accurate, since I assigned no segments to the four classes whose ground truth data could not be brought in), the 4 band classification was better at classifying Crops and Conifer Forest, the 8 band classification was better at classifying Farm Buildings and Deciduous Forest, and they were the same at classifying Shrubs, Water, Pasture, and Rock/Gravel Bar/Soil. The 4 band and the 8 band classification each had different strengths in terms of which classes were easier to classify, but since their overall accuracy was the same, I can't say that one was quantitatively better than the other. If I were to do this again, I would change the similarity

as I discussed earlier and attempt to use all twelve information classes in order to correctly do an

accuracy assessment and determine whether the 4 band or 8 band classification worked better.

Comparing the images qualitatively, the 4 band classification makes more sense for several reasons. In the 8 band classification, too much of the image appeared to be classified as Pasture and some crops were classified as rock/soil. In the 4 band classification, the location of the pasture makes much more sense, much of what the 8 band classification labeled Pasture appears instead as Conifer Forest, and most of the crops seem to be classified as Crops.

Although qualitatively the 4 band classification looks better, since the overall accuracy is the

same, I can't say that one worked better than the other. Also, because of my issues with the

classification accuracy assessment, I am not confident in my results and would need to repeat the

process to gain better insight. Image segmentation could be applied to any situation where you need to

gain information about the land use and land cover of a study area, but I am unsure whether it is

better to do 4 band or 8 band classification.

Literature Cited

Wallin, D. 2016. Lab VI: Image Segmentation with SPRING and ENVI: WorldView-2, 8-band image of
Acme, WA. http://faculty.wwu.edu/wallin/envr442/ENVI/442_segmentation_ENVI_acme2.htm
