
Digital Image Processing: Introduction to Supervised and Unsupervised Classification

FOR: MS. JANET FINDLAY
AUTHOR: MARGARITA ISAZA

Table of Contents
1. Background
2. Purpose
3. Procedure of Supervised Classification
4. Classifications Comparison
5. Procedure and Answers to the Questions on Unsupervised Classification

1. Background
The classification process categorizes the pixels of a digital image into land cover classes,
called themes. The categorized data can then be used to produce thematic maps of the land cover.

2. Purpose
Create training areas, produce both a supervised and an unsupervised classification, evaluate
each method, and compare them.

3. Procedure of Supervised Classification


Section 1: Training Site Creation
1. Select the image that you are going to classify and display it. From the Raster tab, Classification,
Supervised, start the Signature Editor.

Figure 1: Toronto subset map

2. In order to create training sites, you need to create some Areas of Interest. This is done by
selecting the Drawing tab and using the Insert Geometry tools.

3. After the training site has been created, create a new signature in the Signature Editor by
clicking the Create New Signature(s) from AOI button.
Create at least 6 signatures for your image by repeating steps 2 and 3.

Figure 2: Signature Editor with created classes

Section 2: Signature Evaluation


1. Once the signatures are created, they can be evaluated, deleted, renamed, and merged with other
signatures.
2. Use the Signature Alarm utility to highlight the pixels in the viewer that belong, or are
estimated to belong, to a signature's class according to the parallelepiped decision rule.
Choose a signature. When you click OK, the alarmed pixels are displayed in the viewer in the
corresponding class colour. You can save the alarm image as an *.img file for use in any other
processing. Remove the Alarm mask layer before proceeding.
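The parallelepiped rule behind the Alarm utility can be illustrated with a short sketch. This is a simplified stand-in, not ERDAS code: it assumes the image is already loaded as a NumPy array of shape (bands, rows, cols) and that the box limits are simply the per-band minimum and maximum of the training pixels (ERDAS can also derive the limits from the mean plus or minus a number of standard deviations).

```python
import numpy as np

def parallelepiped_alarm(image, sig_min, sig_max):
    """Return a boolean mask of pixels that fall inside the signature's
    min/max box in every band (a simple parallelepiped decision rule).

    image   : ndarray, shape (bands, rows, cols)
    sig_min : ndarray, shape (bands,)  lower limit per band
    sig_max : ndarray, shape (bands,)  upper limit per band
    """
    low = sig_min[:, None, None]    # broadcast band limits over rows/cols
    high = sig_max[:, None, None]
    inside = (image >= low) & (image <= high)
    return inside.all(axis=0)       # a pixel must satisfy every band

# Hypothetical usage: limits taken from training-area pixels
# train has shape (bands, n_pixels)
# mask = parallelepiped_alarm(image, train.min(axis=1), train.max(axis=1))
```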

Figure 3: View of the map after the signature alarm is applied

3. The Histogram Plot Control Panel allows you to analyze the histograms for the layers to make
your own evaluations and comparisons. A histogram can be created with one or more signatures.
If you create a histogram for a single signature, then the active signature is used. If you create a
histogram for multiple signatures, then the selected signatures are used. What does the x-axis
show and what does the y-axis show?
There is a histogram for each band. Each histogram displays the pixel values on the x-axis and
the pixel count on the y-axis.
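As a rough illustration of what the Histogram Plot shows, the per-band histograms for one signature's training pixels could be computed as below. This is a hedged sketch only: the array layout and the 8-bit value range (0-255) are assumptions, not taken from the workshop data.

```python
import numpy as np

def band_histograms(train_pixels, bins=256, value_range=(0, 255)):
    """Per-band histograms for one signature's training pixels.

    train_pixels : ndarray, shape (bands, n_pixels)
    Returns a list of (counts, bin_edges): x-axis = pixel value,
    y-axis = number of training pixels with that value.
    """
    return [np.histogram(band, bins=bins, range=value_range)
            for band in train_pixels]

# Hypothetical usage for a 6-band signature:
# hists = band_histograms(water_pixels)   # water_pixels: shape (6, N)
# counts, edges = hists[0]                # Band 1 histogram
```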

Figure 4: Histogram for each band

In true color:
Band 1: A normal distribution is observed for all classes except urban, which is diffuse. Water
and vegetation overlap and have the highest pixel counts, showing that this band provides
increased penetration of water bodies while still differentiating different types of soil and rock
surfaces from vegetation.
Band 2: Water and forest do not overlap, so this band is sensitive to differences in water
turbidity. Because it covers the green reflectance peak of leaf surfaces, it separates vegetation
(forest, croplands with standing crops) from soil.
Band 3: Senses in a strong chlorophyll absorption region and a strong reflectance region for most
soils, so it discriminates vegetation from soil, but it could not separate water from forest. This
band highlights barren land, urban areas, the street pattern in the urban area, and highways. It
also separates croplands with standing crops from bare croplands with stubble.
Band 4: Operates in the best spectral region to distinguish vegetation varieties and conditions.
Because water strongly absorbs near infrared, this band delineates water bodies (lakes and
sinkholes) and distinguishes dry from moist soils (barren land and croplands). It also separates
croplands from bare croplands.
Band 5: Is sensitive to turgidity, the amount of water in plants. Band 5 separates forest land,
croplands, and water bodies distinctly, and separates water bodies from barren land, croplands,
and grassland. Since urban areas and croplands have almost the same spectral response in this
band, it could not separate those areas.
Band 7: Separates land and water sharply.

Figure 5: Histogram in false color (with different signature)

In false color: the classes have a higher concentration of pixels and the distributions are closer
to normal, but there is more overlap between some classes.

4. Signature Separability is calculated as the statistical difference between pairs of spectral
signatures. You can use the Signature Separability panel to monitor the quality of your training
sites. Use the Signature Separability dialog to determine which bands are the best for identifying
features.

Signature Separability Listing

File: x:/students/misaza1/second semester/gisc9216_dip/assignment#1/isazamargaritagisc9216d1/torontosignaturefinal.sig
Distance measure: Euclidean Distance
Using bands: 1 2 3 4 5 6
Taken 6 at a time

Class  Name
1      Water
2      Vegetation
3      BareFields1
4      BareFields2
5      Commercial
6      Urban
7      Roads

Best Minimum Separability
Bands: 1 2 3 4 5 6    AVE: 98    MIN: 27
Class pairs:    1:2  1:3  1:4  1:5  1:6  1:7  2:3
Separability:    95  160  118  236  140  110   68
Class pairs:    2:4  2:5  2:6  2:7  3:4  3:5  3:6
Separability:   184   90   60   50   82   35   59
Class pairs:    3:7  4:5  4:6  4:7  5:6  5:7  6:7
Separability:   129   40   27   99  132   33  113

Best Average Separability
Bands: 1 2 3 4 5 6    AVE: 98    MIN: 27
(Same band combination and class-pair values as above, since only one combination is possible
when all 6 bands are taken 6 at a time.)

Pairs 1:3 and 3:4 show lower separability values, indicating that these classes might need to be
resampled.
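For reference, a Euclidean-distance separability of the kind listed above is essentially the distance between the class mean vectors in band space. The sketch below is my own illustration of that idea, not the ERDAS implementation (ERDAS also offers measures such as transformed divergence and Jeffries-Matusita that account for class covariance); the dictionary layout and names are assumptions.

```python
import numpy as np
from itertools import combinations

def euclidean_separability(signatures):
    """Pairwise Euclidean distance between signature mean vectors.

    signatures : dict  name -> ndarray of training pixels, shape (bands, n)
    Returns dict  (name_a, name_b) -> distance between the two class means.
    """
    means = {name: px.mean(axis=1) for name, px in signatures.items()}
    return {(a, b): float(np.linalg.norm(means[a] - means[b]))
            for a, b in combinations(means, 2)}

# Hypothetical usage:
# sep = euclidean_separability({"Water": water_px, "Urban": urban_px})
# hardest = min(sep, key=sep.get)   # the class pair that is hardest to separate
```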
5. The Statistics utility allows you to analyse the statistics for the layers to make your own
evaluations and comparisons. Statistics may be generated for one signature at a time. The active
signature is used. Use the Statistics utility to examine your layers.
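The kind of per-band numbers shown in Figures 6 to 12 (minimum, maximum, mean, and standard deviation for each signature) could be reproduced with a small sketch like the one below; the array shape is an assumption.

```python
import numpy as np

def signature_statistics(train_pixels):
    """Per-band statistics for one signature (cf. Figures 6-12).

    train_pixels : ndarray, shape (bands, n_pixels)
    Returns a dict of ndarrays with one value per band.
    """
    return {
        "min":  train_pixels.min(axis=1),
        "max":  train_pixels.max(axis=1),
        "mean": train_pixels.mean(axis=1),
        "std":  train_pixels.std(axis=1),
    }

# Hypothetical usage:
# stats = signature_statistics(roads_pixels)   # roads_pixels: shape (6, N)
# print(stats["mean"])                          # mean DN per band
```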

Figure 6: Statistics Roads

Figure 7: Statistics Urban

Figure 8: Statistics Commercial

Figure 9: Statistics BareFields1

Figure 10: Statistics BareFields2

Figure 11: Statistics Vegetation

Figure 12: Statistics Water

Section 3: Perform Supervised Classification


6. Perform a Supervised Classification. Use the Attribute Options dialog to select Maximum,
Minimum, Mean and Standard Deviation so that the signatures in the output thematic raster layer
have this statistical information. Keep the Non-Parametric Rule option as None (see figure 1
below). Discuss the differences between the three types of classification (Maximum likelihood,
Mahalanobis distance, Minimum distance).
The three rules differ in how they assign each pixel to a class: Minimum Distance uses the
Euclidean distance from the pixel to each class mean, Mahalanobis Distance weights that distance
by the class covariance, and Maximum Likelihood computes the probability of class membership from
the class mean and covariance under an assumed normal distribution. Of the three methods executed,
Minimum Distance rendered the most accurate result; the following figures compare the supervised
Minimum Distance classification with the original image in false colour and in true colour.
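To make the contrast between the three decision rules concrete, here is a simplified, hedged sketch of how a single pixel could be assigned under each rule, assuming per-class mean vectors and covariance matrices estimated from the training signatures. It is an illustration only, not the ERDAS implementation (which also supports prior probabilities and other options).

```python
import numpy as np

def classify_pixel(x, means, covs):
    """Assign one pixel vector x (shape: bands,) under three decision rules.

    means : dict class -> mean vector
    covs  : dict class -> covariance matrix
    Returns (minimum_distance_class, mahalanobis_class, maximum_likelihood_class).
    """
    # Minimum Distance: nearest class mean in Euclidean terms
    min_dist = {c: np.linalg.norm(x - m) for c, m in means.items()}

    maha, loglik = {}, {}
    for c, m in means.items():
        d = x - m
        inv = np.linalg.inv(covs[c])
        maha[c] = float(d @ inv @ d)            # squared Mahalanobis distance
        # Gaussian log-likelihood: also penalises classes with large covariance
        loglik[c] = -0.5 * (np.log(np.linalg.det(covs[c])) + maha[c])

    return (min(min_dist, key=min_dist.get),
            min(maha, key=maha.get),
            max(loglik, key=loglik.get))
```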

Figure 13: Supervised Minimum distance method (left) - Original image - false colour (right)

Figure 14: Supervised Minimum distance method (left) - Original image - True color.

Figure 15: Toronto Supervised Minimum distance map

Figure 16: Toronto Supervised Mahalanobis distance map

Figure 17: Toronto Supervised Maximum likelihood map

4. Classifications Comparison

In unsupervised classification, clusters of pixels are separated based on statistically similar
spectral response patterns rather than user-defined criteria. Each pixel in the image is compared
to the discrete clusters to determine which group it is closest to. Colours are then assigned to
each cluster, and the analyst interprets the clusters after classification based on the original
imagery.
The supervised classification method requires the analyst to specify the desired classes up front;
these are defined by creating spectral signatures for each class. In a supervised classification,
the analyst locates specific training areas in the image that represent homogeneous examples of
known land cover types. The statistics from each training site are used to classify the pixel
values of the entire scene into the most likely classes according to some decision rule, or
classifier.
The following images show the results of the two methods:

Figure 18: Supervised

Figure 19: Unsupervised 10 classes

Figure 20: Unsupervised 4 classes

The choice of method, number of classes, and colors depends on the needs of the project; if, for
example, it is important to identify linear features such as rivers, pipelines, or roads, the
supervised method works better.

5. Procedure and Answers to the Questions on Unsupervised Classification
Section 1: Assignment 4 - *.img file creation

1. Create a 1024 pixels by 1024 lines subset from the image provided in this workshop. The
image contains 6 bands. Make sure that your subset is in the Niagara area and includes the
following features (a scripted sketch of this subsetting step follows the feature list):
Water
Farmland
Urban areas
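A scripted equivalent of this subsetting step is sketched below. It assumes the scene can be read with the rasterio library; the filenames and the window offsets are placeholders only, since in the workshop the subset is created interactively in ERDAS Imagine.

```python
import rasterio
from rasterio.windows import Window

# Placeholder offsets; choose values so that the window covers the Niagara area
col_off, row_off = 2000, 1500

with rasterio.open("niagara_full_scene.img") as src:        # hypothetical filename
    window = Window(col_off, row_off, 1024, 1024)            # 1024 x 1024 pixel subset
    subset = src.read(window=window)                          # shape: (6 bands, 1024, 1024)
    profile = src.profile.copy()
    profile.update(width=1024, height=1024,
                   transform=src.window_transform(window))

with rasterio.open("niagara_subset.img", "w", **profile) as dst:
    dst.write(subset)
```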

Section 2: Assignment 4 Unsupervised classification


2. Start Imagine and load the subset file that you created.
3. Under the Raster tab, Classification, select unsupervised classification. Specify your Input
and output files.
4. ERDAS uses the ISODATA algorithm for its unsupervised classification. The following
parameters can be used:
a. Number of classes = 10
b. Max Iterations = 10
c. Convergence threshold = 0.95
d. Skip factor (X & Y) = 1
Discuss the effect of the number of iterations on the classification results (you should test
different values for the Max Iterations parameter).
The more iterations, the cleaner the result (patterns, borders, and colors are more homogeneous)
on the thematic raster layer that is created automatically; features can be identified more easily
and the colors are slightly different as well. For example, in Figure 21 the building circled on
the classification done with more iterations (left, 10 iterations) has a cleaner color than the
one with fewer iterations (right, 4 iterations). The maximum number of iterations is not reached
if the algorithm finds no further improvement when comparing pixels, that is, when the convergence
threshold is met first, which happened at 6 iterations in this case.
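The interaction between Max Iterations and the convergence threshold can be illustrated with a simplified, k-means-style loop such as the sketch below. This is an illustration under stated assumptions, not the ERDAS implementation: real ISODATA also splits and merges clusters, and here the convergence threshold is interpreted as the fraction of pixels whose label stays unchanged between two passes, so iteration stops early once that fraction reaches 0.95.

```python
import numpy as np

def isodata_like(pixels, n_classes=10, max_iterations=10, convergence=0.95, seed=0):
    """Very simplified ISODATA-style clustering (k-means core only).

    pixels : ndarray, shape (n_pixels, bands)
    Iteration stops early once at least `convergence` of the pixels keep the
    same label between two passes, so Max Iterations is only an upper bound.
    """
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), n_classes, replace=False)].astype(float)
    labels = np.full(len(pixels), -1)

    for iteration in range(1, max_iterations + 1):
        # assign every pixel to the nearest cluster mean
        # (for a full 1024 x 1024 scene this array is large; chunking would be needed in practice)
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)

        unchanged = np.mean(new_labels == labels)     # fraction of stable pixels
        labels = new_labels

        # recompute cluster means (keep the old centre if a cluster emptied)
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)

        if unchanged >= convergence:                  # e.g. 95% of pixels stable
            break
    return labels, centers, iteration
```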

Figure 21: Comparison of unsupervised classification with 4 iterations (left) vs 10 iterations (right)

5. Once the classification is completed, open the classified image in a viewer in pseudo-colour
mode to give you a quick visual.

Figure 22: Original image, false color view (left), unsupervised classified image (right)

Section 3: Assignment 4 Post Classification Exercise


6. Unsupervised classification creates classes but there is no description attached to
them. This section allows you to give each class a name as well as change the colours to
more appropriate ones. Change the colour associated with each class: in the Table of
Contents, right-click your unsupervised image and select Display Attribute Table. Save
your colour selection as a layer. You are now ready to edit the imagery.

Figure 23: Original image, false color view (left), unsupervised classified image with selected colors (right)

Figure 24: Original image, true color view (left), unsupervised classified image with changes in colors (right)

Figure 25: Original image, false color view (left), unsupervised classified image with changes in colors (right)

Discuss the difficulties with the unsupervised classification by comparing the original false
colour image with the new classified image.
The difficulty with this classification is that some features get lost or confused because they
receive the same or very similar colours as other features. For example, roads were represented by
other classes in the attribute table, so changes to the colours of those classes also affect the
roads, which end up with different colours depending on the classes surrounding them. However, the
roads can still be recognized on the map by their shape, even with this mismatch in colour
representation. Overall, the result of the unsupervised classification is very good.
7. The aggregate editing function is primarily used with unsupervised classifications. It allows
you to group classes that cover similar areas and assign proper land cover class names to
them.

To group the classes that represent the same theme you can use Recode found under the
Thematic tab. Saving the aggregate results to a new image gives you the ability to create
several different thematic representations.
Using the Recode tool and referring to the original image, try to match each of the 10
classes from the input class list to the 5 aggregate (output) class list below. To do so,
you can open two viewers (one for the original image and the other for the classified one).
Link both viewers by using View---Link/unlink viewers---Geographical. By clicking on
the cross hair a window will pop up showing the value of the selected pixel in the selected
viewer.
Residential - grey
Commercial - white
Water - blue
Vegetation - green
Farmland - dark green
NULL - black: this class contains any pixels that were not otherwise classified.
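Conceptually, the Recode step is a per-pixel lookup table that maps each of the 10 input class values to one of the 5 aggregate classes listed above (plus NULL). The sketch below illustrates the idea; the specific input-to-output pairs are placeholders, since the real pairs are found by inspecting pixels in the linked viewers.

```python
import numpy as np

# Output (aggregate) class codes
RESIDENTIAL, COMMERCIAL, WATER, VEGETATION, FARMLAND, NULL = 1, 2, 3, 4, 5, 0

# Placeholder mapping: input class value (0-10) -> aggregate class.
# The actual pairs are identified by inspecting pixels in the linked viewers.
recode_table = np.array([NULL,                              # 0: unclassified
                         WATER, WATER,                      # 1-2
                         VEGETATION, FARMLAND, FARMLAND,    # 3-5
                         RESIDENTIAL, RESIDENTIAL,          # 6-7
                         COMMERCIAL, COMMERCIAL,            # 8-9
                         VEGETATION])                       # 10

# classified : ndarray of input class values, shape (rows, cols)
# aggregated = recode_table[classified]   # vectorised per-pixel lookup
```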
When you have finished making all of your changes, view your new aggregated
classification image. If you are satisfied with the results, you can then add the channel
that contains this classified image to the 6 original channels and produce only one image
(Interpreter---Utilities---Layer stack).
What changes would you have made to the parameters to improve upon the
classification, if any, and explain how the changes would affect the classification result?
The original 10 classes were an appropriate parameter for the software to represent the classes
properly, and it was still possible to group classes as desired by manipulating the attribute
table. The unsupervised classification provided a good-quality thematic map. Parameters should be
selected depending on the needs, that is, on what is intended to be shown, but it is better to
have extra classes and then group them in order to get more accuracy from the software.
In the event that the requirement were to show more classes, for example the different types of
farmland, it would have been possible with good quality (see figure 3).
With respect to 10 iterations, it was a good number since the software stops when no further
iterations are needed, which happened at 6; this means that choosing 6 would give the same
result, but choosing fewer than that would affect the quality of the resulting layer.
