Level: Beginner
Time:
Resources:
An image you can load into Definiens for this exercise. For this unit
the Landsat 7 image orthol7_20423xs100999.img (Row: 204, Path: 23,
Date: 10/09/1999) over North Wales, available from the Landmap
service, is recommended.
By the end of this unit you should:
Be aware of the purpose of the Definiens Developer software and the
types of projects the software can be used for.
Know how to create a new project within Definiens and the options
available for creating a project.
Know the main elements of the Definiens Developer user interface.
Know how to change the viewing properties.
1.1. Background
This section will provide you with a brief outline of the purpose of the
Definiens software and an overview of the collection of software tools
Definiens have to offer. For more detail on Definiens products please visit
the Definiens website http://www.definiens.com or contact them directly.
1.1.1 Purpose
The purpose of Definiens Developer is to facilitate the development of
object-oriented, rule-based classification procedures. Therefore, rather than
simply classifying the individual pixels within the scene independently, the
image is split (segmented) into regions representing objects within the scene.
Working with objects rather than pixels has numerous benefits over
traditional pixel-based analysis; for example, the spatial relationships between
objects can be represented, or the shape of an object analysed. Definiens
Developer provides an easy-to-use (although relatively steep learning curve)
interface to represent the classification rules and a visual scripting interface
(processes) to control the segmentation and classification process.
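The object-versus-pixel idea can be illustrated with a toy example (plain NumPy, not Definiens code): after segmentation, each object carries statistics such as its mean value, and the classification rules then work on these per-object values rather than on individual pixels. The image and segment labels below are invented purely for illustration.

```python
import numpy as np

# A tiny 4x4 "image" and a segmentation of it into four 2x2 objects.
image = np.array([[10, 12, 50, 52],
                  [11, 13, 51, 53],
                  [30, 31, 70, 71],
                  [30, 32, 70, 72]], dtype=float)

segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [2, 2, 3, 3],
                     [2, 2, 3, 3]])

# Per-object mean: one value per object instead of one per pixel.
object_means = {int(lab): float(image[segments == lab].mean())
                for lab in np.unique(segments)}
print(object_means)  # {0: 11.5, 1: 51.5, 2: 30.75, 3: 70.75}
```

Rules can then compare these object-level values (and shape or neighbourhood properties) instead of classifying 16 pixels one by one.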
1.1.2 The Suite of Definiens Tools
Definiens Developer is one of a number of tools which Definiens produce,
which together form the Definiens Enterprise Image Intelligence™ suite
(Figure 1.1).
Figure 1.1. Definiens Enterprise Image Intelligence Suite, Client and Server software.
(Source: Definiens.com)
The software can be divided into three categories, End-user, Developer and
Server-side, where the use of each depends on your role within the image
processing chain.
The end-user products include Definiens Architect, Definiens Analyst and
Definiens Viewer. Definiens Viewer is the simplest of the three and intended
for a user to simply view results that have been previously processed.
Definiens Analyst allows a user to import and execute fully automatic
classification processes (previously developed) and view the results. Finally,
Definiens Architect provides a framework within which a user can run fully and
semi-automatic image analysis programs, written using Definiens processes,
and provides a mechanism for manual correction of results.
Definiens Developer encapsulates the functionality of the Viewer, Analyst and
Architect products but with the additional functionality to develop the ruleware
(classification algorithms) required to process image data.
Finally, the Definiens eCognition™ Server provides functionality to process
your images through a distributed server infrastructure where the ruleware
(developed using Definiens Developer) can be executed simultaneously
across a series of servers. Additionally, this infrastructure supports automatic
tiling and stitching of images and results, allowing very large images to be
efficiently processed.
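The tile-and-stitch idea can be sketched in a few lines outside Definiens: split the image into tiles, process each tile independently, and write the results back into a full-size output. The tile size and the per-tile "analysis" (a simple threshold) below are illustrative assumptions, not Definiens code.

```python
import numpy as np

def process_tiled(image, tile=256, thresh=100):
    """Apply a per-tile operation and stitch the results back together."""
    out = np.empty_like(image)
    h, w = image.shape
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            block = image[r:r+tile, c:c+tile]
            # Stand-in for the real per-tile analysis: a threshold.
            out[r:r+tile, c:c+tile] = block > thresh
    return out

img = np.arange(10000).reshape(100, 100)
# Tiled processing gives the same answer as processing the whole image.
assert np.array_equal(process_tiled(img, tile=32), img > 100)
```

Real tiled processing also has to handle operations whose result near a tile edge depends on neighbouring tiles (hence the stitching step), which a pure per-pixel threshold sidesteps.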
1.1.3. Help while using Definiens
Definiens provide customer (and user to user) support through their online
forums (http://forum.definiens.com/index.php) where you can post your
problems or read previous answers. Additionally there is a section where you
can download sample rulesets. Definiens also make a number of documents
available online including presentations, case studies, white papers and
scientific papers (http://www.definiens.com/resource-center_61_24_0.html).
Where the interface differs from Figure 1.2, you will need to toggle the
view buttons. The view buttons result in the interfaces shown in
Figure 1.3, but for this series of units you will only require the Developer
interface (4) as shown in Figure 1.2.
1) Workspace Interface
2) Analysis Interface
3) Results Interface
4) Developer Interface
Figure 1.3. The interface views available with Definiens Developer.
1.3.2.1
Data Viewer: The image and classification data viewer. The viewer allows
you to view the imagery you are classifying, including manipulating the band
order and image stretching.
Process Tree: The window within which you develop your ruleset script.
Class Hierarchy: The window displaying the classes you develop.
Image Object Information: This window displays selected feature values for
a selected object.
Feature View: This window displays a list of all the available features within
Definiens Developer and allows the current image objects to be coloured
(green for high values, blue for low values) according to their value for a
selected feature.
1.3.2.2 Toolbar Icons
Table 1.1 provides a glossary of the icons available on the various toolbars
within Definiens Developer.
File Toolbar
Create New Project
Open Existing Project
Save Project
New Workspace
Open Workspace
Save Workspace
Predefined Import
View Settings Toolbar
Workspace view
Analysis View
Results View
Developer View
View Image data
View Classification
View Samples (for Nearest Neighbour Classification)
Feature View
Toggle Object Means and Pixel Data
Toggle Object Outlines
Toggle Polygons
Toggle Skeletons
Toggle Image View and Project Pixel View.
Single Layer (Grey Scale)
Mix Three Layers RGB
Show Previous Layer
Show Next Layer
Select Layers to be Displayed and Image Stretch
View Navigation Toolbar
Delete Level
Select Level For Display
Down a Level
Up a Level
Tools Toolbar
Object Information
Object Table
Undo process
Redo process
Class hierarchy
Process tree
Feature View
Manage customised features
Toggle manual editing toolbar
Zoom Toolbar
Manual Editing Toolbar
Single Selection
Polygon Selection
Line Selection
Rectangle Selection
Cut Object
Merge Object Selection
Merge Selected Objects
Clear Merge Object Selection
Filter Classes for Multiple Image Object Selection
Classify Image Objects
Samples Toolbar
Select Samples
Drag and Click Brush to Assign Image Object Samples
Sample Editor
Sample Selection Information
Toggle Sample Navigation
Table 1.1. A glossary of toolbar icons.
layer aliases, as shown in Figure 1.5b, where bands 1-6 correspond with the
aliases BLUE, GREEN, RED, NIR, SWIR1 and SWIR2, respectively. To bring
up the layer properties dialog, double-click on each image band in turn, or
select the band and click the Edit button.
You can also add thematic information in the list below the image layers list,
which can be used during your classification and segmentation, for example
a polygon shapefile of buildings. The next step is to give your project a name,
in this case Example 1, and to check that the projection information for your
image has been correctly read. If this information is incorrect then you need to
check the Pixel size (unit) on the right-hand side. The Pixel size (unit)
should be set to auto, the unit to meters, and the Use geocoding option
ticked on. You also have the option of re-sampling your imagery to a resolution
of your choice using the Resolution (m/pxl) dialog box, and of selecting a subset
using the Subset Selection button, which presents a dialog similar to the one
shown in Figure 1.6. To select a subset you can either draw a red box on the
image in the dialog or provide the pixel limits of your subset. Before finalising
the project and selecting OK we will subset the image (as shown in Figure
1.6), where the minimum X is set to 4600, the maximum X to 5200, the
minimum Y to 4400 and the maximum Y to 5000.
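For readers used to working with arrays, the same subset can be expressed as a slice, with rows corresponding to Y and columns to X. The scene dimensions below are placeholders, not the real size of the Landsat file.

```python
import numpy as np

# Stand-in for the full scene: rows (Y) x columns (X).
scene = np.zeros((7000, 8000), dtype=np.uint8)

# Subset with X: 4600-5200 and Y: 4400-5000, as entered in the dialog.
subset = scene[4400:5000, 4600:5200]
print(subset.shape)  # (600, 600)
```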
Now click OK to create the project and you will move back to the Definiens
Developer interface, Figure 1.7.
Figure 1.7. The Definiens Developer interface once the project has been loaded.
If the zoom functions toolbar is not displayed you can turn it on using the
View>Toolbars menu.
1.5.2. Selecting bands for Display
To select the layer(s) to be displayed you need to use the Edit Image Layer
Mixing dialog, Figure 1.9, available via the corresponding toolbar icon.
Using the Layer Mixing drop-down menu you can select the number of layers
to be mixed in the display, and then by selecting the individual layers you may
turn them on and off (or increase their weight), Figure 1.20.
Also, you can adjust the equalisation (or stretch) of the data layers being
displayed using the Equalizing drop down menu. The available options are
Linear (1.00%), Standard Deviation (3.00), Gamma Correction (0.50),
Histogram and Manual.
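As a rough sketch of what a linear percent stretch such as "Linear (1.00%)" does, the function below clips the lowest and highest 1% of values and rescales the remainder to the 0-255 display range. The exact Definiens implementation may differ; this is only the general idea.

```python
import numpy as np

def linear_stretch(band, percent=1.0):
    """Clip the given percentile tails and rescale to 8-bit display values."""
    lo, hi = np.percentile(band, [percent, 100 - percent])
    out = np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return (out * 255).astype(np.uint8)

band = np.linspace(0, 1000, 10000)   # synthetic band values
stretched = linear_stretch(band)
print(stretched.min(), stretched.max())  # 0 255
```

A standard-deviation stretch works similarly but sets `lo`/`hi` from the mean plus or minus a multiple of the standard deviation, and a gamma correction raises the normalised values to a power.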
1.5.3. Multiple Views
Definiens Developer also allows you to split your display, therefore allowing
you to have multiple views of the same data. This functionality is available
from the Window menu (Figure 5.21.).
Here the current display can be split horizontally and/or vertically and once
split can be linked to provide views which automatically move together.
Once you have split your screen, select the window you wish to change; the
same tools as outlined above can then be used to manipulate the display
properties in each of the different views.
1.6. Conclusion
In summary, you should now be able to open Definiens Developer, create a
project and manipulate the display to view the data as you wish. The following units
will take you through the segmentation and classification of the
imagery you have loaded into your project and some more advanced features
of the Definiens software.
1.7. Exercises
1) Experiment with the layer properties, such that you can view each image
band individually and then a number of 3 and 6 band mixings. Observe how
the different land cover types visually change as you change the band
mixings.
2) Using the layer combination of your choice (R: NIR, G: SWIR1, B: RED is
recommended), experiment with the image equalisations available. Again,
observe how the various land cover types respond to these changes.
3) Produce a four way split of the display (i.e., a vertical and horizontal split)
and set each region to different viewing properties. Finally, link all four
together (side by side).
This unit should not take you more than 1.5 hours
Resources:
An image you can load into Definiens for this exercise. For this unit
the multispectral Landsat 7 image orthol7_20423xs100999.img and its
corresponding panchromatic scene o20423_pan.tif (Row: 204, Path:
23, Date: 10/09/1999) over North Wales, available from the
Landmap Service, is recommended.
Processes:
RulesetTemplate.dcp
Chessboard_Segmentation.dcp
Quadtree_Segmentation.dcp
Multiresolution_Segmentation.dcp
SpectralDifference_Segmentation.dcp
ContrastSplit_Segmentation.dcp
ContrastFilter_Segmentation.dcp
By the end of this unit you should:
Be able to apply each of the segmentation techniques available with
Definiens Developer to an image.
Be aware of the difference between the various segmentation
algorithms and the types of objects (size and shape) they each
produce.
2.1. Introduction
Segmentation is always the first step of any process within Definiens
Developer, as it generates the image objects on which the classification
process will be performed. The important part is for the segmentation process
to identify objects which are representative of the features you wish to classify
and which are distinct in terms of the features available within Definiens (e.g.,
spectral values, shape, texture).
Please note the order in which the image files have been loaded, i.e., the
panchromatic band first, as this will determine the image resolution for the
project. In this case the 25 m multispectral Landsat 7 data will be resampled to
the 15 m of the panchromatic data.
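Nearest-neighbour resampling onto a finer grid can be sketched as simple index scaling. The grid sizes below mimic the 25 m to 15 m case (a 6x6 coarse block covers the same ground as a 10x10 fine block); real software would of course use the image georeferencing rather than raw indices.

```python
import numpy as np

def resample_nn(band, out_shape):
    """Nearest-neighbour resample a 2D band to out_shape by index scaling."""
    rows = np.arange(out_shape[0]) * band.shape[0] // out_shape[0]
    cols = np.arange(out_shape[1]) * band.shape[1] // out_shape[1]
    return band[np.ix_(rows, cols)]

coarse = np.arange(36).reshape(6, 6)   # 6x6 pixels at 25 m = 150 m extent
fine = resample_nn(coarse, (10, 10))   # same 150 m extent at 15 m
print(fine.shape)  # (10, 10)
```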
Once you have matched your project window to that shown in Figure 2.1,
select OK and create your project.
Figure 2.3. The Definiens Developer interface with the project and display parameters defined
To insert a process right-click within the process tree window and the
following menu will appear, Figure 2.5. Select Append New and the Edit
Process dialog will appear, Figure 2.6.
Finally, you can save and load your process independently of your project
(although your process is also saved within the project). This is done by
right-clicking within the process tree window and selecting Save Rule Set.
Alongside the contents of your process tree, this will also save any classes or
customised features you have created which are associated with your
process.
Thematic Layer usage
Scale Parameter
Shape - Colour
Compactness
To remove your segmentation and try new parameters, you need to delete the
level before re-executing your segmentation process; this is done using the
Delete Level icon.
pixel values above the threshold) and dark objects (consisting of pixel values
below the threshold). The algorithm aims to optimize this separation by
considering different pixel values, within the range provided by the user
parameters, with values selected based on the inputted step size and
stepping parameter. Table 2.2 provides a list of the parameters for the
algorithm.
Parameter - Description
Chessboard Tile Size - If no level is already present then a chessboard
segmentation is undertaken to generate a set of large objects which are
iterated through during the segmentation process.
Minimum Threshold -
Maximum Threshold -
Step Size - The size of the steps the algorithm will use to move from the
minimum threshold to the maximum threshold. Large values make the
algorithm quicker to calculate, but smaller values tend to produce better
results.
Stepping Type - Either: Add (calculate each step by adding the value in the
scan step field) or Multiply (calculate each step by multiplying by the value
in the scan step field).
Image Layer -
Contrast Mode -
Execute Splitting -
Best Threshold -
Best Contrast -
Minimum Rel. Area Dark -
Minimum Rel. Area Bright -
Minimum Contrast -
Minimum Object Size - The minimum object size for the segmentation to take
place.
Table 2.2. The parameters associated with the contrast split segmentation.
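The threshold search described above can be sketched as a simple loop: candidate thresholds run from the minimum to the maximum using either "add" or "multiply" stepping, and the threshold giving the largest bright/dark contrast is kept. The contrast measure used here (difference of class means) and the pixel values are illustrative assumptions, not the exact Definiens definition.

```python
import numpy as np

def best_split(pixels, t_min, t_max, step, mode="add"):
    """Search thresholds between t_min and t_max for the best bright/dark split."""
    best_t, best_contrast = None, -np.inf
    t = t_min
    while t <= t_max:
        bright, dark = pixels[pixels >= t], pixels[pixels < t]
        if bright.size and dark.size:
            contrast = bright.mean() - dark.mean()  # toy contrast measure
            if contrast > best_contrast:
                best_t, best_contrast = t, contrast
        # "Add" stepping adds the step size; "Multiply" multiplies by it.
        t = t + step if mode == "add" else t * step
    return best_t

# A clearly bimodal set of pixel values: a dark mode at 20 and a bright mode at 200.
pixels = np.concatenate([np.full(50, 20.0), np.full(50, 200.0)])
print(best_split(pixels, 10, 250, 10))  # 30, the first threshold separating the modes
```

The step-size trade-off from the table is visible here: a coarse step tests fewer candidate thresholds (faster), a fine step tests more (usually a better split).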
To execute this algorithm you will need to create two classes, one for the
bright objects and one for the dark objects. To do this within Definiens
Developer, right-click within the class hierarchy window and select New Class;
you do not need to enter any parameters at this point, so just select OK.
2.8.1. Simple Exercise
As with the other segmentation algorithms, try to achieve the best
segmentation of the landscape you can using this algorithm. Remember to
investigate all the parameters to observe their effect on the final
segmentation.
Parameter - Description
- The chessboard segmentation parameters for producing the final
segmentation from the filter results.
Layer -
Scale 1-4 -
Gradient -
Lower Threshold -
Upper Threshold -
Table 2.3. The main parameters for the contrast filter segmentation.
Parameter - Description
- Larger values reduce the inclusion of irregularly shaped objects.
Working on Class -
Table 2.4. The shape parameters for the contrast filter segmentation.
Parameter - Description
Enable Class Assignment - If set as no, the remaining parameters are not
used.
No Objects Ignored by Threshold -
Object in First Layer -
Object in Second Layer -
Object in Both Layers -
Table 2.5. The classification parameters for the contrast filter segmentation.
2.10.1. Simple Exercise
2.11. Conclusion
Following the completion of this unit you should have knowledge of all the
segmentation processes available within Definiens Developer and have
implemented each of the algorithms on the image provided.
2.12. Exercises
1) Decide on the most appropriate segmentation algorithm for segmenting this
scene. As you are doing this think of what elements you think provide a good
segmentation and how the different characteristics of the various algorithms
could be used to achieve the segmentation you require.
This unit should not take you more than 1.5 hours
Resources:
NN_Classification_Process.dcp
By the end of this unit you should:
Be able to complete all the steps required in the process tree to
complete a classification (using the nearest neighbour classifier) within
Definiens Developer.
Be aware of the parameters and features to aid the nearest neighbour
classification.
Be aware of the classification, merge and export processes.
3.1. Introduction
Within this worksheet you will create a nearest neighbour classification of a
segmented Landsat 7 image of the area around Llyn Brenig, Denbigh Moors,
North Wales (Figure 3.1). This area contains extensive tracts of upland heath
and bog as well as coniferous forest plantations and grasslands at various
levels of improvement.
Please note the order in which the image files have been loaded, i.e., the
panchromatic band first, as this will determine the image resolution for the
project. In this case the 25 m multispectral Landsat 7 data will be resampled
to the 15 m of the panchromatic data.
Once you have matched your project window to that shown in Figure 3.2,
select OK and create your project.
For classification, the first task is to create the classes you require and (in this
case) to insert the Nearest Neighbour Feature into each class. To create a
class, you require the Class Hierarchy window (shown in Figure 3.3) to be
open. If the window is not already visible, then click on the
icon.
To insert a class, right click in the Class Hierarchy window and select Insert
Class (Figure 3.4).
The next step is to edit your class description by first giving your class a
name. For example, give the class the name Water and assign a blue
colour. When you have done this, insert and name new classes of Forest,
Other vegetation and Not Vegetation. You should then have four classes
inserted and named:
Water
Forest
Other Vegetation
Not Vegetation
After giving each class a name, select an appropriate colour for each. This
can be anything you wish, although the final classification will be easier to
understand and interpret if you chose a logical colour (e.g., Green for Forest).
Next, the features (e.g., mean object spectral response) to be used for
classification (in this case, the standard nearest neighbour algorithm) need to
be inserted into the class. To do this, right-click on the and (min) expression
and select Insert new Expression (Figure 3.6).
This will present the window (Figure 3.7), where you need to select Standard
Nearest Neighbour and click Insert.
Your resulting class description should be similar to that shown in Figure 3.8
for the forest class.
Figure 3.8. The resulting class description to be used for the classification.
The same procedure now needs to be repeated for the remaining three
classes so that you end up with a classification hierarchy similar to that shown
in Figure 3.9.
To select the features used for the nearest neighbour classification, use the
Edit Standard NN Feature Space function (Figure 3.10a); initially you
should just use the mean spectral values of the objects (Figure 3.10b).
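The principle of the classifier can be sketched in a few lines: each object is represented by its mean band values, and an unlabelled object takes the class of the closest sample in that feature space. The sample values below are invented for illustration; they are not real Llyn Brenig statistics.

```python
import numpy as np

# One representative sample per class: mean values over three bands.
samples = {
    "Water":  np.array([30.0, 20.0, 10.0]),
    "Forest": np.array([40.0, 80.0, 60.0]),
}

def classify(obj_means):
    """Assign the class of the nearest sample in feature space."""
    return min(samples, key=lambda c: np.linalg.norm(obj_means - samples[c]))

print(classify(np.array([35.0, 25.0, 12.0])))  # Water
```

In practice each class has many samples, and Definiens converts the distance to the nearest sample into a fuzzy membership value, but the geometric idea is the same.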
Figure 3.12. Parameters used for the segmentation of the Landsat image.
Note that the layer weighting for the panchromatic band (PAN) has been
increased to 2. This is to take advantage of the extra spatial resolution of the
panchromatic band, 15 m rather than the 25 m of the multispectral bands.
3.4.2. Classification
To run the classification, you need to add a classification process to your
process tree. This is achieved by right-clicking on the process you named
Classification and selecting Insert Child Process. Edit the new process such
that it is similar in appearance to that shown in Figure 3.13. To select multiple
classes, use the Shift and Control keys as you would in Windows Explorer.
After inputting the parameters into the process, click on the OK button at the
bottom. Note that you need to select samples before performing your
classification.
Your process tree should now be similar to that shown in Figure 3.14.
Figure 3.14: The process tree after the inclusion of the classification process.
To save time, once you have created your first merge process you can
copy-and-paste it (Ctrl-C, Ctrl-V, or right-click on the process) to duplicate it
and then edit the class you wish to merge.
To select the classes to export you again edit the Image Object Domain;
remember, these parameters define the image objects the process will be
applied to. The name of the output shapefile has been defined as
Classification, while the features to be exported are the area (of the image
object) and the class name. Area is found under Object Features > Shape >
Generic, while class name is found under Class-Related Features > Relations
to Classification > Class name. You will need to create the class name
feature: right-click on Create new Class name and select Create, leave
the parameters at their default values and select OK. The shapefile will be
output to the directory within which your project is saved; if you have not yet
saved your project, the shapefile will be output to the directory
containing the input imagery.
Your final process tree should then be the same as the one shown below in
Figure 3.18.
The next stage is to select the samples for each of the four classes; you need
to have executed the segmentation process before undertaking these steps. If
you are unsure of the distribution of ground cover types, please refer to the
shapefile LlynBrenig_BasicLandcover.shp. To create a sample, you need to
first activate the tool for sample selection (Select Samples) as shown in Figure
3.19.
Once you have activated sample selection, highlight the class you wish to
create a sample for in the class hierarchy window. Either double-click on the
objects you wish to select as samples, or hold down the Shift key and use a
single click. To unselect a sample, repeat the selection process for each
chosen object.
To aid the selection of your samples, Definiens Developer offers two windows
(both available from the menu in Figure 3.19) of information based on the
selected samples. Firstly the Sample Editor window (Figure 3.20) and
secondly the Sample Selection Information window (Figure 3.21).
The Sample Editor provides a visual comparison of two classes using a range
of selected features. In Figure 3.20 the Forest and Water classes are
compared using the object means from each spectral band of the Landsat
data. When an object is selected, a red arrow is displayed to illustrate where
the object mean fits in relation to the mean of the other samples. To change
the displayed features, right-click within the main window and select Features
to Display or, if you only want the features being used within the NN
calculation, select Display Standard Nearest Neighbour Features.
a) Before merging
b) After Merging
Figure 3.19. Before and after merging classes.
Figure 3.20. A map produced using ESRI ArcMap from the result of the Definiens
classification
Once you have selected your samples, open the feature space optimisation
tool (Figures 3.21 and 3.22).
To use this tool, select the features you wish to compare. Initially try the
mean, standard deviation and the pixel ratio, but later try other combinations.
Then select Calculate; once the calculation has finished, select Advanced
to see which features offered the best separation, and Apply to the Std. NN
to use them within the classification.
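The idea behind the optimisation can be sketched as a search over feature subsets, scoring each subset by the separation it gives between the classes. The separation measure below (smallest between-class sample distance, normalised per dimension) and the toy samples are illustrative assumptions, not the exact Definiens calculation.

```python
import numpy as np

# Two samples per class, each with two features. Feature 0 separates the
# classes well; feature 1 overlaps and carries little information.
samples = {"A": np.array([[0.0, 5.0], [1.0, 6.0]]),
           "B": np.array([[10.0, 5.5], [11.0, 6.5]])}

def score(feature_idx):
    """Smallest between-class distance in the chosen feature subspace,
    normalised by the number of dimensions used."""
    dists = [np.linalg.norm(a[list(feature_idx)] - b[list(feature_idx)])
             for a in samples["A"] for b in samples["B"]]
    return min(dists) / np.sqrt(len(feature_idx))

subsets = [(0,), (1,), (0, 1)]
best = max(subsets, key=score)
print(best)  # (0,) -- the discriminating feature wins on its own
```

This is why the tool reports a best separation distance per feature-set dimension: adding uninformative features dilutes the separation rather than improving it.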
You can now run your classification step.
3.8. Conclusions
Following the completion of this unit you should now understand the basics of
the classification process within Definiens Developer; future examples
will simply build more complex classification and segmentation routines.
3.9. Exercises
1) Experiment with different segmentation parameters, both within the
multi-resolution segmentation and the other segmentation algorithms. Be aware
that you will have to select new samples each time you delete the level.
2) Experiment with different sets of features within the standard NN feature
space. (Classification > Nearest Neighbor > Edit Standard NN feature
space)
3) Experiment with different sets of features and maximum dimension levels
within the feature optimisation tool. (Classification > Nearest Neighbor >
Feature Space Optimisation)
References
Leckie, D.G., Gougeon, F.A., Tinis, S., Nelson, T., Burnett, C.N., & Paradine,
D. (2005). Automated tree recognition in old growth conifer stands with high
resolution digital imagery. Remote Sensing of Environment, 94, 311-326.
Rulebased_Classification_Process.dcp
By the end of this unit you should:
Know how to create a rule-based classification within Definiens
Developer.
Know about the difference between absolute and fuzzy thresholds.
Know how to create a customized feature within Definiens Developer to
represent a band ratio or relationship (e.g., NDVI).
4.1. Introduction
Following on from the previous unit, you will now implement a more
detailed rule-based classification by using thresholds manually defined within
the class hierarchy rather than a nearest neighbour classification.
This unit uses the same Landsat 7 subset of Llyn Brenig in the Denbigh
Moors, North Wales, although it now aims to identify more classes to increase
the detail of the habitat classification. The aim of the unit is to provide you
with experience in entering thresholds for a rule-based classification and
creating the corresponding processes. You are not expected to identify any
thresholds, as these will be given; the next unit will cover the techniques
commonly used to identify them.
When selected, you will be presented with the window shown in Figure 4.2.
Here, you set the threshold and the operator (e.g., <, ≤, =, ≥ or >).
Within the class description you can add as many of these thresholds as you
require. You can also include 'and' and 'or' statements, as shown in Figure
4.3. By default, all the features you introduce are considered within an 'and'
statement and therefore all thresholds have to be met for the object to be
classified. If, on the other hand, the statement is an 'or' statement, only one of
the thresholds needs to be met for the object to be classified. By combining
these statements (as shown in Figure 4.3), more complex class descriptions
can be developed.
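The combination rules can be sketched in a few lines. With crisp thresholds each test yields 0 or 1; and(min) takes the minimum of the memberships (all must hold) and or(max) the maximum (any may hold). The object values below are invented for illustration.

```python
# Sketch of how and(min) / or(max) combine expression memberships.
# With crisp thresholds each expression is 0.0 or 1.0; with fuzzy
# membership functions the same min/max logic applies to values in [0, 1].
def and_min(*memberships):
    return min(memberships)

def or_max(*memberships):
    return max(memberships)

obj = {"mean_NIR": 95, "mean_NDVI": 0.45}  # invented object means
t1 = 1.0 if obj["mean_NIR"] < 100 else 0.0
t2 = 1.0 if obj["mean_NDVI"] > 0.3 else 0.0
t3 = 1.0 if obj["mean_NDVI"] < 0.6 else 0.0

print(and_min(t1, t2, t3))  # 1.0: all thresholds met
print(or_max(0.0, t1))      # 1.0: only one condition needs to hold
```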
To edit the and(min) to or(max), right-click on the and(min) (Figure 4.4) and
select Edit Expression.
Within the resulting window (Figure 4.5), select or(max) and click OK. To add
and(min) operators beneath the or(max) (as in Figure 3), right-click on
or(min) as before and select Insert new Expression. From the list of features
(see Figure 1), you will find the same operators (at the bottom) shown in
Figure 5. By selecting and(min) and then adding other features/thresholds
under this operator, you can create structures similar to those in Figure 4.3.
In this example, the object is assigned to the class forest, but the fuzziness of
the other classes (water and urban) will also be recorded within Definiens
Developer to give a fuller picture of the contents of the object. Since the
introduction of processes into Definiens Developer, careful consideration
needs to be given to the use of fuzzy logic thresholds. Therefore, for most of
these units, only absolute thresholds are used.
To create a fuzzy (membership function) threshold, follow the same process
as outlined above but rather than selecting Insert Threshold in Figure 4.1
select Insert Membership Function. You will then be presented with a new
window (Figure 4.6) where you can select a membership function (not all of
these are fuzzy) and the corresponding thresholds.
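A membership function replaces the crisp 0/1 switch with a gradual transition. The sketch below assumes a simple linear ramp between a lower and upper bound; Definiens offers a range of curve shapes, so this is only one illustrative form.

```python
# A minimal fuzzy membership sketch: a linear ramp between two bounds,
# so membership rises gradually instead of switching at a crisp threshold.
# The ramp shape and bound values are assumptions for illustration.
def ramp_up(x, lo, hi):
    """Membership 0 below lo, 1 above hi, linear in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

# e.g. membership in a hypothetical "vegetated" class as NDVI increases
print(ramp_up(0.5, 0.25, 0.75))  # 0.5: halfway up the ramp
```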
Once you have matched your project window to those shown in Figure 4.7,
select OK and create your project.
NDVI = (NIR − RED) / (NIR + RED)
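The customised NDVI feature, NDVI = (NIR − RED) / (NIR + RED), can be sketched directly. The band values below are illustrative, not taken from the exercise data.

```python
# The NDVI customised feature sketched in Python; in Definiens this is
# entered as an arithmetic customised feature over the band mean features.
def ndvi(nir, red):
    """Normalised Difference Vegetation Index from NIR and red values."""
    return (nir - red) / (nir + red)

print(ndvi(120.0, 40.0))  # 0.5: healthy vegetation gives a high NDVI
```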
Tables 4.1 to 4.6 give the thresholds for each class. Note that when an upper
and a lower boundary are required, a membership function (see the explanation
of fuzzy logic) can be used (see Figure 4.10).
Acid Semi Improved Grassland
  (leave empty - no rules)
Table 4.1. Rules for the class Acid Semi Improved Grassland.
Bog/Heath
  Mean GREEN > 30
  Mean GREEN < 42
Table 4.2. Rules for the class Bog/Heath.
Forest
  Mean NIR < 100
  Mean SWIR1 < 40
  Mean NDVI > 0.3
  Mean NDVI < 0.6
Table 4.3. Rules for the class Forest.
Improved Grassland
  Mean NIR > 100
  Mean NDVI >= 0.5
Table 4.4. Rules for the class Improved Grassland.
Not Vegetation
  Mean NDVI <= 0.275
Table 4.5. Rules for the class Not Vegetation.
Water
  Mean NDVI <= 0.05
Table 4.6. Rules for the class Water.
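The rule tables above can be collapsed into a single first-match classifier sketch. The thresholds are those given in the tables; the evaluation order, the dict layout and the fall-through to the no-rule grassland class are assumptions made for illustration.

```python
# Sketch of the rule tables as a first-match classifier. Thresholds come
# from Tables 4.1-4.6; ordering and data layout are illustrative.
def classify(obj):
    if obj["NDVI"] <= 0.05:
        return "Water"
    if obj["NDVI"] <= 0.275:
        return "Not Vegetation"
    if obj["NIR"] < 100 and obj["SWIR1"] < 40 and 0.3 < obj["NDVI"] < 0.6:
        return "Forest"
    if obj["NIR"] > 100 and obj["NDVI"] >= 0.5:
        return "Improved Grassland"
    if 30 < obj["GREEN"] < 42:
        return "Bog/Heath"
    return "Acid Semi Improved Grassland"   # the no-rule fallback class

# Invented object means for a dark, low-NDVI lake object
print(classify({"NDVI": 0.02, "NIR": 20, "SWIR1": 10, "GREEN": 25}))  # Water
```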
Figure 4.10. Setting a membership function with an upper and lower bound.
Figure 4.12. Parameters used for the segmentation of the Landsat image.
4.6.2. Classification
The classification process is similar to the previous unit, but here each
class is classified with a separate classification process and the
classification is only performed on the objects which remain unclassified.
Therefore, you need to update your process tree to appear like the
one in Figure 4.13. Please make sure you use the same order as shown, as
the order is important for the classification to work correctly.
Figure 4.14 shows the parameters for the classes Water and Not Vegetation.
Make sure that you match these parameters, paying attention to the Image
Object Domain for the Not Vegetation classification process, which restricts the
classification to only those objects which are currently unclassified.
By classifying the scene in this way, the aim is to first remove those elements
which can be easily identified and classified, in this case water, before
classifying the next class.
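This "classify, then only touch what is still unclassified" pattern can be sketched as a loop over per-class rules. The rules and object values below are invented; only the restriction of each step to the unclassified domain mirrors the process tree.

```python
# Sketch of sequential classification: each step's rule is applied only
# to objects that are still unclassified (the Image Object Domain filter).
# Rules and object values are invented for illustration.
objects = [{"NDVI": 0.02}, {"NDVI": 0.2}, {"NDVI": 0.6}]
labels = [None] * len(objects)

steps = [
    ("Water", lambda o: o["NDVI"] <= 0.05),
    ("Not Vegetation", lambda o: o["NDVI"] <= 0.275),
]

for name, rule in steps:
    for i, o in enumerate(objects):
        if labels[i] is None and rule(o):   # domain: unclassified only
            labels[i] = name

print(labels)  # ['Water', 'Not Vegetation', None]
```

Because the Water step runs first, the very dark object is never re-tested by the later, looser Not Vegetation rule.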
4.6.3. Merge and Export Image Objects
The merging and exportation operation is, again, the same as the one used in unit
3 but with the inclusion of the extra classes. Therefore, your final
process tree should be like the one shown in Figure 4.15.
4.8. Conclusion
Following the completion of this unit you should now be aware of how to
define an object oriented rule based classification within Definiens
Developer. Using a rule based classification allows you to encode expert
knowledge; for example, Lucas et al. (2007) developed an object oriented rule
based classification for upland habitats within Wales using Definiens
Developer to encode the expert knowledge of ecologists. One of the difficulties
with rule based classification is defining the rules used within the
classification; the next unit will go through the techniques available within
Definiens to aid the development of these rules.
4.9. Exercises
1) Experiment with different segmentation algorithms and parameters. You
should not have to edit the thresholds you have already entered to reclassify
the resulting segments, but you may notice varying levels of accuracy between
different segmentations.
2) The classification produced during this unit is superficially OK, but when
viewed in more detail it contains numerous errors. Try to improve the quality
of the classification through the refinement of the existing rules.
3) In addition to the rules used within the classification, there may be other
features available within Definiens Developer which could aid the
classification. Review the features available and try to include extra features
(or remove currently used features) to try and improve the result. Please refer
to the reference guide for details of other features.
References
Lucas, R.M., Rowlands, A., Brown, A., Keyworth, S., & Bunting, P. (2007).
Rule-based classification of multi-temporal satellite imagery for habitat and
agricultural land cover mapping. ISPRS Journal of Photogrammetry and
Remote Sensing, 62(3), 165-185.
5.1 Introduction
Through this unit, you will go over a number of techniques to aid the
identification of thresholds. To illustrate the techniques more easily and
simply, an artificial image (Figure 5.1) has been created and will be used
throughout this unit. Afterwards, you can try these techniques on actual
data acquired by remote sensing instruments.
Figure 5.1. Artificial image created to illustrate the different techniques of threshold
identification.
You also need to create new processes to perform the classification once you
have created the rules within your class hierarchy. You can do this in two
ways:
1) Create an individual classification process for each class, as in the
previous unit, or
2) Create a single process and edit while developing the rules and finally
select all classes and classify them in one process once the rules have been
developed (Figure 5.4).
5.3
The next consideration is how to develop the rules required for classification
of the image. This seems difficult to start with but will become a lot easier with
perseverance. To help identify thresholds, a series of functions/options are
available which are:
But also consider the extent to which you know your imagery, in terms of:
The range of values.
What you are seeing. (e.g., What is vegetation type X likely to be
doing at the time of image capture?)
The nature of the objects you are trying to extract (e.g., in the form of
a model such as a hill and valley model for tree crown delineation).
Interpreting the colour you can see within the image. For example, if
the object is yellow in the image, which bands need to be used for
classification?
But above all, it comes down to experience! So take your time going through
the following exercises and consider how the features and options outlined
above help. Experiment with each of these, decide which ones you are
most comfortable with, and use those. Note that you will quite often produce a
different result using these different methods, but there is no single right
answer; the most important consideration is that your classification works and
is appropriate to your application.
5.3.1 The Feature View
The feature view window (Figure 5.5) can be used to colour the objects
within the scene (using a colour bar) based on a single feature. The upper
(green) and lower (blue) bounds can be edited manually; moving them until
only the area of interest remains coloured allows suitable upper and lower
bounds to be identified. These values can then be inserted as a rule into the
appropriate class.
Write down the brightness thresholds in the table below. Note that not all
classes may be identified using the brightness feature.

Object              Thresholds
Black object
Blue object
Green object
Orange object
Red object
Yellow object
White background

The brightness feature is the mean of the object's mean values over the n_L
image layers:

b = (1/n_L) * Σ_{i=1}^{n_L} C̄_i

where C̄_i is the object's mean value in layer i.
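The brightness feature, the mean of an object's mean values across the image layers, is simple enough to sketch directly. The band means below are illustrative.

```python
# Brightness b = (1/n_L) * sum of the object's mean value in each layer.
# Layer means below are invented example values for one object.
def brightness(layer_means):
    """Mean of the per-layer mean values of an object."""
    return sum(layer_means) / len(layer_means)

print(brightness([200.0, 40.0, 60.0]))  # 100.0
```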
From Figure 5.7, you can see that the black and yellow classes have a good
separation using the features brightness, mean red and mean green but a
reduced separation in the mean blue feature. Therefore, you can start to get
a feel for where suitable thresholds may exist. By continuing the process
through comparing the black class to all others, you should be able to identify
feature rules or combinations of these that separate the classes of interest.
Again, identify features with their thresholds to separate the given classes and
list below. These may differ from those you might have listed using the
Feature View. After you have defined these, add the new thresholds to the
hierarchy and classify your image.
Object              Thresholds
Black object
Blue object
Green object
Orange object
Red object
Yellow object
White background
The first step is to select the classes you wish to consider. In Figure 5.8, all of
the available classes have been selected, but you can select a subset of
classes if you want to focus on these. Second, select the features you wish
to consider for the separation, and select the level (if appropriate) you wish to
work on (levels are discussed in the next unit, so for the moment you don't
need to worry about this). Finally, you need to select the maximum number of
dimensions (features) to consider.
Figure 5.9. Advanced results window for the Feature Space Optimization.
In Figure 5.9, the most significant information is within the textbox where, as
you saw in the previous window (Figure 5.8) and in the displayed graph, 5
dimensions produce the best separation of the classes. By scrolling down to
the Dimension 5 information you can discover the features which produced
the separation. You could now, by using the Apply to Std NN button, add
these classes to the standard nearest neighbour and use the nearest
neighbour classifier but here we are identifying thresholds so we will not do
this.
Now that you have identified the features which give the best separation for all
classes, experiment to identify those features which are most suited to the
separation of individual classes or groups of classes. Afterwards, use that
knowledge and the two techniques above to refine the thresholds required for
the classification.
objects, this class can be identified. This class might otherwise be very
difficult and complex to identify because of the variation in the data values
associated with the broad range of vegetation types that is likely to exist within
this class.
5.3.6. Knowing your imagery
One of the most important aspects of classification is to know what you are
viewing and equally what you are not viewing within the imagery. For
example, in the Landsat 7 imagery for North Wales you have used for the
previous units, the date of the imagery is important as the vegetation
behaves differently at different times of the year and will therefore need a
different set of rules at different times. Equally, with temporal data from
different seasons these variations can be exploited for identifying and
classifying the land cover.
Also, in knowing your imagery and the objects you wish to classify you may be
able to think of them in the form of a model. For example, when trying to
identify tree crowns, it is useful to visualise the image as conforming to a hill
and valley model, where the crowns form the hills. This can be used to
identify seeds at the crown tops (brightest parts of the image on the hill tops)
which can be expanded to identify the crown edges (in the valleys).
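The hill-and-valley idea can be sketched in one dimension: crown tops are local brightness maxima ("hill tops") used as seeds, while crown edges sit in the "valleys" between them. The profile values below are invented brightness samples along an image row.

```python
# 1-D sketch of the hill-and-valley crown model: local maxima of a
# brightness profile act as crown-top seeds. Profile values are invented.
profile = [10, 30, 80, 60, 20, 55, 90, 70, 15]

seeds = [i for i in range(1, len(profile) - 1)
         if profile[i] > profile[i - 1] and profile[i] > profile[i + 1]]

print(seeds)  # [2, 6]: the two crown-top positions
```

In two dimensions the same comparison runs over a pixel neighbourhood, and the seeds are then grown downslope towards the valleys to delineate each crown.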
5.3.7. Interpreting the colour within the image
The image you are seeing on the screen is displayed (for the most part) as a
Red, Green and Blue (RGB) composite; therefore, if an object looks red on
the screen, you know it must have a large contribution from the channel you
are displaying as red, and you can use that channel in the classification.
Figure 5.10 shows the RGB colour space; by considering the colours you
observe in the image alongside this figure, you can start to establish which
channels are contributing to the appearance of the image in a particular
colour combination. Note that when using this approach you should also
consider the stretch you are applying to the image to enhance the display,
as this can change the colours you see and the contrast between features.
5.3.8. Experience
Finally, and perhaps the most important thing to recognise, is that it takes
experience to become good at identifying thresholds and developing the
processes and methods which fit around those thresholds and which form
your classification. The more imagery you gain experience with, the better
you will become at classifying and youll be able to apply your knowledge from
one set of imagery to the next.
Another aspect of classification that should be considered is the ability of your
developed rule bases and processes to be applied to imagery other than
that on which they were developed. Ideally, this should be possible with no or
minimal adjustments.
5.4. Conclusions
Following the completion of this unit you should now be aware of the tools
and concepts through which you can identify the thresholds you will require to
classify a scene using a rule base.
5.5. Exercises
This unit should not take you more than 1.5 hours.
Resources:
LevelsExampleProcess.dcp
By the end of this unit you should:
Be aware of the concept of levels within Definiens Developer and how
they can be used to increase the concepts represented through the
classification.
Know how to use the enclosed by class process.
6.1. Introduction
Within this unit, you will learn how to use Levels within Definiens Developer
and some of the features which allow interaction between levels. These
features increase the knowledge available within the system as different
scales of information are used.
To illustrate the use of Definiens Developer levels you will use an artificial
image that has been created for this unit (Figure 6.1). Within this image,
the green objects represent trees (herein referred to as Level 1) and a second
level (herein referred to as Level 2) will be created to represent the forest
extent. To identify the forest extent, the use of more complex processes will
be required to fill in the gaps between the crowns to create the forest mask.
Figure 6.9. The classification process to identify objects within a border to a crown.
The feature 'Rel. border to Crowns' is used rather than 'Border to Crowns',
as it is normalised, and thus independent of the object size and border length,
creating a more stable threshold.
6.2.7.
The final part of the classification is to fuse and tidy the result. To do this, you
need to reproduce the processes shown in Figure 6.10.
Figure 6.10. Processes to tidy and merge to give the final result.
You should already be familiar with the fusion process but take note of which
processes require execution on Level 1 and Level 2. To switch the Level,
remember to use the Parameter button next to the drop down box.
The new process here fills in the gaps within the areas of forest so when
executing, it is worth stepping through the processing and executing one step
at a time to observe the workings of each of the processes. The parameters
required for the process which fills the gaps are given in Figure 6.11.
Figure 6.11. Parameters for the process which fills any gaps within the forested areas.
6.3. Results
Once the process has been executed, you should see a result at each level,
as shown in Figure 6.12.
6.4. Conclusions
Following the completion of this unit you should be aware of the concept of
levels within Definiens Developer and how to implement them and represent
the relationships between the levels. You have also come into contact with
another process, in this case the fill enclosed by class process.
6.5. Exercises
1) Experiment with different segmentation strategies when creating a new
level. Figure 6.12 demonstrates a multi-resolution segmentation process
which will create a new Level above the existing one.
Figure 6.12. A segmentation process which creates the segmented layer as a new level
above the existing one.
2) Examine and experiment with the other features which allow interaction
between objects within and between levels (e.g., Relations to sub objects and
Relations to super objects). Note that super objects are those on the level
above while sub objects are those on the level below.
3) Explain below why the class background on Level 1 cannot be fully fused to
create one large object.
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
________
LandcoverClassificationExample_provided.dcp
LandcoverClassificationExample.dcp
By the end of this unit you should:
Be able to put together a real world land use classification.
Be able to demonstrate that you can calculate thresholds for
classification for real world images.
Be aware of the process to grow a class from an identified core.
7.1. Introduction
The aim of this unit is to allow you to pull together the skills you have
developed within Definiens Developer to produce a single, more complex
example. The process outline will be provided, with a segmentation and initial
classification of elements such as water to illustrate some more advanced
features, but you will be required to identify and enter the thresholds for the
classification of the scene.
Please note the inclusion of the two thematic layers. The first defines the area
of the image where cloud is present and the second defines the upland and
lowland areas of the scene and will be used for segmentation.
Once executed, you should observe that the segmentation process has
identified the areas defined within the shapefile defining the cloud cover area.
The following step is to classify these as such and ignore them for the
remainder of the classification process. The classification is performed with
reference to the thematic layer (Figure 7.4) and results in the following
process tree, Figure 7.5.
Once the cloud has been removed from the scene, the following segmentation
process (Figure 7.6a) is added to the process tree (Figure 7.6b). Please
note the use of the second shapefile to separate the lowland and upland
regions of the scene. Also note that the segmentation is being performed at
Level 1 and the Level Usage parameter is set to Use current.
Once segmented, the upland and lowland classes need to be defined using
the thematic layer (Figure 7.7).
Figure 7.7. The classification process and description to define the regions of upland and
lowland.
The final part of the segmentation is to segment within the upland and lowland
regions to produce the segments for classification. The process and
parameters are shown in Figure 7.8.
Within the Groups tab of the class hierarchy, classes can be placed in a
hierarchy, allowing the relationships between the different classes to be
defined. For example, all the forest classes have been placed under the
Forest class; therefore Definiens Developer is aware that the Broadleaf
Forest, Coniferous Forest and Young Coniferous Forest classes are all types
of forest. Once classified, if you collapse the Forest group, all these forest
regions will be coloured as forest. But be aware that if you merge the Forest
class, all the sub-classes will be merged, forming a single class and removing
information from your classification.
If you are unsure of the classes to be identified please refer to the shapefile
Landcover_classification.shp.
The first step within this classification is to identify the Not Vegetation regions
within both the upland and lowland regions. The NDVI has been calculated
using a customised feature, which you will need to create (see the earlier
unit), and a threshold of NDVI < 0.25 has been identified (enter it within the
class description of the class Not Vegetation) to separate the Not Vegetation
regions (Figure 7.10).
Figure 7.10. The process tree for classifying the Not Vegetation regions.
Within the Not Vegetation regions, the areas of Water have been identified
using the rules SWIR2 < 15 AND NDVI < 0.1, but when you run these rules
you will notice that not all the areas of Water have been identified. This is
because there are still some small regions of cloud over the lake. To correctly
classify these regions we will grow the Water class using the Grow Water
class. The Grow Water class contains the rules shown in Figure 7.11, where
the new rule Rel. border to Water > 0 defines that, to be a member of the
class Grow Water, the object needs to share a border with a Water object.
By defining the process tree as shown in Figure 7.12 the Grow Water class is
iteratively classified 10 times (10x: for all, Figure 7.13) where the identified
Grow Water objects are assigned to the Water class in between each
iteration.
Figure 7.12. The process tree to classify the regions of water within the scene.
Finally, the Water regions are merged and the remainder of the classification
will concentrate on the vegetation within the scene, where you are required to
develop your own processes and rules.
7.3.3. Tidy and exportation.
As with the previous processes, the final steps are to merge and tidy the
classification classes and export the results for use within a GIS. Based on
the knowledge gained within the previous units, develop these parts of the
process.
7.4. Results.
Once you have completed the classification and executed the tidy and
exportation processes you should have results similar to those shown in
Figure 7.14.
Finally, it is recommended that you check your classification process against
the model process provided (LandcoverClassificationExample.dcp). To do this,
open another instance of Definiens Developer and set up the same project
structure, then right-click within the process tree window and select Load
Rule Set (Figure 7.15). You may now execute this process and should obtain
the same result as the one shown in Figure 7.14. Observe how each object
can have membership to multiple classes (use the membership to feature
and the object information window) as fuzzy membership functions have been
developed for each of the added classes.
7.5. Conclusions
Following the classification of this scene you should now be confident in
performing your own classifications, including several structured classes and
many of the classification features available within Definiens. It is
recommended that you look through the reference guide within your
installation of Definiens to observe the large number of features available to
you during classification.
7.6. Exercises
1) The classification ruleset provided is a fuzzy classification; therefore each
object has a membership to all the classes. Look up the classification stability
feature and observe the objects which are on the border between two
classes.
CalculatingThresholdsExample.dcp
By the end of this unit you should:
Be able to develop Definiens processes which calculate features from
the image.
Be able to implement iterative processes to loop through a series of
objects and/or grow an object from its core.
Be able to use variables during the classification.
8.1. Introduction
A limitation of the methods presented so far is that the thresholds for
classification have to be identified manually. This is time consuming, and
thresholds can vary between images and even across a single image. With
appropriate image pre-processing (atmospheric correction and topographic
correction), many of these differences can be corrected for, but not all.
Therefore, this unit will demonstrate how Definiens Developer can calculate
thresholds from the imagery and use them for classification, in this case for
cloud and shadow detection.
Once a segment of the chessboard has been selected, the upper and lower
quartiles within the segment will be calculated and used as the thresholds for
identifying seeds for the clouds and the shadows within the scene. A fine
segmentation is then performed on the segment, and once these seeds (or
cores) have been identified, the remainder of the scene will be used to
recalculate these threshold values. The seeds will then be grown to the limit of
these new thresholds. Finally, the cloud and shadow objects will be merged
and the process will move on to the next large segment, until all parts of the
image are processed.
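The seed-threshold step can be sketched with a small quantile helper. The nearest-rank quantile method and the brightness values are assumptions for illustration; Definiens' own quantile computation may differ in detail.

```python
# Sketch of the seed-threshold step: take the brightness values inside one
# chessboard segment and use its lower/upper quantiles as shadow and cloud
# seed thresholds. Quantile method and values are illustrative assumptions.
def quantile(values, q):
    """Nearest-rank quantile, q in 0-100, over a list of values."""
    s = sorted(values)
    idx = min(len(s) - 1, int(round(q / 100 * (len(s) - 1))))
    return s[idx]

brightness = [12, 18, 25, 40, 55, 60, 75, 90, 120, 200]  # invented
shadow_seed_max = quantile(brightness, 5)    # very dark objects -> shadow
cloud_seed_min = quantile(brightness, 88)    # very bright objects -> cloud
print(shadow_seed_max, cloud_seed_min)  # 12 120
```

Because the quantiles are computed per segment, the thresholds adapt as each chessboard tile is processed in turn.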
Figure 8.5 shows the process tree which you should have up to this point.
You then need to define the iterative process which will allow each segment to
be selected in turn (Figure 8.6). A while loop needs to be set up to loop while
the number of unclassified objects is greater than 0. Within this loop, the first
process selects a single object. The Find domain extreme process will be
used (Figure 8.7), where the object with the maximum value in its Y location is
selected. Where multiple objects have the same value, one will be selected at
random, as the option Accept Equal Extrema has been set to no; otherwise
all objects with the same value would be selected together. The selected
object will be given the class _active; the preceding underscore is used to
denote a class which is used for processing rather than as a classification
class.
Once an object has the class _active it can then be processed individually
using the Image Object Domain filter within a process. Finally, once the
processing has finished all the remaining _active objects need to be removed
to allow the loop to terminate.
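The while loop and the Find domain extreme step can be sketched as follows. The object data and the marker-class mechanics are invented; only the control flow mirrors the process tree.

```python
# Sketch of the "find domain extreme" loop: while unclassified objects
# remain, pick the one with the maximum Y location, mark it _active,
# process it, then mark it _processed so the loop can terminate.
# Object values and the dict representation are illustrative.
objects = [{"y": 10, "cls": None}, {"y": 40, "cls": None}, {"y": 25, "cls": None}]

order = []
while any(o["cls"] is None for o in objects):
    # find domain extreme: maximum Y among unclassified objects
    active = max((o for o in objects if o["cls"] is None), key=lambda o: o["y"])
    active["cls"] = "_active"
    order.append(active["y"])        # ...per-object processing happens here...
    active["cls"] = "_processed"     # remove _active so the loop can end

print(order)  # [40, 25, 10]: objects visited top-down
```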
The process you will use to calculate the thresholds is the compute statistical
value process, which allows the number, sum, minimum, maximum, mean,
standard deviation, median and quantile to be calculated. The value from this
calculation is output into a variable; Definiens Developer supports the
concept of variables within the process tree. Definiens offers five variable
types (Scene, Object, Class, Feature and Level), where the scope of the
variable is defined by its type. For example, an object variable will be created
for each individual object, while a Level variable will be defined for an
individual level (i.e., a different value can be stored for each level) and a
scene variable is defined for the whole project (i.e., only one value for the
whole project). For this project you will only use scene variables and
calculate the quantile from the compute statistical value process. To calculate
the quantile you first need to define the quantile you are interested in, for
example the 90% quantile; to increase the flexibility of the process we will
define a variable to store this value.
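The interplay of a scene variable and the compute statistical value process can be sketched like this. The variable names mirror the text, but the dict-based "scene variables" and the quantile arithmetic are assumptions for illustration.

```python
# Sketch of "update variable" + "compute statistical value": one scene
# variable holds the quantile to use, and the computed threshold is
# written into another. Mechanics and values are illustrative.
scene_vars = {"UpperQuantile": 88, "UpperQuantileBrightness": 0}

def compute_statistical_value(values, quantile_pct):
    """Return the value at the given quantile (0-100) of a list."""
    s = sorted(values)
    idx = min(len(s) - 1, int(quantile_pct / 100 * len(s)))
    return s[idx]

object_brightness = [20, 35, 50, 80, 150]   # invented object values
scene_vars["UpperQuantileBrightness"] = compute_statistical_value(
    object_brightness, scene_vars["UpperQuantile"])
print(scene_vars["UpperQuantileBrightness"])  # 150
```

Storing the quantile itself in a variable means the same process tree can be re-tuned by changing one value, which is the flexibility the text refers to.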
The classification process (Figure 8.10) starts by defining the set of
variables to be used during the classification. The first, LowerQuantile,
contains the quantile used to calculate the threshold for the shadow seeds,
while the second, UpperQuantile, defines the quantile for the cloud seeds.
Finally, LowerQuantileBrightness and UpperQuantileBrightness need to be
defined to store the brightness thresholds to be used for classification. To
set up a variable, the update variable process needs to be used (Figure 8.11).
The initial values for the variables are shown in Table 8.1.
Variable                    Initial Value
LowerQuantile               5
UpperQuantile               88
LowerQuantileBrightness     0
UpperQuantileBrightness     0
Table 8.1. Initial values for the scene variables.
The next stage of the process, Figure 8.12, is to compute the threshold values
into the LowerQuantileBrightness and UpperQuantileBrightness variables.
Note, the Image Object Domain specifies the _active class, therefore the
values are only computed over objects which have the class _active and will
therefore vary across the scene, as each chessboard segment is selected in
turn.
The next stage is to grow the cloud and shadow seeds to identify the full
extent of the clouds and their shadows; this requires another loop. First,
though, the thresholds UpperQuantileBrightness and LowerQuantileBrightness
need to be recalculated to provide the thresholds used to terminate the loop
which grows the seeds. The same process as before is used to calculate the
new upper and lower quantiles of the _active class. The new values will differ
from those previously calculated because the identified cloud and shadow
seeds no longer have the class _active and are therefore not included in the
calculation. To define the loop, create a new process below calculate
threshold and tick the Loop while something changes option (Figure 8.15).
The elements within the loop will now be inserted as child processes.
a) Process Tree
b) Process parameters
Figure 8.15. The process tree and process parameters to set up the loop.
Once the loop has been defined, the classes Cloud Grow and Shadow
Grow need to be created. Their class descriptions are the same as
Cloud and Shadow but with the addition of relative border features, which
restrict the classification to objects bordering the existing Shadow and Cloud
features, Figure 8.16.
Following the classification of Cloud Grow and Shadow Grow, the two classes
need to be assigned to the Cloud and Shadow classes before being merged,
and any remaining _active class objects assigned to _processed, Figure 8.17.
These three steps (classification, assign and tidy) happen on each iteration
of the loop, which continues until all the objects fitting the rules have
been identified.
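The grow loop can be sketched in pseudo-Python form. This is a minimal, hypothetical stand-in for the Loop while something changes behaviour: on each pass, neighbours of already-grown objects that satisfy the brightness rule are absorbed, and the loop terminates when a full pass makes no change. The object and neighbour structures here are simplified stand-ins for Definiens image objects.

```python
# Sketch of "loop while something changes" region growing (illustrative).
def grow_class(seeds, neighbours, brightness, threshold, brighter=True):
    """Grow `seeds` (a set of object ids) outwards until nothing changes."""
    grown = set(seeds)
    changed = True
    while changed:                       # loop while something changes
        changed = False
        for obj in list(grown):          # snapshot of the current class
            for nb in neighbours.get(obj, []):
                if nb in grown:
                    continue
                # Cloud grows into brighter neighbours, shadow into darker.
                ok = (brightness[nb] >= threshold if brighter
                      else brightness[nb] <= threshold)
                if ok:                   # classify, then merge into the class
                    grown.add(nb)
                    changed = True
    return grown

# A chain of objects 1-2-3-4; object 4 is too dark to join the cloud.
neighbours = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
brightness = {1: 95, 2: 90, 3: 85, 4: 40}
cloud = grow_class({1}, neighbours, brightness, threshold=80)
```

As in the worksheet, the termination condition is simply that an iteration changes nothing, rather than a fixed number of passes.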
The next step is to tidy the classification, which consists of three steps. The
first is to assign all the _processed objects to unclassified and then merge
them. The next is to fill any holes in the cloud or shadow objects with an area
less than 20000 m2. To do this the fill enclosed by class process is used,
Figure 8.18, where all the unclassified objects with an area less than 20000 m2
enclosed by cloud are assigned (i.e., use class description = no) to the class
cloud. This is repeated for the shadow class. Finally, the Shadow and Cloud
classes are merged and exported (remember to export the class names),
Figure 8.19.
Figure 8.19. The process tree to tidy and export the classification.
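The hole-filling step above can be sketched as follows. This is a hypothetical illustration of the fill enclosed by class logic, with objects represented as plain dictionaries; in Definiens the same effect is achieved with a single process per class.

```python
# Sketch of the "fill enclosed by class" tidy step (illustrative).
MAX_HOLE_AREA_M2 = 20000  # holes smaller than this are filled

def fill_enclosed(objects, enclosing_class):
    """Reassign small unclassified holes to the class that encloses them."""
    for obj in objects:
        if (obj["class"] == "unclassified"
                and obj["enclosed_by"] == enclosing_class
                and obj["area_m2"] < MAX_HOLE_AREA_M2):
            obj["class"] = enclosing_class
    return objects

objs = [
    {"class": "unclassified", "enclosed_by": "cloud", "area_m2": 5000},
    {"class": "unclassified", "enclosed_by": "cloud", "area_m2": 50000},
    {"class": "unclassified", "enclosed_by": None, "area_m2": 100},
]
fill_enclosed(objs, "cloud")   # run once for cloud, then again for shadow
```

Only the first object is filled: the second exceeds the area limit, and the third is not enclosed by cloud at all.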
Finally, to increase your understanding of the process you can make use of
the Update View option (right-click on a process), which is available on every
process and will update the view in the data window after that process has
executed. This allows you to watch the progress of your classification.
Initially, select Update View on the processes which select an active object,
the merging of the cloud and shadow seeds, and the merging of the cloud and
shadow during the grow. Now execute the process and you can watch your
classification being performed. Beware of overusing this feature, as updating
the view is slow and can significantly increase the processing time. For
example, switching on the three updates as suggested will double the
processing time for this algorithm.
8.6. Conclusions
From this worksheet you should be aware of some of the more advanced
processes and functions available within Definiens, including growing a class,
using variables and calculating thresholds.
8.7. Exercises
1) Although the method superficially works well, there are numerous small
errors where areas of vegetation have been included in the cloud mask.
Develop rules to remove this misclassification.
2) Currently, even if a segment (from the chessboard segmentation) does not
contain any cloud or shadow, objects will be identified as such regardless.
Add extra if statements to try to remove or reduce this problem.
3) Select a new subset and try the classification on it to check the
robustness of the algorithm.