

CLASSIFICATION OF SHIP BY IMAGE PROCESSING AND NEURAL NETWORK

Khin Hnin Thant1, 2 Shi Chao-jian2


1. Department of Marine Electrical System and Electronics, Myanmar Maritime University, Thilawa, Yangon, Myanmar
2. Merchant Marine College, Shanghai Maritime University, 1550 PuDong DaDao, Shanghai, 200135, China

Abstract: This paper presents a method for classifying ships in images recorded by a stationary camera, mainly by means of image processing. First, we extract and compute the dimensions and other parameters of the ship using image processing techniques; these parameters are then fed as inputs to a neural network, which classifies the ships into three categories: small, medium and large. We use a feed-forward neural network trained with the back-propagation learning algorithm to classify the ships. We also provide experimental results using actual ship data which demonstrate the effectiveness of the method. Moreover, we present a method to recognize the type of ship and provide a graphical user interface (GUI) that allows the user to simulate the classification result. The paper also discusses the present state of the Marine Watch System and raises some issues to be considered. The objective of this study is to integrate an image processing system with traditional navigation equipment.

Keywords: Ship Classification, Image Processing, Neural Network, Marine Watch System.


1. Introduction
Safer navigation is a very important issue to be considered; therefore, ship detection and classification are important for preventing marine accidents. For sea surveillance, if we only want to detect ships that are approaching the shore or other ships, a radar system is the best choice. However, radar only indicates that some objects are approaching; it is hard to determine what kind of object each contact is, and radar cannot detect objects in its blind area. The automatic identification system (AIS) has been required for convention ships of 300 GT and over since the revised SOLAS Chapter V came into force on 1 July 2002. Ships below 300 GT have no obligation to install AIS; thus not all vessels carry AIS, and ships without AIS cannot be detected and identified in this way. Besides, AIS may be switched off under dangerous conditions, for example in sea areas where pirates and armed robbers are known to operate. Therefore, we should consider other possible ways of obtaining vessel information and identifying vessels that are not equipped with AIS and cannot be detected by radar at sea. Under these circumstances, other marine watch support systems should be constructed and used extensively.
Safer navigation can also be supported by image processing, that is, by monitoring other ships and obtaining information about them through image processing techniques. This paper proposes a method for classifying ships by image processing. A classification system like the one proposed here can provide important data for ship identification.
The system we propose consists of four stages.


Khin Hnin Thant: Postgraduate student, Instructor, MMU. khthant@gmail.com
Shi Chaojian: Professor, Shanghai Maritime University. cjshi@shmtu.edu.cn


• Image Acquisition
• Segmentation
• Feature Extraction
• Ship Classification
The paper is organized as follows: Section 1 is the introduction; Section 2 describes the image processing and feature extraction steps; Section 3 presents the ship classification approach; Section 4 reports the experimental results obtained with image parameters and with actual ship data; and conclusions and a discussion of further work are given in Section 5.

2. Image Processing and Feature Extraction

2.1 Image Acquisition


Image acquisition is the very first step. In our work, acquisition can be as simple as being given an image that is already in digital form. We use a single digital camera placed on a tall structure and take several photos of ships of different types and sizes. Generally, the image acquisition stage involves preprocessing, such as scaling. We assume that the distance between the camera and the target ships is constant (precisely specified). We also choose a very simple and homogeneous background to improve the quality of the segmentation result. We take side-view photos of the ships and do not consider the width of the ship.

2.2 Segmentation
In the analysis of the objects in images it is essential that we can distinguish between the objects of interest and "the
rest." This latter group is also referred to as the background. The techniques that are used to find the objects of
interest are usually referred to as segmentation techniques - segmenting the foreground from background.
In detecting ships, the purpose of segmentation is to separate the ships from the background. There are various approaches to this, with varying degrees of effectiveness. To be useful, the segmentation method needs to accurately separate ships from the background, be fast enough to operate in real time, be insensitive to lighting and weather conditions, and require a minimal amount of initialization. In general, the more accurate the segmentation, the more likely recognition is to succeed. In this paper we use two of the most common techniques: thresholding and edge detection.

2.2.1 Thresholding
A threshold is a pixel value chosen to discriminate between dark and light intensities. The technique is based upon a simple concept: a parameter θ, called the brightness threshold, is chosen and applied to the image f[i, j] as follows:
If f[i, j] ≥ θ then f[i, j] = object = 1
Else f[i, j] = background = 0
This version of the algorithm assumes that we are interested in light objects on a dark background. For dark objects on a light background we would use:
If f[i, j] < θ then f[i, j] = object = 1
Else f[i, j] = background = 0
In this paper, we choose a background that is simple and homogeneous so that segmentation is easy and reliable.
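The thresholding step can be expressed very compactly in MATLAB. The fragment below is a minimal sketch of the light-object case; the file name and the threshold value are illustrative assumptions, not values taken from the paper.

f = double(imread('ship.jpg'));        % read the input image into a matrix f(i, j); file name assumed
if ndims(f) == 3, f = mean(f, 3); end  % collapse a colour image to gray scale if necessary
theta = 128;                           % brightness threshold (illustrative value)
b = double(f >= theta);                % light object on dark background: object = 1, background = 0
% For a dark object on a light background use: b = double(f < theta);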

2.2.2 Edge Detection


Segmentation subdivides an image into its constituent regions or objects. Segmentation algorithms for monochrome
images generally are based on the basic properties of image intensity values. Edges are places in the image with
strong intensity contrast. Since edges often occur at image locations representing object boundaries, edge detection
is extensively used in image segmentation when we want to divide the image into areas corresponding to different
objects. Representing an image by its edges has the further advantage that the amount of data is reduced significantly
while retaining most of the image information.
In this paper, we use the Sobel operator to compute the edges of the input image.


The Sobel operator

    [ -1  0  1 ]        [  1   2   1 ]
    [ -2  0  2 ]   and  [  0   0   0 ]
    [ -1  0  1 ]        [ -1  -2  -1 ]

is used for edge detection and is applied as follows. Let the input image be f(i, j) and the output image be S(i, j); then

S(i, j) = | f(i-1, j-1) + 2f(i-1, j) + f(i-1, j+1) - f(i+1, j-1) - 2f(i+1, j) - f(i+1, j+1) |
        + | f(i-1, j-1) + 2f(i, j-1) + f(i+1, j-1) - f(i-1, j+1) - 2f(i, j+1) - f(i+1, j+1) |

where f(i, j) is the input image and S(i, j) is the output edge image.
We also calculate the number of pixels in the edge image; this edge-pixel count is one of the inputs to the neural network.
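As a rough MATLAB sketch of how this edge image and the edge-pixel count might be computed (the file name and the edge threshold are assumptions; the paper does not specify them):

f = double(imread('ship.jpg'));                 % gray-scale input image f(i, j); file name assumed
if ndims(f) == 3, f = mean(f, 3); end
Gx = [-1 0 1; -2 0 2; -1 0 1];                  % the two Sobel kernels
Gy = [ 1 2 1;  0 0 0; -1 -2 -1];
S = abs(conv2(f, Gx, 'same')) + abs(conv2(f, Gy, 'same'));   % edge image S(i, j)
edgeThreshold = 100;                            % illustrative value
numEdgePixels = sum(S(:) > edgeThreshold)       % edge-pixel count used as one network input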

Fig.1. An example of object extraction and edge detection result

2.3 Feature Information Extraction


In this paper, we propose a method to compute certain geometric properties such as object length, height and perimeter length. These properties can be used to identify an object. When classification of a vehicle is performed by the human visual system, the typical criterion is the feature information of the vehicle, such as shape and size. In measuring these dimensions in the image, we also consider the concept of the principal dimensions of a ship. The dimensions can be measured in the original gray-scale image; there is no need to use the thresholded binary image, so we avoid loss of information and obtain rather accurate dimensions. Our proposed method allows the user to point to locations in the image, that is, the user marks the end points of the dimension to be measured, and the system then computes the pixel length of that dimension. Another source of inaccuracy in determining distances is the user's ability to mark the end points in the image. In general, however, in spite of the inaccuracies discovered, this method proved to be much quicker, more accurate, and more adaptable to generic scenes.
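A minimal MATLAB sketch of this interactive measurement is given below. The user clicks the two end points of a dimension and the pixel distance is converted to metres with a scale factor; since the camera-to-ship distance is fixed, such a factor could be calibrated once, but the value used here is purely hypothetical, as is the file name.

f = double(imread('ship.jpg'));
if ndims(f) == 3, f = mean(f, 3); end
imagesc(f); colormap(gray); axis image;        % display the gray-scale image
[x, y] = ginput(2);                            % user marks the two end points of the dimension
pixelLength = sqrt(diff(x)^2 + diff(y)^2);     % length of that dimension in pixels
metresPerPixel = 0.35;                         % hypothetical calibration for the fixed camera distance
dimensionInMetres = pixelLength * metresPerPixel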

Fig.2.The principal dimensions of a ship: length overall, length between perpendiculars, water line and base line

Fig.3.The image of the ship: x and y denote the length and height of the ship in the image

3. Ship Classification
Multilayer feed-forward neural networks trained with the back-propagation (BP) algorithm account for a majority of applications of neural networks to real-world problems, because BP is easy to implement and fast and efficient to operate.
In this paper, we use a feed-forward neural network trained with the BP algorithm to classify the ships. We use the back-propagation training functions in the MATLAB Neural Network Toolbox to train the feed-forward network to solve the classification problem. Input vectors and the corresponding target vectors are used to train the network until it can classify input vectors in the way we have defined.
The configuration of the network is as follows:
Number of network layers : 3
Number of neurons in input layer : 8
Number of neurons in hidden layer : 17
Number of neurons in output layer : 2
The transfer functions of the hidden layer and the output layer are 'tansig' and 'logsig', respectively, and the training function of the network is 'trainlm'. The network classifies the ships into three categories.
In the multilayer feed-forward network, each input is weighted with an appropriate weight; the sum of the weighted inputs and the bias forms the input to the transfer function. Neurons use a differentiable transfer function to generate their output. The network is trained with back-propagation.
There are generally four steps in our training process (a minimal MATLAB sketch of these steps follows the list):
i. Assemble the training data
ii. Create the network object
iii. Train the network
iv. Simulate the network response to new inputs
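The following MATLAB Neural Network Toolbox fragment is a minimal sketch of these four steps for the configuration above (8 inputs, 17 hidden neurons, 2 outputs, 'tansig'/'logsig', 'trainlm'). The matrices P and T are placeholders: in practice each column of P would hold the 8 extracted parameters of one training ship and each column of T its 2-element target code; the real training data are not reproduced here.

% i.  Assemble the training data (placeholders for the real feature vectors and targets)
P = rand(8, 20);                         % 8 parameters per ship, 20 hypothetical training ships
T = double(rand(2, 20) > 0.5);           % 2-element target code for each ship

% ii. Create the network object (MATLAB 7 'newff' syntax)
net = newff(minmax(P), [17 2], {'tansig', 'logsig'}, 'trainlm');

% iii. Train the network with back-propagation (Levenberg-Marquardt)
net.trainParam.epochs = 300;
net = train(net, P, T);

% iv. Simulate the network response to a new input
x = rand(8, 1);                          % parameters of a new ship image
y = sim(net, x)                          % 2-element output used to decide the category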


3.1 Classification of Ship with Dimension


We train this classification network using the parameters of the ship in the image. To classify a ship, we feed the data obtained by the feature information extraction of the previous section (that is, the total number of edge pixels and the overall length, height and perimeter of the ship in the image) to the input of the neural network. With these parameters we can classify the ships into three categories, small, medium and large, but cannot estimate the approximate DWT of the ship.
We also train this classification network with the actual dimensions of ships, feeding actual data such as the overall length and height of the ships to the input of the network. We considered the minimum, maximum and average dimensions for each type of vessel at various DWT.
The data we use are obtained from 'Navigational Channel Side Slope & Design Ship Size'. We classify the ships into three categories: ships under 5000 DWT, ships between 10000 DWT and 50000 DWT, and ships over 70000 DWT. The network is well trained and can successfully classify the above three categories.
Moreover, the network can predict the class of a ship, that is, its approximate DWT, when its dimensions are given as input to the network.
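The paper does not state how the two output neurons encode the three DWT categories; a common convention, assumed here only for illustration, is a 2-bit code obtained by rounding the 'logsig' outputs:

y = sim(net, x);                         % x holds the ship dimensions, net is the trained network
code = round(y(:)');                     % round the two logsig outputs to 0 or 1
if     isequal(code, [0 0]), shipClass = 'small (under 5000 DWT)';
elseif isequal(code, [0 1]), shipClass = 'medium (10000-50000 DWT)';
elseif isequal(code, [1 0]), shipClass = 'large (above 70000 DWT)';
else                         shipClass = 'unrecognized code';
end
disp(shipClass)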
Our proposed method is effective: the correct classification rate is 88% when we use the parameters measured in the image and 96% when we use the actual ship data. Based on the results obtained, we note some issues to be addressed. Due to the camera orientation, the height computed from the image is sometimes actually a combination of the ship's width and height, and it is very hard to separate the two. This ambiguity introduces errors, and the classification rate is less accurate when we use parameters from the image. This issue will be studied in future work.

3.2 Classification of the Type of Ship


In our work, a feed-forward neural network trained with the back-propagation learning algorithm is also used to classify ship patterns. The network architecture for this purpose is 45 neurons in the hidden layer and 2 neurons in the output layer. The transfer functions of the hidden and output layers are 'tansig' and 'logsig', and the training function of the network is 'trainlm'. The proposed network is capable of classifying the target ship pattern among a variety of ship types.
In our test, we use gray-scale ship images of 600 x 400 pixels. Altogether 6 ship patterns are used: 4 patterns are used as the training set and 2 ships are used as the input data to recognize. Specifically, 4 kinds of ship (a cargo ship, a ferry, an LNG carrier and a container ship) form the training patterns, and an LNG carrier and its noisy image are used as the input target ships to recognize.
We pre-process the image patterns before presenting them to the network. Pre-processing includes image size standardization and gray-scale standardization. Image size standardization transforms the original image, according to its scale, to a 30x20-pixel image. Gray-scale standardization represents the light intensity by numerical values: with 256 gray levels, level 0 is the darkest and level 255 the brightest. When the images are used as input to the network, the gray-scale values are rescaled to the range [0, 1], so that the smallest value is 0, the largest is 1 and the others lie in between.
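A minimal MATLAB sketch of this pre-processing is given below; it assumes plain nearest-neighbour resampling to 30x20 pixels and an 8-bit input image (the paper does not state which resampling method is used, and the file name is hypothetical):

f = double(imread('ship_pattern.jpg'));      % original gray-scale ship pattern
if ndims(f) == 3, f = mean(f, 3); end

% Image size standardization: resample to 30x20 pixels (nearest neighbour assumed)
[h, w] = size(f);
rows = round(linspace(1, h, 20));
cols = round(linspace(1, w, 30));
p = f(rows, cols);                           % 20x30 standardized pattern

% Gray-scale standardization: map the 0..255 levels to the range [0, 1]
p = p / 255;
inputVector = p(:);                          % 600-element column vector presented to the network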
In addition, we test whether the network can handle noise, because in practice the network cannot be expected to receive a perfect ship image pattern as input. To make the network insensitive to noise, we train it with both ideal and noisy copies of the ship patterns. The noisy images are made by adding 10% and 20% noise to the ideal images. This forces the neurons to learn how to properly identify noisy images while still responding well to ideal images.
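The paper does not specify the noise model; one simple reading, sketched below as an assumption, is to replace a given fraction of randomly chosen pixels of the normalized pattern with random gray values in [0, 1]:

noiseLevel = 0.10;                           % 10% of the pixels (use 0.20 for the 20% case)
pNoisy = p;                                  % p is the 20x30 normalized pattern from above
n = numel(p);
idx = randperm(n);
idx = idx(1 : round(noiseLevel * n));        % indices of the pixels to corrupt
pNoisy(idx) = rand(size(idx));               % replace them with random gray values in [0, 1]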
The experiment showed that the network performs well on ship patterns and that classification can be done correctly with the proposed method. We also provide a graphical user interface (GUI) which allows the user to simulate the classification result; the GUI is easy and intuitive to operate and displays the neural network output. In the experiment, the average recognition (classification) rate was 95% for ideal ship images and 87% for noisy images.


Fig.4. An example of ship patterns in training sets (above row) and their noisy patterns (below)

Fig.5.GUI for classification result of ship’s type

Fig.6.Block diagram for ship classification using the ship's parameters: input digital image → object region extraction → edge information (binary image) and feature information (gray-scale image) → ship classification

Fig.7.Block diagram of the neural network model for ship type recognition: test pattern → pre-processing → feature measurement → classification NN → classification result; training pattern → pre-processing → feature extraction/selection → learning


4. Experiment
In this paper, the computer system and software configuration given below are used to implement and evaluate the performance of the proposed dimension-based ship classification by image processing. We use images of 600x400 pixels. With the data processing capability of the computer hardware used in this experiment, the image processing work required an average time of 3.0 s.
• Main Computer System: Intel ® 1.66 GHz Centrino Duo CPU, 980 MHz, 504 MB main memory
• Digital Image Acquisition Device: Sony DSC-W12
• Operating System: Windows XP Professional
• Software development tool for image processing: MS Visual C++ 6.0
• Software development tool for training the neural network: MATLAB 7

Classification Results
1. Classification Rate by Using Parameters in Image

Fig.8.The rate of ship classification by using dimensions in the image (classification rate in %, plotted separately for length and height, for small, medium and large ships)


2. Classification Rate by Using Real Data of ship

Small (under 5000 DWT); Medium (10000 DWT ~ 50000 DWT); Large (above 70000 DWT)

Fig.9.The rate of ship classification by using the real dimensions of the ship (classification rate in %, plotted separately for length and height, for small, medium and large ships)

5. Conclusion and discussions


In this paper, we propose a ship classification method based on image processing and a neural network. In the proposed method, we first extract the ship parameters from the image and then classify the ship by using these parameters as the input of the neural network. We also trained the network with the actual parameters of ships and compared the two sets of results. In addition, we propose a method to classify the ship's type and provide a GUI which allows the user to easily simulate the classification result. Classification into a larger number of ship categories is under study.


Both the generalization and approximation ability of a feed-forward neural network are closely related to the architecture of the network. Therefore, choosing an appropriate network architecture (that is, the number of layers and the number of hidden neurons) is an issue of paramount importance. Answers to such questions come more from experience than from mathematics, and a large effort has to be devoted to this issue. Training the network with a larger training set is our goal, and the resolution of the input image vectors also has to be increased. To enable classification into a larger number of ship categories, we intend to build a more robust network and to speed up network convergence. This issue is under study.

Acknowledgement
The research work of this paper is partially sponsored by Shanghai Leading Academic Discipline Project, T0603

References
1. Lampinen J, Oja E. Pattern Recognition. In: Leondes C T, ed. Neural Network Systems, Techniques and Applications, Vol. 5, Image Processing and Pattern Recognition. New York: Academic Press, 1998, 1-59.
2. Gupte S, Masoud O, Martin R F K, Papanikolopoulos N P. Detection and Classification of Vehicles. IEEE Transactions on Intelligent Transportation Systems, Vol. 3, No. 1, March 2002.
3. Shimpo M, Hirasawa M, Ishida K, Oshima M. A Trial Toward Marine Watch System by Image Processing. Proceedings of the 12th IAIN World Congress and 2006 International Symposium on GPS/GNSS, Jeju, Korea, October 2006, Vol. 1, 41-46.
4. Design Code of General Layout for Sea Port (JTJ211-99), Partially Revised. Navigational Channel Side Slope & Design Ship Size, 2002.
5. Jesan J P. The Neural Approach to Pattern Recognition. Ubiquity, Vol. 5, Issue 7, April 2004.
6. Ha D M, Lee J M, Kim Y D. Neural-edge-based vehicle detection and traffic parameter extraction. Image and Vision Computing, May 2004.
7. Beymer D, McLauchlan P, Coifman B, Malik J. A real-time computer vision system for measuring traffic parameters. Proc. IEEE Conf. Computer Vision and Pattern Recognition, Puerto Rico, June 1997.
8. McCollum P. An Introduction to Back-Propagation Neural Networks.
9. Neural Network Toolbox, MATLAB 7.

