
Abandoned Object Detection

CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
In the last decade, many terrorist attacks took place at railway stations, hotels, malls, cinema
halls, the Parliament House, and many other crowded places, and many people lost their lives in
them. To fight such anti-social activities, governments adopted multiple strategies, and the video
surveillance system is one among them. The problem with CCTV systems, however, is that they are
fully controlled and monitored by humans. Surveys on CCTV show that it is mainly used for the
investigation of cases rather than for monitoring live footage, which is the primary reason it did
not help to avert terrorist attacks. Another reason is the limited ability of humans to respond to
highly sensitive situations: human eyes cannot process complex real-time video effectively, nor
watch multiple displays at a time. So, to enhance the processing of the CCTV system for real-time
video streams, the abandoned object detection system was introduced.
Many algorithms and techniques have been suggested by researchers and scientists to design
an abandoned object detection system, but most of them are complex and expensive to implement in
practice. Some use physical components such as highly expensive filters, lasers, sonic waves,
etc.; it is not always possible to install all of these because of the cost factor.
The proposed system is simple and effective. It avoids the use of the expensive physical
components mentioned above and works on a background segmentation scheme. An additional feature
is added to discover the threat (the human) by using a face detection algorithm.

What is an abandoned object?


The detection of abandoned objects is essentially the detection of idle (stationary,
non-moving) objects that remain in place over a certain period of time, where this period may be
adjustable. Idle objects should be detected in several types of images or frames; for example, in
a complex scene, a bag left by a person near an elevator. An unknown object is any object that is
not a person or a vehicle. Since, in general, unknown objects cannot move on their own, they are
considered stationary.
1 | Page
Department Of ECE, GECI


What should be detected?


The system keeps track of each object for a particular time; if the object remains still for
that whole period, it may be an abandoned object, and it should be checked manually. After the
object is detected, its size is checked; if it is greater than a decided threshold, an alarm is
generated and the object is highlighted. The police and fire services can also be informed
through a message.
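The decision logic above can be sketched as follows. This is a minimal Python illustration, not the report's implementation; the frame rate, dwell time, and minimum area are hypothetical parameters:

```python
def check_abandoned(still_frames, blob_area, fps=25,
                    dwell_seconds=30, min_area=400):
    """Flag a blob as abandoned once it has stayed still long enough
    and is larger than the minimum size of interest."""
    if still_frames < dwell_seconds * fps:
        return False              # not stationary long enough yet
    return blob_area >= min_area  # ignore tiny blobs (noise)

# A bag still for 40 s occupying 900 px is flagged; a 2 s pause is not.
print(check_abandoned(still_frames=1000, blob_area=900))  # True
print(check_abandoned(still_frames=50, blob_area=900))    # False
```

In a full system, `still_frames` would be maintained per tracked blob and reset whenever the blob moves.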
1.2 CURRENTLY EXISTING TECHNOLOGIES
Most existing techniques of abandoned (and removed) object detection employ
a modular approach with several independent steps where the output of each step serves as the
input for the next one. Many efficient algorithms exist for carrying out each of these steps and
any single complete AOD (Abandoned Object Detection) system has to address the problem of
finding a suitable combination of algorithms to suit a specific scenario.
Following is a brief description of these steps and related methods, in the order they are carried
out:
1.2.1 Background Modeling and Subtraction (BGS): This stage creates a dynamic model of
the scene background and subtracts it from each incoming frame to detect the current foreground
regions. The output of this stage is usually a mask depicting pixels in the current frame that do
not match the current background model. Some popular background modeling techniques
include adaptive medians, running averages, mixture of Gaussians, kernel density estimators,
Eigen-backgrounds and mean-shift based estimation. There also exist methods that employ dual
backgrounds or dual foregrounds for this purpose. The BGS step often utilizes feedback from the
object tracking stage to improve its performance.
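As a concrete illustration of this stage, the following Python sketch implements the simplest of the background models listed above, a per-pixel running average, on a row of pixels. The learning rate and difference threshold are hypothetical choices, not values from any cited method:

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background update: bg <- (1-alpha)*bg + alpha*frame,
    so the model slowly absorbs scene changes."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=30):
    """Mark pixels whose difference from the background exceeds a threshold."""
    return [abs(f - b) > threshold for b, f in zip(bg, frame)]

bg = [100.0, 100.0, 100.0]
frame = [102.0, 180.0, 99.0]          # middle pixel changed sharply
print(foreground_mask(bg, frame))      # [False, True, False]
bg = update_background(bg, frame)      # background slowly absorbs the change
```

The same structure extends to 2-D frames; mixture-of-Gaussians or kernel-density models replace the single running average with a richer per-pixel statistic.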


1.2.2 Foreground Analysis: The BGS step is often unable to adapt to sudden changes in the
scene (of lighting, etc.) since the background model is typically updated slowly. It might also
confuse parts of a foreground object as background if their appearance happens to be similar to
the corresponding background, thus causing a single object to be split into multiple foreground
blobs. In addition, certain foreground areas, while being detected correctly, are not of interest for
further processing. The above factors necessitate an additional refinement stage to remove both
false foreground regions, caused by factors like background state changes and lighting variations,
as well as correct but uninteresting foreground areas like shadows.
Several methods exist for detecting sudden lighting changes, ranging from simple gradient and
texture based approaches to those that utilize complex lighting invariant features combined with
binary classifiers like support vector machines. Shadow detection is usually carried out by
performing a pixel-by-pixel comparison between the current frame and the background image to
evaluate some measure of similarity between them. These measures include normalized cross
correlation, edge-width information and illumination ratio.
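The normalized cross-correlation measure mentioned above can be sketched as follows. A shadow darkens the background roughly uniformly, so the texture of a shadowed patch remains highly correlated with the corresponding background patch, while a real object does not. This is an illustrative Python sketch; the 0.99 decision threshold is a hypothetical choice:

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-size pixel patches.
    Values near 1 mean the same texture (e.g. a shadow that only darkens
    the background); low values suggest a real object."""
    ma = sum(patch_a) / len(patch_a)
    mb = sum(patch_b) / len(patch_b)
    da = [a - ma for a in patch_a]
    db = [b - mb for b in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

background = [10, 20, 30, 40]
shadowed   = [5, 10, 15, 20]    # same pattern, uniformly darker
objected   = [40, 10, 35, 5]    # unrelated pattern
print(ncc(background, shadowed) > 0.99)   # True: likely a shadow
print(ncc(background, objected) > 0.99)   # False: likely an object
```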
1.2.3 Blob Extraction: This stage applies a connected component algorithm to the foreground
mask to detect the foreground blobs while optionally discarding too small blobs created due to
noise. Most existing methods use an efficient linear time algorithm. The popularity of this
method is owing to the fact that it requires only a single pass over the image to identify and label
all the connected components therein, as opposed to most other methods that require two passes.
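For illustration, the sketch below labels 4-connected foreground blobs and discards small ones. It uses a breadth-first flood fill for clarity rather than the single-pass linear-time method cited above; `min_size` is a hypothetical noise threshold:

```python
from collections import deque

def label_blobs(mask, min_size=2):
    """Label 4-connected foreground components of a binary mask with a
    flood fill, discarding blobs smaller than min_size (noise)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    blobs, next_label = [], 1
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                queue, pixels = deque([(y, x)]), []
                labels[y][x] = next_label
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                if len(pixels) >= min_size:
                    blobs.append(pixels)
                next_label += 1
    return blobs

mask = [[1, 1, 0, 0],
        [0, 0, 0, 0],
        [1, 0, 1, 1]]   # the lone pixel at (2, 0) is discarded as noise
print(len(label_blobs(mask)))   # 2 blobs survive
```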
1.2.4 Blob Tracking: This is often the most critical step in the AOD process and is concerned
with finding a correspondence between the current foreground blobs and the existing tracked
blobs from the previous frame (if any). The results of this step are sometimes used as feedback to
improve the results of background modeling. Many methods exist for carrying out this task,
including finite state machines, color histogram ratios, Markov chain Monte Carlo models,
Bayesian inference, hidden Markov models, and Kalman filters.
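A minimal version of this correspondence step is greedy nearest-centroid matching, sketched below. This is only an illustration of the matching idea, far simpler than the Kalman-filter or MCMC trackers listed above; `max_dist` is a hypothetical gating threshold:

```python
def match_blobs(prev_centroids, curr_centroids, max_dist=20.0):
    """Greedy nearest-centroid matching between previous and current blobs.
    Returns {prev_index: curr_index}; unmatched current blobs start new
    tracks, unmatched previous blobs are candidate lost tracks."""
    matches, used = {}, set()
    for i, (px, py) in enumerate(prev_centroids):
        best, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr_centroids):
            if j in used:
                continue
            d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

prev = [(10, 10), (50, 50)]
curr = [(52, 49), (11, 12)]     # both blobs moved slightly between frames
print(match_blobs(prev, curr))  # {0: 1, 1: 0}
```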
1.2.5 Abandonment Analysis: This step classifies a static blob detected by the tracking step as
either abandoned or removed object or even a very still person. An alarm is raised if a detected
abandoned/removed object remains in the scene for a certain amount of time, as specified by the
user. The task of distinguishing between removed and abandoned objects is generally carried out
by calculating the degree of agreement between the current frame and the background frame
around the object's edges, under the assumption that the image without the object would show
better agreement with its immediate surroundings. There exist several ways to calculate this
degree of agreement; two of the popular methods are based on edge energy and region growing.
There also exist methods that use human tracking to look for the object's owner and evaluate the
owner's activities around the dropping point to decide whether the object is abandoned or
removed.
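The edge-energy idea can be sketched as follows: a newly abandoned object creates strong edges along its boundary in the current frame, while a removed object leaves its strong edges only in the outdated background image. This is an illustrative Python sketch of the principle, not a reproduction of any cited method:

```python
def edge_energy(img, boundary):
    """Sum of horizontal gradient magnitudes at the given boundary pixels.
    img is a 2-D list of intensities; boundary is a list of (y, x)."""
    return sum(abs(img[y][x + 1] - img[y][x - 1]) for y, x in boundary)

def classify_static_region(current, background, boundary):
    """A newly abandoned object creates strong edges in the current frame;
    a removed object leaves its strong edges only in the old background."""
    if edge_energy(current, boundary) > edge_energy(background, boundary):
        return "abandoned"
    return "removed"

flat = [[50, 50, 50, 50]] * 3            # empty floor
with_box = [[50, 50, 50, 50],
            [50, 200, 200, 50],          # a bright box in the middle
            [50, 50, 50, 50]]
boundary = [(1, 1), (1, 2)]
print(classify_static_region(with_box, flat, boundary))  # abandoned
print(classify_static_region(flat, with_box, boundary))  # removed
```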

1.3 ANALYSIS OF PREVIOUS RESEARCH IN THIS AREA


A great deal of research has been carried out in the area of AOD owing to its significance in
anti-terrorism measures. Most methods developed recently can be classified into two major
groups: those that employ background modeling and those that rely on tracking-based detection.
Most of these use the Gaussian Mixture Model (GMM) for background subtraction. In this model,
the intensity at each pixel is modeled as the weighted sum of multiple Gaussian probability
distributions, with separate distributions representing the background and the foreground. One
method first detects blobs in the foreground using pixel variance thresholds and then calculates
several features for these blobs to decrease false positives. Another approach maintains two
separate backgrounds, one each for long- and short-term durations, and modifies them using
Bayesian learning; these are then compared with each frame to estimate dual foregrounds. One
method mainly focuses on tracking an object and its owner in an indoor environment with the aim
of informing the owner if someone else takes that object. Another applies GMM with three
distributions for background modeling and uses these to categorize the foreground into moving,
abandoned, and removed objects. A similar background modeling method has been used together
with crowd filtering, which isolates the moving pedestrians in the foreground from the crowd by
vertical line scanning. There are also approaches to background modeling that do not employ
GMM; one such approach uses the approximate median model and likewise maintains two separate
backgrounds, one of which is updated more frequently than the other.
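The GMM membership test described above can be sketched for a single gray-level pixel as follows. This is a simplified, scalar illustration only; the matching threshold of 2.5 standard deviations and the background weight fraction are conventional but hypothetical choices here:

```python
import math

def is_background(pixel, mixtures, match_sigmas=2.5, bg_weight=0.7):
    """Decide whether a gray-level pixel matches the background part of a
    Gaussian mixture. mixtures is a list of (weight, mean, variance)
    tuples sorted by decreasing weight; the highest-weight Gaussians,
    up to a cumulative weight of roughly bg_weight, model the background."""
    cumulative = 0.0
    for weight, mean, var in mixtures:
        if abs(pixel - mean) < match_sigmas * math.sqrt(var):
            return True           # matched a background distribution
        cumulative += weight
        if cumulative >= bg_weight:
            break                 # remaining Gaussians model the foreground
    return False

# Dominant mode around intensity 100 (e.g. the floor), minor modes elsewhere.
mixtures = [(0.6, 100.0, 25.0), (0.3, 180.0, 25.0), (0.1, 30.0, 25.0)]
print(is_background(102.0, mixtures))  # True: close to the dominant mode
print(is_background(30.0, mixtures))   # False: only a low-weight mode fits
```

A full implementation would also update the weights, means, and variances online with each new frame.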


Other approaches are based on tracking. One tracking-based approach comprises three levels of
processing, starting with background modeling at the lowest level using feedback from higher
levels, followed by person and object tracking at the middle level, and finally the person-object
split at the highest level to classify an object as abandoned. Another system considers the
abandonment of an object to comprise four sub-events, from the arrival of the owner with the
object to his departure without it; whenever the system detects an unattended object, it traces
back in time to identify the person who brought it into the scene and thus identifies the owner.
Tracking and detection of carried objects has also been performed using histograms, where the
missing colors in the ratio histogram between the frames with and without the object are used to
identify the abandoned object. Yet another method performs tracking through a trans-dimensional
Markov Chain Monte Carlo model, which is suitable for tracking generic blobs and thus incapable
of distinguishing between humans and other objects as the subject of tracking; the output of this
tracking system therefore needs further processing before luggage can be identified and labeled
as abandoned.

1.4 PROBLEM DEFINITION AND SCOPE


The problem that this work attempts to solve is concerned with the tracking and detection
of suspicious objects in surveillance videos of large public areas. A suspicious object here is
defined as one that is carried into the scene by a person and left behind while the person exits the
scene. To be classified as abandoned, such an object should remain stationary in the scene for a
certain period of time without any second party showing any apparent interest in it. In addition to
detecting abandoned objects, this system also detects removed objects as any objects that were in
the scene long enough to become part of the background and were subsequently removed.
The scope of this task is to identify any such suspicious objects in real time by looking for
certain pre-defined patterns in the incoming video stream so as to raise an alarm without
requiring any human intervention. It is assumed that the data about the scene is available from
only one camera and from a fixed viewpoint.
The objectives of this system can be summarized as follows:
It should be able to identify abandoned objects in real time and therefore must employ
efficient and computationally inexpensive algorithms.

It should be able to detect the face of the person who left the object or luggage at a
particular place (e.g., railway stations, airports, or other crowded places).
It should be robust against illumination changes, cluttered backgrounds, occlusions, ghost
effects and rapidly varying scenes.
It should try to maximize the detection rate while at the same time minimizing false
positives.

1.5 PROJECT MOTIVATION


In the last decade the topic of automated surveillance has become very important in the
field of activity analysis. Within the realm of automated surveillance, much emphasis is being
laid on the protection of transportation sites and infrastructure like airports and railway stations.
These places are the lifeline of the economy and thus particularly prone to attacks. It is difficult
to monitor these places manually because of two main reasons. First, it would be very labor
intensive and thus a very expensive proposition. Second, it is not humanly possible to
continuously monitor a scene for an extended period of time, as it requires a lot of concentration.
Therefore, as a step in that direction, we need an automated system that can assist the security
personnel in their surveillance tasks. Since a common threat to any infrastructure establishment
is through a bomb placed in an abandoned bag, we look at the problem of detecting potentially
abandoned objects in the scene.

1.6 ORGANISATION OF THE PROJECT


The report is organized as follows:
Abstract
Table of Contents
List of Figures
Chapter 1: Introduction
Chapter 2: Literature survey
Chapter 3: Project description
Chapter 4: Software
Chapter 5: Block diagram & description

Chapter 6: Simulation and results


Chapter 7: Future scope & Conclusion
References
Appendix

1.7 CONCLUSION
In this chapter, a brief introduction to the project, currently existing technologies, the
analysis of previous research in this area, the problem definition and scope, the motivation, and
the organization of the project have been presented.


CHAPTER 2
LITERATURE SURVEY
2.1 INTRODUCTION
In this section, the various analyses and research carried out in the field of abandoned object
detection, along with the results already published, are discussed, taking into account the
various parameters and the extent of the project.
Visual surveillance is an important computer vision research problem. As more and more
surveillance cameras appear around us, the demand for automatic methods of video analysis is
increasing. Such methods have broad applications, including surveillance for safety in public
transportation, public areas, schools, and hospitals. Automatic surveillance is also essential in
the fight against terrorism, and it has therefore become a very popular subject with a large
scope across various areas. Researchers have carried out numerous laboratory experiments and
field observations to illuminate this field.
Their findings and suggestions are reviewed here:

2.2 LITERATURE REVIEW


An abandoned object detection system for real-time video [1] is useful for live processing as
well as post-processing of a video stream. The system detects abandoned objects in highly
sensitive areas and works at a resolution of 320x240 (QVGA). It is interactive in nature and
works in coordination with the CCTV controller. It is implemented using dual background
segmentation, and the result is highly robust and self-adaptive. The system costs less than
other systems that include filters, infrared sensors, and noise detection and reduction
techniques. It is very useful in multiple environments because of its portability, and its
simple mathematical processing makes it available in a real-time environment along with
post-processing.
In the abandoned object detection system based on dual background segmentation [2], detection
is based on a simple mathematical model and works efficiently at the QVGA resolution at which
most CCTV cameras operate. The pre-processing involves a dual-time background subtraction
algorithm that dynamically updates two backgrounds, one after a very short interval (less than
half a second) and the other after a relatively longer duration. The framework of this algorithm
is based on the approximate median model. An algorithm for tracking abandoned objects even
under occlusion is also proposed. Results show that the system is robust to variations in
lighting conditions and in the number of people in the scene. In addition, the system is simple
and computationally less intensive, as it avoids the use of expensive filters while
achieving better detection results.
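The dual-time scheme of [2] can be illustrated with the sketch below, which combines an approximate-median update with a dual-background test for static foreground: a pixel that still disagrees with the slowly updated background but has already been absorbed into the quickly updated one belongs to an object that has stopped moving. The update step and difference threshold are hypothetical parameters, not values from the paper:

```python
def approx_median_update(bg, frame, step=1):
    """Approximate median model: nudge each background pixel one step
    toward the current frame value, so over time each pixel converges
    to the temporal median of its observations."""
    return [b + step if f > b else b - step if f < b else b
            for b, f in zip(bg, frame)]

def static_foreground(short_bg, long_bg, frame, threshold=20):
    """A pixel that differs from the slowly updated background but has
    already been absorbed into the quickly updated one belongs to an
    object that stopped moving, i.e. a candidate abandoned object."""
    return [abs(f - lb) > threshold and abs(f - sb) <= threshold
            for sb, lb, f in zip(short_bg, long_bg, frame)]

frame    = [100, 200, 100]   # a bag now sits at the middle pixel
short_bg = [100, 198, 100]   # fast background has already learned the bag
long_bg  = [100, 100, 100]   # slow background still remembers the floor
print(static_foreground(short_bg, long_bg, frame))  # [False, True, False]
```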
In robust unattended and stolen object detection by fusing simple algorithms [3], a new
approach for detecting unattended or stolen objects in surveillance video is used. It is based on
the fusion of evidence provided by three simple detectors. As a first step, the moving regions in
the scene are detected and tracked. These regions are then classified as static or dynamic
objects and as human or non-human objects. Finally, objects detected as static and non-human are
analyzed with each detector, and the data from these detectors are fused to select the best
detection hypotheses. Experimental results show that the fusion-based approach increases
detection reliability compared to the individual detectors and performs well across a variety of
scenarios while operating in real time.
Detection of Abandoned Objects in Crowded Environments [4], describes a general
framework that recognizes the event of someone leaving a piece of baggage unattended in
forbidden areas. This approach involves the recognition of four sub-events that characterize the
activity of interest. When an unaccompanied bag is detected, the system analyzes its history to
determine its most likely owner(s), where the owner is defined as the person who brought the
bag into the scene before leaving it unattended. Through subsequent frames, the system keeps a
lookout for the owner, whose presence in or disappearance from the scene defines the status of
the bag, and decides the appropriate course of action. The system was successfully tested on the
i-LIDS dataset.
An abandoned object detection system using background segmentation [5] presents the measures
and algorithms for an abandoned object detection system. Terrorist activities have threatened the
lives of common people and have raised fears about the lack of security and the threat of losing
lives in such activities. Here, the main emphasis is on tracking and on a wide-ranging alert
system that can be used for the detection of explosive objects that may cause harm.
Today, video surveillance is commonly used in security systems but requires more
intelligent and more robust technical approaches. Such systems are used at airports, railway
stations, and other public places. Precise and accurate detection of left luggage or abandoned
objects is very essential in today's world. In this context, a scene observed by a multi-camera
system and involving multiple actors is considered. The object is first detected by background
subtraction. The system consists of four blocks: video conversion, blob detection, object
tracking, and alarm. Initially, the live video stream is segmented into frames. The matrix
generated from each image contains the value of every pixel; by taking these pixel values and
performing various operations on them, the abandoned object can be detected, after which it can
be tracked and an alarm raised. The problem of determining whether an object has been left
unattended is solved by analyzing the output of the tracking system in a detection process.
Extraction of stable foreground image regions for unattended luggage detection [6] is a
novel approach to the detection of stationary objects in a video stream. Stationary objects are
those separated from the static background but remaining motionless for a prolonged time;
extracting them from images is useful in the automatic detection of unattended luggage. The
algorithm is based on detecting image regions containing foreground pixels whose values are
stable in time and checking their correspondence with the detected moving objects. In the first
stage of the algorithm, the stability of individual pixels belonging to moving objects is tested
using a model constructed from vectors. Next, clusters of pixels with stable color and brightness
are extracted from the image and related to the contours of the detected moving objects. In this
way, stationary (previously moving) objects are detected. False contours of objects removed from
the background are also found and discarded from the analysis. The results of the
algorithm may be analyzed further by a classifier separating luggage from other objects and by
the decision system for unattended luggage detection. The main focus of the paper is on the
algorithm for extracting stable image regions; however, a complete framework for unattended
luggage detection is also presented to show that this approach provides data for successful event
detection. Experiments are reported in which the algorithm was validated using both standard
datasets and video recordings from a real airport security system.
The experiments performed using the real airport recordings confirmed that this
algorithm works with satisfactory accuracy and is suitable for implementation in a working
unattended luggage detection system. The most important drawback of this approach is that if the
input data coming from the background subtraction (BS) procedure is inaccurate, false results may
be obtained.
Robust abandoned object detection using region-level analysis [7] proposes a robust abandoned
object detection algorithm for real-time video surveillance. Unlike conventional approaches that
mostly rely on pixel-level processing, it performs region-level analysis in both background
maintenance and static foreground object detection. In background maintenance, region-level
information is fed back to adaptively control the learning rate; in static foreground object
detection, region-level analysis double-checks the validity of candidate abandoned blobs. Owing
to this analysis, the algorithm is robust against illumination changes, ghosts left by removed
objects, distractions from partially static objects, and occlusions. Experiments on nearly
130,000 frames of the i-LIDS dataset show the superior performance of this approach.
A simple approach for abandoned object detection [8] implements a low-cost solution to
detect abandoned and removed objects. To detect abandoned and stolen objects, the focus is on
determining static regions that have recently changed in the scene by performing background
subtraction. The time and presence of static objects, which may be either abandoned or stolen,
are marked on the video feed and may be used to alert security personnel. The system can detect
abandoned objects in real time; no special sensors are required, and the results are shown to be
satisfactory.
Unattended and stolen object detection based on relocating of existing objects [9] proposes
a new design and implementation method supporting a smart surveillance system that can
automatically detect abandoned and stolen objects in public places such as bus stations, train
stations, or airports. The system is implemented using image processing
techniques. When suspicious events (i.e., left-unattended or stolen objects) are detected, the
system alerts the people responsible, such as security guards or security staff. The detection
process consists of four major components: 1) video acquisition, 2) video processing, 3) event
detection, and 4) result presentation. Experiments are conducted to assess the following
qualities: 1) usability, to verify that the system can detect the object and recognize the
events, and 2) correctness, to measure the accuracy of the system. Finally, the tested datasets
in the experimental results measure and represent the correctness of the system as a percentage:
the correctness of object classification is approximately 76%, and the correctness of event
classification 83%.

2.3 SUMMARY OF LITERATURE REVIEW


An abandoned object detection system for real-time video is useful for live processing as
well as post-processing of a video stream; it works at a resolution of 320x240 (QVGA), is
implemented using dual background segmentation, and its simple mathematical processing makes it
available in a real-time environment along with post-processing. The abandoned object detection
system based on dual background segmentation uses a scheme based on the approximate median
model; it avoids the use of expensive filters while achieving better detection results. In
robust unattended and stolen object detection by fusing simple algorithms, real-time and robust
unattended or stolen object detection is presented; the fusion-based approach increases
detection reliability compared to the individual detectors and performs well across a variety of
scenarios while operating in real time. In detection of abandoned objects in crowded
environments, the system keeps a lookout for the owner, whose presence in or disappearance from
the scene defines the status of the bag, and decides the appropriate course of action; the
system was successfully tested on the i-LIDS dataset.

2.4 CONCLUSION
In the above chapter, we discussed several research papers in the area of abandoned object
detection. The research already carried out in this area shows that there are different types of
algorithms for abandoned object detection, but they do not address how to find the owner of the
luggage.

CHAPTER 3
PROJECT DESCRIPTION
3.1 INTRODUCTION
It mainly consists of two parts
1. Intrusion detection
2. Abandoned Object Detection

3.1.1 Intrusion Detection


Intrusion detection provides surveillance and automatically detects prohibited intrusion
scenarios and can be used on stationary cameras. There are several modes for detection:
Tripwire, Regional Entrance and Fence Trespassing.
In regional entrance detection, an alarm is produced when a person or vehicle moves within a
restricted area. Detection can be defined to prohibit entrance from a specific direction while
ignoring entry from other directions, and can be configured independently for vehicles and for
strangers, each with distinct directional criteria.
In tripwire and fence trespassing, an alarm is produced when a person or vehicle breaches a
demarcation line. Detection can be specified to prohibit any crossover or to allow movement in a
single direction only. Tripwire detection ignores movement parallel to the specified lines and
triggers only when a line is crossed. The tripwire feature allows the definition of more than
one line per scene and multiple segments per single line.
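The tripwire test described above reduces to checking on which side of the line an object lies in consecutive frames: a sign change of the cross product means the line was crossed, while parallel movement keeps the sign unchanged. The sketch below illustrates the idea for an infinite line; a full implementation would also check that the crossing falls within the wire's endpoints:

```python
def side(a, b, p):
    """Sign of the cross product (b - a) x (p - a): which side of the
    directed line a->b the point p lies on (0 means exactly on it)."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def crossed_tripwire(a, b, prev_pos, curr_pos):
    """True when an object's previous and current positions lie on
    opposite sides of the tripwire a->b; parallel movement is ignored."""
    return side(a, b, prev_pos) * side(a, b, curr_pos) < 0

wire_a, wire_b = (0, 0), (10, 0)                          # horizontal tripwire
print(crossed_tripwire(wire_a, wire_b, (5, -2), (5, 3)))  # True: crossed
print(crossed_tripwire(wire_a, wire_b, (2, 2), (8, 2)))   # False: parallel
```

Because the sign of the first term also tells which side the object started on, the same test supports the single-direction rule mentioned above.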


In this project, the Viola-Jones face detection algorithm (in MATLAB) is used to detect the
owner; the intruder here is the person who left the bag or luggage in the respective scenario.
When an object is detected as abandoned, either an alarm is raised or a message is sent to the
authorities.

3.1.2 Abandoned Object Detection


An alarm is produced when an item (package, debris, baggage, etc.) is deposited or
appears in a controlled area. Additionally, detection can be configured to ignore items that are
attended by a nearby person. This helps to reduce the need for patchy roving patrols and provides
rapid detection and pre-alarm recording so that the owners of abandoned luggage can be located
more quickly. It also provides detection of disguised objects that may be overlooked by passing
patrols or regarded as familiar.
Baggage can be detected not only when it is carried into the scene by a person but also when
it is dropped or thrown into the scene from off-camera. Additionally, baggage can be detected
even if it appears while the scene is temporarily blocked. The detection of abandoned objects
offers three levels to match the requirements of the scene: unattended detection in sterile
areas, detection in semi-crowded areas, and detection in crowded scenarios for the presence of
stationary baggage. It is ideal for identifying the appearance of potential baggage bombs,
rail-line obstruction hazards, traffic lane/runway debris, dumping/littering, fallen rocks, etc.
In the proposed system, a background subtraction detection method is used.

3.2 TOOL USED

MATLAB R2015a installed in PC.


3.3 CONCLUSION
In this chapter project description for owner detection, abandoned object detection and
the tools used for doing this project has been presented

CHAPTER 4
SOFTWARE
4.1 INTRODUCTION
The software used in this project is MATLAB R2015a.

4.2 MATLAB R2015a


4.2.1 The Language of Technical Computing
Millions of engineers and scientists worldwide use MATLAB to analyze and design the
systems and products transforming our world. MATLAB is used in automobile active safety systems,
interplanetary spacecraft, health monitoring devices, smart power grids, and LTE cellular
networks. It is used for machine learning, signal processing, image processing, computer vision,
communications, computational finance, control design, robotics, and much more.

4.2.2 Math. Graphics. Programming.


The MATLAB platform is optimized for solving engineering and scientific problems. The
matrix-based MATLAB language is the world's most natural way to express computational
mathematics. Built-in graphics make it easy to visualize and gain insights from data. A vast
library of prebuilt toolboxes lets you get started right away with algorithms essential to your
domain. The desktop environment invites experimentation, exploration, and discovery. These
MATLAB tools and capabilities are all rigorously tested and designed to work together.

4.2.3 Scale. Integrate. Deploy.


MATLAB helps you take your ideas beyond the desktop. You can run your analyses on
larger data sets and scale up to clusters and clouds. MATLAB code can be integrated with other
languages, enabling you to deploy algorithms and applications within web, enterprise, and
production systems.

4.2.4 Key Features

High-level language for scientific and engineering computing

Desktop environment tuned for iterative exploration, design, and problem-solving

Graphics for visualizing data and tools for creating custom plots

Apps for curve fitting, data classification, signal analysis, and many other domain-specific tasks

Add-on toolboxes for a wide range of engineering and scientific applications

Tools for building applications with custom user interfaces

Interfaces to C/C++, Java, .NET, Python, SQL, Hadoop, and Microsoft Excel

Royalty-free deployment options for sharing MATLAB programs with end users

4.2.5 Why MATLAB?


MATLAB is the easiest and most productive software for engineers and scientists.
Whether you're analyzing data, developing algorithms, or creating models, MATLAB provides
an environment that invites exploration and discovery. It combines a high-level language with a
desktop environment tuned for iterative engineering and scientific workflows.

4.2.6 MATLAB Speaks Math


The matrix-based MATLAB language is the world's most natural way to express
computational mathematics. Linear algebra in MATLAB looks like linear algebra in a textbook.
This makes it straightforward to capture the mathematics behind your ideas, which means your
code is easier to write, easier to read and understand, and easier to maintain.


You can trust the results of your computations. MATLAB, which has strong roots in the
numerical analysis research community, is known for its impeccable numerics. A MathWorks
team of 350 engineers continuously verifies quality by running millions of tests on the MATLAB
code base every day.
MATLAB does the hard work to ensure your code runs quickly. Math operations are
distributed across multiple cores on your computer, library calls are heavily optimized, and all
code is just-in-time compiled. You can run your algorithms in parallel by changing for-loops into
parallel for-loops or by changing standard arrays into GPU or distributed arrays. Run parallel
algorithms in infinitely scalable public or private clouds with no code changes.
The MATLAB language also provides features of traditional programming languages,
including flow control, error handling, object-oriented programming, unit testing, and source
control integration.

4.2.7 MATLAB Is Designed for Engineers and Scientists


MATLAB provides a desktop environment tuned for iterative engineering and scientific
workflows. Integrated tools support simultaneous exploration of data and programs, letting you
evaluate more ideas in less time.

You can interactively preview, select, and preprocess the data you want to import.

An extensive set of built-in math functions supports your engineering and scientific
analysis.

2D and 3D plotting functions enable you to visualize and understand your data and
communicate results.

MATLAB apps allow you to perform common engineering tasks without having to
program. Visualize how different algorithms work with your data, and iterate until you've got the
results you want.


The integrated editing and debugging tools let you quickly explore multiple options,
refine your analysis, and iterate to an optimal solution.

You can capture your work as sharable, interactive narratives.


Comprehensive, professional documentation written by engineers and scientists is always
at your fingertips to keep you productive. Reliable, real-time technical support staff answers your
questions quickly. And you can tap into the knowledge and experience of over 100,000
community members and MathWorks engineers on MATLAB Central, an open exchange for
MATLAB and Simulink users.
MATLAB and add-on toolboxes are integrated with each other and designed to work
together. They offer professionally developed, rigorously tested, field-hardened, and fully
documented functionality specifically for scientific and engineering applications.

4.2.8 MATLAB Integrates Workflows


Major engineering and scientific challenges require broad coordination to take ideas to
implementation. Every handoff along the way adds errors and delays.
MATLAB automates the entire path from research through production. You can:

Build and package custom MATLAB apps and toolboxes to share with other MATLAB
users.

Create standalone executables to share with others who do not have MATLAB.

Integrate with C/C++, Java, .NET, and Python. Call those languages directly from
MATLAB, or package MATLAB algorithms and applications for deployment within web,
enterprise, and production systems.

Convert MATLAB algorithms to C, HDL, and PLC code to run on embedded devices.

Deploy MATLAB code to run on production Hadoop systems.


MATLAB is also a key part of Model-Based Design, which is used for multidomain simulation,
physical and discrete-event simulation, and verification and code generation.

Figure 4.1: A MATLAB window in which the code for the project is written

4.3 CONCLUSION
The software part of the proposed system is implemented in MATLAB, i.e., the system is tested
using MATLAB. The different functions and features of MATLAB are discussed in this section.


CHAPTER 5
BLOCK DIAGRAM AND DESCRIPTION


Figure 5.1: block diagram of the proposed system

5.1 BLOCK DIAGRAM DESCRIPTION


Figure 5.1 shows the block diagram (flow chart) of the proposed system. The problem
is divided into different modules.
First the video input is given to the system; in the real-time case this comes from a
CCTV camera. Here the tests are done using recorded videos of real situations, with the same
resolution as the video from a CCTV camera.
In the second stage the video is extracted into frames so that it can be analyzed frame by
frame; a single frame is analyzed at a time.
In the third stage we check for any faces in the incoming frames; if a face is found, that
frame is saved to a memory location and the co-ordinates of the faces are kept in an array.
The next stage checks whether the frame is the first frame or not. If it is the first frame, it
is stored as the background image after a conversion from RGB to grayscale. Each subsequent
frame is also converted to grayscale and compared with the stored background image. If any
change is detected, that is, if any new object (moving or stationary) appears in a later frame, it is
saved as the foreground image. If there is no change, the result of background subtraction will
be zero, i.e., we will get a black image.
After change detection we have to ignore the moving objects and concentrate on the
stationary objects that were not in the scene before, so we suppress the motion changes using a
thresholding method and find the stationary objects. Stationary objects of very small area are
discarded, because they are likely to be noise, and only objects within a certain range of area are
kept. After this operation the resulting image is a binary image in which the detected objects
appear white and everything else black.
Now we calculate the centroids of the objects, as well as of the faces that were saved
earlier to a particular location.
Then the stationary objects are tracked in order to detect when one is abandoned: if an
object remains alone, without the presence of its owner, for a particular time delay, the object is
termed abandoned. When an object is detected as abandoned, we find the face centroid at
minimum distance from the object centroid in order to identify the face of the owner.


Finally, we raise an alarm or pass a message to the authority upon detection of the
abandoned object, and display the frame from the video in which the face of the owner is marked
inside a box.
If no object is detected as abandoned, the process is repeated, looking for any change in
the incoming frames, i.e., looking for a foreground image. This process repeats.

5.2 PROPOSED ALGORITHM


The system architecture is shown in figure (5.1). It represents the process of the system.
The system obtains a video from a video surveillance camera (e.g. CCTV) or a video file. Then it
detects objects using image processing techniques. The output of the system is an event
classification result acquired from a decision-making stage. The result of the system processing
can be viewed via a user interface, which is a TV screen or a computer screen.

A. Video Acquisition
This unit imports the video from a video stream and captures it as a sequence of frames.
1) Video Stream: This method receives a streaming video from a file or a CCTV camera.
Common video file formats such as MP4 and AVI are supported. So far, this work handles
only one video, from one camera or one video file, at a time.
2) Sequence Frame: After the program reads the video file, it takes and processes each
image by querying frames from the video file.
3) Capture Image Displaying: A window is created in which the captured images from the
camera are shown.

B. Store the face frames & co-ordinates


This section saves the faces in each frame, along with the face co-ordinates. The System
object used to detect faces is the vision.CascadeObjectDetector System object, which is
explained in detail below.

vision.CascadeObjectDetector: Detect objects using the Viola-Jones algorithm

Construction
detector = vision.CascadeObjectDetector creates a System object, detector, that detects
objects using the Viola-Jones algorithm. The ClassificationModel property controls the type of
object to detect. By default, the detector is configured to detect faces.
detector = vision.CascadeObjectDetector(MODEL) creates a System object, detector,
configured to detect objects defined by the input string, MODEL. The MODEL input describes
the type of object to detect. There are several valid MODEL strings, such as 'FrontalFaceCART',
'UpperBody', and 'ProfileFace'. See the ClassificationModel property description for a full list of
available models.
detector = vision.CascadeObjectDetector(XMLFILE) creates a System object, detector,
and configures it to use the custom classification model specified with the XMLFILE input.
The XMLFILE can be created using the trainCascadeObjectDetector function or OpenCV (Open
Source Computer Vision) training functionality. You must specify a full or relative path to
the XMLFILE, if it is not on the MATLAB path.
detector = vision.CascadeObjectDetector (Name, Value) configures the cascade object
detector object properties. You specify these properties as one or more name-value pair
arguments. Unspecified properties have default values.
To detect a feature:

1. Define and set up your cascade object detector using the constructor.

2. Call the step method with the input image, I, the cascade object detector object, detector,
and any optional properties. See the syntax below for using the step method.

Use the step syntax with the input image, I, the selected cascade object detector object, and any
optional properties to perform detection.


BBOX = step(detector, I) returns BBOX, an M-by-4 matrix defining M bounding boxes
containing the detected objects. This method performs multiscale object detection on the input
image, I. Each row of the output matrix, BBOX, contains a four-element vector, [x y width
height], that specifies in pixels the upper-left corner and size of a bounding box. The input
image, I, must be a grayscale or truecolor (RGB) image.
BBOX = step(detector, I, roi) detects objects within the rectangular search region specified
by roi. You must specify roi as a 4-element vector, [x y width height], that defines a rectangular
region of interest within image I. Set the 'UseROI' property to true to use this syntax.

The Viola-Jones algorithm is explained below.

Viola-Jones Face Detection Algorithm

Feature types and evaluation

The characteristics of the Viola-Jones algorithm which make it a good detection algorithm are:

Robust - very high detection rate (true-positive rate) and a consistently low false-positive
rate.

Real time - for practical applications, at least 2 frames per second must be processed.

Face detection only (not recognition) - the goal is to distinguish faces from non-faces
(detection is the first step in the recognition process).


Figure 5.2: feature types used by Viola and Jones

The algorithm has four stages:


1. Haar Feature Selection
2. Creating an Integral Image
3. Adaboost Training
4. Cascading Classifiers
The features sought by the detection framework universally involve the sums of image
pixels within rectangular areas. As such, they bear some resemblance to Haar basis functions,
which have been used previously in the realm of image-based object detection. However, since
the features used by Viola and Jones all rely on more than one rectangular area, they are
generally more complex. Figure 5.2 illustrates the four different types of features used in the
framework. The value of any given feature is the sum of the pixels within clear rectangles
subtracted from the sum of the pixels within shaded rectangles. Rectangular features of this sort
are primitive when compared to alternatives such as steerable filters. Although they are sensitive
to vertical and horizontal features, their feedback is considerably coarser.

1. Haar Features: All human faces share some similar properties. These regularities may be
matched using Haar features.
A few properties common to human faces:

The eye region is darker than the upper cheeks.

The nose bridge region is brighter than the eyes.

Composition of properties forming matchable facial features:

Location and size: eyes, mouth, bridge of nose

Value: oriented gradients of pixel intensities

The four features matched by this algorithm are then sought in the image of a face (see
Figure 5.2).
Rectangle features:

Value = (sum of pixels in black area) - (sum of pixels in white area)

Three types: two-, three- and four-rectangle features; Viola & Jones used two-rectangle
features

For example: the difference in brightness between the white and black rectangles over a
specific area

Each feature is related to a specific location in the sub-window

2. Integral Image: An image representation called the integral image evaluates rectangular
features in constant time, which gives them a considerable speed advantage over more
sophisticated alternative features. Because each feature's rectangular area is always adjacent to
at least one other rectangle, it follows that any two-rectangle feature can be computed in six
array references, any three-rectangle feature in eight, and any four-rectangle feature in nine.
The integral image at location (x, y) is the sum of the pixels above and to the left of (x, y),
inclusive:

    ii(x, y) = sum of i(x', y') over all x' <= x and y' <= y

where i(x', y') is the pixel value of the original image.
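As an illustration of this constant-time evaluation (a Python/NumPy sketch written for this
discussion, not part of the report's MATLAB implementation; function names are ours), the
integral image is a double cumulative sum, after which any rectangle sum needs at most four
array references:

```python
import numpy as np

def integral_image(img):
    """Integral image: ii[y, x] = sum of img[0..y, 0..x], inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    """Sum of pixels in a rectangle using at most four references into ii.
    Indices are guarded so rectangles touching row/column 0 work too."""
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
# Sum of the 2x2 block at rows 1-2, cols 1-2: 5 + 6 + 9 + 10 = 30
print(rect_sum(ii, 1, 1, 2, 2))
```

A two-rectangle feature is then just the difference of two such sums, regardless of the
rectangle sizes.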

Learning Algorithm
The speed with which features may be evaluated does not adequately compensate for
their number, however. For example, in a standard 24x24 pixel sub-window there are a total of
M = 162,336 possible features, and it would be prohibitively expensive to evaluate them all
when testing an image. Thus, the object detection framework employs a variant of the learning
algorithm AdaBoost to both select the best features and to train classifiers that use them. This
algorithm constructs a strong classifier as a linear combination of weighted simple weak
classifiers.

Each weak classifier is a threshold function based on the feature f_j:

    h_j(x) = -s_j if f_j(x) < θ_j, and s_j otherwise

The threshold value θ_j and the polarity s_j in {-1, +1} are determined in the training, as well as
the coefficients α_j.

Here a simplified version of the learning algorithm is reported:

Input: Set of N positive and negative training images with their labels (x_i, y_i), where y_i = 1
if image i is a face and y_i = -1 if not.

1. Initialization: assign a weight w_1^i = 1/N to each image i.
2. For each feature f_j, with j = 1, ..., M:
   1. Renormalize the weights such that they sum to one.
   2. Apply the feature to each image in the training set, then find the optimal threshold θ_j
      and polarity s_j that minimize the weighted classification error, i.e. the weighted
      fraction of images for which h_j(x_i) differs from y_i.
   3. Assign a weight α_j to h_j that is inversely proportional to its error rate ε_j (for
      example α_j = (1/2) ln((1 - ε_j)/ε_j)). In this way the best classifiers are considered
      more.
   4. The weights for the next iteration, w_{j+1}^i, are reduced for the images that were
      correctly classified.
3. Set the final (strong) classifier to h(x) = sgn(sum over j of α_j h_j(x)).
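The per-round stump search in step 2 can be illustrated with a toy Python sketch (scalar
features and hypothetical data invented for this example; not the report's code): try each
candidate threshold and polarity, keep the pair with the lowest weighted error, and compute its
vote weight.

```python
import math

def best_stump(feature_vals, labels, weights):
    """One boosting round over a single scalar feature: try each value as
    threshold theta with polarity s in {-1, +1}; the weak classifier
    predicts h(x) = -s if f(x) < theta, else s.
    Returns (theta, s, weighted_error)."""
    best = (None, None, float('inf'))
    for theta in feature_vals:
        for s in (-1, 1):
            err = sum(w for f, y, w in zip(feature_vals, labels, weights)
                      if (-s if f < theta else s) != y)
            if err < best[2]:
                best = (theta, s, err)
    return best

def alpha(err):
    """Vote weight, inversely related to the error rate (err must be
    strictly between 0 and 1)."""
    return 0.5 * math.log((1 - err) / err)

# Toy data: "faces" (y = +1) have large feature values.
f = [0.1, 0.2, 0.8, 0.9]
y = [-1, -1, 1, 1]
w = [0.25] * 4
theta, s, err = best_stump(f, y, w)
print(theta, s, err)   # the data is separable, so err is 0.0
```

A full AdaBoost loop would repeat this over all M features, reweight the images, and
accumulate the weighted stumps into the strong classifier.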

Cascade architecture

On average only 0.01% of all sub-windows are positive (faces).

Without a cascade, equal computation time would be spent on all sub-windows, whereas
most of the time should be spent only on potentially positive sub-windows.

A simple 2-feature classifier can achieve an almost 100% detection rate with a 50%
false-positive (FP) rate.


That classifier can act as a 1st layer of a series to filter out most negative windows

2nd layer with 10 features can tackle harder negative-windows which survived the 1st
layer, and so on

A cascade of gradually more complex classifiers achieves even better detection rates. The
evaluation of the strong classifiers generated by the learning process can be done quickly, but
it isn't fast enough to run in real time. For this reason, the strong classifiers are arranged in a
cascade in order of complexity, where each successive classifier is trained only on those
selected samples which pass through the preceding classifiers. If at any stage in the cascade a
classifier rejects the sub-window under inspection, no further processing is performed and the
search continues with the next sub-window. The cascade therefore has the form of a
degenerate tree. In the case of faces, the first classifier in the cascade, called the attentional
operator, uses only two features to achieve a false negative rate of approximately 0% and a
false positive rate of 40%. The effect of this single classifier is to reduce by roughly half the
number of times the entire cascade is evaluated.
In cascading, each stage consists of a strong classifier, so all the features are grouped into
several stages, where each stage has a certain number of features.

The job of each stage is to determine whether a given sub-window is definitely not a face or may
be a face. A given sub-window is immediately discarded as not a face if it fails in any of the
stages.
The cascade architecture has interesting implications for the performance of the
individual classifiers. Because the activation of each classifier depends entirely on the behavior
of its predecessor, the false positive rate for an entire cascade of K stages is:

    F = f_1 * f_2 * ... * f_K

where f_i is the false positive rate of stage i. Similarly, the detection rate is:

    D = d_1 * d_2 * ... * d_K

where d_i is the detection rate of stage i.


Thus, to match the false positive rates typically achieved by other detectors, each
classifier can get away with having surprisingly poor performance. For example, for a 32-stage
cascade to achieve a false positive rate of 10^-6, each classifier need only achieve a false positive
rate of about 65% (0.65^32 ≈ 10^-6). At the same time, however, each classifier needs to be
exceptionally capable if it is to achieve adequate detection rates. For example, to achieve a
detection rate of about 90%, each classifier in the aforementioned cascade needs to achieve a
detection rate of approximately 99.7% (0.997^32 ≈ 0.9).
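These compounded rates are easy to check numerically (plain arithmetic, not part of the
proposed system):

```python
# Per-stage rates compound multiplicatively across a 32-stage cascade.
stages = 32
fp_per_stage = 0.65     # seemingly poor per-stage false-positive rate
det_per_stage = 0.997   # very high per-stage detection rate

cascade_fp = fp_per_stage ** stages    # overall false-positive rate, ~1e-6
cascade_det = det_per_stage ** stages  # overall detection rate, ~0.91
print(cascade_fp, cascade_det)
```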

Advantages of the Viola-Jones Algorithm

Extremely fast feature computation

Efficient feature selection

Scale- and location-invariant detector

Instead of scaling the image itself (e.g. pyramid filters), the features are scaled.

Such a generic detection scheme can be trained for detection of other types of objects
(e.g. cars, hands)

Disadvantages of the Viola-Jones Algorithm

The detector is most effective only on frontal images of faces.

It can hardly cope with a 45° face rotation, either around the vertical or the horizontal
axis.

It is sensitive to lighting conditions.

We might get multiple detections of the same face, due to overlapping sub-windows.

MATLAB code for using the vision.CascadeObjectDetector object on pictures



function [ ] = Viola_Jones_img( Img )
% Viola_Jones_img( Img )
%   Img - input image
%   Example how to call the function:
%   Viola_Jones_img(imread('name_of_the_picture.jpg'))
faceDetector = vision.CascadeObjectDetector;
bboxes = step(faceDetector, Img);
figure, imshow(Img), title('Detected faces'); hold on
for i = 1:size(bboxes,1)
    rectangle('Position', bboxes(i,:), 'LineWidth', 2, 'EdgeColor', 'y');
end
end

C. Object Detection:
Object detection is the process of finding the area of interest as per the user's
requirement. Here we have proposed an algorithm for object detection using the frame difference
method (one of the background subtraction algorithms). The steps are:
a) Read all the image frames generated from the video, which are stored in a variable or on a
storage medium.
b) Convert them from colour to greyscale images using rgb2gray().
c) Store the first frame as the background image.
d) Calculate the difference as |frame[i] - background frame|.
e) If the difference is greater than a threshold (Th), the pixel is considered part of the
foreground; otherwise it belongs to the background (no change is detected).
f) Update the value of i by incrementing it by one.
g) Repeat steps d) to f) up to the last image frame.
h) End the process.
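The steps above can be sketched as follows (an illustrative Python/NumPy version written for
this discussion; the system's actual implementation, in MATLAB, is listed in the Appendix):

```python
import numpy as np

def detect_foreground(frames, threshold=0.1):
    """Frame-difference background subtraction.
    frames: list of greyscale images (float arrays in [0, 1]).
    The first frame is stored as the background; every later frame is
    compared against it, and pixels whose absolute difference exceeds
    the threshold are marked as foreground."""
    background = frames[0]
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame - background)
        masks.append(diff > threshold)
    return masks

# Toy 4x4 sequence: an "object" appears in the second frame.
bg = np.zeros((4, 4))
frame2 = bg.copy()
frame2[1:3, 1:3] = 0.9           # new stationary object (2x2 pixels)
masks = detect_foreground([bg, frame2])
print(masks[0].sum())            # 4 foreground pixels detected
```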


D. Post Processing:
The object detected in the previous phase may have connectivity problems and may also
contain holes, which makes it less useful for object representation. Therefore some post-
processing is needed to handle the holes and the connectivity of pixels within the object region.
Mathematical morphological analysis is one post-processing approach, which enhances the
segmented image in order to improve the result. In the proposed method we apply erosion and
dilation iteratively so that an object appears clearly in the foreground while the remaining
useless blobs are removed. Morphological operations are useful to obtain the relevant
components from the image; these components may be the object boundary, region, shape,
skeleton, etc.
Dilation: Dilation is an increasing transform, used to fill small holes and narrow chasms in the
objects.
Erosion: Erosion, as a morphological transformation, can be used to find the contours of
objects. It shrinks or reduces the object region.
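The two operations can be sketched in a few lines (an illustrative Python/NumPy version with
a 3x3 square structuring element; the proposed system itself uses MATLAB's imdilate and
related functions, as listed in the Appendix):

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element: a pixel
    becomes 1 if any pixel in its 3x3 neighbourhood is 1."""
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + mask.shape[0],
                          1 + dx:1 + dx + mask.shape[1]]
    return out

def erode(mask):
    """Binary erosion, implemented here as the complement of dilating
    the complement: a pixel survives only if its 3x3 neighbourhood
    (within the image) is entirely 1."""
    return ~dilate(~mask)

m = np.zeros((5, 5), dtype=bool)
m[2, 2] = True                    # single isolated pixel (noise)
print(dilate(m).sum())            # grows to a 3x3 block: 9 pixels
print(erode(m).sum())             # the lone pixel is removed: 0 pixels
```

Erosion followed by dilation (opening) removes isolated noise pixels while roughly
preserving the shape of larger blobs, which is exactly the clean-up used here.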

E. Feature Selection:
Features such as the centroid of an object and its height and width are selected so that
it is easy to plot the location of non-rigid bodies/objects from frame to frame. The proposed
method evaluates the centroid of the detected object in each frame. It is assumed that after the
morphological operations there will not be any false object. The centroid of an object in a
two-dimensional frame can then be calculated as the average of the x and y coordinates of the
pixels belonging to the object:

Cx = (total moments in x-direction) / (total area) ------ (1)

Cy = (total moments in y-direction) / (total area) ------ (2)

F. Object Representation:
Here we use the centroid and a rectangular box covering the object boundary to
represent the object. After calculating the centroid, the width Wi and height Hi of the object are
found by extracting the pixel positions Pxmax and Pxmin, which have the maximum and
minimum values of the X coordinate within the object. Similarly, for the Y coordinate, Pymax
and Pymin are calculated.
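Equations (1)-(2) and the width/height extraction can be sketched as follows (an illustrative
Python/NumPy version, not the report's MATLAB code, which obtains the same quantities via
regionprops):

```python
import numpy as np

def centroid_and_box(mask):
    """Centroid (Cx, Cy) per equations (1)-(2) -- the mean of the object
    pixels' x and y coordinates -- plus the width and height taken from
    the extreme coordinates (Pxmax/Pxmin and Pymax/Pymin)."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    width = xs.max() - xs.min() + 1
    height = ys.max() - ys.min() + 1
    return cx, cy, width, height

mask = np.zeros((6, 6), dtype=bool)
mask[1:4, 2:6] = True             # a 3-row by 4-column object
cx, cy, w, h = centroid_and_box(mask)
print(cx, cy, w, h)               # 3.5 2.0 4 3
```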

G. Trajectory Plot:
After the process of object detection using the frame differencing method, the detected
components are given as input to the tracking process to plot the trajectory. The frame
differencing algorithm gives all the pixel values of the detected object. The centroids of the
objects are calculated using equations (1) and (2). Here the input is the pixel values of the
object and the output is the rectangular area around the object. This process calculates the
centroid, height and width of the object for the purpose of trajectory plotting.

H. Video Analytics Processing:


Segmentation is the process of detecting changes and extracting the relevant changes for
further analysis and qualification. Pixels that have changed from their previous values are
referred to as "foreground pixels"; those that do not change are called "background pixels". The
segmentation method used here is background subtraction: the image pixels remaining after the
background has been subtracted are the foreground pixels. The key factor used to identify
foreground pixels is the "degree of change" in segmentation, and it can vary depending on the
application.
The segmentation result is one or more foreground blobs, where a blob is simply a collection of
connected pixels.


I. Tracking
The next process is tracking the different blobs so as to find which blobs correspond to
abandoned objects. The first step is to create a set, Track, whose elements have three variables:
blobProperties, hitCount and missCount. The next step is to analyze the incoming image for all
its blobs. If the area change and the centroid position change, compared to any element of the
Track set, are below a threshold value, we increment that element's hitCount and reset its
missCount to zero; otherwise we create a new element in the Track set, initializing the
blobProperties variable with the properties of the incoming blob and setting hitCount and
missCount to zero. We then run a loop through all the elements of the set: if the hitCount goes
above a user-defined threshold value, an alarm is triggered; if the missCount goes above a
threshold, we delete the element from the set. These steps are repeated until there are no
incoming images.
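The Track-set update described above can be sketched as follows (an illustrative Python
version; the field names, distance measure and all threshold values are hypothetical
placeholders chosen for this example, not parameters taken from the report):

```python
def update_tracks(tracks, blobs, dist_thresh=40.0, area_thresh=200.0,
                  alarm_hits=50, max_miss=10):
    """One tracking step over the Track set.
    tracks: list of dicts {'cx','cy','area','hit','miss'} (mutated in place);
    blobs:  list of dicts {'cx','cy','area'} from the current frame.
    Returns the tracks whose hit count crossed the alarm level."""
    matched = [False] * len(tracks)
    for b in blobs:
        for i, t in enumerate(tracks):
            # City-block distance used for simplicity in this sketch.
            close = abs(t['cx'] - b['cx']) + abs(t['cy'] - b['cy']) < dist_thresh
            similar = abs(t['area'] - b['area']) < area_thresh
            if close and similar:
                t['hit'] += 1
                t['miss'] = 0
                matched[i] = True
                break
        else:  # no existing track matched: start a new one
            tracks.append({**b, 'hit': 0, 'miss': 0})
            matched.append(True)
    for i, t in enumerate(tracks):
        if not matched[i]:
            t['miss'] += 1
    tracks[:] = [t for t in tracks if t['miss'] <= max_miss]
    return [t for t in tracks if t['hit'] >= alarm_hits]

tracks = []
for _ in range(60):                # the same blob stays put for 60 frames
    alarms = update_tracks(tracks, [{'cx': 10, 'cy': 10, 'area': 850}])
print(len(alarms))                 # 1 -> the stationary blob raises an alarm
```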

J. Alarm and Display


We use the Raise-alarm flag from the previous units and highlight the part of the video
for which the alarm has been raised. We also display the face of the person who abandoned the
luggage at that particular location. The face of the person is captured by the following procedure:
we save the faces and their co-ordinates from each frame and calculate their centroids as well as
the object centroid. When an object is detected as abandoned, we find the face centroid at
minimum distance from the object centroid, display that particular frame from the video, and
indicate the face of the person inside a box.
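The minimum-distance search can be sketched as follows (illustrative Python; the coordinates
are invented for the example):

```python
import math

def nearest_face(object_centroid, face_centroids):
    """Return the index of the saved face whose centroid is closest
    (by Euclidean distance) to the abandoned object's centroid."""
    ox, oy = object_centroid
    return min(range(len(face_centroids)),
               key=lambda i: math.hypot(face_centroids[i][0] - ox,
                                        face_centroids[i][1] - oy))

faces = [(5.0, 5.0), (50.0, 40.0), (200.0, 10.0)]
print(nearest_face((48.0, 44.0), faces))   # 1 -> the second saved face
```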

5.3 CONCLUSION
An object tracking algorithm for video pictures, based on image segmentation and pattern
matching of the segmented objects between frames in a simple feature space, is proposed.


CHAPTER 6
SIMULATION AND RESULTS
6.1 INTRODUCTION
The simulation of owner detection and abandoned object detection is done using
MATLAB version R2015a, considering the background image in a video of 300 frames. The
simulation results for all the techniques are explained. Initially the moving objects in the video
images are tracked based on image segmentation, background subtraction and object detection
techniques. The simulation results of the algorithms are shown below.
Abandoned object detection: The sample frames from the input video sequence are shown in
Figure 6.1. The abandoned objects that are detected are shown in Figure 6.2a; the abandoned
object found is marked in red colour. The results obtained for owner detection are shown in
Figure 6.2b; the face of the owner is marked using a green coloured box.

Figure 6.1a: sample frames from the video

Figure 6.1b: sample frames from the video showing face detection


Figure 6.2a: abandoned object detection (object is bounded by red box)


Figure 6.2b: identified owner face (owner face is bounded by green box)

6.2 ADVANTAGES
Produces few false alarms and missed detections
Provides on-time security
No special sensors are required
It can also identify the owner's face


6.3 DISADVANTAGES
In a high-density scenario, the object may be hidden from the camera view for most of
the time, leading to a failure in detection.

6.4 CONCLUSION
Simulation of the proposed system using MATLAB R2015a is done, and the results are
satisfactory. The simulation results using a video are shown in the figures above, implying that
the system works well on different video streams of practical situations. A real-time
implementation was also attempted; object detection worked well, but the owner of the luggage
was sometimes missed because of variations in some parameters. This system can be considered
a foundation for a truly robust framework that only requires a bit of calibration to perform well
in practically any scenario.


CHAPTER 7
FUTURE SCOPE & CONCLUSION
Owing to the modular nature of this system, it is quite easy to add more sophisticated
methods to any of its modules. The relative simplicity of the tracking algorithm suggests that a
DSP implementation is possible.
We are glad to have completed this project work successfully and satisfactorily. One of
the shortcomings of the system is occlusion, since this project uses a single camera view from a
fixed point. Occlusion can be avoided by using multiple camera views and a robust algorithm for
object detection. There were also some problems in the face detection due to environmental
changes, but the testing results gave a satisfactory output. We therefore conclude that this
system can be considered a foundation for a truly robust framework that only requires a bit of
calibration to perform well in practically any scenario.


REFERENCES
[1] A. Kulkarni, S. Kulkarni, A. Patil, and G. Patil, "An Abandoned Object Detection from Real
Time Video," International Journal of Scientific & Engineering Research, vol. 5, issue 5,
May 2014, ISSN 2229-5518.
[2] F. Porikli, Y. Ivanov, and T. Haga, "Robust Abandoned Object Detection Using Dual
Foregrounds," EURASIP Journal on Advances in Signal Processing, vol. 2008, 2008.
[3] M. Bhargava et al., "Detection of abandoned objects in crowded environments," Proc. of
IEEE Conf. on Advanced Video and Signal Based Surveillance, pp. 271-276, 2007.
[4] M. Spengler and B. Schiele, "Automatic Detection and Tracking of Abandoned Objects,"
Joint IEEE International Workshop on Visual Surveillance and PETS, 2003.
[5] P. L. Venetianer, Z. Zhang, W. Yin, and A. J. Lipton, "Stationary Target Detection Using the
ObjectVideo Surveillance System," IEEE International Conference on Advanced Video and
Signal Based Surveillance, London, UK, September 2007.
[6] R. Cucchiara et al., "Detecting Moving Objects, Ghosts, and Shadows in Video Streams,"
IEEE Trans. on Pattern Analysis and Machine Intelligence, 25(10):1337-1342, 2003.
[7] H. H. Liao, J. Y. Chang, and L. G. Chen, "A localized approach to abandoned luggage
detection with foreground-mask sampling," IEEE International Conference on Advanced
Video and Signal Based Surveillance, 2008, pp. 132-139.
[8] Y. Tian et al., "Robust detection of abandoned and removed objects in complex surveillance
videos," IEEE Trans. on Systems, Man, and Cybernetics, Part C: Applications and Reviews,
vol. PP(99), pp. 1-12, 2010.
[9] M. Beynon, D. Hook, M. Seibert, A. Peacock, and D. Dudgeon, "Detecting Abandoned
Packages in a Multi-camera Video Surveillance System," IEEE International Conference on
Advanced Video and Signal-Based Surveillance, 2003.
[10] N. Bird, S. Atev, N. Caramelli, R. Martin, O. Masoud, and N. Papanikolopoulos, "Real
Time, Online Detection of Abandoned Objects in Public Areas," IEEE International
Conference on Robotics and Automation, May 2006.
[11] S. Ferrando, G. Gera, and C. Regazzoni, "Classification of Unattended and Stolen Object in
Video-Surveillance System," IEEE International Conference on Video and Signal Based
Surveillance (AVSS '06), 2006.
[12] http://in.mathworks.com/products/matlab/

42 | P a g e
Department Of ECE, GECI

Abandoned Object Detection

APPENDIX
MATLAB code for the proposed system
clc;clear all;close all;
videoFReader = vision.VideoFileReader('Train.avi');
videoPlayer = vision.VideoPlayer;
count=1;cor_mat=[];centroids={};count_cent=1;prev_cent=[];flag_check=1;
faceDetector = vision.CascadeObjectDetector();perm_flag=0;got_count=600;
bb=[];face_frame=[];face_count=1;
permanent=[];%58 imshow
while count<=300

%~isDone(videoFReader) %

frame = step(videoFReader);
bbox = step(faceDetector,frame);
if ~isempty(bbox) && count<got_count
    bb=[bb; bbox]; face_frame{face_count}=frame;
    face_count=face_count+1;
end
videoOut = insertObjectAnnotation(frame,'rectangle',bbox,'Face');
step(videoPlayer, videoOut);
if count==150
    clc;
end
% figure(2),imshow(frame)
if count==1
prev_frame=frame;
bcknd=frame;
% save bcknd bcknd
count=count+1;
continue;
end
% Change relative to the background frame (moving + stationary objects)
stat_change=abs(frame-bcknd);
bw_stat_change=im2bw(stat_change,.1);
bw_stat_change = imclose(bw_stat_change, strel('rectangle', [17, 3]));
pos_zeros=find(bw_stat_change==0);
% Change relative to the previous frame (moving objects only)
chnge=abs(frame-prev_frame);
% Stationary change = background change with frame-to-frame motion removed
fnl_chnge=abs(stat_change-chnge);
fnl_chnge(pos_zeros)=0;
% figure(2),imshow(fnl_chnge)
bw=im2bw(chnge,.1);

%
%

mask = imdilate(bw, strel('rectangle', [17,3]));


mask = imopen(mask, strel('rectangle', [5,5])); %was commented
mask = imfill(mask, 'holes');
figure(1),imshow(mask)

% Label moving blobs; their centroids later veto nearby "stationary" blobs
[L,noc]=bwlabel(mask);
props=regionprops(L);
cent=cat(1,props.Centroid);
bound_box=cat(1,props.BoundingBox);
x=cent(:,1);
y=cent(:,2);
% hold on
% plot(x,y,'r*')
% hold off
% pause(.2);
bw2=im2bw(fnl_chnge,.1);
mask2 = imdilate(bw2, strel('rectangle', [17,3]));%was 17 instead 27
mask2 = imopen(mask2, strel('rectangle', [5,5]));
mask2 = imfill(mask2, 'holes');
%
figure(1),imshow(mask2)
[L_stat,noc_stat]=bwlabel(mask2);

figure(2),imshow(frame)
if noc_stat==0
pause(.1);
prev_frame=frame;
count=count+1;
continue;
end
%figure(1),imshow(label2rgb(L))
props2=regionprops(L_stat);
area2=cat(1,props2.Area);

cent2=cat(1,props2.Centroid);
bound_box2=cat(1,props2.BoundingBox);
x2=cent2(:,1);
y2=cent2(:,2);
%fnl_mask=abs(mask2-mask);
%figure(1),imshow(fnl_mask)
%pause(.3);
% Drop stationary blobs within 40 px of a moving blob (still being carried)
D = pdist2(cent,cent2,'euclidean');
[r,c]=find(D<40);
val=unique(c);
for i=1:length(val)
L_stat(L_stat==val(i))=0;
end
L_stat=imclearborder(L_stat);
[L_stat,noc_stat]=bwlabel(L_stat);
props_stat=regionprops(L_stat);
area_stat=cat(1,props_stat.Area);

pos=area_stat<800; % drop blobs smaller than the expected object size
pos=find(pos);
for i=1:length(pos)
L_stat(L_stat==pos(i))=0;
end
%
pos_gr=area_stat>900; % drop blobs larger than the expected object size
pos_gr=find(pos_gr);
for i=1:length(pos_gr)
L_stat(L_stat==pos_gr(i))=0;
end
%
[L_stat,noc_last]=bwlabel(L_stat);
L_stat2=L_stat;
if noc_last>0
dup=rgb2gray(frame);
dupback=rgb2gray(bcknd);
props_L=regionprops(L_stat);
bound_L=cat(1,props_L.BoundingBox);

for i=1:noc_last
z=zeros(size(L_stat));
c=bound_L(i,1); r=bound_L(i,2); c_last=c+bound_L(i,3); r_last=r+bound_L(i,4);
z(floor(r):floor(r_last),floor(c):floor(c_last))=1;
% Mask copies so each candidate is compared against the full images
dup_i=dup; dup_i(~z)=0;
dupback_i=dupback; dupback_i(~z)=0;
b_diff=dup_i-dupback_i;
b_diff=im2bw(b_diff,.1);
pr=regionprops(b_diff,'Area');
if ~isempty(pr)
ar=max([pr.Area]); % largest blob area; pr.Area alone errors when pr holds several regions
if ar<29
pos_L=find(L_stat==i);
L_stat2(pos_L)=0;
end
end
end
hold on
props_last=regionprops(L_stat2);
bound=cat(1,props_last.BoundingBox);
cent_last=cat(1,props_last.Centroid);
if ~isempty(bound)
bound_one=bound(1,:);
for i=1:size(bound,1)
x=bound(i,1);y=bound(i,2);w=bound(i,3);h=bound(i,4);
line([x,x+w],[y,y]);
line([x,x],[y,y+h]);
line([x,x+w],[y+h,y+h]);
line([x+w,x+w],[y,y+h]);

end
plot(cent_last(:,1),cent_last(:,2),'r*')
centroids=cent_last(1,:);
if flag_check==1
prev_send=centroids; D=100; flag_check=0; % first candidate: seed the tracker
else
D = pdist2(prev_send,centroids,'euclidean');
prev_send=centroids;
end
if D<5 % centroid moved less than 5 px since the last check
flag=flag+1;
else
flag=0;
end
if flag>=10 % stationary for 10 consecutive checks: confirmed abandoned
permanent=bound_one; got_count=count-10;
x=floor(permanent(1,1)); y=floor(permanent(1,2)); w=floor(permanent(1,3)); h=floor(permanent(1,4));
perm_reg=frame(y:y+h,x:x+w,:);
bw_perm=im2bw(perm_reg,.6);

figure(1),imshow(perm_reg)
flag=0; perm_flag=1;

line([x,x+w],[y,y]);
line([x,x],[y,y+h]);
line([x,x+w],[y+h,y+h]);
line([x+w,x+w],[y,y+h]);
end
count_cent=count_cent+1;
hold off
end
pause(.01);
% pause(.3);
% hold on
% plot(x2,y2,'r*')
hold off
pause(.2);
%

figure(2),imshow(fnl_chnge)
pause(.0001)

% step(videoPlayer,frame);
% % figure(1),imshow(frame)
% % pause(.00001);
end
hold on
if ~isempty(permanent)
x=floor(permanent(1,1)); y=floor(permanent(1,2)); w=floor(permanent(1,3)); h=floor(permanent(1,4));
cur_reg=frame(y:y+h,x:x+w,:);
bw_cur=im2bw(cur_reg,.6);
%
figure(2),imshow(cur_reg)
if corr2(cur_reg(:,:,1),perm_reg(:,:,1))>.5 % object still present in the scene
line([x,x+w],[y,y],'Color',[1 0 0]);
line([x,x],[y,y+h],'Color',[1 0 0]);
line([x,x+w],[y+h,y+h],'Color',[1 0 0]);
line([x+w,x+w],[y,y+h],'Color',[1 0 0]);
pause(.2);
end
hold off
end
prev_frame=frame;
count=count+1;
end
release(videoFReader);
release(videoPlayer);
if perm_flag==1
% Owner identification: the detected face nearest the abandoned object
dist_values=sqrt((bb(:,1)-x).^2 + (bb(:,2)-y).^2);
[~,pos]=min(dist_values); % min returns a single index even on ties
attacker_cord=bb(pos,:);
face=face_frame{1,pos};
figure,imshow(face),title('Identified Owner Face')
x=attacker_cord(1,1); y=attacker_cord(1,2); w=attacker_cord(1,3); h=attacker_cord(1,4);
line([x,x+w],[y,y],'Color',[0 1 0]);
line([x,x],[y,y+h],'Color',[0 1 0]);
line([x,x+w],[y+h,y+h],'Color',[0 1 0]);
line([x+w,x+w],[y,y+h],'Color',[0 1 0]);
%%%%%%%% ALARM: play a harsh audible alert
N=10000;
s=zeros(N,1);
for a=1:N
s(a)=tan(a); %*sin(-a/10);
end
Fs=2000; % increase to speed up the sound, decrease to slow it down
soundsc(s,Fs) % soundsc scales the signal into the audible range
end
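The abandonment test in the listing above — a candidate blob whose centroid moves less than 5 px between checks, 10 checks in a row — can be isolated as a small function. The sketch below is an illustrative Python translation, not part of the project's MATLAB code; the name `stationary_streak` and its parameters are my own.

```python
import math

def stationary_streak(centroids, min_move=5.0):
    """Length of the trailing run of near-stationary detections.

    Mirrors the MATLAB logic above: the streak grows while the candidate
    centroid moves less than min_move pixels between consecutive checks,
    and resets to zero as soon as it jumps. A streak of 10 flags the
    object as abandoned.
    """
    streak = 0
    prev = None
    for c in centroids:
        if prev is not None:
            d = math.hypot(c[0] - prev[0], c[1] - prev[1])
            streak = streak + 1 if d < min_move else 0
        prev = c
    return streak

# A blob that jitters slightly, jumps away, then settles again:
track = [(0, 0), (1, 0), (1, 1), (50, 50), (50, 51), (51, 51)]
print(stationary_streak(track))  # -> 2: only the last two steps were small
```

Resetting (rather than decrementing) the streak on any large move is what keeps a carried bag from ever accumulating enough stationary checks to trigger the alarm.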
MATLAB code for real time implementation


clc;clear all;close all;
%creating object for camera
vid = videoinput('winvideo', 1, 'MJPG_1184x656');
%object for face detection
faceDetector = vision.CascadeObjectDetector();
%conversion to gray scale image
set(vid,'ReturnedColorSpace','grayscale');
%manual trigger
triggerconfig(vid, 'manual');
start(vid);
preview(vid);
pause();
stoppreview(vid);face_count=1;perm_flag=0;
bb=[];bbox=[];
for count=1:50
frame =getsnapshot(vid);
curr_frame=double(frame);
figure(1),imshow(uint8(curr_frame))
if perm_flag==0
%returns coordinates of face
bbox = step(faceDetector,frame);
end
if ~isempty(bbox)
hold on
x=bbox(1,1);y=bbox(1,2);w=bbox(1,3);h=bbox(1,4);
%saving coordinates of faces in each iteration
bb=[bb;bbox];
%saving the frame in which face is detected in each iteration
face_frame{face_count}=frame;
face_count=face_count+1;
%drawing rectangle around face
line([x,x+w],[y,y],'Color',[0 1 0]);
line([x,x],[y,y+h],'Color',[0 1 0]);
line([x,x+w],[y+h,y+h],'Color',[0 1 0]);
line([x+w,x+w],[y,y+h],'Color',[0 1 0]);
pause(.3);
hold off
end
% Make 1st frame as background frame.
if count == 1
bg_frame = curr_frame;
prev_frame=curr_frame;
end
cmp = abs(curr_frame - bg_frame);


%% Binarizing image to find the region of motion..
temp = max(cmp(:));
if temp > 70
thresh = (temp/2)-30;
else
thresh = temp;
end
stat_chnge = cmp > thresh;
% figure(2),imshow(stat_chnge)
% pause(.4);
motion_chng=abs(curr_frame-prev_frame);
tmp=max(motion_chng(:));
if tmp>70
thresh_motion=(tmp/2)-20;
else
thresh_motion=tmp;
end
motion_img=motion_chng>thresh_motion;
motion_img=imfill(motion_img,'holes');
% figure(3),imshow(motion_img)
[L,noc]=bwlabel(motion_img);
fnl_chng=stat_chnge;
if noc>0
%returning some properties of each white area
props=regionprops(L);
%accessing bounding boxes of objects
bound=cat(1,props.BoundingBox);
for bound_count=1:noc
x=floor(bound(bound_count,1)); y=floor(bound(bound_count,2)); x_limit=floor(bound(bound_count,3))+x; y_limit=floor(bound(bound_count,4))+y;
if x<1
x=1;
end

if x_limit>size(L,2)
x_limit=size(L,2);
end
if y<1
y=1;
end

if y_limit>size(L,1)
y_limit=size(L,1);
end
fnl_chng(y:y_limit,x:x_limit)=0;

end
end
figure(4),imshow(fnl_chng)
fnl_bw=bwareaopen(fnl_chng,2000);
fnl_bw=imfill(fnl_bw,'holes');
prop_final=regionprops(fnl_bw);
fnl_bound=cat(1,prop_final.BoundingBox);
if isempty(fnl_bound)
prev_frame=curr_frame;
continue
end
xf=fnl_bound(:,1);yf=fnl_bound(:,2);w=fnl_bound(:,3);h=fnl_bound(:,4);
if count>6
perm_flag=1;
figure(5),imshow(uint8(curr_frame))
pause(.3);
hold on
line([xf,xf+w],[yf,yf],'Color',[1 0 0]);
line([xf,xf],[yf,yf+h],'Color',[1 0 0]);
line([xf,xf+w],[yf+h,yf+h],'Color',[1 0 0]);
line([xf+w,xf+w],[yf,yf+h],'Color',[1 0 0]);
hold off
cent=cat(1,prop_final.Centroid);
cent=cent(:,1);
% break; % uncomment to stop after the first confirmed detection
end
prev_frame=curr_frame;
end
stop(vid);
delete(vid);
if perm_flag==1
x=cent(1,1);y=cent(1,2);
dist_values=sqrt((bb(:,1)-x).^2 + (bb(:,2)-y).^2);
[~,pos]=min(dist_values); % index of the face nearest the abandoned object
attacker_cord=bb(pos,:);
face=face_frame{1,pos};
figure,imshow(face),title('Identified Owner Face')
hold on
x=attacker_cord(1,1); y=attacker_cord(1,2); w=attacker_cord(1,3); h=attacker_cord(1,4);
line([x,x+w],[y,y],'Color',[1 0 0]);
line([x,x],[y,y+h],'Color',[1 0 0]);
line([x,x+w],[y+h,y+h],'Color',[1 0 0]);
line([x+w,x+w],[y,y+h],'Color',[1 0 0]);
hold off
%GSM MODULE INITIALISATION/WE CAN USE AN ALARM HERE
ser_obj = serial('COM4'); % depends on the serial port available
set(ser_obj,'BaudRate',9600,'DataBits',8, ...
'Parity','none','StopBits',1, 'FlowControl','none')
fopen(ser_obj);
fprintf(ser_obj,'%s','AT');
fprintf(ser_obj,13);
pause(15);
fprintf(ser_obj,'%s','AT+CMGF=1');
fprintf(ser_obj,13);
pause(15);
fprintf(ser_obj,'%s','AT+CMGS="9562421292"');
fprintf(ser_obj,13);
pause(15);
fprintf(ser_obj,'%s','Security threat :Attacker Found');
fprintf(ser_obj,26); % char 26 (Ctrl+Z) terminates the SMS body
fclose(ser_obj);
delete(ser_obj);
end
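For reference, the adaptive binarization rule used in the real-time script above (threshold at half the peak difference minus an offset when the peak exceeds 70, otherwise at the peak itself) can be expressed as a standalone function. This is an illustrative Python/NumPy sketch, not part of the MATLAB listing; the names `adaptive_change_mask`, `cap`, and `offset` are my own.

```python
import numpy as np

def adaptive_change_mask(diff, cap=70.0, offset=30.0):
    """Binarize an absolute frame difference with the script's adaptive rule.

    When the peak difference is large (a real change is present), threshold
    at half the peak minus an offset; otherwise threshold at the peak itself,
    so a quiet frame produces an empty mask.
    """
    peak = float(diff.max())
    if peak > cap:
        thresh = peak / 2.0 - offset
    else:
        thresh = peak
    return diff > thresh

# A flat difference image with one strongly changed pixel:
diff = np.zeros((4, 4))
diff[2, 2] = 100.0
mask = adaptive_change_mask(diff)
print(int(mask.sum()))  # -> 1: only the changed pixel survives
```

Tying the threshold to the peak rather than a fixed constant keeps sensor noise from being binarized as change when nothing in the scene has actually moved.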