
Journal for Research | Volume 02 | Issue 02 | April 2016

ISSN: 2395-7549

Video Summarization: Correlation for Summarization and Subtraction for Rare Event
Aashika Balakrishnan
UG Student
Department of Computer Engineering
SIES Graduate School of Technology, Nerul, Navi Mumbai

Lijitha Govindankutty
UG Student
Department of Computer Engineering
SIES Graduate School of Technology, Nerul, Navi Mumbai

Prof. Namrata Patel
Assistant Professor
Department of Computer Engineering
SIES Graduate School of Technology, Nerul, Navi Mumbai

Abstract
The ever-increasing number of surveillance camera networks deployed all over the world has not only generated high interest in algorithms that automatically analyze video footage, but has also raised new questions about how to efficiently manage the vast amount of information generated. The user may not have sufficient time to watch an entire video, or the whole of the video content may not be of interest; in such cases, the user may want to view only a summary of the video. In this paper, we present a video summarization technique developed to efficiently access the points of interest in video footage. The technique aims to eliminate sequences that contain no activity of significance. The system captures each frame from the video and processes it: if the frame is of interest it is retained, otherwise it is discarded, so the resultant video is very short. The proposed method is extended to rare event detection for security systems, where rare events refer to suspicious scenarios. The system considers a particular frame of interest from video footage taken at a given time and searches for actions across a particular area of interest specified by the user. The user is then notified about the objects and actions that occurred in the area of interest. This helps detect suspicious behavior that would otherwise have been deemed unsuspicious and gone unnoticed within a narrow timeframe.
Keywords: video summaries, video processing, video skimming, image processing
_______________________________________________________________________________________________________
I. INTRODUCTION

A video is nothing but a synchronous sequence of frames, each frame being a 2-D image, so the basic unit of a video is a frame. A video can also be thought of as a collection of scenes, where a scene is a collection of shots that share the same context, and a shot, in turn, is a collection of frames. In our project, we deal with the input video, its frames, and the output video [7].

Fig. 1: Concept of video and frames.


The input video footage is given to the MATLAB code in MP4 format. The video is converted into frames at a chosen time interval; we can skip as many frames as required, but here we consider every frame without skipping any. All the resultant frames are stored in a separate folder. From the extracted frames, we obtain the summarized frames in another folder, where redundant frames (frames in which no new action is performed) are removed. The summarized frames are then processed to pick out actions occurring in the area of interest across the entire video. For example, consider a locker in a room under camera surveillance for security reasons: the area of interest is the locker, and only the actions taking place near the locker matter.

All rights reserved by www.journalforresearch.org

61


We summarize the entire footage, then pick out actions or objects detected near the locker only, not in the surrounding areas covered by the video. The results comprise a summarized video, a folder with all the extracted frames, a folder with the summarized frames, and a folder with the actions or objects detected near the area of interest in the footage.
With the advent of digital multimedia, a lot of digital content such as movies, news, television shows, and sports is widely available. Due to advances in digital content distribution (direct-to-home satellite reception) and digital video recorders, this content can also be easily recorded. However, the user may not have sufficient time to watch the entire video (e.g., a user may want to watch just the highlights of a game), or the whole of the video content may not be of interest; in such cases, the user may want to view only a summary of the video.
Thus, the summary should convey as much information as possible about the occurrence of various incidents in the video. The method should also be general enough to work with videos of a variety of genres.
II. LITERATURE SURVEY
Several techniques and methodologies have been proposed for video summarization and rare event detection.
In RPCA-KFE [1], RPCA decomposes the input data into 1) a low-rank component that reveals the systematic information across the elements of the data set and 2) a set of sparse components, each of which contains distinct information about one element of the same data set. In the Unified Framework of [2], event summarization and rare event detection are performed jointly by transforming them into a graph editing problem. Keypoint-based Keyframe Selection [3] uses a keypoint-based framework to address the keyframe selection problem so that local features can be employed in selecting keyframes. The AJ Theft Prevention algorithm [4] generates an alarm in the form of a beep whenever it captures a frame in which a person is carrying a specific object. Graph modeling [5] obtains video summarization and scene detection using scene modeling and highlight detection. Two-Level Redundancy Detection for Personal Video Recorders [6] provides a video summarization function for grasping long original video content quickly.
This paper shows how to obtain video summarization using a very simple method: correlation for summarization and subtraction of matrices for object detection.
III. EXPERIMENTAL WORKS
Video processing is essentially image processing applied to the frames extracted as images. Conventionally, each frame is converted into matrix form and then into binary form (i.e., black-and-white images) for easier processing, and the matrix forms of the extracted images can then be compared or subtracted. In this paper, however, we introduce an even easier method for processing images for summarization, without the need to convert them into matrices: we simply consider the total RGB content of the images for comparison, by calculating the correlation value of the images.
The correlation coefficient is a number representing the similarity between two images with respect to their pixel intensities. Its maximum value is 1: a correlation coefficient of 1 indicates that two frames are exactly the same; otherwise it indicates their degree of similarity. Images that are too similar are discarded, and the threshold for deleting frames depends on the video and its RGB content.
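To make this measure concrete, a corr2-style Pearson correlation can be computed over two equally sized grayscale frames. The sketch below is in Python/NumPy rather than the paper's MATLAB, following Pearson's definition [8]; the example frames are assumptions for illustration.

```python
import numpy as np

def corr2(a, b):
    """Pearson correlation coefficient between two equally sized 2-D
    arrays, analogous to MATLAB's corr2."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

frame = np.arange(16, dtype=np.float64).reshape(4, 4)
print(corr2(frame, frame))   # identical frames -> 1.0
print(corr2(frame, -frame))  # inverted frame   -> -1.0
```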
Following are the steps for frame extraction and summarization of frames. The algorithm is named Correlation for Summarization and Subtraction for Rare Event (CSSR):
1) Define n as the number of frames needed for the entire video.
2) Read the input video and write out its frames, n in total.
3) Consider a = frame i and b = frame i+1.
4) Convert the frames from RGB to indexed form.
5) Find the correlation between a and b to determine their similarity: r = corr2(a, b).
6) Frames with a coefficient of 1 (exactly identical frames) are discarded.
7) Frames similar up to a particular threshold are retained; the others are deleted.
8) Store the area of interest, where any action performed is highly suspicious, as the background image. The area of interest is a cropped image, not the entire frame.
9) Subtract the summarized frames from this background image to obtain the action or object detected in that area. Note that the summarized frames need to be cropped before subtraction.
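The steps above can be sketched as follows in Python/NumPy (the paper's implementation is in MATLAB). The threshold value, the synthetic frames, and the interpretation of step 7 as "discard a frame whose similarity to its predecessor exceeds the threshold" are assumptions for demonstration.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.95  # assumed; tuned per video in practice

def corr2(a, b):
    """Pearson correlation between two equally sized 2-D arrays (step 5)."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def summarize(frames):
    """Steps 3-7: keep frame i+1 only when it differs enough from frame i.
    Identical frames (r == 1) and near-duplicates above the threshold
    are discarded."""
    kept = [frames[0]]
    for a, b in zip(frames, frames[1:]):
        if corr2(a, b) < SIMILARITY_THRESHOLD:
            kept.append(b)
    return kept

def detect_in_roi(background, frame, roi):
    """Steps 8-9: subtract the cropped background from the cropped frame."""
    top, bottom, left, right = roi
    return np.abs(frame[top:bottom, left:right].astype(float)
                  - background[top:bottom, left:right].astype(float))

# A static gradient background repeated, then one frame with an "action".
static = np.tile(np.arange(8.0), (8, 1))
moved = static.copy()
moved[2:4, 2:4] = 255.0
summary = summarize([static, static, static, moved])
print(len(summary))  # 2: the first frame plus the frame with the change
```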


Fig. 2: Pseudo code for summarization of frames

We use the indexed form of images because indexed images manage a digital image's colors in a limited fashion, saving computer memory and file storage while speeding up display refresh and file transfers; indexing is a form of vector quantization compression. The corr2 function computes the Pearson correlation to determine the similarity between images. It applies this definition to two-dimensional arrays, introducing a data type conversion to double and applying linear indexing to the array itself. The function is specifically developed to process pixel images, which are often represented by the uint8 data type [8].
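To make the indexed representation concrete, here is a minimal palette-based indexing sketch in Python/NumPy. MATLAB's rgb2ind performs a more sophisticated quantization; the two-colour palette and the sample image here are assumptions for illustration only.

```python
import numpy as np

# Assumed tiny palette: black and white (a real indexed image would use
# a larger, quantization-derived palette).
palette = np.array([[0, 0, 0], [255, 255, 255]], dtype=np.float64)

def rgb_to_indexed(image):
    """Map each RGB pixel to the index of its nearest palette colour."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    # squared distance from every pixel to every palette entry
    dists = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1).reshape(image.shape[:2])

img = np.array([[[10, 10, 10], [250, 240, 245]]], dtype=np.uint8)
print(rgb_to_indexed(img))  # [[0 1]]: dark pixel -> black, light -> white
```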
The algorithm above yields only the summarized frames; a summary video can be obtained from them as well. The summarized frames are placed into an array of frames and a loop is run over them. MATLAB's VideoWriter function is used to create a file in '.avi' or '.mp4' format, and the writeVideo function places all the frames into the video with a dynamic frame rate. The summary video obtained is approximately one-fifth the length of the original video.

Fig. 3: Data flow using CSSR technique


IV. ANALYSIS AND FUTURE SCOPE


Our method is quite simple to implement; however, it does not account for lighting changes. An illumination-invariant technique can be used to improve summarization efficiency, but such a technique will not detect rare environmental changes such as smoke or fog inside the room.
Each frame can be associated with its corresponding time in the video footage. This can be used to jump directly to the part of the video starting at the time associated with a frame of interest.
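This frame-to-time association is a simple division by the frame rate. A minimal sketch, assuming 30 fps footage; the frame indices are hypothetical:

```python
# Assumed frame rate of the input footage.
FPS = 30.0

def frame_time(frame_index):
    """Seconds into the original footage at which this frame occurs."""
    return frame_index / FPS

kept_indices = [0, 45, 120]                   # hypothetical summarized frames
print([frame_time(i) for i in kept_indices])  # [0.0, 1.5, 4.0]
```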
Screenshots
Consider a video taken against a plain background with two hands moving simultaneously. We need to summarize the video so that only the frames with moving hands are retained and the rest are discarded. Initially, 283 frames in total were extracted into the extracted-frames folder in Figure 4, and 25 frames resulted in the summarized-frames folder in Figure 5. The rare-event folder identified the hands separately and subtracted the background for object enhancement in Figure 6.
Consider a video whose background consists of a cupboard, as shown in Figure 7. The area where the cupboard is placed is the area of interest. Two people enter the room, and only one of them enters the area of interest, as shown in Figure 8. The rare-event folder identifies that person, as shown in Figure 9.

Fig. 4: Extracted frames with frame number

Fig. 5: Summarized frames

Fig. 6: Object separately detected.

Fig. 7: Entire background of the video and the area of interest


Fig. 8: Summarized frames: Two people in the room

Fig. 9: Person in the area of interest

V. CONCLUSION
In today's world, smooth functioning is important for any organization, and many technologies have emerged to support it. Security is especially important, and since considerable effort and time must be devoted to an organization's existing work, there needs to be a technology that looks after security systems automatically and alerts the organization. The proposed system helps automatically summarize and highlight activities in any video footage. Many papers have been published on video summarization; however, large-scale implementation has not yet been accomplished. The proposed system therefore provides an optimized and customized solution that reduces manual work, increases security and efficiency, and provides flexibility.
ACKNOWLEDGMENT
Every project begins with an idea and materializes through concrete effort. This work would not have been possible without the assistance, support, and collaboration of many people. It is indeed gratifying to have the privilege to express our deep sense of gratitude and appreciation to our esteemed project guide, Prof. Namrata Patel, for her mentorship and support over the past few months. Throughout this semester, Prof. Namrata Patel has always provided the perfect combination of encouragement and guidance, and her enthusiastic involvement in this work has been invaluable. We must also thank our advisor, Prof. Varsha Patil, for her insightful comments and recommendations during our research. Special thanks go to Prof. Rizwana Shaikh for her views on choosing our project. We also thank the teachers of our department, who gave the required support when needed; their encouragement and patience have been a godsend. Finally, we would like to thank our parents and classmates for their cooperation and suggestions.
REFERENCES
[1] Chinh Dang and Hayder Radha, "RPCA-KFE: Key Frame Extraction for Video Using Robust Principal Component Analysis," IEEE Transactions on Image Processing, vol. 24, no. 11, Nov. 2015.
[2] Junseok Kwon and Kyoung Mu Lee, "A Unified Framework for Event Summarization and Rare Event Detection from Multiple Views," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, Sep. 2015.
[3] Genliang Guan, Zhiyong Wang, Shiyang Lu, Jeremiah Da Deng, and David Dagan Feng, "Keypoint-Based Keyframe Selection," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 4, Apr. 2013.
[4] Ali Javed and Sidra Noman, "AJ Theft Prevention Alarm Based Video Summarization Algorithm," International Journal of Information and Education Technology, vol. 2, no. 1, Feb. 2012.
[5] Chong-Wah Ngo, Yu-Fei Ma, and Hong-Jiang Zhang, "Video Summarization and Scene Detection by Graph Modeling," IEEE Transactions on Circuits and Systems for Video Technology, vol. 15, no. 2, Feb. 2005.
[6] Yue Gao, Wei-Bo Wang, and Jun-Hai Yong, "A Video Summarization Tool Using Two-Level Redundancy Detection for Personal Video Recorders," IEEE Transactions on Consumer Electronics, vol. 54, no. 2, May 2008.
[7] K. M. Mahmoud, M. A. Ismail, and N. M. Ghanem, "VSCAN: An Enhanced Video Summarization Using Density-Based Spatial Clustering," in Image Analysis and Processing - ICIAP 2013, pp. 733-742, Springer Berlin Heidelberg, 2013.
[8] K. Pearson, "Mathematical Contributions to the Theory of Evolution. III. Regression, Heredity and Panmixia," Philosophical Transactions of the Royal Society of London, Series A, vol. 187, pp. 253-318, 1896.
