Guided by: Ms. Hetal Gaudani
Prepared by: Manali K. Shukla (07cp621) and Niraj A. Chandrani (07cp622)
AIM
The aim of this project is to develop a prototype drowsiness-detection system. The focus is on designing a system that accurately monitors the open or closed state of the driver's eyes in real time.
INTRODUCTION
Driver fatigue is a significant factor in a large number of vehicle accidents. Recent statistics estimate that annually 1,200 deaths and 76,000 injuries can be attributed to fatigue-related crashes. The development of technologies for detecting or preventing drowsiness at the wheel is a major challenge in the field of accident-avoidance systems. Because of the hazard that drowsiness presents on the road, methods need to be developed for counteracting its effects.
Fatigue can be detected in one of the following ways, or by combining them:
- Measuring changes in physiological signals, such as brain waves, heart rate, and eye blinking.
- Measuring physical changes such as sagging posture, leaning of the driver's head, and the open/closed state of the eyes.
- Monitoring the steering-wheel movement, accelerator or brake patterns, vehicle speed, lateral acceleration, and lateral displacement.
- Monitoring the response of the driver, by periodically requesting the driver to send a response to the system to indicate alertness.
By monitoring the eyes, it is believed that the symptoms of driver fatigue can be detected early enough to avoid a car accident. Hence we have combined open/closed-eye detection with blink detection to detect drowsiness: a sequence of face images is captured, the eye position is observed in each image, and the blink pattern is analyzed.
OVERVIEW OF IMPLEMENTATION
- Obtain image frames from the live video feed.
- Detect the face in the image frames.
- Locate the eye region and apply the eye-tracking algorithm.
- Detect drowsiness by comparing successive frames.
- Apply the processed information: trigger an alarm in case of fatigue.
CORE OF IMPLEMENTATION
The implementation of our project is divided into three core processes: image acquisition, image processing, and output generation.
IMAGE ACQUISITION → IMAGE PROCESSING → OUTPUT GENERATION
IMAGE ACQUISITION
This process gets data into MATLAB and passes it on to the other processes. It works in three phases, as shown.
The videoinput() function is used to create the video-input object, and set() is used to configure properties such as FrameGrabInterval. The frames from the live feed are then obtained into MATLAB memory.
IMAGE PROCESSING
The images obtained from image acquisition are processed here to get the desired output. Image processing also works in three phases, as shown:
SKIN COLOR SEGMENTATION → FACE DETECTION → EYE DETECTION
A pixel is classified as skin if its normalized hue h satisfies 0 < h < 0.11 or 0.90 < h < 1.00.
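The hue test above can be sketched as follows. This is a minimal illustration in Python rather than the project's MATLAB code, with the image represented as a 2-D list of normalized hue values:

```python
def is_skin(h):
    """Skin test on a normalized hue value, using the thresholds above."""
    return 0.0 < h < 0.11 or 0.90 < h < 1.00

def segment_skin(hue_image):
    """Binarize a 2-D list of hue values into a skin mask (1 = skin)."""
    return [[1 if is_skin(h) else 0 for h in row] for row in hue_image]

hues = [[0.05, 0.50],
        [0.95, 0.30]]
print(segment_skin(hues))  # [[1, 0], [1, 0]]
```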
FACE DETECTION
The job of face detection is to locate a face in the skin-color-segmented image. The phase starts with binarization, i.e. converting the obtained skin-color image to a binary image. This binary image is subjected to a morphological operation followed by connected-component analysis to obtain the face component. Using this face component we can design a mask that gives the entire face from the original image.
BINARIZATION → MORPHOLOGICAL OPERATION → FACE COMPONENT → MASK
Morphological operation:
- To separate the unnecessary joints.
- To segment the catchment-area-type regions created by the eyes and lips.
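The slides describe a watershed-style operation; as one common morphological operation that separates thin joints between blobs, a 3x3 binary erosion can be sketched in pure Python (a stand-in for illustration, not the project's exact MATLAB operation):

```python
def erode(img):
    """3x3 binary erosion: a pixel survives only if it and all 8 of its
    neighbours are 1. Thin one-pixel joints between blobs are removed.
    Border pixels are eroded away."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

square = [[1, 1, 1, 1],
          [1, 1, 1, 1],
          [1, 1, 1, 1],
          [1, 1, 1, 1]]
# Erosion keeps only the 2x2 inner core of the 4x4 square.
print(erode(square))
```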
Face component: fill the watershed components back into bigger single components. The biggest such component is the face.
Mask: obtain a rectangular bounding box around the biggest component. This is the required mask.
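The connected-component and mask steps can be sketched together: find the largest component of the binary image, then take its bounding box. A minimal pure-Python version using flood fill (the project itself would use MATLAB's image-processing routines):

```python
from collections import deque

def largest_component_bbox(img):
    """Find the largest 4-connected component of 1-pixels in a binary
    image (2-D list) and return its bounding box (top, left, bottom, right),
    or None if the image has no 1-pixels."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    best = None  # (size, bbox) of the largest component found so far
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] != 1 or seen[sy][sx]:
                continue
            # Flood-fill one component, tracking its size and extent.
            q = deque([(sy, sx)])
            seen[sy][sx] = True
            size, top, left, bottom, right = 0, sy, sx, sy, sx
            while q:
                y, x = q.popleft()
                size += 1
                top, bottom = min(top, y), max(bottom, y)
                left, right = min(left, x), max(right, x)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and img[ny][nx] == 1 \
                            and not seen[ny][nx]:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            if best is None or size > best[0]:
                best = (size, (top, left, bottom, right))
    return best[1] if best else None

img = [[1, 0, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
print(largest_component_bbox(img))  # (1, 2, 2, 3)
```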
EYE DETECTION
The job of eye detection is to locate the eye region, find the eyebrow and eyelashes, and compute the distance between them. This distance is sent to the output-generation phase. The technique we have used is known as the horizontal average intensity technique. The working flow of this module is shown on the next slide.
Both eye regions are handled separately; this avoids wrong results due to tilting of the face. We calculate the horizontal average intensity over each y-coordinate for both eye regions. Using the dips observed in the horizontal intensities, we segment out the dark regions.
The distance between the valleys satisfying the above condition is the required distance; it is found for both eyes.
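The horizontal-average-intensity computation can be sketched as below. The fixed intensity threshold used to pick out the dark bands is an assumption for illustration; the slides' exact valley condition is not reproduced:

```python
def row_averages(region):
    """Average intensity of each row (y-coordinate) of a grayscale eye region."""
    return [sum(row) / len(row) for row in region]

def valley_rows(avgs, threshold):
    """Rows whose average intensity dips below the threshold, i.e. dark
    bands such as the eyebrow and the eyelashes (threshold is an assumed
    stand-in for the slides' valley condition)."""
    return [y for y, a in enumerate(avgs) if a < threshold]

def eyebrow_lash_distance(region, threshold):
    """Distance in rows between the first dark band (eyebrow) and the
    last dark band (eyelashes); 0 if fewer than two dark rows are found."""
    v = valley_rows(row_averages(region), threshold)
    return v[-1] - v[0] if len(v) >= 2 else 0

region = [[200] * 4,   # skin
          [50] * 4,    # dark band: eyebrow
          [200] * 4,
          [200] * 4,
          [200] * 4,
          [60] * 4,    # dark band: eyelashes
          [200] * 4]
print(eyebrow_lash_distance(region, threshold=100))  # 4
```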
Figure: comparison of the horizontal-average-intensity graphs for an open eye and a closed eye.
OUTPUT GENERATION
The job of output generation, as the name suggests, is to give the output. It receives the distance between the eyebrow and the upper eyelashes from the eye-detection module and must decide whether this distance represents an open eye or a closed eye.
The average distance of the first five frames supplied to output generation is used as the reference. As each frame arrives, its distance is compared with the reference and the frame is accordingly sent to the eye-closed or eye-open module.
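The reference-and-compare logic can be sketched as follows. The 0.5 ratio deciding when a distance counts as closed is an assumption for illustration, not taken from the slides:

```python
def classify_frames(distances, closed_ratio=0.5):
    """Classify each frame's eyebrow-to-eyelash distance as 'open' or 'closed'.

    The first five distances are averaged to form the reference (as on the
    slides); closed_ratio is an assumed cutoff: a frame whose distance falls
    below closed_ratio * reference is treated as a closed eye.
    """
    reference = sum(distances[:5]) / 5
    labels = []
    for d in distances[5:]:
        labels.append('closed' if d < closed_ratio * reference else 'open')
    return labels

# Reference from the first five frames is 10; later frames well below
# half of that are flagged as closed.
print(classify_frames([10, 10, 10, 10, 10, 9, 4, 10, 3]))
# ['open', 'closed', 'open', 'closed']
```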
THANK YOU.