

Face detection refers to computer technology that can identify the presence of human faces within digital images. To work, face detection applications use machine learning and formulas known as algorithms to detect human faces within larger images. These larger images may contain numerous objects that are not faces, such as landscapes, buildings and other parts of the human body (e.g. legs, shoulders and arms).

Face detection is a broader term than face recognition. Face detection means only that a system can identify that a human face is present in an image or video. Face detection has several applications, only one of which is facial recognition. It can also be used to auto-focus cameras, and to count how many people have entered a particular area. It can even be used for marketing purposes: for example, advertisements can be displayed the moment a face is detected.

Face recognition can confirm identity. It is therefore used to control access to sensitive areas and resources.


While the process is somewhat complex, face detection algorithms often begin by
searching for human eyes. Eyes constitute what is known as a valley region and are
one of the easiest features to detect. Once eyes are detected, the algorithm might then
attempt to detect facial regions including eyebrows, the mouth, nose, nostrils and the
iris. Once the algorithm surmises that it has detected a facial region, it can then apply
additional tests to validate whether it has, in fact, detected a face.
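The "valley region" idea above can be illustrated with a minimal sketch: since eyes tend to appear as dark regions in the upper half of a face, a coarse first pass can simply flag dark pixels there as candidates for later validation. The toy image and the darkness threshold below are illustrative assumptions, not values from any real detector.

```python
import numpy as np

def find_eye_candidates(face, dark_thresh=60):
    """Coarse eye-candidate search: eyes appear as dark 'valley'
    regions in the upper half of a grayscale face image. Returns
    (row, col) coordinates of pixels darker than dark_thresh in the
    upper half, as a starting point for the additional validation
    tests described above."""
    h = face.shape[0] // 2
    upper = face[:h]  # eyes lie in the upper half of the face
    rows, cols = np.where(upper < dark_thresh)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy face: bright background with two dark 'eye' blobs (assumed layout).
face = np.full((20, 20), 200, dtype=np.uint8)
face[5:7, 4:6] = 30    # left eye
face[5:7, 14:16] = 30  # right eye
cands = find_eye_candidates(face)
```

A real detector would of course score whole regions rather than single pixels, but the search order (eyes first, then validation) is the same.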


One of the most important applications of face detection, however, is facial recognition. Face recognition describes a biometric technology that goes well beyond recognizing when a human face is present: it attempts to establish whose face it is. The process works using a computer application that captures a digital image of an individual's face (sometimes taken from a video frame) and compares it to images in a database of stored records. While facial recognition isn't 100% accurate, it can very accurately determine when there is a strong chance that a person's face matches someone in the database.
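The database-comparison step can be sketched as a nearest-neighbour search over feature vectors with a similarity threshold. Everything below is a made-up illustration: the gallery names, the 3-dimensional "embeddings", and the 0.8 threshold are assumptions, not parameters of any specific system.

```python
import numpy as np

def best_match(probe, gallery, threshold=0.8):
    """Compare a probe face feature vector against a gallery of
    enrolled vectors; return the identity with the highest cosine
    similarity, or None if no score reaches the threshold."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cos(probe, vec) for name, vec in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else (None, score)

# Hypothetical enrolled identities and a probe vector.
gallery = {"alice": np.array([1.0, 0.0, 0.2]),
           "bob":   np.array([0.1, 1.0, 0.0])}
probe = np.array([0.9, 0.1, 0.25])
name, score = best_match(probe, gallery)
```

The thresholded score is what makes the match probabilistic rather than absolute, matching the "strong chance" wording above.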

There are many applications of face recognition. It is already being used to unlock phones and specific applications, and it is also used for biometric surveillance: banks, retail stores, stadiums, airports and other facilities use facial recognition to reduce crime and prevent violence.

So in short, while all facial recognition systems use face detection, not all face detection
systems have a facial recognition component.

In general, face recognition requires certain conditions, such as fully frontal face images and consistent lighting. Furthermore, facial recognition might not be suitable for certain user populations, such as healthcare professionals wearing surgical masks or people with religious face veils, niqabs or winter scarves. Additionally, a person who is enrolled using their eye region can be readily recognized even when the rest of the face is partially obscured.
Reference: https://www.bioid.com/periocular-recognition/

Because eye recognition is less sensitive than face recognition to varied lighting conditions, it works well for applications with challenging lighting. The technology is also less sensitive to appearance changes such as facial hair or makeup. Eye recognition can always be combined with face recognition for increased accuracy, providing greater performance with the same intuitive user experience. User-friendly biometric recognition is the key to convenient strong authentication, which enables truly secure and trusted systems.

We here propose an in-plane alignment of a face using eye coordinates that are automatically found in a face image. In this way we aim to obtain, in future work, high recognition results with a face recognition algorithm, without the need to use full 3D modeling techniques.
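The in-plane angle can be computed directly from the two detected eye coordinates; a minimal sketch (the eye positions are made-up values, and image coordinates with y growing downward are assumed):

```python
import math

def in_plane_angle(left_eye, right_eye):
    """In-plane rotation angle (degrees) of a face from the two eye
    centres, given as (x, y) pixel coordinates. Rotating the image by
    the negative of this angle about the eye midpoint makes the eye
    line horizontal, i.e. aligns the face in-plane."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

angle = in_plane_angle((40, 60), (100, 60))    # level eyes
tilted = in_plane_angle((40, 60), (100, 120))  # right eye lower
```

Because only a single 2D rotation is needed, this avoids the full 3D modeling techniques mentioned above.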


Here, our novel hierarchical detector system based on eye-pair and eye detectors is explained. In this system, it is assumed that a face has already been detected in a picture by a face detector; we therefore focus only on the eye-pair and eye detection process before the alignment.
The system comprises two detection layers. In the first layer, the eye-pair detector searches for an eye-pair in an image containing a face. After the eye-pair is found, the eye detector in the second layer looks for the eyes within the eye-pair region. The eye detector therefore assumes its input image is an eye-pair image rather than a face image. Decreasing the search space hierarchically in this way has the advantage that the number of false positives can be greatly reduced. Both detectors use a sliding-window method to locate the object of interest and use a detector frame of fixed resolution, while the input image is rescaled over a predefined range of resolutions, preserving the aspect ratio of the detector frame.
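The search-space reduction from the two layers can be illustrated by counting sliding windows over the whole face versus over a previously detected eye-pair region. The image size, window size, stride, and the "detected" region below are all toy assumptions chosen just to make the comparison concrete.

```python
import numpy as np

def sliding_windows(image, win, step):
    """Yield (top, left, patch) for every window of fixed size `win`
    moved over `image` with stride `step` (the fixed detector frame)."""
    h, w = image.shape
    wh, ww = win
    for top in range(0, h - wh + 1, step):
        for left in range(0, w - ww + 1, step):
            yield top, left, image[top:top + wh, left:left + ww]

face = np.zeros((60, 60), dtype=np.uint8)  # toy face image
# Flat search: scan the whole face with the eye-sized frame.
flat = sum(1 for _ in sliding_windows(face, (8, 16), 4))

# Hierarchical search: suppose layer 1 localized this eye-pair region,
# so layer 2 only scans inside it.
eye_pair = face[10:26, 8:56]  # hypothetical layer-1 detection
hier = sum(1 for _ in sliding_windows(eye_pair, (8, 16), 4))
```

Far fewer windows are scored in the second layer, which is exactly why the hierarchy suppresses so many false positives.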

Eye Detector Dataset.

To construct the eye dataset, we first cropped the eye regions of the faces, around 400 in number, and then added mirrored versions of them to the eye dataset. To obtain negatives, we used two different methods: the first is automatic non-eye image collection using the initial eye ground-truth information, and the second is obtaining negatives by testing the system with our initially trained detector. We used approximately twice as many image patches (for both the positive and the negative set) as for the eye-pair dataset used in (Karaaba et al., 2014).

Further Additions. To make the system more robust to rotated faces, we rotated the face samples in the training sets using angles of 5, 10, 15 and 20 degrees, starting from the initial in-plane angle of the faces computed from the manually selected eye coordinates. After this, eye-pair and eye regions automatically cropped using the ground-truth information of the original cropped patches were added to the training set.

After we aggregated around 1,200 new eye-pairs, we tested the systems (eye and eye-pair detectors) on the training set of face images and collected more negatives. The final number of images in the eye-pair and eye detector datasets increased to 7,000 and 13,500, respectively.
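Rotating the training samples requires rotating the ground-truth coordinates consistently with the image, so the augmented crops stay aligned. A minimal sketch of that coordinate transform follows; the eye coordinates and rotation centre are made-up values, and the specific angles match the 5 to 20 degree range mentioned above.

```python
import numpy as np

def rotate_points(points, angle_deg, center):
    """Rotate 2-D points (e.g. ground-truth eye coordinates) about
    `center` by `angle_deg` with a standard rotation matrix, matching
    an image rotated by the same angle so that eye-pair and eye crops
    can be taken from the rotated training samples."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    pts = np.asarray(points, dtype=float) - center
    return pts @ rot.T + center

eyes = [(40.0, 60.0), (100.0, 60.0)]        # hypothetical ground truth
center = np.array([70.0, 60.0])             # midpoint between the eyes
augmented = {a: rotate_points(eyes, a, center) for a in (5, 10, 15, 20)}
```

Note that rotation about the eye midpoint preserves the inter-eye distance, so the crop size stays consistent across the augmented samples.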

Sample eye-pair pictures used to train the eye-pair detector (in original resolution) are shown in
Figure 1.

Sample eye and non-eye pictures (in original resolution) are shown in Figure 2.

To locate the eyes, the SVM is invoked on all windows of the sliding window with the appropriate feature vector extracted from the window patch, and finally the highest outputs of the SVM are selected as the locations of the eyes.

Figure 1: Sample eye-pair regions for training the eye-pair detector.

Figure 2: Sample eye (a) and non-eye (b) regions cropped from eye-pair image patches. Note that the non-eye regions may still contain eyes, but they are not very precisely located in the center.
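The "highest SVM outputs win" selection can be sketched with a plain linear scorer standing in for the trained SVM decision function. The toy eye-pair strip, the all-ones weight vector, and the window size and stride are illustrative assumptions only.

```python
import numpy as np

def locate_eyes(eye_pair_img, weights, win=(4, 4), step=2, k=2):
    """Score every fixed-size window of the eye-pair image with a
    linear classifier (a stand-in for the trained SVM's decision
    value) and return the top-k window positions as the detected
    eye locations."""
    h, w = eye_pair_img.shape
    scored = []
    for top in range(0, h - win[0] + 1, step):
        for left in range(0, w - win[1] + 1, step):
            patch = eye_pair_img[top:top + win[0], left:left + win[1]]
            score = float(np.dot(weights, patch.ravel()))  # decision value
            scored.append((score, (top, left)))
    scored.sort(reverse=True)          # highest SVM outputs first
    return [pos for _, pos in scored[:k]]

# Toy eye-pair strip with two bright 'eyes'; the weights favour
# bright patches, so the two eye windows score highest.
img = np.zeros((8, 20))
img[2:6, 2:6] = 1.0    # left eye
img[2:6, 14:18] = 1.0  # right eye
eyes = locate_eyes(img, np.ones(16))
```

A real system would use the learned SVM weights and a proper feature vector (e.g. raw intensities or gradient features) instead of this toy scorer, but the top-k selection over window scores is the same.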

In-Plane Rotational Alignment of Faces by Eye and Eye-Pair Detection, by M.F. Karaaba, O. Surinta, L.R.B. Schomaker and M.A. Wiering, Institute of Artificial Intelligence and Cognitive Engineering (ALICE), University of Groningen, Nijenborgh 9, Groningen 9747AG, The Netherlands. {m.f.karaaba, o.surinta, l.r.b.schomaker, m.a.wiering}@rug.nl
Important for diagram site research paper.


Face recognition is all in the eyes

Although it may not come as too much of a surprise to the biometrics industry, a new study from a researcher at the University of Barcelona shows that our brains extract important information for face recognition principally from the eyes, and secondly from the mouth and nose. This result was obtained by analyzing several hundred face images in a way similar to that of the brain.

Imagine a photograph showing your friend's face. Although you might think that every single detail in his face matters for recognizing him, numerous experiments have shown that the brain prefers a rather coarse resolution instead, irrespective of the distance at which a face is seen. Until now, the reason for this was unclear. By analyzing 868 male and 868 female face images, the new study may explain why. The results indicate that the most useful information is obtained from the images if their size is around 30 x 30 pixels. Moreover, images of eyes give the least "noisy" result (meaning that they convey more reliable information to the brain compared to images of the mouth and nose), suggesting that face recognition mechanisms in the brain are specialized for the eyes.

This work complements a previously conducted study published in PLoS One, which found that artificial face recognition systems achieve the best recognition performance when processing rather small face images, meaning that machines should do it just like humans.
Story Source:
Materials provided by Public Library of Science. Note: Content may be edited for style and length.

Personal identification by eyes.

Identification of persons through the eyes falls within the field of biometric science. Many security systems are based on biometric methods of personal identification, to determine whether a person is presenting their true identity. The human eye contains an extremely large number of individual characteristics that make it particularly suitable for identifying a person. Today, the eye is considered one of the most reliable body parts for human identification, and systems using iris recognition are among the most secure biometric systems.