ABSTRACT
Covert, long-range, night/day identification of stationary human subjects using face recognition has been previously
demonstrated using the active-SWIR Tactical Imager for Night/Day Extended-Range Surveillance (TINDERS) system.
TINDERS uses an invisible, eye-safe, SWIR laser illuminator to produce high-quality facial imagery under conditions
ranging from bright sunlight to total darkness. The recent addition of automation software to TINDERS has enabled the
autonomous identification of moving subjects at distances greater than 100 m. Unlike typical cooperative, short-range
face recognition scenarios, where positive identification requires only a single face image, the SWIR wavelength, long
distance, and uncontrolled conditions mean that positive identification requires fusing the face matching results from
multiple captured images of a single subject. Automation software is required to initially detect a person, lock on and
track the person as they move, and select video frames containing high-quality frontal face images for processing.
Fusion algorithms are required to combine the matching results from multiple frames to produce a high-confidence
match. These automation functions will be described, and results showing automated identification of moving subjects,
night and day, at multiple distances will be presented.
Keywords: Face Recognition, SWIR, Night Vision, Surveillance, Biometrics, Active Imaging
1. INTRODUCTION
The capability to covertly detect and identify people at long distances would be of great value to the military, law
enforcement, and private security communities. Of most interest would be a capability that works night or day, under
conditions ranging from bright sunlight to total darkness. Such a capability does not currently exist. While there are
several biometric modalities commonly used to identify individuals, including DNA, fingerprint, iris, and face
recognition, only face recognition has the potential to be of use at long distances. In an effort to develop this capability,
the West Virginia High Technology Consortium Foundation (WVHTCF), under a research contract from the Office of
Naval Research (ONR) and oversight from the Office of the Secretary of Defense Deployable Force Protection Program
(DFP), is developing the Tactical Imager for Night/Day Extended Range Surveillance (TINDERS), an active short-wave
infrared (SWIR) imaging system that illuminates targets with an invisible and eye-safe SWIR laser beam and matches
SWIR facial images against mug-shots enrolled in a visible-spectrum database for identification.1,2,3 TINDERS
nighttime face recognition results for stationary targets have been published at distances of 100 m, 200 m, and 350 m. A
practical system, however, must be able to identify people in the distance as they move around naturally, since a covert
identification system cannot expect subjects to stand still and look directly at the camera. Thus, a system must have an
automated capability to detect people, track them as they move, capture good facial images from video, and process
them for face recognition. These automated capabilities have recently been developed for the TINDERS system and are
described in this paper.
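The automated chain described above, detect a person, lock on and track, capture good face frames, and process them for recognition, can be sketched as a simple loop over video frames. This is a minimal illustration only; the class names, the quality score, and the 0.6 threshold are hypothetical and are not part of the TINDERS software.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    has_person: bool
    frontal_face_quality: float  # hypothetical 0..1 frontal-face quality score

@dataclass
class Pipeline:
    """Minimal sketch of the automation chain: detect, track, capture."""
    quality_threshold: float = 0.6          # hypothetical capture threshold
    captured: list = field(default_factory=list)

    def process(self, frames):
        tracking = False
        for frame in frames:
            if not tracking and frame.has_person:
                tracking = True             # lock on once a person is detected
            if tracking and frame.frontal_face_quality >= self.quality_threshold:
                self.captured.append(frame) # queue high-quality frontal faces
        return self.captured
```

In the real system, each queued frame would then be submitted as a probe for face recognition, as described in Section 2.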
1.1 Background
Detailed motivations for and descriptions of the TINDERS system were previously published,1,2,3 and are summarized in
this section. Active-SWIR imaging has a number of advantages over other imaging modalities that make it uniquely
suitable for long-range night/day human identification. Traditional visible-spectrum imagery produces the most
recognizable facial images, but at night, there is not enough light to make a long-range close-up facial image. A
powerful spotlight could be used to illuminate the face, but this would not be covert, and the required intensity would
pose an eye-safety hazard. Thermal infrared imagery is commonly used for long-range nighttime surveillance, but the
imagery produced does not correlate well to visible-spectrum mug shots. Active near infrared (NIR) imagery is an
excellent modality for shorter-range nighttime face recognition, as the facial imagery correlates well to visible-spectrum
mug-shots; however, the NIR illumination power required for close-up face images at distances beyond 100 m poses a
severe eye-safety hazard near
the illuminator, where the beam is smaller and more intense. At SWIR wavelengths longer than 1400 nm, all light
is absorbed in the eye before reaching the retina, resulting in a maximum permissible exposure that is 65 times
higher than at 800 nm. Thus, a SWIR illuminator of wavelength > 1400 nm can safely emit 65 times more light
than a NIR illuminator of the same size. For these reasons, active-SWIR imaging, with an illumination
wavelength > 1400 nm, was chosen as the TINDERS sensor modality.
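The eye-safety argument reduces to a simple ratio: because the maximum permissible exposure above 1400 nm is 65 times the 800-nm value, an equally sized SWIR illuminator may safely emit 65 times the power. A minimal arithmetic sketch, where only the 65x factor comes from the text and any specific power value is a hypothetical placeholder:

```python
# Maximum-permissible-exposure ratio from the text: MPE(>1400 nm) = 65 x MPE(800 nm)
MPE_RATIO_SWIR_TO_NIR = 65

def max_safe_swir_power(nir_safe_power_watts):
    """Power a >1400-nm illuminator of the same size can safely emit,
    given the eye-safe power limit of an 800-nm NIR illuminator.
    The input value is a placeholder, not a figure from the paper."""
    return MPE_RATIO_SWIR_TO_NIR * nir_safe_power_watts
```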
Facial imagery in this wavelength region has no thermal component, and only shows light that is reflectively scattered
off of the face, revealing most of the same features as visible-spectrum or NIR imagery. Facial imagery in this
wavelength range differs from visible-spectrum imagery primarily in the lower reflectivity of skin and higher reflectivity
of hair, i.e. skin tone appears dark and hair appears white. Despite these differences in skin and hair reflectivity, off-the-
shelf face recognition software available in 2010 (Face Examiner Workstation 2.1 from Identix) was able to correctly
match 40 out of 56 (71%) high-quality SWIR images with corresponding visible-spectrum images from a database of
1156 different faces. Note that success rate as measured this way depends on database size, decreasing with increasing
size. Figure 1 shows examples of SWIR and visible-spectrum face images used in that experiment. These SWIR images
were made using the same illumination wavelength and focal plane array used in TINDERS, but at close range, thus
representing a best case for TINDERS SWIR image quality. The current TINDERS face recognition software is based
on a customized version of the latest Face Examiner Workstation software from MorphoTrust USA.4
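The 40-out-of-56 figure above is a rank-1 closed-set identification rate: the fraction of probes whose best-scoring gallery candidate is the true identity. A minimal sketch of that standard metric, with a toy score layout that is purely illustrative (the paper's actual evaluation code and score format are not published here):

```python
def rank1_identification_rate(score_rows, true_ids):
    """Fraction of probes whose highest-scoring gallery entry is the true identity.

    score_rows: list of dicts mapping gallery id -> match score, one per probe.
    true_ids:   list of the true gallery id for each probe.
    """
    hits = sum(
        1 for scores, truth in zip(score_rows, true_ids)
        if max(scores, key=scores.get) == truth
    )
    return hits / len(true_ids)
```

As the text notes, this rate depends on gallery size: adding more gallery faces adds more chances for an impostor to outscore the genuine match.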
2. RESULTS
To experimentally illustrate the various automation functions that have been implemented into the TINDERS system,
daytime and nighttime scenarios were run with a single subject walking at a distance > 100 m and a distance > 200 m, to
exercise the live body detection, face detection, tracking, and automated face recognition functions. For ease of analysis,
TINDERS records video containing telemetry that allows the video to be played back later on the TINDERS GUI,
displaying on the GUI exactly as it did when live, with all detection algorithms and face recognition software
functioning as if the data were live. All images included below were taken as screen shots during this "playback mode",
but they are representative of the live display. In addition to the single-subject walking scenarios, previously-collected
TINDERS nighttime video data of test subjects rotating in place by 360 degrees from 100 m and 350 m was run in
"playback mode" to illustrate the automated face capture and face recognition functions.
2.1 Detection and Tracking
A daytime scenario was run in which a subject was asked to walk around in a roughly rectangular pattern in an area
roughly 138 m from the TINDERS system. Figure 4 illustrates the TINDERS upper-body detection and tracking
functions. In the image on the left, the dashed blue box shows the upper body detected in the live video. The operator
then clicked on this box, which initiated tracking. The image on the right shows the subject some time later, after he has
turned and walked away from the building. The solid orange (larger) box shows the tracking box, while the smaller
dashed blue box shows upper body detection. It should be noted that the subject is no longer centered in the orange box,
and in time, the SLA-2000 will lose track of the subject. This typically occurs when there is a feature-rich background,
as in this image. One approach to mitigate this in the future is to use the upper-body detection information, such as the
dashed blue box in the right image, to periodically adjust the tracking box to keep it centered on the subject.
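The re-centering idea proposed above, using the upper-body detection box to periodically correct tracker drift, can be sketched with simple box geometry. The intersection-over-union gate and its 0.3 threshold are assumptions for this illustration, not parameters of the SLA-2000 tracker:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def recenter_track_box(track_box, detection_box, min_iou=0.3):
    """Re-center the tracking box on the latest upper-body detection,
    but only if the two boxes overlap enough to be the same subject.
    min_iou is a hypothetical association gate, not a published value."""
    if iou(track_box, detection_box) < min_iou:
        return track_box                      # unrelated detection; keep tracker as-is
    tx, ty, tw, th = track_box
    dx, dy, dw, dh = detection_box
    cx, cy = dx + dw / 2, dy + dh / 2         # detection center
    return (cx - tw / 2, cy - th / 2, tw, th) # same size, new center
```

The gate prevents a detection of a different person in the scene from yanking the tracker away from its subject.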
Figure 4. Daytime example at 138-m range. (left) Upper body is detected and a dashed blue box is displayed. Tracking was
initiated by clicking on this box. (right) TINDERS pans to track subject as he walks. Orange (large) tracking box and blue
(smaller) upper-body detection box are overlaid on the live image as the subject walks.
Figure 5. Nighttime example at 137-m range. (left) Upper body is detected and a dashed blue box is displayed. Tracking
was initiated by clicking on this box. (right) TINDERS pans to track subject as he walks. Orange tracking box is overlaid on
the live image as the subject walks.
A nighttime scenario was run with the same subject at the same location. Figure 5 illustrates the same detection and
tracking function exercised at night. In the left image, the subject is facing the camera, and an upper-body detection box
is displayed. The operator clicked on this box, initiating tracking. In the right image, the subject is tracked as he
approaches the building, with the orange tracking box overlaid on the image. In this case, there is no upper-body
detection shown in the right image.
Similar day and night scenarios were run at another location at a distance of roughly 220 m. Figure 6 shows an example
of nighttime upper-body detection and tracking at a distance of 224 m. The reason the ground appears dark in this image
is that it is covered with snow, which appears black due to high absorption of the SWIR illumination by water.3
Figure 6. Nighttime example at 224-m range. TINDERS pans to track subject as he walks. Orange tracking box and
dashed blue upper-body detection box are overlaid on the live image as the subject walks.
The images shown in figures 4 through 6 have a large field of view, in which the full body is visible. TINDERS face
detection is not enabled when the field of view (FOV) exceeds an upper limit. Figure 7 shows a progression of
nighttime images at 215-m range as TINDERS zooms in to a smaller field of view where face detection is enabled. In
image (a) the FOV is 2.2 m, and only an upper-body detection box is shown. In image (b) the FOV has been reduced to
1.83 m, and now both an upper-body detection and a face detection box are visible. The operator then clicked on the
face box, initiating tracking of the face. In image (c), an orange tracking box is now visible around the face, indicating
that the face is being tracked. In image (d), the subject has turned and is now walking away from the camera, but
TINDERS is still tracking his head, even though the face is no longer visible.
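The FOV gating described above can be sketched as a simple threshold check. The text only establishes that face detection is off at a 2.2-m FOV and on at 1.83 m, so the 2.0-m cutoff used here is a guess within that interval, and the function name is hypothetical:

```python
# Hypothetical FOV gate: the paper shows face detection disabled at a 2.2-m FOV
# and enabled at 1.83 m, so the true threshold lies somewhere between;
# 2.0 m is an assumed value for illustration only.
FACE_DETECTION_MAX_FOV_M = 2.0

def enabled_detectors(fov_meters):
    """Upper-body detection always runs; face detection only at narrow FOV,
    where the face spans enough pixels to be detectable."""
    detectors = ["upper_body"]
    if fov_meters <= FACE_DETECTION_MAX_FOV_M:
        detectors.append("face")
    return detectors
```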
Figure 10. Daytime 215-m long-term tracking with automated face recognition example. (clockwise from top left) First
screen shot shows subject walking and talking on cell phone just after tracking was initiated on the face. The second screen
shot shows the subject still being tracked as he walks away from the camera. Captured face images from the queue continue
to be searched. In the third image, the subject is again walking toward the camera, and new face images are captured,
processed, and the fused matching results are updated.
To better illustrate how the automated face recognition works, two examples are presented here in detail of subjects
rotating 360 degrees in place, once around and back again. Recorded video was run in "playback mode" while the
TINDERS automated face capture detected video frames with near-frontal face images and submitted them for face
recognition. The first example was recorded at a distance of 100 m in dark nighttime conditions. Figure 11, left, shows
a TINDERS screen shot of the rotating subject after 4 probe images have been processed for face recognition, and the
subject has been successfully identified. The right side of the figure shows the 4 images that were captured and
processed, along with the automatically-generated eye locations. Notice that the eyes are correctly located in probes 1,
3, and 4, but incorrectly located in probe 2. Also, notice that probe 1 has a slightly angled pose and is slightly out of
focus, most likely due to motion blur.
Figure 12. Close-up view of face recognition results following each of the 4 probe searches along with single-probe
matching scores for the rank 1 (genuine) candidate.
Figure 12 shows close-ups of the face recognition window as it appeared following the processing of each probe, with
the most recent probe image and top two matching candidates. The detailed single-probe scores for the rank 1 candidate
are shown at the right (this detail can be accessed on the TINDERS GUI by hovering the mouse over one of the
candidate images in the face recognition window). After probe 1 was searched, the correct candidate was already rank 1,
but with a low score. The second probe search yielded only low matching scores (due to incorrect eye location) and thus
did not affect the fused scores of the top two candidates. The third probe search resulted in a new rank 2 candidate, and
increased the score of the rank 1 candidate, and the fourth probe search yielded a new rank 2 candidate and further
increased the score of the rank 1 candidate, increasing the confidence level of the positive identification. A second
example of automated face recognition of a rotating subject, this time recorded at night at a distance of 350 m, is shown
in Figure 13.
Figure 13. Nighttime 350-m rotating test subject. (left) Screen shot shows video of rotating subject while successful
automated face recognition is displayed below as it appears following the processing and fusion of 9 captured face images.
(right) The nine captured probe images and eye locations used in the face recognition.
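The fusion behavior seen in the probe-by-probe walkthrough, where a weak probe left the fused scores of the top candidates unchanged, is consistent with a max-rule fusion, in which each gallery candidate keeps its best single-probe score. The paper does not state the actual TINDERS fusion rule, so the following is one simple sketch that reproduces the described behavior, not the system's algorithm:

```python
def fuse_probe_scores(probe_score_lists):
    """Max-rule score fusion across probes (an assumed rule, for illustration).

    probe_score_lists: list of dicts, one per probe, mapping candidate id -> score.
    Returns (candidate, fused_score) pairs sorted best-first.
    """
    fused = {}
    for scores in probe_score_lists:
        for candidate, score in scores.items():
            # Each candidate keeps its best score seen so far, so a
            # low-scoring probe cannot lower an established fused score.
            fused[candidate] = max(score, fused.get(candidate, float("-inf")))
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

Under this rule, each new well-captured probe can only raise a candidate's fused score, matching the monotonically increasing rank-1 confidence described in the example.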
3. DISCUSSION
In this paper we reviewed the development of the TINDERS active-SWIR imaging system for covert, night/day, long-
range face recognition, described the automation capabilities that would be required for fully-automated operation, and
provided experimental examples that illustrate the automated capabilities that have been implemented to date. Specific
examples of upper-body detection, face detection, and tracking were provided for both daytime and nighttime operation
at multiple distances. Detailed examples of automated face recognition, both daytime and nighttime, at multiple
distances ranging from 100 m to 350 m were also provided, including examples where automated face recognition was
performed while the TINDERS system was tracking the face of a walking test subject. The work described here
represents an initial implementation of basic automation functions. Significant additional work would be required before
fully-automated operation of TINDERS would be possible.
4. ACKNOWLEDGMENTS
This research was performed under contract N00014-09-C-0064 from the Office of Naval Research, with funding and
oversight from the Deployable Force Protection Science and Technology Program. The authors would like to
acknowledge important technical contributions from Jason Stanley, William McCormick, Ken Witt, and MorphoTrust
USA, and the cooperation of the WVU Center for Identification Technology Research in some of the data collection.
REFERENCES
[1] Brian E. Lemoff, Robert B. Martin, Mikhail Sluch, Kristopher M. Kafka, William B. McCormick, and Robert V.
Ice, "Long-range night/day human identification using active-SWIR imaging", Proc. SPIE 8704, Infrared
Technology and Applications XXXIX, 87042J (June 18, 2013).
[2] Brian E. Lemoff, Robert B. Martin, Mikhail Sluch, Kristopher M. Kafka, William B. McCormick, and Robert V.
Ice, "Automated night/day standoff detection, tracking, and identification of personnel for installation protection",
Proc. SPIE 8711, Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for
Homeland Security and Homeland Defense XII, 87110N (June 6, 2013).
[3] Robert B. Martin, Mikhail Sluch, Kristopher M. Kafka, Robert V. Ice, and Brian E. Lemoff, "Active-SWIR
signatures for long-range night/day human detection and identification", Proc. SPIE 8734, Active and Passive
Signatures IV, 87340J (May 23, 2013).
[4] MorphoTrust USA Face Examiner Workstation web page.
http://www.morphotrust.com/IdentitySolutions/ForFederalAgencies/Officer360/Investigator360/ABIS/FaceExaminerWorkstation.aspx
[5] SightLine Applications web page. http://www.sightlineapplications.com/index.html