
Automated, Long-Range, Night/Day, Active-SWIR Face Recognition System


Brian E. Lemoff, Robert B. Martin, Mikhail Sluch, Kristopher M. Kafka, Andrew Dolby, Robert Ice
WVHTC Foundation, 1000 Technology Drive, Suite 1000, Fairmont, WV, USA 26554

ABSTRACT

Covert, long-range, night/day identification of stationary human subjects using face recognition has been previously
demonstrated using the active-SWIR Tactical Imager for Night/Day Extended-Range Surveillance (TINDERS) system.
TINDERS uses an invisible, eye-safe, SWIR laser illuminator to produce high-quality facial imagery under conditions
ranging from bright sunlight to total darkness. The recent addition of automation software to TINDERS has enabled the
autonomous identification of moving subjects at distances greater than 100 m. Unlike typical cooperative, short-range
face recognition scenarios, where positive identification requires only a single face image, the SWIR wavelength, long
distance, and uncontrolled conditions mean that positive identification requires fusing the face matching results from
multiple captured images of a single subject. Automation software is required to initially detect a person, lock on and
track the person as they move, and select video frames containing high-quality frontal face images for processing.
Fusion algorithms are required to combine the matching results from multiple frames to produce a high-confidence
match. These automation functions will be described, and results showing automated identification of moving subjects,
night and day, at multiple distances will be presented.

Keywords: Face Recognition, SWIR, Night Vision, Surveillance, Biometrics, Active Imaging

1. INTRODUCTION
The capability to covertly detect and identify people at long distances would be of great value to the military, law
enforcement, and private security communities. Of most interest would be a capability that works night or day, under
conditions ranging from bright sunlight to total darkness. Such a capability does not currently exist. While there are
several biometric modalities commonly used to identify individuals, including DNA, fingerprint, iris, and face
recognition, only face recognition has the potential to be of use at long distances. In an effort to develop this capability,
the West Virginia High Technology Consortium Foundation (WVHTCF), under a research contract from the Office of
Naval Research (ONR) and oversight from the Office of the Secretary of Defense Deployable Force Protection Program
(DFP), is developing the Tactical Imager for Night/Day Extended Range Surveillance (TINDERS), an active short-wave
infrared (SWIR) imaging system that illuminates targets with an invisible and eye-safe SWIR laser beam and matches
SWIR facial images against mug-shots enrolled in a visible-spectrum database for identification.1,2,3 TINDERS
nighttime face recognition results for stationary targets have been published at distances of 100 m, 200 m, and 350 m. A
practical system, however, must be able to identify people in the distance as they move around naturally, since a covert
identification system cannot expect subjects to stand still and look directly at the camera. Thus, a system must have an
automated capability to detect people, track them as they move, capture good facial images from video, and process
them for face recognition. These automated capabilities have recently been developed for the TINDERS system and are
described in this paper.
1.1 Background
Detailed motivations for and descriptions of the TINDERS system were previously published,1,2,3 and are summarized in
this section. Active-SWIR imaging has a number of advantages over other imaging modalities that make it uniquely
suitable for long-range night/day human identification. Traditional visible-spectrum imagery produces the most
recognizable facial images, but at night, there is not enough light to capture a close-up facial image at long range. A
powerful spotlight could be used to illuminate the face, but this would not be covert, and the required intensity would
pose an eye-safety hazard. Thermal infrared imagery is commonly used for long-range nighttime surveillance, but the
imagery produced does not correlate well to visible-spectrum mug shots. Active near infrared (NIR) imagery is an
excellent modality for shorter-range nighttime face recognition, as the facial imagery correlates well to visible-spectrum
mug-shots; however, the NIR illumination power required for close-up face images at distances beyond 100 m poses a
severe eye-safety hazard near the illuminator, where the beam is smaller and more intense. At SWIR wavelengths
longer than 1400 nm, all light is absorbed in the eye before reaching the retina, resulting in a maximum permissible
exposure that is 65 times higher than at 800 nm. Thus, a SWIR illuminator of wavelength > 1400 nm can safely emit 65
times more light than a NIR illuminator of the same size. For these reasons, active-SWIR imaging, with an illumination
wavelength > 1400 nm, was chosen as the TINDERS sensor modality.
Facial imagery in this wavelength region has no thermal component, and only shows light that is reflectively scattered
off of the face, revealing most of the same features as visible-spectrum or NIR imagery. Facial imagery in this
wavelength range differs from visible-spectrum imagery primarily in the lower reflectivity of skin and higher reflectivity
of hair, i.e. skin tone appears dark and hair appears white. Despite these differences in skin and hair reflectivity, off-the-
shelf face recognition software available in 2010 (Face Examiner Workstation 2.1 from Identix) was able to correctly
match 40 out of 56 (71%) high-quality SWIR images with corresponding visible-spectrum images from a database of
1156 different faces. Note that the success rate as measured this way depends on database size, decreasing with
increasing size. Figure 1 shows examples of SWIR and visible-spectrum face images used in that experiment. These
SWIR images were made using the same illumination wavelength and focal plane array used in TINDERS, but at close
range, thus representing a best case for TINDERS SWIR image quality. The current TINDERS face recognition
software is based on a customized version of the latest Face Examiner Workstation software from MorphoTrust USA.4

Figure 1. Example high-quality active-SWIR images used in a 2010 face recognition experiment, along with matching
visible-spectrum images. The SWIR images were acquired at close range, in the dark, using the same illumination
wavelength, > 1400 nm, and SWIR focal plane array used in the TINDERS system. Off-the-shelf face recognition software
correctly matched 40 out of 56 SWIR images with the correct visible image in a database containing 1156 different faces.

Figure 2. (left) Conceptual illustration of TINDERS system. (right) Current version of TINDERS prototype system.
Figure 2 includes both a conceptual illustration and a photograph of the TINDERS prototype hardware. The TINDERS
system consists of three physical units: an optical head that sits on a pan-tilt (PT) stage; an electronics box that provides
power, light (through an optical fiber), and communications to the optical head; and a computer that runs the user
interface, low-level camera control functions, system automation, and face recognition software. The TINDERS optical
head includes both the SWIR illuminator optics and the imager. In the current version of the hardware, the optical head
weighs roughly 30 pounds and sits in an environmentally-controlled enclosure atop a commercial pan-tilt stage. The
imager and illuminator pan, tilt, and zoom together so that the illuminator beam is always just filling the imager field of
view. This serves to maximize the image signal level and avoid wasted light. The illuminator light source, located in
the electronics box, delivers a maximum power of 5W to the optical head through an optical fiber in the umbilical.
Because the illumination beam is expanded to 5 inches in diameter prior to exiting the optical enclosure, the TINDERS
illuminator is safe to the unaided eye at point-blank range.
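As a rough, hedged check on this claim, assume a continuous-wave maximum permissible exposure on the order of 100 mW/cm² for wavelengths > 1400 nm; this value is an assumption here, since the actual limit depends on the applicable laser-safety standard and the exposure duration. The worst-case irradiance an eye could intercept, directly at the 5-inch (12.7 cm diameter, r = 6.35 cm) exit aperture with the full 5 W emitted, is

$$E = \frac{P}{\pi r^{2}} = \frac{5\ \text{W}}{\pi\,(6.35\ \text{cm})^{2}} \approx 39\ \text{mW/cm}^{2},$$

well below the assumed limit, which is consistent with the point-blank eye-safety claim.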
1.2 Automation Strategy
As discussed above, even with high-quality, short-range SWIR facial imagery from a stationary, frontal face, the single-
image success rate is on the order of 70%. Thus, high-confidence identification of noncooperative, moving targets, at
long distances will require the fusion of face matching results from multiple SWIR facial images known to be of the
same person. Ideally this process would be fully-automated, to allow for unattended operation. The individual
automation processes required for a fully automated system include:
• Detection of an individual and designation of that individual to be tracked;
• Tracking of the individual as they move;
• Zooming in on the face of the moving target;
• Capturing video frames containing frontal facial images of sufficient quality for identification;
• Submission of captured facial images to face recognition software;
• Fusion of the matching results from multiple video frames;
• Thresholding to determine whether the fused matching result has enough confidence to report as a match;
• Reporting the positive identification result.
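A minimal sketch of how these stages might chain together is shown below, with each stage supplied as a callable; the stage names and interfaces are illustrative placeholders, not the actual TINDERS software architecture.

```python
from typing import Callable, Dict, Iterable, Optional

def identify_subject(
    frames: Iterable,            # video frames from the imager
    detect: Callable,            # frame -> person/face box, or None
    track: Callable,             # (frame, box) -> updated box
    capture_face: Callable,      # (frame, box) -> good frontal face, or None
    search: Callable,            # face image -> {candidate_id: match_score}
    threshold: float,            # confidence needed to report a match
) -> Optional[str]:
    """Detect -> track -> capture -> match -> fuse (max score) -> threshold."""
    fused: Dict[str, float] = {}
    box = None
    for frame in frames:
        box = detect(frame) if box is None else track(frame, box)
        if box is None:
            continue
        face = capture_face(frame, box)      # frontal-face quality gate
        if face is None:
            continue
        for cand, score in search(face).items():
            fused[cand] = max(score, fused.get(cand, float("-inf")))
    if not fused:
        return None
    best, best_score = max(fused.items(), key=lambda kv: kv[1])
    return best if best_score >= threshold else None
```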
As previously reported2, a cascade pattern matching algorithm was developed to detect upper bodies. This algorithm is
capable of automatically detecting people at distances up to 3 km as long as the field of view is wide enough to include
the full upper body. To implement full automation, a rule would need to be applied to determine when a detected person
should or should not be tracked. In lieu of this, TINDERS displays a box around all detected people, and an operator can
click on the box in order to designate the person to be tracked. For tracking, TINDERS currently uses an SLA-2000
video processing board5, a commercial product primarily used for tracking ground objects in aerial surveillance video.
The TINDERS video is processed by this board, and when a detected person is designated for tracking, the tracking box
coordinates are sent to the board, which updates the target position after each frame. The updated tracking coordinates
are then used to calculate a velocity vector that is sent to the pan-tilt stage to keep the tracked target as close to the center
of the imager field of view as possible.
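A simple proportional controller is one plausible way to turn the updated tracking coordinates into a pan-tilt velocity command; the sketch below is a hedged illustration under that assumption, since the actual SLA-2000/pan-tilt control law is not described here.

```python
def pan_tilt_velocity(box_center, frame_size, hfov_deg, vfov_deg, gain=1.0):
    """Proportional tracking controller (illustrative, not the TINDERS code):
    the angular offset of the tracking box from the frame center, scaled by
    a gain, becomes the commanded (pan, tilt) rate in degrees/second."""
    cx, cy = box_center                       # tracked-target pixel position
    w, h = frame_size                         # frame dimensions in pixels
    pan_err = (cx - w / 2.0) / w * hfov_deg   # horizontal angular error (deg)
    tilt_err = (cy - h / 2.0) / h * vfov_deg  # vertical angular error (deg)
    return gain * pan_err, -gain * tilt_err   # tilt sign depends on stage convention

# Example: target right of center in a 1280x720 frame, 2.0 x 1.1 degree FOV:
# pan_tilt_velocity((700, 300), (1280, 720), hfov_deg=2.0, vfov_deg=1.1)
```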
Once a person is being tracked with a wide field of view, a fully-automated solution would automatically zoom in on the
target while continuing to track. This has not yet been implemented in TINDERS, so zoom is still controlled by an
operator. At narrower fields of view, another cascade algorithm, also previously reported2, detects faces. As with the
upper-body detection algorithm, a box is displayed around the detected face, and an operator can click on the box to
initiate tracking on the face. Once the face box coordinates have been sent to the SLA-2000 for tracking, the head can
still be tracked even when the person turns so that the face is no longer visible.
Once the field of view and distance are small enough for face recognition to be possible, TINDERS begins to evaluate
detected faces for face recognition suitability. It was previously reported2 that eye-detection and nose-detection
algorithms have been developed to determine whether a frontal face with clear features is present in the video frame. At
30 frames per second, the algorithms typically run fast enough to search one third of the frames for good faces. When a
“good” face is detected, it is placed on a queue of images to be processed for face recognition. As new “good” faces are
detected, they are placed at the front of the queue so that the face recognition software will always be processing the
most recently detected “good” face image. The face recognition software processes SWIR face probe images from the
queue one at a time, matching them against a visible-spectrum database of facial images. After each probe image is
matched, the results are fused with the previous results, and the fused results are displayed.
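The queue behavior just described is last-in, first-out: each newly detected “good” face goes to the front, so the recognition engine always works on the most recent capture. A minimal sketch of that policy (the class and method names are hypothetical):

```python
from collections import deque

class ProbeQueue:
    """LIFO probe queue: the newest "good" face is always processed first."""

    def __init__(self):
        self._faces = deque()

    def push(self, face_image):
        self._faces.appendleft(face_image)   # newest capture goes to the front

    def next_probe(self):
        # Face recognition pulls from the front, i.e. the most recent capture
        return self._faces.popleft() if self._faces else None

    def clear(self):
        # Mirrors the GUI control for starting on a new person
        self._faces.clear()
```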
In a fully-automated solution, TINDERS would need to determine whether the faces on the queue all belong to the same
person, prior to fusing the face matching results. For this purpose, a very fast SWIR-to-SWIR face matching algorithm
is currently under development, but it has not yet been integrated. When a head is being tracked, only the part of the
image in and near the tracking box is searched for faces, largely preventing the capture of faces belonging to other
people in the image. Otherwise, the entire image is searched, leaving it up to the operator to ensure that all captured
faces belong to the same person. Controls on the GUI allow the operator to clear the queue and the face recognition
results when a new person is to be identified.
In a fully automated solution, a confidence level would be calculated for the fused face recognition results, and a
threshold would be applied to determine when to report a result as a positive identification. Work is currently underway
to do this, but it has not been implemented. Currently, the operator must look at the top candidates, ranked in order of
fused matching score, and determine whether a positive match has been made. The operator can then click a button to
report the result.

2. RESULTS
To experimentally illustrate the various automation functions that have been implemented into the TINDERS system,
daytime and nighttime scenarios were run with a single subject walking at a distance > 100 m and a distance > 200 m, to
exercise the live body detection, face detection, tracking, and automated face recognition functions. For ease of analysis,
TINDERS records video containing telemetry that allows the video to be played back later on the TINDERS GUI,
displaying on the GUI exactly as it did when live, with all detection algorithms and face recognition software
functioning as if the data were live. All images included below were taken as screen shots during this “playback mode”,
but they are representative of the live display. In addition to the single-subject walking scenarios, previously-collected
TINDERS nighttime video data of test subjects rotating in place by 360 degrees from 100-m and 350-m range was run in
“playback mode” to illustrate the automated face capture and face recognition functions.
2.1 Detection and Tracking
A daytime scenario was run in which a subject was asked to walk around in a roughly rectangular pattern in an area
roughly 138 m from the TINDERS system. Figure 4 illustrates the TINDERS upper-body detection and tracking
functions. In the image on the left, the dashed blue box shows the upper body detected in the live video. The operator
then clicked on this box, which initiated tracking. The image on the right shows the subject some time later, after he has
turned and walked away from the building. The solid orange (larger) box shows the tracking box, while the smaller
dashed blue box shows upper body detection. It should be noted that the subject is no longer centered in the orange box,
and in time, the SLA-2000 will lose track of the subject. This typically occurs when there is a feature-rich background,
as in this image. One approach to mitigate this in the future is to use the upper-body detection information, such as the
dashed blue box in the right image, to periodically adjust the tracking box to keep it centered on the subject.

Figure 4. Daytime example at 138-m range. (left) Upper body is detected and a dashed blue box is displayed. Tracking was
initiated by clicking on this box. (right) TINDERS pans to track subject as he walks. Orange (large) tracking box and blue
(smaller) upper-body detection box are overlaid on the live image as the subject walks.
Figure 5. Nighttime example at 137-m range. (left) Upper body is detected and a dashed blue box is displayed. Tracking
was initiated by clicking on this box. (right) TINDERS pans to track subject as he walks. Orange tracking box is overlaid on
the live image as the subject walks.
A nighttime scenario was run with the same subject at the same location. Figure 5 illustrates the same detection and
tracking function exercised at night. In the left image, the subject is facing the camera, and an upper-body detection box
is displayed. The operator clicked on this box, initiating tracking. In the right image, the subject is tracked as he
approaches the building, with the orange tracking box overlaid on the image. In this case, there is no upper-body
detection shown in the right image.
Similar day and night scenarios were run at another location at a distance of roughly 220 m. Figure 6 shows an example
of nighttime upper-body detection and tracking at a distance of 224 m. The reason the ground appears dark in this image
is that it is covered with snow, which appears black due to high absorption of the SWIR illumination by water.3

Figure 6. Nighttime example at 224-m range. TINDERS pans to track subject as he walks. Orange tracking box and
dashed blue upper-body detection box are overlaid on the live image as the subject walks.
The images shown in figures 4 through 6 have a large field of view, in which the full body is visible. TINDERS face
detection is not enabled when the field of view (FOV) exceeds an upper limit. Figure 7 shows a progression of
nighttime images at 215-m range as TINDERS zooms in to a smaller field of view where face detection is enabled. In
image (a) the FOV is 2.2 m, and only an upper-body detection box is shown. In image (b) the FOV has been reduced to
1.83 m, and now both an upper-body detection and a face detection box are visible. The operator then clicked on the
face box, initiating tracking of the face. In image (c), an orange tracking box is now visible around the face, indicating
that the face is being tracked. In image (d), the subject has turned and is now walking away from the camera, but
TINDERS is still tracking his head, even though the face is no longer visible.
Figure 7. Nighttime example at 215-m range. (a) TINDERS detects upper body at FOV=2.2 m. (b) TINDERS detects both
upper body and face at FOV=1.83 m. (c) TINDERS tracks face (orange box) while detecting both face and upper body.
(d) TINDERS continues to track head after subject turns and walks away from camera.
2.2 Automated Face Recognition
During the same nighttime scenario in which full-body tracking was illustrated in Figure 5, face detection, tracking, and
automated recognition were also performed. Figure 8 shows a screen shot of the full TINDERS GUI, illustrating all
three of these functions working simultaneously. In the figure, TINDERS is zoomed into its minimum field of view, 64
cm. At this time, the subject was walking back and forth towards the camera and away from the camera, while
TINDERS tracked the subject’s head. The orange box overlaid on the face indicates the tracking box from the SLA-
2000 that is used to control the pan-tilt stage. The green box overlaid on the face indicates face detection.
Simultaneously, TINDERS is detecting and capturing frontal facial images, and processing them for face recognition.
As each new SWIR probe image is processed by the face recognition software (searching the database for the best
matches), a thumbnail of the probe image appears on the left of the face recognition window, located along the bottom of
the screen. Under the probe image, the number of probes searched and the number waiting in the queue is indicated.
For each probe searched, the top 10 candidates from the database are returned with matching scores. These scores are
then fused with the results from previous probes using the “maximum score” fusion method, which assigns a fused score
to each candidate equal to that candidate’s highest matching score over all of the probes. The top 8 candidates are
displayed in the face recognition window, along with their fused rank and score. As each new probe image is processed,
the matching results are automatically updated. Figure 8 shows the face recognition results after 4 probe images have
been processed. Note that some faces in the results have been obscured for privacy reasons. For all face recognition
results shown in this paper, a visible-spectrum database containing facial images of 114 people was used.
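The “maximum score” fusion rule described above is straightforward to state in code. A minimal sketch, assuming each probe search returns a dict of candidate scores (the function name and data layout are illustrative, not the TINDERS implementation):

```python
from typing import Dict, List, Tuple

def fuse_max_score(probe_results: List[Dict[str, float]]) -> List[Tuple[str, float]]:
    """Fused score per candidate = highest matching score over all probes;
    returns candidates ranked by fused score, best first."""
    fused: Dict[str, float] = {}
    for result in probe_results:              # one dict per searched probe
        for candidate, score in result.items():
            fused[candidate] = max(score, fused.get(candidate, float("-inf")))
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Example: candidate "A" peaks on the second probe, so its fused score is 0.91.
ranked = fuse_max_score([{"A": 0.62, "B": 0.70}, {"A": 0.91, "C": 0.55}])
# -> [("A", 0.91), ("B", 0.70), ("C", 0.55)]
```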
Figure 8. Nighttime example at 135-m range. In this screenshot of the full TINDERS GUI, the subject is walking while
TINDERS is tracking his head (orange box). At the same time, the face is detected (green box). While the subject is being
tracked, TINDERS is detecting and capturing frontal face shots, automatically processing them for face recognition and
displaying the cumulative identification results in the bottom window.


Figure 9. Evolution of face recognition results as 4 captured probe images are sequentially processed for face recognition.
Results are for nighttime, 135-m range, while the subject was walking and being tracked by TINDERS.
Figure 9 gives more detail of how the face recognition results evolve as the four probe images are captured and
processed. The left side of the figure shows a close-up of the left side of the face recognition window as it appeared
following the processing of each probe image, showing the top two fused search results. The right side of the figure
shows the captured images along with the automatic eye placement. Notice that of the four probe images, only the
second one has accurate eye locations. Following the first probe search, the top two results are impostors, but both have
low scores. After the second probe (the one with correct eye locations) is searched, the correct candidate is ranked first.
After the third and fourth probe searches, the top two results are unchanged. The TINDERS GUI also has a “manual”
mode in which an operator or analyst can review all of the captured probe images, manually adjust eye locations, delete
poor-quality probe images, change the watch list, reprocess the face recognition search, and review more detailed
matching score information. While this mode can be very useful, and can improve identification accuracy, it is difficult
to work with while an active target is engaged, and it is normally used after the engagement has ended.
When tracking non-cooperative subjects at long range, there may be long periods of time between face captures, when a
subject has turned away from the camera or is otherwise not presenting a suitable face image. During this time, probe
images that have accumulated on the queue will be processed and matching results fused until the queue is empty or
until new face images are captured. Figure 10 shows an example from 219-m range during bright daylight, where the
subject was tracked as he walked back and forth toward the camera and away from the camera while automated face
recognition was performed.

Figure 10. Daytime 215-m long-term tracking with automated face recognition example. (clockwise from top left) First
screen shot shows subject walking and talking on cell phone just after tracking was initiated on the face. The second screen
shot shows the subject still being tracked as he walks away from the camera. Captured face images from the queue continue
to be searched. In the third image, the subject is again walking toward the camera, and new face images are captured,
processed, and the fused matching results are updated.
To better illustrate how the automated face recognition works, two examples are presented here in detail of subjects
rotating 360 degrees in place, once around and back again. Recorded video was run in “playback mode” while the
TINDERS automated face capture detected video frames with near-frontal face images and submitted them for face
recognition. The first example was recorded at a distance of 100 m in dark nighttime conditions. Figure 11, left, shows
a TINDERS screen shot of the rotating subject after 4 probe images have been processed for face recognition, and the
subject has been successfully identified. The right side of the figure shows the 4 images that were captured and
processed, along with the automatically-generated eye locations. Notice that the eyes are correctly located in probes 1,
3, and 4, but incorrectly located in probe 2. Also, notice that probe 1 has a slightly angled pose and is slightly out of
focus, most likely due to motion blur.

Figure 11. Nighttime 100-m rotating test subject. (left) Screen shot shows video of rotating subject while successful
automated face recognition is displayed below as it appears following the processing and fusion of 4 captured face images.
(right) The four captured probe images and eye locations used in the face recognition.

Figure 12. Close-up view of face recognition results following each of the 4 probe searches along with single-probe
matching scores for the rank 1 (genuine) candidate.
Figure 12 shows close-ups of the face recognition window as it appeared following the processing of each probe, with
the most recent probe image and top two matching candidates. The detailed single-probe scores for the rank 1 candidate
are shown at the right (this detail can be accessed on the TINDERS GUI by hovering the mouse over one of the
candidate images in the face recognition window). After probe 1 was searched, the correct candidate was already rank 1,
but with a low score. The second probe search yielded only low matching scores (due to incorrect eye location) and thus
did not affect the fused scores of the top two candidates. The third probe search resulted in a new rank 2 candidate, and
increased the score of the rank 1 candidate, and the fourth probe search yielded a new rank 2 candidate and further
increased the score of the rank 1 candidate, increasing the confidence level of the positive identification. A second
example of automated face recognition of a rotating subject, this time recorded at night at a distance of 350 m, is shown
in Figure 13.
Figure 13. Nighttime 350-m rotating test subject. (left) Screen shot shows video of rotating subject while successful
automated face recognition is displayed below as it appears following the processing and fusion of 9 captured face images.
(right) The nine captured probe images and eye locations used in the face recognition.

3. DISCUSSION
In this paper we reviewed the development of the TINDERS active-SWIR imaging system for covert, night/day, long-
range face recognition, described the automation capabilities that would be required for fully-automated operation, and
provided experimental examples that illustrate the automated capabilities that have been implemented to date. Specific
examples of upper-body detection, face detection, and tracking were provided for both daytime and nighttime operation
at multiple distances. Detailed examples of automated face recognition, both daytime and nighttime, at multiple
distances ranging from 100 m to 350 m were also provided, including examples where automated face recognition was
performed while the TINDERS system was tracking the face of a walking test subject. The work described here
represents an initial implementation of basic automation functions. Significant additional work would be required before
fully-automated operation of TINDERS would be possible.

4. ACKNOWLEDGMENTS
This research was performed under contract N00014-09-C-0064 from the Office of Naval Research, with funding and
oversight from the Deployable Force Protection Science and Technology Program. The authors would like to
acknowledge important technical contributions from Jason Stanley, William McCormick, Ken Witt, and MorphoTrust
USA, and the cooperation of the WVU Center for Identification Technology Research in some of the data collection.

REFERENCES

[1] Brian E. Lemoff, Robert B. Martin, Mikhail Sluch, Kristopher M. Kafka, William B. McCormick, and Robert V.
Ice, “Long-range night/day human identification using active-SWIR imaging”, Proc. SPIE 8704, Infrared
Technology and Applications XXXIX, 87042J (June 18, 2013).
[2] Brian E. Lemoff, Robert B. Martin, Mikhail Sluch, Kristopher M. Kafka, William B. McCormick, and Robert V.
Ice, “Automated night/day standoff detection, tracking, and identification of personnel for installation protection”,
Proc. SPIE 8711, Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for
Homeland Security and Homeland Defense XII, 87110N (June 6, 2013).
[3] Robert B. Martin, Mikhail Sluch, Kristopher M. Kafka, Robert V. Ice, and Brian E. Lemoff, “Active-SWIR
signatures for long-range night/day human detection and identification”, Proc. SPIE 8734, Active and Passive
Signatures IV, 87340J (May 23, 2013).
[4] MorphoTrust USA Face Examiner Workstation web page.
http://www.morphotrust.com/IdentitySolutions/ForFederalAgencies/Officer360/Investigator360/ABIS/FaceExaminerWorkstation.aspx
[5] SightLine Applications web page. http://www.sightlineapplications.com/index.html
