
APRIL 2017

www.vision-systems.com

VISION AND AUTOMATION SOLUTIONS


FOR ENGINEERS AND INTEGRATORS WORLDWIDE

Currency security: Counterfeit detection

Synchronize cameras: Cut interval time lag

Image processing: Software options

3D vision: Pinion gear metrology
1704VSD_C1 1 3/23/17 9:08 AM


Our vision software
is sweeter than ever
Take your vision applications to a whole new
level with Matrox Design Assistant 5
This flowchart-based integrated development environment makes vision application development more intuitive and straightforward than ever. The latest release includes functionality that lets you set up measurements directly on the image itself and create a single flowchart with different settings for inspecting similar part types, and it also includes Matrox SureDotOCR, the industry's most comprehensive tool for reading dot-matrix text, even on distorted, rotated or irregular backgrounds.

Pairs well with the Matrox Iris GTR smart camera.

Let us add the cherry on top of your next vision project!


www.matrox.com/da/vsd



April 2017
VOL. 22 | NO. 4
Vision and Automation Solutions for Engineers and Integrators Worldwide
A PENNWELL PUBLICATION

Cover Story
Mesure-Systems3D has developed a 3D scanning system to measure gears during production. (See page 10.)

features

15 INDUSTRY SOLUTIONS PROFILE
Smart camera checks currency for counterfeits
Open-source software and modular embedded hardware facilitate the rapid development of systems to detect counterfeit banknotes.
Ricardo Ribalda

20 INTEGRATION INSIGHTS
IEEE 1588 simplifies camera synchronization
Turning the camera itself into the master eliminates the need to prepare a Grand Master Clock.
Satoshi Eikawa

23 PRODUCT FOCUS
Software packages offer developers numerous options for machine vision design
Systems integrators can take a number of different approaches when configuring their machine vision systems.
Andrew Wilson

departments

3 My View
4 Online@vision-systems.com: Read the latest news from our website
5 Snapshots
10 Technology Trends
   3D IMAGING: 3D vision detects gear defects fast
   LIGHTING AND ILLUMINATION: Pattern projector aids prosthetics maker
   SYSTEMS INTEGRATION: 3D vision system inspects connector pin height
29 Vision+Automation Products
32 Vision+Automation Events
35 Ad Index/Sales Offices
36 Faces in Vision

On www.vision-systems.com: Complete Archives | White Papers | Industry News | Feedback Forum | Buyers Guide | Free e-newsletter | Webcasts | Video Library

www.vision-systems.com | VISION SYSTEMS DESIGN | April 2017 | 1



Complete Vision Solution

Tattile designs, develops and manufactures high-tech products dedicated to machine vision, such as embedded systems, smart cameras and high-performance industrial cameras.

M100 Series: Multi-camera vision controllers, GigE and CameraLink versions, with open platform and compact, fanless case
S100/200 Series: High-performance, Linux-based smart cameras with compact size and IP67-rated enclosure
S12MP Series: High-resolution 12-Megapixel smart camera, Linux based, with programmable FPGA
TAG-7 Series: GigE Vision and CameraLink line-scan cameras
TAG-5 Series: GigE Vision area-scan cameras
TAG-5 HYP Series: Hyperspectral mosaic GigE cameras
NAUTILUS with Halcon Embedded: Easy-to-use machine vision design tool that makes design of your machine vision applications faster and more efficient

Visit us and meet our team at:

Custom Vision Solutions

www.tattile.com



my view

Machine vision improves quality and process control

Though consistent product quality is a production mandate, achieving it can be a challenge. As line speeds increase and cycle times decrease, manual inspection has given way to automation using machine vision and image processing systems that help manufacturers improve production efficiency by reducing waste.

Improved quality and process control are driving demand for new machine vision applications in discrete manufacturing. Technologies such as 3D laser displacement sensors, as described in two of this month's Technology Trends articles, are helping manufacturers reduce the associated cost of defects for suppliers to the automotive industry.

While machine vision has been successfully applied to many end-of-line inspection problems to improve quality, manufacturers continue to move toward 100% in-line inspection at multiple points in production to achieve better process control. By catching defects as soon as they occur, they can avoid adding value to parts that would eventually be scrapped.

In one of the above-mentioned articles, we cover a 3D vision system that was demonstrated during VISION 2016 in Stuttgart. The system, developed to measure and inspect pinion gears, uses no fewer than five 3D laser displacement sensors. The individual point clouds generated are combined to take gear measurements well below 1 micron in less than half a minute.

In the other article, we cover a 3D imaging application that replaces error-prone manual inspection by using three laser line triangulation sensors to measure connector pin height, preventing faulty connectors from making it into the supply chain.

Machine vision pays off

Modern digital technology makes it extremely easy for counterfeiters to reproduce fake banknotes. However, it can also be used to help detect them. In an article on currency security, Ricardo Ribalda demonstrates how open-source software and modular embedded hardware make it possible to efficiently develop a low-cost system that detects counterfeit banknotes.

While many developers use such open-source software and freely available image processing libraries, combined with system diagnostic tools and inexpensive modular camera hardware, as an economical means of prototyping machine vision applications, new software packages are also advancing the role of these tools in industrial applications.

In our Product Focus, for example, contributing editor Andy Wilson describes recent developments in image processing software that allow developers to combine open-source algorithms and commercially available packages into a single environment, tailoring software solutions around the most effective algorithms. I hope you enjoy this issue.

John Lewis, EDITOR-IN-CHIEF
WWW.VISION-SYSTEMS.COM

Alan Bergstein: Group Publisher, (603) 891-9447, alanb@pennwell.com
John Lewis: Editor-in-Chief, (603) 891-9130, johnml@pennwell.com
James Carroll Jr.: Senior Web Editor, (603) 891-9320, jamesc@pennwell.com
Andrew Wilson: Contributing Editor, +44 7462 476477, andycharleswilson@gmail.com
Kelli Mylchreest: Art Director
Mari Rodriguez: Production Director
Dan Rodd: Senior Illustrator
Debbie Bouley: Audience Development Manager
Marcella Hanson: Ad Services Manager
Joni Montemagno: Marketing Manager

www.pennwell.com

EDITORIAL OFFICES: Vision Systems Design, 61 Spit Brook Road, Suite 401, Nashua, NH 03060. Tel: (603) 891-0123; Fax: (603) 891-9328; www.vision-systems.com

CORPORATE OFFICERS: Robert F. Biolchini: Chairman; Frank T. Lauinger: Vice Chairman; Mark C. Wilmoth: President and Chief Executive Officer; Jayne A. Gilsinger: Executive Vice President, Corporate Development and Strategy; Brian Conway: Senior Vice President, Finance and Chief Financial Officer

TECHNOLOGY GROUP: Christine A. Shaw: Senior Vice President and Group Publishing Director

FOR SUBSCRIPTION INQUIRIES: Tel: (847) 559-7330; Fax: (847) 763-9607; e-mail: vsd@omeda.com; web: www.vsd-subscribe.com

Vision Systems Design (ISSN 1089-3709), Volume 22, No. 4. Vision Systems Design is published 11 times a year in January, February, March, April, May, June, July/August, September, October, November and December by PennWell Corporation, 1421 S. Sheridan, Tulsa, OK 74112. Periodicals postage paid at Tulsa, OK 74112 and at additional mailing offices. SUBSCRIPTION PRICES: USA $130 1 yr., $190 2 yr., $244 3 yr.; Canada $148 1 yr., $217 2 yr., $280 3 yr.; International $160 1 yr., $235 2 yr., $305 3 yr. POSTMASTER: Send address corrections to Vision Systems Design, P.O. Box 3425, Northbrook, IL 60065-3425. Vision Systems Design is a registered trademark. © PennWell Corporation 2017. All rights reserved. Reproduction in whole or in part without permission is prohibited. Permission, however, is granted for employees of corporations licensed under the Annual Authorization Service offered by the Copyright Clearance Center Inc. (CCC), 222 Rosewood Drive, Danvers, Mass. 01923, or by calling CCC's Customer Relations Department at 978-750-8400 prior to copying. We make portions of our subscriber list available to carefully screened companies that offer products and services that may be important for your work. If you do not want to receive those offers and/or information via direct mail, please let us know by contacting us at List Services, Vision Systems Design, 61 Spit Brook Road, Suite 401, Nashua, NH 03060. Printed in the USA. GST No. 126813153. Publications Mail Agreement no. 1421727.




online @ www.vision-systems.com

Current and future state of computer vision
Check out this presentation given by Jeff Bier, founder of the Embedded Vision Alliance: "Computer Vision 2.0: Where We Are and Where We're Going," from the May 2016 Embedded Vision Summit. http://bit.ly/VSD-1704-1

Boston Dynamics introduces new wheeled robot
Having already developed such robots as the Atlas two-legged humanoid robot and the Spot quadruped robot, Boston Dynamics recently unveiled video of Handle, a new robot that features both legs and wheels. http://bit.ly/VSD-1704-3

N. American robotics market reaches all-time high in 2016
Orders and shipments of robots in North America reached an all-time high in 2016, as 34,606 robots valued at approximately $1.9 billion were ordered, representing 10% growth over 2015. http://bit.ly/VSD-1704-2

Deep learning framework for big data open-sourced by Yahoo
Yahoo has announced that it has open-sourced its TensorFlowOnSpark (TFoS) software for distributed deep learning on big-data clusters. http://bit.ly/VSD-1704-6

Uber to open autonomous vehicle research center
Uber has announced plans to open an autonomous vehicle research center in Wixom, Michigan, USA, by the end of March, according to the company. http://bit.ly/VSD-1704-4

NASA selects 20 teams for Space Robotics Challenge
NASA has selected the top 20 teams in the Space Robotics Challenge, which tasks teams with developing and displaying the ability of NASA's Robonaut 5 humanoid robot to assist in the procedures of a NASA mission. http://bit.ly/VSD-1704-5

The World's First TRUE HD THERMAL CAMERA

VIENTO HD LAB

+ True HD Resolution
+ 30 mK NETD at F1.0
+ 1920 x 1200, 12 µm Uncooled LWIR
+ 24 mm F1.1 Athermalized Optic
+ 30 Hz Frame Rate
+ Full Digital Acquisition and Display

LEARN MORE AT SPIE DCS BOOTH #740

3100 CASCADE AVENUE, HOOD RIVER, OREGON 97031, WWW.SIERRAOLYMPIC.COM



snapshots
Short takes on the leading edge

Drones light up the sky for Super Bowl halftime performance

Collaborating with Pepsi and the NFL, Intel (Santa Clara, CA, USA; www.intel.com) deployed 300 of its Shooting Star drones (http://bit.ly/VSD-SB51-1) to create an image of the American flag as halftime performer Lady Gaga performed at Houston's NRG Stadium.

The drones were recently tested with Disney (http://bit.ly/VSD-SB51-2): this past holiday season, visitors at Disney Springs, a waterfront shopping, dining, and entertainment district, saw 300 of the drones deployed in the night sky. The show, "Starbright Holidays, An Intel Collaboration," featured the drones performing a synchronized light display choreographed to holiday music.

Intel's Shooting Star drones are 15.11 x 15.11 x 3.66 in. (384 x 384 x 93 mm) with a rotor diameter of 6 in. and can fly for up to 20 minutes. The drones have a maximum takeoff weight of 0.62 lb. and can fly up to almost one mile away. Additionally, the drones feature built-in LED lights that can reportedly create more than 4 billion color combinations in the sky. All 300 drones are controlled with one computer and one drone pilot, with a second pilot on hand as backup. In order to perform at the halftime show, Intel received a special waiver from the FAA to fly the fleet up to 700 feet. Furthermore, they were required to obtain a special waiver to fly the drones in the more restrictive Class B airspace.

Steve Fund, CMO at Intel, commented on the drones in Forbes (http://bit.ly/VSD-SB51-3). "We have been using our drone technology to create amazing experiences. We put the Intel logo in the sky in Germany. We recently partnered with Disney in their fireworks show; they used drones instead of fireworks. We can program them to create any pattern that you want. We think we're doing something that's unique," he said. "We knew our technology would be utilized in the game. We just think it takes things to the next level."

While the technology and the end result are undeniably impressive, Wired (http://bit.ly/VSD-SB51-4) notes that the drone performance was taped on an earlier night and did not appear live to the crowd at Super Bowl LI.

NVIDIA high-school intern builds humanoid robots

Sixteen-year-old NVIDIA (Santa Clara, CA, USA; www.nvidia.com) intern Prat Prem Sankar's interest in robotics began more than five years ago, when his father bought him a Lego (Billund, Denmark; www.lego.com) Mindstorms NXT set, a programmable version of the toy. At that point, he said, he knew he wanted to be a robotics engineer. Flash forward to 2016's GPU Technology Conference, the world's largest event for GPU developers, where he sat in on a tutorial on deep learning. Here, said Sankar, is where he first saw the possibilities of what deep learning could be used for.

"It was the kind of technology I wanted to see in robotics." continued on page 6




Facebook computer vision algorithms caption your photos

Most Facebook (Menlo Park, CA, USA; www.facebook.com) users know by now that when you upload an image with you or one of your friends in it, the social networking website uses facial recognition algorithms to suggest who you might tag in the image. What some users may not know is that Facebook also tags photos with data such as how many people are in a photo, the setting of a photo, and even whether or not someone is smiling.

In April of 2016, Facebook rolled out (http://bit.ly/VSD-FB-1) automated alternative (alt) text on Facebook for iOS, which provides visually impaired and blind people with a text description of a photo using computer vision algorithms that enable object recognition. Users with a screen reader can access Facebook on an iOS device and hear a list of items that may be shown in a photo.

A new Chrome extension now shows what Facebook has automatically detected in your photos using deep learning. Show Facebook Computer Vision for Chrome shows the alt tags that have been added to images that you upload, populated with keywords representing the content of your images. Facebook, according to the Chrome extension's developer, Adam Geitgey (http://bit.ly/VSD-FB-2), is labeling your images using a deep convolutional network built by Facebook's AI Research (FAIR) team.

[Figure: alt tags detected for a landscape photo: cloud, sky, tree, outdoor, nature, water]

"On one hand, this is really great," said the developer. "It improves accessibility for blind users who depend on screen readers, which are only capable of processing text. But I think a lot of internet users don't realize the amount of information that is now routinely extracted from photographs."

I put the extension to the test myself and searched for an image that might come up with interesting results. The image was from a wedding of myself and three friends, two of whom were the bride and groom. The word "wedding" was not in the caption, nor did it appear anywhere in the image.

[Figure: alt tags detected for the wedding photo: 4 people, people smiling, people standing, wedding]

While the computer vision algorithms may leave out some details or not always produce results that are 100% accurate, in this particular case the algorithms were correct, as they were for the landscape image, which I took with my phone on a vacation in Maine this past summer. Other companies are doing this as well, including Google, which developed machine learning software that can automatically produce captions to describe images as they are presented to the user. The software, according to a May Vision Systems Design article (http://bit.ly/VSD-FB-3), may eventually help visually impaired people understand pictures, provide cues to operators examining faulty parts, and automatically tag such parts in machine vision systems.

continued from page 5 "I knew right away I had to be part of the company."

Two months later, he joined NVIDIA as a summer intern, where he was given the chance to utilize deep learning by building three humanoid robots using Jetson TX1 embedded system developer kits, which are capable of running complex, deep neural networks. The Jetson TX1 features a 1-Teraflop, 256-core GPU with NVIDIA Maxwell architecture, a 64-bit ARM A57 CPU, 4K video encode (30 Hz) and decode (60 Hz), and a camera interface capable of handling up to six cameras, or 1400 MPixels/s.

Sankar's robots, which he named Cyclops, were programmed by showing them thousands of images from the internet. In the video, the Cyclops robot is shown looking at an apple using a Logitech webcam, and is able to recognize it and keep it within its range of vision. "It sees it as an apple, and is confirming it constantly, checking back with the Jetson."

"The second that apple drops out of frame, and you put an orange [in frame] or replace it with some other kind of fruit, the robot continues searching; it's still looking for the apple. Deep learning is flashing thousands of images of apples, oranges, bananas, and other types of fruit, or whatever objects you want, and it creates a network, saying if it is red, go here, if it's a little round, go here."

In addition to his internship at NVIDIA, Sankar continues to build robots with his FIRST robotics team, the Arrowbotics. As the vice president of engineering, he has helped his team excel in competitions. Earlier this year, they built an obstacle-tackling robot that made it to the quarterfinals in the Silicon Valley Regional FIRST Robotics Competition. NVIDIA was recently named a gold-level supplier of the FIRST Robotics Competition, which is part of the company's effort to inspire more young students like Sankar to become science and technology innovators.

Editor's note: NVIDIA recently introduced the Jetson TX2. View more information on this here: http://bit.ly/VSD-TX2




COMPUTAR: 40+ YEARS of PRECISION OPTICS

Designed with Exceptional Vision

Introducing the Computar MPY Series

Presenting a powerful line of 12-Megapixel C-mount lenses designed to fully leverage the high sensitivity and precise high-speed imaging of the 1.1" Sony IMX253 and IMX255 sensors.

MPY Series lenses provide Computar's superior, high-performance optics, and they're available in a full range of common focal lengths for virtually any application, supporting pixel sizes as low as 3.45 µm.

For more than 40 years, Computar has been setting the standard for exceptional optical quality, technical innovation, product support and custom OEM work, serving industries such as ITS, Machine Vision, Life Sciences, Defense and Security. Take a look at Computar today; now more than ever, you'll like what you see.

MPY SERIES FEATURES:
- Machine Vision and Life Sciences Applications
- Fixed Focal Length
- Available in 8, 12, 16, 25, 35 and 50mm focal lengths
- Smallest 1.1" lens for >10MP in the industry
- F2.8
- 3.45 µm pixel size
- Image circle 17

computar.com
EAST COAST: +1 (919) 230-8700 | WEST COAST: +1 (310) 222-8600 | MEXICO: +52 (55) 5280 4660



XIMEA xiX: PCIe cameras for integrations. 2 to 50 Mpix with high bandwidths; flat flex cables (data, power, GPIO). As seen in XIMEA's 360 VR capture demo. Visit us May 1-3, Santa Clara, Space 606. www.ximea.com

Vision system to enable autonomous spacecraft rendezvous

NASA's (Washington, D.C.; www.nasa.gov) Raven technology module launched aboard the 10th SpaceX (Hawthorne, CA, USA; www.spacex.com) commercial resupply mission on February 19. It features a vision system comprised of visible, infrared, and lidar sensors that will be affixed outside the International Space Station (ISS) to test technologies that will enable autonomous rendezvous.

Through Raven, NASA says it will be one step closer to having a relative navigation capability that it can take off the shelf and use with minimum modifications for many missions, and for decades to come. Raven's technology demonstration objectives are three-fold:

- Provide an orbital testbed for satellite-servicing relative navigation algorithms and software.
- Demonstrate that multiple rendezvous paradigms can be accomplished with a similar hardware suite.
- Demonstrate an independent visiting-vehicle monitoring capability.

Raven's visible camera, the VisCam, was originally manufactured for the Hubble Space Telescope Servicing Mission 4 on STS-109. The 28-Volt camera features an IBIS5 1300 B CMOS image sensor from Cypress Semiconductor (San Jose, CA, USA; www.cypress.com), which is a 1280 x 1024 focal plane array with a 6.7 µm pixel size that outputs a 1000 x 1000-pixel monochrome image over a dual data-strobed Low Voltage Differential Signaling (LVDS) physical interface.

The camera is paired with a commercially available, radiation-tolerant 8-24 mm zoom lens that has been ruggedized for spaceflight by NASA's Goddard Space Flight Center. The motorized zoom lens provides zoom and focus capabilities via two one-half-inch stepper motors. The adjustable iris on the commercial version of the lens has been replaced with a fixed f/4.0 aperture. Additionally, the VisCam provides a 45° x 45° FOV at the 8 mm lens setting and a 16° x 16° FOV at the 24 mm lens setting. The combination of the fixed aperture and the variable focal length and focus adjustments in the lens yields a depth of field of approximately four inches from the lens out to infinity, according to NASA. Furthermore, the VisCam assembly includes a stray-light baffle coated with N-Science's ultra-black Deep Space Black coating, which protects the VisCam from unwanted optical artifacts that arise from the dynamic lighting conditions in Low Earth Orbit.

Raven's infrared camera, the IRCam, is a longwave infrared (LWIR) camera that is sensitive in the 8-14 µm wavelength range. The camera features a 640 x 480-pixel U6010 Vanadium Oxide microbolometer array from DRS Technologies (Arlington, VA, USA; www.drs.com) and has an internal shutter for on-orbit camera calibration and flat-field correction. Furthermore, the camera operates via a USB 2.0 interface and includes an athermalized, 50 mm f/1.0 lens that yields an 18° x 14° FOV.

Also included in Raven's sensor payload is a flash lidar, which collects



data by first illuminating the relative scene with a wide-beam 1572 nm laser pulse and then collecting the reflected light on a 256 x 256 focal plane array. By clocking the time between transmission and reception, the focal plane array can accurately measure the distance to the reflected surface, as well as the return intensity, at a rate of up to 30 Hz.

Five days after launch, Raven was removed from the unpressurized trunk of the SpaceX Dragon spacecraft by the Dextre robotic arm and attached to a payload platform outside the ISS. Here, Raven provides information for the development of a mature real-time relative navigation system.

"Two spacecraft autonomously rendezvousing is crucial for many future NASA missions, and Raven is maturing this never-before-attempted technology," said Ben Reed, deputy division director for the Satellite Servicing Projects Division (SSPD) at NASA's Goddard Space Flight Center in Greenbelt, Maryland, the office developing and managing this demonstration mission.

While on the ISS, Raven's components will gather images and track incoming and outgoing visiting space station spacecraft. Raven's sensors will feed images to a processor, which will use special pose algorithms to gauge the relative distance between Raven and the spacecraft it is tracking. Based on these calculations, the processor will send commands that swivel Raven on its gimbal to keep the sensors trained on the vehicle while continuing to track it. During this, NASA operators on the ground will evaluate Raven's capabilities and make adjustments to increase performance.

Over two years, Raven will test these technologies, which are expected to support future NASA missions for decades to come. View a technical paper on Raven at http://bit.ly/VSD-RAVEN.
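The range arithmetic behind the lidar's time-of-flight measurement is straightforward: the pulse travels out and back, so the one-way distance is half the round-trip path. A minimal sketch (the helper function is hypothetical, not NASA flight code):

```python
# Speed of light in vacuum, m/s.
C = 299_792_458.0

def tof_range_m(round_trip_s: float) -> float:
    """Range implied by a lidar pulse's round-trip time.

    The pulse travels to the surface and back, so the one-way
    distance is half the total path length.
    """
    return C * round_trip_s / 2.0
```

At the sensor's 30 Hz rate, each frame leaves roughly 33 ms of budget, far more than the microseconds a return from even a kilometers-distant target requires.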

MORE than a CAMERA

The right imaging system is more than just a camera. Lumenera's imaging experts work with your product team to optimize your vision system and meet your exact application needs, creating an imaging solution that's right for you.

www.lumenera.com



technologytrends
John Lewis, Editor, johnml@pennwell.com

3D IMAGING
3D vision system detects gear defects fast

While accuracy is critical for gear inspection, in production, inspection speed can make all the difference. If a gear manufacturer produces a gear in a minute or less, but it takes several minutes to inspect each gear, not every gear can be inspected. In that case, many defects may not be identified until after the gearbox is assembled and tested. This is an expensive proposition for gearbox manufacturers because of the resources required to disassemble the faulty gearbox and identify and replace the faulty gear.

Understanding this, Mesure-Systems3D (MS3D; Bruz, France; www.ms3d.eu/en/) has developed a non-contact 3D scanning system to perform 100% in-line inspection of gears right on the shop floor. continued on page 11

LIGHTING AND ILLUMINATION
Pattern projector aids prosthetics maker

A maker of 3D scanning systems for custom prosthetics needed a stable, accurate way to project light patterns onto people's bodies to improve the speed and accuracy of the prosthetic and orthotic design process. The solution needed to be compact enough to fit in a handheld device, provide the required accuracy, and be safe for users and operators.

Based in Vancouver, BC, Canada, Vorum is one of the world's leading makers of prosthetics and orthotics (P&O) using computer-aided design and computer-aided manufacturing (CAD/CAM). Founded in 1989 by Carl Saunders, continued on page 13

[Figure: By projecting an encoded pattern onto the body, which the camera records, the integrated software can process the resulting digital image to extract 3D information regarding the body's volume and shape.]

SYSTEMS INTEGRATION
3D vision system inspects connector pin height

A faulty connector that ends up in the final assembly and is buried deep within an automotive subsystem could endanger the consumer and prove very costly to the connector manufacturer.

If a tier-1 automotive supplier, for example, integrates a bad connector into an Engine Control Unit (ECU), and that ECU gets sold to an automotive manufacturer, is installed in vehicles and results in a recall, much of the associated cost will likely be charged back to the connector manufacturer. continued on page 12

[Figure: Before connectors enter the 3D vision system for board-side scanning, they are inspected to verify that the correct part is present and that it is in the proper orientation.]
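The laser displacement sensors in these systems work by triangulation: a change in surface height shifts where the projected line lands in the camera image, and calibration converts that shift back into height. A first-order sketch of the geometry (the function, its small-displacement model, and the numbers are illustrative assumptions, not any vendor's calibration):

```python
import math

def height_from_shift(image_shift_mm: float,
                      magnification: float,
                      view_angle_deg: float) -> float:
    """Surface height implied by the lateral shift of a projected laser line.

    The camera views the laser sheet at `view_angle_deg` from the sheet's
    plane.  A surface raised by h displaces the line by roughly
    h * tan(angle) in object space; dividing the measured image shift by
    the optical magnification recovers that object-space displacement.
    First-order model only; real sensors use a full calibration map.
    """
    object_shift_mm = image_shift_mm / magnification
    return object_shift_mm / math.tan(math.radians(view_angle_deg))
```

For example, with 0.5x magnification and a 45-degree viewing angle, a 0.1 mm line shift on the sensor corresponds to about 0.2 mm of height change.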




High in Quality and Features
EXO Series: up to 20 MP, up to 160 FPS
HR Series: up to 29 MP, up to 80 FPS
SHR Series: up to 47 MP, up to 7 FPS
The perfect Picture for your Application.
> CMOS or CCD sensor
> Four-LED light controller
> 256-512 MB of Burst Mode Buffer
> Sequencer, PLC, Safe Trigger
> Extended operating temperature range: -10 up to 60°C
www.svs-vistek.com
SVS-Vistek GmbH, Germany, +49 8152 99850, info@svs-vistek.com

The system, dubbed GearINSPECTION, was previewed at last year's VISION 2016 tradeshow and conference in Stuttgart, Germany.

"Basically, a gear, depending on the plant and the type of gear, is produced every 10-40 seconds. That's the cycle time," explains Dr. Marc Rosenbaum, President of MS3D. "And the tactile measurement systems currently in use have a measurement cycle time of 5-6 minutes. So there is a pretty big gap between 40 seconds and 6 minutes. In our system, handling of each gear takes 30 seconds or less."

The metrological core of the machine on exhibit was designed for 3D geometrical inspection of pinion gears that have a 3-mm minimum internal ring diameter, an 80-mm maximum external ring diameter and a height ranging from 1 to 25 mm. For this size part, the machine uses five LJ-V7060 3D laser displacement sensors from Keyence (Itasca, IL, USA; www.keyence.com) to scan the gear in less than two seconds, collecting millions of data points.

"We are presently developing a system that will measure gears up to 40 mm in height," notes Rosenbaum. "However, this requires at least 7 sensors, and the Keyence sensors are too bulky to fit within the core of the machine." Consequently, to achieve this capability, MS3D engineers have specified more compact 2700 series laser sensors from Micro Epsilon (Ortenburg, Germany; www.micro-epsilon.com), which are small enough to fit into the machine.

For the five-sensor system, one laser displacement sensor scans the central bore and another four scan the flanks of the gear, with two sensors required for each side of the gear teeth. In production, pinions would typically be loaded onto a tripod in the center of the metrological core using a robot or manually. "It typically takes four seconds to load and unload the gear," explains Rosenbaum. The gear is then rotated as the sensors simultaneously scan the gear from various positions and angles. It takes about two seconds to stitch these images together into a single 3D point cloud. For a standard gear, 12 to 18 million points are generated, and from these, geometrical features are extracted continued on page 12

[Figure: A newly developed gear inspection system uses multiple 3D laser line triangulation sensors to provide 100% in-line inspection, with the aim of saving gearbox manufacturers time and money.]

[Figure: In the five-sensor system, one profile scanner digitizes the central bore and another four simultaneously scan the flanks of the gear from various positions and angles as the gear rotates (components shown: industrial PC, 3D laser displacement sensor, rotary stage). The individual point clouds from each sensor are combined into a single point cloud, which is analyzed to extract the complete 3D measurements of the gear.]
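Combining the per-sensor scans into one cloud amounts to applying each sensor's factory-calibrated rigid transform and concatenating the results. A minimal sketch (the function and calibration values are hypothetical, not MS3D's software):

```python
import numpy as np

def merge_clouds(clouds, poses):
    """Register per-sensor point clouds into one machine-frame cloud.

    clouds: list of (N_i, 3) arrays, each in its sensor's own frame.
    poses:  list of (R, t) pairs from calibration, where R is a 3x3
            rotation and t a 3-vector mapping sensor coordinates to the
            common machine frame: p_machine = R @ p_sensor + t.
    """
    merged = [pts @ R.T + t for pts, (R, t) in zip(clouds, poses)]
    return np.vstack(merged)
```

In practice each sensor's (R, t) comes from calibrating against a reference artifact; once the clouds share one frame, feature extraction runs on the stitched result.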




technologytrends

continued from page 11

...within 20-30 seconds.

One issue that MS3D encountered was that different companies have different standards for how they want the data presented. Rather than developing a unique graphical user interface (GUI), MS3D integrates Quindos, a third-party software from Hexagon Metrology that is used as a front end.

"We tested many software products on the market, and this was the only one which was at the same time universal and able to quickly interface with our own software," said Rosenbaum. "The real bottleneck comes from the fact this software is geared for 2D input from tactile machines, not 3D data. However, this software allows us to present the data the way the customer wants it."

Geometrical features or surface aspect defects are directly extracted from the dense cloud of points using specific software developed by MS3D. The software compares the scan data to dimensions extracted from CAD models or 2D drawings to measure specific areas of interest and determine if it is within tolerance or not. A final report is created showing the measurement analysis performed on each part.
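The comparison step (scan data against dimensions taken from CAD models or 2D drawings) reduces to a per-feature tolerance check. A minimal sketch, with hypothetical feature names, nominal values and tolerances:

```python
def check_features(measured, nominal):
    """Compare measured feature values to nominal +/- tolerance; return a report."""
    report = {}
    for name, value in measured.items():
        nom, tol = nominal[name]
        report[name] = {
            "measured": value,
            "deviation": value - nom,
            "in_tolerance": abs(value - nom) <= tol,
        }
    return report

# Hypothetical nominal dimensions in mm, as extracted from a CAD model.
nominal = {"bore_diameter": (3.000, 0.010), "outer_diameter": (42.000, 0.025)}
measured = {"bore_diameter": 3.004, "outer_diameter": 42.031}

report = check_features(measured, nominal)
# bore_diameter sits inside its band; outer_diameter misses its +/-0.025 mm band.
```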

continued from page 10

In order to minimize the risk of faulty connectors making it into the supply chain, one major US connector manufacturer gave engineers at G2 Technologies (Apex, NC, USA; www.g2tek.com) the nod to develop an automated connector inspection system. The system, built on the PXI platform from National Instruments (NI; Austin, TX, USA; www.ni.com/vision), combines a machine-vision based non-contact 3D inspection system, a cleaning station, electrical test and engraving stations.

"Inspectors can see bent and missing pins all day long, but it's easy to miss a faulty connector with a pin that's just too short," explains Craig Borsack, President of G2 Technologies. "This application is a huge improvement. By offering noncontact inspection and the ability to spot these types of flaws that were previously making it through the process, our non-contact inspection system could save this connector manufacturer and others millions of dollars or more each year."

The inspection system is installed after stitching, a process that accumulates contact pins and inserts them into molded connector housings. Stitched connectors enter on an input conveyor and pass under a Genie Nano M1920 GigE Vision camera from Teledyne DALSA (Waterloo, ON, Canada; www.teledynedalsa.com).

An image of the connector is acquired with illumination provided by a DL 194 diffuse dome light from Advanced Illumination (Rochester, VT, USA; www.advancedillumination.com). The image is then analyzed to verify that the correct part is present and that it's in the proper orientation to proceed through the inspection process. If the part is not correct or is improperly oriented, the system diverts it into a reject bin.

Parts deemed correct and that are properly aligned proceed to an orientation wheel that repositions the part board-side down for the next station, which is board-side inspection. At this station, a scanCONTROL 2650-25 laser line profiler from Micro Epsilon (Ortenburg, Germany; www.micro-epsilon.com) scans the entire board side of the connector.

After board-side inspection, connector-side inspection takes place. Due to cycle time requirements, and the need to scan the part from both directions, inspection of the mating side of the connector is performed at two stations by two additional laser line profilers.

"Two scans are required due to shadowing effects created by the connector shell as the part is scanned from one side," explains Borsack. "In order to get a complete 3D point cloud, the part must be scanned from both directions, then the images are combined to mask out the shadows."

The system scans from both sides and creates a plane based on a pad (feature) on the bottom of the mate-side connector. This plane will be used to measure true position and pin height of the contacts.

Metal clips (1, 2, 3, and 4) on the connector create a reference plane from which the system measures the height of the contacts. Two posts (T and U) on the connector provide a datum for measuring the position information for each contact. The 3D view (right) shows a close-up of post U and contacts 5 and 6.

Borsack said his company is proud of the 3D inspection system it has developed and hopes other connector manufacturers will consider exploring this for their companies.

"It's a small price to pay when you look at the potential savings it offers. Not only can this inspection system help protect a manufacturer from being sued for millions of dollars in damages in a recall situation," Borsack said. "It could also absolutely save lives."
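The height measurement described above can be sketched as fitting a plane through the reference points and taking the signed distance of each contact tip from it. The clip coordinates, nominal pin height and tolerance below are illustrative values, not G2's actual data.

```python
import math

def plane_from_points(p1, p2, p3):
    """Return (unit normal, anchor point) of the plane through three reference points."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    n = [u[1] * v[2] - u[2] * v[1],      # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    mag = math.sqrt(sum(c * c for c in n))
    return [c / mag for c in n], p1

def height_above_plane(point, normal, origin):
    """Signed distance of a pin tip from the reference plane."""
    return sum(n * (p - o) for n, p, o in zip(normal, point, origin))

# Hypothetical clip positions (mm) picked out of the 3D point cloud, near z = 0.
clips = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.02), (0.0, 8.0, -0.01)]
normal, origin = plane_from_points(*clips)

pin_tip = (4.0, 3.0, 1.52)
h = height_above_plane(pin_tip, normal, origin)
ok = abs(h - 1.50) <= 0.10  # nominal height 1.50 mm +/- 0.10 mm (illustrative spec)
```

A production system would fit the plane to all four clips in a least-squares sense rather than using exactly three points.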


continued from page 10

Vorum had been researching CAD/CAM as a more precise and efficient way to design and make prosthetics and orthotics for more than a decade before it opened for business. Saunders gave the first public demonstration of a computer-aided design and manufacturing system for P&O professionals in 1983.

Vorum was in the process of developing a portable, handheld 3D structured-light scanner that would allow P&O professionals to take advantage of CAD technology, speeding up the process of making prosthetics and orthotics by eliminating the need for plaster casts.

The new product, called Spectra, was designed to be used in tandem with an LED projector and a single camera to capture a 3D image. Along with Vorum's seven-axis robotic carving system, it utilizes a touch screen and software customized for carving prosthetic and orthotic shapes. Spectra also creates electronic files to make up an easily accessible library of images and to document how a patient's shape and volume changes over time.

The projector is situated at one end of the Spectra, with the camera at the other end. The projector projects an encoded pattern onto the body, which the camera records. The integrated software processes the resulting digital image, allowing the user to take 3D information from the projected pattern on the body.

While getting Spectra ready for market, Vorum shopped for a structured light projector that would fit on the scanner and properly illuminate the subject so that the integrated camera could work more efficiently. Vorum chose to use the SP30 series LED structured light pattern projector from Smart Vision Lights (SVL; Muskegon, MI, USA; www.smartvisionlights.com).

Using a 5-watt LED, the SP30 can be used to project any user-selectable pattern. With an internal current-based LED driver capable of strobe output, the new light source met all of Vorum's compact size requirements and provided an intense projected spot size.

"We did not want a laser-based source because of the additional safety/risk overhead," said Vorum CTO Ed Grochowski. "It is also a potential objection to overcome for our typical customer applications, such as scanning patients."

Smart Vision Lights' product offered the "combination of requisite brightness, form factor, power requirements and size," added Grochowski. "These were the main technical drivers."

Along with the SP30's form factor and intensity level, Vorum also cited the SP30's durability and price in making it the right fit for the Spectra scanner.
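The article does not disclose Spectra's encoding scheme; a common choice for an "encoded pattern" in structured-light scanning is a sequence of Gray-code stripe patterns, in which each pixel's bright/dark history across the captures identifies the projector stripe that illuminated it. A toy sketch of that decoding step, with made-up four-pixel "images":

```python
def decode_gray(bits):
    """Convert a per-pixel Gray-code bit sequence (MSB first) to a stripe index."""
    value = 0
    for bit in bits:
        value = (value << 1) | (bit ^ (value & 1))  # b_i = b_(i-1) XOR g_i
    return value

def stripe_indices(images, threshold=128):
    """images: one row of pixel values per projected pattern, MSB pattern first.
    Returns the decoded projector stripe index for every pixel."""
    width = len(images[0])
    indices = []
    for x in range(width):
        bits = [1 if img[x] >= threshold else 0 for img in images]
        indices.append(decode_gray(bits))
    return indices

# Hypothetical 2-pattern capture of a 4-pixel row (pixel values 0 or 255).
images = [
    [0, 0, 255, 255],  # MSB pattern
    [0, 255, 255, 0],  # LSB pattern
]
indices = stripe_indices(images)
```

Once each camera pixel is matched to a projector stripe this way, the 3D position follows from triangulating the known projector and camera geometry.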



Industry Solutions Profile

Smart camera checks currency for counterfeits

Open-source software and modular embedded hardware facilitate the rapid development of systems to detect counterfeit banknotes.

Dr. Ricardo Ribalda

Lead Firmware Engineer, Qtechnology (Valby, Denmark; www.qtec.com)

Traditionally, many systems builders have developed custom vision applications using off-the-shelf Windows-based PC hardware and vendor-supplied software libraries. However, there is a growing trend to use less expensive embedded systems hardware, open-source operating system software such as Linux, and open-source image processing libraries such as OpenCV (www.opencv.org).

To demonstrate how such an industrial machine vision application can be built quickly and efficiently using embedded systems and open-source software, Danish camera vendor Qtechnology (Valby, Denmark; www.qtec.com) recently teamed with Advanced Micro Devices (AMD; Sunnyvale, CA, USA; www.amd.com) and Mentor Graphics (Wilsonville, OR, USA; www.mentor.com).

Using an industrial camera based on AMD's R series system-on-chip (SoC, formerly codenamed Merlin Falcon) running the Mentor Embedded Linux operating system, Qtechnology has developed a system to perform high-speed scanning and validation of paper currency. The application highlights how counterfeit notes can be identified by scanning the currency with a light source having a specific spectral range, capturing images of the banknotes with a Qtechnology camera and then checking the resulting images (Figure 1). To achieve real-time performance, the software was analyzed and optimized using Mentor Embedded Sourcery CodeBench and Sourcery Analyzer to identify and resolve functional and performance issues.

Figure 1: Using an industrial camera prototype based upon AMD's R series SoC (formerly codenamed Merlin Falcon) running the Mentor Embedded Linux operating system, Qtechnology has created a system to perform high-speed scanning and validation of paper currency.

Cameras and software
Qtechnology cameras are modular in nature and comprise a number of heads into which a variety of CMOS, CCD, InGaAs, and microbolometer sensors can be mounted. There is also a variety of camera bodies, ranging from a pure FPGA system (QT5012) to the brand-new system containing two main computing units (QT5122): an FPGA and an AMD R series SoC, which features four Excavator x86 CPU cores with a Radeon graphics GPU and an I/O controller on a single die. The modular nature of the system enables developers to mix and match the heads with the bodies to meet the needs of their application (Figures 2a and 2b).

The FPGA inside the camera controls the sensor's settings, processes the images and transfers captured images over a PCIe interface. Several operations are performed by the FPGA, such as white balance, perspective and illumination correction, thus offloading the main CPUs of the task.

Once images have been pre-processed by the FPGA, they are transferred over a PCI Express


Gen2 x4 interface to the R series SoC. If further computing power is required, an additional GPU can be added within the body of the camera. However, for most applications, the computing power of the R series SoC is sufficient.

Figure 2a: Qtechnology cameras comprise a number of heads, into which a variety of CMOS, CCD, InGaAs, and microbolometer sensors are mounted, and a body (the QT5122 is displayed).

Figure 2b: The body contains two main computing units: an FPGA and an AMD R series SoC with four Excavator x86 CPU cores, a Radeon graphics GPU and an I/O controller, all on a single die.

Accessing the functionality of the GPU compute units in the system is achieved through the Open Computing Language (OpenCL), an open standard maintained by the non-profit technology consortium Khronos Group (Beaverton, OR, USA; www.khronos.org) for programming and executing programs on devices such as FPGAs, CPUs and GPUs. Similarly, the functionality of the image sensor can be accessed through Video4Linux (www.linuxtv.org), a collection of device drivers and an API for supporting video capture on Linux systems. This enables parameters such as the image resolution and the frame rate to be controlled, freeing the user from proprietary libraries.

Because of the open architecture of Qtechnology's platform, a developer has many programming options to choose from (Figure 3). First, an application could be created simply using C or C++. Secondly, open-source software such as GStreamer (https://gstreamer.freedesktop.org) could be deployed to process image data by connecting a number of proprietary or third-party image processing plug-ins to create custom pipelined software. Thirdly, programs could be developed using libraries of image processing functions written in OpenCV or Python, or by reusing functions from third-party libraries such as MATLAB from MathWorks (Natick, MA, USA; www.mathworks.com) or HALCON from MVTec Software (Munich, Germany; www.mvtec.com).

Figure 3: Application programs can be developed using C or C++. Alternatively, free and open-source software such as GStreamer can be deployed to process image data by connecting a number of proprietary or third-party image processing plug-ins to create a custom pipelined software program. Also, programs can be developed using libraries of image processing functions that have already been written in OpenCV, Python or third-party libraries.

Banknote characteristics
Before writing any application software, however, developers must be aware of the characteristics of the inspected product, which affect the choice of camera and light source technologies. Due to its modular nature, this is an easy task with Qtechnology cameras: the vision researcher can develop a single application and then experiment with the different heads/sensors, dramatically reducing the time needed to explore the solution space.

To discern between real and counterfeit banknotes, it was determined that under infrared (IR) light, only the emerald number, the right side of the main image and the silvery stripe are visible on the front of the banknote. On the reverse side, only the numerical value and the horizontal serial number are visible (Figure 4). Hence it was decided to illuminate the banknotes using a halogen lamp as a source of IR light and to capture the images using a Qtechnology monochromatic CMOS camera (a Qtec QT5122 body with a CMOSIS (Antwerp, Belgium; www.cmosis.com) monochrome IR-enhanced 2MPixel head) fitted with an IR filter. Images captured by the camera of the real Euro note will have all the left part of the details missing, while in the counterfeit note the details will still be clear. This is because Euro notes use an ink that absorbs IR light, whereas counterfeit notes (possibly


photocopied) reflect IR wavelengths.

Once the hardware for the system has been selected, images of the banknotes can be captured from the camera and image processing software can be developed. To facilitate the programming of the camera, Qtechnology has developed a web-based interface that enables the camera to be programmed remotely (Figure 5). The web-based tool accesses the hardware of the camera through the Video4Linux interface and allows easy navigation through tens of settings, such as the bit mode of the sensor, the response curve, the triggering mode and the color map.

Figure 5: Once the hardware for the system has been selected, images of the banknotes can be captured from the camera. To facilitate camera programming, Qtechnology has developed a web-based interface that enables camera features to be programmed remotely.

By using the web-based tools, a region of interest (ROI) in the image of the Euronote can be selected where details of a legitimate note should not be visible when the note is illuminated by IR light. These tools also enable the frame rate of the system to be set in software. It is possible, of course, to analyze the whole image of the note in software. However, by using a smaller ROI, the maximum frame rate (up to more than 1000 fps) of the system can be achieved. Thanks to the PCIe Gen2 x4 link to the APU, up to 2 GB/s of data can be received by the body.

Once the images have been sampled, they can either be saved in JPEG or in a lossless format. To analyze the images, the developer can use Jupyter Notebook (http://jupyter.org). This enables programmers to create and share documents that contain live code, equations, visualizations and explanatory text. Using the Notebook, a developer can program the embedded system in the Python programming language, which in turn provides access to both OpenCV (http://opencv.org) and NumPy/SciPy (www.numpy.org) image processing libraries, enabling them to be combined in a single program.

Figure 4a and b: By illuminating the front of the banknote with IR light, only the emerald number, the right side of the main image and the silvery stripe are visible. On the back, only the numerical value and the horizontal serial number are visible.

Banknote identification
In the case of banknote identification, the algorithm was easily implemented using the Jupyter Notebook interface. The first stage of the image processing chain involved locating the edges of the notes by applying a Canny filter (http://bit.ly/VSD-CANNY) to the images of the Euronotes. The numerical values for the upper and lower thresholds were calibrated using interactive widgets (sliders). With the edge detection performed, a filling algorithm was selected and applied to fill out all closed regions in the images, after which the contours of the notes were detected to determine the specific location of the notes within the image (Figure 6).

Once the position of a note was determined, an ROI was located inside the image corresponding to the location of features of a legitimate bank note. These would be invisible when the note is illuminated by the IR light source (Figure 7).

Because features on legitimate notes are effectively invisible in the ROI, analyzing this region can determine whether the note is real or counterfeit. To do so, a simple threshold classifier was applied to determine the number of pixels in the ROI. By setting the thresholds of the classifier in software, it is possible to determine the pixel intensity values under the ROI and then determine the legitimacy of the currency. Having done so, the notes are then labeled to indicate whether they are real or counterfeit.

The system is light dependent, so it is important to ensure that the samples of the banknotes are well illuminated. Because of this, it may be necessary to adjust the threshold of the classifier to accommodate the different luminosity levels used to illuminate the banknotes.
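The threshold classifier described above can be sketched in a few lines with NumPy (standing in for the authors' actual Notebook code); the ROI, pixel threshold and count threshold here are illustrative values applied to synthetic images:

```python
import numpy as np

def classify_note(ir_image, roi, pixel_thresh=100, count_thresh=50):
    """Count bright pixels inside the ROI of an IR image.
    On a genuine Euro note the IR-absorbing ink leaves the ROI dark; a
    counterfeit (e.g. a photocopy) reflects IR and the ROI stays bright."""
    x, y, w, h = roi
    patch = ir_image[y:y + h, x:x + w]
    bright = int(np.count_nonzero(patch >= pixel_thresh))
    label = "genuine" if bright < count_thresh else "counterfeit"
    return label, bright

# Synthetic 8-bit IR images: genuine note -> dark ROI, counterfeit -> bright ROI.
roi = (20, 20, 30, 30)  # x, y, width, height
genuine = np.full((100, 100), 10, dtype=np.uint8)
fake = np.full((100, 100), 200, dtype=np.uint8)

label_g, _ = classify_note(genuine, roi)
label_f, _ = classify_note(fake, roi)
```

In practice the two thresholds would be tuned against the illumination level, as the article notes.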


Using the Jupyter Notebook interface, the algorithm can be easily tuned and debugged, showing any intermediate results in a numerical and graphical way.

Figure 6: The first stage of image processing involves locating the notes in each of the images. To do so, a Canny filter is applied to detect the edges of the notes. With the edge detection performed, a filling algorithm is applied to fill out all closed regions in the image, after which the contours of the notes can be detected to determine the specific location of the notes within the image.

Figure 7: Once the position of a note is determined, an ROI is located in the image where the location of features of a legitimate bank note would be invisible when the note is illuminated by the IR light source.

Code optimization
Once the correct vision algorithm has been developed, it is important to have an implementation that can run at the desired throughput. It is optimized to run efficiently on the embedded AMD R series SoC, specifically to ensure that optimum use is made of the four Excavator x86 CPU cores on the device.

In this case, the implementation was done using GStreamer (https://gstreamer.freedesktop.org), a multimedia framework that links together a variety of custom-built or off-the-shelf media processing elements called plug-ins into a software pipeline. Hence, the first step in the analysis process involved converting the image processing software into a GStreamer plug-in that was labeled Euronote.

Using GStreamer, the off-the-shelf Video4Linux plug-in was used to fetch images into the pipeline from the 2MPixel camera and set the frame rate at 100fps. Then, another GStreamer plug-in was used to send the images to the Euronote plug-in, after which it could be scaled to 1024 x 768 in an I420 format (http://bit.ly/VSD-I420), where the images are represented in YUV.

Once the application was written, Mentor's Sourcery Analyzer was employed to determine the speed of execution. To do so, the code was copied into the Mentor Embedded Sourcery CodeBench, from where the binary code could be transferred to the target AMD processor where it was executed. The Sourcery Analyzer then graphically highlights CPU statistics that show the time each core spent in a given state, scheduling statistics that highlight which software threads ran on each core, and the thread migration rate, or how frequently the system scheduler moved software threads between cores.

Figure 8: By using the scaling tracer on the Mentor Embedded Sourcery Analyzer, the number of ALUs on the SoC used can be determined. If the processing power is underutilized, the execution of the program can be optimized so that multiple threads, or components of the Euronote application, can be scheduled to run in parallel.

Figure 9: Prior to optimization, the system was capable of capturing images at a rate of 20fps, while the latency between image acquisition and processed output was 45ms. Having resized the image data, image capture rate was increased to 50fps with 20ms latency. By introducing a queue to the process, system throughput was increased to 60fps with a 400ms latency. Finally, by limiting the maximum size of the buffer, the system captured images at 100fps with 35ms latency.

           Frames per sec   Latency (ms)   GStreamer change
Base       20               45             (none)
Resized    50               20             videoscale ! video/x-raw, width=800, height=600
Queue      60               400            queue
Size=1     100              35             queue max-size-buffers=1

Throughput analysis
The Sourcery Analyzer enables a throughput analysis to be performed on the Euronote GStreamer application. If the system is not meeting the image capture rate demanded by the application, it is possible to access the Euronote software program from the Sourcery Analyzer environment. This enables the size of the image that is acquired to be scaled down before it is applied to the Euronote application for processing, hence improving the number of frames per second that can be acquired.

However, if scaling the image size still does not effectively enhance the speed of the system, it is possible to enhance performance further by ensuring that all four ALUs on the embedded processor are utilized in full. By using the scaling tracer on the Mentor Embedded Sourcery Analyzer, the number of ALUs on the SoC used can be determined.

If the processing power is under-used, it is possible to optimize the execution of the program so that multiple threads, or components of the Euronote application, can be run in parallel. To do so, the GStreamer code can again be accessed from the analyzer and


queues created in the software to optimize the pipeline. Once achieved, the resulting code can again be executed and profiled using the Sourcery Analyzer (Figure 8).

However, using such a technique is not without its limitations. Having parallelized the code to run on multiple CPUs, it may be that the performance of the system is still less than optimal. Instead of going through the pipeline, images are being copied into the queues, a process which itself introduces some overhead. This can be verified by using the memory allocator tracer in the Sourcery Analyzer and viewing the amount of memory allocated to the process.

This issue can be resolved easily, however, by returning to the Euronote code via the Sourcery Analyzer and ensuring that no image data is stored within the queues during the execution of the Euronote application. Once this has been achieved, the program can be saved, executed again and the results of the execution viewed using the Sourcery Analyzer.

Optimized system
The results achieved by using the Mentor Embedded tools to optimize the performance of the system are summarized in Figure 9. Prior to optimization, the system was only capable of capturing images at a rate of 20fps, while the latency between image acquisition and processed output was 45ms. Having resized the image data, the image capture rate was increased to 50fps with 20ms latency. By introducing a queue to the process, it was possible to improve the system throughput further to 60fps with 400ms latency. Finally, by limiting the maximum size of the buffer, the system captured images at 100fps with 35ms latency.

Companies mentioned:
Advanced Micro Devices, Sunnyvale, CA, USA; www.amd.com
Khronos Group, Beaverton, OR, USA; www.khronos.org
MathWorks, Natick, MA, USA; www.mathworks.com
Mentor Graphics, Wilsonville, OR, USA; www.mentor.com
MVTec Software, Munich, Germany; www.mvtec.com
Qtechnology, Valby, Denmark; www.qtec.com

Undoubtedly, the use of open-source operating system software and libraries of freely available image processing software, coupled with the use of advanced system diagnostic tools and inexpensive modular camera hardware, will continue to provide developers with an inexpensive means by which they can develop machine vision applications. Indeed, the proliferation of such hardware and software may ultimately provide a significant challenge to many existing software-hardware vendors who rely on proprietary architectures.
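The Figure 9 optimizations map onto standard GStreamer elements: videoscale with a caps filter for the resize, and queue with its max-size-buffers property for the buffering change. The sketch below assembles gst-launch-style pipeline descriptions for those variants; the source element, device path and the euronote element name (the authors' custom plug-in) are assumptions about how such a pipeline might be written, not the authors' exact pipeline.

```python
# Baseline pipeline: capture, convert, analyze, discard output.
BASE = "v4l2src device=/dev/video0 ! videoconvert ! euronote ! fakesink"

def with_scaling(pipeline):
    """Scale frames down before the analysis element (Figure 9, 'Resized')."""
    return pipeline.replace(
        "! euronote",
        "! videoscale ! video/x-raw, width=800, height=600 ! euronote")

def with_queue(pipeline, max_buffers=None):
    """Decouple capture and analysis with a queue; capping it at one buffer
    trades queue depth for latency (Figure 9, 'Queue' vs 'Size=1')."""
    q = "queue" if max_buffers is None else f"queue max-size-buffers={max_buffers}"
    return pipeline.replace("! euronote", f"! {q} ! euronote")

resized = with_scaling(BASE)          # the 50fps / 20ms variant
best = with_queue(resized, max_buffers=1)  # the 100fps / 35ms variant
```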



Integration Insights

IEEE1588 simplifies camera synchronization

Turning the camera itself into the master eradicates the need for preparing a Grand Master Clock

Satoshi Eikawa, Sony Image Sensing Solutions, Weybridge, UK (www.image-sensing-solutions.eu)

Traditionally, clocks built into cameras are reset when the power is switched on, and the time stamps asynchronously commence a count-up. For networked cameras, these time stamps gradually go out of synchronization as time passes owing to discrepancies in the clocks' frequencies.

IEEE1588 is the standard that defines the Precision Time Protocol (PTP) and is in effect for equipment connected together by Ethernet. It synchronizes the time with extremely high levels of precision in cameras that are connected via an Ethernet cable to a Grand Master standard time clock.

Cameras that support IEEE1588 exchange synchronization messages with the Grand Master at predetermined cycles, with the internal counter recalibrated in accordance with the time stamp information at the time of sending and receiving (Figure 1). The IEEE1588 time stamp is an epoch time counter with 00:00, January 01, 1970 set as [0], and it provides a resolution of 1ns (1GHz). Contrary to the mechanism of the PLL (phase-locked loop), free-running counters are updated every time a synchronization message is exchanged. And reducing the cycle interval with the Grand Master improves precision even further.

Figure 1: Using IEEE1588 it is possible to schedule regular synchronization, bringing cameras with faster or slower clock speeds back in line with the master. By shortening the synchronization interval, time lag is minimized, and therefore jitter too.

Trigger and GPO links
Prior to IEEE1588 it was impossible to guarantee truly accurate simultaneous capture in a multi-camera system.

Previous generations of GigE Vision cameras (up to and including GigE Vision 1.2) contained action commands that were designed to operate simultaneously with a single command for multiple cameras. However, there were still some inconclusive elements, such as delays in network propagation and delays in firmware processing. To overcome these issues, a combination of IEEE1588 and the Action Command added to GigE Vision 2.0 enabled each individual camera to specify a time for executing actions.

The new Sony XCG-CG camera series has been equipped with a function that begins exposure in synchronization with this absolute time, including functions defined as IEEE1588 applications conforming to GigE Vision standards that are known as Scheduled Action Commands.

The new cameras have been equipped with a Scheduled Action Command that presets the time for starting synchronization for the software trigger and IEEE1588. For example, it is possible to set synchronization to once every second, with all cameras in a network synchronized to the Grand Master at this interval.
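The message exchange described above lets each camera estimate its offset from the Grand Master using the standard IEEE1588 two-way arithmetic, which assumes a symmetric network path. A sketch with made-up nanosecond timestamps:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE1588 offset/delay estimate from one Sync/Delay_Req exchange.
    t1: master sends Sync        t2: slave receives Sync
    t3: slave sends Delay_Req    t4: master receives Delay_Req
    Assumes the path delay is the same in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # how far the slave clock is ahead
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Hypothetical timestamps in nanoseconds since the 1970 epoch zero point.
t1 = 1_000_000_000
t2 = 1_000_150_000   # slave clock runs ahead, plus wire delay
t3 = 1_000_400_000
t4 = 1_000_350_000

offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
# offset: slave is 100 us ahead of the master; delay: 50 us one-way
```

The camera subtracts the estimated offset from its free-running counter at every exchange, which is why shortening the exchange interval keeps the residual error (jitter) small.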


Reducing the interval between synchronization enables it to be carried out while time discrepancies are still small, which helps to minimize jitter. With regards to free-running, the timing for starting camera exposure is aligned with the time that has been synchronized with the Grand Master. Although this is affected by the network environment, it is ideally possible for exposure synchronous control to be performed within 1 µs (Figure 2).

It is generally necessary to prepare separate equipment that acts as the Grand Master: either special equipment available on the open market, or a PC with a Linux OS running the free Grand Master software. However, one area we're investigating is the mounting of an IEEE1588 master function on the XCG-CG series in a future upgrade. Doing this would turn the camera itself into the master and eradicate the need for preparing a separate Grand Master Clock, simplifying the mechanism for synchronization between cameras, and between cameras and peripheral equipment.

Figure 2: Adjusting the exposure timing of the camera in free run, based on the Grand Master. Note that while it can be affected by the network environment, it is possible to drive exposure synchronization to within 1 µs. (The diagram shows the exposures of Camera 1 and Camera 2 aligned to within 1 µs.)

Figure 3: Sony's XCG-CG camera series has been equipped with a function that begins exposure in synchronization with absolute time, including functions defined as IEEE1588 applications conforming to GigE Vision standards that are known as Scheduled Action Commands.

Usage scenarios and advantages

Simplification of Post-Event Analysis: Mounting IEEE1588 onto GigE Vision cameras enables the camera to synchronize its time settings with the Grand Master Clock, and the time stamps attached to image packets enable absolute time to be displayed. One of the scenarios for use with ITS (Intelligent Transportation Systems) is the more accurate detection of vehicles in violation of the speed limit, without the need for radar. The time stamps obtained from two different points will allow decisions to be made as to whether vehicles are exceeding the speed limit, and accurate times of images obtained from both points will simplify high-precision speed analysis.

IEEE1588 time stamps will also prove effective for industrial robots and all types of inspection devices used in assembly work. Adding absolute times to images showing inspection processing and results will enable the items in question to be easily identified.

Improved Image Processing System Reliability: One of the issues with installing vision systems for industrial assembly and inspection robots and devices is the layout of the wiring. Cables are subject to wearing and severing when used for long-term operations, and this causes a drop in system operability. The chance of this increases in line with the number of cameras in a system: the number of cables must also be increased if each one is to be synchronized. The XCG-CG Series enables multiple cameras to be synchronized with IEEE1588 and Scheduled Action Commands. The cameras also support PoE (power over Ethernet), which means that only one cable is required for exposure synchronization, image output and the power supply.

Curtailment of Tact Time: One issue that still remains for system developers and manufacturers using such systems is the time between image capture and peripheral equipment (not supporting IEEE1588) taking action. As such, Sony is planning to link the new camera series to GPO (General Purpose Output) in the future, which will enable such peripheral equipment to be operated based on time synchronicity. For example, connecting the camera's GPO to robots during the period from the camera capturing images to the robot performing picking operations will enable the image capture and robot operations to be synchronized. Another example is bottle inspection, where devices require multiple cameras to be synchronized and inspection samples are conveyed at a predetermined speed; this approach will have an extremely high affinity with systems that use the high-precision time synchronization capabilities of IEEE1588.
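The two-point ITS speed measurement described above reduces to arithmetic once both cameras share the Grand Master's clock: a known distance between capture points divided by the difference of the two image timestamps. A sketch with invented numbers, not taken from any deployed system:

```python
def speed_kmh(distance_m: float, t1_ns: int, t2_ns: int) -> float:
    """Speed between two timestamped detections sharing one IEEE1588 clock."""
    dt_s = (t2_ns - t1_ns) / 1e9  # nanosecond timestamps -> seconds
    if dt_s <= 0:
        raise ValueError("second detection must be later than the first")
    return distance_m / dt_s * 3.6  # m/s -> km/h

# Vehicle seen by camera 1, then by camera 2 located 500 m downstream,
# with timestamps 18 s apart:
v = speed_kmh(500.0, 1_000_000_000_000, 1_018_000_000_000)
print(round(v, 1))  # 100.0 (km/h)
```

The accuracy of the result is bounded by the synchronization error between the two cameras, which is why sub-microsecond IEEE1588 alignment matters far more here than frame rate.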






product focus on Image processing software

Software packages
offer developers
numerous options for
machine vision design
Systems integrators can take a number of different approaches when configuring their
machine vision systems.

Andy Wilson, Contributing Editor

When building machine vision systems, developers can choose from a number of commercially available software packages from well-known companies. In choosing such software, however, it is important to understand the functionality these provide, the hardware supported and how easily such software can be configured to solve a particular machine vision task.

In the past, the choice of software was limited, with many companies merely offering callable libraries that performed relatively simple image processing operations. These included point processing operations such as image subtraction, neighborhood operations such as image filtering and global operations such as Fourier analysis.

While useful, developers were tasked with understanding each of these functions and how they could provide a solution to a machine vision task such as part measurement. Often, the need to build software frameworks to support such libraries made developing such programs laborious and time consuming.

Figure 1: Vision Builder AI from NI allows developers to configure, benchmark, and deploy vision systems using functions such as pattern matching, barcode reading and image classification within an interactive menu-driven development environment.

Rapid development

While such libraries are still available from a number of open sources (see sidebar, "Open-source code provides alternative options," p. 24), vision software manufacturers realized that systems integrators needed to more rapidly develop applications to solve specific machine vision problems without the need to understand the low-lying complexity of image processing code. Because of this, many vendors now offer higher-level tools within their software packages that provide functionality such as image measurement, feature extraction, color analysis, 2D bar code recognition and image compression, all within an interactive environment.

Examples of these high-level tools include Matrox Imaging Library (MIL) from Matrox Imaging (Dorval, QC, Canada; www.matrox.com), Open eVision from Euresys (Angleur, Belgium; www.euresys.com), HALCON from MVTec Software GmbH (Munich, Germany; www.mvtec.com), VisionPro from Cognex (Natick, MA, USA; www.cognex.com), Vision Builder from National Instruments (NI; Austin, TX, USA; www.ni.com), Common Vision Blox (CVB) from Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.com) and NeuroCheck from NeuroCheck (Stuttgart, Germany; www.neurocheck.com). Such tools allow many commonly used machine vision functions to be configured without the need for extensive programming. In this way, developers





are abstracted from the task of low-level code development, allowing them to build machine vision applications more easily.
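The three classes of low-level library operation mentioned earlier (point, neighborhood and global) can be sketched in a few lines of NumPy. This is a generic illustration of the categories, not any vendor's library API:

```python
import numpy as np

img_a = np.array([[10, 20], [30, 40]], dtype=np.int32)
img_b = np.array([[ 1,  2], [ 3,  4]], dtype=np.int32)

# Point operation: each output pixel depends only on the same input pixel.
diff = img_a - img_b  # image subtraction

# Neighborhood operation: a 3x3 mean filter; each output pixel depends on
# the pixels around it (edges handled here by zero padding).
def mean_filter(img):
    padded = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + 3, x:x + 3].mean()
    return out

# Global operation: the 2D Fourier transform; every output value depends
# on every input pixel.
spectrum = np.fft.fft2(img_a)

print(diff)                 # [[ 9 18] [27 36]]
print(spectrum[0, 0].real)  # 100.0  (DC term = sum of all pixels)
```

The frameworks discussed in this article wrap exactly these kinds of primitives behind measurement, matching and classification tools.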
To further simplify this task, many software packages feature graphical interfaces that allow high-level image processing features to be combined within an integrated development environment (IDE). Matrox's Design Assistant, for example, is an IDE where vision applications are created by constructing a flowchart instead of writing traditional program code. In addition to building a flowchart, the IDE enables users to directly design a graphical operator interface for the application. Similarly, Vision Builder AI from NI allows developers to configure, benchmark, and deploy vision systems using functions such as pattern matching, barcode reading and image classification within an interactive menu-driven development environment (Figure 1).

Figure 2: To allow developers to access the underlying power of FPGAs, VisualApplets from Silicon Software is a software programming environment that allows developers to perform FPGA programming using data flow models.

In many cases, vendors will use their software to provide end-users with software developed to address specific tasks such as optical character recognition (OCR). To read and verify barcode labels on large panels consisting of multiple PCBs, for example, Microscan (Renton, WA, USA; www.microscan.com) has used its Visionscape software to ensure that each of the individual circuit boards on the panels can be tracked through the entire production process (see Modular vision system eases printed cir-

Open-source code provides alternative options


Many developers choose high-level commercially-available software packages with which to develop machine vision systems because of their ease of use and the technical support available. Other, more ambitious developers may wish to investigate the use of open-source code in their projects. Although little technical support may be offered, no licensing or royalty fees are required.

Such open-source software ranges from C/C++ and Java libraries, frameworks and toolkits to end-user software packages, many of which can be found on the website of RoboRealm (Aurora, CO, USA; www.roborealm.com) at http://bit.ly/VSD-1704-7. Although some of the links are outdated, the website does provide a review of many open-source machine vision libraries that are available.

Two of the most popular methods of developing applications using open-source code involve leveraging software such as AForge.NET (www.aforgenet.com), a C# framework designed for developers of computer vision and artificial intelligence, and the Open Source Computer Vision Library (OpenCV; http://opencv.org), an open-source computer vision and machine learning software library that offers C/C++, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android operating systems.

For those wishing to use OpenCV from C#, Elad Ben-Israel has created a small OpenCV wrapper for the .NET Framework. The code consists of a DLL written in Managed C++ that wraps the OpenCV library in .NET classes, so that they are available from C#, VB.NET or Managed C++. The wrapper can be downloaded at: http://bit.ly/VSD-1704-8. Other .NET wrappers include Emgu CV (www.emgu.com), a cross-platform .NET wrapper to OpenCV that allows OpenCV functions to be called from .NET-compatible languages such as C#, VB, VC++ and IronPython. The wrapper can be compiled by Visual Studio, Xamarin Studio and Unity, and runs under Windows, Linux, Mac OS X and Android operating systems.

To build computer vision applications using OpenCV, developers can use SimpleCV (http://simplecv.org), an open-source framework that allows access to several computer vision libraries such as OpenCV without the need to understand bit depth, file format, color space or buffer management protocols. Since integrating Intel's Integrated Performance Primitives (IPPs) is automatically performed by OpenCV, over 3,000 proprietary optimized image processing and computer vision functions are automatically accelerated. These IPPs can be freely downloaded from Intel's developer site at: http://bit.ly/VSD-1704-9.

To date, a number of companies support development with the OpenCV library. These include Willow Garage (Palo Alto, CA, USA; www.willowgarage.com), Kithara (Berlin, Germany; www.kithara.de), National Instruments (Austin, TX, USA; www.ni.com) and ControlVision (Auckland, New Zealand; www.controlvision.co.nz).
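The wrapper pattern the sidebar describes for .NET (declaring a native function's signature so a managed language can call it) is the same trick Python performs with its standard ctypes module. A minimal sketch wrapping the standard C math library rather than OpenCV:

```python
import ctypes
import ctypes.util

# Locate and load the C math library; fall back to the current process's
# own symbols (which include libm on most Unix systems).
_name = ctypes.util.find_library("m")
libm = ctypes.CDLL(_name) if _name else ctypes.CDLL(None)

# Declaring the signature tells ctypes how to marshal arguments and the
# return value; a .NET wrapper does the same for OpenCV's native functions.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

The same mechanism scales up to whole libraries: each exported C function gets a declared signature, and the host language sees an ordinary callable.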




cuit board traceability, Vision Systems Design,
April 2014, http://bit.ly/VSD-1404-1).
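The flowchart idea behind such IDEs can be caricatured in plain Python: the "program" becomes an ordered list of processing steps, each consuming the previous step's output. The step names here are invented for illustration and are not Design Assistant or Vision Builder AI operators:

```python
def acquire():
    # Stand-in for a camera grab: a 1D "image" of pixel intensities.
    return [12, 200, 190, 15, 210]

def threshold(pixels, level=128):
    # Binarize: 1 where the pixel exceeds the level, else 0.
    return [1 if p > level else 0 for p in pixels]

def count_blobs(mask):
    # Count runs of consecutive 1s (connected components in 1D).
    return sum(1 for i, v in enumerate(mask) if v and (i == 0 or not mask[i - 1]))

# The "flowchart": an ordered chain of operations applied to the grab.
pipeline = [threshold, count_blobs]

result = acquire()
for step in pipeline:
    result = step(result)
print(result)  # 2  (two bright blobs: [200, 190] and [210])
```

A graphical IDE adds branching, per-recipe settings and an operator interface on top, but the underlying execution model is this same chained data flow.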
Some companies have even extended this graphical flowchart interface concept to allow developers to access the underlying power of field programmable gate arrays (FPGAs). VisualApplets from Silicon Software (Mannheim, Germany; https://silicon.software), for example, is a software programming environment that allows developers to perform FPGA programming using data flow models. In the company's latest version, functions for segmentation, classification and compression are provided, as well as a Fast Fourier transform (FFT) operator that allows complex band pass filters to be implemented more efficiently (Figure 2).

Figure 3: Matrox's Imaging Library (MIL) can now run under the RTX64 RTOS from IntervalZero, a fact that has been exploited by Kingstar in the development of PC-based software for industrial motion control and machine vision applications. Development for RTX64 is performed in C/C++ using Visual Studio and a subset of the Windows API. MIL for RTX64 supports image capture using GigE Vision and supported Matrox frame grabbers.

Like Silicon Software, NI's LabVIEW FPGA Module allows FPGA-efficient algorithms such as image filtering, Bayer decoding and color space conversion to be performed without using low-level languages such as VHDL. By doing so, many compute-intensive image processing functions can be off-loaded to the FPGA, thus speeding machine vision applications.

For those wishing to develop machine vision systems using a variety of open source and commercially available software, development environments are now available that allow image processing algorithms from a number of dif-





ferent companies to be graphically combined. Such environments allow developers to combine both open-source algorithms and commercially-available packages into a single environment. This allows machine vision software to be specifically tailored based on the most effective algorithms.

VisionServer 7.2, a machine vision framework from ControlVision (Auckland, New Zealand; www.controlvision.co.nz), for example, allows open source image-processing libraries and commercially-available packages such as VisionPro software from Cognex to be used together in a graphical IDE. Also supporting VisionPro, the VS-100P framework from CG Controls (Dublin, Ireland; www.cgcontrols.ie) uses Microsoft's .NET 4 framework and Windows Presentation Foundation (WPF) to enable developers to deploy single or multi-camera-based vision systems.

Real-time options

While most commercially-available machine vision software runs using operating systems such as Windows and Linux, the need to develop machine vision systems that can perform tasks within specific time periods has led to the support of real-time operating systems (RTOS). These RTOS then allow developers to determine the time needed to capture and process images and perform I/O within a system while leveraging the power of Windows to develop graphical user interfaces (GUI).

Today, a number of companies offer RTOS support for machine vision software packages. MIL, for example, can now run under the RTX64 RTOS from IntervalZero (Waltham, MA, USA; www.intervalzero.com), a fact that has been exploited by Kingstar (Waltham, MA, USA; www.kingstar.com) in the development of PC-based software for industrial motion control and machine vision applications. Built on the EtherCAT standard, machine vision tasks employ MIL running natively in IntervalZero's RTX64 RTOS. In operation, RTX64 runs on its own dedicated CPU cores alongside Windows to provide a deterministic environment. Using this architecture, developers partition MIL-based applications to run on RTX64 and Windows (Figure 3).

Third-party RTOS support for other machine vision software packages is also available. Running the HALCON machine vision package from MVTec, for example, can be accomplished using the RealTime RTOS Suite from Kithara (Berlin, Germany; www.kithara.de). Similar to other RTOS, the RealTime RTOS Suite uses a separate scheduler within the kernel of the RTOS to decide which image processing task should execute at any particular time. Like IntervalZero, this kernel operates in conjunction with Windows (see "Real-time operating systems target machine vision applications," Vision Systems Design, June 2015, http://bit.ly/VSD-1506-1).

For its part, Optel Vision (Quebec City, QC, Canada; www.optelvision.com) recently showed how it has developed a pharmaceutical tablet inspection machine using its own proprietary algorithms running under INtime from TenAsys (Beaverton, OR, USA; www.tenasys.com). According to Kim Hartman, VP of Sales and Marketing at TenAsys, INtime takes control of the response-time-critical I/O devices in the system, while allowing Windows to control the I/O that is not real-time critical (see "Packaging Line Vision System Gets Speed Boost," Control Engineering, May 2011; http://bit.ly/VSD-CON-ENG).

High-performance image processing has also become the focus of the embedded vision community. Recently, Dr. Ricardo Ribalda, Lead Firmware Engineer at Qtechnology (Valby, Denmark; www.qtec.com), showed how his company had created an application to perform high-speed scanning and validation of paper currency using processors from AMD (Sunnyvale, CA, USA; www.amd.com) and software tools from Mentor Graphics (Wilsonville, OR, USA; www.mentor.com); see "Smart camera checks currency for counterfeits," page 15, this issue. A demonstration of the system in operation can be found at http://bit.ly/VSD-QTec.

Companies mentioned

AMD, Sunnyvale, CA, USA; www.amd.com
CG Controls, Dublin, Ireland; www.cgcontrols.ie
Cognex, Natick, MA, USA; www.cognex.com
ControlVision, Auckland, New Zealand; www.controlvision.co.nz
Cyth Systems, San Diego, CA, USA; www.cyth.com
Datalogic, Bologna, Italy; www.datalogic.com
Euresys, Angleur, Belgium; www.euresys.com
IntervalZero, Waltham, MA, USA; www.intervalzero.com
Kingstar, Waltham, MA, USA; www.kingstar.com
Kithara, Berlin, Germany; www.kithara.de
Matrox, Dorval, QC, Canada; www.matrox.com
Mentor Graphics, Wilsonville, OR, USA; www.mentor.com
Microscan, Renton, WA, USA; www.microscan.com
MVTec Software, Munich, Germany; www.mvtec.com
National Instruments, Austin, TX, USA; www.ni.com
NeuroCheck, Stuttgart, Germany; www.neurocheck.com
Optel Vision, Quebec City, QC, Canada; www.optelvision.com
PR Sys Design, Delft, The Netherlands; www.perclass.com
Qtechnology, Valby, Denmark; www.qtec.com
RoboRealm, Aurora, CO, USA; www.roborealm.com
Silicon Software, Mannheim, Germany; https://silicon.software
Stemmer Imaging, Puchheim, Germany; www.stemmer-imaging.com
TenAsys, Beaverton, OR, USA; www.tenasys.com
ViDi Systems, Villaz-St-Pierre, Switzerland; www.vidi-systems.com
Willow Garage, Palo Alto, CA, USA; www.willowgarage.com

For more information about machine vision software companies and products, visit Vision Systems Design's Buyers Guide: buyersguide.vision-systems.com
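The determinism argument behind these RTOS options can be felt from the other side: on a general-purpose OS, even a nominally fixed-period loop drifts by an amount that varies with system load. A small sketch that measures that spread; an RTOS is designed to bound this worst case, while a desktop OS makes no such guarantee:

```python
import time

def measure_jitter(period_s=0.001, iterations=50):
    """Run a fixed-period loop and report the worst deviation from the period."""
    deviations = []
    last = time.perf_counter()
    for _ in range(iterations):
        time.sleep(period_s)          # the intended, fixed cycle time
        now = time.perf_counter()
        deviations.append(abs((now - last) - period_s))
        last = now
    return max(deviations)

worst = measure_jitter()
print(f"worst-case jitter: {worst * 1e6:.0f} us")  # varies with OS and load
```

For a vision task that must grab, process and signal an actuator every cycle, it is this worst-case figure (not the average) that determines whether deadlines can be met.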




Image classification
Today, the tools required to perform gaug-
ing functions, pattern matching, OCR, color
analysis and morphological operations are
common. Such tools allow developers to
configure multiple types of machine vision
systems to classify whether parts are accept-
able or must be rejected. In some cases, how-
ever, where an object's features are variable,
such tools are less useful. In fruit and vege-
table sorting applications, whether a particu-
lar product is good or bad can depend on a
number of different factors.
To determine whether such products
are acceptable then depends on present-
ing the system with many images, extract-
ing specific features and classifying them. A
number of different classifiers are available to perform this task, including neural nets, support vector machines (SVMs), Gaussian mixture models (GMM) and k-nearest neighbors (k-NN). Using its HALCON software package, for example, MVTec developers can access all these classifiers.

Figure 4: perClass from PR Sys Design offers multiple classifiers to allow developers to work interactively with data, choose the best features within the data for image classification, train the numerous types of classifiers and optimize their performance. In this image, a hyperspectral defect detection problem concerning French fries is shown. (Left): A visualization of training data extracted from hyperspectral images with four types of material (healthy potato flesh, potato skin (peel), rot and greening). (Right): A test hyperspectral image (one of 103 spectral wavelengths) with superimposed colors showing decisions of a classifier.

Numerous companies have used such deep learning techniques in commercial products. To classify or separate products based on acceptable or unacceptable defects, ViDi green software from ViDi Systems (Villaz-St-Pierre, Switzerland; www.vidi-systems.com) allows developers to assign and label images into different classes, after which untrained images can be classified. In a bottle sorting application demonstration, Datalogic (Bologna, Italy; www.datalogic.com) recently demonstrated how a k-d tree classifier could be used to identify and sort bottles after test bottles are first presented to the system and key points in the image automatically extracted (see "Image classification software goes on show at Automate," Vision Systems Design, May 2015; http://bit.ly/VSD-1505-1).

Developers using CVB Manto from Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.de) also do not need to select the relevant features in an image prior to classification. Using extracted texture, geometry and color features, captured data is presented to an SVM for classification. Similarly, the NeuralVision system from Cyth Systems (San Diego, CA, USA; www.cyth.com) is designed to allow machine builders with no previous image processing experience to add image classification to their systems (see "Machine learning leverages image classification techniques," Vision Systems Design, February 2015; http://bit.ly/VSD-1502-1).

By applying multiple image classifiers on extracted data, developers can determine whether the extracted features are good enough to determine the specific features of the product being analyzed. If not, then different types of features may need to be extracted. Because of this, some companies offer software packages that allow multiple classifiers to be developed and tested. One such toolkit, perClass from PR Sys Design (Delft, The Netherlands; www.perclass.com), offers multiple classifiers to allow developers to work interactively with data, choose the best features within the data for image classification, train the numerous types of classifiers and optimize their performance (Figure 4).

Many deep learning resources are now available on the Web. Two of the most interesting of these are Tombone's Computer Vision Blog (www.computervisionblog.com), a website dedicated to deep learning, computer vision and AI algorithms, and The Journal of Machine Learning Research (JMLR; www.jmlr.org), a forum for the publication of articles on machine learning.

However, while such deep learning approaches can be used to develop applications such as handwriting recognition, remote sensing and fruit sorting, they will always have a limited accuracy, making classifiers less applicable where, for example, parts need to be measured with high accuracy or aligned for assembly or processing, or for precision robotic-guidance applications.
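A k-NN decision of the kind these toolkits offer is small enough to sketch directly. Here is a toy majority-vote k-NN over two-dimensional feature vectors; the features and labels are invented for illustration and are not taken from perClass, HALCON or any other package:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of ((f1, f2), label); classify query by its k nearest samples."""
    nearest = sorted(train, key=lambda sample: math.dist(sample[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy feature space: (normalized hue, normalized blob area) per part.
train = [((0.10, 0.20), "good"),   ((0.15, 0.25), "good"),
         ((0.20, 0.10), "good"),
         ((0.80, 0.90), "reject"), ((0.85, 0.80), "reject")]

print(knn_predict(train, (0.12, 0.18)))  # good
print(knn_predict(train, (0.90, 0.85)))  # reject
```

Swapping the distance metric, the feature set or k is exactly the kind of interactive experimentation the classifier-comparison toolkits described above automate at scale.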



E-mail your product announcements, with photo if available, to vsdproducts@pennwell.com | Compiled by James Carroll

Vision + Automation

PRODUCTS
Camera Link cameras feature Sony IMX sensors
New EXO machine vision camera models feature a Camera Link interface and the following Sony IMX Pregius CMOS sensors: IMX174 (2.3 MPixel, 70 fps), IMX250 (5 MPixel, 32 fps), IMX252 (3.1 MPixel, 52 fps), IMX267 (8.8 MPixel, 18 fps), and IMX304 (12.3 MPixel, 13 fps). Operating in Camera Link based mode and combined with frame grabbers, the new x3 tap configuration provides a 50% increase over regular speed.
SVS-VISTEK, Seefeld, Germany, www.svs-vistek.com

Turnkey vision system is designed for PCB inspection
Designed for the verification, inspection, and measurement of components, connectors, solders, and pins on PCB assemblies, the IVS-PCBi automated optical inspection (AOI) machine utilizes artificial intelligence machine vision algorithms and optical components to enable automated inspection. IVS-PCBi systems feature HD color cameras, software and touch screen interface, real-time process monitoring, and statistical analysis and reporting.
Industrial Vision Systems, Oxfordshire, UK, www.industrialvision.co.uk

Infrared camera offers HD resolution
The Viento HD Lab camera features a 2.3 MPixel uncooled VOx microbolometer detector with a 12 µm pixel size and is sensitive in the LWIR wavelength. The camera delivers 16-bit, full dynamic range digital data or digital data with automatic gain control, as well as a 30 fps frame rate and thermal sensitivity of 30 mK NETD at F1.0. Additionally, the camera includes a configured frame grabber for full-frame rate, full bit depth acquisition and display.
Sierra-Olympic Technologies, Hood River, OR, USA; www.sierraolympic.com

System amplifies motion in video


The Iris M system includes a monochrome 2.3 MPixel USB3 camera, imaging software,
four lenses (choice of 6 mm, 12 mm, 25 mm, 50 mm, and 100 mm), a USB3 cable, and
an acquisition system that features an Intel i7 processor, 8 GB RAM, 256 GB or 500 GB
SSD, dual batteries, IP65 housing, and MIL-STD-810-G standard drop protection. Powered
by Motion Amplification software, Iris M measures deflection, displacement, movement,
and vibration not visible to the human eye. It also comes with a waterproof case, vibration
pads, and a professional grade tripod.
RDI Technologies, Knoxville, TN, USA, www.rdi-technologies.com





Tiny CMOS camera is designed for endoscopes
Designed for integration into medical endoscopes, the IK-CT2 camera is an ultra-small, chip-on-top video camera that measures just 1 mm (tip) x 0.7 mm x 0.7 mm. The camera features a 220 x 220 back-side-illuminated CMOS image sensor with integrated F4.5 / 120° FOV lens, cable, interface board, and camera control unit, which features 59.94 Hz output frequency, DVI-D, and USB 2.0 outputs. Additionally, the camera features Toshiba's color processing with its 12-channel color matrix adjustment, freeze frame, 5 user-programmable settings files, and can be controlled remotely through RS-232.
Toshiba Imaging, Irvine, CA, USA, www.toshibacameras.com

Cameras offer choice of infrared detectors
ImageIR series 4300 and 7300 infrared camera models are available with either MCT or InSb cooled infrared detectors. The ImageIR 4300 features a 320 x 256 MCT or InSb detector with a 30 µm pixel pitch and has a spectral range of 2.0 to 5.7 µm. The ImageIR 7300 features a 640 x 512 MCT or InSb detector with a 15 µm pixel pitch and has a spectral range of 2.0 to 5.7 µm. Both are available in USB or GigE and feature Stirling coolers, as well as manual focus, 14-bit ADC, and a frame rate of 75 fps at full resolution.
InfraTec, Dresden, Germany, www.infratec-infrared.com

Telecentric lenses target machine vision and metrology
Designed for machine vision systems and metrology applications, the 0.045X TitanTL Telecentric lens contains shims that provide adjustment for variation in camera sensor location, an adjustable iris and a three-set-screw lens mount for rotational alignment to the camera. The lenses are designed for 2/3" image sensors and feature <0.1° telecentricity. Typical applications, suggests the company, include automotive and electronic inspection, measurement, and gauging.
Edmund Optics, Barrington, NJ, USA, www.edmundoptics.com

Hyperspectral camera features dual sensors
The VNIR-SWIR camera combines visible-near-infrared (VNIR) and shortwave-infrared (SWIR) sensors to collect wideband hyperspectral data from 400 to 2500 nm. Sensitive in the 400 to 1000 nm range, the VNIR CMOS image sensor features a 7.4 µm pixel pitch and 330 fps maximum frame rate. The SWIR sensor is a Stirling-cooled MCT detector sensitive in the 900 to 2500 nm range, featuring a 15 µm pixel pitch and reaching >100 fps. Measuring 7" x 8.2" x 6.5" and weighing 6.25 lbs, the VNIR-SWIR camera is light enough for certain high-payload UAVs while providing spectral pixels for the VNIR and 267 for the SWIR.
Headwall Photonics, Fitchburg, MA, USA, www.headwallphotonics.com





Camera is designed for biomedical instrumentation
Designed for biomedical instrumentation, the QI400BSI scientific camera from Photometrics features a scientific-grade backside-illuminated CMOS image sensor that provides a quantum efficiency of >95%. The camera is equipped with a GPixel GSense 144 BSI CMOS image sensor, which is a 1.44 MPixel sensor with an 11 µm pixel size and features 16-bit (combined gain) bit depth, 2x2 binning (on FPGA), and USB 3.0 and PCIe interfaces. Additionally, the camera features a dynamic range of 94 dB and can control up to four light sources for multi-wavelength acquisitions.
Photometrics, Tucson, AZ, USA, www.photometrics.com

LED strobe controller features four I/O channels
The HPSC4 LED strobe controller, an upgraded version of its HPSC1 controller, features four individually controllable input and output channels instead of one, which enables fully asynchronous operation. All channels can be independently operated in terms of timing and with different parameters at frequencies of up to 50 kHz. The controller is available in three configuration and control interfaces: Ethernet, USB, or RS-232, and also supports internal trigger generation and applications in the low-voltage range, up to 50 V.
Smartek Vision, Cakovec, Croatia, www.smartek.vision

Latest ace industrial cameras feature Sony IMX sensors
Twelve new models of ace industrial cameras feature Sony IMX CMOS image sensors and GigE or USB3 interfaces. The acA2440-75um monochrome and acA2440-75uc color cameras feature a USB 3.0 interface and the 5 MPixel Sony IMX250 CMOS image sensor, which can reach a frame rate of 75 fps. The acA2040-120um and acA2040-120uc feature a USB 3.0 interface and the 3.2 MPixel IMX252 CMOS image sensor, which reaches speeds of 120 fps. Four models feature the 5 MPixel IMX264 sensor: the acA2440-20gm and acA2440-20gc, which feature a GigE interface and reach 20 fps, and the acA2440-35um and acA2440-35uc, which feature a USB 3.0 interface and can reach 35 fps. Lastly, four models feature the 3.2 MPixel Sony IMX265 sensor: the acA2040-35m and acA2040-35c, which have a GigE interface and reach 35 fps, and the acA2040-55um and acA2040-55uc, which have a USB 3.0 interface and reach 55 fps. Color models of the camera also include the PGI feature set, which consists of a unique combination of 5x5 debayering, color-anti-aliasing, denoising and improved sharpness.
Basler, Ahrensburg, Germany, www.baslerweb.com

High-speed infrared camera enables research and scientific imaging
The X6570sc series high-speed infrared cameras feature Camera Link Base, Camera Link Medium, and GigE interfaces. A 640 x 512




mercury cadmium telluride (MCT) infrared detector with a 15 µm pixel size achieves 234 fps at full resolution and 14,550 fps with windowing, and is sensitive enough to distinguish temperature differences down to 20 mK. Sensitive in the 7.7-9.3 µm range, the high-speed infrared camera allows engineers, researchers, and scientists to view thermal imagery live on a detachable touchscreen LED monitor or stream 14-bit data to a computer for viewing, analysis, or recording.
FLIR
Wilsonville, OR, USA
www.flir.com

Vision+Automation EVENTS

APRIL
April 9-13
SPIE Defense + Commercial Sensing 2017
Anaheim, CA, USA
http://spie.org/conferences-and-exhibitions/defense--commercial-sensing

April 27
UKIVA Machine Vision Conference & Exhibition
Milton Keynes, UK
http://www.machinevisionconference.co.uk/

MAY
May 1-3
Embedded Vision Summit
Santa Clara, CA, USA
http://www.embedded-vision.com/summit

May 8-11
XPONENTIAL
Dallas, TX, USA
www.xponential.org/xponential2017

May 16-19
14th Conference on Computer and Robot Vision
Edmonton, AB, Canada
http://www.computerrobotvision.org/

May 22-25
NIWeek 2017
Austin, TX, USA
http://niweek.ni.com/events/niweek-2017/event-summary

May 29-June 2
25th International Conference on Computer Graphics, Visualization and Computer Vision
Plzen, Czech Republic

JUNE
June 22-24
15th EMVA Business Conference
Prague, Czech Republic
http://www.emva.org/events/business-conference/15th-emva-business-conference/

June 26-29
LASER World of Photonics
Munich, Germany
www.world-of-photonics.com/index-2.html

JULY
July 12-13
7th International Conference of Pattern Recognition Systems
Madrid, Spain
http://www.icprs.org/

July 21-26
IEEE Conference on Computer Vision and Pattern Recognition 2017
Honolulu, HI, USA
http://cvpr2017.thecvf.com/

AUGUST
August 22-24
17th International Conference on Computer Analysis of Images and Patterns
Ystad, Sweden
http://www.cvl.isy.liu.se/CAIP2017.html

SEPTEMBER
September 4-7
28th British Machine Vision Conference
London, UK
https://bmvc2017.london/

September 9-12
Second International Conference on Computer Vision & Image Processing (CVIP-2017)
Roorkee, Uttar Pradesh, India
http://www.iitr.ac.in/cvip2017/

September 25-27
PACK EXPO
Las Vegas, NV, USA
www.packexpolasvegas.com

OCTOBER
October 12-13
Embedded VISION Europe 2017
Stuttgart, Germany
http://www.emva.org/events/more/embedded-vision-europe/

StreamPix Lite single-camera video recording software launched

NorPix, perhaps known best for its StreamPix multi-camera recording software, has announced the release of NorPix Lite single-camera recording software, which has a


reduced feature set at a reduced price. With the software, users can capture video with USB3 Vision or GigE Vision cameras from more than 100 camera lines. The software is compatible with Windows 7/8/10, is available in 32-bit and 64-bit versions, and features VCR-style controls for recording and playback. Additionally, the software features image time stamping with microsecond precision, playback support for sequence and AVI files, frame-by-frame sequence browsing, hardware triggering, Bayer conversion, and color balance.
NorPix
Montreal, QC, Canada
www.norpix.com

Embedded system operates in extended temperatures

The IVH-9200 embedded system features an extended operating temperature range of -40°F to 167°F and targets applications that include machine vision, in-vehicle computing, intelligent transportation systems, and traffic vision. Based on an Intel Xeon/Core i7 processor (Skylake-S) running with the Intel C236 chipset, the computer features 6 GigE LAN (with M12 PoE+ on the IVH-9200M, 4 PoE+ on the IVH-9200), four front-access SSD trays, eight USB 3.0 ports, M2DOM, 3 SIM, and 32 isolated DIO. With dual-channel DDR4 2133MHz ECC memory of up to 32GB capacity and Intel HD Graphics 530, the embedded system supports the DirectX 12, OpenGL 4.4, and OpenCL 2.0 APIs, with onboard DVI-I, DVI-D, and DisplayPort display interfaces for Ultra HD 4K resolution.
Vecow
New Taipei City, Taiwan
www.vecow.com

VisualApplets Embedder enables FPGA programming

With VisualApplets Embedder from Silicon Software, the FPGA image processing functionality from VisualApplets can now be embedded within imaging devices and cameras. VisualApplets Embedder makes VisualApplets compatible with any third-party FPGA-based hardware, making the FPGAs freely



programmable to realize individual image processing applications. Programming the FPGA directly in the device enables it to execute portions of the image preprocessing to reduce data load and system costs.
Silicon Software
Mannheim, Germany
www.silicon.software

Conoscope lens measures flat panel displays

Designed for use with ProMetric imaging photometers and colorimeters, the conoscope lens measures the color, luminance, and contrast of multiple angular distributions of light at once, allowing flat panel display (FPD) manufacturers to capture viewing angle performance measurements for displays in real time. The lens has a working distance of 3 mm, a 3 mm diameter sampling size, and a viewing angle of 60°. Additionally, the conoscope lens provides an angular resolution of 0.06° per CCD pixel. By mounting a conoscope lens directly to a ProMetric imaging photometer or colorimeter, Radiant Vision Systems now offers a solution for viewing angle performance measurement for a range of display types, including those based on LCD and OLED technologies, as well as backlights.
Radiant Vision Systems
Redmond, WA, USA
www.radiantvisionsystems.com

LED lighting controllers deliver 150W per channel

TR-HT LED lighting controllers feature two independent output channels rated at 150W, delivering 50A in pulsed mode and 5A in continuous mode. Suitable for driving high-power LED area and bar lights, the TR-HT lighting controllers feature SafePower and patented SafeSense technology. SafePower automatically adjusts the supply voltage to make the most efficient use of the power, while SafeSense creates a safe working environment for controllers that overdrive LED lights using constant current in order to increase light intensity for machine vision applications. The lighting controllers are GigE Vision compliant and operate with both Triniti and non-Triniti machine vision lights.
Gardasoft
Cambridge, UK
www.gardasoft.com

CCD camera targets X-ray imaging applications

Designed for measurements in the 10 eV to 30 keV range, the SOPHIA-XO:2048 camera features a backside-illuminated 2048 x 2048 CCD image sensor with no anti-reflection coating and a 15 µm pixel size; the camera provides >3 fps at 15 MHz readout speed at full resolution. Proprietary ArcTec ultra-deep-cooling technology minimizes dark noise by thermoelectrically cooling the CCD to less than -130°F (-90°C) using only air



assist. 64-bit LightField imaging and spectroscopy software provides user enhancements such as a built-in math engine to perform live data analysis and direct control from third-party packages.
Princeton Instruments
Trenton, NJ, USA
www.princetoninstruments.com

System on module features ARM Cortex-A9

The eSOMiMX6-micro is based on an NXP/Freescale i.MX6 Quad/Dual ARM Cortex-A9 CPU and runs at up to 800MHz. It features a 3.3V power supply, eMMC Flash memory ranging from 4GB to 64GB, and LPDDR2 with capacity as high as 1GB. Offering parallel, MIPI CSI, GigE, USB, PCIe Gen 2, and HDMI camera interface options, the computer on module supports full HD (1080p) video encoding and decoding at 30 fps along with 3D graphic acceleration, and built-in Wi-Fi and Bluetooth. Measuring just 54 mm x 20 mm, the eSOMiMX6-micro system-on-module is suitable for medical, handheld, and wearable applications.
e-con Systems
Tamil Nadu, India
www.e-consystems.com

LED lights target line scan applications

New to the COBRA line of LED line lights is a multi-wavelength configuration, the COBRA RGB, a modular product available in any length up to 5m, in 100mm increments. Designed for line scan imaging applications, the COBRA RGB LED line light delivers multispectral illumination in up to three different wavelengths, including RGB, ultraviolet, and infrared. The multi-wavelength COBRA lights feature integrated intensity control of each wavelength via Ethernet, a field-adjustable lens position, and a strobing function that provides up to five times the product's standard intensity.
ProPhotonix
Salem, NH, USA
www.prophotonix.com

Sales Offices

Main Office
61 Spit Brook Road, Suite 401, Nashua, NH 03060
(603) 891-0123; FAX: (603) 891-9328

Publisher
Alan Bergstein
(603) 891-9447; FAX: (603) 891-9328
E-mail: alanb@pennwell.com

Executive Assistant
Julia Campbell
(603) 891-9174; FAX: (603) 891-9328
E-mail: juliac@pennwell.com

Digital Media Sales Operations Manager
Tom Markley
(603) 891-9307; FAX: (603) 891-9328
E-mail: thomasm@pennwell.com

Ad Services Manager
Marcella Hanson
(918) 832-9352; FAX: (918) 831-9415
E-mail: marcella@pennwell.com

List Rental
Kelli Berry
(918) 831-9782; FAX: (918) 831-9758
E-mail: kellib@pennwell.com

North American Advertising & Sponsorship Sales
Judy Leger
(603) 891-9113; FAX: (603) 891-9328
E-mail: judyl@pennwell.com

Product Showcase Advertising & Reprint Sales
Judy Leger
(603) 891-9113; FAX: (603) 891-9328
E-mail: judyl@pennwell.com

International Sales Contacts

Germany, Austria, Northern Switzerland, Eastern Europe
Holger Gerisch
+49 (0) 8801-9153791; FAX: +49 (0) 8801-9153792
E-mail: holgerg@pennwell.com

Hong Kong, China
Adonis Mak
852-2-838-6298; FAX: 852-2-838-2766
E-mail: adonism@actintl.com.hk

Japan
Masaki Mori
81-3-3219-3561; FAX: 81-3-5645-1272
E-mail: masaki.mori@ex-press.jp

Israel
Dan Aronovic (Tel Aviv)
972-9-899-5813
E-mail: aronovic@actcom.co.il

Should you need assistance with creating your ad, please contact:
Marketing Solutions Vice President
Paul Andrews
(240) 595-2352
E-mail: pandrews@pennwell.com

Advertisers Index

Advertiser / Page no.
Allied Vision ........................ CV4
Azure Photonics Co. Ltd. ........ 19
CBC Americas, Computar ........ 7
Edmund Optics ...................... 13
ImageOps .............................. 34
Intercon 1 .............................. 33
JAI .......................................... 14
Jargy Co. Ltd. ........................ 25
Lumenera ................................ 9
Matrix Vision GmbH .............. 27
Matrox Imaging .................. CV2
Messe Muenchen GmbH .... CV3
Mightex ................................. 33
Navitar ................................... 31
Osela ...................................... 30
Sierra-Olympic Technologies .... 4
SVS Vistek GmbH ................... 11
Tattile SRL ............................... 2
Teledyne DALSA ..................... 22
VS Technology ....................... 28
Ximea GmbH ........................... 8

This ad index is published as a service. The publisher does not assume any liability for errors or omissions.



Faces in Vision

Donal Waide
Director of Sales, BitFlow Inc.
Year founded and location: 1993, Woburn, MA, USA

What are three interesting things about you that people might not know? I love to cook and entertain, and we often host people at our house. I am a strong advocate for education, and am an active member of the School Improvement Council at my son's school. I'm also a foodie, a passion that's helped by my worldly travels.

What are your top three favorite movies of all time? The Blues Brothers, The Big Lebowski, and Monty Python and the Holy Grail.

What are your top three favorite TV shows of all time? Big Bang Theory, The Daily Show (both versions), and most of Dick Wolf's creations.

What are three of your favorite bands or musical acts? The Stone Roses, U2, and Bloc Party.

What do you like to do in your free time? Cook or go to a restaurant, read, and attend comedy shows.

How did you get into your field of expertise? I was bartending at the time and got talking to a patron who was working in intelligent transportation systems. Turned out he was looking for an engineer, and I was looking for a full-time gig.

Why did you choose your profession? I am an engineer by education, so this was an easy one for me. I had been working in quality control for some time and wasn't happy.

What do you like about working in your field? Machine vision is no longer a description of an event in a company. In many cases, it's a division of the engineering team and more. Large companies have dedicated machine vision engineering teams, and it's now an option on many college curricula. What I love, though, is that we are at the cutting edge of technology, and as such, I get to see all the projects where our boards are in use.

What is your company's core focus and mission? BitFlow's focus is to continue to lead the industry in frame grabber customer and technical support. We have a great reputation for this already and receive customer compliments weekly. As the industry adopts new standards, we work two-fold on this: to influence the direction of the industry and to collaborate with others on making products that are both user friendly and affordable, yet highly innovative.

What are you most excited about at your company right now and why? The dynamics of our company and the fact that we are continually producing new products show that we are a leading trendsetter in the industry. BitFlow doesn't sit still, and my time here has shown me this.

In what markets or applications do you see the most growth? Several markets are starting to see vision as the best possible solution, including the Internet of Things, autonomous vehicles, security, medical, and military. Newer, larger sensors and faster interfaces such as CXP allow a lot more data to be gathered and analyzed in real time, leading to split-second decisions. This will be one path of our industry's future.

How have market changes affected product development at your company? Since the arrival of FireWire and GigE Vision, we have heard of the end of the frame grabber companies, yet here we are, 23 years after inception, still offering them. We always watch the industry to see what's next or new in standards, and how we can offer a comparable product. We are also aware that the frame grabber is more suited to certain areas of industrial applications, so we focus mainly on these.

What is one interesting way you've seen your product utilized recently? With a lot of NDAs signed, it is difficult to state some of the cooler aspects of deployment of our products, but having seen where we are in the life sciences market, and some of our military involvement, it's definitely easy to say we are in some very interesting projects around the planet.





