www.vision-systems.com
April 2017 | VOL. 22 | NO. 4

Vision and Automation Solutions for Engineers and Integrators Worldwide
A PennWell Publication

Currency security: counterfeit detection
Synchronize cameras: cut interval time lag
Image processing: software options
3D vision: pinion gear metrology

Cover Story: Mesure-Systems3D has developed a 3D scanning system to measure gears during production. (See page 10.)
Tattile designs, develops and manufactures high-tech products dedicated to machine vision, such as embedded systems, smart cameras and high-performance industrial cameras.

M100 Series: Multi-camera vision controllers, GigE and CameraLink versions, with open platform and compact, fanless case
S100/200 Series: High-performance smart cameras, Linux-based, with compact size and IP67-rated enclosure
S12MP Series: High-resolution 12-Megapixel smart camera with programmable FPGA, Linux-based
TAG-7 Series: GigE Vision and CameraLink line-scan cameras
TAG-5 Series: GigE Vision area-scan cameras
TAG-5 HYP Series: Hyperspectral mosaic GigE cameras
NAUTILUS with Halcon Embedded: Easy-to-use machine vision design tool that makes design of your machine vision applications faster and more efficient

www.tattile.com
online @ www.vision-systems.com

Current and future state of computer vision: Check out this presentation given by Jeff Bier, founder of the Embedded Vision Alliance, "Computer Vision 2.0: Where We Are and Where We're Going," from the May 2016 Embedded Vision Summit. http://bit.ly/VSD-1704-1

Boston Dynamics introduces new wheeled robot: Having already developed such robots as the Atlas two-legged humanoid robot and the Spot quadruped robot, Boston Dynamics recently unveiled video of Handle, a new robot that features both legs and wheels. http://bit.ly/VSD-1704-3

N. American robotics market reaches all-time high in 2016: Orders and shipments of robots in North America reached an all-time high in 2016, as 34,606 robots valued at approximately $1.9 billion were ordered, representing 10% growth over 2015. http://bit.ly/VSD-1704-2
(continued from page 6) "I knew right away I had to be part of the company."

Two months later, he joined NVIDIA as a summer intern, where he was given the chance to utilize deep learning by building three humanoid robots using Jetson TX1 embedded system developer kits, which are capable of running complex, deep neural networks. The Jetson TX1 features 1 Teraflop of performance from a 256-core NVIDIA Maxwell architecture GPU, a 64-bit ARM A57 CPU, 4K video encode (30 Hz) and decode (60 Hz), and a camera interface capable of handling up to six cameras, or 1400 MPixels/s.

Sankar's robots, which he named Cyclops, were programmed by showing them thousands of images from the internet. In the video, the Cyclops robot is shown looking at an apple using a Logitech webcam, and is able to recognize it and keep it within its range of vision.

"It sees it as an apple, and is confirming it constantly, checking back with the Jetson. The second that apple drops out of frame, and you put an orange [in frame] or replace it with some other kind of fruit, the robot continues searching; it's still looking for the apple. Deep learning is flashing thousands of images of apples, oranges, bananas, and other types of fruit, or whatever objects you want, and it creates a network, saying if it is red, go here; if it's a little round, go here."

In addition to his internship at NVIDIA, Sankar continues to build robots with his FIRST robotics team, the Arrowbotics. As the vice president of engineering, he has helped his team excel in competitions. Earlier this year, they built an obstacle-tackling robot that made it to the quarterfinals in the Silicon Valley Regional FIRST Robotics Competition. NVIDIA was recently named a gold-level supplier of the FIRST Robotics Competition, which is part of the company's effort to inspire more young students like Sankar to become science and technology innovators.

Editor's note: NVIDIA recently introduced the Jetson TX2. View more information here: http://bit.ly/VSD-TX2
…system comprised of visible, infrared, and lidar sensors that will be affixed outside the International Space Station (ISS) to test technologies that will enable autonomous rendezvous.

Through Raven, NASA says it will be one step closer to having a relative navigation capability that it can take off the shelf and use with minimum modifications for many missions, and for decades to come. Raven's technology demonstration objectives are three-fold:

- Provide an orbital testbed for satellite-servicing relative navigation algorithms and software.
- Demonstrate that multiple rendezvous paradigms can be accomplished with a similar hardware suite.
- Demonstrate an independent visiting vehicle monitoring capability.

Raven's visible camera, the VisCam, was originally manufactured for the Hubble Space Telescope Servicing Mission 4 on STS-109. The 28-Volt camera features an IBIS5 1300 B CMOS image sensor from Cypress Semiconductor (San Jose, CA, USA; www.cypress.com), a 1280 x 1024 focal plane array with a 6.7 µm pixel size that outputs a 1000 x 1000-pixel monochrome image over a dual data-strobed Low Voltage Differential Signaling (LVDS) physical interface.

The camera is paired with a commercially available, radiation-tolerant 8-24 mm zoom lens that has been ruggedized for spaceflight by NASA's Goddard Space Flight Center. The motorized zoom lens provides zoom and focus capabilities via two one-half-inch stepper motors. The adjustable iris on the commercial version of the lens has been replaced with a fixed f/4.0 aperture. Additionally, the VisCam provides a 45° x 45° FOV at the 8 mm lens setting and a 16° x 16° FOV at the 24 mm lens setting. The combination of the fixed aperture and the variable focal length and focus adjustments in the lens yields a depth of field from approximately four inches out to infinity, according to NASA. Furthermore, the VisCam assembly includes a stray light baffle coated with N-Science's ultra-black Deep Space Black coating, which protects the VisCam from unwanted optical artifacts that arise from the dynamic lighting conditions in Low Earth Orbit.

Raven's infrared camera, the IRCam, is a longwave infrared (LWIR) camera that is sensitive in the 8-14 µm wavelength range. The camera features a 640 x 480-pixel U6010 Vanadium Oxide microbolometer array from DRS Technologies (Arlington, VA, USA; www.drs.com) and has an internal shutter for on-orbit camera calibration and flat-field correction. Furthermore, the camera operates via a USB 2.0 interface and includes an athermalized, 50 mm f/1.0 lens that yields an 18° x 14° FOV.

Also included in Raven's sensor payload is a flash lidar, which collects
data by first illuminating the relative scene with a wide-beam 1572 nm laser pulse and then collecting the reflected light on a 256 x 256 focal plane array. By clocking the time between transmission and reception, the focal plane array can accurately measure the distance to the reflected surface, as well as the return intensity, at a rate of up to 30 Hz.

Five days after launch, Raven was removed from the unpressurized trunk of the SpaceX Dragon spacecraft by the Dextre robotic arm and attached to a payload platform outside the ISS. Here, Raven provides information for the development of a mature real-time relative navigation system.

While on the ISS, Raven's components will gather images and track incoming and outgoing visiting space station spacecraft. Raven's sensors will feed images to a processor, which will use special pose algorithms to gauge the relative distance between Raven and the spacecraft it is tracking. Based on these calculations, the processor will send commands that swivel Raven on its gimbal to keep the sensors trained on the vehicle while continuing to track it. During this, NASA operators on the ground will evaluate Raven's capabilities and make adjustments to increase performance.

"Two spacecraft autonomously rendezvousing is crucial for many future NASA missions, and Raven is maturing this never-before-attempted technology," said Ben Reed, deputy division director for the Satellite Servicing Projects Division (SSPD) at NASA's Goddard Space Flight Center in Greenbelt, Maryland, the office developing and managing this demonstration mission.

Over two years, Raven will test these technologies, which are expected to support future NASA missions for decades to come. View a technical paper on Raven at http://bit.ly/VSD-RAVEN.
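The flash lidar's ranging principle described above is plain time-of-flight: distance is half the round-trip travel time of the pulse multiplied by the speed of light. A minimal sketch of that conversion (the timing value below is illustrative, not a Raven measurement):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, in vacuum

def tof_range_m(round_trip_s: float) -> float:
    """Flash-lidar ranging: the focal plane array clocks the time between
    pulse transmission and reception, so distance = c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A return arriving ~66.7 ns after the 1572 nm pulse left corresponds to ~10 m.
distance = tof_range_m(66.7e-9)
```

Each of the 256 x 256 pixels performs this conversion independently, which is what turns a single pulse into a full range image at up to 30 Hz.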
3D IMAGING | SYSTEMS INTEGRATION | LIGHTING AND ILLUMINATION
The system, dubbed GearINSPECTION, was previewed at last year's VISION 2016 tradeshow and conference in Stuttgart, Germany.

"Basically, a gear, depending on the plant and the type of gear, is produced every 10-40 seconds. That's the cycle time," explains Dr. Marc Rosenbaum, President of MS3D. "And the tactile measurement systems currently in use have a measurement cycle time of 5-6 minutes. So there is a pretty big gap between 40 seconds and 6 minutes. In our system, handling of each gear takes 30 seconds or less."

The metrological core of the machine on exhibit was designed for 3D geometrical inspection of pinion gears that have a 3-mm minimum internal ring diameter, an 80-mm maximum external ring diameter and a height ranging from 1 to 25 mm. For this size part, the machine uses five LJ-V7060 3D laser displacement sensors from Keyence (Itasca, IL, USA; www.keyence.com) to scan the gear in less than two seconds, collecting millions of data points. "We are presently developing a system that will measure gears up to 40 mm in […]", which are small enough to fit into the machine.

For the five-sensor system, one laser displacement sensor scans the central bore and another four scan the flanks of the gear, with two sensors required for each side of the gear teeth. In production, pinions would typically be loaded onto a tripod in the center of the metrological core using a robot or manually. "It typically takes four seconds to load and unload the gear," explains Rosenbaum. The gear is then rotated as the sensors simultaneously scan the gear from various positions and angles. It takes about two seconds to stitch these images together into a single 3D point cloud. For a standard gear, 12 to 18 million points are generated, and from these, geometrical features are extracted (continued on page 12)

[Figure: In the five-sensor system, one profile scanner digitizes the central bore and another four simultaneously scan the flanks of the gear from various positions and angles as the gear rotates. The individual point clouds from each sensor are combined into a single point cloud, which is analyzed to extract the complete 3D measurements of the gear.]
(continued from page 11) …within 20-30 seconds.

One issue that MS3D encountered was that different companies have different standards for how they want the data presented. Rather than developing a unique graphical user interface (GUI), MS3D integrates Quindos, third-party software from Hexagon Metrology, as a front end.

"We tested many software products on the market, and this was the only one which was at the same time universal and able to quickly interface with our own software," said Rosenbaum. "The real bottleneck comes from the fact this software is geared for 2D input from tactile machines, not 3D data. However, this software allows us to present the data the way the customer wants it."

Geometrical features or surface aspect defects are directly extracted from the dense cloud of points using specific software developed by MS3D. The software compares the scan data to dimensions extracted from CAD models or 2D drawings to measure specific areas of interest and determine whether each is within tolerance or not. A final report is created showing the measurement analysis performed on each part.
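The comparison step described above, measured features checked against CAD-derived nominals, amounts to a per-feature tolerance test. A minimal sketch (the feature names and tolerance values are hypothetical, not MS3D's actual software):

```python
def check_tolerances(measured: dict, nominal: dict, tol: dict) -> dict:
    """Return, per feature, whether the measured value lies within
    +/- tol of the CAD-derived nominal dimension."""
    return {name: abs(measured[name] - nominal[name]) <= tol[name]
            for name in nominal}

# Hypothetical pinion features, all dimensions in mm
result = check_tolerances(
    measured={"bore_diameter": 3.02, "outer_diameter": 79.95},
    nominal={"bore_diameter": 3.00, "outer_diameter": 80.00},
    tol={"bore_diameter": 0.01, "outer_diameter": 0.10},
)
# bore_diameter fails (deviation 0.02 > 0.01); outer_diameter passes
```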
(continued from page 10) In order to minimize the risk of faulty connectors making it into the supply chain, one major US connector manufacturer gave engineers at G2 Technologies (Apex, NC, USA; www.g2tek.com) the nod to develop an automated connector inspection system. The system, built on the PXI platform from National Instruments (NI; Austin, TX, USA; www.ni.com/vision), combines a machine-vision-based non-contact 3D inspection system, a cleaning station, and electrical test and engraving stations.

"Inspectors can see bent and missing pins all day long, but it's easy to miss a faulty connector with a pin that's just too short," explains Craig Borsack, President of G2 Technologies. "This application is a huge improvement. By offering non-contact inspection and the ability to spot these types of flaws that were previously making it through the process, our non-contact inspection system could save this connector manufacturer and others millions of dollars or more each year."

The inspection system is installed after stitching, a process that accumulates contact pins and inserts them into molded connector housings. Stitched connectors enter on an input conveyor and pass under a Genie Nano M1920 GigE Vision camera from Teledyne DALSA (Waterloo, ON, Canada; www.teledynedalsa.com).

An image of the connector is acquired with illumination provided by a DL 194 diffuse dome light from Advanced Illumination (Rochester, VT, USA; www.advancedillumination.com). The image is then analyzed to verify that the correct part is present and that it is in the proper orientation to proceed through the inspection process. If the part is not correct or is improperly oriented, the system diverts it into a reject bin.

Parts deemed correct and properly aligned proceed to an orientation wheel that repositions the part board-side down for the next station, which is board-side inspection. At this station, a scanCONTROL 2650-25 laser line profiler from Micro-Epsilon (Ortenburg, Germany; www.micro-epsilon.com) scans the entire board side of the connector.

After board-side inspection, connector-side inspection takes place. Due to cycle time requirements, and the need to scan the part from both directions, inspection of the mating side of the connector is performed at two stations by two additional laser line profilers.

"Two scans are required due to shadowing effects created by the connector shell as the part is scanned from one side," explains Borsack. "In order to get a complete 3D point cloud, the part must be scanned from both directions, then the images are combined to mask out the shadows."

The system scans from both sides and creates a plane based on a pad (feature) on the bottom of the mate-side connector. This plane is used to measure true position and pin height of the contacts.

[Figure: Metal clips (1, 2, 3, and 4) on the connector create a reference plane from which the system measures the height of the contacts. Two posts (T and U) on the connector provide a datum for measuring the position information for each contact. The 3D view (right) shows a close-up of post U and contacts 5 and 6.]

Borsack said his company is proud of the 3D inspection system it has developed and hopes other connector manufacturers will consider exploring this for their companies. "It's a small price to pay when you look at the potential savings it offers. Not only can this inspection system help protect a manufacturer from being sued for millions of dollars in damages in a recall situation," Borsack said. "It could also absolutely save lives."
Vorum was in the process of developing a portable, handheld 3D structured-light scanner that would allow P&O professionals to take advantage of CAD technology in speeding up the process of making prosthetics and orthotics by eliminating the need for plaster casts.

The new product, called Spectra, was designed to be used in tandem with an LED projector and a single camera to capture a 3D image. Along with Vorum's seven-axis robotic carving system, it utilizes a touch screen and software customized for carving prosthetic and orthotic shapes. Spectra also creates electronic files to make up an easily accessible library of images and to document how a patient's shape and volume change over time.

The projector is situated at one end of the Spectra, with the camera at the other end. The projector projects an encoded pattern onto the body, which the camera records. The integrated software processes the resulting digital image, allowing the user to take 3D information from the projected pattern on the body.

While getting Spectra ready for market, Vorum shopped for a structured light projector that would fit on the scanner and properly illuminate the subject so that the integrated camera could work more efficiently. Vorum chose the SP30 series LED structured light pattern projector from Smart Vision Lights (SVL; Muskegon, MI, USA; www.smartvisionlights.com).

Using a 5-watt LED, the SP30 can be used to project any user-selectable pattern. With an internal current-based LED driver capable of strobe output, the new light source met all of Vorum's compact size requirements and provided an intense (continued from page 10) projected spot size.

Vorum had been researching CAD/CAM as a more precise and efficient way to design and make prosthetics and orthotics for more than a decade before it opened for business. Saunders gave the first public demonstration of a computer-aided design and manufacturing system for P&O professionals in 1983.

"We did not want a laser-based source because of the additional safety/risk overhead," said Vorum CTO Ed Grochowski. "It is also a potential objection to overcome for our typical customer applications, such as scanning patients."

"Smart Vision Lights' product offered the combination of requisite brightness, form factor, power requirements and size," added Grochowski. "These were the main technical drivers."

Along with the SP30's form factor and intensity level, Vorum also cited the SP30's durability and price in making it the right fit for the Spectra scanner.
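Structured-light scanners of the kind described recover depth by triangulating between the projector and the camera: once a point of the encoded pattern is matched in the image, its offset (disparity) maps to distance. A simplified rectified-geometry sketch; the numbers are illustrative, not Spectra's calibration:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Rectified projector-camera triangulation: Z = f * B / d, where f is
    the focal length in pixels, B the projector-camera baseline in meters,
    and d the matched pattern point's disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# f = 800 px, baseline 0.25 m, disparity 40 px -> 5 m (illustrative values)
z = depth_from_disparity(800.0, 0.25, 40.0)
```

Repeating this for every decoded pattern point yields the 3D surface from which the prosthetic or orthotic shape is carved.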
Banknote characteristics
Before writing any application software, however, developers must be aware of the characteristics of the inspected product, since these affect the choice of camera and light source technologies. Due to its modular nature, this is an easy task with Qtechnology cameras: the vision researcher can develop a single application and then experiment with different heads/sensors, dramatically reducing the time needed to explore the solution space.
To discern between real and counterfeit banknotes, it was determined that under infrared (IR) light, only the emerald number, the right side of the main image and the silvery stripe are visible on the front of the banknote. On the reverse side, only the numerical value and the horizontal serial number are visible (Figure 4).

Hence it was decided to illuminate the banknotes using a halogen lamp as a source of IR light and to capture the images using a Qtechnology monochromatic CMOS camera: a Qtec QT5122 body with a CMOSIS (Antwerp, Belgium; www.cmosis.com) monochrome IR-enhanced 2MPixel head fitted with an IR filter. Images captured by the camera of a real Euro note will have all the details on the left part missing, while in a counterfeit note the details will still be clear. This is because Euro notes use an ink that absorbs IR light, whereas counterfeit notes (possibly…

Figure 2a: Qtechnology cameras comprise a number of heads, into which a variety of CMOS, CCD, InGaAs, and microbolometer sensors are mounted, and a body (the QT5122 is displayed).

Figure 2b: The body contains two main computing units: an FPGA and AMD's R-series SoC with four Excavator x86 CPU cores, a Radeon graphics GPU, and an I/O controller, all on a single die.
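The authenticity cue described above, that on a genuine note the IR-absorbing regions show almost no detail while on a counterfeit they remain visible, reduces to a contrast measure over the relevant region of interest. A toy sketch; the threshold is arbitrary and a real system would calibrate it:

```python
def detail_score(roi_pixels) -> float:
    """Mean absolute deviation of grey levels: a crude contrast/detail
    measure for a region of interest."""
    mean = sum(roi_pixels) / len(roi_pixels)
    return sum(abs(p - mean) for p in roi_pixels) / len(roi_pixels)

def flag_counterfeit(roi_pixels, threshold: float = 10.0) -> bool:
    """Under IR light this ROI should be nearly featureless on a genuine
    Euro note; visible detail there suggests a counterfeit."""
    return detail_score(roi_pixels) > threshold

# A featureless ROI (genuine-looking) vs. a high-contrast ROI (suspect)
genuine = flag_counterfeit([20, 21, 20, 22, 21, 20])    # False
suspect = flag_counterfeit([10, 200, 15, 180, 20, 190])  # True
```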
…interface, the algorithm can be easily tuned and debugged, showing any intermediate results in a numerical and graphical way.

Code optimization
Once the correct vision algorithm has been developed, it is important to have an implementation that can run at the desired throughput. It is optimized to run efficiently on the embedded AMD R-series SoC, specifically to ensure that optimum use is made of the four Excavator x86 CPU cores on the device.

In this case, the implementation was done using Gstreamer (https://gstreamer.freedesktop.org), a multimedia framework that links together a variety of custom-built or off-the-shelf media processing elements, called plug-ins, into a software pipeline. Hence, the first step in the analysis process involved converting the image processing software into a GStreamer plug-in that was labeled Euronote.

Using Gstreamer, the off-the-shelf Video4Linux plug-in was used to fetch images into the pipeline from the 2MPixel camera and set the frame rate at 100fps. Then, another Gstreamer plug-in was used to send the images to the Euronote plug-in, after which they could be scaled to 1024 x 768 in an I420 format (http://bit.ly/VSD-I420), where the images are represented in YUV.

Once the application was written, Mentor's Sourcery Analyzer was employed to determine the speed of execution. To do so, the code was copied into the Mentor Embedded Sourcery Codebench, from where the binary code could be transferred to the target AMD processor, where it was executed. The Sourcery Analyzer then graphically highlights CPU statistics that show the time each core spent in a given state; scheduling statistics that highlight which software threads ran on each core; and the thread migration rate, or how frequently the system scheduler moved software threads between cores.

Throughput analysis
The Sourcery Analyzer enables a throughput analysis to be performed on the Euronote Gstreamer application. If the system is not meeting the image capture rate demanded by the application, it is possible to access the Euronote software program from the Sourcery Analyzer environment. This enables the size of the acquired image to be scaled down before it is applied to the Euronote application for processing, hence improving the number of frames per second that can be acquired.

However, if scaling the image size still does not effectively enhance the speed of the system, it is possible to enhance performance further by ensuring that all four ALUs on the embedded processor are utilized in full. By using the scaling tracer on the Mentor Embedded Sourcery Analyzer, the number of ALUs on the SoC in use can be determined. If the processing power is underused, it is possible to optimize the execution of the program so that multiple threads, or components of the Euronote application, run in parallel. To do so, the Gstreamer code can again be accessed from the analyzer and queues created in the software to optimize the pipeline. Once achieved, the resulting code can again be executed and profiled using the Sourcery Analyzer (Figure 8).

However, using such a technique is not without its limitations. Having parallelized the code to run on multiple CPUs, it may be that the performance of the system is still less than optimal. Instead of going through the pipeline, images are being copied into the queues, a process which itself introduces some overhead. This can be verified by using the memory allocator tracer in the Sourcery Analyzer and viewing the amount of memory allocated to the process.

This issue can be resolved easily, however, by returning to the Euronote code via the Sourcery Analyzer and ensuring that no image data is stored within the queues during the execution of the Euronote application. Once this has been achieved, the program can be saved, executed again, and the results of the execution viewed using the Sourcery Analyzer.

Optimized system
The results achieved by using the Mentor Embedded tools to optimize the performance of the system are summarized in Figure 9. Prior to optimization, the system was only capable of capturing images at a rate of 20fps, while the latency between image acquisition and processed output was 45ms. Having resized the image data, the image capture rate was increased to 50fps with 20ms latency. By introducing a queue to the process, it was possible to improve the system throughput further to 60fps, albeit with 400ms latency. Finally, by limiting the maximum size of the buffer, the system captured images at 100fps with 35ms latency.

Undoubtedly, the use of open-source operating system software and libraries of freely available image processing software, coupled with advanced system diagnostic tools and inexpensive modular camera hardware, will continue to provide developers with an inexpensive means by which they can develop machine vision applications. Indeed, the proliferation of such hardware and software may ultimately provide a significant challenge to many existing software-hardware vendors who rely on proprietary architectures.

Figure 6: The first stage of image processing involves locating the notes in each of the images. To do so, a Canny filter is applied to detect the edges of the notes. With the edge detection performed, a filling algorithm is applied to fill out all closed regions in the image, after which the contours of the notes can be detected to determine the specific location of the notes within the image.

Figure 7: Once the position of a note is determined, an ROI is located in the image where the features of a legitimate bank note would be invisible when the note is illuminated by the IR light source.

Figure 8: By using the scaling tracer on the Mentor Embedded Sourcery Analyzer, the number of ALUs on the SoC in use can be determined. If the processing power is underutilized, the execution of the program can be optimized so that multiple threads, or components of the Euronote application, can be scheduled to run in parallel.

Figure 9: Prior to optimization, the system was capable of capturing images at a rate of 20fps, while the latency between image acquisition and processed output was 45ms. Having resized the image data, image capture rate was increased to 50fps with 20ms latency. By introducing a queue to the process, system throughput was increased to 60fps with a 400ms latency. Finally, by limiting the maximum size of the buffer, the system captured images at 100fps with 35ms latency.

  Gstreamer change                                   Frames per sec   Latency (msec)
  Base                                               20               45
  Resized (videoscale ! video/x-raw,
    width=800, height=600)                           50               20
  Queue (queue)                                      60               400
  Size=1 (queue max-size-buffers=1)                  100              35

Companies mentioned:
Advanced Micro Devices, Sunnyvale, CA, USA; www.amd.com
Khronos Group, Beaverton, OR, USA; www.khronos.org
MathWorks, Natick, MA, USA; www.mathworks.com
Mentor Graphics, Wilsonville, OR, USA; www.mentor.com
MVTec Software, Munich, Germany; www.mvtec.com
Qtechnology, Valby, Denmark; www.qtec.com
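The final optimization, capping the Gstreamer queue at one buffer, is the classic bounded-buffer trade-off: a queue decouples capture from processing (raising throughput), while a small bound stops frames from piling up (keeping latency low). A toy producer-consumer sketch of the principle in Python, illustrative only and not the Gstreamer internals:

```python
import queue
import threading

def run_pipeline(n_frames: int, max_buffers: int) -> list:
    """Two-stage pipeline: a capture thread feeds a processing thread
    through a bounded queue, mirroring `queue max-size-buffers=N`."""
    buf = queue.Queue(maxsize=max_buffers)
    processed = []

    def capture():
        for frame in range(n_frames):
            buf.put(frame)   # blocks when full: backpressure on capture
        buf.put(None)        # end-of-stream marker

    def process():
        while (frame := buf.get()) is not None:
            processed.append(frame)

    threads = [threading.Thread(target=capture),
               threading.Thread(target=process)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed

# With max_buffers=1, at most one frame waits in the queue at any moment,
# so a processed frame is never more than one capture interval stale.
frames = run_pipeline(5, max_buffers=1)
```

A larger bound would raise throughput further at the cost of staleness, which is exactly the 60fps/400ms versus 100fps/35ms trade-off shown in Figure 9.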
Synchronisation mechanism
Traditionally, clocks built into cameras are reset when the power is switched on, and the time stamps asynchronously commence a count-up. For networked cameras, these time stamps gradually go out of synchronization as time passes, owing to discrepancies in the clocks' frequencies.

IEEE1588, the standard that stipulates the Precision Time Protocol (PTP), applies to equipment connected together by Ethernet. It synchronizes the time with extremely high levels of precision in cameras that are connected via an Ethernet cable to a Grand Master standard time clock.

Cameras that support IEEE1588 exchange synchronization messages with the Grand Master at predetermined cycles, with the internal counter recalibrated in accordance with the time stamp information at the time of sending and receiving (Figure 1). The IEEE1588 time stamp is an epoch time counter, with 00:00, January 01, 1970 set as [0], and it provides a resolution of 1ns (1GHz). Contrary to the mechanism of the PLL (phase-locked loop), free-running counters are updated every time a synchronization message is exchanged, and reducing the cycle interval with the Grand Master improves precision even further.

Figure 1: Using IEEE1588 it is possible to schedule regular synchronization, bringing cameras with faster or slower clock speeds back in line with the master. By shortening the synchronization interval, time lag is minimized, and therefore jitter too.

Satoshi Eikawa, Sony Image Sensing Solutions, Weybridge, UK (www.image-sensing-solutions.eu)

Trigger and GPO links
Prior to IEEE1588 it was impossible to guarantee truly accurate simultaneous capture in a multi-camera system. Previous generations of GigE Vision cameras, up to and including GigE Vision v1.2, contained action commands that were designed to operate simultaneously with a single command for multiple cameras. However, there were still some inconclusive elements, such as delays in network propagation and delays in firmware processing. To overcome these issues, a combination of IEEE1588 and the Action Command added to GigE Vision 2.0 enabled each individual camera to specify a time for executing actions.

The new Sony XCG-CG camera series has been equipped with a function that begins exposure in synchronization with this absolute time, including functions defined as IEEE1588 applications conforming to GigE Vision standards that are known as Scheduled Action Commands. The new cameras have a Scheduled Action Command that presets the time for starting synchronization for the software trigger and IEEE1588; for example, it is possible to set synchronization to once every second, with all cameras in a network synchronized to the Grand Master at this interval.

Reducing the interval between synchronizations enables them to be carried out while time discrepancies are still small, which helps to minimize jitter. With regard to free-running, the timing for starting camera exposure is aligned with the time that has been synchronized with the Grand Master. Although this is affected by the network environment, it is ideally possible for exposure synchronous control to be performed within 1µs (Figure 2).

It is generally necessary to prepare separate equipment that acts as the Grand Master: either special equipment available on the open market, or a PC with a Linux OS running the free Grand Master software. However, one area we're investigating is the mounting of an IEEE1588 master function on the XCG-CG series in a future upgrade. Doing this would turn the camera itself into the master and eradicate the need for a separate Grand Master Clock, simplifying the mechanism for synchronization between cameras, and between cameras and peripheral equipment.

Figure 3: Sony's XCG-CG camera series has been equipped with a function that begins exposure in synchronization with the absolute time, including functions defined as IEEE1588 applications conforming to GigE Vision standards that are known as Scheduled Action Commands.

Usage scenarios and advantages
…two different points will allow decisions as to whether vehicles are exceeding the speed limit or not to be made, and accurate times of images obtained from both points will simplify high-precision speed analysis.

Curtailment of tact time: One issue that still remains for system developers and manufacturers using such systems is the time between image capture and peripheral equipment (not supporting IEEE1588) taking action. As such, Sony is planning to link the new camera series to GPO (General Purpose Output) in the future, which will enable such peripheral equipment to be operated based on time synchronicity. For example, connecting the camera's GPO to robots during the period from the camera capturing images to the robot performing picking operations will enable the image capturing and robot operations to be synchronized.

Another example is bottle inspection, where devices require multiple cameras to be synchronized and inspection samples are conveyed at a predetermined speed; this approach will have an extremely high affinity with systems that use the high-precision time synchronization capabilities of IEEE1588.

IEEE1588 time stamps will also prove effec-
Simplification of Post-Event Analysis: Mount- tive for industrial robots and all types of inspec-
ing IEEE1588 onto GigE Vision cameras tion devices used in assembly work. Adding
enables the camera to synchronize its time set- absolute times to images showing inspection
DONT MISS
tings with the Grand Master Clock, and the processing and results will enable the items in
time stamps attached to image packets enable question to be easily identified.
absolute time to be displayed. Improved Image Processing System Reliabil- THE NEXT
One of the scenarios for use with ITS ity: One of the issues with installing vision sys-
ISSUE
(Intelligent Transportation Systems) is the tems for industrial assembly and inspection
more accurate detection of vehicles in vio- robots and devices is the layout of the wiring. SUBSCRIBE
lation of the speed limit, without the need Cables are subject to wearing and severing TODAY!
for radar. The time stamps obtained from when used for long-term operations, and this
causes a drop in system operability. Vision Systems Design
The chance of this increases in line is focused exclusively on
the information needs of
with the number of cameras in a
Camera 1 machine vision and image
(standard) systemthe number of cables must processing professionals.
also be increased if each one is to Offering a variety of
be synchronized. The XCG-CG information products to
help develop and manage
Series enables multiple cameras to vision systems today and
be synchronized with IEEE1588 optimize those systems of
Camera 2
and Scheduled Action Commands. the future.
Within 1s
They also support PoE (power over
Figure 2: Adjusting the exposure timing of the camera
Ethernet), which means that only http://www.vision-systems.com/
one cable is required for exposure subscriptions.html
in free run, based on the grand master. Note, while it can
be affected by the network environment, it is possible to synchronization, image output and
drive exposure synchronization to within 1s. the power supply.
HD 1920 x 1080
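The timing arithmetic behind two of the uses described above, aligning exposures to a fixed one-second schedule and estimating vehicle speed from two absolute time stamps, can be sketched in a few lines. This is a minimal illustration only; the function names are ours, not part of any camera SDK, and PTP timestamps are treated simply as nanoseconds since the 1970 epoch, as the article defines them.

```python
# Hypothetical helper math for IEEE1588 (PTP) timestamps: nanoseconds
# since 00:00, January 01, 1970 (= 0). Names are illustrative only.

ONE_SECOND_NS = 1_000_000_000

def next_scheduled_trigger_ns(now_ns: int, period_ns: int) -> int:
    """Next exposure start aligned to a fixed period (e.g. once per second),
    mirroring a Scheduled Action Command that presets an absolute start time."""
    return ((now_ns // period_ns) + 1) * period_ns

def speed_kmh(t1_ns: int, t2_ns: int, distance_m: float) -> float:
    """Average speed of a vehicle imaged at two camera positions a known
    distance apart, using the absolute time stamps attached to the images."""
    dt_s = abs(t2_ns - t1_ns) / 1e9
    return (distance_m / dt_s) * 3.6

now = 1_492_000_000_123_456_789              # an arbitrary PTP timestamp
start = next_scheduled_trigger_ns(now, ONE_SECOND_NS)
print(start - now)                           # nanoseconds until the next 1 s boundary

# Two cameras 50 m apart time-stamp the same vehicle 1.5 s apart
print(speed_kmh(0, 1_500_000_000, 50.0))     # 120 km/h average speed
```

Because every camera's clock is disciplined to the Grand Master, the same absolute boundary computed on each device yields simultaneous exposures without any trigger cabling.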
Software packages offer developers numerous options for machine vision design

Systems integrators can take a number of different approaches when configuring their machine vision systems.
When building machine vision systems, developers can choose from a number of commercially available software packages from well-known companies. In choosing such software, however, it is important to understand the functionality each provides, the hardware supported, and how easily the software can be configured to solve a particular machine vision task.

In the past, the choice of software was limited, with many companies merely offering callable libraries that performed relatively simple image processing operations. These included point processing operations such as image subtraction, neighborhood operations such as image filtering, and global operations such as Fourier analysis.

While useful, developers were tasked with understanding each of these functions and how they could provide a solution to a machine vision task such as part measurement. Often, the need to build software frameworks to support such libraries made developing such programs laborious and time consuming.

Rapid development

While such libraries are still available from a number of open sources (see "Open source code provides alternative options," p. 24), vision software manufacturers realized that systems integrators needed to more rapidly develop applications to solve specific machine vision problems without the need to understand the low-level complexity of image processing code. Because of this, many vendors now offer higher-level tools within their software packages that provide functionality such as image measurement, feature extraction, color analysis, 2D bar code recognition and image compression, all within an interactive environment.

Figure 1: Vision Builder AI from NI allows developers to configure, benchmark, and deploy vision systems using functions such as pattern matching, barcode reading and image classification within an interactive menu-driven development environment.

Examples of these high-level tools include Matrox Imaging Library (MIL) from Matrox Imaging (Dorval, QC, Canada; www.matrox.com), Open eVision from Euresys (Angleur, Belgium; www.euresys.com), HALCON from MVTec Software GmbH (Munich, Germany; www.mvtec.com), VisionPro from Cognex (Natick, MA, USA; www.cognex.com), Vision Builder from National Instruments (NI; Austin, TX, USA; www.ni.com), Common Vision Blox (CVB) from Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.com) and NeuroCheck from NeuroCheck (Stuttgart, Germany; www.neurocheck.com). Such tools allow many commonly used machine vision functions to be configured without the need for extensive programming.
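The three classic categories of library operation mentioned above (point, neighborhood and global) can each be sketched in a few lines of plain Python. These are deliberately naive, stdlib-only illustrations of what the early callable libraries did; commercial packages such as MIL or HALCON provide heavily optimized equivalents.

```python
# Minimal sketches of the three classic image processing operation types:
# point (per-pixel), neighborhood (local window), global (whole image/signal).
import cmath

def subtract(a, b):
    """Point operation: per-pixel image subtraction, clamped at zero."""
    return [[max(p - q, 0) for p, q in zip(ra, rb)] for ra, rb in zip(a, b)]

def mean_filter3(img):
    """Neighborhood operation: 3x3 mean filter applied to interior pixels."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) // 9
    return out

def dft(signal):
    """Global operation: naive 1-D discrete Fourier transform of an image row."""
    n = len(signal)
    return [sum(s * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, s in enumerate(signal)) for k in range(n)]
```

A difference image from `subtract`, for example, highlights what changed between two frames, which is the building block of simple presence/absence inspection.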
In this way, developers can enable tools from different companies to be graphically combined. Such environments allow both open-source algorithms and commercially available packages to be integrated into a single environment, so that machine vision software can be specifically tailored around the most effective algorithms.

VisionServer 7.2, a machine vision framework from ControlVision (Auckland, New Zealand; www.controlvision.co.nz), for example, allows open source image-processing libraries and commercially available packages such as VisionPro software from Cognex to be used together in a graphical IDE. Also supporting VisionPro, the VS-100P framework from CG Controls (Dublin, Ireland; www.cgcontrols.ie) uses Microsoft's .NET 4 framework and Windows Presentation Foundation (WPF) to enable developers to deploy single- or multi-camera-based vision systems.

Real-time options

While most commercially available machine vision software runs on operating systems such as Windows and Linux, the need to develop machine vision systems that can perform tasks within specific time periods has led to support for real-time operating systems (RTOS). An RTOS allows developers to determine the time needed to capture and process images and perform I/O within a system, while leveraging the power of Windows to develop graphical user interfaces (GUIs).

Today, a number of companies offer RTOS support for machine vision software packages. MIL, for example, can now run under the RTX64 RTOS from IntervalZero (Waltham, MA, USA; www.intervalzero.com), a fact that has been exploited by Kingstar (Waltham, MA, USA; www.kingstar.com) in the development of PC-based software for industrial motion control and machine vision applications. Built on the EtherCAT standard, machine vision tasks employ MIL running natively in IntervalZero's RTX64 RTOS. In operation, RTX64 runs on its own dedicated CPU cores alongside Windows to provide a deterministic environment. Using this architecture, developers partition MIL-based applications to run on RTX64 and Windows (Figure 3).

Third-party RTOS support for other machine vision software packages is also available. Running the HALCON machine vision package from MVTec, for example, can be accomplished using the RealTime RTOS Suite from Kithara (Berlin, Germany; www.kithara.de). Like other RTOS, the RealTime RTOS Suite uses a separate scheduler within the kernel of the RTOS to decide which image processing task should execute at any particular time. Like IntervalZero's product, this kernel operates in conjunction with Windows (see "Real-time operating systems target machine vision applications," Vision Systems Design, June 2015; http://bit.ly/VSD-1506-1).

For its part, Optel Vision (Quebec City, QC, Canada; www.optelvision.com) recently showed how it has developed a pharmaceutical tablet inspection machine using its own proprietary algorithms running under INtime from TenAsys (Beaverton, OR, USA; www.tenasys.com). According to Kim Hartman, VP of Sales and Marketing at TenAsys, INtime takes control of the response-time-critical I/O devices in the system, while allowing Windows to control the I/O that is not real-time critical (see "Packaging Line Vision System Gets Speed Boost," Control Engineering, May 2011; http://bit.ly/VSD-CON-ENG).

High-performance image processing has also become a focus of the embedded vision community. Recently, Dr. Ricardo Ribalda, Lead Firmware Engineer at Qtechnology (Valby, Denmark; www.qtec.com), showed how his company had created an application to perform high-speed scanning and validation of paper currency using processors from AMD (Sunnyvale, CA, USA; www.amd.com) and software tools from Mentor Graphics (Wilsonville, OR, USA; www.mentor.com); see "Smart camera checks currency for counterfeits," page 15, this issue. A demonstration of the system in operation can be found at http://bit.ly/VSD-QTec.

Companies mentioned

AMD, Sunnyvale, CA, USA; www.amd.com
CG Controls, Dublin, Ireland; www.cgcontrols.ie
Cognex, Natick, MA, USA; www.cognex.com
ControlVision, Auckland, New Zealand; www.controlvision.co.nz
Cyth Systems, San Diego, CA, USA; www.cyth.com
Datalogic, Bologna, Italy; www.datalogic.com
Euresys, Angleur, Belgium; www.euresys.com
IntervalZero, Waltham, MA, USA; www.intervalzero.com
Kingstar, Waltham, MA, USA; www.kingstar.com
Kithara, Berlin, Germany; www.kithara.de
Matrox, Dorval, QC, Canada; www.matrox.com
Mentor Graphics, Wilsonville, OR, USA; www.mentor.com
Microscan, Renton, WA, USA; www.microscan.com
MVTec Software, Munich, Germany; www.mvtec.com
National Instruments, Austin, TX, USA; www.ni.com
NeuroCheck, Stuttgart, Germany; www.neurocheck.com
Optel Vision, Quebec City, QC, Canada; www.optelvision.com
PR Sys Design, Delft, The Netherlands; www.perclass.com
Qtechnology, Valby, Denmark; www.qtec.com
RoboRealm, Aurora, CO, USA; www.roborealm.com
Silicon Software, Mannheim, Germany; https://silicon.software
Stemmer Imaging, Puchheim, Germany; www.stemmer-imaging.com
TenAsys, Beaverton, OR, USA; www.tenasys.com
ViDi Systems, Villaz-St-Pierre, Switzerland; www.vidi-systems.com
Willow Garage, Palo Alto, CA, USA; www.willowgarage.com

For more information about machine vision software companies and products, visit Vision Systems Design's Buyers Guide: buyersguide.vision-systems.com
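The determinism question behind the RTOS products above can be made concrete with a small timing sketch: measure whether each capture-and-process cycle fits a fixed budget. This runs on an ordinary OS and is only illustrative; the point of RTX64 or INtime is precisely that they can guarantee under load what this script can merely observe. All names and the 10 ms budget are our own assumptions.

```python
# Illustrative only (not an RTOS): check whether each processing cycle
# fits a fixed time budget, the guarantee an RTOS is meant to enforce.
import time

FRAME_BUDGET_S = 0.010            # e.g. a 100 fps line must finish in 10 ms

def process_frame(frame):
    return sum(frame)             # stand-in for real image processing work

def run_cycles(frames, budget_s=FRAME_BUDGET_S):
    """Return per-frame elapsed times and whether every cycle met its deadline."""
    elapsed = []
    for frame in frames:
        t0 = time.perf_counter()
        process_frame(frame)
        elapsed.append(time.perf_counter() - t0)
    return elapsed, all(dt <= budget_s for dt in elapsed)

times, all_met = run_cycles([list(range(1000))] * 5)
print(all_met)    # often True on a lightly loaded PC, but a general-purpose
                  # OS cannot guarantee it; a deterministic RTOS core can
```

On a partitioned system such as RTX64 alongside Windows, the loop above would live on the dedicated real-time cores while the GUI and non-critical I/O stay on the Windows side.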
…developers can access all these classifiers.

Numerous companies have used such deep learning techniques in commercial products. To classify or separate products based on acceptable or unacceptable defects, ViDi green software from ViDi Systems (Villaz-St-Pierre, Switzerland; www.vidi-systems.com) allows developers to assign and label images into different classes, after which untrained images can be classified. In a bottle sorting application demonstration, Datalogic (Bologna, Italy; www.datalogic.com) recently demonstrated how a k-d tree classifier could be used to identify and sort bottles after test bottles are first presented to the system and key points in the image automatically extracted (see "Image classification software goes on show at Automate," Vision Systems Design, May 2015; http://bit.ly/VSD-1505-1).

Developers using CVB Manto from Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.de) also do not need to select the relevant features in an image prior to classification. Using extracted texture, geometry and color features, captured data is presented to an SVM for classification. Similarly, the NeuralVision system from Cyth Systems (San Diego, CA, USA; www.cyth.com) is designed to allow machine builders with no previous image processing experience to add image classification to their systems (see "Machine learning leverages image classification techniques," Vision Systems Design, February 2015; http://bit.ly/VSD-1502-1).

By applying multiple image classifiers on extracted data, developers can determine whether the extracted features are good enough to determine the specific features of the product being analyzed. If not, then different types of features may need to be extracted. Because of this, some companies offer software packages that allow multiple classifiers to be developed and tested. One such toolkit, perClass from PR Sys Design (Delft, The Netherlands; www.perclass.com), offers multiple classifiers that allow developers to work interactively with data, choose the best features within the data for image classification, train the numerous types of classifiers and optimize their performance (Figure 4).

Many deep learning resources are now available on the Web. Two of the most interesting of these are Tombone's Computer Vision Blog (www.computervisionblog.com), a website dedicated to deep learning, computer vision and AI algorithms, and The Journal of Machine Learning Research (JMLR; www.jmlr.org), a forum for the publication of articles on machine learning.

However, while such deep learning approaches can be used to develop applications such as handwriting recognition, remote sensing and fruit sorting, they will always have a limited accuracy, making classifiers less applicable where, for example, parts need to be measured with high accuracy or aligned for assembly or processing, or for precision robotic-guidance applications.
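The train-then-classify workflow described above can be reduced to a toy example: label a few training samples by feature vector, then assign a new sample the label of its nearest neighbor. The (geometry, color) features and labels below are invented for illustration; products such as perClass or CVB Manto use far richer features and classifiers (SVMs, k-d trees), but the basic loop is the same.

```python
# Toy feature-space classification: nearest neighbor over labeled vectors.
# Feature values and labels are hypothetical, for a bottle-sorting task.
import math

def nearest_neighbor(train, sample):
    """train: list of (feature_vector, label); return the label of the
    training vector closest to sample in Euclidean distance."""
    return min(train, key=lambda fl: math.dist(fl[0], sample))[1]

training = [
    ((10.0, 0.2), "clear_bottle"),   # (height_cm, green_channel_ratio)
    ((10.5, 0.8), "green_bottle"),
    ((4.0, 0.5), "can"),
]

print(nearest_neighbor(training, (9.8, 0.25)))   # clear_bottle
```

A brute-force scan like this is O(n) per query; a k-d tree, as in the Datalogic demonstration, organizes the same training vectors for much faster lookups when the training set is large.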
Vision + Automation
PRODUCTS
Camera Link cameras feature Sony IMX sensors
New EXO machine vision camera models feature a Camera Link interface and the following Sony IMX Pregius CMOS sensors: IMX174 (2.3 MPixel, 70 fps), IMX250 (5 MPixel, 32 fps), IMX252 (3.1 MPixel, 52 fps), IMX267 (8.8 MPixel, 18 fps), and IMX304 (12.3 MPixel, 13 fps). Operating in Camera Link base mode and combined with frame grabbers, the new x3 tap configuration provides a 50% increase over regular speed.
SVS-VISTEK, Seefeld, Germany; www.svs-vistek.com

Turnkey vision system is designed for PCB inspection
Designed for the verification, inspection, and measurement of components, connectors, solders, and pins on PCB assemblies, the IVS-PCBi automated optical inspection (AOI) machine utilizes artificial intelligence machine vision algorithms and optical components to enable automated inspection. IVS-PCBi systems feature HD color cameras, software and touch screen interface, real-time process monitoring, and statistical analysis and reporting.
Industrial Vision Systems, Oxfordshire, UK; www.industrialvision.co.uk

Infrared camera offers HD resolution
The Viento HD Lab camera features a 2.3 MPixel uncooled VOx microbolometer detector with a 12 µm pixel size and is sensitive in the LWIR waveband. The camera delivers 16-bit, full dynamic range digital data or digital data with automatic gain control, as well as a 30 fps frame rate and thermal sensitivity of 30 mK NETD at F1.0. Additionally, the camera includes a configured frame grabber for full-frame rate, full bit depth acquisition and display.
Sierra-Olympic Technologies, Hood River, OR, USA; www.sierraolympic.com

High-speed infrared camera enables research and scientific imaging
The X6570sc series high-speed infrared cameras feature Camera Link Base, Camera Link Medium, and GigE interfaces. A 640 x 512 mercury cadmium telluride (MCT) infrared detector with a 15 µm pixel size achieves 234 fps at full resolution and 14,550 fps with windowing, and is sensitive enough to distinguish temperature differences down to 20 mK. Sensitive in the 7.7 to 9.3 µm range, the high-speed infrared camera allows engineers, researchers, and scientists to view thermal imagery live on a detachable touchscreen LED monitor or stream 14-bit data to a computer for viewing, analysis, or recording.
FLIR, Wilsonville, OR, USA; www.flir.com

StreamPix Lite single-camera video recording software launched
NorPix, perhaps best known for its StreamPix multi-camera recording software, has announced the release of NorPix Lite single-camera recording software, which has a reduced feature set at a reduced price. With the software, users can capture video with USB3 Vision or GigE Vision cameras from more than 100 camera lines. The software is compatible with Windows 7/8/10, is available in 32-bit and 64-bit versions, and features VCR-styled controls for recording and playback. Additionally, the software features image time stamping with microsecond precision, playback support for sequence and AVI files, frame-by-frame sequence browsing, hardware triggering, Bayer conversion, and color balance.
NorPix, Montreal, QC, Canada; www.norpix.com

Embedded system operates in extended temperatures
The IVH-9200 embedded system features an extended operating temperature range of -40°F to 167°F and targets applications that include machine vision, in-vehicle computing, intelligent transportation systems, and traffic vision. Based on an Intel Xeon Core i7 processor (Skylake-S) running with the Intel C236 chipset, the computer features 6 GigE LAN with M12 PoE+ on the IVH-9200M (4 PoE+ on the IVH-9200), four front-access SSD trays, eight USB 3.0 ports, M2DOM, 3 SIM, and 32 isolated DIO. With dual-channel DDR4 2133 MHz ECC memory of up to 32 GB capacity and Intel HD Graphics 530, the embedded system supports the DirectX 12, OpenGL 4.4 and OpenCL 2.0 APIs, with onboard DVI-I, DVI-D and DisplayPort display interfaces for Ultra HD 4K resolution.
Vecow, New Taipei City, Taiwan; www.vecow.com

VisualApplets Embedder enables FPGA programming
With VisualApplets Embedder from Silicon Software, the FPGA image processing functionality from VisualApplets can now be embedded within imaging devices and cameras. VisualApplets Embedder makes VisualApplets compatible with any third-party FPGA-based hardware, making the FPGAs freely programmable to realize individual image processing applications. Programming the FPGA directly in the device enables it to execute portions of the image preprocessing to reduce data load and system costs.
Silicon Software, Mannheim, Germany; www.silicon.software

Conoscope lens measures flat panel displays
Designed for use with ProMetric imaging photometers and colorimeters, the conoscope lens measures the color, luminance, and contrast of multiple angular distributions of light at once, allowing flat panel display (FPD) manufacturers to capture viewing angle performance measurements for displays in real time. The lens has a working distance of 3 mm, a 3 mm diameter sampling size, a viewing angle of 60°, and an angular resolution of 0.06° per CCD pixel. By mounting a conoscope lens directly to a ProMetric imaging photometer or colorimeter, Radiant Vision Systems now offers a solution for viewing angle performance measurement for a range of display types, including those based on LCD and OLED technologies, as well as backlights.
Radiant Vision Systems, Redmond, WA, USA; www.radiantvisionsystems.com

LED lighting controllers deliver 150W per channel
TR-HT LED lighting controllers feature two independent output channels rated at 150W, delivering 50A in pulsed mode and 5A in continuous mode. Suitable for driving high-power LED area and bar lights, the TR-HT lighting controllers feature SafePower and patented SafeSense technology. SafePower automatically adjusts the supply voltage to make the most efficient use of the power, while SafeSense creates a safe working environment for controllers that overdrive LED lights using constant current in order to increase light intensity for machine vision applications. The lighting controllers are GigE Vision compliant and operate with both Triniti and non-Triniti machine vision lights. The channels can be independently operated in terms of timing and with different parameters at frequencies of up to 50 kHz, and the controller is available with three configuration and control interfaces: Ethernet, USB, or RS-232, with support for internal triggering.
Gardasoft, Cambridge, UK; www.gardasoft.com

CCD camera targets X-ray imaging applications
Designed for measurements in the 10 eV to 30 keV range, the SOPHIA-XO:2048 camera features a backside-illuminated 2048 x 2048 CCD image sensor with no anti-reflection coating and a 15 µm pixel size; the camera provides >3 fps with 15 MHz readout speed at full resolution. Proprietary ArcTec ultra-deep-cooling technology minimizes dark noise by thermoelectrically cooling the CCD to less than -130°F (-90°C) using only air.

Vision + Automation EVENTS

APRIL
April 9-13: SPIE Defense + Commercial Sensing 2017, Anaheim, CA, USA; http://spie.org/conferences-and-exhibitions/defense--commercial-sensing
April 27: UKIVA Machine Vision Conference & Exhibition, Milton Keynes, UK; http://www.machinevisionconference.co.uk/

MAY
May 1-3: Embedded Vision Summit, Santa Clara, CA, USA; http://www.embedded-vision.com/summit
May 8-11: XPONENTIAL, Dallas, TX, USA; www.xponential.org/xponential2017
May 16-19: 14th Conference on Computer and Robot Vision, Edmonton, AB, Canada; http://www.computerrobotvision.org/
May 22-25: NIWeek 2017, Austin, TX, USA; http://niweek.ni.com/events/niweek-2017/event-summary
May 29-June 2: 25th International Conference on Computer Graphics, Visualization and Computer Vision, Plzen, Czech Republic

JUNE
June 22-24: 15th EMVA Business Conference, Prague, Czech Republic; http://www.emva.org/events/business-conference/15th-emva-business-conference/
June 26-29: LASER World of Photonics, Munich, Germany; www.world-of-photonics.com/index-2.html

JULY
July 12-13: 7th International Conference of Pattern Recognition Systems, Madrid, Spain; http://www.icprs.org/
July 21-26: IEEE Conference on Computer Vision and Pattern Recognition 2017, Honolulu, HI, USA; http://cvpr2017.thecvf.com/

AUGUST
August 22-24: 17th International Conference on Computer Analysis of Images and Patterns, Ystad, Sweden; http://www.cvl.isy.liu.se/CAIP2017.html

SEPTEMBER
September 4-7: 28th British Machine Vision Conference, London, UK; https://bmvc2017.london/
September 9-12: Second International Conference on Computer Vision & Image Processing (CVIP-2017), Roorkee, Uttarakhand, India; http://www.iitr.ac.in/cvip2017/
September 25-27: PACK EXPO, Las Vegas, NV, USA; www.packexpolasvegas.com

OCTOBER
October 12-13: Embedded VISION Europe 2017, Stuttgart, Germany; http://www.emva.org/events/more/embedded-vision-europe/
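As a quick sanity check on interface headroom for cameras like the EXO models listed above, sensor resolution and frame rate translate directly into a sustained data rate. The sketch below uses the resolution and frame-rate figures quoted in the article and assumes 8-bit monochrome pixels for simplicity; the helper name is ours, not a vendor API.

```python
# Back-of-envelope sustained data rate for the quoted sensor/frame-rate
# pairs, assuming 1 byte per pixel (8-bit mono).

def data_rate_mb_s(megapixels: float, fps: float, bytes_per_pixel: float = 1.0) -> float:
    """Approximate sustained data rate in MB/s."""
    return megapixels * 1_000_000 * fps * bytes_per_pixel / 1_000_000

for name, mp, fps in [("IMX174", 2.3, 70), ("IMX250", 5.0, 32),
                      ("IMX252", 3.1, 52), ("IMX267", 8.8, 18),
                      ("IMX304", 12.3, 13)]:
    print(f"{name}: ~{data_rate_mb_s(mp, fps):.0f} MB/s")
```

All five combinations land near 160 MB/s, which is consistent with the cameras sharing a common Camera Link base-mode budget before the x3 tap configuration raises throughput.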
Donal Waide
Director of Sales, BitFlow Inc.
Year founded and location: 1993, Woburn, MA, USA

What are three interesting things about you that people might not know? I love to cook and entertain, and we often host people at our house. I am a strong advocate for education, and am an active member of the School Improvement Council at my son's school. I'm also a foodie, a passion that's helped by my worldly travels.

What are your top three favorite movies of all time? The Blues Brothers, The Big Lebowski, and Monty Python and the Holy Grail.

What are your top three favorite TV shows of all time? Big Bang Theory, The Daily Show (both versions), and most of Dick Wolf's creations.

What are three of your favorite bands or musical acts? The Stone Roses, U2, and Bloc Party.

What do you like to do in your free time? Cook or go to a restaurant, read, and attend comedy shows.

How did you get into your field of expertise? I was bartending at the time and got talking to a patron who was working in Intelligent Transportation Systems. Turned out he was looking for an engineer, and I was looking for a full-time gig.

Why did you choose your profession? I am an engineer by education, so this was an easy one for me. I had been working in Quality Control for some time and wasn't happy.

What do you like about working in your field? Machine vision is no longer a description of an event in a company. In many cases, it's a division of the engineering team and more. Large companies have dedicated machine vision engineering teams, and it's now an option on many curricula for colleges. What I love, though, is that we are at the cutting edge of technology and as such, I get to see all the projects where our boards are in use.

What is your company's core focus and mission? BitFlow's focus is to continue to lead the industry in frame grabber customer and technical support. We have a great reputation for this already and receive customer compliments weekly. As the industry adopts new standards, we work two-fold on this: to influence the direction of the industry and to collaborate with others on making products that are both user friendly and affordable, yet highly innovative.

What are you most excited about at your company right now and why? The dynamics of our company and the fact that we are continually producing new products shows that we are a leading trendsetter in the industry. BitFlow doesn't sit still, and my time here has shown me this.

In what markets or applications do you see the most growth? Several markets are starting to see vision as the best possible solution, including the Internet of Things, autonomous vehicles, security, medical, and military. Newer, larger sensors and faster-speed interfaces such as CXP allow a lot more data to be gathered and analyzed in real time, leading to split-second decisions. This will be one path of our industry's future.

How have market changes affected product development at your company? Since the arrival of FireWire and GigE Vision, we have heard of the end of the frame grabber companies, yet here we are, 23 years after inception, still offering them. We always watch the industry and see what's next or new in standards, and how we can offer a comparable product. We are also aware that the frame grabber is more suited to certain areas of industrial applications, so we focus mainly on these.

What is one interesting way you've seen your product utilized recently? With a lot of NDAs signed, it is difficult to state some of the cooler aspects of deployment of our products, but having seen where we are in the Life Sciences market, and some of our military involvement, it's definitely easy to say we are in some very interesting projects around the planet.