
Optical Flow Based System Design for Mobile Robots

Mehmet Serdar Guzel
School of Mechanical and System Engineering
Newcastle University
Newcastle, UK
m.s.guzel@newcastle.ac.uk

Robert Bicker
School of Mechanical and System Engineering
Newcastle University
Newcastle, UK
robert.bicker@newcastle.ac.uk

Abstract—This paper presents a new optical flow based navigation strategy, based on a multi-scale variational approach, for mobile robot navigation using a single Internet based camera as the primary sensor. Real experiments to guide a Pioneer 3-DX mobile robot in a cluttered environment are presented, and the analysis of the results allows us to validate the proposed behaviour based navigation strategy. The main contribution of this approach is that it proposes an alternative high-performance navigation algorithm for systems that consume high computation time for image acquisition.

Keywords—optical flow; variational approach; mobile robot navigation; behaviour; obstacle avoidance; time to contact

I. INTRODUCTION

Vision is one of the key sensing methodologies, and has been used by researchers for many years to assist robot navigation. It can gather detailed information about the environment which may not be available from combinations of other sensors. The computational complexity of an image processing algorithm is one of the most critical aspects for real time applications. If a real time system has fast image acquisition and processing ability, some loss of reliability or accuracy in the algorithms can be tolerated. The only way to arrive at an appropriate solution for a low cost navigation system is to determine an efficient and optimum method for the problem at hand.

Optical flow is an appropriate methodology for vision based mapless navigation strategies. Several optical flow based navigation strategies using more than one camera have been introduced by researchers; biologically inspired behaviours based on stereo vision have been adapted for obstacle avoidance [1], and a trinocular vision system for navigation has been proposed [2]. Both these methods, in some way, emulate the corridor following behaviour. Their main disadvantage is that they require more than one camera, and processing images from more than one camera increases both the computational cost of the system and the implementation cost of the software. Few works have been proposed based on a single camera. An ecological psychology approach, including control laws using optical flow and action modes which can avoid obstacles and play tag solely using optical flow, was proposed [3]. A portable platform was introduced that supports real-time computer vision applications for mobile robots via optical flow [4]. Mapless navigation based on a simple balance strategy, consisting of balancing the amount of left and right side flow to avoid obstacles, was used [5]; moreover, a new navigation algorithm based on optical flow with an enhanced image segmentation method, claimed to be fast enough to give the mobile robot the capability of reacting to any environmental change in real time, was proposed [6]. Although several different methodologies have been proposed recently, we are still far from a satisfactory solution.

Previous researchers have employed either standard window matching algorithms or gradient based approaches. However, the main weakness of these approaches is their failure to address how to deal with large displacements. To overcome this problem, a multi-scale approach has been adapted to these methods [7, 8]; however, this increases computation time and may not be suitable for real time navigation tasks. Hence, a novel algorithm involving a multi-scale variational optical flow algorithm with reliable and practical image segmentation is presented to contribute to the mapless navigation strategy, guiding the local optimization to large displacement solutions. A sensing system based on optical flow and time-to-collision calculation is tested on a Pioneer 3-DX mobile robot equipped with a TV7230 IP based camera having a capacity of 25 frames per second, as shown in Fig. 1; all calculations are performed onboard the robot.

II. OPTICAL FLOW

Optical flow is considered as a 2D vector field describing the apparent motion of each pixel in successive images of a 3D scene taken at different times [9]. The main idea is based on the assumption that the pixel intensity values for corresponding 3D points on the images are the same. Thus, if two consecutive images have been obtained at times t0 and t1, the basic concept is to detect motion using image differencing [10]. It is assumed that for a given scene point the corresponding image point intensity I remains constant over time, which is referred to as conservation of image intensity. If a scene point projects onto image point (x, y) at time t and onto image point (x + δx, y + δy) at time (t + δt), Equation (1) can be deduced from the assumption that the brightness of a point in an image is constant. The main objective is to find the vector (δx, δy) that minimises the error given by (2), where S() is a function that measures the similarity between pixels.

I(x, y, t) = I(x + \delta x, y + \delta y, t + \delta t)    (1)

e = S(I(x + \delta x, y + \delta y, t + \delta t), I(x, y, t))    (2)

Figure 1. Pioneer 3-DX robot with the TV7230 camera.

Expanding the right-hand side of (1) in a Taylor series about (x, y, t), ignoring the higher-order terms and rearranging yields the following expression.

\delta x \frac{\partial I}{\partial x} + \delta y \frac{\partial I}{\partial y} + \delta t \frac{\partial I}{\partial t} = 0    (3)

Dividing the previous equation throughout by δt, the movements along the horizontal (δx/δt) and vertical (δy/δt) directions are denoted u and v respectively. With these rearrangements, and denoting the partial derivatives of I by Ix, Iy and It, the differential flow equation shown in (4) is obtained.

I_x u + I_y v = -I_t    (4)
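To make the constraint concrete, a minimal numpy sketch is given below; it is an illustration rather than the authors' implementation, and the finite-difference choices (np.gradient for Ix and Iy, a plain frame difference for It) are assumptions.

```python
import numpy as np

def flow_derivatives(I1, I2):
    """Estimate the derivatives Ix, Iy, It used in the differential
    flow constraint (4); I1, I2 are consecutive grayscale frames
    given as float arrays of equal shape."""
    Ix = np.gradient(I1, axis=1)   # central differences in x
    Iy = np.gradient(I1, axis=0)   # central differences in y
    It = I2 - I1                   # forward difference in time
    return Ix, Iy, It

def constraint_residual(u, v, Ix, Iy, It):
    """Residual of Ix*u + Iy*v + It = 0; close to zero wherever the
    brightness constancy assumption holds for the flow (u, v)."""
    return Ix * u + Iy * v + It
```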
A. Multi Scale Variational Algorithms

Recent work on optical flow has focused on the variational approach, which seeks to minimize an energy function over the image [10]. The standard approach is to minimize an energy incorporating the optical flow constraint through a penalty term Ψ, which bounds the cost as it approaches some maximum value, together with a 'smoothness' term, similar to the Horn-Schunck method [11]. Having an appropriate penalty term is important. The global deviations from the grey value constancy assumption, measured by the energy, are expressed via (5). Equation (6) is a smoothness expression describing the model assumption of a piecewise smooth flow field. The total energy is the weighted sum of the data and smoothness terms, shown in (7), involving a control parameter α where α > 0.

E_{data}(u,v) = \iint \Psi\left( \left( I(x + \delta x, y + \delta y, t + \delta t) - I(x, y, t) \right)^2 \right) dx\, dy    (5)

E_{smooth}(u,v) = \iint \left| \nabla_2 \delta x \right|^2 + \left| \nabla_2 \delta y \right|^2 dx\, dy    (6)

E(u,v) = E_{data} + \alpha E_{smooth}    (7)

Gradient information in images with high frequency content is only useful at a very small scale, and displacements of more than one pixel cannot be measured. The standard approach to dealing with this problem is to use a multi-resolution coarse-to-fine algorithm. An image pyramid is constructed by repeatedly down-sampling the image by a factor of two. The optical flow is first found on the smallest image in the pyramid, and is used to unwarp the next smallest image. Interpolation is used for the fractional pixel locations. This process is then iterated until the original image resolution is reached.
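The coarse-to-fine scheme can be sketched as follows. This is a minimal illustration, assuming OpenCV for resizing and warping; solve_flow stands for any single-scale estimator returning an incremental flow (du, dv) and is a placeholder, not a function from the paper.

```python
import numpy as np
import cv2  # assumed available; used only for resizing and warping

def coarse_to_fine_flow(I1, I2, solve_flow, levels=4):
    """Coarse-to-fine estimation over an image pyramid: solve on the
    smallest level, then upsample the flow and unwarp the next level
    before refining. I1, I2: float32 grayscale frames."""
    # Build pyramids by repeatedly down-sampling by a factor of two.
    pyr1, pyr2 = [I1], [I2]
    for _ in range(levels - 1):
        pyr1.append(cv2.pyrDown(pyr1[-1]))
        pyr2.append(cv2.pyrDown(pyr2[-1]))

    h, w = pyr1[-1].shape
    u = np.zeros((h, w), np.float32)
    v = np.zeros((h, w), np.float32)
    for J1, J2 in zip(reversed(pyr1), reversed(pyr2)):
        h, w = J1.shape
        # Doubling the resolution doubles the displacements, so the
        # upsampled flow vectors are scaled by two as well.
        u = 2.0 * cv2.resize(u, (w, h), interpolation=cv2.INTER_LINEAR)
        v = 2.0 * cv2.resize(v, (w, h), interpolation=cv2.INTER_LINEAR)
        # Unwarp the second image by the flow found so far; bilinear
        # interpolation handles the fractional pixel locations.
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        J2w = cv2.remap(J2, xs + u, ys + v, cv2.INTER_LINEAR)
        # Refine with the residual flow at this resolution.
        du, dv = solve_flow(J1, J2w)
        u, v = u + du, v + dv
    return u, v
```

Scaling the upsampled flow by two mirrors the factor-of-two down-sampling, and the bilinear remapping supplies the interpolation at fractional pixel locations mentioned above.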



B. Variational optical flow estimation

The basic idea is to recover the optical flow as the minimizer of an energy function of the type shown in (8). Solving this minimization problem amounts to finding the functions u and v [12].

E(h) = \int \left( I_1(x, y) - I_2\left( x + u(x, y),\, y + v(x, y) \right) \right)^2 dx + \alpha \int V(u, v)\, dx    (8)

where the term V(u,v) is called the regularizer, which can be written as

V(u, v) = \| \nabla u \|^2 + \| \nabla v \|^2, \qquad \nabla = [\partial_x, \partial_y]    (9)

The associated Euler-Lagrange equations are given by the Partial Differential Equation (PDE) system below, where z := (x, y) and the object displacement is h(z).

\alpha \nabla^2 u + \left( I_1(z) - I_2(z + h(z)) \right) \frac{\partial I_2}{\partial x}(z + h(z)) = 0    (10)

\alpha \nabla^2 v + \left( I_1(z) - I_2(z + h(z)) \right) \frac{\partial I_2}{\partial y}(z + h(z)) = 0    (11)

The solutions are obtained by calculating the asymptotic state (t → ∞) of the corresponding parabolic system via the following expressions [13]. The system is discretized using finite differences; all derivatives are approximated by central differences, and for the discretization in the t direction a semi-implicit scheme is employed, assuming a square grid with Δx = Δy and time step Δt. Here u^{n+1}_{i,j} approximates u at grid point z_{i,j} at time n×Δt, I_{1,x} is an approximation to ∂I_1/∂x, and the subscript φ denotes quantities at the current scale of the coarse-to-fine scheme. The following expressions are obtained.

u^{n+1}_{i,j} = u^{n}_{i,j} + \frac{\alpha \Delta t}{2 (\Delta x)^2} \left[ \left( u^{n+1}_{i+1,j} + u^{n+1}_{i-1,j} + u^{n+1}_{i,j+1} + u^{n+1}_{i,j-1} - 4 u^{n+1}_{i,j} \right) + \left( u^{n}_{i+1,j} + u^{n}_{i-1,j} + u^{n}_{i,j+1} + u^{n}_{i,j-1} - 4 u^{n}_{i,j} \right) \right] + \left( I_{1,\varphi}(z_{i,j}) - I_{2,\varphi}(z_{i,j} + h_{\varphi,i,j}) \right) I_{2,x,\varphi}(z_{i,j} + h_{\varphi,i,j})    (12)

v^{n+1}_{i,j} = v^{n}_{i,j} + \frac{\alpha \Delta t}{2 (\Delta x)^2} \left[ \left( v^{n+1}_{i+1,j} + v^{n+1}_{i-1,j} + v^{n+1}_{i,j+1} + v^{n+1}_{i,j-1} - 4 v^{n+1}_{i,j} \right) + \left( v^{n}_{i+1,j} + v^{n}_{i-1,j} + v^{n}_{i,j+1} + v^{n}_{i,j-1} - 4 v^{n}_{i,j} \right) \right] + \left( I_{1,\varphi}(z_{i,j}) - I_{2,\varphi}(z_{i,j} + h_{\varphi,i,j}) \right) I_{2,y,\varphi}(z_{i,j} + h_{\varphi,i,j})    (13)
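As an illustration of how such an update can be realised, the sketch below implements a simplified explicit (Jacobi-style) relaxation of (12) and (13) in numpy; the semi-implicit treatment, boundary handling and parameter values of the paper are replaced by assumptions.

```python
import numpy as np

def relax_flow(I1, I2w, u, v, alpha=0.5, dt=0.2, sweeps=200):
    """Explicit relaxation towards the asymptotic state of the
    parabolic system: each sweep adds the 5-point Laplacian
    smoothness force and the data force from the image difference.
    I2w is the second frame already warped by the current flow."""
    I2x = np.gradient(I2w, axis=1)
    I2y = np.gradient(I2w, axis=0)
    diff = I1 - I2w                      # data term I1(z) - I2(z + h(z))
    for _ in range(sweeps):
        # 5-point Laplacian (periodic boundaries via np.roll, for brevity).
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v)
        u = u + dt * (alpha * lap_u + diff * I2x)
        v = v + dt * (alpha * lap_v + diff * I2y)
    return u, v
```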

The optical flow vectors and the movement animation between two successive frames, obtained using the multi-scale variational approach, are shown in Fig. 2.

Figure 2. Optical flow vectors and movement animation.


III. NAVIGATION STRATEGY VIA OPTICAL FLOW

The aim of the proposed system is to navigate an autonomous mobile robot via monocular vision in a cluttered environment. The robot tries to understand its environment by extracting important features from the image sequence. The block diagram of the navigation algorithm is shown in Fig. 3; optical flow is used as the principal cue to navigate the robot. Image sequences from an Internet based camera are provided by purpose-built converter software which can capture two successive frames at 160×120 resolution in under 0.1 second using the TV7230 pan-tilt camera. The 2-D motion, also called apparent motion or optical flow, is recovered from the intensity and colour information of the image sequence. The variational multi-scale approach discussed in the previous section is adapted to the proposed navigation strategy, whose flowchart is shown in Fig. 3; the evaluation of these methods is discussed in the conclusion of this paper.

Figure 3. Flowchart of the navigation algorithm.

A. FOE and TTC calculation

When one moves through a world of static objects, the visual scene projected on the retina appears to flow past. In fact, for translational motion of the camera, image motion everywhere is directed away from a singular point corresponding to the projection of the translation vector. This point is called the Focus of Expansion (FOE) [6].

The FOE is the point from which all optical flow vectors emerge, and both components of the optical flow vector are null at such a point (u = 0 and v = 0). It is determined from the calculated optical flow field by searching for the point at which the directions of the vectors in the field cross each other. In Fig. 4, the FOE is shown by a green square on the test image.
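One way to realise this crossing-point search is a linear least-squares fit, sketched below; the formulation is an assumption, since the paper does not give its FOE search procedure.

```python
import numpy as np

def estimate_foe(xs, ys, u, v):
    """Least-squares FOE estimate: every flow vector should point away
    from the FOE, so the FOE is the point minimising the (magnitude-
    weighted) distances to the lines through each pixel (x, y) with
    direction (u, v). Inputs are flattened 1-D arrays."""
    # Line through (x, y) with direction (u, v):
    #   -v*(X - x) + u*(Y - y) = 0
    A = np.column_stack([-v, u])
    b = -v * xs + u * ys
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (FOE_x, FOE_y) in pixel coordinates
```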


Accurate estimates of the time-to-contact (TTC) of approaching objects are crucial for mobile robot navigation. TTC uses visual information to judge the distance and speed of actions with respect to time. The source of this visual information is the movement of the agent towards an object, or of the object/surface towards the agent. These movements provide the visual system with important information about the constantly changing environment, allowing appropriate actions to be produced. In order to obtain more reliable and sensitive results from the proposed algorithm, the image is segmented into vertical regions.

The number of regions is calculated dynamically depending on the resolution. For 160×120 resolution, the image is divided into 16 equal columns, so that each column region contains 10×120 optical flow vectors. Given the coordinates of the FOE and the optical flow field, the TTC corresponding to the ith region of the image is calculated via the following expression, where x_i and y_i are the centre coordinates of the considered region, FOE_x and FOE_y are the coordinates of the FOE in the image, and u_i and v_i are the optical flow components of the ith region.

\tau_i = \frac{\sqrt{(x_i - FOE_x)^2 + (y_i - FOE_y)^2}}{\sqrt{u_i^2 + v_i^2}}    (14)

Figure 4. Focus of Expansion (FOE).
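A per-region TTC computation following (14) might look as follows; averaging the flow within each region and taking the region centre on the middle image row are assumptions not specified in the text.

```python
import numpy as np

def ttc_per_region(u, v, foe_x, foe_y, n_regions=16):
    """TTC estimate (14) for each vertical image region: distance of
    the region centre from the FOE divided by the flow magnitude.
    u, v: flow components of shape (H, W); FOE in pixel coordinates."""
    H, W = u.shape
    width = W // n_regions              # 10 columns per region at 160x120
    tau = np.zeros(n_regions)
    for i in range(n_regions):
        cols = slice(i * width, (i + 1) * width)
        ui = u[:, cols].mean()          # representative flow of region i
        vi = v[:, cols].mean()
        xi = (i + 0.5) * width          # region centre (assumed)
        yi = H / 2.0
        dist = np.hypot(xi - foe_x, yi - foe_y)
        speed = np.hypot(ui, vi)
        tau[i] = dist / max(speed, 1e-6)  # guard against zero flow
    return tau
```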
In addition to the magnitudes of the right and left flow vectors, the magnitudes of the central flow vectors are also calculated. For 160×120 resolution there are 16 regions, and each region is classified as a member of the right flow, left flow or central flow. The magnitudes of these flows are used as the primary cue for the behaviour module, in which four independent task-achieving behaviours are realised. The calculation of these regional flows is shown in the following expressions, where ρ_i denotes the total magnitude of the flow vectors in the ith region.

Lflow = \frac{\sum_{i=1}^{5} \rho_i}{5}    (15)

Rflow = \frac{\sum_{i=12}^{16} \rho_i}{5}    (16)

Cflow = \frac{\sum_{i=6}^{11} \rho_i}{6}    (17)

B. Behaviour Based Navigation

In robotics, earlier works focused on the "sense-model-plan-act" variety, requiring intense computation for inferring the location and identity of objects, updating a central world model, and planning a course of action to achieve some defined goal state. In contrast, researchers have found it beneficial to decompose the navigation task into multiple independent task-achieving modules, called behaviours. Four main behaviours are developed for safe navigation based on the magnitude and analysis of the optical flow vectors: Forward Motion, Obstacle Avoidance, Emergency Turn and Change Way. State machines are a good way of implementing a system involving all four behaviours and the navigation strategy, as shown in Fig. 5. The robot is designed to wander around any cluttered indoor environment whilst not colliding with any obstacle. It reacts to the presence of an obstacle in this environment both by changing its heading angle from its current value to a new value and by adjusting its current speed. The robot starts its initial movement via the Forward Motion behaviour, in which it has a speed of 0.2 m/s and a heading angle of 0, and moves in the forward direction until it encounters an obstacle. The obstacle avoidance behaviour depends on the type of obstacle. An emergency behaviour is initiated when a single object, such as a wall or a table, spans the entire field of view. This case occurs when the magnitude of the central flow Cflow is greater than both Lflow and Rflow and the standard deviation of the TTC values of the central region is below a certain threshold. In emergency mode the robot makes a 180 degree turn, decreases its speed from 200 mm/s to 100 mm/s, and starts the change direction behaviour, which determines a new heading angle in the direction of the region having the least TTC value. To calculate the new heading angle, the turning range is varied within ±8 degrees; the new heading angle depends on τ in the given range, and is obtained from the following expressions.

\theta_r = -\min(\tau_i), \quad i = 1, 2, \ldots, 8    (18)

\theta_l = +\min(\tau_i), \quad i = 8, 9, \ldots, 16    (19)

\theta_{new} = \min(\theta_r, \theta_l)    (20)
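One possible reading of (15) to (20) is sketched below. The 5/6/5 split of the 16 regions for (17) and the choice of the smaller-magnitude candidate in (20) are interpretations of the text rather than code from the paper.

```python
import numpy as np

def region_flows(rho):
    """Left, central and right flow magnitudes per (15)-(17) from the
    per-region flow totals rho[0..15] (0-based indexing)."""
    lflow = rho[0:5].sum() / 5.0     # regions 1-5
    rflow = rho[11:16].sum() / 5.0   # regions 12-16
    cflow = rho[5:11].sum() / 6.0    # regions 6-11 (assumed central band)
    return lflow, cflow, rflow

def change_way_heading(tau, max_turn=8.0):
    """Change Way heading per (18)-(20): a right-hand candidate from
    regions 1-8 and a left-hand candidate from regions 8-16, with the
    smaller-magnitude candidate chosen and bounded to +/-8 degrees."""
    theta_r = -float(np.min(tau[0:8]))
    theta_l = +float(np.min(tau[7:16]))
    theta = theta_r if abs(theta_r) <= abs(theta_l) else theta_l
    return float(np.clip(theta, -max_turn, max_turn))
```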

548 2010 IEEE Conference on Robotics, Automation and Mechatronics


Conversely, if the magnitude of the right or left flow is greater than the central flow and the TTC value is smaller than a certain threshold, the robot makes a turn based on the control law in (21), with the turning range varying within ±15 degrees, where Σ|w_L| and Σ|w_R| are the sums of the magnitudes of the optical flow in the visual hemifields on either side of the robot's body. The following expression gives the new heading angle.

\theta_{new} = \left( \frac{\sum |w_L| - \sum |w_R|}{\sum |w_L| + \sum |w_R|} \right) \times 15    (21)
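The balance control law (21) reduces to a few lines; this sketch is illustrative, and whether a positive angle means a left or a right turn depends on the robot's sign convention, which the paper does not state.

```python
def balance_turn(w_left, w_right, max_turn=15.0):
    """Control law (21): steer according to the normalised difference
    between the summed left and right flow magnitudes, scaled to the
    +/- max_turn degree range; zero flow keeps the current heading."""
    total = w_left + w_right
    if total == 0.0:
        return 0.0
    return (w_left - w_right) / total * max_turn
```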

IV. CONCLUSION

The developed system was implemented on a Pioneer 3-DX mobile robot, whose onboard computer is based on an Intel Pentium 1.8 GHz (Mobile) processor with 256 Mbytes of RAM. The robot is equipped with the TV7230 internet based pan-tilt camera operating at 25 frames per second. The software architecture of the proposed system is supported by the CImg library [14] and the Player architecture [15], which are open-source software projects. The system was tested in the Robotics lab at Newcastle University, containing chairs, furniture, and computing equipment. The Time to Contact (TTC) graph of a chair-avoiding manoeuvre, shown in Fig. 8, relates to the environment shown in Fig. 7. According to the theory of estimation of a translation sequence, the estimated TTC value on the obstacle's side decreases while approaching. The analysis of the computations involved in the sensing system is shown in Table I for 160×120 resolution. According to the results, the time consumed during image acquisition is the main problem; nevertheless, the test results demonstrate that the robot can successfully navigate for 20 minutes in the environment shown in Fig. 6 without colliding with any obstacle, employing the multi-scale variational algorithm and the proposed navigation strategy. The proposed multi-scale variational approach extends the applicability of optical flow to fields with larger displacements, and is adapted to the vision based navigation strategy successfully. In Fig. 9, a real navigation scenario in the laboratory is demonstrated. Every behaviour is illustrated with a different colour: orange for forward, yellow for emergency, red for left turn and blue for right turn. In order to evaluate the performance of the proposed algorithm, it is compared with the Lucas-Kanade method [8] and the Horn-Schunck method [11] in Table II, which compares the algorithms on corridor centering behaviour, total flow computation time and safe navigation time.

Figure 5. The state machine corresponding to the proposed navigation system.

Figure 8. TTC values for the avoiding manoeuvre.

TABLE I. THE PERFORMANCE OF THE SENSING SYSTEM FOR 160×120 RESOLUTION

Image            Optical Flow     TTC & FOE        Decision (s)   Total
Acquisition (s)  Calculation (s)  Calculation (s)                 Time (s)
0.180            0.120            0.01             0.0001         0.3101
0.176            0.110            0.01             0.0001         0.2961
0.183            0.107            0.01             0.0001         0.3001

Figure 6. Robotics lab of Newcastle University.



The results validate that the proposed method provides an alternative and robust solution for mobile robots using a single low-cost camera as the only sensor to navigate via a mapless strategy. According to the comparison results, the computational time of the proposed method is higher than that of the two other methods; however, it is the most suitable method for the corridor following behaviour and achieves safe navigation in cluttered environments.

Figure 7. Chair-avoiding manoeuvre.

Figure 9. A real navigation scenario at the laboratory.

TABLE II. COMPARISON OF FLOW TECHNIQUES

Method            Flow Time (s)   Centering Error (cm)   Average Navigation Time (min)
L-K               0.101           9, for 3.00 m          20
H-S               0.055           16, for 3.00 m         10
Variational Alg.  0.120           6, for 3.00 m          20

REFERENCES

[1] A. Bernardino and J. Santos-Victor, "Visual behaviours for binocular tracking," Robotics and Autonomous Systems, vol. 25, no. 3-4, pp. 137-146, 1998.
[2] F. Bergholm and A. Argyros, "Combining central and peripheral vision for reactive robot navigation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 356-362, 1999.
[3] A. P. Duchon, W. H. Warren, and L. Pack Kaelbling, "Ecological robotics," Adaptive Behavior, Special Issue on Biologically Inspired Models of Spatial Navigation, vol. 6, no. 3/4, pp. 473-507, 1998.
[4] S. Szabo, D. Coombs, M. Herman, T. Camus and H. Liu, "A real-time computer vision platform for mobile robot applications," Real-Time Imaging, vol. 2, pp. 315-327, 1996.
[5] K. Souhila and A. Karim, "Optical flow based robot obstacle avoidance," International Journal of Advanced Robotic Systems, vol. 4, no. 1, pp. 13-16, 2007.
[6] E. Maria, H. Schneebeli and M. Sarcinelli-Filho, "An optical flow-based sensing system for reactive mobile robot navigation," Revista Controle & Automação, vol. 18, no. 3, 2007.
[7] T. Brox, A. Bruhn, N. Papenberg and J. Weickert, "High accuracy optical flow estimation based on a theory for warping," in Proc. 8th ECCV, Prague, Czech Republic, pp. 25-36, 2004.
[8] J.-Y. Bouguet, "Pyramidal implementation of the Lucas-Kanade feature tracker," Technical report, Intel Corporation, Microprocessor Research Labs, 1999.
[9] E. R. Davies, Machine Vision, 2nd edition, chap. 17.2, pp. 431-433.
[10] B. Atcheson, W. Heidrich and I. Ihrke, "An evaluation of optical flow algorithms for background oriented schlieren imaging," Experiments in Fluids, vol. 45, pp. 467-476, 2009.
[11] B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, pp. 185-203, 1981.
[12] F. Lauze and M. Nielsen, "A variational algorithm for motion compensated inpainting," in S. Barman, A. Hoppe and T. Ellis, editors, British Machine Vision Conference, vol. 2, pp. 777-787, 2004.
[13] L. Alvarez, J. Sanchez and J. Weickert, in Scale-Space Theories in Computer Vision, vol. 1682/1999, pp. 235-246, 1999.
[14] CImg Library, 2009. Available from: http://cimg.sourceforge.net/ [Accessed 05/05/2009].
[15] Player Project, 2009. Available from: http://playerstage.sourceforge.net [Accessed 05/05/2009].

