
IEEE/RSJ International Workshop on Intelligent Robots and Systems IROS '91, Nov. 3-5, 1991, Osaka, Japan.

IEEE Cat. No. 91TH0375-6

Estimating Location and Avoiding Collision Against Unknown Obstacles for the Mobile Robot using Omnidirectional Image Sensor COPIS
Yasushi Yagi, Yoshimitsu Nishizawa and Masahiko Yachida
Department of Information & Computer Sciences, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka 560, Japan. Phone: 06-844-1151, E-mail: y-yagi@ics.osaka-u.ac.jp

Abstract

We have proposed a new omnidirectional image sensor COPIS (Conic Projection Image Sensor) for guiding navigation of a mobile robot. Its feature is passive sensing of the omnidirectional image of the environment in real time (at the frame rate of a TV camera) using a conic mirror. COPIS is a suitable sensor for visual navigation in a real world environment with moving objects. This paper describes a method for estimating the location and motion of the robot by detecting the azimuth of each object in the omnidirectional image. In this method, the azimuth is matched with a given environmental map. We also present a method to avoid collision against unknown obstacles and to estimate their locations by detecting their azimuth changes while the robot is moving in the environment. Using the COPIS system, we performed several experiments in the real world.

1. Introduction

There has been much work on mobile robots with vision systems which navigate in both unknown and known environments [1-4]. These mobile robots, however, view only the region in front of themselves and, as a result, they may collide with objects approaching from the side or behind. Thus, we need an image sensor that views the environment all around the robot so that it may navigate safely.

Imaging methods using rotating cameras [5], a fish-eye lens [6], a spherical mirror [7] or a conic mirror [8][9] have been studied for acquiring omnidirectional views of the environment. Although very precise azimuth information can be acquired in the omnidirectional view taken by a rotating camera, the imaging takes a long time and the method is therefore not applicable to real-time problems such as avoiding collision against moving objects. Imaging with a fish-eye lens can acquire a wide view of a hemisphere around the camera. However, image analysis of the ground (floor) and objects on it is difficult because they appear along the circular boundary of the image, where the image resolution is very poor. A conic mirror yields an image of the environment around it, so a 360 degree view is easily obtained. Imaging with a spherical mirror is similar to imaging with a conic mirror; however, the resulting image resembles that of a fish-eye lens, so structures in the environment such as walls and doors of a room appear along the circular boundary of the image.

We have proposed a new omnidirectional image sensor COPIS (Conic Projection Image Sensor) for guiding navigation of a mobile robot [9][10]. Its feature is passive sensing of the omnidirectional image of the environment in real time (at the frame rate of a TV camera) using a conic mirror. COPIS is a suitable sensor for visual navigation in a real world environment with moving objects [11]. The imaging is a conic projection; the azimuth of each point in the scene appears in the image as its direction from the image center. Thus, if the azimuth angle is observed at two points, the relative location between the robot and the object point can be calculated by triangulation. This paper describes a method for estimating the location and motion of the robot by detecting the azimuth of each object in the omnidirectional image. In this paper, we assume a priori knowledge (a model) of the environment, and the azimuth is matched with the given environmental map. We also present a method to estimate the locations of unknown obstacles by detecting their azimuth changes while the robot is moving in the environment. The robot can then avoid collision with them. Using the COPIS system, we performed experiments in a room.

2. COPIS System Configuration

As shown in Fig.1, the COPIS system has three components: an imaging subsystem COPIS, an image processing subsystem and a mobile robot. COPIS, mounted on the robot, consists of a conic mirror and a TV camera in a glass tube with a diameter of 200 mm and a height of 200 mm. The image processing subsystem consists of a monitor, an image processor, which converts each omnidirectional image into a 512 x 432 x 8 bit digital image, and a 32-bit workstation.

A conic mirror with a diameter of 120 mm and a TV camera are set in the glass tube in such a way that their axes are identical and vertical. Fig.2 is an example of an input image taken in the real environment shown in Fig.3. As shown in Fig.4, the image taken by COPIS is a 2π view around the vertical axis. Furthermore, COPIS has the advantages that vertical edges in the environment project radially in the image and that their azimuth angles have an invariant relation with the distance from, and the height of, the object.
Fig.1 COPIS System Configuration
Fig.2 An Example of Input Image
Fig.3 Experimental Environment
Fig.4 View Field of COPIS
Fig.5 Invariant Relation of Azimuth Angle

As shown in Fig.5, the point P at (Xp, Yp, Zp) in the environment is projected onto the image point p(xp, yp) by the conic projection (1); in particular, the azimuth angle of P around the vertical axis appears in the image as the direction of p from the image center.
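As an aside, the azimuth reading implied by this projection can be sketched in a few lines. The following illustration is ours, not from the paper; the image center (cx, cy) is an assumed calibration value for the 512 x 432 image.

```python
import math

def azimuth_from_image_point(x, y, cx=256.0, cy=216.0):
    """Azimuth (radians) of an image point under the conic projection.

    Because the conic mirror preserves azimuth, the azimuth of a scene
    point around the vertical axis is simply the direction of its image
    point as seen from the image center (cx, cy).
    """
    return math.atan2(y - cy, x - cx)

# Example: a point on a radial (vertical) edge detected at pixel (400, 300).
print(math.degrees(azimuth_from_image_point(400, 300)))
```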

Thus, in the COPIS system, by using the azimuths of the radial edges in the image plane, COPIS can estimate the locations of the robot and of objects.

3. Navigation Algorithm

The robot is initially parked at a standard position and driven around a room and a corridor of the building along a given route. The robot knows the standard (starting) position and its own movement; however, there are measurement errors caused by the swaying motion of the robot. Therefore, by using azimuth information from both the input image and the environmental map, we estimate the location and motion of the robot.

Fig.6 Location Estimation of Robot


3.1 Location and motion estimation of the robot

Essentially, the location of a mobile robot can be defined by the planar polar coordinates (r, θ), as shown in Fig.5. Thus, as shown in Fig.6, if two or more azimuth angles of objects in the given environmental map are observed, the location of the robot can be calculated by matching the obtained azimuth angles with the environmental map. In practice, we estimate a more precise location by the least squares method (a sketch of such an estimate is given below). Furthermore, the motion of the robot can be estimated by measuring its location in consecutive images. Since COPIS takes an omnidirectional image, the system can estimate the location and motion of the robot even while the robot is turning.

3.2 Predicting the azimuth angle of a vertical edge

Fig.7(a) shows an example of the environmental map. As seen in the figure, the map is a two-dimensional model viewed from the vertical direction. Therefore, when the robot's location is given, we can predict the azimuth angle of each edge as shown in Fig.7(b).

3.3 Matching radial lines to the predicted azimuth angle model

Since a rough location of the robot has already been obtained, we generate a predicted azimuth angle model from the environmental map. While the robot is moving, the rough location is calculated by adding the robot's movement, obtained from its encoder, to the location estimated at the previous frame. The predicted azimuth angle of each edge is then compared with the azimuth angle of the radial line obtained from the input image as follows.
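The paper does not spell out the least squares formulation used in Section 3.1, so the following is only a plausible sketch of such an estimate: each matched vertical edge with known map position constrains the robot to a line, and the stacked linear system is solved for the robot position. All names, and the assumption that the azimuths are already expressed in the world frame, are ours.

```python
import numpy as np

def locate_robot(landmarks_xy, azimuths):
    """Least squares robot position from azimuths of matched map edges.

    landmarks_xy : (N, 2) array of vertical-edge positions from the map.
    azimuths     : (N,) azimuth angles (radians) in the world frame.

    Each bearing a_i to landmark (X_i, Y_i) gives one linear equation
        sin(a_i) * x_r - cos(a_i) * y_r = sin(a_i) * X_i - cos(a_i) * Y_i
    and the stacked system is solved in the least squares sense.
    Returns the estimated position and the sum of squared residuals.
    """
    L = np.asarray(landmarks_xy, dtype=float)
    a = np.asarray(azimuths, dtype=float)
    A = np.column_stack([np.sin(a), -np.cos(a)])
    b = np.sin(a) * L[:, 0] - np.cos(a) * L[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = float(np.sum((A @ pos - b) ** 2))
    return pos, residual

# Example: three map edges observed at known azimuths from near the origin.
edges = [(2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
angles = [0.0, np.pi / 2, np.pi / 4]
print(locate_robot(edges, angles))
```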

Fig.7 Environmental Map and Prediction of the Azimuth Angle


In the case when the robot is at the starting position, we first set a search region around the predicted azimuth angle of each vertical edge, which is estimated from the environmental map, and examine whether a radial line in the image exists in each search region. In the case when the robot is moving, we set the search region using the observed locus of each edge in the image while the robot moves. Let us denote the robot motion by (u(t), v(t)). Defining the position of P at time t1 by P1(X1, Y1, Z1), the relative velocity of the point P in the environment at time t1+t is represented by (-u(t1+t), -v(t1+t), 0). We get the location of point P at time t1+t as

X_p = \int_0^t -u(t_1+\tau)\, d\tau + X_1
Y_p = \int_0^t -v(t_1+\tau)\, d\tau + Y_1        (2)
Z_p = Z_1

Thus, from (1) and (2), the relation between the azimuth angle of an object and time t is obtained as follows:

\tan\theta(t_1+t) = \frac{\int_0^t -v(t_1+\tau)\, d\tau + Y_1}{\int_0^t -u(t_1+\tau)\, d\tau + X_1}        (3)
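For concreteness, equations (2) and (3) can be evaluated numerically from sampled odometry. The sketch below is our own illustration, not code from the paper; it assumes the velocities u and v are sampled once per frame at a fixed interval dt and integrated by a simple rectangular sum.

```python
import math

def predict_azimuth_locus(X1, Y1, u_samples, v_samples, dt):
    """Predicted azimuth locus of a fixed point P, following eqs. (2)-(3).

    (X1, Y1)   : position of P relative to the robot at time t1.
    u_samples  : robot velocity along X at each frame (odometry).
    v_samples  : robot velocity along Y at each frame (odometry).
    dt         : inter-frame interval in seconds.
    """
    azimuths = []
    U = V = 0.0
    for u, v in zip(u_samples, v_samples):
        U += u * dt                 # running integral of u(t1 + t)
        V += v * dt                 # running integral of v(t1 + t)
        Xp = X1 - U                 # eq. (2)
        Yp = Y1 - V
        azimuths.append(math.atan2(Yp, Xp))   # eq. (3), via atan2
    return azimuths

# Example: robot drives straight along X at 0.05 m per frame toward a point
# 1 m ahead and 0.5 m to the side; the predicted azimuth sweeps sideways.
print([round(math.degrees(a), 1) for a in
       predict_azimuth_locus(1.0, 0.5, [0.05] * 10, [0.0] * 10, 1.0)])
```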

The locus of the azimuth angle in consecutive images is represented by (3). Thus, the azimuth angle at the next frame can be predicted, and we can set a search region around this predicted azimuth angle of the vertical edge. After setting the search regions, we estimate the location of the robot. However, the obtained azimuth angles have observational errors due to the swaying motion (pan angle) of the robot. Therefore, we estimate a more precise location by the least squares method. By changing the pan angle of the robot every 0.5 degrees within a certain error margin for the swaying motion, we can find the precise pan angle at which the deviation of the least squares method becomes minimum. Then the location of the robot can be determined. Fig.8 shows the process of finding the minimum deviation by changing the pan angle.
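The pan angle search described above could be implemented along the following lines. This is only our sketch: the ±5 degree margin is an assumed value (the paper only speaks of "a certain margin"), and locate_robot refers to the least squares sketch given after Section 3.3.

```python
import math
import numpy as np

def estimate_pan_and_location(landmarks_xy, azimuths_robot,
                              margin_deg=5.0, step_deg=0.5):
    """Search the pan (sway) angle in 0.5 degree steps.

    For each candidate pan angle, the measured azimuths are corrected,
    the robot location is estimated by least squares, and the candidate
    with the minimum residual (deviation) is kept, as in Fig.8.
    """
    best = None
    for k in np.arange(-margin_deg, margin_deg + 1e-9, step_deg):
        pan = math.radians(k)
        corrected = [a + pan for a in azimuths_robot]
        pos, residual = locate_robot(landmarks_xy, corrected)
        if best is None or residual < best[0]:
            best = (residual, pan, pos)
    return best  # (minimum deviation, estimated pan angle, location)
```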
Fig.8 Process of Estimating Pan Angle of Robot
Fig.9 Estimation of Unknown Obstacle's Location

4. Estimation of Unknown Obstacle

After matching the observed objects with the environmental map, objects which cannot be found in the environmental map are recognized as unknown obstacles. If there are edges which have not been matched to the environmental map, the robot considers that they are caused by some unknown obstacles and estimates their locations. The relative location between the robot and an obstacle is calculated by triangulation if its azimuth angle is detected at two positions while the robot is moving.

4.1 Estimation of an unknown obstacle's location

The locus of the azimuth angle in consecutive images is represented by (3). Thus, if the azimuth angle θ is observed at two points, the relative location between the robot and the object point is calculated by triangulation as follows:


[;:I=[

-1 tan02 -1 U(2)tanQ tan83 - 1 1 U(3)tanOg

V (2) V (3)

(4)

3 f

.............
t

V (i) = Av(ti+t (i)) dt


t-

(i=2,3)

.. ..

where θ2 and θ3 are the azimuth angles θ observed after t2 and t3 seconds, respectively. In the case θ2 = θ3, the matrix in (4) is singular. When the object moves along the same axis as the robot motion, the condition tan θ2 ≠ tan θ3 is not satisfied, and it is impossible to calculate the location. However, since an object usually has some extent, it is unlikely that all points on the object move along the same axis as the robot movement. Therefore, the locations of at least a few points on the object can be calculated. In practice, the azimuth angle has an observational error due to the swaying motion of the robot. Therefore, as shown in Fig.9, we estimate a more precise location using consecutive measurements by the least squares method.
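A minimal sketch of solving equation (4) for one pair of observations follows; the variable names are ours, and (U2, V2), (U3, V3) are the integrated odometry displacements defined above. Over consecutive frames, one would stack one such linear equation per observation and solve the over-determined system by least squares, as the text describes.

```python
import math
import numpy as np

def triangulate_obstacle(theta2, theta3, U2, V2, U3, V3):
    """Relative obstacle location (X1, Y1) from two azimuths, eq. (4).

    theta2, theta3 : azimuth angles observed after t2 and t3 seconds.
    (U2, V2), (U3, V3) : robot displacements at those times.

    Returns None when tan(theta2) == tan(theta3), i.e. the matrix in (4)
    is singular and this point cannot be triangulated from the pair.
    """
    A = np.array([[math.tan(theta2), -1.0],
                  [math.tan(theta3), -1.0]])
    b = np.array([U2 * math.tan(theta2) - V2,
                  U3 * math.tan(theta3) - V3])
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    X1, Y1 = np.linalg.solve(A, b)
    return X1, Y1

# Example: a point initially 1 m ahead and 0.5 m to the side; the robot
# advances 0.1 m along X between the two observations.
t2 = math.atan2(0.5, 1.0)          # azimuth before the motion
t3 = math.atan2(0.5, 0.9)          # azimuth after moving 0.1 m forward
print(triangulate_obstacle(t2, t3, 0.0, 0.0, 0.1, 0.0))  # ~ (1.0, 0.5)
```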

Fig.10 Locus Map of Vertical Edge

5. Experimental Results

Using the COPIS system, we performed several experiments in the real world. One of them was performed in a room with a size of 2.5 m by 2.5 m. The robot moved in the environment as shown in Fig.3. An image was taken after every 5 cm of robot motion. By observing the locus of the azimuth angles of vertical edges, COPIS can estimate its own location and motion. First, the robot moves straight from the 1st to the 16th frame; during the next 16 frames the robot changes its direction and moves round an arc toward the right side; finally, the robot moves straight again. Fig.10 shows plots of the azimuth angles of the vertical edges in the environment. The result is shown in Fig.11. The given vertical edges in the environmental map are plotted as small black rings, and the vertical edges on the unknown obstacles that were matched in more than 15 frames are plotted as black points. Furthermore, the real locus of the robot movement is drawn by thin straight and curved black lines, and the estimated locus of the robot is drawn by a thick black line. The average error of the location measurement of the robot was approximately 3 cm and the maximum error was approximately 7 cm. In this experiment, as shown in Fig.11, after the robot moves round the arc toward the right side, a large error was produced in the front region of the robot. During the first 32 frames, the vertical edges matched with the environmental map are distributed all around the robot, so the location of the robot can be calculated from many of these vertical edges. However, when the robot moves along the final straight course, these vertical edges move toward the region behind the robot, and some vertical edges are occluded by the unknown obstacle. Then the number of vertical edges matched with the environmental map decreases, and the location of the robot cannot be calculated precisely. The average error of the location of the unknown obstacles was approximately 3 cm. However, we consider that the precision of the obtained locations of the robot and the unknown obstacle is sufficient for robot navigation. Thus, these results suggest that the COPIS system is a useful sensor for robot navigation.

6. Conclusions

In this paper, we have described a method of estimating the location and motion of the robot and of estimating the locations of unknown obstacles. We consider that the measurement precision is sufficient for robot navigation. In future work, we will try to perform the experiment with an environmental map made by learning the locations of objects in the environment during previous robot runs.


Fig.11 Result of Measurement of Obstacle's Location and Robot's Location and Motion

Furthermore, an application of COPIS to road following, including moving objects, with a given environmental map is the subject of ongoing studies.

References

[1] A.M. Waxman, J.J. LeMoigne and B. Srinivasan, A visual navigation system for autonomous land vehicles, IEEE J. Robotics & Automation, vol. RA-3, no. 2, pp. 124-141 (1987)
[2] M. Turk, K.D. Morgenthaler, K.D. Gremban and M. Marra, VITS - A vision system for autonomous land vehicle navigation, IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-10, no. 3, pp. 342-360 (1988)
[3] C. Thorpe, M.H. Hebert, T. Kanade and S.A. Shafer, Vision and navigation for the Carnegie-Mellon Navlab, IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-10, no. 3, pp. 362-373 (1988)
[4] M. Yachida, T. Ichinose and S. Tsuji, Model-guided monitoring of a building environment by a mobile robot, Proc. 8th IJCAI, pp. 1125-1127 (August 1983)
[5] H. Ishiguro, M. Yamamoto and S. Tsuji, Omni-directional stereo for making global map, Proc. 3rd ICCV (1990)
[6] S.J. Oh and E.L. Hall, Guidance of a mobile robot using an omnidirectional vision navigation system, Proc. SPIE 852, Mobile Robots II, pp. 288-300 (1987)
[7] J. Hong, X. Tan, B. Pinette, R. Weiss and E.M. Riseman, Image-based homing, Proc. IEEE Int. Conf. Robotics and Automation, pp. 620-625 (April 1991)
[8] R.A. Jarvis and J.C. Byrne, An automated guided vehicle with map building and path finding capabilities, Proc. 4th ISRR, pp. 497-504 (1988)
[9] Y. Yagi and S. Kawato, Panorama scene analysis with conic projection, Proc. IEEE Int. Workshop on Intelligent Robots and Systems, pp. 181-187 (1990)
[10] Y. Yagi and M. Yachida, Real-time generation of environmental map and obstacle avoidance using omnidirectional image sensor with conic mirror, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Hawaii (June 1991)
[11] Y. Yagi, S. Kawato and S. Tsuji, Collision avoidance using omnidirectional image sensor (COPIS), Proc. IEEE Int. Conf. on Robotics and Automation, pp. 910-915 (April 1991)

