
University of Defence (Univerzita obrany), Brno, Czech Republic

10th September to 30th November 2012

COMPUTER VISION APPLIED TO COOPERATIVE ROBOTICS

Project director: Col. STEFEK Alexandr
Mentor: Professor BERGEON Yves

Promotion CBA BULLE 2010 - 2012

2nd Lieutenant LABOUDIGUE, MILITARY ACADEMY OF SAINT-CYR, 1st battalion


Acknowledgements
I want to thank Colonel Stefek, head of the Department of Air Defense Systems. He gave me valuable advice and helped me understand the objective of this report. I also want to thank Professor Bergeon from the French Military Academy of Saint-Cyr for his advice when he came to Brno.


Contents

1. Introduction
2. Presentation of the project
   2.1 Cooperative robotics
   2.2 Visual odometry
   2.3 Computer vision
   2.4 Application, motivation
3. Practice
   3.1 Technical specifications
   3.2 Image processing
      3.2.1 First codes
      3.2.2 First solution: thresholding
      3.2.3 Solution retained
   3.3 Algorithm to detect points
4. Experiment
   4.1 Experiment area
   4.2 Presentation of the experiment
   4.3 Experiments
5. Limits of the code
6. Conclusion
Attached documents


Notification

This report is the result of a Cadet Officer's work. On the occasion of filing and possible publication, the Saint-Cyr Coëtquidan Schools draw your attention to the fact that this version of the report has not been proofread. Thus this report may contain spelling or syntax mistakes as well as imprecisions.


Abstract

Navigation is an important field for armed forces, and reconnaissance robots can be used to provide navigation data. This report was written to give theoretical foundations for implementing navigation systems on reconnaissance robots. To reach this goal, it describes the MATLAB code that was developed to estimate the positions of a target using computer vision. The target's coordinates are captured by a camera and estimated by the program. The second objective of this research is to estimate the distance and the angles of rotation of the target in order to determine its exact location in space. This report describes the different steps taken to reach this objective, and the different possible ways to achieve it.


1. Introduction

The way of conducting conflicts has changed since the appearance of electronics on the battlefield. Indeed, technologies provide keys to success and advantages over the enemy. Robots are an asset in many fields, especially for reconnaissance or target detection. Among the many ways to locate targets, this project focuses on computer vision, which is used to ascertain the distance and the angle of rotation of the objective. Specifically, in cooperative robotics, many robots move together, forming a swarm. The objective of this project is to enable robots to locate the others, and to follow one of those robots when it travels a short distance. Image processing will be done with MATLAB 2011, which makes it easy to transform pictures or apply filters to them to make the detection easier. Image processing consists of analyzing pictures obtained by optical sensors, such as a camera, and inspecting distinctive points of the target. As a result, the target's characteristics have to be visible and known by the program in order to provide exact results and to follow the target across two consecutive images. The advantage of visual odometry is discretion; indeed, contrary to GPS technologies, odometry cannot be detected because the robots do not use any satellite connection or laser to aim at the objective.


2. Presentation of the project


2.1 Cooperative robotics
An essential issue, which appears in the automation of many surveillance or reconnaissance tasks, is that of tracking targets moving in a secured area of interest. In this project, the main objective is to observe the movement of one robot, while all the other robots of the swarm are stationary, and to be able to determine its position.

In the simplest version of this problem, the number of cameras and the robot formation can be fixed in advance to guarantee adequate coverage of the area of interest. Furthermore, on a battlefield or in dangerous areas, the risk of losing one or many robots is high, so using a swarm of robots instead of a single one provides additional security. In the general case, the coverage capabilities will not be enough to cover the entire terrain of interest. As a result, the above constraints force the use of several sensors moving over time.

fig. 2.1 : Example of swarm of robots


2.2 Visual odometry

Visual odometry (VO) is the method of estimating the egomotion of an object (for example, a human or a robot) using a single camera or multiple cameras fixed on it. Application fields are robotics, wearable computing, and the car industry. The term VO was coined in 2004 by Nistér, but the technique appeared earlier. The word comes from its similarity to wheel odometry, which evaluates the motion of a vehicle by counting the number of turns of its wheels. Similarly, VO operates by incrementally estimating the position of the vehicle through observation of the changes that movement induces on the images of a video. In order to get the best results from the VO process, there should be enough illumination in the area and a static background, to be able to study texture. Furthermore, consecutive frames should be captured.

There are two main processes to get feature points and their correspondences. On the one hand, VO can find features in a first picture and search for them in the second one, using an algorithm such as correlation. On the other hand, the second technique consists of independently detecting features in each image and matching them based on some similarity between their descriptors. This algorithm is more suitable when the images are taken from a static camera. VO research first concentrated on the former approach, while studies in recent years have concentrated on the latter. The reason is that early works were tested in simple environments, where cameras and sensors were tried at small range, while recently the focus has moved to large areas; as a consequence, the images are taken by sensors that are as far apart as possible.
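As an illustration of the first approach, the following MATLAB sketch tracks a single feature between two consecutive frames by normalized cross-correlation; the file names, the feature location pt, and the window half-size w are hypothetical values, and normxcorr2 comes from the Image Processing Toolbox.

% Track one feature from frame 1 to frame 2 by template correlation.
img1 = rgb2gray(imread('frame1.jpg'));   % hypothetical consecutive frames
img2 = rgb2gray(imread('frame2.jpg'));

pt = [120 240];                          % [row col] of a feature in img1 (assumed)
w  = 10;                                 % half-size of the template window
tpl = img1(pt(1)-w:pt(1)+w, pt(2)-w:pt(2)+w);

% Search the whole second frame for the best-matching patch.
c = normxcorr2(tpl, img2);
[~, idx] = max(c(:));
[row, col] = ind2sub(size(c), idx);
match = [row - w, col - w];              % center of the match, padding removed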


Feature Detection

During the detection step, the image is examined in order to find salient characteristic points that can be matched in other images. A feature is an image pattern that differs from its immediate neighbourhood. For VO, point detectors, such as corners or colored dots, are essential because their coordinates in a picture can be measured accurately.
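As a minimal sketch of such a point detector, MATLAB's corner function (Image Processing Toolbox) returns the [x y] coordinates of Harris corners; the file name here is a hypothetical example.

% Detect and overlay the strongest Harris corners of an image.
img = rgb2gray(imread('img.jpg'));
pts = corner(img, 'Harris', 50);         % at most the 50 strongest corners
imshow(img); hold on
plot(pts(:,1), pts(:,2), 'r+')           % mark the detected points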

2.3 Computer vision

Computer vision includes processes for acquiring, examining, and interpreting images and, in general, visual information from the real world in order to transform it into numerical data. The original goal of this field has been to duplicate the capacity of human vision by electronically perceiving and understanding an image. This image understanding can be seen as the extraction of symbolic information from pictures using models based on physics, statistics, and learning theory. Thus, computer vision can be described as the enterprise of automating and integrating a large collection of algorithms and representations for vision tasks. One of the applications of computer vision is in artificial intelligence: computers or robots that can perceive the area where they are used. The computer vision and machine vision fields have important applications. Computer vision concerns the technology of image analysis, which is applied in many fields. Machine vision commonly combines image analysis with other practices and technologies to make robot motion more autonomous in industrial applications, or for military purposes, such as robots able to provide intelligence on a battlefield. The image data can be studied in several forms, for example video sequences, or can come from many viewpoints if several cameras are used in the experiment.


2.4 Application, motivation

This project can have many applications in the cooperative robotics field. On the one hand, it lets a robot track other moving robots; on the other hand, it lets a robot calculate its own position. In this part, several cases where this project can be useful for cooperative robotics will be presented. This work can be used in order to track a movement:

Fig. 2.2 : The swarm follows the movement of robot A

In the most basic case, this technique is used by the swarm of robots (B, C, D) to determine the position of the moving robot A. As a consequence, if the position of the swarm is known, it can easily determine the position of the moving robot.


Visual odometry can provide the direction and the distance needed for a movement:

Fig. 2.3 : Robot D comes back to its initial position

Contrary to the first case, here the moving robot evaluates the distance and the direction between its own position and the swarm, and uses them to come back into the initial formation. Then the swarm can continue its mission.

This process can also be useful in order to determine not only the position of another robot, but also the robot's own position if, for any reason, it is lost.

Fig. 2.4 : Robot A evaluates its own position


Robot A evaluates the distance between itself and robot B; in the second part of the process, it determines the distance to robot C. When robot A knows those two values, the code is able to calculate its position using simple geometry.

Moreover, if we add another robot, robot A can calculate the error rate between two measurements and get better precision:

Fig. 2.5 : Robot A evaluates its own position with an error rate


In fact, the positions of robots B, C, and D are known. Robot A can estimate its position, using the previous process, relative to robots B and C in order to get a first estimate of its position. Then it does the same with robots C and D. As a result, it can calculate its location precisely, evaluate the error rate, and detect a possible anomaly if the two results do not agree. In that case, robot A can make a short movement and try a new estimation of its own position.
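A minimal MATLAB sketch of this idea, with hypothetical robot positions and measured distances: the position is obtained by intersecting the two circles centered on robots B and C, and a third distance (to robot D) resolves the two-solution ambiguity.

% Estimate the position of robot A from distances to robots at known positions.
B  = [0 0];  C  = [2 0];                 % known positions of B and C (m), assumed
dB = 1.5;    dC = 1.2;                   % measured distances to B and C (m), assumed

d = norm(C - B);                         % distance between B and C
a = (dB^2 - dC^2 + d^2) / (2*d);         % distance from B to the chord
h = sqrt(dB^2 - a^2);                    % half-length of the chord
M = B + a * (C - B) / d;                 % foot point on the B-C axis
n = [-(C(2)-B(2)), C(1)-B(1)] / d;       % unit normal to the B-C axis
P1 = M + h * n;                          % the two candidate positions
P2 = M - h * n;

% A third measured distance (to robot D) selects the consistent candidate.
D  = [1 2];  dD = 0.9;                   % assumed
if abs(norm(P1 - D) - dD) < abs(norm(P2 - D) - dD)
    P = P1
else
    P = P2
end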

Even if better technologies, such as sonar, exist for this purpose, knowing the distance to the other robots can also avoid collisions: the robot can stop if it is too close to another robot.

Fig. 2.6 : Robot A avoids a crash

The concept can also be used in this case. Indeed, the MATLAB code can be modified to add a condition on the distance between the two robots. For example: if the distance between robots A and B is too small, robot A stops, moves back, and changes its direction; otherwise, it does not stop and continues its progression.
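A minimal sketch of such a guard condition, assuming a distance estimate dAB coming from the detection code and a hypothetical safety threshold dMin:

% Stop-and-turn condition based on the estimated inter-robot distance.
dAB  = 0.42;                             % estimated distance to robot B (m)
dMin = 0.50;                             % hypothetical safety threshold (m)
if dAB < dMin
    disp('Too close: stop, move back, change direction')   % placeholder actions
else
    disp('Safe: continue the progression')
end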


3. Practice
3.1 Technical specifications
The main expectation of the project is to ascertain the distance between the camera and a target (fig. 3.1) fixed on each robot, using computer vision. To reach this objective, I chose to work with MATLAB, using the knowledge learned during my training at the Military Academy of Saint-Cyr. The code is divided into two main parts: first, several filters are applied to the original picture in order to highlight the red points; the second part of the code concerns the detection of the points by an algorithm.

Fig.3.1 : Picture of the target

In this subsection, we will first study different techniques to highlight the feature points of the target, and the solution that was retained. A second part presents the algorithm used to obtain the distance between the camera and the target.


3.2 Image processing

3.2.1 First codes

My first objective was to apply different filters in order to highlight the red points and determine their pixel coordinates. The first step is to transform the picture into a black and white picture, so as to have white dots on a black background. The first possibility considered (code 3.1: first code) is not conclusive. Indeed, as we can see on the next picture (fig. 3.2), this simple code cannot delete the noise in the background without suppressing the white points, so the detection is impossible.

Fig. 3.2 : First result


Facing this failure, the second tested solution focuses on the red channel (code 3.2: red channel), because the red points should have the maximum intensity there. But, as we can observe (fig. 3.3), the floor and the background also contain red. The next step is to find a solution to delete the entire red channel except the red points.

Fig 3.3 : Red channel

3.2.2 First solution: thresholding

The first technique studied is thresholding (code 3.3: thresholding). After loading the original picture (fig. 3.4) and creating a black one, the first loop compares the value of each pixel of the image to threshold values (seuilR, seuilR2, seuilBG).

Indeed, the whole 3-dimensional matrix of the picture is compared to those threshold values; in this code, a maximum value for the blue and green components (seuilBG) and a band for the red component ([seuilR, seuilR2]) are fixed in order to delete all the noise in the picture (fig. 3.5).
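As a side note, the same per-pixel test can be written without explicit loops; this is a compact vectorized sketch of the thresholding step, using the same threshold variables as code 3.3 and assuming the picture is loaded in data:

% Vectorized form of the thresholding loop of code 3.3.
seuilR = 80; seuilR2 = 160; seuilBG = 50;
mask = data(:,:,1) > seuilR  & data(:,:,1) < seuilR2 & ...
       data(:,:,2) < seuilBG & data(:,:,3) < seuilBG;
imgf = uint8(zeros(size(data)));
idx  = repmat(mask, [1 1 3]);            % apply the mask to all three channels
imgf(idx) = data(idx);                   % keep only the pixels in the red band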


The objective of the second part of the code is to transform this noise-free picture into a binary one, so as to have only white points on a black background (fig. 3.6). The first tests at short distance seem very conclusive. In fact, as we can see on the next pictures, the red points are visible if the distance is lower than two meters (fig. 3.4 to fig. 3.6) and the target is perpendicular. In the following case, the distance is one meter:

Fig.3.4 : Original picture distance : 1 meter

Fig.3.5 : Result of the thresholding , distance : 1 meter

Fig.3.6 : Final result, distance : 1 meter


Even if this technique gets good results at short distances (the six red points are detected), the detection is impossible if the target has an angle of rotation. Indeed, as we can see on the next example, no point is detected (fig. 3.7 to fig. 3.9):

Fig. 3.7 : Target with an angle of rotation

fig. 3.8 : Result of the thresholding

Fig. 3.9 : Final result with an angle of rotation

As a consequence, this code is very efficient at short distance, if the objective is perpendicular to the camera. But if it has a significant angle of rotation, the detection becomes impossible because of the lighting: the values of the red component of the red points' pixels have changed, and they are no longer in the band [seuilR, seuilR2].


The solution could be to modify the threshold values (seuilR, seuilR2, seuilBG) to adapt them to this new situation. But the robot cannot do that without the intervention of the user; as a result, this solution cannot be validated, because one of the main objectives is to have robots that are able to progress without human intervention.

3.2.3 Solution retained

After trying those different solutions, the retained solution has to highlight the red points, delete all the noise, and transform the picture into a binary one. The final objective is to get white points on a black background in order to make the detection, in the second part of the code, easier. If we take the example of a target placed at a distance of one meter (fig. 3.10), the first step is to subtract the grayscale picture from the red channel (fig. 3.11).

Fig 3.10 : Original picture, distance : 1 meter

Fig 3.11 : First filter

After this first filter, the red points are the lightest pixels in the picture; as a result, using functions such as im2bw and medfilt2, we can get the binary picture and delete all the noise (fig. 3.12).
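The following condensed sketch mirrors the attached code 3.4, with the binarization threshold of 0.18 and the 300-pixel minimum blob size used there:

% Retained pipeline: red channel minus grayscale, median filter,
% binarization, and removal of the small blobs.
data    = imread('02-007.jpg');
diff_im = imsubtract(data(:,:,1), rgb2gray(data));  % red points become bright
diff_im = medfilt2(diff_im, [3 3]);                 % remove salt-and-pepper noise
bw      = im2bw(diff_im, 0.18);                     % fixed binarization threshold
bw      = bwareaopen(bw, 300);                      % drop blobs smaller than 300 px
imshow(bw)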


Fig 3.12 : Transformed picture without noise

Thus, the detection part can give the coordinates of the points, but also the distance between the camera and the objective.

3.3 Algorithm to detect points

The second part of the code detects the white points with an algorithm. To detect white points in a binary picture, I used the function regionprops, which is able to detect the different white objects on a black background by enclosing them in bounding rectangles; the code then calculates the pixel coordinates of the center of each rectangle. The result of this algorithm is the original picture with the rectangles and the pixel coordinates of the red points (fig. 3.10). To be able to estimate the distance even if the objective is not perpendicular, the code first uses points on the same X-axis, then tries to calculate the distance with points on the same Y-axis. The code favors points on the X-axis because they are not distorted if the objective has an angle of rotation. As a result, the higher the angle of rotation, the higher the error rate, but it remains acceptable.
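A condensed sketch of this detection step, following the attached codes 3.4 and 3.5 (bw is the binary image produced by the image-processing stage and data the original picture):

% Label the connected white regions and collect their centroids.
stats = regionprops(bwlabel(bw, 8), 'BoundingBox', 'Centroid');
imshow(data); hold on
for object = 1:length(stats)
    bb = stats(object).BoundingBox;
    bc = stats(object).Centroid;                    % [x y] center of the blob
    rectangle('Position', bb, 'EdgeColor', 'r', 'LineWidth', 2)
    plot(bc(1), bc(2), '-m+')
    Xabs(object) = bc(1);                           % pixel coordinates reused by
    Yabs(object) = bc(2);                           % the distance algorithm
end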


Fig. 3.10 : Pixel coordinates of the red points, distance : 1 meter

In the other case, if the objective is distant and perpendicular, the camera can see the two big points on the same Y-axis, so the detection is possible (fig. 3.11, fig. 3.12). The different cases which make the detection impossible will be exposed in another subsection.

Fig 3.11 : Distance : 4 meters

fig 3.12 : Detection, distance : 4 meters


The next loop is essential in order to find two points on the same X-axis or Y-axis, which is needed to get the distance between the camera and the target from particular points. Once the MATLAB program has determined two characteristic points, optical theory and information about the camera give us the distance. For the experiment, the camera used is a Nikon D70 with an AF-S Nikkor DX lens, whose characteristics are:

Focal length: 35 mm
f-number: 3.5 - 29
Self timer: 2 s
Image quality: JPEG normal
Image size: Large (3008 x 2000 pixels)
Sensitivity: ISO 320
CCD size: 23.7 x 15.6 mm
Focus: manually set

Knowing the image size and the CCD size, we can deduce the size of a single pixel (fig. 3.13).

Fig. 3.13 : Drawing of a CCD sensor


As a consequence, one pixel measures 23.7 mm / 3008 ≈ 7.88 µm on the CCD sensor. If we know the size of one pixel, we can easily convert pixel coordinates into the metric system.

The problem can be schematized in the following way (fig. 3.14); in the case of distance detection, the known data are:

b : one of the characteristic measures on the target;
f : the focal length, known for the experiments;
a : the pixel distance, measured by the MATLAB software;
D : the distance sought between the camera and the objective.


Fig. 3.14 : Drawing of the camera and the objective

On the one hand, if the objective is perpendicular to the camera, the software will use two points on the same Y-axis in order to determine the distance D, using the Thales (intercept) theorem. On the other hand, if the target has an angle of rotation, the software will calculate the distance using two points on the same X-axis; this method entails an error, which is acceptable. Then, knowing the distance, two points on the same Y-axis will be used to determine the angle of rotation.

To apply the Thales theorem, we can suppose the target is perpendicular to the optical axis; as a result, the CCD sensor and the target are parallel. Thus, the theorem gives the following equation:

a / f = b / D        (Equation 1)


We can deduce from Equation 1 the expression of the distance D:

D = (b × f) / a        (Equation 2)
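This is the computation performed at the end of the attached code 3.5. As a worked sketch, with the reference height b = 0.2 m between two vertically aligned red points, f = 35 mm, the 7.88 µm pixel pitch, and a hypothetical measured separation of 888 pixels:

% Numeric application of Equation 2 (same constants as code 3.5).
b  = 0.2;                  % distance between the two red points (m)
f  = 0.035;                % focal length (m)
Dp = 888;                  % measured separation in pixels (hypothetical)
Dm = Dp * 7.88e-6;         % separation on the CCD sensor (m)
D  = b * f / Dm            % estimated distance, about 1.00 m here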


4. Experiment
4.1 Experiment area

In order to get the best results, the model of the target has to be placed with high precision. The experiment area is a room (fig. 4.1) where the camera is fixed on a static support and never moved between the different pictures (fig. 4.2). This detail is very important because it has a determining impact on the precision of the measurements, when comparing the experimental results with the results determined by the MATLAB program.

fig 4.1 : Experiment area


To set the support, the camera was attached to it with a screw. In order to keep the settings of the support, a millimeter grid was placed on the reference point and, with a visual estimation of the center of the camera, the support was placed in front of the model of the target.

Fig 4.2 : Camera's support

In the experiment area, every position of the target is described; in order to have high precision, the model of the target has to be placed on points whose exact positions are known (fig. 4.3).

Fig 4.3 : Description of each target's position



Many combinations of reference points are useful for the measurements. Indeed, the distance and the angle of rotation can be determined from, for example, two points which are on the same X-axis or Y-axis (fig. 4.4). The chosen reference distance, between two red points on the same Y-axis, is 0.5 m. Those two points are used to calculate the angle of rotation of the target, and the distance if the objective is perpendicular. Another solution is to work on vertically aligned red points to get the distance between the objective and the camera, and on red points on the same Y-axis.

Fig 4.4 : Illustration of the experiment


The chosen target is a large white square with red and green points, whose positions are known (fig. 4.5). To have a vertical plane as flat as possible, two screws were added behind the model of the target (fig. 4.6 and fig. 4.7).

fig 4.5 : Geometry of the target


Fig 4.6, 4.7 : Presentation of the target with the screws

The shape and the colors of the target's features have been chosen in order to make the detection and image processing easier. Indeed, red and green have been chosen because the image processing can rely on, for example, the red and green channels in MATLAB. The vertical line in the center of the objective is important to measure its exact position in the experiment area.

4.2 Presentation of the experiment

To test the code, different kinds of experiments were made in order to try the program in the various situations that could be met by the robot on an operational mission. The first case is a perpendicular objective moving away (fig. 4.8 to fig. 4.11).


fig 4.8 : Perpendicular target, distance : 1 meter

fig 4.9 : Perpendicular target, distance : 3 meters

fig 4.10 : Perpendicular target, distance : 5 meters

fig 4.11 : Perpendicular target, distance : 6 meters


As we can see, this experiment simulates a robot going away. For the second series of pictures, the target stays at the same place but rotates around its principal axis (fig. 4.12 to fig. 4.15).

Fig.4.12 : Perpendicular target

fig 4.13 : Low angle of rotation

Fig 4.14 : Target with an acceptable angle of rotation

fig 4.15 : Target with an important angle of rotation


4.3 Experiments
In this subsection, different cases will be exposed in order to determine the effectiveness of this solution and its limits. Then we will be able to compare the theoretical results with the practical ones, according to the distance and the angle of rotation of the target. First, we will see how the code behaves when the objective simulates a robot going away while remaining perpendicular.

fig 4.16 : Perpendicular target distance : 1 meter

fig 4.17 : Result, distance : 1 meter

fig 4.18 : Perpendicular target, distance : 3 meters

fig 4.19 : Result, distance : 3 meters

For this first experiment, all the results are indexed in the following table, where the theoretical results and the practical measurements are compared:

Theoretical result (m)    Experimental measure (m)    Error rate (%)
1                         0.9567                      4.33
2                         1.9866                      0.67
3                         3.0448                      1.49
4                         4.0813                      2.03


As we can notice, the error rate is acceptable (it is always lower than 5%), so we can validate this simulation. However, for lack of intermediate pictures, we can only conclude that the range is between 4 and 5 meters. The experiment area measures 5 meters; as a consequence, the experimental range is acceptable. The presence of an error rate is due to the preferential use of red points on the same X-axis. Indeed, if we compare the orders of magnitude, we can make an approximation between the measured distance and the real one. In theory, the error rate should decrease as the distance grows, but many practical parameters influence those results (for example, the target not being exactly perpendicular or vertical).

Fig 4.20 : Schematic drawing of the approximation


The next experiment aims to simulate a robot turning on itself, a case which is useful if the robot has to modify its direction. We will see this experiment for distances of one meter and two meters.

Fig 4.21, 4.22, 4.23 : distance 1 meter : case 1,2,3

Fig 4.24, 4.25, 4.26 : distance : 2 meters, case 1, 2, 4

The following table lists all the results, in order to compare the theoretical results with the distances calculated by the MATLAB code:

Theoretical result (m)    Case    Experimental measure (m)    Error rate (%)
1                         1       0.9567                      4.33
1                         2       1.0163                      1.63
1                         3       1.0642                      6.42
1                         4       1.1293                      12.93
1                         5       1.1725                      17.25
2                         1       1.9866                      0.67
2                         2       2.0465                      2.32
2                         3       2.0916                      4.58
2                         4       2.1769                      8.8854


This experiment shows that the detection is possible if the angle of rotation is small. If it becomes too big, even if the error rate rises above 10% (17.25%), the evaluation is still useful to get an estimate of the position of the robot. For example, if a robot has to go back to its original placement, this approximation gives it a first direction; then a second data acquisition will provide the exact position of the swarm, and all the robots will be able to go back into the initial formation.


5. Limits of the code


As we have seen in the previous subsection, the range is one of the most important characteristics of this project. For this code, the maximum range is between four and five meters, which is acceptable. However, an easy modification of the target could increase it: the objective should have larger red points, in order to make the detection easier when it is farther away.

The second important problem is lighting and darkness. Indeed, the intensity of the light affects the contrast and transforms the red color. For example, with too much light, the color data of the red points cannot be detected. As a result, in the following case (fig. 5.1 / 5.2), the objective becomes undetectable.

Fig 5.1 : Target with too large an angle of rotation


Fig 5.2 : Binary image after image processing

Moreover, a large angle of rotation can imply a wrong estimation of the distance. Thus, if the camera only detects two points on the same Y-axis (fig. 5.3), the code will consider the objective as perpendicular, so the picture with the pixel coordinates creates the illusion that the target is farther than it really is.

fig 5.3 : Wrong estimation


In this case, the real distance is two meters, but the detected distance is 4.45 meters. This project will be used to analyze video, hence many pictures in succession. Consequently, in order to avoid this error, the code could compare the current estimate with the previous one and notify the user if there is an anomaly in the evolution of the robot's movement (for example, if the distance to the objective increases abruptly).
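A minimal sketch of such a plausibility check, assuming a vector D of successive distance estimates and a hypothetical maximum plausible change between two consecutive frames:

% Flag implausible jumps between consecutive distance estimates.
D       = [2.01 1.98 2.02 4.45];   % successive estimates (m), hypothetical
maxJump = 0.5;                     % largest plausible change per frame (m)
for k = 2:length(D)
    if abs(D(k) - D(k-1)) > maxJump
        fprintf('Anomaly at frame %d: %.2f m -> %.2f m\n', k, D(k-1), D(k))
    end
end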

6. Conclusion

Through those experiments, the error on the measured distance is lower than 5% in simple cases, and around 15% in complicated cases. Working on this project raised awareness of different external parameters which cannot be ignored because they are omnipresent, such as lighting or the angle of rotation. This report shows that the use of a CCD camera can provide accurate measurements for indoor navigation data. As a consequence, the results found with these experiments can be used as theoretical foundations for a larger project: creating a navigation system for a swarm of robots, which will be able to study the movement of each robot and determine its own moves. This project could be improved using other technologies; for instance, thermal sources could be used instead of red points, together with a thermal camera.


References:

R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB.

Jean-Thierry Lapresté, Introduction à MATLAB.

Davide Scaramuzza and Friedrich Fraundorfer, Visual Odometry, Part I: The First 30 Years and Fundamentals.

Friedrich Fraundorfer and Davide Scaramuzza, Visual Odometry, Part II: Matching, Robustness, Optimization, and Applications.

J. Borenstein, H. R. Everett, and L. Feng, Where am I? Sensors and Methods for Mobile Robot Positioning.

Diane Lingrand, Introduction au traitement d'images.

Rachid Belaroussi, Traitement de l'image et de la vidéo.


Attached documents

Contents:
Code 3.1 : first code
Code 3.2 : red channel
Code 3.3 : thresholding
Code 3.4 : image processing
Code 3.5 : algorithm to obtain the distance



Code 3.1 : first code

clear all; close all;

% Load the picture
img = imread('02-007.jpg');
figure(1); imshow(img);

% Obtain the grayscale picture
imgGray = rgb2gray(img);

% Sobel edge detection, then fill the closed contours
[BW, thresh] = edge(imgGray, 'sobel');
BW = imfill(BW, 'holes');

% Delete small noise (regions under 500 px)
BW2 = bwareaopen(BW, 500);
figure(3); imshow(BW2);

% Coordinates of the remaining white pixels
[x, y] = find(BW2);

Code 3.2 : red channel

clear all; close all;
img = imread('02-007.jpg');

% Extract the red component and display it with a red colormap
Red = img(:,:,1);
image(Red), colormap([[0:1/255:1]', zeros(256,1), zeros(256,1)]), colorbar;

% Display the red channel as a grayscale image
% (Red is already a single channel, so no rgb2gray conversion is needed)
figure(2); imshow(Red);

Code 3.3 : thresholding

clear all; close all;
data = imread('02-007.jpg');

% Thresholds: a band for the red channel, a maximum for blue and green
seuilR = 80; seuilR2 = 160; seuilBG = 50;

% Keep only the pixels whose color lies in the red band
s = size(data);
imgf = uint8(zeros(s));
for i = 1:s(1)
    for j = 1:s(2)
        if data(i,j,2) < seuilBG && data(i,j,3) < seuilBG && ...
           data(i,j,1) > seuilR && data(i,j,1) < seuilR2
            imgf(i,j,:) = data(i,j,:);
        end
    end
end
figure(2); imshow(imgf)

% Red channel minus grayscale, then binarization
dataC = imsubtract(imgf(:,:,1), rgb2gray(imgf));
bw = im2bw(dataC, graythresh(dataC));
imshow(bw)

% Fill holes, clean up, and outline the detected regions
bw2 = imfill(bw, 'holes');
bw3 = imopen(bw2, ones(5,5));
bw4 = bwareaopen(bw3, 40);
bw4_perim = bwperim(bw4);
% imoverlay is a helper function (Image Processing Toolbox / File Exchange)
overlay1 = imoverlay(dataC, bw4_perim, [.3 1 .3]);

% Extended-maxima transform to isolate the bright spots
mask_em = imextendedmax(dataC, 30);
mask_em = imclose(mask_em, ones(5,5));
mask_em = imfill(mask_em, 'holes');
mask_em = bwareaopen(mask_em, 40);
overlay2 = imoverlay(dataC, bw4_perim | mask_em, [.3 1 .3]);
overlay3 = rgb2gray(overlay2);
figure(4); imshow(overlay3)


code 3.4 : image processing

clear all; close all;
data = imread('02-007.jpg');
figure(1); imshow(data);

% Subtract the grayscale image from the red channel:
% the red points become the brightest pixels
diff_im = imsubtract(data(:,:,1), rgb2gray(data));
figure(2); imshow(diff_im);

% Use a median filter to filter out noise
diff_im = medfilt2(diff_im, [3 3]);

% Convert the resulting grayscale image into a binary image
diff_im = im2bw(diff_im, 0.18);

% Remove all the blobs smaller than 300 px
diff_im = bwareaopen(diff_im, 300);

% Label all the connected components in the image
bw = bwlabel(diff_im, 8);

% Blob analysis: get a set of properties for each labeled region
stats = regionprops(bw, 'BoundingBox', 'Centroid');

% Display the image
imshow(data)
hold on


code 3.5 : Algorithm to obtain the distance

% Bound the red objects in rectangular boxes and store their centroids
for object = 1:length(stats)
    bb = stats(object).BoundingBox;
    bc = stats(object).Centroid;
    rectangle('Position', bb, 'EdgeColor', 'r', 'LineWidth', 2)
    plot(bc(1), bc(2), '-m+')
    a = text(bc(1)+15, bc(2), strcat('X: ', num2str(round(bc(1))), ...
        ' Y: ', num2str(round(bc(2)))));
    set(a, 'FontName', 'Arial', 'FontWeight', 'bold', ...
        'FontSize', 12, 'Color', 'yellow');
    Xabs(object) = bc(1);
    Yabs(object) = bc(2);
end

% Loop to detect two points on the same X-axis (vertically aligned pair,
% reference height 0.2 m); V counts the pairs found
V = 0;
for i = 1:length(Xabs)
    for j = 1:length(Xabs)
        if abs(Xabs(i)-Xabs(j)) > 0 && abs(Xabs(i)-Xabs(j)) < 50
            P1 = [Xabs(i) Yabs(i)]
            P2 = [Xabs(j) Yabs(j)]
            Dp = abs(P1(2)-P2(2))          % vertical separation in pixels
            Dm = Dp * 0.00000788;          % separation on the CCD sensor (m)
            D  = 0.2 * 0.035 / Dm          % distance from Equation 2
            V  = V + 1;
        end
    end
end

% If no vertical pair was found, fall back to two points on the
% same Y-axis (horizontally aligned pair, reference width 0.5 m)
if V < 2
    for i = 1:length(Yabs)
        for j = 1:length(Yabs)
            if abs(Yabs(i)-Yabs(j)) > 0 && abs(Yabs(i)-Yabs(j)) < 50
                P1 = [Xabs(i) Yabs(i)]
                P2 = [Xabs(j) Yabs(j)]
                Dp = abs(P1(1)-P2(1))      % horizontal separation in pixels
                Dm = Dp * 0.00000788;
                D  = 0.5 * 0.035 / Dm
            end
        end
    end
end
