
CENG 585: Fundamentals of Autonomous Robotics - Project 2

Kadir Firat Uyanik


KOVAN Research Lab.
Dept. of Computer Eng.
Middle East Technical Univ.
Ankara, Turkey
kadir@ceng.metu.edu.tr
Abstract

Roboticists have got their hands dirty mostly on the problem of localizing mobile robots. Although the problem seems relatively easy compared to other problems under the roof of machine intelligence (it is, after all, just making a robot aware of its location), there is still no algorithm that enables a robot to move robustly in both indoor and outdoor environments containing dynamic entities.
In addition to localization, map-making is one of the most established topics in AI robotics. These two topics try to answer the questions where am I? and where have I been?
Researchers have proposed several methods. Some ignore localization errors or use topological maps; others try to identify natural landmarks or match raw sensor data to an a priori map. These methods can be divided into two broad categories: iconic and feature-based. Iconic localization algorithms mainly use occupancy grid-like structures, and the grid occupancy certainties are updated with various probabilistic methods, such as Bayesian updating, Dempster-Shafer theory, or Histogrammic In Motion Mapping (HIMM).
In this report, I present part of a preliminary study on occupancy grid mapping, together with experimental results obtained on the Kobot robot platform.
Introduction
This report explains the problems that I came across during the mapping experiment with the robotic platform Kobot. The overall procedure is as follows:

1. Model the sensor characteristics by using the given dataset

2. Calculate the pose of the robot from the encoder readings and locate the robot in the world/reference coordinate system

3. Transfer possible object locations represented in the sensor coordinate system to the world coordinate system

4. Update the occupancy probabilities of the grids on which objects may be located by using Bayes' rule

Sensor Modeling

Figure 1: Average sensor reading for each grid is indicated

Rather than fitting a function that represents a radial distance and span angle for the IR range sensors, I used the sensor-reading dataset to obtain the conditional probabilities that are used during the Bayesian update stage. To do this, the algorithm makes use of the following property of probabilistic events:
P(ŝ = s) = r / n    (1)
where ŝ represents the random variable/event that can happen in r different ways out of a total of n possible equally likely ways. This is the probability of a random event or, as frequentists argue, the relative frequency of occurrence of an experiment's outcome when the experiment is repeated.
In order to obtain the conditional probabilities of the sensor readings conditioned on the occupancy of the grids in the sensor's observation area, I utilize the property given in equation 1 and obtain the following:

P(ŝ = s | Hij) = (# sensor readings equal to s given that the ij-th grid is occupied) / (# all sensor readings given that the ij-th grid is occupied)    (2)

Equation 2 is used to obtain the probability of each grid producing a specific sensor measurement value when it is occupied by an object. This helps us model the noise that comes from the electronic instability of the sensor itself or of other components inside the robot. In addition, we obtain the sensor's observation area by using this method.
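The relative-frequency estimate of equation 2 can be sketched as follows. The `readings_per_grid` layout (a mapping from a grid index to the list of readings recorded while that cell was occupied) is my own assumption about the dataset format, not the actual format used in the experiments:

```python
from collections import Counter

def sensor_likelihoods(readings_per_grid, values=range(8)):
    """Estimate P(s_hat = s | H_ij) for every grid cell as a relative
    frequency (equation 1 applied per cell, as in equation 2)."""
    likelihoods = {}
    for cell, readings in readings_per_grid.items():
        counts = Counter(readings)
        n = len(readings)
        # r / n: readings equal to s over all readings for this occupied cell
        likelihoods[cell] = {s: counts.get(s, 0) / n for s in values}
    return likelihoods
```

For one cell, the per-reading probabilities sum to one by construction, which is exactly the frequentist property of equation 1.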

Figure 2: Experimental setup being used during data set collection

Figure 3: From top left to bottom right, P(Hij | ŝ = s) for s taking the values 7 down to 0. Sub-figures consist of the small grid elements indicated earlier in figure 1. The brighter a grid, the more probable the corresponding sensor reading value is, given that the grid is occupied by an object.

In figure 3, the sub-figures show the probability of a grid being occupied given a specific sensor reading s. Please note that the probability values are not normalized within each sub-figure, purely for ease of visualization. But we know that:
Σ (s = 0 to 7) P(Hij | ŝ = s) = 1    (3)

Hence, it is not possible to have more than one fully bright grid (indicating a probability of one). Please note that the posterior probabilities indicated in figure 3 are obtained via Bayes' rule, since we already know the sensor probabilities conditioned on the occupancy of any grid by the method explained in equation 2.
Another point worth mentioning concerns the bottom-right sub-figure. In this sub-figure the sensor reading is given to be zero, and the probability of an arbitrary grid being occupied is investigated. Here we can see that all the grids outside the sensor's observation area become white. In other words, if the sensor returns a reading of 0 (no object observed), the occupancy probability of these grids is higher than that of the grids inside the sensor's observation area. In the real world, such a result makes no sense, because we cannot say anything about grids that we are not observing.
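One way to encode this caveat is to restrict the Bayes update to the cells inside the sensor's observation area, leaving all other cells untouched. The sketch below assumes per-cell likelihood tables for the occupied and free cases; the names are my own choosing, not from the original implementation:

```python
def update_observed_cells(grid, observed_cells, s, lik_occ, lik_free):
    """Bayes-update only the cells the sensor can actually see; cells
    outside the observation area keep their previous probability, since
    a reading (even s = 0) says nothing about them."""
    for cell in observed_cells:
        prior = grid[cell]
        num = lik_occ[cell][s] * prior
        den = num + lik_free[cell][s] * (1.0 - prior)
        if den > 0.0:
            grid[cell] = num / den
    return grid
```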

Kinematic Modeling
In this study, we used the robotic platform Kobot.

Figure 4: A differential drive robot moves around a point known as the ICC (Instantaneous Center of Curvature)

The parameters shown in figure 4 can be obtained by using the following equations:

Vr = ω(R + l/2)    (4)
Vl = ω(R − l/2)    (5)

where l is the distance between the two wheels, Vl and Vr are the left and right wheel velocities respectively, R is the signed distance from the ICC to the midpoint between the wheels, and ω is the rotational velocity around the ICC. The ICC can be found as follows:
 
ICC = [ x − R sin(θ),  y + R cos(θ) ]

By using the equations above, one can find the state transition function as the following matrix operation:
      
[x′]   [cos(ωδt)  −sin(ωδt)  0] [x − ICCx]   [ICCx]
[y′] = [sin(ωδt)   cos(ωδt)  0] [y − ICCy] + [ICCy]
[θ′]   [   0          0      1] [   θ    ]   [ωδt ]
This equation does not hold, however, if the wheel velocities are equal or exactly opposite. In those cases the pose update function becomes the following:

1. vl = vr = v (straight-line motion):

   x′ = x + v cos(θ) δt
   y′ = y + v sin(θ) δt
   θ′ = θ

2. vl = −vr = v (turning in place):

   x′ = x
   y′ = y
   θ′ = θ − (2v/l) δt
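The three pose-update cases can be combined into one function. This is a sketch assuming the standard differential-drive relations ω = (Vr − Vl)/l and R = (l/2)(Vr + Vl)/(Vr − Vl), which follow from equations 4 and 5:

```python
import math

def update_pose(x, y, theta, v_l, v_r, l, dt):
    """One differential-drive pose update step, with the two degenerate
    cases (equal and opposite wheel speeds) handled separately."""
    if math.isclose(v_l, v_r):            # straight line: omega = 0, R infinite
        return (x + v_l * math.cos(theta) * dt,
                y + v_l * math.sin(theta) * dt,
                theta)
    if math.isclose(v_l, -v_r):           # turn in place: R = 0, ICC at the robot
        return x, y, theta + (v_r - v_l) * dt / l
    omega = (v_r - v_l) / l
    R = (l / 2.0) * (v_r + v_l) / (v_r - v_l)
    icc_x, icc_y = x - R * math.sin(theta), y + R * math.cos(theta)
    c, s = math.cos(omega * dt), math.sin(omega * dt)
    return (c * (x - icc_x) - s * (y - icc_y) + icc_x,
            s * (x - icc_x) + c * (y - icc_y) + icc_y,
            theta + omega * dt)
```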

The problem with the pose update operation is that shaft encoders are notoriously inaccurate. Moreover, the surface and the wheels are not ideal either. When these inaccuracies are summed up, the robot deviates from its path. What is worse, this error accumulates over time if it is not reset by utilizing another sensor or features of the world or the map. To observe this error, I set up a square-shaped environment and programmed the robot to follow the walls around it; the results are shown in figure 5.

Figure 5: Updating the pose of the robot by taking only encoder readings into consideration. This trajectory is supposed to be a square, but due to errors in the shaft encoders and other non-ideal circumstances, it deviates from its actual course.

Sensor Mapping
After obtaining the probably-occupied grids with respect to the sensor coordinates, these grids are mapped to the world/reference coordinate system via two successive homogeneous coordinate transformations.

T = Trans(xr, yr, 0) Rot(z, θ) Trans(xs, ys, 0) Rot(z, θs − 90°)    (6)
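In 2-D homogeneous coordinates (the planar equivalent of equation 6's z-axis rotations), the composite transform can be sketched as follows; angles are in radians and the parameter names mirror those of equation 6:

```python
import math
import numpy as np

def sensor_to_world(u, xr, yr, theta, xs, ys, theta_s):
    """Map a homogeneous point u = [x, y, 1] from the sensor frame to
    the world frame: T = Trans(xr, yr) Rot(theta) Trans(xs, ys)
    Rot(theta_s - 90 deg), then v = T u."""
    def trans(tx, ty):
        return np.array([[1.0, 0.0, tx],
                         [0.0, 1.0, ty],
                         [0.0, 0.0, 1.0]])
    def rot(a):
        return np.array([[math.cos(a), -math.sin(a), 0.0],
                         [math.sin(a),  math.cos(a), 0.0],
                         [0.0, 0.0, 1.0]])
    T = trans(xr, yr) @ rot(theta) @ trans(xs, ys) @ rot(theta_s - math.pi / 2)
    return T @ np.asarray(u, dtype=float)
```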

Hence, a grid v in the world coordinate system can be obtained by pre-multiplying the grid u in the sensor coordinate system by the transformation matrix given in equation 6 (v = Tu). The final step is updating the grid occupancy probabilities by utilizing the following recursive Bayes' rule:

P(Hij | ŝ = st) = [ P(ŝ = st | Hij) P(Hij | ŝ = st−1) ] / [ P(ŝ = st | Hij) P(Hij | ŝ = st−1) + P(ŝ = st | ¬Hij) P(¬Hij | ŝ = st−1) ]    (7)
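Equation 7 is applied once per reading, with the posterior at t−1 serving as the prior at t. A minimal sketch for a single cell, assuming likelihood tables indexed by the reading value:

```python
def recursive_bayes(p0, readings, lik_occ, lik_free):
    """Fold a sequence of sensor readings into one cell's occupancy
    probability by repeated application of equation 7."""
    p = p0
    for s in readings:
        num = lik_occ[s] * p
        den = num + lik_free[s] * (1.0 - p)
        if den > 0.0:
            p = num / den
    return p
```

Repeated consistent readings drive the estimate toward 0 or 1, which is exactly why accumulated pose error is damaging: confident updates land on the wrong cells.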

Conclusion
If the encoders and the surface are not too noisy, we expect that the robot will be able to extract a map of its environment by the occupancy-grid mapping method explained in this report. As future work, some kind of feature-based method can be added to the system to reset the errors accumulated during the exploration phase.

