
Bogobots Humanoid Team 2008

(Teen Size)
Guillermo Villarreal-Pulido, Salvador Sumohano-Verdeja, Erick Cruz-Hernández,
Alejandro Aceves-López.
Tecnológico de Monterrey, Campus Estado de México, Carr. Lago de Guadalupe Km. 3.5
Col. Margarita Maza de Juárez, 52926, Atizapán, México
{a00464286,a01093720,a00462971,aaceves}@itesm.mx
Abstract. This paper describes the specifications and capabilities of the Teen-Size humanoid robot being developed by the Bogobots Humanoid Team at Tecnológico de Monterrey, Campus Estado de México, for the RoboCup 2008 competition. At this moment our robot has 10 DOF, a PWM servo-controller card, and a main dsPIC unit. The team's main research focuses on gait algorithms, a parameterized walking engine, and a vision-based perception system used for path-planning analysis. Near-future work is discussed as well.
1 Introduction
Interest in the development of humanoid robots has grown considerably in recent years at many universities around the world. In 2004, Tecnológico de Monterrey started a research project on biped robots, and that experience led to the next step, humanoid robots, in 2006. The goal is to build fully autonomous robots with efficient walking abilities, highly sensitive perception systems, multiple manipulation skills, and learning abilities. Our Teen-Size robot is shown in Figure 1.

Fig. 1. Our teen-size humanoid.

2 Mechanical Specifications
Our Teen-Size humanoid was entirely designed by our team; it is built with custom-made aluminum parts and powered by high-torque Hitec servomotors [1]. At this moment the humanoid has 10 DOF, but it is planned to have 22 DOF before the RoboCup 2008 competition. The robot is equipped with enough computational power and energy supply to ensure fully autonomous behavior.

Fig. 2. CAD Design
3 Electronic Specifications
The present electronic architecture comprises three main modules, whose communications are shown in Figure 3. The main processor is based on a dsPIC and is connected to:
an embedded PWM servo-controller card [1] through an RS-232 port, and
a regulated power supply based on ElectriFly Lithium-Polymer batteries [5].
Each electronic module is responsible for processing specific tasks. This distributed computational approach allows us to decompose the overall soccer-playing problem into smaller, simpler tasks.
The servo-controller module is mainly responsible for sending electrical PWM signals to every joint of the robot at a very low level. This module receives from the main processor all requested angular positions for a specific low-level behavior.


Fig. 3. Schematic electronic architecture of the Bogobots humanoid.
The main processor performs three main tasks: (1) a statically stable walking-pattern generator: using the analytical inverse kinematics of the legs and a parameterized leg-path generator, it is possible to easily produce slow omni-directional walking; (2) simple motions such as standing up, kicking, and blocking, developed as frame-based motions; and (3) off-line decision-making algorithms that produce individual player behaviors.
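The analytical inverse kinematics mentioned above can be illustrated with a minimal planar sketch of a two-link leg (hip and knee in the sagittal plane); the link lengths and the function name are illustrative assumptions, not the robot's actual geometry:

```python
import math

def leg_ik(x, z, l1=0.20, l2=0.20):
    """Planar two-link inverse kinematics: given a foot target (x forward,
    z downward) relative to the hip, return (hip, knee) angles in radians.
    Link lengths l1 (thigh) and l2 (shin) are illustrative values."""
    d2 = x * x + z * z
    d = math.sqrt(d2)
    if d > l1 + l2 + 1e-9 or d < abs(l1 - l2) - 1e-9:
        raise ValueError("foot target out of reach")
    # Law of cosines at the knee (knee = 0 means a fully extended leg).
    cos_knee = (l1 * l1 + l2 * l2 - d2) / (2 * l1 * l2)
    knee = math.pi - math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle: direction to the target minus the interior hip angle.
    cos_beta = (l1 * l1 + d2 - l2 * l2) / (2 * l1 * d)
    beta = math.acos(max(-1.0, min(1.0, cos_beta)))
    hip = math.atan2(x, z) - beta
    return hip, knee
```

With the foot directly below the hip at full extension, both angles come out zero, which is a quick sanity check for this kind of solver.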
4 Gait Analysis and Walking Parameters
In past years, the direct and inverse kinematics of humanoid legs have been analyzed by researchers at our institute [6]-[12], and some practical approaches have been developed and reported. All our behaviors are divided into two basic kinds of movements. The first kind is based on interpolated keyframes composed of motor angles that are specified off-line by the programmer and interpolated on-line with numerical methods. This approach is mainly used for instinctive movements such as kicks, blocks, recovering from a fall, or transitions between postures. The second kind of movement is based on a run-time parametric walking-pattern generator that allows the robot to walk with different styles, speeds, and directions. The approach for controlling the leg joints is very similar to the one used in MAYRA, our first biped prototype, and is described in detail in [9].
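A minimal sketch of the keyframe playback described above, assuming keyframes are stored as time-stamped lists of joint angles and interpolated linearly (the data layout is hypothetical):

```python
def interpolate_keyframes(keyframes, t):
    """Linearly interpolate joint angles between time-stamped keyframes.
    keyframes: list of (time, [angle, ...]) pairs sorted by time.
    Returns the interpolated angle list at query time t, clamping to the
    first/last frame outside the motion's time span."""
    if t <= keyframes[0][0]:
        return list(keyframes[0][1])
    if t >= keyframes[-1][0]:
        return list(keyframes[-1][1])
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            s = (t - t0) / (t1 - t0)
            return [x0 + s * (x1 - x0) for x0, x1 in zip(a0, a1)]
```

Smoother playback would replace the linear blend with a cosine or spline profile, but the structure stays the same.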
All developed walking patterns are composed of two basic foot movements: balancing and foot trajectory. Balancing the center of mass of the biped requires controlling some joints so that the robot's center of gravity always stays inside the support area. Foot trajectories can have different shapes (e.g., rectangle, ellipse, half-ellipse) and are defined by a set of parameters (e.g., foot center, step height, maximum forward/sideward step size). Figure 4 shows an example of a semi-circular shape. The phases of the two legs are shifted by half a cycle to guarantee that one foot is in contact with the ground while the other foot swings over it.
In addition, it is also possible to tilt the body and swing the arms to help balance the weight while the robot is walking.
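The half-cycle phase shift between the legs can be sketched as follows; the half-sine height profile and the step_height value are illustrative choices rather than the exact curves used on the robot:

```python
import math

def foot_height(phase, step_height=0.04):
    """Vertical foot position over one gait cycle. phase in [0, 1):
    first half = swing (half-sine arc in the air), second half = support
    (foot flat on the ground). step_height is an illustrative value."""
    if phase < 0.5:
        return step_height * math.sin(2.0 * math.pi * phase)
    return 0.0

def both_feet(phase):
    """The two legs run half a cycle out of phase, so at every instant
    at least one foot is in contact with the ground."""
    return foot_height(phase % 1.0), foot_height((phase + 0.5) % 1.0)
```

At mid-swing (phase 0.25) one foot reaches its apex while the other is guaranteed to be on the ground.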
With this basic idea, we modeled our robot as a two-wheel vehicle in which we can vary the direction and speed of the virtual wheels. This idea proved to be very simple and versatile regarding the kinds of walks we could achieve.

Fig. 4. Semi-circular foot path of the robot.
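The two-wheel analogy can be sketched as a differential-drive mapping from a commanded forward speed and turn rate to per-leg step lengths; the hip width, step period, and function name are hypothetical values:

```python
def virtual_wheel_steps(v, omega, hip_width=0.12, step_period=0.5):
    """Differential-drive analogy: map forward speed v (m/s) and turn
    rate omega (rad/s) of the 'vehicle' to left/right step lengths (m)
    per step cycle. hip_width and step_period are illustrative."""
    v_left = v - omega * hip_width / 2.0
    v_right = v + omega * hip_width / 2.0
    return v_left * step_period, v_right * step_period
```

Equal step lengths give a straight walk; opposite-sign step lengths turn the robot in place, exactly as with a differential-drive vehicle.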
5 Vision System and Head Module
The vision algorithms of the robot are programmed on the CMUcam3 vision system, which is fully programmable, so we have the freedom to incorporate features such as color segmentation, object recognition, distance estimation, self-localization, and object tracking.
The vision algorithms that we implemented are based on previous developments made by our group and reported in [13]-[14]. We proceed in two stages: off-line color classification and on-line color-based object identification. First, some pictures of the objects of interest (ball, landmarks, and goals) are taken. Each picture is then analyzed to find the pixels belonging to a specific color, which are manually labeled as members of a specific group. Finally, we run a classification algorithm based on implicit surfaces to define a class for each group of pixels.
With those classes, we implemented an on-line color-based object identification algorithm. It starts by analyzing the pixels along some scan lines of the image and classifying them into the color classes. Then, regions of the image with pixels of the same color are identified to determine their centers and sizes.

Fig. 5. Color segmentation example.
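A minimal sketch of the on-line scan-line stage, with simple RGB threshold boxes standing in for the implicit-surface color classes (the class representation and thresholds are illustrative assumptions):

```python
def classify_pixel(pixel, classes):
    """Return the label of the first color class whose RGB box contains
    the pixel, or None if it matches no class.
    classes: {label: ((rmin, rmax), (gmin, gmax), (bmin, bmax))}."""
    r, g, b = pixel
    for label, ((rlo, rhi), (glo, ghi), (blo, bhi)) in classes.items():
        if rlo <= r <= rhi and glo <= g <= ghi and blo <= b <= bhi:
            return label
    return None

def scan_line_runs(row, classes):
    """Group consecutive same-class pixels of one scan line into runs of
    (label, start_index, length); unclassified gaps break the runs."""
    runs, current, start = [], None, 0
    for i, px in enumerate(row):
        label = classify_pixel(px, classes)
        if label != current:
            if current is not None:
                runs.append((current, start, i - start))
            current, start = label, i
    if current is not None:
        runs.append((current, start, len(row) - start))
    return runs
```

Runs from several scan lines would then be merged into regions whose centers and sizes feed the object identification step.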

To estimate the distance to a recognized object, we implemented an equation that relates the apparent size of the object (Dx and Dy) to its distance d. From Figure 6 it is easy to see that the distance to an object is inversely proportional to its apparent size on the image. To make our approach more robust, we decided to use Dx and Dy at the same time, because bad lighting conditions or partial occlusion can make one of the apparent sizes incomplete.


Fig. 6. Relationship between the distance and the apparent image size of the ball.
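The inverse-size relation can be sketched as follows; the calibration constant k is a hypothetical value that would in practice come from camera calibration:

```python
def estimate_distance(dx_px, dy_px, k=150.0):
    """Pinhole-style relation: distance is inversely proportional to the
    apparent size, d ~ k / D. Averaging the estimates from Dx and Dy adds
    robustness when one axis is unusable (reported as 0 here).
    k (pixel*metres) is an illustrative calibration constant."""
    estimates = [k / s for s in (dx_px, dy_px) if s > 0]
    if not estimates:
        raise ValueError("no usable apparent size")
    return sum(estimates) / len(estimates)
```

If one axis is occluded, the other still yields a distance estimate, which is the robustness argument made above.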
Self-localization is performed with classic triangulation methods. Basically, we infer the robot's position on the field from the recognition of two landmarks and their distances relative to the robot. The ball is then localized on the field using its distance and orientation relative to the robot's own localization.
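The two-landmark case reduces to intersecting two circles centered on the landmarks; the sketch below returns both candidate positions, and the caller would keep the one inside the field (coordinates and distances are illustrative):

```python
import math

def triangulate(p1, d1, p2, d2):
    """Given two landmark positions p1, p2 and the measured distances
    d1, d2 from the robot to each, return the two circle-intersection
    candidates for the robot's position."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    base = math.hypot(dx, dy)
    if base == 0 or base > d1 + d2 or base < abs(d1 - d2):
        raise ValueError("distance circles do not intersect")
    a = (d1 * d1 - d2 * d2 + base * base) / (2.0 * base)
    h = math.sqrt(max(0.0, d1 * d1 - a * a))
    mx, my = x1 + a * dx / base, y1 + a * dy / base
    # The two candidates lie on the perpendicular through the midpoint.
    return ((mx + h * dy / base, my - h * dx / base),
            (mx - h * dy / base, my + h * dx / base))
```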
The visual information obtained from the CMUcam3 provides the center of an object on the image, which is used directly to track objects. Head motion is powered by tilt and pan servomotors, independently of the leg and arm motions; this division creates a fully independent head module that can perform intelligent object tracking by inferring the ball's next position and computing the angles the servomotors must move to reach the target.
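A minimal proportional tracking step might look as follows; the CIF image size, the gain, and the sign convention are assumptions for illustration, not the actual head controller:

```python
def head_tracking_step(cx_px, cy_px, pan, tilt,
                       img_w=352, img_h=288, gain=0.002):
    """One proportional tracking step: nudge the pan/tilt angles (rad)
    so the tracked object's image position (cx, cy) moves toward the
    image centre. CIF frame size, gain, and signs are illustrative."""
    err_x = cx_px - img_w / 2.0
    err_y = cy_px - img_h / 2.0
    return pan - gain * err_x, tilt - gain * err_y
```

Feeding the predicted (rather than the last observed) ball position into this step gives the anticipatory tracking described above.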
6 Future Work
We have proposed short-term and long-term work targets.
As short-term targets we have considered, as stated previously, that our robot will be completely built, tested, and integrated with all its modules (vision and head module, walking engine, decision making, and localization) before the RoboCup 2008 competition. These modules already address the main challenges of this category, such as stability, which our robot currently shows while walking and which is a bigger problem than in a Kid-Size robot, the category in which we have more experience in research and practice.
As long-term targets we have considered improvements in several modules.
We will develop vision algorithms so that object recognition and self-localization are processed on the CMUcam3 itself. The robot's field position and information about localized objects will be sent to the main processor for the decision-making algorithms. Navigation algorithms will be based on the self-localization information and on a digital compass sensor that provides reliable information about the robot's orientation.
Now that gait planning is relatively solved and the localization and orientation of the robots are in place, we are working on path-planning strategies to better approach the ball under different circumstances and to perform different kinds of actions depending on the specific situation of the robots on the field.
We are also working on dynamic equilibrium and control strategies to make the humanoids walk with a dynamically stable pattern. Methods such as the Zero-Moment Point will be tested and compared.
We will also implement localization algorithms based on lines instead of colors: field lines or the edges of objects will be used to find landmarks for localization. In addition, we are researching color-segmentation algorithms that are robust to varying lighting conditions.
7 Conclusions
In this paper, we presented the current work of the Bogobots team. We took advantage of our previous research results on biped robots and Kid-Size robots to implement them on our Teen-Size robot. Some research done at our institution on vision systems was also implemented.
This is the first time our team intends to participate in the RoboCup Humanoid League Teen-Size class; however, we trust that by the time of RoboCup our robot will be fully capable of performing the assigned tasks in an outstanding way.
References

1. Hitec servomotors, http://www.hobbyhorse.com/hitec_servo
2. CMUcam3 vision system, http://www.cmucam.org/
3. ElectriFly Lithium-Polymer Batteries, http://www.electrifly.com/
4. Magnetic Compass, http://www.robot-electronics.co.uk/acatalog/Compass.html
5. González-Núñez, E., Aceves-López, A., Ramírez-Sosa, M.: Control para el seguimiento de trayectoria de movimiento de un bípedo con fases: Pie de soporte - Pie en movimiento. Primer Encuentro Internacional de Investigación Científica Multidisciplinaria, ITESM Campus Chihuahua, México (2007)
6. González-Núñez, E., Aceves-López, A., Ramírez-Sosa, M.: Análisis Cinemático de un Bípedo con fases: Pie de soporte - Pie en movimiento. IEEE 5 Congreso Internacional en Innovación y Desarrollo Tecnológico CIINDET, Cuernavaca, México, ISBN 968-9152-00-9 (2007)
7. Meléndez, A., Aceves-López, A.: Human Gait Cycle Analysis for the Improvement of MAYRA's Biped Foot. 37 Congreso de Investigación y Desarrollo del Tecnológico de Monterrey, México, pp. 60-67, ISBN 968-891-111-9 (2007)
8. Aceves-López, A., Meléndez, A.: Human-inspired walking-style for a low-cost biped prototype. IEEE 3rd Latin American Robotics Symposium, Santiago de Chile, ISBN 1-4244-0537-8 (2006)
9. Serna-Hernández, R., Aceves-López, A.: From mechatronic design to the construction of a statically stable biped robot. 2nd IEEE Latin American Robotics Symposium and VII Simpósio Brasileiro de Automação Inteligente, São Luís-MA, Brazil, ISBN 85-85048-55-7
10. González-Núñez, E.: Modelado y control de las dinámicas del caminado del bípedo MAYRA. Master's thesis, Tecnológico de Monterrey, México (2007)
11. Serna-Hernández, R.: Discusión y selección de los elementos necesarios para la construcción de un robot bípedo de tipo medio-humanoide llamado MAYRA. Master's thesis, Tecnológico de Monterrey, México (2005)
12. Alvarez, R., Millán, E., Aceves-López, A., Swain-Oropeza, R.: Accurate color classification and segmentation for mobile robots. Book chapter in: Mobile Robots: Perception & Navigation, ISBN 3-86611-283-1 (2007)
13. Alvarez, R., Millán, E., Swain-Oropeza, R., Aceves-López, A.: Color image classification through fitting of implicit surfaces. Lecture Notes in Computer Science, Vol. 3315, pp. 677-686, Springer-Verlag, ISSN 0302-9743 (2004)